**Adobe InCopy** Adobe InCopy: Adobe InCopy is a professional word processor made by Adobe Inc. that integrates with Adobe InDesign. InCopy is used for general word processing, in contrast to InDesign, which is used to publish printed material, including newspapers and magazines. The software enables editors to write, edit, and design documents. It includes standard word processing features such as spell check, track changes, and word count, and has various viewing modes that allow editors to visually inspect design elements just as they appear to the designer working in Adobe InDesign. Version 3.0 of InCopy was included in the first Adobe Creative Suite in 2003, and later versions were included in the Creative Suite up until version CS6. Since 2013, newer versions have been made available only through Adobe Creative Cloud. Viewing modes: InCopy has three viewing modes: story mode, galley mode and layout mode. Story mode is for reading and editing text in a screen-wide view without page formatting. Galley mode displays text without page formatting but with line numbers and the same line breaks seen in the layout mode. Both galley and story views show the names of the style sheets applied to the text but do not display the actual formatting. Layout mode shows the true page design along with images and overset text. Although InCopy can be used as a word processor (with full printing and exporting functions), it is primarily used to integrate with Adobe InDesign. Once integrated, writers, editors and designers can work on the same page simultaneously; the designer creates the page layout with InDesign, while editors edit different stories with InCopy, via the Adobe LiveEdit rights management system. Publishers often use a publishing system that adds workflow and rights management to the design and editing capabilities of the software. Internationalization and localization: A Middle Eastern edition of InCopy is specifically developed for the Arabic and Hebrew languages. It features: Text settings: special settings for laying out Arabic or Hebrew text, such as the option to use Arabic, Persian or Hindi digits; kashida for letter spacing and full justification; a ligature option; adjustment of the position of diacritics (such as Arabic vowels); justification of text in three possible ways (Standard, Arabic, Naskh); the option to insert special characters, including three Hebrew characters (geresh, gershayim, maqaf) and an Arabic one (kashida); and standard, Arabic or Hebrew styles for page, paragraph and footnote numbering. Bi-directional text flow: The notion of right-to-left behavior applies to several objects: story, paragraph, character and table. Right-to-left and left-to-right content can be mixed. Internationalization and localization: Dictionary and hyphenation module: Includes a dictionary for Hebrew or Arabic for spell checking, with a choice of rules, such as strict alef hamza, strict final yāʾ, both or none. Enhanced font support: Supports most fonts shipped with the OS as well as a large number of third-party fonts. Search and replace: It is possible to search and change specific occurrences of Middle Eastern characters, words, groups of words, or text formatted a certain way across a selection, one or more stories, a document, or multiple open documents. Searching for OpenType attributes such as fractions and swashes is also supported.
Importing and exporting: Can import QuarkXPress files, even those using Arabic XT, Arabic Phonyx or Hebrew QXPressWay fonts, retaining the layout and content. Includes 50 import and export filters. History: InCopy 1.0: October 1999; InCopy 2.0: 2002, first release with Mac OS X support; InCopy CS (3.0): late 2003; InCopy CS2 (4.0): May 2005; InCopy CS3 (5.0): June 2007; InCopy CS4 (6.0): November 2008; InCopy CS5 (7.0): May 2010; InCopy CS6 (8.0): April 2012; InCopy CC (9.0): June 2013; InCopy CC2014 (10.0): June 2014; InCopy CC2015 (11.0); InCopy CC2017 (12.0); InCopy CC2018 (13.0): October 2017; InCopy CC2019 (14.0).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OR4K14** OR4K14: Olfactory receptor 4K14 is a protein that in humans is encoded by the OR4K14 gene. Olfactory receptors interact with odorant molecules in the nose to initiate a neuronal response that triggers the perception of a smell. The olfactory receptor proteins are members of a large family of G-protein-coupled receptors (GPCR) arising from single coding-exon genes. Olfactory receptors share a 7-transmembrane domain structure with many neurotransmitter and hormone receptors and are responsible for the recognition and G protein-mediated transduction of odorant signals. The olfactory receptor gene family is the largest in the genome. The nomenclature assigned to the olfactory receptor genes and proteins for this organism is independent of other organisms.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fainting room** Fainting room: A fainting room was a private room, common in the Victorian era, which typically contained fainting couches. Such couches or sofas typically had an arm on one side only to permit easy access to a reclining position, similar to their cousin the chaise longue, although the sofa style most typically featured a back at one end (usually the side with the arm) so that the resulting position was not purely supine. Fainting room: There are also accounts that mention fainting rooms in eighteenth-century America. These rooms, which were also referred to as bedrooms (bedrooms were called chambers), were located on the ground floor and contained a day bed that allowed occupants to rest for brief periods during the day. Theories for prevalence: One theory for the predominance of fainting couches is that women were actually fainting because their corsets were laced too tightly, thus restricting blood flow. By preventing movement of the ribs, corsets restricted airflow to the lungs and, as a result, if the wearer exerted themselves to the point of needing large quantities of oxygen and was unable to fully inflate the lungs, this could lead to fainting. Hyperventilation for any reason could also potentially result in brief loss of consciousness. Theories for prevalence: Victorian fainting rooms have also been associated with a legacy of female containment, in which such rooms served as a deeply female space meant to force women to remain indoors and inactive under the guise of ensuring privacy, class, and interiority.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Remote Play** Remote Play: Remote Play is a feature of Sony video game consoles that allows the PlayStation 3, PlayStation 4 and PlayStation 5 to transmit video and audio output to another device; previously this could only be a PlayStation Portable or PlayStation Vita. In 2014, it was expanded to include the use of PlayStation TV, Xperia smartphones and tablets (Z2 and later), and PlayStation Now. In 2016, it was expanded to Microsoft Windows PCs and macOS. In 2019, support for Android and iOS devices was added. Support for remote play of PlayStation 5 games to other devices was added in November 2020 just prior to the new console's launch. Remote Play: Similar functionality is provided on Nintendo's Wii U console, using the Off-TV Play function. This feature essentially allows compatible home console games to be played on the handheld. While seldom implemented on PS3, Remote Play is a mandatory feature on all PS4 games, except for games that utilize peripherals such as PlayStation Move. Concept: Sony defined Remote Play as follows: "Remote Play allows a PSP system to connect wirelessly to a PS3 system and transfers some functionality of the PS3 to the PSP system. With remote play, a PSP system may access files that are located on the PS3, as well as, play certain software titles ..." Sony later amended this definition to apply between the PlayStation 4 and PlayStation Vita as well. The premise of Off-TV Play on the Wii U is similar in concept, in how the video game console does all of the processing, but sends the image and sound straight to the Wii U GamePad's screen instead of a television screen. Similarly, in the case of Remote Play, the PlayStation 3 or PlayStation 4 do all of the processing, but transmit the image and sound to the PlayStation Portable or PlayStation Vita screens and speakers. While typically in reference to Sony consoles and handhelds, it has been used in different ways as well. In April 2010, a firmware update was released for the PS3 that allowed Remote Play between it and the Sony VAIO brand desktops and laptops and Sony Xperia brand smartphones and tablets as well. Background: PS3 to PSP Interactivity between Sony's home video game consoles and handheld video game consoles can be traced back as far as 2006, prior to the PlayStation 3's launch, when journalists noticed a PlayStation Portable icon, with the title "Remote Play", on pre-release versions of their PS3. The functionality was officially revealed just prior to the PS3's launch in October 2006, at Sony's "Gamer's Day" event, where Sony demonstrated the ability to transfer the PS3's output to a PSP instead of a television, by showing downloaded PlayStation games and films being transmitted to a PSP's screen and speakers. Sony announced that all original PlayStation games would support the feature, but they had to be digital, not disc-based, media from the PS3's internal hard drive. This later changed by the end of 2007, when a firmware update made it so any original PlayStation game was compatible with Remote Play, even disc-based ones. Despite Sony's early emphasis on Remote Play and original PlayStation game support, it was used very sparingly between the PS3 and PSP, with very few PS3 titles allowing for its use. The feature was even removed from several titles before their final release, most notably Gran Turismo HD and Formula One Championship Edition. Most titles were small PlayStation Network-only titles. 
The 2007 PS3 title Lair was notable for being one of the few original, physical Blu-ray disc releases to work between the PS3 and PSP. Background: PS3 to PS Vita In late 2011, just prior to the launch of the PlayStation Vita, video game website Eurogamer published a rumor stating that a firmware update for the PS3 would provide Remote Play compatibility for all PS3 games when using Remote Play between a PS3 and Vita. The premise seemed plausible, with websites reporting that Sony had shown working demonstrations of the concept prior to the rumor at the Tokyo Game Show, showing LittleBigPlanet 2 and Killzone 3 supporting the feature. Despite this, the rumor was declared false by Sony, who said that the feature had to be implemented on the software side by developers on an individual basis, not on a hardware level.PS3 to Vita Remote Play went on to be rarely implemented as well. It retained any games supported by PS3 to PSP Remote Play support, including all original PlayStation games, but was again rarely used by actual PS3 games. Only a few games supported it, namely HD Remasters such as The Ico & Shadow of the Colossus Collection and the God of War Collection.President of Sony's Worldwide Studios for Sony Interactive Entertainment Inc. Shuhei Yoshida summarized the issues with PS3 to Vita Remote Play: "The single biggest issue, why there are not many PlayStation 3 games that support Remote Play, was that it was optional – the system didn't do much. The game has to set aside some memory or CPU to be able to do that, and usually, memory is the most precious resource that [development] teams fight amongst each other for. So when it comes down to the priorities, these are features that are very easy to drop." Despite this, unofficial hacks to the PS3 firmware have been reported to unlock the Remote Play feature in a number of PS3 games with varying degrees of success. Games such as Battlefield 3 and BioShock Infinite have been shown to technically be feasible, though still impossible to do without unofficially hacking the PS3's firmware. Background: PS4 to PS Vita In June 2013, Sony announced that all PlayStation 4 games would be compatible with Remote Play with the PS Vita, with the exception of games which conceptually would not work, such as ones that heavily revolve around PlayStation Eye use. Otherwise, contrary to PS3 to PS Vita Remote Play, PS4 to PS Vita Remote Play is designed on a hardware level, meaning that all games are automatically compatible, and it is only up to developers to make sure the controls adapt well to being played on a Vita instead of a DualShock 4. This iteration of Remote Play was developed by Gaikai, who also developed PlayStation Now. Remote Play on the updated PlayStation Vita 2000 was shown at Tokyo Game Show in 2013. Background: Other variants PS4 firmware update 1.70 introduced full remote play functionality for the PlayStation TV, allowing users to play PS4 games in a separate room or house, on a television set with a PS TV device remotely connected to the PS4.Remote Play with the PS4 is available for Android smartphones and tablet computers running Android 5.0 Lollipop or later, and requires a DualShock 4 in order to play games. The service was made available on 28 October 2014, exclusively on Sony's Xperia Z3 series phones, and was expanded to Sony's older Z2 series a month later. 
In October 2019, support was expanded to all Android smartphones with the release of PS4 system software 7.00.With the release of PS4 system software 3.50 on 6 April 2016, Remote Play was made available on Windows PCs and macOS. A DualShock 4 controller is required to use it, and must be connected through a USB cable or wirelessly via a separate accessory. 1080p streaming is available when using a PS4 Pro model. Background: Cloud gaming and Remote Play are some of several Gaikai-powered streaming services announced for the PlayStation 4 through its PlayStation Now service. Cloud gaming differs from Remote Play in that Remote Play allows games on home devices to operate remotely over a wireless network, while cloud gaming refers to a game that resides on a distant server rather than on a user's device.Support for PlayStation 5 games was added to the app in early November 2020, just prior to the console's launch on November 12, 2020. Software compatibility: In 2007, Sony made all original PlayStation games, when played on a PlayStation 3, compatible with Remote Play on the PSP. Additionally, Sony announced that all PlayStation 4 games will be playable on the PlayStation Vita. Beyond these two scenarios, Remote Play was a feature that was sparingly implemented in games. The below chart indicates instances when Remote Play on PS3 is an available feature.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Estradiol** Estradiol: Estradiol (E2), also spelled oestradiol, is an estrogen steroid hormone and the major female sex hormone. It is involved in the regulation of the estrous and menstrual female reproductive cycles. Estradiol is responsible for the development of female secondary sexual characteristics such as the breasts, widening of the hips and a female-associated pattern of fat distribution. It is also important in the development and maintenance of female reproductive tissues such as the mammary glands, uterus and vagina during puberty, adulthood and pregnancy. It also has important effects in many other tissues including bone, fat, skin, liver, and the brain. Estradiol: Though estradiol levels in males are much lower than in females, estradiol has important roles in males as well. Apart from humans and other mammals, estradiol is also found in most vertebrates and crustaceans, insects, fish, and other animal species.Estradiol is produced especially within the follicles of the ovaries, but also in other tissues including the testicles, the adrenal glands, fat, liver, the breasts, and the brain. Estradiol is produced in the body from cholesterol through a series of reactions and intermediates. The major pathway involves the formation of androstenedione, which is then converted by aromatase into estrone and is subsequently converted into estradiol. Alternatively, androstenedione can be converted into testosterone, which can then be converted into estradiol. Upon menopause in females, production of estrogens by the ovaries stops and estradiol levels decrease to very low levels. Estradiol: In addition to its role as a natural hormone, estradiol is used as a medication, for instance in menopausal hormone therapy and feminizing hormone therapy for transgender women; for information on estradiol as a medication, see the estradiol (medication) article. Biological function: Sexual development The development of secondary sex characteristics in women is driven by estrogens, to be specific, estradiol. These changes are initiated at the time of puberty, most are enhanced during the reproductive years, and become less pronounced with declining estradiol support after menopause. Thus, estradiol produces breast development, and is responsible for changes in the body shape, affecting bones, joints, and fat deposition. In females, estradiol induces breast development, widening of the hips, a feminine fat distribution (with fat deposited particularly in the breasts, hips, thighs, and buttocks), and maturation of the vagina and vulva, whereas it mediates the pubertal growth spurt (indirectly via increased growth hormone secretion) and epiphyseal closure (thereby limiting final height) in both sexes. Biological function: Reproduction Female reproductive system In the female, estradiol acts as a growth hormone for tissue of the reproductive organs, supporting the lining of the vagina, the cervical glands, the endometrium, and the lining of the fallopian tubes. It enhances growth of the myometrium. Estradiol appears necessary to maintain oocytes in the ovary. During the menstrual cycle, estradiol produced by the growing follicles triggers, via a positive feedback system, the hypothalamic-pituitary events that lead to the luteinizing hormone surge, inducing ovulation. In the luteal phase, estradiol, in conjunction with progesterone, prepares the endometrium for implantation. During pregnancy, estradiol increases due to placental production. 
The effect of estradiol, together with estrone and estriol, in pregnancy is less clear. They may promote uterine blood flow, myometrial growth, stimulate breast growth and at term, promote cervical softening and expression of myometrial oxytocin receptors. In baboons, blocking of estrogen production leads to pregnancy loss, suggesting estradiol has a role in the maintenance of pregnancy. Research is investigating the role of estrogens in the process of initiation of labor. Actions of estradiol are required before the exposure of progesterone in the luteal phase. Biological function: Male reproductive system The effect of estradiol (and estrogens in general) upon male reproduction is complex. Estradiol is produced by action of aromatase mainly in the Leydig cells of the mammalian testis, but also by some germ cells and the Sertoli cells of immature mammals. It functions (in vitro) to prevent apoptosis of male sperm cells. While some studies in the early 1990s claimed a connection between globally declining sperm counts and estrogen exposure in the environment, later studies found no such connection, nor evidence of a general decline in sperm counts. Suppression of estradiol production in a subpopulation of subfertile men may improve the semen analysis.Males with certain sex chromosome genetic conditions, such as Klinefelter's syndrome, will have a higher level of estradiol. Biological function: Skeletal system Estradiol has a profound effect on bone. Individuals without it (or other estrogens) will become tall and eunuchoid, as epiphyseal closure is delayed or may not take place. Bone density is also affected, resulting in early osteopenia and osteoporosis. Low levels of estradiol may also predict fractures, with post-menopausal women having the highest incidence of bone fracture. Women past menopause experience an accelerated loss of bone mass due to a relative estrogen deficiency. Biological function: Skin health The estrogen receptor, as well as the progesterone receptor, have been detected in the skin, including in keratinocytes and fibroblasts. At menopause and thereafter, decreased levels of female sex hormones result in atrophy, thinning, and increased wrinkling of the skin and a reduction in skin elasticity, firmness, and strength. These skin changes constitute an acceleration in skin aging and are the result of decreased collagen content, irregularities in the morphology of epidermal skin cells, decreased ground substance between skin fibers, and reduced capillaries and blood flow. The skin also becomes more dry during menopause, which is due to reduced skin hydration and surface lipids (sebum production). Along with chronological aging and photoaging, estrogen deficiency in menopause is one of the three main factors that predominantly influences skin aging.Hormone replacement therapy consisting of systemic treatment with estrogen alone or in combination with a progestogen, has well-documented and considerable beneficial effects on the skin of postmenopausal women. These benefits include increased skin collagen content, skin thickness and elasticity, and skin hydration and surface lipids. Topical estrogen has been found to have similar beneficial effects on the skin. In addition, a study has found that topical 2% progesterone cream significantly increases skin elasticity and firmness and observably decreases wrinkles in peri- and postmenopausal women. Skin hydration and surface lipids, on the other hand, did not significantly change with topical progesterone. 
These findings suggest that progesterone, like estrogen, also has beneficial effects on the skin, and may be independently protective against skin aging. Biological function: Nervous system Estrogens can be produced in the brain from steroid precursors. As antioxidants, they have been found to have neuroprotective function.The positive and negative feedback loops of the menstrual cycle involve ovarian estradiol as the link to the hypothalamic-pituitary system to regulate gonadotropins. (See Hypothalamic–pituitary–gonadal axis.) Estrogen is considered to play a significant role in women's mental health, with links suggested between the hormone level, mood and well-being. Sudden drops or fluctuations in, or long periods of sustained low levels of estrogen may be correlated with significant mood-lowering. Clinical recovery from depression postpartum, perimenopause, and postmenopause was shown to be effective after levels of estrogen were stabilized and/or restored.The volumes of sexually dimorphic brain structures in transgender women were found to change and approximate typical female brain structures when exposed to estrogen concomitantly with androgen deprivation over a period of months, suggesting that estrogen and/or androgens have a significant part to play in sex differentiation of the brain, both prenatally and later in life. Biological function: There is also evidence the programming of adult male sexual behavior in many vertebrates is largely dependent on estradiol produced during prenatal life and early infancy. It is not yet known whether this process plays a significant role in human sexual behavior, although evidence from other mammals tends to indicate a connection.Estrogen has been found to increase the secretion of oxytocin and to increase the expression of its receptor, the oxytocin receptor, in the brain. In women, a single dose of estradiol has been found to be sufficient to increase circulating oxytocin concentrations. Biological function: Gynecological cancers Estradiol has been tied to the development and progression of cancers such as breast cancer, ovarian cancer and endometrial cancer. Estradiol affects target tissues mainly by interacting with two nuclear receptors called estrogen receptor α (ERα) and estrogen receptor β (ERβ). One of the functions of these estrogen receptors is the modulation of gene expression. Once estradiol binds to the ERs, the receptor complexes then bind to specific DNA sequences, possibly causing damage to the DNA and an increase in cell division and DNA replication. Eukaryotic cells respond to damaged DNA by stimulating or impairing G1, S, or G2 phases of the cell cycle to initiate DNA repair. As a result, cellular transformation and cancer cell proliferation occurs. Biological function: Cardiovascular system Estrogen affects certain blood vessels. Improvement in arterial blood flow has been demonstrated in coronary arteries. 17-beta-estradiol (E2) is considered the most potent estrogen found in humans. E2 influences vascular function, apoptosis, and damage during cardiac ischemia and reperfusion. E2 can protect the heart and individual cardiac myocytes from injuries related to ischemia. After a heart attack or long periods of hypertension, E2 inhibits the adverse effects of pathologic remodeling of the heart.During pregnancy, high levels of estrogens, namely estradiol, increase coagulation and the risk of venous thromboembolism. Biological function: Other functions Estradiol has complex effects on the liver. 
It affects the production of multiple proteins, including lipoproteins, binding proteins, and proteins responsible for blood clotting. In high amounts, estradiol can lead to cholestasis, for instance cholestasis of pregnancy. Certain gynecological conditions are dependent on estrogen, such as endometriosis, leiomyomata uteri, and uterine bleeding. Biological activity: Estradiol acts primarily as an agonist of the estrogen receptor (ER), a nuclear steroid hormone receptor. There are two subtypes of the ER, ERα and ERβ, and estradiol potently binds to and activates both of these receptors. The result of ER activation is a modulation of gene transcription and expression in ER-expressing cells, which is the predominant mechanism by which estradiol mediates its biological effects in the body. Estradiol also acts as an agonist of membrane estrogen receptors (mERs), such as GPER (GPR30), a recently discovered non-nuclear receptor for estradiol, via which it can mediate a variety of rapid, non-genomic effects. Unlike the case of the ER, GPER appears to be selective for estradiol, and shows very low affinities for other endogenous estrogens, such as estrone and estriol. Additional mERs besides GPER include ER-X, ERx, and Gq-mER. In their inactive state, ERα and ERβ are trapped in multimolecular chaperone complexes organized around heat shock protein 90 (HSP90), containing the p23 protein and immunophilin, and are located mostly in the cytoplasm and partially in the nucleus. In the E2 classical pathway, or estrogen classical pathway, estradiol enters the cytoplasm, where it interacts with ERs. Once E2 is bound, the ERs dissociate from the molecular chaperone complexes, become competent to dimerize, migrate to the nucleus, and bind to specific DNA sequences (the estrogen response element, ERE), allowing gene transcription, which can take place over hours and days. Biological activity: Given by subcutaneous injection in mice, estradiol is about 10-fold more potent than estrone and about 100-fold more potent than estriol. As such, estradiol is the main estrogen in the body, although the roles of estrone and estriol as estrogens are said not to be negligible. Biochemistry: Biosynthesis Estradiol, like other steroid hormones, is derived from cholesterol. After side chain cleavage and using the Δ5 or the Δ4 pathway, androstenedione is the key intermediary. A portion of the androstenedione is converted to testosterone, which in turn undergoes conversion to estradiol by aromatase. In an alternative pathway, androstenedione is aromatized to estrone, which is subsequently converted to estradiol via 17β-hydroxysteroid dehydrogenase (17β-HSD). During the reproductive years, most estradiol in women is produced by the granulosa cells of the ovaries by the aromatization of androstenedione (produced in the theca folliculi cells) to estrone, followed by conversion of estrone to estradiol by 17β-HSD. Smaller amounts of estradiol are also produced by the adrenal cortex, and, in men, by the testes. Estradiol is not produced only in the gonads; in particular, fat cells produce active precursors to estradiol, and will continue to do so even after menopause. Estradiol is also produced in the brain and in arterial walls. Biochemistry: In men, approximately 15 to 25% of circulating estradiol is produced in the testicles. The rest is synthesized via peripheral aromatization of testosterone into estradiol and of androstenedione into estrone (which is then transformed into estradiol via peripheral 17β-HSD). 
This peripheral aromatization occurs predominantly in adipose tissue, but also occurs in other tissues such as bone, liver, and the brain. Approximately 40 to 50 µg of estradiol is produced per day in men. Biochemistry: Distribution In plasma, estradiol is largely bound to SHBG, and also to albumin. Only a fraction of 2.21% (± 0.04%) is free and biologically active, the percentage remaining constant throughout the menstrual cycle. Biochemistry: Metabolism Inactivation of estradiol includes conversion to less-active estrogens, such as estrone and estriol. Estriol is the major urinary metabolite. Estradiol is conjugated in the liver to form estrogen conjugates like estradiol sulfate, estradiol glucuronide and, as such, excreted via the kidneys. Some of the water-soluble conjugates are excreted via the bile duct, and partly reabsorbed after hydrolysis from the intestinal tract. This enterohepatic circulation contributes to maintaining estradiol levels. Biochemistry: Estradiol is also metabolized via hydroxylation into catechol estrogens. In the liver, it is non-specifically metabolized by CYP1A2, CYP3A4, and CYP2C9 via 2-hydroxylation into 2-hydroxyestradiol, and by CYP2C9, CYP2C19, and CYP2C8 via 17β-hydroxy dehydrogenation into estrone, with various other cytochrome P450 (CYP) enzymes and metabolic transformations also being involved.Estradiol is additionally conjugated with an ester into lipoidal estradiol forms like estradiol palmitate and estradiol stearate to a certain extent; these esters are stored in adipose tissue and may act as a very long-lasting reservoir of estradiol. Biochemistry: Excretion Estradiol is excreted in the form of glucuronide and sulfate estrogen conjugates in urine. Following an intravenous injection of labeled estradiol in women, almost 90% is excreted in urine and feces within 4 to 5 days. Enterohepatic recirculation causes a delay in excretion of estradiol. Biochemistry: Levels Levels of estradiol in premenopausal women are highly variable throughout the menstrual cycle and reference ranges widely vary from source to source. Estradiol levels are minimal and according to most laboratories range from 20 to 80 pg/mL during the early to mid follicular phase (or the first week of the menstrual cycle, also known as menses). Levels of estradiol gradually increase during this time and through the mid to late follicular phase (or the second week of the menstrual cycle) until the pre-ovulatory phase. At the time of pre-ovulation (a period of about 24 to 48 hours), estradiol levels briefly surge and reach their highest concentrations of any other time during the menstrual cycle. Circulating levels are typically between 130 and 200 pg/mL at this time, but in some women may be as high as 300 to 400 pg/mL, and the upper limit of the reference range of some laboratories are even greater (for instance, 750 pg/mL). Following ovulation (or mid-cycle) and during the latter half of the menstrual cycle or the luteal phase, estradiol levels plateau and fluctuate between around 100 and 150 pg/mL during the early and mid luteal phase, and at the time of the late luteal phase, or a few days before menstruation, reach a low of around 40 pg/mL. The mean integrated levels of estradiol during a full menstrual cycle have variously been reported by different sources as 80, 120, and 150 pg/mL. 
Although contradictory reports exist, one study found mean integrated estradiol levels of 150 pg/mL in younger women whereas mean integrated levels ranged from 50 to 120 pg/mL in older women. During the reproductive years of the human female, levels of estradiol are somewhat higher than those of estrone, except during the early follicular phase of the menstrual cycle; thus, estradiol may be considered the predominant estrogen during human female reproductive years in terms of absolute serum levels and estrogenic activity. During pregnancy, estriol becomes the predominant circulating estrogen, and this is the only time at which estetrol occurs in the body, while during menopause, estrone predominates (both based on serum levels). The estradiol produced by male humans, from testosterone, is present at serum levels roughly comparable to those of postmenopausal women (14–55 versus <35 pg/mL, respectively). It has also been reported that if concentrations of estradiol in a 70-year-old man are compared to those of a 70-year-old woman, levels are approximately 2- to 4-fold higher in the man. Biochemistry: Measurement In women, serum estradiol is measured in a clinical laboratory and reflects primarily the activity of the ovaries. The estradiol blood test measures the amount of estradiol in the blood. It is used to check the function of the ovaries, placenta, and adrenal glands. It can detect baseline estrogen in women with amenorrhea or menstrual dysfunction, and can detect the state of hypoestrogenicity and menopause. Furthermore, estrogen monitoring during fertility therapy assesses follicular growth and is useful in monitoring the treatment. Estrogen-producing tumors will demonstrate persistently high levels of estradiol and other estrogens. In precocious puberty, estradiol levels are inappropriately increased. Biochemistry: Ranges Individual laboratory results should always be interpreted using the ranges provided by the laboratory that performed the test. In the normal menstrual cycle, estradiol levels typically measure <50 pg/mL at menstruation, rise with follicular development (peak: 200 pg/mL), drop briefly at ovulation, and rise again during the luteal phase for a second peak. At the end of the luteal phase, estradiol levels drop to their menstrual levels unless there is a pregnancy. During pregnancy, estrogen levels, including estradiol, rise steadily toward term. The source of these estrogens is the placenta, which aromatizes prohormones produced in the fetal adrenal gland. Medical use: Estradiol is used as a medication, primarily in hormone therapy for menopausal symptoms as well as feminizing hormone therapy for trans individuals. Chemistry: Estradiol is an estrane steroid. It is also known as 17β-estradiol (to distinguish it from 17α-estradiol) or as estra-1,3,5(10)-triene-3,17β-diol. It has two hydroxyl groups, one at the C3 position and the other at the 17β position, as well as three double bonds in the A ring. Due to its two hydroxyl groups, estradiol is often abbreviated as E2. The structurally related estrogens estrone (E1), estriol (E3), and estetrol (E4) have one, three, and four hydroxyl groups, respectively. Neuropsychopharmacology: In a randomized, double-blind, placebo-controlled study, estradiol was shown to have gender-specific effects on fairness sensitivity. 
Overall, when the division of a given amount of money was framed as either fair or unfair in a modified version of the ultimatum game, estradiol increased the acceptance rate of fair-framed proposals among men and decreased it among women. However, among the placebo-group "the mere belief of receiving estradiol treatment significantly increased the acceptance of unfair-framed offers in both sexes", indicating that so-called "environmental" factors played a role in organising the responses towards these presentations of the ultimatum game. History: The discovery of estrogen is usually credited to the American scientists Edgar Allen and Edward A. Doisy. In 1923, they observed that injection of fluid from porcine ovarian follicles produced pubertal- and estrus-type changes (including vaginal, uterine, and mammary gland changes and sexual receptivity) in sexually immature, ovariectomized mice and rats. These findings demonstrated the existence of a hormone which is produced by the ovaries and is involved in sexual maturation and reproduction. At the time of its discovery, Allen and Doisy did not name the hormone, and simply referred to it as an "ovarian hormone" or "follicular hormone"; others referred to it variously as feminin, folliculin, menformon, thelykinin, and emmenin. In 1926, Parkes and Bellerby coined the term estrin to describe the hormone on the basis of it inducing estrus in animals. Estrone was isolated and purified independently by Allen and Doisy and German scientist Adolf Butenandt in 1929, and estriol was isolated and purified by Marrian in 1930; they were the first estrogens to be identified.Estradiol, the most potent of the three major estrogens, was the last of the three to be identified. It was discovered by Schwenk and Hildebrant in 1933, who synthesized it via reduction of estrone. Estradiol was subsequently isolated and purified from sow ovaries by Doisy in 1935, with its chemical structure determined simultaneously, and was referred to variously as dihydrotheelin, dihydrofolliculin, dihydrofollicular hormone, and dihydroxyestrin. In 1935, the name estradiol and the term estrogen were formally established by the Sex Hormone Committee of the Health Organization of the League of Nations; this followed the names estrone (which was initially called theelin, progynon, folliculin, and ketohydroxyestrin) and estriol (initially called theelol and trihydroxyestrin) having been established in 1932 at the first meeting of the International Conference on the Standardization of Sex Hormones in London. Following its discovery, a partial synthesis of estradiol from cholesterol was developed by Inhoffen and Hohlweg in 1940, and a total synthesis was developed by Anner and Miescher in 1948. Society and culture: Etymology The name estradiol derives from estra-, Gk. οἶστρος (oistros, literally meaning "verve or inspiration"), which refers to the estrane steroid ring system, and -diol, a chemical term and suffix indicating that the compound is a type of alcohol bearing two hydroxyl groups.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hemangiopericytoma** Hemangiopericytoma: A hemangiopericytoma is a type of soft-tissue sarcoma that originates in the pericytes in the walls of capillaries. When inside the nervous system, although not strictly a meningioma tumor, it is a meningeal tumor with a special aggressive behavior. It was first characterized in 1942. Signs and symptoms: Symptoms of hemangiopericytoma vary greatly depending on both tumor stage and affected organs. Most patients report pain and mass-related symptoms, while others also report vascular disease-related symptoms, and some have no symptoms until late in the disease process. Hemangiopericytomas are most commonly found in the meninges, lower extremities, retroperitoneum, pelvis, lungs, and pleura. Histopathology: Hemangiopericytomas are tumors that are derived from specialized spindle-shaped cells called pericytes, which line capillaries. Hemangiopericytoma located in the cerebral cavity is an aggressive tumor of the mesenchyme with oval nuclei and scant cytoplasm. There is dense intercellular reticulin staining. Tumor cells can be fibroblastic, myxoid, or pericytic. These tumors, in contrast to meningiomas, do not stain with epithelial membrane antigen. They have a grade 2 or 3 biological behavior, and need to be distinguished from benign meningiomas because of their high rate of recurrence (68.2%) and metastases. Diagnosis: Computerized tomography and magnetic resonance imaging are not effective methods for diagnosis of hemangiopericytomas. In practice, a presumptive diagnosis is often reached through exclusion of other soft tissue tumors, and a tissue biopsy is required to confirm diagnosis. Treatment: Depending on the grade of the sarcoma, it is treated with surgery, chemotherapy, and/or radiotherapy. Though surgery is the current standard of care for hemangiopericytomas, metastasis and tumor recurrence occur in more than 30% of patients, in particular recurrence in the pelvis and retroperitoneum and metastasis in bone and lungs. Radiotherapy does not appear to provide a significant survival benefit but is recommended for use in patients with tumors greater than 5 cm in diameter or with inadequate resection margins after surgery. The clinical benefit of chemotherapy in soft tissue tumors remains unclear. However, the combination of surgery and chemotherapy appears to worsen survival in hemangiopericytoma patients. More research is needed to determine the efficacy of different types of treatment. Epidemiology: In one series, the median age of affected individuals was 45 years, with a 10-year survival rate of 70 percent. In another study, age over 45 and female sex were associated with worse survival rates in hemangiopericytomas.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glass cloth** Glass cloth: Glass cloth is a textile material woven from glass fiber yarn. Home and garden: Glass cloth was originally developed to be used in greenhouse paneling, allowing sunlight's ultraviolet rays to be filtered out, while still allowing visible light through to plants. Home and garden: Glass cloth is also a term for a type of tea towel suited for polishing glass. The cloth is usually woven with the plain weave, and may be patterned in various ways, though checked cloths are the most common. The original cloth was made from linen, but a large quantity is made with cotton warp and tow weft, and in some cases they are composed entirely of cotton. Short fibres of the cheaper kind are easily detached from the cloth. In the Southern Plains during the Dust Bowl, states' health officials recommended attaching translucent glass cloth to the inside frames of windows to help in keeping the dust out of buildings, although people also used paperboard, canvas or blankets. Eyewitness accounts indicate they were not completely successful. Use in technology: Due to properties of glass such as heat resistance and an inability to ignite, glass has been used to create fire barriers in hazardous environments such as the inside of racecars. Its poor flexibility, and its being a source of skin irritation, made the fibers inadequate for apparel uses. Its bi-directional strength makes glass cloth useful for some fiberglass-reinforced plastics. For example, the Rutan VariEze homebuilt aircraft uses a moldless glass cloth and epoxy composite structure and skin. Glass cloth is commonly used as the reinforcing lattice for pre-pregs.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gerchberg–Saxton algorithm** Gerchberg–Saxton algorithm: The Gerchberg–Saxton (GS) algorithm is an iterative phase retrieval algorithm for retrieving the phase of a complex-valued wavefront from two intensity measurements acquired in two different planes. Typically, the two planes are the image plane and the far field (diffraction) plane, and the wavefront propagation between these two planes is given by the Fourier transform. The original paper by Gerchberg and Saxton considered the image and the diffraction pattern of a sample acquired in an electron microscope. Gerchberg–Saxton algorithm: It is often necessary to know only the phase distribution from one of the planes, since the phase distribution on the other plane can be obtained by performing a Fourier transform on the plane whose phase is known. Although often used for two-dimensional signals, the GS algorithm is also valid for one-dimensional signals. The pseudocode below performs the GS algorithm to obtain a phase distribution for the plane "Source", such that its Fourier transform would have the amplitude distribution of the plane "Target". Pseudocode algorithm:
Let:
  FT – forward Fourier transform
  IFT – inverse Fourier transform
  i – the imaginary unit, √−1 (square root of −1)
  exp – exponential function (exp(x) = e^x)
  Target and Source be the Target and Source amplitude planes respectively
  A, B, C & D be complex planes with the same dimension as Target and Source
  Amplitude – amplitude-extracting function: e.g. for complex z = x + iy, Amplitude(z) = sqrt(x·x + y·y); for real x, Amplitude(x) = |x|
  Phase – phase-extracting function: e.g. Phase(z) = arctan(y / x)
end Let

algorithm Gerchberg–Saxton(Source, Target, Retrieved_Phase) is
  A := IFT(Target)
  while error criterion is not satisfied
    B := Amplitude(Source) × exp(i × Phase(A))
    C := FT(B)
    D := Amplitude(Target) × exp(i × Phase(C))
    A := IFT(D)
  end while
  Retrieved_Phase = Phase(A)

This is just one of the many ways to implement the GS algorithm. Aside from optimizations, others may start by performing a forward Fourier transform to the source distribution.
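Below is a minimal NumPy sketch of the loop above, offered as an illustration rather than a reference implementation: the function name gerchberg_saxton, the array names source_amp and target_amp, and the use of a fixed iteration count in place of an explicit error criterion are assumptions made for this example.

```python
# Minimal NumPy sketch of the Gerchberg-Saxton iteration (illustrative only).
# source_amp and target_amp are hypothetical 2-D arrays holding the measured
# amplitude (square root of intensity) in the image and diffraction planes.
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=100):
    """Return the retrieved phase (in radians) for the Source plane."""
    A = np.fft.ifft2(target_amp)                    # A := IFT(Target)
    for _ in range(iterations):                     # stand-in for the error criterion
        B = source_amp * np.exp(1j * np.angle(A))   # impose Source amplitude, keep phase
        C = np.fft.fft2(B)                          # propagate to the diffraction plane
        D = target_amp * np.exp(1j * np.angle(C))   # impose Target amplitude, keep phase
        A = np.fft.ifft2(D)                         # propagate back to the image plane
    return np.angle(A)                              # Retrieved_Phase = Phase(A)

# Small self-consistent example on a 64x64 grid.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source_amp = np.ones((64, 64))                  # e.g. a uniform illuminating beam
    true_phase = rng.uniform(0, 2 * np.pi, (64, 64))
    target_amp = np.abs(np.fft.fft2(source_amp * np.exp(1j * true_phase)))
    retrieved = gerchberg_saxton(source_amp, target_amp, iterations=50)
    print(retrieved.shape)
```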
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bitmap index** Bitmap index: A bitmap index is a special kind of database index that uses bitmaps. Bitmap index: Bitmap indexes have traditionally been considered to work well for low-cardinality columns, which have a modest number of distinct values, either absolutely, or relative to the number of records that contain the data. The extreme case of low cardinality is Boolean data (e.g., does a resident in a city have internet access?), which has two values, True and False. Bitmap indexes use bit arrays (commonly called bitmaps) and answer queries by performing bitwise logical operations on these bitmaps. Bitmap indexes have a significant space and performance advantage over other structures for query of such data. Their drawback is that they are less efficient than traditional B-tree indexes for columns whose data is frequently updated; consequently, they are more often employed in read-only systems specialized for fast query, e.g., data warehouses, and are generally unsuitable for online transaction processing applications. Bitmap index: Some researchers argue that bitmap indexes are also useful for moderate or even high-cardinality data (e.g., unique-valued data) which is accessed in a read-only manner, and queries access multiple bitmap-indexed columns using the AND, OR or XOR operators extensively. Bitmap indexes are also useful in data warehousing applications for joining a large fact table to smaller dimension tables such as those arranged in a star schema. Example: Continuing the internet access example, a bitmap index can be viewed logically as follows: Identifier refers to the unique number assigned to each resident, HasInternet is the data to be indexed, and the content of the bitmap index is a set of bitmaps, one for each distinct value. In this case, there are two such bitmaps, one for "has internet" Yes and one for "has internet" No. Each bit in bitmap Y shows whether a particular row refers to a person who has internet access (see the code sketch at the end of this entry). This is the simplest form of bitmap index. Most columns will have more distinct values. For example, the sales amount is likely to have a much larger number of distinct values. Variations on the bitmap index can effectively index this data as well. We briefly review three such variations. Example: Note: Many of the references cited here are reviewed in John Wu (2007). For those who might be interested in experimenting with some of the ideas mentioned here, many of them are implemented in open source software such as FastBit, the Lemur Bitmap Index C++ Library, the Roaring Bitmap Java library and the Apache Hive Data Warehouse system. Compression: For historical reasons, bitmap compression and inverted list compression were developed as separate lines of research, and only later were recognized as solving essentially the same problem. Software can compress each bitmap in a bitmap index to save space. There has been a considerable amount of work on this subject. Compression: Though there are exceptions such as Roaring bitmaps, bitmap compression algorithms typically employ run-length encoding, such as the Byte-aligned Bitmap Code, the Word-Aligned Hybrid code, the Partitioned Word-Aligned Hybrid (PWAH) compression, the Position List Word Aligned Hybrid, the Compressed Adaptive Index (COMPAX), Enhanced Word-Aligned Hybrid (EWAH) and the COmpressed 'N' Composable Integer SEt (CONCISE). 
These compression methods require very little effort to compress and decompress. More importantly, bitmaps compressed with BBC, WAH, COMPAX, PLWAH, EWAH and CONCISE can directly participate in bitwise operations without decompression. This gives them considerable advantages over generic compression techniques such as LZ77. BBC compression and its derivatives are used in a commercial database management system. BBC is effective in both reducing index sizes and maintaining query performance. BBC encodes the bitmaps in bytes, while WAH encodes in words, better matching current CPUs. "On both synthetic data and real application data, the new word aligned schemes use only 50% more space, but perform logical operations on compressed data 12 times faster than BBC." PLWAH bitmaps were reported to take 50% of the storage space consumed by WAH bitmaps and offer up to 20% faster performance on logical operations. Similar considerations apply to CONCISE and Enhanced Word-Aligned Hybrid. The performance of schemes such as BBC, WAH, PLWAH, EWAH, COMPAX and CONCISE is dependent on the order of the rows. A simple lexicographical sort can divide the index size by 9 and make indexes several times faster. The larger the table, the more important it is to sort the rows. Reshuffling techniques have also been proposed to achieve the same results as sorting when indexing streaming data. Encoding: Basic bitmap indexes use one bitmap for each distinct value. It is possible to reduce the number of bitmaps used by using a different encoding method. For example, it is possible to encode C distinct values using log(C) bitmaps with binary encoding. This reduces the number of bitmaps, further saving space, but to answer any query, most of the bitmaps have to be accessed. This makes it potentially not as effective as scanning a vertical projection of the base data, also known as a materialized view or projection index. Finding the optimal encoding method that balances (arbitrary) query performance, index size and index maintenance remains a challenge. Encoding: Without considering compression, Chan and Ioannidis analyzed a class of multi-component encoding methods and came to the conclusion that two-component encoding sits at the kink of the performance vs. index size curve and therefore represents the best trade-off between index size and query performance. Binning: For high-cardinality columns, it is useful to bin the values, where each bin covers multiple values, and to build the bitmaps to represent the values in each bin. This approach reduces the number of bitmaps used regardless of encoding method. However, binned indexes can only answer some queries without examining the base data. For example, if a bin covers the range from 0.1 to 0.2, then when the user asks for all values less than 0.15, all rows that fall in the bin are possible hits and have to be checked to verify whether they are actually less than 0.15. The process of checking the base data is known as the candidate check. In most cases, the time used by the candidate check is significantly longer than the time needed to work with the bitmap index. Therefore, binned indexes exhibit irregular performance. They can be very fast for some queries, but much slower if the query does not exactly match a bin. History: The concept of the bitmap index was first introduced by Professor Israel Spiegler and Rafi Maayan in their research "Storage and Retrieval Considerations of Binary Data Bases", published in 1985. 
The first commercial database product to implement a bitmap index was Computer Corporation of America's Model 204. Patrick O'Neil published a paper about this implementation in 1987. This implementation is a hybrid between the basic bitmap index (without compression) and the list of Row Identifiers (RID-list). Overall, the index is organized as a B+tree. When the column cardinality is low, each leaf node of the B-tree would contain a long list of RIDs. In this case, it requires less space to represent the RID-lists as bitmaps. Since each bitmap represents one distinct value, this is the basic bitmap index. As the column cardinality increases, each bitmap becomes sparse and it may take more disk space to store the bitmaps than to store the same content as RID-lists. In this case, it switches to using RID-lists, which makes it a B+tree index. In-memory bitmaps: One of the strongest reasons for using bitmap indexes is that the intermediate results produced from them are also bitmaps and can be efficiently reused in further operations to answer more complex queries. Many programming languages support this as a bit array data structure. For example, Java has the BitSet class. Some database systems that do not offer persistent bitmap indexes use bitmaps internally to speed up query processing. For example, PostgreSQL versions 8.1 and later implement a "bitmap index scan" optimization to speed up arbitrarily complex logical operations between available indexes on a single table. For tables with many columns, the total number of distinct indexes needed to satisfy all possible queries (with equality filtering conditions on any of the fields) grows very fast, being defined by this formula: C(n, ⌊n/2⌋) = n! / ((n − ⌊n/2⌋)! · ⌊n/2⌋!). A bitmap index scan combines expressions on different indexes, thus requiring only one index per column to support all possible queries on a table. In-memory bitmaps: Applying this access strategy to B-tree indexes can also combine range queries on multiple columns. In this approach, a temporary in-memory bitmap is created with one bit for each row in the table (1 MB can thus store over 8 million entries). Next, the results from each index are combined into the bitmap using bitwise operations. After all conditions are evaluated, the bitmap contains a "1" for rows that matched the expression. Finally, the bitmap is traversed and matching rows are retrieved. In addition to efficiently combining indexes, this also improves locality of reference of table accesses, because all rows are fetched sequentially from the main table. The internal bitmap is discarded after the query. If there are too many rows in the table to use 1 bit per row, a "lossy" bitmap is created instead, with a single bit per disk page. In this case, the bitmap is just used to determine which pages to fetch; the filter criteria are then applied to all rows in matching pages.
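The short Python sketch below is illustrative only and assumes nothing beyond the description above: it builds one bitmap per distinct column value (using Python integers as bit arrays) and answers a two-column query with a bitwise AND, in the spirit of both the "has internet" example and the in-memory bitmap combination just described. The class and variable names (BitmapIndex, has_internet, city) are invented for this example.

```python
# Minimal bitmap-index sketch: one bitmap (a Python int used as a bit array)
# per distinct column value; queries combine bitmaps with bitwise operations.
from collections import defaultdict

class BitmapIndex:
    def __init__(self, values):
        self.bitmaps = defaultdict(int)
        for row, value in enumerate(values):
            self.bitmaps[value] |= 1 << row     # set bit `row` in that value's bitmap

    def rows(self, bitmap):
        """Yield the row numbers whose bit is set in `bitmap`."""
        row = 0
        while bitmap:
            if bitmap & 1:
                yield row
            bitmap >>= 1
            row += 1

# Example: a "has internet" column and a city column, combined with bitwise AND.
has_internet = BitmapIndex(["Yes", "No", "Yes", "Yes", "No"])
city = BitmapIndex(["Oslo", "Oslo", "Bergen", "Oslo", "Bergen"])

match = has_internet.bitmaps["Yes"] & city.bitmaps["Oslo"]   # AND of two bitmaps
print(list(has_internet.rows(match)))                        # -> [0, 3]
```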
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stylolite** Stylolite: Stylolites (Greek: stylos, pillar; lithos, stone) are serrated surfaces within a rock mass at which mineral material has been removed by pressure dissolution, in a deformation process that decreases the total volume of rock. Minerals which are insoluble in water, such as clays, pyrite and oxides, as well as insoluble organic matter, remain within the stylolites and make them visible. Sometimes host rocks contain no insoluble minerals, in which case stylolites can be recognized by a change in the texture of the rock. They occur most commonly in homogeneous rocks (carbonates, cherts, sandstones), but they can be found in certain igneous rocks and ice. Their sizes vary from microscopic contacts between two grains (microstylolites) to large structures up to 20 m in length and up to 10 m in amplitude in ice. Stylolites usually form parallel to bedding, because of overburden pressure, but they can be oblique or even perpendicular to bedding, as a result of tectonic activity. Classification of stylolites: In structural geology and diagenesis, pressure solution or pressure dissolution is a deformation mechanism that involves the dissolution of minerals at grain-to-grain contacts into an aqueous pore fluid in areas of relatively high stress and either deposition in regions of relatively low stress within the same rock or their complete removal from the rock within the fluid. It is an example of diffusive mass transfer. Stylolites are formed by this process. Classification of stylolites: Stylolites can be classified according to their geometry or their orientation and relationship to bedding. Classification of stylolites: Geometric classification Park and Schot (1968) recognized six different geometries in stylolites: simple or primitive wave-like type; sutured type; up-peak type (rectangular); down-peak type (rectangular); sharp-peak type (tapered and pointed); and seismogram type. Relationship to bedding Horizontal stylolites: This is the most commonly observed stylolite type. They occur parallel or nearly parallel to the bedding of rocks. This type is most frequently found in layered sedimentary rocks, mostly in carbonate rocks, which have not been affected by intensive tectonic structural activity or metamorphism. Classification of stylolites: Inclined stylolites or slickolites: This type occurs oblique to bedding. It appears in rocks both affected and unaffected by tectonic activity, and can also be found in metamorphic and layered igneous rocks. Horizontal-inclined (vertical) or crosscutting stylolites: This type is a combination of horizontal and inclined types of stylolites. Horizontal stylolites usually have a higher amplitude than inclined stylolites. Horizontal-inclined stylolites can be found in rocks affected by pressure parallel to the bedding plane followed by pressure perpendicular to bedding. Vertical stylolites: This type of stylolite is oriented at right angles to the bedding. It may or may not be associated with tectonic activity. It is caused by pressure acting perpendicularly to the bedding. Classification of stylolites: Interconnecting network stylolites: This type is a network of stylolites, which are related to each other at relatively small angles. This type can be divided into two subtypes. Stylolites of subtype A are characterized by higher amplitudes. They are related to the bedding either horizontally, or at a small angle. Stylolites of subtype B usually appear in rocks which have been affected by tectonic and/or metamorphic activity. 
These stylolites have a low amplitude with undulations. Their relation to the bedding can vary from horizontal to vertical. Classification of stylolites: Vertical-inclined (horizontal) or crosscutting stylolites This type is a combination of horizontal or inclined and vertical stylolite types. In this case the inclined or horizontal stylolites were formed first and the vertical later. This type can be divided into two subtypes by directions of displacement of the inclined stylolites. In subtype A, the displacements could have happened during vertical stylolization, while in subtype B, the displacements could have happened before vertical stylolization. Development: A stylolite is not a structural fracture, although stylolites have been described as a form of 'anticrack', with the sides moving together rather than apart. Proof exists in the form of fossiliferous limestone where fossils are crosscut by a stylolite and only one half still exists; the other half has been dissolved away. Rye & Bradbury (1988) investigated 13C/12C and 18O/16O stable isotope systematics in limestone on either side of a stylolite plane and found differences confirming different degrees of fluid-rock interaction. Development: In order for a stylolite to develop, a solution into which minerals can dissolve needs to be present, along with a pore network through which dissolved solids can migrate by advection or diffusion from the developing stylolite. Stylolite development is enhanced by porosity, because porosity localizes stress on grains and increases the stress there. Therefore, it is suggested that bedding-parallel stylolites form in areas of high porosity, and most of the transverse stylolites form along preexisting fractures. Significance: Stylolites are significant in several fields. In petrology, stylolites are important because they alter rock fabrics and dissolve solids that precipitate as cement. In stratigraphy, weathering of stylolites generates apparent bedding in many stratigraphic sections, and loss of material along stylolites can have a result similar to erosion, with significant stratigraphic thinning. In hydrology, stylolites act as barriers to fluid flow in some settings and as conduits for fluid flow in others. Also, stylolites are indicators of compressive stress in tectonic studies, and development of transverse stylolites contributes to crustal shortening parallel to the direction of their columns.
**ELISA** ELISA: The enzyme-linked immunosorbent assay (ELISA) is a commonly used analytical biochemistry assay, first described by Eva Engvall and Peter Perlmann in 1971. The assay uses a solid-phase type of enzyme immunoassay (EIA) to detect the presence of a ligand (commonly a protein) in a liquid sample using antibodies directed against the protein to be measured. ELISA has been used as a diagnostic tool in medicine, plant pathology, and biotechnology, as well as a quality control check in various industries. ELISA: In the simplest form of an ELISA, antigens from the sample to be tested are attached to a surface. Then, a matching antibody is applied over the surface so it can bind the antigen. This antibody is linked to an enzyme, and then any unbound antibodies are removed. In the final step, a substance containing the enzyme's substrate is added. If there was binding, the subsequent reaction produces a detectable signal, most commonly a color change. ELISA: Performing an ELISA involves at least one antibody with specificity for a particular antigen. The sample with an unknown amount of antigen is immobilized on a solid support (usually a polystyrene microtiter plate) either non-specifically (via adsorption to the surface) or specifically (via capture by another antibody specific to the same antigen, in a "sandwich" ELISA). After the antigen is immobilized, the detection antibody is added, forming a complex with the antigen. The detection antibody can be covalently linked to an enzyme or can itself be detected by a secondary antibody that is linked to an enzyme through bioconjugation. Between each step, the plate is typically washed with a mild detergent solution to remove any proteins or antibodies that are non-specifically bound. After the final wash step, the plate is developed by adding an enzymatic substrate to produce a visible signal, which indicates the quantity of antigen in the sample. ELISA: Of note, ELISA can perform other forms of ligand binding assays instead of strictly "immuno" assays, though the name carried the original "immuno" because of the common use and history of development of this method. The technique essentially requires any ligating reagent that can be immobilized on the solid phase along with a detection reagent that will bind specifically and use an enzyme to generate a signal that can be properly quantified. In between the washes, only the ligand and its specific binding counterparts remain specifically bound or "immunosorbed" by antigen-antibody interactions to the solid phase, while the nonspecific or unbound components are washed away. Unlike other spectrophotometric wet lab assay formats where the same reaction well (e.g., a cuvette) can be reused after washing, the ELISA plates have the reaction products immunosorbed on the solid phase, which is part of the plate, and so are not easily reusable. Principle: As an analytical biochemistry assay and a "wet lab" technique, ELISA involves detection of an analyte (i.e., the specific substance whose presence is being quantitatively or qualitatively analyzed) in a liquid sample by a method that continues to use liquid reagents during the analysis (i.e., a controlled sequence of biochemical reactions that will generate a signal which can be easily quantified and interpreted as a measure of the amount of analyte in the sample), which stays liquid and remains inside a reaction chamber or well needed to keep the reactants contained. This is in contrast to "dry lab" techniques that use dry strips.
Even if the sample is liquid (e.g., a measured small drop), the final detection step in "dry" analysis involves reading of a dried strip by methods such as reflectometry and does not need a reaction containment chamber to prevent spillover or mixing between samples. As a heterogeneous assay, ELISA separates some component of the analytical reaction mixture by adsorbing certain components onto a solid phase which is physically immobilized. In ELISA, a liquid sample is added onto a stationary solid phase with special binding properties and is followed by multiple liquid reagents that are sequentially added, incubated, and washed, followed by some optical change (e.g., color development by the product of an enzymatic reaction) in the final liquid in the well from which the quantity of the analyte is measured. The quantitative "reading" is usually based on detection of intensity of transmitted light by spectrophotometry, which involves quantitation of transmission of some specific wavelength of light through the liquid (as well as the transparent bottom of the well in the multiple-well plate format). The sensitivity of detection depends on amplification of the signal during the analytic reactions. Since enzyme reactions are very well known amplification processes, the signal is generated by enzymes which are linked to the detection reagents in fixed proportions to allow accurate quantification, and thus the name "enzyme-linked". The analyte is also called the ligand because it will specifically bind or ligate to a detection reagent; thus ELISA falls under the broader category of ligand binding assays. The ligand-specific binding reagent is "immobilized", i.e., usually coated and dried onto the transparent bottom and sometimes also the side wall of a well (the stationary "solid phase"/"solid substrate" here as opposed to solid microparticles/beads that can be washed away), which is usually constructed as a multiple-well plate known as the "ELISA plate". Conventionally, like other forms of immunoassays, the specificity of the antigen-antibody type of reaction is used because it is easy to raise an antibody specifically against an antigen in bulk as a reagent. Alternatively, if the analyte itself is an antibody, its target antigen can be used as the binding reagent. History: Before the development of the ELISA, the only option for conducting an immunoassay was radioimmunoassay, a technique using radioactively labeled antigens or antibodies. In radioimmunoassay, the radioactivity provides the signal, which indicates whether a specific antigen or antibody is present in the sample. Radioimmunoassay was first described in a scientific paper by Rosalyn Sussman Yalow and Solomon Berson published in 1960. As radioactivity poses a potential health threat, a safer alternative was sought. A suitable alternative to radioimmunoassay would substitute a nonradioactive signal in place of the radioactive signal. When enzymes (such as horseradish peroxidase) react with appropriate substrates (such as ABTS or TMB), a change in color occurs, which is used as a signal. However, the signal has to be associated with the presence of antibody or antigen, which is why the enzyme has to be linked to an appropriate antibody. This linking process was independently developed by Stratis Avrameas and G. B. Pierce. Since it is necessary to remove any unbound antibody or antigen by washing, the antibody or antigen has to be fixed to the surface of the container; i.e., the immunosorbent must be prepared.
A technique to accomplish this was published by Wide and Jerker Porath in 1966. In 1971, Peter Perlmann and Eva Engvall at Stockholm University in Sweden, and Anton Schuurs and Bauke van Weemen in the Netherlands independently published papers that synthesized this knowledge into methods to perform EIA/ELISA. Traditional ELISA typically involves chromogenic reporters and substrates that produce some kind of observable color change to indicate the presence of antigen or analyte. Newer ELISA-like techniques use fluorogenic, electrochemiluminescent, and quantitative PCR reporters to create quantifiable signals. These new reporters can have various advantages, including higher sensitivities and multiplexing. In technical terms, newer assays of this type are not strictly ELISAs, as they are not "enzyme-linked", but are instead linked to some nonenzymatic reporter. However, given that the general principles in these assays are largely similar, they are often grouped in the same category as ELISAs. History: In 2012, an ultrasensitive, enzyme-based ELISA test using nanoparticles as a chromogenic reporter was able to give a naked-eye colour signal from the detection of mere attograms of analyte. A blue color appears for positive results and a red color for negative. Note that this detection can only confirm the presence or absence of analyte, not the actual concentration. Types: There are many ELISA tests for particular molecules that use the matching antibodies. ELISA tests are broken into several types of tests based on how the analytes and antibodies are bonded and used. The major types are described here. Direct ELISA The steps of direct ELISA follow the mechanism below: A buffered solution of the antigen to be tested for is added to each well (usually 96-well plates) of a microtiter plate, where it is given time to adhere to the plastic through charge interactions. A solution of nonreacting protein, such as bovine serum albumin or casein, is added to each well in order to cover any plastic surface in the well which remains uncoated by the antigen. The primary antibody with an attached (conjugated) enzyme is added, which binds specifically to the test antigen coating the well. A substrate for this enzyme is then added. Often, this substrate changes color upon reaction with the enzyme. Types: The higher the concentration of the primary antibody present in the serum, the stronger the color change. Often, a spectrometer is used to give quantitative values for color strength. The enzyme acts as an amplifier; even if only a few enzyme-linked antibodies remain bound, the enzyme molecules will produce many signal molecules. Within common-sense limitations, the enzyme can go on producing color indefinitely, but the more antibody is bound, the faster the color will develop. A major disadvantage of the direct ELISA is that the method of antigen immobilization is not specific; when serum is used as the source of test antigen, all proteins in the sample may stick to the microtiter plate well, so small concentrations of analyte in serum must compete with other serum proteins when binding to the well surface. The sandwich or indirect ELISA provides a solution to this problem, by using a "capture" antibody specific for the test antigen to pull it out of the serum's molecular mixture. ELISA may be run in a qualitative or quantitative format. Qualitative results provide a simple positive or negative result (yes or no) for a sample.
The cutoff between positive and negative is determined by the analyst and may be statistical. Two or three times the standard deviation (error inherent in a test) is often used to distinguish positive from negative samples. In quantitative ELISA, the optical density (OD) of the sample is compared to a standard curve, which is typically a serial dilution of a known-concentration solution of the target molecule. For example, if a test sample returns an OD of 1.0, the point on the standard curve that gave OD = 1.0 must be of the same analyte concentration as the sample.The use and meaning of the names "indirect ELISA" and "direct ELISA" differs in the literature and on web sites depending on the context of the experiment. When the presence of an antigen is analyzed, the name "direct ELISA" refers to an ELISA in which only a labelled primary antibody is used, and the term "indirect ELISA" refers to an ELISA in which the antigen is bound by the primary antibody which then is detected by a labeled secondary antibody. In the latter case a sandwich ELISA is clearly distinct from an indirect ELISA. When the "primary" antibody is of interest, e.g. in the case of immunization analyses, this antibody is directly detected by the secondary antibody and the term "indirect ELISA" applies to a setting with two antibodies. Types: Sandwich ELISA A "sandwich" ELISA is used to detect sample antigen. The steps are: A surface is prepared with a known quantity of capture antibody. Any nonspecific binding sites on the surface are blocked. The antigen-containing sample is applied to the plate, and captured by antibody. The plate is washed to remove unbound antigen. A specific antibody is added, and binds to antigen (hence the 'sandwich': the antigen is stuck between two antibodies). This primary antibody could be in the serum of a donor, to be tested for reactivity towards the antigen. Enzyme-linked secondary antibodies are applied as detection antibodies, which bind specifically to the antibody's Fc region (nonspecific). The plate is washed to remove the unbound antibody-enzyme conjugates. A chemical is added to be converted by the enzyme into a color, fluorescent, or electrochemical signal. Types: The absorbance, fluorescence, or electrochemical signal (e.g., current) of the plate's wells is measured to determine the presence and quantity of the antigen.The image to the right includes the use of a secondary antibody conjugated to an enzyme, although, in the technical sense, this is not necessary if the primary antibody is conjugated to an enzyme (which would be direct ELISA). However, the use of a secondary-antibody conjugate avoids the expensive process of creating enzyme-linked antibodies for every antigen one might want to detect. By using an enzyme-linked antibody that binds the Fc region of other antibodies, this same enzyme-linked antibody can be used in a variety of situations. Without the first layer of "capture" antibody, any proteins in the sample (including serum proteins) may competitively adsorb to the plate surface, lowering the quantity of antigen immobilized. Use of the purified specific antibody to attach the antigen to the plastic eliminates a need to purify the antigen from complicated mixtures before the measurement, simplifying the assay, and increasing the specificity and the sensitivity of the assay. Therefore, a sandwich ELISA used for research often needs validation, to reduce the risk of false positive results. Types: Competitive ELISA A third use of ELISA is through competitive binding. 
The steps for this ELISA are somewhat different from the first two examples: Unlabeled antibody is incubated in the presence of its antigen (sample). These bound antibody/antigen complexes are then added to an antigen-coated well. The plate is washed, so unbound antibodies are removed. (The more antigen in the sample, the more Ag-Ab complexes are formed and so there are less unbound antibodies available to bind to the antigen in the well, hence "competition".) The secondary antibody, specific to the primary antibody, is added. This second antibody is coupled to the enzyme. A substrate is added, and remaining enzymes elicit a chromogenic or fluorescent signal. The reaction is stopped to prevent eventual saturation of the signal.Some competitive ELISA kits include enzyme-linked antigen rather than enzyme-linked antibody. The labeled antigen competes for primary antibody binding sites with the sample antigen (unlabeled). The less antigen in the sample, the more labeled antigen is retained in the well and the stronger the signal. Commonly, the antigen is not first positioned in the well. Types: For the detection of HIV antibodies, the wells of microtiter plate are coated with the HIV antigen. Two specific antibodies are used, one conjugated with enzyme and the other present in serum (if serum is positive for the antibody). Cumulative competition occurs between the two antibodies for the same antigen, causing a stronger signal to be seen. Sera to be tested are added to these wells and incubated at 37 °C, and then washed. If antibodies are present, the antigen-antibody reaction occurs. No antigen is left for the enzyme-labelled specific HIV antibodies. These antibodies remain free upon addition and are washed off during washing. Substrate is added, but there is no enzyme to act on it, so a positive result shows no color change. Types: Reverse ELISA (Indirect ELISA) A fourth ELISA test does not use the traditional wells. This test leaves the antigens suspended in the test fluid. Unlabeled antibody is incubated in the presence of its antigen (sample) A sufficient incubation period is provided to allow the antibodies to bind to the antigens. The sample is then passed through the Scavenger container. This can be a test tube or a specifically designed flow through channel. The surface of the Scavenger container or channel has "Scavenger Antigens" bound to it. These can be identical or sufficiently similar to the primary antigens that the free antibodies will bind. The Scavenger container must have sufficient surface area and sufficient time to allow the Scavenger Antigens to bind to all the excess Antibodies introduced into the sample. Types: The sample, that now contains the tagged and bound antibodies, is passed through a detector. This device can be a flow cytometer or other device that illuminates the tags and registers the response.This test allows multiple antigens to be tagged and counted at the same time. This allows specific strains of bacteria to be identified by two (or more) different color tags. If both tags are present on a cell, then the cell is that specific strain. If only one is present, it is not. Types: This test is done, generally, one test at a time and cannot be done with the microtiter plate. The equipment needed is usually less complicated and can be used in the field. Commonly used enzymatic markers: The following table lists the enzymatic markers commonly used in ELISA assays, which allow the results of the assay to be measured upon completion. 
OPD (o-phenylenediamine dihydrochloride) turns amber to detect HRP (Horseradish Peroxidase), which is often used as a conjugated protein. TMB (3,3',5,5'-tetramethylbenzidine) turns blue when detecting HRP and turns yellow after the addition of sulfuric or phosphoric acid. ABTS (2,2'-Azinobis [3-ethylbenzothiazoline-6-sulfonic acid]-diammonium salt) turns green when detecting HRP. PNPP (p-Nitrophenyl Phosphate, Disodium Salt) turns yellow when detecting alkaline phosphatase. Applications: Because the ELISA can be performed to evaluate either the presence of antigen or the presence of antibody in a sample, it is a useful tool for determining serum antibody concentrations (such as with the HIV test or West Nile virus). It has also found applications in the food industry in detecting potential food allergens, such as milk, peanuts, walnuts, almonds, and eggs, and as a serological blood test for coeliac disease. ELISA can also be used in toxicology as a rapid presumptive screen for certain classes of drugs. Applications: The ELISA was the first screening test widely used for HIV because of its high sensitivity. In an ELISA, a person's serum is diluted 400 times and applied to a plate to which HIV antigens are attached. If antibodies to HIV are present in the serum, they may bind to these HIV antigens. The plate is then washed to remove all other components of the serum. A specially prepared "secondary antibody"—an antibody that binds to other antibodies—is then applied to the plate, followed by another wash. This secondary antibody is chemically linked in advance to an enzyme. Applications: Thus, the plate will contain enzyme in proportion to the amount of secondary antibody bound to the plate. A substrate for the enzyme is applied, and catalysis by the enzyme leads to a change in color or fluorescence. ELISA results are reported as a number; the most controversial aspect of this test is determining the "cut-off" point between a positive and a negative result. Applications: A cut-off point may be determined by comparing it with a known standard. If an ELISA test is used for drug screening in the workplace, a cut-off concentration, 50 ng/ml, for example, is established, and a sample containing the standard concentration of analyte will be prepared. Unknowns that generate a stronger signal than the known sample are "positive". Those that generate a weaker signal are "negative". Applications: There are ELISA tests to detect various kinds of diseases, such as dengue, malaria, Chagas disease, Johne's disease, and others. ELISA tests are also extensively employed for in vitro diagnostics in medical laboratories. Other uses of ELISA include the detection of SARS-CoV-2 antibodies in blood samples.
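To make the quantitative workflow described above concrete, the sketch below estimates an unknown sample's concentration from a standard curve and then applies a screening cut-off, as in the workplace drug-screening example. This is an illustrative assumption, not part of the assay description: the standard-curve points, the measured OD, and the simple linear interpolation are invented for the example (real analyses commonly fit a four- or five-parameter logistic curve rather than interpolating linearly).

```python
# Illustrative sketch (values are hypothetical): estimating an unknown sample's
# concentration from a quantitative ELISA standard curve, then applying a
# screening cut-off as in the 50 ng/ml drug-screening example above.

def interpolate_concentration(od_sample, standards):
    """Linearly interpolate a concentration from (concentration, OD) standard-curve points."""
    points = sorted(standards, key=lambda p: p[1])  # sort by increasing OD
    for (c_lo, od_lo), (c_hi, od_hi) in zip(points, points[1:]):
        if od_lo <= od_sample <= od_hi:
            frac = (od_sample - od_lo) / (od_hi - od_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("OD is outside the range of the standard curve")

# Serial dilution of a known-concentration standard (ng/ml) and its measured ODs.
standard_curve = [(12.5, 0.18), (25, 0.34), (50, 0.62), (100, 1.10), (200, 1.85)]

od = 0.75                                   # OD measured for the unknown sample
conc = interpolate_concentration(od, standard_curve)
print(f"Estimated concentration: {conc:.1f} ng/ml")
print("Result:", "positive" if conc >= 50 else "negative")   # 50 ng/ml cut-off
```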
**Register (air and heating)** Register (air and heating): A register is a grille with moving parts, capable of being opened and closed and the air flow directed, which is part of a building's heating, ventilation, and air conditioning (HVAC) system. The placement and size of registers is critical to HVAC efficiency. Register dampers are also important, and can serve a safety function. Register vs. grille: A grille is a perforated cover for an air duct (used for heating, cooling, or ventilation, or a combination thereof). Grilles sometimes have louvers which allow the flow of air to be directed. A register differs from a grille in that a damper is included. However, in practice, the terms "grille", "register", and "return" are often used interchangeably, and care must be taken to determine the meaning of the term used. Register size and placement: Placement of registers is key in creating an efficient HVAC system. Usually, a register is placed near a window or door, which is where the greatest heat/cooling loss occurs. In contrast, returns (grilled ducts which suck air back into the HVAC system for heating or cooling) are usually placed in the wall or ceiling nearest the center of the building. Generally, in rooms where it is critical to maintain a constant temperature two registers (one placed near the ceiling to deliver cold air, and one placed in the floor to deliver hot air) and two returns (one high, one low) will be used. HVAC systems generally have one register and one return per room. Register size and placement: Registers vary in size with the heating and cooling requirements of the room. If a register is too small, the HVAC system will need to push air through the ducts at a faster rate in order to achieve the desired heating or cooling. This can create rushing sounds which can disturb occupants or interfere with conversation or work (such as sound recording). The velocity of air through a register is usually kept low enough so that it is masked by background noise. (Higher ambient levels of background noise, such as those in restaurants, allow higher air velocities.) On the other hand, air velocity must be high enough to achieve the desired temperature. Registers are a critical part of the HVAC system. If not properly installed and tightly connected to the ductwork, air will spill around the register and greatly reduce the HVAC system's efficiency. Ideally, a room will have both heating and cooling registers. In practice, cost considerations usually require that heating and cooling be provided by the same register. In such cases, heating most often takes precedence over cooling, and registers are usually found close to the floor.For heating purposes, a floor register is preferred. This is because hot air rises, and as it cools it falls. This creates good air circulation in a room, and helps to maintain a more even temperature as hot and cold air is mixed more thoroughly. Floor registers generally have a grille strong enough for a human being to walk on without damaging the grille. It is rare to find a floor register installed less than 6 inches (15 cm) from the corner of a room. When a floor register is not practical or desired, a wall register is used. The correct placement of wall heating registers is critical. Generally, the heating register will be directly across from an exterior window. The hot air from the register will mix with the cold air coming off the window, cool, and drop to the floor—creating good air circulation. 
However, the hot air must be pushed from the register with enough force (or "throw") so that it will cross the room and reach the window. If there is too little throw, the hot air will stop moving partway across the room, the cold air from the window will not be heated (creating the feeling of a cool draft), and air circulation will suffer. Register dampers: A register's damper provides a critical function. Primarily, the damper allows the amount of hot or cool air entering a room to be controlled, providing for more accurate control over room temperature. Dampers also allow air to be shut off in unused rooms, improving the efficiency of the HVAC system. Dampers can also help adjust an HVAC system for seasonal use. During winter months, for example, an air conditioning register can be closed to prevent cold air from being pulled from the room. This allows the hot air to mix more completely with the cold air in the room, improving the efficiency of the HVAC system. (The return should be efficient enough to draw off the cooler air.) Some registers, particularly those in commercial buildings or institutions which house large numbers of people (such as hotels or hospitals), have a fire damper attached to them. This damper automatically senses smoke or extreme heat and shuts the register so that fire and smoke do not travel throughout the building via the HVAC system.
**Vlog** Vlog: A vlog, also known as a video blog or video log, is a form of blog for which the medium is video. Vlog entries often combine embedded video (or a video link) with supporting text, images, and other metadata. Entries can be recorded in one take or cut into multiple parts. The vlog category is popular on the video-sharing platform YouTube. Vlog: In recent years, "vlogging" has spawned a large community on social media, becoming one of the most popular forms of digital entertainment. It is popularly believed that, alongside being entertaining, vlogs can deliver deep context through imagery as opposed to written blogs. Video logs (vlogs) also often take advantage of web syndication to allow for the distribution of video over the Internet using either the RSS or Atom syndication formats, for automatic aggregation and playback on mobile devices and personal computers (see video podcast). History: In the 1980s, New York artist Nelson Sullivan documented his experiences travelling around New York City and South Carolina by recording videos in a distinctive vlog-like style. On January 2, 2000, Adam Kontras posted a video alongside a blog entry aimed at informing his friends and family of his cross-country move to Los Angeles in pursuit of show business, marking the first post on what would later become the longest-running video blog in history. In November of that year, Adrian Miles posted a video of changing text on a still image, coining the term vog to refer to his video blog. In 2002, filmmaker and musician Luuk Bouwman started the now-defunct Tropisms.org site as a video diary of his post-college travels, one of the first sites to be called a vlog or videolog. In 2004, Steve Garfield launched his own video blog and declared that year "the year of the video blog". History: YouTube Vlogging saw a strong increase in popularity beginning in 2005. The most popular video sharing site, YouTube, was founded in February 2005. The site's co-founder Jawed Karim uploaded the first YouTube vlog clip Me at the zoo on his channel "jawed" in April 2005. The ordinary "everydayness" and "dry aesthetics" of Me at the zoo set the tone for the type of amateur vlogging content that would become typical of YouTube, especially among YouTubers. By July 2006, YouTube had become the fifth most popular web destination, with 100 million videos viewed daily and 65,000 new uploads per day. The Yahoo! Videoblogging Group also saw its membership increase dramatically by August 2005. Many open source content management systems have enabled the inclusion of video content, allowing bloggers to host and administer their own video blogging sites. In addition, the convergence of mobile phones with digital cameras allows publishing of video content to the Web almost as it is recorded. Radio and television stations may use video blogging as a way to help interact more with listeners and viewers. Throughout the lifetime of the YouTube platform, vloggers have developed large social communities by expressing emotions of vulnerability and encouraging their viewers to do the same. The effect of this emotional exchange between strangers has been documented, for example, in the popularity of bereavement vlogs, in which grieving individuals reassure each other through friendly comments. History: Miscellaneous events 2005, January – Vloggercon, the first vlogger conference, is held in New York City. 2006, November – Irina Slutsky created and hosted The Vloggies, the first annual video blog awards.
2007, May and August – The Wall Street Journal places a grandmother on the front page of its Personal Journal section. In August 2007, she was featured on an ABC World News Tonight segment showing the elderly now becoming involved in the online video world. Guinness World Record In May 2019, Charles Trippy was awarded the Guinness World Record for the "Most Consecutive Daily Personal Video Blogs Posted On YouTube", having recorded 3653 consecutive videos to his Charles and Allie YouTube channel over the previous ten years. Uses: Impressions Vlogs have made it possible to learn about a vlogger's persona, culture, and impressions using non-verbal hints. Researchers have conducted experiments using crowdsourcing on Amazon's Mechanical Turk to determine what kinds of personality traits a vlogger might have. Many vlogs have been characterized using the Big Five personality traits: Extraversion, Conscientiousness, Agreeableness, Neuroticism, and Openness to Experience. Along with Mechanical Turk, researchers also looked at the cues that take place within vlogs. Vlogs can be broken down into their elements, considering that many factors play into the creation of one, such as placement of the camera, lighting, location, amount of time spent looking at the camera, pitch, delivery, and amount of interaction. Using this information and crowdsourcing, results have revealed that the trait rated highest in this personality research was Agreeableness, which makes vlogging a great place to form agreeable impressions. However, non-verbal hints are more noticeable for other traits such as Extraversion. Regardless, personality impressions have made for a more interesting vlog viewing experience. Uses: Education Vlogging has been experimented with in school systems to determine if it is a reliable platform to deliver higher educational practices to students. Researchers have done an experiment that placed 42 college freshmen into a control and an experimental group of 21 each. Oral proficiency exams were given to all students to reflect their current speech skills, after a year of teaching based on each group's preference. The control group was instructed to work with their standard writing skills and create their own blogs, while the experimental group tested their skills with online interaction. Scores for both groups had increased after both tests; however, the experimental group outperformed the control group due to the improvement in speech proficiency that came as a result of a more interactive learning environment between teachers and classmates. The control group claimed that not using video blogs "lowered their confidence" in their speaking proficiency. Uses: Health Researchers have investigated how vlog-style YouTube videos made by creators who suffer from chronic illnesses can raise health awareness among viewers and create social communities among those suffering. A 2014 study evaluated the contextual relationship between vloggers who shared that they had diabetes, cancer, or human immunodeficiency virus (HIV) and their audiences. Most of the creators of these vlogs chose to focus their videos on how disease diagnosis and treatment had impacted them physically and emotionally. Commenters on the vlogs who shared personal characteristics formed ad hoc small groups; these impromptu support groups expanded over time as more and more people discovered the health vlogs. Live broadcasting: YouTube announced a live broadcasting feature called YouTube Live in 2008.
This feature was also established by other social platforms such as Instagram, Facebook and Twitch. YouTube presence: YouTube currently ranks among the top three most-visited sites on the web. As a high-traffic area for video bloggers, or vloggers, YouTube has created a platform for these participants to present their personal videos, which are oftentimes filmed using handheld point-and-shoot cameras. The popularity of vlogs in the YouTube community has risen exponentially in the past few years; out of the top 100 most-subscribed YouTube channels, 17 provide vlogs as their primary style of footage. Many of these vloggers are a part of the YouTube Partner Program, which professionalizes the industry and allows for monetary gain from video production. This professionalization additionally helps increase exposure to various channels as well as creates a sense of stability within the field. Additionally, this professionalization allows content creators to be deemed a credible source by their viewers. Furthermore, many vloggers have been able to turn their channels into sustainable careers; in 2013, the highest-paid vlogger brought in a minimum of $720,000 for the year. Hollywood is taking notice of this rising medium and has placed its value above that of other entertainment companies such as Marvel, which is also owned by Disney.
**Uracil/thymine dehydrogenase** Uracil/thymine dehydrogenase: Uracil/thymine dehydrogenase (EC 1.17.99.4, uracil oxidase, uracil-thymine oxidase, uracil dehydrogenase) is an enzyme with systematic name uracil:acceptor oxidoreductase. This enzyme catalyses the following chemical reactions: (1) uracil + H2O + acceptor ⇌ barbiturate + reduced acceptor (2) thymine + H2O + acceptor ⇌ 5-methylbarbiturate + reduced acceptor. Uracil/thymine dehydrogenase forms part of the oxidative pyrimidine-degrading pathway in some microorganisms.
**Stabs** Stabs: stabs (sometimes written STABS) is a debugging data format for storing information about computer programs for use by symbolic and source-level debuggers. (The information is stored in symbol table strings; hence the name "stabs".) Cygnus Support attributes the invention of stabs to Peter Kessler for the Berkeley Pascal pdx debugger; however, Kessler claims otherwise, stating that stabs came with adb and sdb but could predate those. Mark Linton, who created pdx for his 1981 master's thesis and later developed it into dbx, states his doctoral adviser Michael L. Powell "contributed to the stabstrings design, especially to support Modula-2". History: When stabs was created in the 1980s, the dominant object file format was a.out, which (unlike more recent formats such as ELF) makes no provision for storing debugging information. Stabs works around this problem by encoding the information using special entries in the symbol table. At one stage stabs was widely used on Unix systems, but the newer DWARF format has largely supplanted it.
**Magic number (sports)** Magic number (sports): In certain sports, a magic number is a number used to indicate how close a front-running team is to clinching a division title and/or a playoff spot. It represents the total of additional wins by the front-running team or additional losses (or any combination thereof) by the rival teams after which it is mathematically impossible for the rival teams to capture the title in the remaining number of games (assuming some highly unlikely occurrence such as disqualification or expulsion from the competition or retroactive forfeiture of games does not occur). Magic numbers are generally confined to sports where each game results in a win or a loss, but not a tie. It could also be referred to as the "clinching number". Magic number (sports): Teams other than the front-running team have what is called an elimination number (or "tragic number") (often abbreviated E#). This number represents the number of wins by the leading team or losses by the trailing team which will eliminate the trailing team. The largest elimination number among the non-first place teams is the magic number for the leading team. Magic number (sports): The magic number is calculated as G + 1 − WA − LB, where G is the total number of games in the season WA is the number of wins that Team A has in the season LB is the number of losses that Team B has in the seasonFor example, in Major League Baseball there are 162 games in a season. Suppose the top of the division standings late in the season are as follows: Then the magic number for Team B to be eliminated is 162 + 1 − 96 − 62 = 5. Magic number (sports): Any combination of wins by Team A and losses by Team B totaling 5 makes it impossible for Team B to win the division title. Magic number (sports): The "+1" in the formula serves the purpose of eliminating ties; without it, if the magic number were to decrease to zero and stay there, the two teams in question would wind up with identical records. If circumstances dictate that the front-running team would win the tiebreaker regardless of any future results, then the additional constant 1 can be eliminated. For example, the NBA uses complicated formulae for breaking ties, using several other statistics of merit besides overall win–loss record; however the first tiebreaker between two teams is their head-to-head record; if the front-running team has already clinched the better head-to-head record, then the +1 is unnecessary. In 2022, Major League Baseball introduced tiebreaking scenarios (such as head-to-head for division ties) that made the use of the "+1" pointless (as Game 163 was eliminated). Magic number (sports): The magic number can also be calculated as WB + GRB − WA + 1, where WB is the number of wins that Team B has in the season GRB is the number of games remaining for Team B in the season WA is the number of wins that Team A has in the seasonThis second formula basically says: Assume Team B wins every remaining game. Calculate how many games team A needs to win to surpass team B's maximum total by 1. Using the example above and with the same 162-game season, team B has 7 games remaining. Magic number (sports): The magic number for Team A to win the division is still "5": 93 + 7 − 96 + 1 = 5. Team B can win as many as 100 games. If Team A wins 101, Team B is eliminated. The magic number would decrease with a Team A win and would also decrease with a Team B loss, as its maximum win total would decrease by one. 
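As a quick illustration, here is a minimal sketch of the two formulas above. The standings table is not reproduced in this copy of the article, but the surrounding arithmetic implies Team A at 96–58 with 8 games remaining and Team B at 93–62 with 7 remaining, so those inferred values are used.

```python
# Minimal sketch of the two magic-number formulas above, using the standings
# implied by the example: Team A 96-58 (8 games left), Team B 93-62 (7 left).

G = 162                      # games per team in the season
wins_a, losses_a = 96, 58    # front-running Team A
wins_b, losses_b = 93, 62    # trailing Team B

# First formula: G + 1 - W_A - L_B
magic_1 = G + 1 - wins_a - losses_b

# Second formula: W_B + GR_B - W_A + 1, with GR_B = Team B's games remaining
games_remaining_b = G - (wins_b + losses_b)
magic_2 = wins_b + games_remaining_b - wins_a + 1

print(magic_1, magic_2)      # both print 5, matching the example
```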
Magic number (sports): A variation of the above looks at the relation between the losses of the two teams. The magic number can be calculated as LA + GRA − LB + 1, where LA is the number of losses that Team A has in the season GRA is the number of games remaining for Team A in the season LB is the number of losses that Team B has in the seasonThis third formula basically says: Assume Team A loses every remaining game. Calculate how many games team B needs to lose to surpass team A's maximum total by 1. Using the example above and with the same 162-game season, team A has 8 games remaining. Magic number (sports): The magic number for Team A to win the division is still "5": 58 + 8 − 62 + 1 = 5. As you can see, the magic number is the same whether calculating it based on potential wins of the leader or potential losses of the trailing team. Indeed, mathematical proofs will show that the three formulas presented here are mathematically equivalent. Magic number (sports): Team A can lose as many as 66 games. If Team B loses 67, Team B is eliminated. Once again, the magic number would decrease with a Team A win and would also decrease with a Team B loss. Magic number (sports): In some sports, ties are broken by an additional one-game playoff(s) between the teams involved. When a team gets to the point where its magic number is 1, it is said to have "clinched a tie" for the division or the wild card. However, if they end the season tied with another team, and only one is eligible for the playoffs, the extra playoff game will erase that "clinching" for the team that loses the playoff game. Magic number (sports): Some sports use a tiebreaker formula instead of staging a one-game playoff. In such cases, it is necessary to look beyond the won-lost records of the teams to determine the magic number, since a team that has already guaranteed itself the edge in the tiebreaker formula would not need to include "+1" in calculating its magic number. For example, assume a basketball league that plays an 82-game season with no one-game tiebreakers shows division standings late in the season as follows: Suppose further that the first step in the league's tiebreaker formula is results in head-to-head meetings. Team A and Team B have met four times during the season with Team A winning three of the four games. They are not scheduled to meet again in the regular season. Therefore, Team A holds a tiebreaker edge over Team B and only needs to finish with the same number of wins as Team B in order to be placed ahead of Team B in the standings. Therefore, we can calculate Team A's magic number as 82 – 60 – 20 = 2. If Team A wins two of its seven remaining games, it will finish 62–20. If Team B wins all seven of its remaining games, it will also finish 62–20. However, since Team B loses the tiebreaker on head-to-head results, Team A is the division winner. Magic number (sports): By convention, the magic number typically is used to describe the first place team only, relative to the teams it leads. However, the same mathematical formulas could be applied to any team, teams that are tied for the lead, as well as teams that trail. In these cases, a team that is not in first place will depend on the leading team to lose some games so that it may catch up, so the magic number will be larger than the number of games remaining. 
Ultimately, for teams that are no longer in contention, their magic number would be larger than their remaining games plus the remaining games for the first-place team — which would be impossible to overcome. Derivation: The formula for the magic number is derived straightforwardly as follows. As before, at some particular point in the season let Team A have WA wins and LA losses. Suppose that at some later time, Team A has wA additional wins and lA additional losses, and define similarly WB, LB, wB, lB for Team B. The total number of wins that Team B needs to make up is thus given by (WA + wA) − (WB + wB). Team A clinches when this number exceeds the number of games Team B has remaining, since at that point Team B cannot make up the deficit even if Team A fails to win any more games. If there are a total of G games in the season, then the number of games remaining for Team B is given by G − (WB + wB + LB + lB). Thus the condition for Team A to clinch is that (WA + wA) − (WB + wB) ≥ 1 + G − (WB + wB + LB + lB). Canceling the common terms, we obtain wA + lB ≥ G + 1 − WA − LB, which establishes the magic number formula. Games played quirk: In the following example, Team A has 88 wins, second-place Team B has 71 losses, and third-place Team C, which has played fewer games, has only 70 losses. Team A's Magic Number is 5, because even though it can eliminate second-place Team B in 4 additional games, it would take 5 games to assuredly eliminate third-place Team C. Calculating the magic number requires using the lowest number of losses among the other competing teams: 162 + 1 − 88 − 70 = 5. Tie-breaker quirk: Another scenario where the Magic Number may vary from the mathematical calculation of the number can occur when there is a tie-breaker scenario. Most sports have a number of tie-breaker methods set up to deal with eventualities of tying records at the end of the season. Typically, the first of these methods involves head-to-head matchups of the teams and which team has won more games against the other during the season. Tie-breaker quirk: In the below example, Teams A and B (at 83–67 and 76–74 respectively) both have 12 games remaining, and the mathematical formula would dictate a Magic Number of 6 for Team A: 162 + 1 − 83 − 74 = 6. Tie-breaker quirk: However, if Team A wins only 5 of their remaining games and ends the season with a record of 88–74, and Team B wins all remaining games and ends the season with a tying record, Team A would win the division title if they have a winning record over Team B during the season, which would mean that in the below example, Team A actually has a Magic Number of 5. Subtlety: Sometimes a team can appear to have a mathematical chance to win even though they have actually been eliminated already, due to scheduling. In this Major League Baseball scenario, there are three games remaining in the season. Teams A, B and C are assumed to be eligible only for the division championship; teams with better records in other divisions have already clinched the three available "wild card" spots: If Team C were to win all three remaining games, it would finish at 88–74, and if both Teams A and B were to lose their three remaining games, they would finish at 87–75, which would make Team C the division winner. However, if Teams A and B are playing against each other in the final weekend (in a 3-game series), it would be impossible for both teams to lose the three remaining games. One of them will win at least two games and thereby clinch the division title with a record of either 90–72 or 89–73.
The more direct consequence of this situation is that it is also not possible for Teams A and B to finish in a tie with each other, and Team C cannot win the division. Subtlety: One can say definitively whether a team has been eliminated by use of the algorithm for the maximum flow problem. The addition of a second Wild Card team makes the reverse scenario (in which a team has actually clinched a postseason berth even though it appears they could still be eliminated) possible in baseball. In this scenario for the Wild Card: If Teams B and C are playing their final three games against each other and all other teams have clinched their divisions or been mathematically eliminated from catching Team A, then Team A will have clinched at least the second Wild Card berth since it will be impossible for Teams B and C to both win enough games to catch Team A. Subtlety: The reverse scenario is more common in sports that have more postseason berths, benefitting teams that are in the final playoff positions but being chased by teams that still have to play each other. Sometimes, both scenarios can occur simultaneously. In the following National Basketball Association scenario for teams placed seventh through tenth in the conference standings: If Teams B and C have to play one of their last two games against each other and Team A holds the tiebreaker against Teams B, C and D, then Team A will have clinched a playoff berth since they cannot be overtaken by both Teams B and C. Also, if Team D does not hold a tiebreaker against any of Teams A, B and C then it will be out of playoff contention since it cannot overtake both Teams B and C. A similar scenario occasionally occurs in European soccer leagues and other competitions that use promotion and relegation. In this scenario for a 20-team soccer league that plays a double round-robin format, awards three points for a win and one for a draw, and relegates the 18th, 19th and 20th place teams: If Team A loses its last two matches, it will finish with 38 points, while if Team D wins its last two matches, it will finish with 34. Nevertheless, regardless of goal difference or any other tiebreaker, if Teams B and C still have to play each other then Team A is safe from relegation since Teams B and C cannot both reach 38 points, while Team D will be relegated since Teams B and C cannot both finish with less than 35 points. Alternative method: Another method can be used to determine the Elimination Number which uses only the Games Remaining (GRL, GRT) and Games Behind Leader (GBL) statistics, as follows: E = (GRL + GRT)/2 − GBL + 1, where GRL means Games Remaining for Leader (similarly, GRT means Games Remaining for Trailer). Alternative method: Refer back to the example presented above. The elimination number for Team B is once again "5": (8 + 7)/2 − 3.5 + 1 = 5. It is necessary to use this method if the teams play different numbers of games in the full season, for instance due to cancellations or ties that will not be replayed. Note that this algorithm also is limited by the aforementioned subtleties.
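The sketch below applies this alternative formula to the same running example (leader with 8 games remaining, trailer with 7, trailer 3.5 games behind); the numbers are taken from the earlier standings, and the helper function name is purely illustrative.

```python
# Sketch of the alternative elimination-number formula above, applied to the
# running example: Team A 96-58 with 8 games left, Team B 93-62 with 7 left.

def elimination_number(gr_leader, gr_trailer, games_behind_leader):
    """E = (GR_L + GR_T) / 2 - GB_L + 1"""
    return (gr_leader + gr_trailer) / 2 - games_behind_leader + 1

# Games behind = ((leader wins - trailer wins) + (trailer losses - leader losses)) / 2
games_behind = ((96 - 93) + (62 - 58)) / 2           # 3.5
print(elimination_number(8, 7, games_behind))        # 5.0, matching the text
```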
**Piano wire** Piano wire: Piano wire, or "music wire", is a specialized type of wire made for use in piano strings but also in other applications as springs. It is made from tempered high-carbon steel, also known as spring steel, which replaced iron as the material starting in 1834. Piano wire has a very high tensile strength to cope with the heavy demands placed upon piano strings; accordingly, piano wire is also used for a number of other purposes, including springs, surgical uses, and in special effects. History: The oldest record of wire being made for musical instruments is from Augsburg in 1351.Starting around 1800, the piano began to be built ever more ambitiously, with sturdier (eventually, iron) framing and greater string tension. This led to innovations in making tougher piano wire. In 1834, the Webster & Horsfal firm of Birmingham, United Kingdom brought out a form of piano wire made from cast steel; according to Dolge it was "so superior to the iron wire that the English firm soon had a monopoly." But a better steel wire was soon created in 1840 by the Viennese firm of Martin Miller, and a period of innovation and intense competition ensued, with rival brands of piano wire being tested against one another at international competitions, leading ultimately to the modern form of piano wire. Manufacture and use: The tensile strength of one popular brand of piano wire is listed as 2620–2930 MPa (380–425 ksi). Other applications: Piano wire is also used in the fabrication of springs, fishing lures, special effects in the movie industry, scaffold cross-bracing, orthodontic and pharyngeal surgery, and for the cutting of cheese and soap. It is also commonly used in hobby applications such as model railroading, both control line and radio-controlled aircraft, and knitting. At least in urban legend, it is employed by assassins as a garrote.
**Enterocele** Enterocele: An enterocele is a protrusion of the small intestines and peritoneum into the vaginal canal. It may be treated transvaginally or by laparoscopy. An enterocele may also obstruct the rectum, leading to symptoms of obstructed defecation. Enteroceles can form after treatment for gynecological cancers.
**Powerhead (firearm)** Powerhead (firearm): A powerhead is a specialized firearm used underwater that is fired when in direct contact with the target. Powerheads are often used for spear fishing and against sharks or alligators for sport, defense, or to kill nuisance animals. The term powerhead refers to the firearm-like part of the device; when attached to a shaft to form a spear, it may be referred to as a bang stick or shark stick. The spear in question may be handheld or launchable from a spear gun. Design: A powerhead consists of a length of tubing which is the chamber for the cartridge, a firing pin to fire the cartridge, and usually some form of safety pin or latch that prevents firing when it is engaged. The rear of the power head is fitted with some provision for attaching to a spear.Powerheads are available that chamber a variety of handgun, rifle, and shotgun cartridges, from .22 WMR to 12 gauge and larger. .357 Magnum is probably the most common, as it is fairly powerful yet still compact enough to be used in a spear gun. Large cartridges such as the 12 gauge are generally only used on a handheld spear.Some powerheads use the cartridge to propel a barbed spear point into the target. These are generally used on a bangstick for alligator hunting, to secure a line to the alligator to prevent escape. Design: Purpose of contact-shooting Bullets are generally designed to work in air, which has a very low density. The density of water is roughly 800 times higher than that of air at sea level, and that reduces the penetration of a bullet proportionally. A bullet might travel a mile (1.6 km) in air, but travel no more than a few feet (about a meter) in water. Expanding hunting or defensive ammunition, such as that using hollow point bullets, will penetrate even less, as the water is dense enough to cause the bullet to expand. By firing while in contact with the target, a powerhead does not waste energy on traveling through the water, but rather expends all its energy directly on the target. Design: How they work Although most commercial powerheads use standard handgun ammunition, such as .357 Magnum or .44 Magnum, many users choose to use larger caliber ammunition to do maximum damage to the intended target. A common misconception of how powerheads function is that the muzzle blast does the damage, as much high-pressure gas is forced into the flesh of the target. While the gas does do some minimal damage, it is ultimately the penetration of the slug that causes the damage to the target. Most powerheads function just as traditional firearms do, except that it is the spear which acts as the firing hammer. One commercially produced version used a modified .30-30 Winchester cartridge case, loaded backwards, with a primed .38 Special case loaded in its mouth holding the primer. The cartridge was loaded with the .30-30 case facing outwards, so that the .30-30 case full of burning powder was propelled into the target upon firing. This system is fast to reload and one of the most effective despite the fact that it does not use a bullet. Design: Ammunition issues Since most powerheads are designed to use commercial ammunition, which is not designed to be used underwater, the ammunition used must be waterproofed. A coating of nail polish or varnish is commonly used around the primer and case mouth. For shotshells, a layer of rubber, such as a balloon, can be used to seal the crimped front of the shell. 
Legal issues: Australia In Australia, powerheads are classed as category A firearms; a weapons licence holder must show a legitimate reason to be issued a permit to acquire one. When not in use, they must be safely stored in a locked container, with ammunition stored separately. Regulations vary between states, with some states permitting their use only for defense against sharks and not for spearfishing. Legal issues: United States A powerhead may be considered a firearm under some circumstances. In the US, the ATF considers a powerhead a firearm if it is not permanently affixed to a shaft; generally, powerheads are sold spot-welded to a temporary steel shaft giving an overall length of greater than 18 inches (45 cm). After the powerhead is permanently installed on a spear shaft, the spot weld is cut and the temporary shaft discarded. Revenue Ruling 55-569, C.B. 1955–2, 483 says: A device ostensibly designed for submarine spear fishing, but capable of chambering and firing .22 caliber rimfire ammunition, is a firearm within the purview of the National Firearms Act. However, such a device, if permanently attached to the speargun shaft by the manufacturer, would not be a firearm. Legal issues: This ruling applies to the National Firearms Act, not to the 1968 Gun Control Act. (The National Firearms Act defines 'firearm' as machine guns, short-barreled rifles, short-barreled shotguns, and concealable firearms that are neither pistols nor revolvers.) This means that powerheads may still fall under the 1968 Gun Control Act with regard to shipping and purchase from licensed dealers. Laws may also prohibit the use of powerheads in sport fishing. They are allowed in US federally controlled waters, but many states prohibit their use in state-controlled waters. One can easily be in violation of state law while compliant with federal regulations.
**Ice II** Ice II: Ice II is a rhombohedral crystalline form of ice with a highly ordered structure. It is formed from ice Ih by compressing it at a temperature of 198 K and a pressure of 300 MPa, or by decompressing ice V. When heated it undergoes transformation to ice III. Ordinary water ice is known as ice Ih (in the Bridgman nomenclature). Different types of ice, from ice II to ice XIX, have been created in the laboratory at different temperatures and pressures. It is thought that the cores of icy moons like Jupiter's Ganymede may be made of ice II. History: The properties of ice II were first described and recorded by Gustav Heinrich Johann Apollon Tammann in 1900 during his experiments with ice under high pressure and low temperatures. Having produced ice III, Tammann then tried condensing the ice at a temperature between −70 and −80 °C (203 and 193 K; −94 and −112 °F) under 200 MPa (2,000 atm) of pressure. Tammann noted that in this state ice II was denser than he had observed ice III to be. He also found that both types of ice can be kept at normal atmospheric pressure in a stable condition so long as the temperature is kept at that of liquid air, which slows the change in conformation back to ice Ih. In later experiments by Bridgman in 1912, it was shown that the difference in volume between ice II and ice III was in the range of 0.0001 m3/kg (2.8 cu in/lb). This difference had not been detected by Tammann because the change is so small, which is why he had been unable to determine an equilibrium curve between the two. The curve showed that the structural change from ice III to ice II was more likely to happen if the medium had previously been in the structural conformation of ice II. However, if a sample of ice III that had never been in the ice II state was obtained, it could be supercooled even below −70 °C without it changing into ice II. Conversely, ice II could not be superheated while retaining the same form. Bridgman found that the equilibrium curve between ice II and ice IV was much the same as with ice III, having the same stability properties and small volume change. The curve between ice II and ice V was extremely different, however, with the curve being essentially a straight line and the volume difference being almost always 0.0000545 m3/kg (1.51 cu in/lb). Quest for a hydrogen-disordered counterpart of ice II: As ice II is completely hydrogen ordered, the existence of its disordered counterpart is of great interest. Shephard et al. investigated the phase boundaries of NH4F-doped ices because NH4F has been reported to be a hydrogen-disordering reagent. However, adding 2.5 mol% of NH4F resulted in the disappearance of ice II instead of the formation of a disordered ice II. According to the DFC calculation by Nakamura et al., the phase boundary between ice II and its disordered counterpart is estimated to be in the stability region of liquid water.
**Weyl integral** Weyl integral: In mathematics, the Weyl integral (named after Hermann Weyl) is an operator defined, as an example of fractional calculus, on functions f on the unit circle having integral 0 and a Fourier series. In other words there is a Fourier series for f of the form $\sum_{n=-\infty}^{\infty} a_n e^{in\theta}$ with $a_0 = 0$. Then the Weyl integral operator of order s is defined on Fourier series by $\sum_{n=-\infty}^{\infty} (in)^s a_n e^{in\theta}$ where this is defined. Here s can take any real value, and for integer values k of s the series expansion is the expected k-th derivative, if k > 0, or the (−k)-th indefinite integral normalized by integration from θ = 0. The condition $a_0 = 0$ here plays the obvious role of excluding the need to consider division by zero. The definition is due to Hermann Weyl (1917).
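As a brief illustration of the definition above, consider the operator's action on a single Fourier mode; m here is any nonzero integer, and the example is added for illustration rather than taken from Weyl's original formulation:

```latex
% Action of the Weyl operator of order s on a single Fourier mode e^{i m \theta}, m \neq 0:
\[
  e^{i m \theta} \;\longmapsto\; (i m)^{s}\, e^{i m \theta}.
\]
% For s = 1 this gives (i m) e^{i m \theta}, the ordinary derivative of e^{i m \theta};
% for s = -1 it gives (i m)^{-1} e^{i m \theta}, an antiderivative of e^{i m \theta},
% matching the integer cases k > 0 and k < 0 described above.
```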
**GRB 030329** GRB 030329: GRB 030329 was a gamma-ray burst (GRB) that was detected on 29 March 2003 at 11:37 UTC. A gamma-ray burst is a highly luminous flash associated with an explosion in a distant galaxy and producing gamma rays, the most energetic form of electromagnetic radiation, and often followed by a longer-lived "afterglow" emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, and radio). GRB 030329 was the first burst whose afterglow definitively exhibited characteristics of a supernova, confirming the existence of a relationship between the two phenomena. Observations: GRB 030329 was one of three gamma-ray bursts detected on 29 March 2003. The other two were labeled GRB 030329a and GRB 030329b. GRB 030329 was detected by multiple instruments onboard HETE at 11:37 UTC and lasted approximately 25 seconds. The burst's optical afterglow was first observed from Siding Spring Observatory less than two hours after the burst had been detected. The X-ray afterglow was first detected by RXTE approximately five hours after the burst. The radio afterglow was first detected by the Very Large Array and, at the time of its discovery, was the brightest radio afterglow ever observed. The burst was located at a sky position of R.A. = 10h 44m 49.95957s, Dec. = +21° 31′ 17.4375″ and had a redshift of z = 0.1685, corresponding to a distance of 587 Mpc. Supernova relation: GRB 030329's proximity to Earth enabled its afterglow to be studied in great detail. A spectrum taken of the burst's optical afterglow on 6 April 2003 showed peaks at approximately 570 nm and 470 nm. This spectrum was reproduced by combining a power-law distribution with the spectrum from SN 1998bw. These supernova-like features continued to develop in the weeks after the initial burst. Optical observations taken at Kitt Peak National Observatory on indicated that the burst's optical afterglow was brighter than a power-law decay would have predicted, a deviation that could have been explained by additional light from a supernova. On 10 April 2003, NASA announced that GRB 030329 had provided the definitive link between gamma-ray bursts and supernovae. The supernova was later referred to as SN 2003dh.
**Restaurateur** Restaurateur: A restaurateur is a person who opens and runs restaurants professionally. Although over time the term has come to describe any person who owns a restaurant, traditionally it refers to a highly skilled professional who is proficient in all aspects of the restaurant business. Etymology: The French word restaurateur comes from the Late Latin term restaurator ("restorer") and from the Latin term restaurare. The word restaurateur is simply French for a person who owns or runs a restaurant. The feminine form of the French noun is restauratrice.A less common variant spelling restauranteur is formed from the "more familiar" term restaurant with the French suffix -eur borrowed from restaurateur. It is considered a misspelling by some. The Oxford English Dictionary gives examples of this variant (described as "originally American") going back to 1837. H. L. Mencken said that in using this form he was using an American, not a French, word.
**High-power field** High-power field: A high-power field (HPF), when used in relation to microscopy, refers to the field of view under the maximum magnification power of the objective being used. Often, this represents a 400-fold magnification when referenced in scientific papers. Area: Area per high-power field for some microscope types: Olympus BX50, BX40 or BH2 or AO: 0.096 mm2 AO with 10x eyepiece: 0.12 mm2 Olympus with 10x eyepiece: 0.16 mm2 Nikon Eclipse E400 with 10x eyepiece and 40x objective: 0.25 mm2 Leitz Ortholux: 0.27 mm2 Leitz Diaplan: 0.31 mm2 Examples of usage: The area provides a reference unit, for example in reference ranges for urine tests. Used for grading of soft tissue tumors: Grading, usually on a scale of I to III, is based on the degree of differentiation, the average number of mitoses per high-power field, cellularity, pleomorphism, and an estimate of the extent of necrosis (presumably a reflection of rate of growth). Mitotic counts and necrosis are the most important predictors. The following grading is part of the classification of breast cancer:
**Cherenkov detector** Cherenkov detector: A Cherenkov detector (pronunciation: /tʃɛrɛnˈkɔv/; Russian: Черенко́в) is a particle detector using the speed threshold for light production, the speed-dependent light output or the speed-dependent light direction of Cherenkov radiation. Fundamental: A particle passing through a material at a velocity greater than that at which light can travel through the material emits light. This is similar to the production of a sonic boom when an airplane travels through the air faster than sound waves can move through the air. The light is emitted on a cone with half-angle θc about the direction in which the particle is moving, with cos(θc) = c/(nv) (c is the vacuum speed of light, n the refractive index of the medium, and v the speed of the particle). The cone angle θc is thus a direct measure of the particle's speed. The Frank–Tamm formula $\frac{d^2N}{d\omega\,dx} = \frac{z^2\alpha}{c}\sin^2\theta_c$ gives the number of photons produced. Aspects: Most Cherenkov detectors aim at recording the Cherenkov light produced by a primary charged particle. Some sensor technologies explicitly aim at Cherenkov light produced (also) by secondary particles, be it incoherent emission as occurring in an electromagnetic particle shower or coherent emission, for example the Askaryan effect. Cherenkov radiation is present not only in the range of visible light or UV light but in any frequency range where the emission condition can be met, i.e. also in the radiofrequency range. Different levels of information can be used. Binary information can be based on the absence or presence of detected Cherenkov radiation. The amount or the direction of Cherenkov light can be used. In contrast to a scintillation counter, the light production is instantaneous. Detector types: In the simple case of a threshold detector, the mass-dependent threshold energy allows the discrimination between a lighter particle (which does radiate) and a heavier particle (which does not radiate) of the same energy or momentum. Several threshold stages can be combined to extend the covered energy range. Cherenkov threshold detectors have been used for fast timing and time-of-flight measurements in particle detectors. More elaborate designs use the amount of light produced. In a Cherenkov calorimeter, which records light from both primary and secondary particles, the total light yield is proportional to the incident particle energy. Differential Cherenkov detectors use the light direction. Detector types: RICH detectors record individual Cherenkov photon locations on a position-sensitive sensor area and then reconstruct Cherenkov angles from the recorded patterns. As RICH detectors hence provide information on the particle velocity, if the momentum of the particle is also known (from magnetic bending), combining these two pieces of information enables the particle mass to be deduced so that the particle type can be identified.
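The threshold and angle relations in the Fundamental section above lend themselves to a quick numeric check. The following sketch uses arbitrarily chosen example values (water with n = 1.33 and a particle at β = v/c = 0.9), which are not taken from the article:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Example values chosen for illustration only.
    const double n    = 1.33;   // refractive index of the medium (water)
    const double beta = 0.9;    // particle speed as a fraction of c

    // Light is produced only above the threshold speed beta_threshold = 1/n.
    const double beta_threshold = 1.0 / n;               // about 0.75 for water

    if (beta > beta_threshold) {
        // cos(theta_c) = c / (n v) = 1 / (n beta), as stated in the text above.
        const double cos_theta = 1.0 / (n * beta);
        const double pi        = std::acos(-1.0);
        const double theta_deg = std::acos(cos_theta) * 180.0 / pi;
        std::printf("Cherenkov angle: %.1f degrees\n", theta_deg);   // about 33 degrees
    } else {
        std::printf("below the Cherenkov threshold: no light emitted\n");
    }
    return 0;
}
```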
**Somos sequence** Somos sequence: In mathematics, a Somos sequence is a sequence of numbers defined by a certain recurrence relation, described below. They were discovered by mathematician Michael Somos. From the form of their defining recurrence (which involves division), one would expect the terms of the sequence to be fractions, but nevertheless many Somos sequences have the property that all of their members are integers. Recurrence equations: For an integer k larger than 1, the Somos-k sequence $(a_0, a_1, a_2, \ldots)$ is defined by the equation $a_n a_{n-k} = a_{n-1}a_{n-k+1} + a_{n-2}a_{n-k+2} + \cdots + a_{n-(k-1)/2}\,a_{n-(k+1)/2}$ when k is odd, or by the analogous equation $a_n a_{n-k} = a_{n-1}a_{n-k+1} + a_{n-2}a_{n-k+2} + \cdots + (a_{n-k/2})^2$ when k is even, together with the initial values $a_i = 1$ for $i < k$. For k = 2 or 3, these recursions are very simple (there is no addition on the right-hand side) and they define the all-ones sequence (1, 1, 1, 1, 1, 1, ...). In the first nontrivial case, k = 4, the defining equation is $a_n a_{n-4} = a_{n-1}a_{n-3} + a_{n-2}^2$, while for k = 5 the equation is $a_n a_{n-5} = a_{n-1}a_{n-4} + a_{n-2}a_{n-3}$. Recurrence equations: These equations can be rearranged into the form of a recurrence relation, in which the value $a_n$ on the left-hand side of the recurrence is defined by a formula on the right-hand side, by dividing the formula by $a_{n-k}$. For k = 4, this yields the recurrence $a_n = \frac{a_{n-1}a_{n-3} + a_{n-2}^2}{a_{n-4}}$, while for k = 5 it gives the recurrence $a_n = \frac{a_{n-1}a_{n-4} + a_{n-2}a_{n-3}}{a_{n-5}}$. Recurrence equations: While in the usual definition of the Somos sequences the values of $a_i$ for $i < k$ are all set equal to 1, it is also possible to define other sequences by using the same recurrences with different initial values. Sequence values: The values in the Somos-4 sequence are 1, 1, 1, 1, 2, 3, 7, 23, 59, 314, 1529, 8209, 83313, 620297, 7869898, ... (sequence A006720 in the OEIS). The values in the Somos-5 sequence are 1, 1, 1, 1, 1, 2, 3, 5, 11, 37, 83, 274, 1217, 6161, 22833, 165713, ... (sequence A006721 in the OEIS). The values in the Somos-6 sequence are 1, 1, 1, 1, 1, 1, 3, 5, 9, 23, 75, 421, 1103, 5047, 41783, 281527, ... (sequence A006722 in the OEIS). The values in the Somos-7 sequence are 1, 1, 1, 1, 1, 1, 1, 3, 5, 9, 17, 41, 137, 769, 1925, 7203, 34081, ... (sequence A006723 in the OEIS). Integrality: The form of the recurrences describing the Somos sequences involves divisions, making it appear likely that the sequences defined by these recurrences will contain fractional values. Nevertheless, for k ≤ 7 the Somos sequences contain only integer values. Several mathematicians have studied the problem of proving and explaining this integer property of the Somos sequences; it is closely related to the combinatorics of cluster algebras. For k ≥ 8 the analogously defined sequences eventually contain fractional values. For Somos-8 the first fractional value is the 19th term with value 420514/7. For k < 7, changing the initial values (but using the same recurrence relation) also typically results in fractional values.
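The recurrences quoted above are easy to evaluate directly. The following sketch uses 64-bit integers, which suffice only for the first twenty or so terms because the sequences grow very quickly; it reproduces the initial Somos-4 and Somos-5 values listed above.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// First `count` terms of the Somos-k sequence, using the recurrence
// a_n * a_{n-k} = sum over j = 1 .. floor(k/2) of a_{n-j} * a_{n-k+j},
// with a_i = 1 for i < k.  For k <= 7 the division below is exact.
std::vector<std::int64_t> somos(int k, int count) {
    std::vector<std::int64_t> a(count, 1);
    for (int n = k; n < count; ++n) {
        std::int64_t num = 0;
        for (int j = 1; j <= k / 2; ++j)
            num += a[n - j] * a[n - k + j];
        a[n] = num / a[n - k];
    }
    return a;
}

int main() {
    for (std::int64_t x : somos(4, 15)) std::cout << x << ' ';
    std::cout << '\n';  // 1 1 1 1 2 3 7 23 59 314 1529 8209 83313 620297 7869898
    for (std::int64_t x : somos(5, 16)) std::cout << x << ' ';
    std::cout << '\n';  // 1 1 1 1 1 2 3 5 11 37 83 274 1217 6161 22833 165713
    return 0;
}
```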
**Bedwetting alarm** Bedwetting alarm: A bedwetting alarm is a behavioral treatment for nocturnal enuresis. History: The enuresis alarm methodology originated with French and German physicians in the first decade of the 20th century. Meinhard von Pfaundler, a German pediatrician, made the discovery accidentally: his original intention was to create an alarm device that would notify nursing staff when a child had wet the bed and needed to be changed, but the device proved to have significant therapeutic advantages after a certain time of use. Despite this early success, the treatment was not developed further until the 1930s, by two independent groups of psychologists: Orval Mowrer and Willie Mae Mowrer (1938) and John Morgan and Frances Witmer (1939). Mowrer used a modified Pfaundler alarm device with 30 children (ages 3–13 years), showing empirical success of the bell and pad method as a treatment for nocturnal enuresis, with the maximum time required to accomplish the treatment not exceeding two months. Treatment process: The individual places the sensor (usually located in briefs or underwear) and turns the alarm device on (there are various types of alarms) before going to sleep. The enuresis alarm is triggered when a sensor in the sheets or night clothes becomes wet with urine, setting off an auditory signal with the intention of causing the individual to wake, cease voiding, and arise to void. Parents are advised to wake their child when the alarm is activated; otherwise, children are prone to turn it off and go back to sleep. It is highly suggested that during treatment the alarm should be worn every night. The treatment effect and response are not immediate, and treatment should be continued for 2–3 months or until the child is dry for 14 consecutive nights (whichever comes first). There may be cultural differences in its acceptability, as it may be highly disruptive for the household and may require a significant commitment of time and effort. The family must be motivated and adhere to this therapy if it is to be successful, so they should be preemptively apprised of likely difficulties but assured that the first few weeks are the most troublesome. If necessary, doctors should monitor the child's progress early to address any problems and facilitate adherence. Conditioning: The enuresis alarm utilizes both classical and operant conditioning to provide a means of causing the sleeping individual to be regularly awakened immediately after the onset of urination so they can void in the toilet and prevent bed wetting. Conditioning: Classical conditioning The classical conditioning paradigm components for the bell and pad method are the following: the unconditioned stimulus (US) is the awakening stimulus or the alarm sound, the unconditioned response (UR) is the awakening response and sphincter contraction, the neutral stimulus (NS) is the feeling produced by bladder distention (the feeling of having a full bladder), the conditioned stimulus (CS) is the feeling produced by bladder distention, and the conditioned response (CR) is the awakening response and sphincter contraction. Initially, the individual experiences the alarm sounding (activated by urination) (US), eliciting the awakening response and sphincter contraction (UR): the individual wakes up, stops urinating, and travels to the bathroom. After continued pairing of the alarm sound (US) with the feeling of a full bladder (NS), the previous NS of feeling a full bladder becomes the CS and elicits the waking response (CR) of waking up to use the bathroom and urinate.
Conditioning: Operant conditioning In the operant conditioning paradigm, the alarm sound serves as a noxious stimulus added to the environment, effectively implementing a positive punishment procedure whenever the individual activates the alarm by urinating. This eventually causes an avoidance response from the individual, maintaining the behavior through negative reinforcement by avoiding the alarm sound altogether. In the future, the individual wakes up to urinate and avoids wetting the bed. Conditioning: Conditioning theory dissonance Most researchers of the enuresis alarm credit the treatment effect to the classical conditioning paradigm, as was explained in the original research by Mowrer. However, some researchers have noted an important difference between the alarm conditioning treatment and typical classical conditioning. In typical classical conditioning, when the unconditioned stimulus is withdrawn, the conditioned response gradually weakens with repeated application of the conditioned stimulus. In successful cases of the enuresis alarm conditioning treatment, no extinction occurs following the withdrawal of the alarm stimulus (US). This suggests that the conditioning treatment may follow the operant avoidance conditioning pattern rather than the classical conditioning pattern. In addition, a strictly classical conditioning explanation fails to incorporate the fact that social positive reinforcement may be introduced into the individual's environment by family members in response to signs of improvement, taking social learning into account. However, it is theorized that classical and operant conditioning both contribute to the effectiveness of the treatment. Sensors: A urine sensor is a necessary part of any bedwetting alarm. A basic urine sensor consists of two electrical conductors separated by moisture-absorbing insulating material. A low DC electric voltage, provided by batteries, is applied across these conductors. This low voltage is usually about 3 volts, so as not to be dangerous to the user. When this insulating material (frequently cotton cloth, as in common briefs) absorbs urine, it allows electricity to pass through it and between the conductors, resulting in a small electric current in the conductors. The conductors are attached to an alarm device, which triggers an alarm when it senses this current. Most sensors and alarms are engineered based on this concept. Note that unless the urine reaches the sensor mechanism and adequately wets the briefs (or insulator between the conductors), the urine may not be sensed and the alarm will not activate. Sensors: Sensors are usually classified in terms of their attachment mechanisms to the briefs or other urine-absorbing medium. The major sensor attachment categories are mechanical clips, sticky tape or pads for flat-surface sensors, magnetic attachment, and wiring sewn into special briefs. Stainless steel clips are most often used and are easily attached to and detached from the briefs at the point of urination. Flat-surface sensors require sticky tape or pads to be attached to the briefs. Magnetic sensors are magnetically attached to the briefs. Magnetic sensors and wired briefs are typically used for wireless alarms. Sensors: Another consideration is how the sensor (through its cable, if applicable) is attached to its alarm or transmitter in the case of wired alarms or wireless alarms. Some wireless alarms are truly wireless, with the transmitter being part of the sensor and completely self-contained.
For wired alarms, the sensor's wire (or cable) runs from the sensor (located at the point of urination) underneath the user's pajama shirt to wherever the alarm is located on the body (frequently on the collar of the pajama shirt, so that it is close to the ear). The attachment mechanism to the alarm, through which the electric current flows to the alarm, is important. If it is easily detached (unintentionally comes out from the alarm during use) the alarm may not be triggered. Most connectors are plastic telephone jacks which are very unlikely to be detached unintentionally (RJ-11, RJ-12, 616E, etc.). Types of alarms: Wearable alarms A wearable alarm is a design in which the child or patient wears the moisture sensor in or on their underwear or pajamas. This type of sensor will detect moisture almost immediately. The sensor is attached to the alarm unit with an electricity conducting wire or cable that can be worn under the shirt. Many wearable alarms vibrate as well as sound to wake deep sleepers. Types of alarms: Wireless alarms A wireless bedwetting alarm is one in which the sensor and the alarm unit communicate by a means other than a wire. The transmitter, which senses the moisture, is directly attached to the child's underwear. The signal is transmitted wirelessly to a unit that is across the room from the child or an alarm unit in the child's room. Once the alarm unit is activated, it is necessary to get out of bed to turn it off. New wireless alarms add the convenience of also sounding an alarm in the caregiver's room, allowing both patient and caregiver to sleep in the comfort and privacy of their own beds and rooms. Multiple alarms in the house can further increase convenience. Remote controls can facilitate using the wireless bedwetting alarm system, and be especially convenient for the parent or caregiver. Types of alarms: Pad-type alarms Bell-and pad alarms do not attach to the child in any way. The moisture sensor is in the form of a pad or mat that the child sleeps on top of. The pad detects moisture after urine has leaked onto it. The alarm unit is connected with a cord and usually sits on the bedside stand. This alarm requires a larger amount of urine before the sensor can detect moisture. The person must be on the pad for it to sense moisture. Factors of treatment success: Successful outcome of enuresis alarm treatment is associated with optimal motivation of the child and family, higher frequency of dry nights, and the absence of adverse environmental factors and psychiatric disorders. Reduced efficacy of the treatment is associated with lack of concern shown by the individual, lack of supervision, inconsistent use, family stress, abnormal scores on behavioral checklists, psychiatric disorders in the individual, failure to awaken in response to the alarm, unsatisfactory housing conditions, and more than one wetting episode per night.
**Hilary Blumberg** Hilary Blumberg: Hilary Patricia Blumberg is a medical doctor and the inaugural John and Hope Furth Professor of Psychiatry at the Yale School of Medicine. She is also a professor of Radiology and Biomedical Imaging, and works in the Child Study Center at Yale, where she has been a faculty member since 1998. She attended Harvard University as an undergraduate, and completed medical school at Cornell University Medical College (1990). She completed her medical internship and psychiatry residency at Cornell University Medical College/New York Hospital, and her neuroimaging fellowship training at Cornell University, Weill Medical College. She has received the 2006 National Alliance for Research in Schizophrenia and Depression (NARSAD) award and the Gerald L. Klerman Award for Clinical Research. Blumberg has authored a number of scientific articles that focus on bipolar disorder, neuroimaging, and the effects of specific genetic variations, developmental trajectories and structure-function relationships. Career and research: Awards and honours Mogens Schou Award, International Society for Bipolar Disorders, 2021 Blanche F. Ittleson Award, American Psychiatric Association, 2018 Colvin Prize for Outstanding Achievement in Mood Disorders Research, Brain and Behavior Foundation, 2017 Publications A. Sankar et al., “Telehealth Social Rhythm Therapy to Reduce Mood Symptoms and Suicide Risk Among Adolescents and Young Adults With Bipolar Disorder,” American Journal of Psychotherapy, Jul. 2021 H. P. Blumberg et al., “Rostral and Orbital Prefrontal Cortex Dysfunction in the Manic State of Bipolar Disorder,” American Journal of Psychiatry, vol. 156, no. 12, Dec. 1999. Career and research: M. N. Potenza et al., “An fMRI Stroop Task Study of Ventromedial Prefrontal Cortical Function in Pathological Gamblers,” American Journal of Psychiatry, vol. 160, no. 11, pp. 1990–1994, Nov. 2003. H. P. Blumberg et al., “Frontostriatal Abnormalities in Adolescents With Bipolar Disorder: Preliminary Observations From Functional MRI,” American Journal of Psychiatry, vol. 160, no. 7, pp. 1345–1347, Jul. 2003. J. A. Y. Johnston et al., “Multimodal Neuroimaging of Frontolimbic Structure and Function Associated With Suicide Attempts in Adolescents and Young Adults With Bipolar Disorder,” American Journal of Psychiatry, vol. 174, no. 7, pp. 667–675, Jul. 2017.
**Parasite experiment** Parasite experiment: In experimental physics, and particularly in high energy and nuclear physics, a parasite experiment or parasitic experiment is an experiment performed using a big particle accelerator or other large facility, without interfering with the scheduled experiments of that facility. This allows the experimenters to proceed without the usual competitive time scheduling procedure. These experiments may be instrument tests or experiments whose scientific interest has not been clearly established.
**PIK3C2B** PIK3C2B: Phosphatidylinositol-4-phosphate 3-kinase C2 domain-containing beta polypeptide is an enzyme that in humans is encoded by the PIK3C2B gene. Function: The protein encoded by this gene belongs to the phosphoinositide 3-kinase (PI3K) family. PI3-kinases play roles in signaling pathways involved in cell proliferation, oncogenic transformation, cell survival, cell migration, and intracellular protein trafficking. This protein contains a lipid kinase catalytic domain as well as a C-terminal C2 domain, a characteristic of class II PI3-kinases. C2 domains act as calcium-dependent phospholipid binding motifs that mediate translocation of proteins to membranes, and may also mediate protein-protein interactions. The PI3-kinase activity of this protein is sensitive to low nanomolar levels of the inhibitor wortmannin. The C2 domain of this protein was shown to bind phospholipids but not Ca2+, which suggests that this enzyme may function in a calcium-independent manner.
**Allylcyclopentane** Allylcyclopentane: Allylcyclopentane is a hydrocarbon that has the formula C8H14. This compound is a colourless liquid at room temperature. It has been prepared from cyclopentylmagnesium bromide and allyl bromide.
**Cell envelope** Cell envelope: The cell envelope comprises the inner cell membrane and the cell wall of a bacterium. In gram-negative bacteria an outer membrane is also included. This envelope is not present in the Mollicutes where the cell wall is absent. Bacterial cell envelopes fall into two major categories: a gram-positive type and a gram-negative type, distinguished by Gram staining. Either type may have an enclosing capsule of polysaccharides for extra protection. As a group these are known as polysaccharide encapsulated bacteria. Function: As in other organisms, the bacterial cell wall provides structural integrity to the cell. In prokaryotes, the primary function of the cell wall is to protect the cell from internal turgor pressure caused by the much higher concentrations of proteins and other molecules inside the cell compared to its external environment. The bacterial cell wall differs from that of all other organisms by the presence of peptidoglycan (poly-N-acetylglucosamine and N-acetylmuramic acid), which is located immediately outside of the cytoplasmic membrane. Peptidoglycan is responsible for the rigidity of the bacterial cell wall and for the determination of cell shape. It is relatively porous and is not considered to be a permeability barrier for small substrates. While all bacterial cell walls (with a few exceptions e.g. intracellular parasites such as Mycoplasma) contain peptidoglycan, not all cell walls have the same overall structures. This is notably expressed through the classification into gram positive and gram negative bacteria. Types of bacterial cell envelopes: The gram-positive cell wall The gram-positive cell wall is characterized by the presence of a very thick peptidoglycan layer, which is responsible for the retention of the crystal violet dyes during the Gram staining procedure. It is found exclusively in organisms belonging to the Actinomycetota (or high %G+C gram-positive organisms) and the Bacillota (or low %G+C gram-positive organisms). Bacteria within the Deinococcota group may also exhibit gram-positive staining behavior but contain some cell wall structures typical of gram-negative organisms. Imbedded in the gram-positive cell wall are polyalcohols called teichoic acids, some of which are lipid-linked to form lipoteichoic acids. Because lipoteichoic acids are covalently linked to lipids within the cytoplasmic membrane they are responsible for linking the peptidoglycan to the cytoplasmic membrane. Teichoic acids give the gram-positive cell wall an overall negative charge due to the presence of phosphodiester bonds between teichoic acid monomers. Types of bacterial cell envelopes: Outside the cell wall, many Gram-positive bacteria have an S-layer of "tiled" proteins. The S-layer assists attachment and biofilm formation. Outside the S-layer, there is often a capsule of polysaccharides. The capsule helps the bacterium evade host phagocytosis. In laboratory culture, the S-layer and capsule are often lost by reductive evolution (the loss of a trait in absence of positive selection). Types of bacterial cell envelopes: The gram-negative cell wall The gram-negative cell wall contains a thinner peptidoglycan layer adjacent to the cytoplasmic membrane than the gram-positive wall, which is responsible for the cell wall's inability to retain the crystal violet stain upon decolourisation with ethanol during Gram staining. 
In addition to the peptidoglycan layer, the gram-negative cell wall also contains an additional outer membrane composed of phospholipids and lipopolysaccharides which face the external environment. The highly charged nature of lipopolysaccharides confers an overall negative charge on the gram-negative cell wall. The chemical structure of the outer membrane lipopolysaccharides is often unique to specific bacterial strains (i.e. sub-species) and is responsible for many of the antigenic properties of these strains. Types of bacterial cell envelopes: As a phospholipid bilayer, the lipid portion of the outer membrane is largely impermeable to all charged molecules. However, channels called porins are present in the outer membrane that allow for passive transport of many ions, sugars and amino acids across the outer membrane. These molecules are therefore present in the periplasm, the region between the plasma membrane and outer membrane. The periplasm contains the peptidoglycan layer and many proteins responsible for substrate binding or hydrolysis and reception of extracellular signals. The periplasm is thought to exist in a gel-like state rather than a liquid due to the high concentration of proteins and peptidoglycan found within it. Because of its location between the cytoplasmic and outer membranes, signals received and substrates bound are available to be transported across the cytoplasmic membrane using transport and signaling proteins embedded there. Types of bacterial cell envelopes: In nature, many uncultivated Gram-negative bacteria also have an S-layer and a capsule. These structures are often lost during laboratory cultivation. Types of bacterial cell envelopes: Mycobacteria (Acid-fast bacteria) The Mycobacteria have a cell envelope which is not typical of gram-positives or gram-negatives. The mycobacterial cell envelope does not consist of the outer membrane characteristic of gram-negatives, but has a significant peptidoglycan-arabinogalactan-mycolic acid wall structure which provides an external permeability barrier. Therefore, there is thought to be a distinct 'pseudoperiplasm' compartment between the cytoplasmic membrane and this outer barrier. The nature of this compartment is not well understood. Acid-fast bacteria, like Mycobacteria, are resistant to decolorization by acids during staining procedures. The high mycolic acid content of Mycobacteria is responsible for the staining pattern of poor absorption followed by high retention. The most common staining technique used to identify acid-fast bacteria is the Ziehl–Neelsen stain or acid-fast stain, in which the acid-fast bacilli are stained bright red and stand out clearly against a blue background. Types of bacterial cell envelopes: Bacteria lacking a peptidoglycan cell wall The obligate intracellular bacteria in the family Chlamydiaceae are unique in their morphology as they do not contain detectable amounts of peptidoglycan in the cell wall of their infectious forms. Instead, the extracellular forms of these gram-negative bacteria maintain their structural integrity by relying on a layer of disulfide-bond cross-linked cysteine-rich proteins, which is located between the cytoplasmic membrane and the outer membrane in a manner analogous to the peptidoglycan layer in other gram-negative bacteria. In the intracellular forms of the bacterium the disulfide cross-linkage is not found, which makes this form more mechanically fragile.
Types of bacterial cell envelopes: The cell envelopes of bacteria in the class Mollicutes do not have a cell wall. The main pathogenic bacteria in this class are mycoplasma and ureaplasma. L-form bacteria are strains of bacteria that lack cell walls but are derived from bacteria that normally possess them.
**Platform display** Platform display: A platform display, destination display or train describer (British English) supplements the destination sign on arriving trains, giving passengers advance information. Historically, such displays showed only the next destination and sometimes the type of train. In later usage they were replaced by passenger information display systems (PIDS), allowing for real-time passenger information. Platform display: The first railway stations had only a timetable for passenger information. At larger stations the train porters would help passengers to board the correct train matching their ticket. They were supervised by a station manager who handled the security requirements for each departing train. The first aid in that task was a bell reminding passengers to board the train in time, which on smaller stations also announced the next train. Different directions would then be called out on the platform. As trains grew into mass transport systems, this was no longer enough. Train handling was optimized to allow less than a minute from arrival to departure at a stop, which prompted the use of loudspeakers and platform displays. The mechanical types were not standardized, and every station had its own range of facilities as they seemed useful. Platform display: A train describer was originally an additional apparatus on British railways ensuring that the identity of each train is displayed on the signalbox panel together with the indication of that train's presence, usually offering routing information. This routing information would then be passed through to the platform display for passenger information. Technically, the train reporting number was pushed from one signal box to the next. A series of interconnected signal boxes forms a train describer system (TDS), transferring train describer data to be shown on the respective signalbox panel in a train describer display. The electric relay interlocking boxes were later replaced by electronic control boards where the train indication is just a text element on the video display. Platform display: In a centralized electronic interlocking, the current train location and identification are used to predict the arrival at the next stop, allowing for countdown clocks for passenger information. The term passenger information display has widely replaced the term platform display, as station design can include different types of information displays - like a departure board in the main hall, a shorter list in the tunnels and an announcement of the next train on each platform side - which all get their information from a central electronic railway control system. Additionally, passenger information displays have come into use for bus and tram stops as well, where the destination display equipment is technically similar.
**Oxyprothepin decanoate** Oxyprothepin decanoate: Oxyprothepin decanoate, sold under the brand name Meclopin, is a typical antipsychotic which was used in the treatment of schizophrenia in the Czech Republic but is no longer marketed. It is administered by depot injection into muscle. The medication has an approximate duration of 2 to 3 weeks. The history of oxyprothepin decanoate has been reviewed.
**Sulodexide** Sulodexide: Sulodexide, traded as Aterina, is a highly purified mixture of glycosaminoglycans composed of heparan sulfate (80%) and dermatan sulfate (20%). Pharmacology: The low molecular weight of both sulodexide fractions allows for extensive oral absorption compared to unfractionated heparin. The pharmacological effects of sulodexide differ substantially from other glycosaminoglycans and are mainly characterized by a prolonged half-life and reduced effect on global coagulation and bleeding parameters. Due to the presence of both glycosaminoglycan fractions, sulodexide potentiates the antiprotease activities of both antithrombin III and heparin cofactor II simultaneously. Uses: Clinically, sulodexide is used for the prophylaxis and treatment of thromboembolic diseases; however, recent research has also demonstrated the beneficial effects of sulodexide in animal models of reperfusion injury and the treatment of diabetic nephropathy. In combination with melatonin, sulodexide has been shown to be a viable treatment option for patients suffering from central or sensorineural tinnitus. There have also been positive results in treating tinnitus using sulodexide as a monotherapy. Sulodexide has also been used effectively in the treatment of chronic venous disease.
**Production schedule** Production schedule: The production schedule is a project plan of how the production budget will be spent over a given timescale, for every phase of a business project. Production schedule: The scheduling process starts with the script, which is analysed and broken down, scene by scene, onto a sequence of breakdown sheets, each of which records the resources required to execute the scene. These resources include: Cast Actors Special Effects Wardrobe Special Equipment Stunts Extras/Silent Bits Props Make-up/Hair Extras/Atmosphere Vehicles/Animals Sound Effects/Music Production Notes Others. From the breakdown sheets, the Production Manager compiles a production board which is used as the basis for a shooting schedule for every day of the shoot.
**Symbiotic nova** Symbiotic nova: Symbiotic novae are slow irregular eruptive variable stars with very slow nova-like outbursts with an amplitude of between 9 and 11 magnitudes. The symbiotic nova remains at maximum for one or a few decades, and then declines towards its original luminosity. Variables of this type are double star systems consisting of a red giant, which is probably a Mira variable, and a hot compact object (usually a white dwarf), with markedly contrasting spectra and whose proximity and mass characteristics identify the pair as a symbiotic star. They are divided into D-type (dusty) or S-type (stellar), depending on whether the giant is a Mira variable or not. The red giant fills its Roche lobe so that matter is transferred to the white dwarf and accumulates until a nova-like outburst occurs, caused by the ignition of thermonuclear fusion. The temperature at maximum is estimated to rise up to 200,000 K, similar to the energy source of novae, but dissimilar to that of dwarf novae. The slow luminosity increase would then be simply due to the time needed for growth of the ionization front in the outburst. It is believed that the white dwarf component of a symbiotic nova remains below the Chandrasekhar limit, so that it remains a white dwarf after its outburst. One example of a symbiotic nova is V1016 Cygni, whose outburst in 1971–2007 clearly indicated a thermonuclear explosion. Other examples are HM Sagittae and RR Telescopii.
**Next-fit-decreasing bin packing** Next-fit-decreasing bin packing: Next-fit-decreasing (NFD) is an algorithm for bin packing. Its input is a list of items of different sizes. Its output is a packing: a partition of the items into bins of fixed capacity, such that the sum of sizes of items in each bin is at most the capacity. Ideally, we would like to use as few bins as possible, but minimizing the number of bins is an NP-hard problem. The NFD algorithm uses the following heuristic: Order the items from largest to smallest. Next-fit-decreasing bin packing: Initialize an empty bin and call it the "open bin". For each item in order, check if it can fit into the open bin: If it fits, then place the new item into it. Otherwise, close the current bin, open a new bin, and put the current item inside it. In short: NFD orders the items by descending size, and then calls next-fit bin packing. Performance upper bound: Baker and Coffman proved that, for every integer r, when the size of all items is at most 1/r, the asymptotic approximation ratio of NFD satisfies $R^{\infty}_{\mathrm{NFD}}(\text{size} \le 1/r) = h_\infty(r)$, where $h_\infty(r)$ is a sequence whose first elements are approximately 1.69103, 1.42312, 1.30238. In particular, taking r = 1 implies an asymptotic approximation ratio of at most $h_\infty(1) \approx 1.69103$. Later, NFD has also been analyzed probabilistically. Variants: Next-Fit packs a list and its inverse into the same number of bins. Therefore, Next-Fit-Increasing has the same performance as Next-Fit-Decreasing. However, Next-Fit-Increasing performs better when there are general cost structures.
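A compact sketch of the heuristic described above is shown below; the item sizes and the bin capacity are arbitrary example values, not taken from the article.

```cpp
#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

// Next-fit-decreasing: sort the items by decreasing size, then run next-fit,
// i.e. keep a single open bin and close it as soon as an item does not fit.
std::vector<std::vector<double>> next_fit_decreasing(std::vector<double> items,
                                                     double capacity) {
    std::sort(items.begin(), items.end(), std::greater<double>());
    std::vector<std::vector<double>> bins;
    double open_load = 0.0;
    for (double item : items) {
        if (bins.empty() || open_load + item > capacity) {
            bins.push_back({});     // close the open bin (if any) and open a new one
            open_load = 0.0;
        }
        bins.back().push_back(item);
        open_load += item;
    }
    return bins;
}

int main() {
    // Arbitrary example: seven items packed into bins of capacity 1.0.
    const auto bins = next_fit_decreasing({0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5}, 1.0);
    std::cout << bins.size() << " bins used\n";   // 4 bins for this input
    for (const auto& bin : bins) {
        for (double x : bin) std::cout << x << ' ';
        std::cout << '\n';
    }
    return 0;
}
```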
**Academic grading in the Czech Republic** Academic grading in the Czech Republic: This article introduces the academic grading systems in the Czech Republic. Primary and secondary: In the Czech Republic, primary and secondary schools use a 5-point grade system, with 1 as the best and 5 as the worst. They correspond to the following ratings: 1 = výborně (excellent), 2 = chvalitebně (commendable), 3 = dobře (good), 4 = dostatečně (sufficient), and 5 = nedostatečně (insufficient). Only whole numbers appear on report cards, but tests or oral exams are often marked with additional distinctive signs: 3+ is slightly better than 3, 2− is slightly lower than 2, 1-2 or 1/2 means halfway between 1 and 2, and 1* means exceptionally excellent. Primary and secondary: Private high schools or Gymnasiums may use different academic grading. A 10-point grading scale consisting of 1 (A), 2 (A-), 3 (B+) ... 10 (D) has been used in some private schools in the Czech Republic. Tertiary: Most universities use a 4-point grade system, in which 1 is the highest and 4 indicates failing. They might also use the textual form of the grades in addition to the numerical form: 1 = výborně (excellent), 2 = velmi dobře (very good), 3 = dobře (good), 4 = neprospěl (fail). In recent years, many universities have adopted the ECTS system, with grades A to F. It maps to the previous classification in the following way: Some Czech universities, such as the University of Economics, Prague, map the points acquired throughout a course (out of 100 acquirable) using a different ratio: a grade of 4+, or 50-59 points, may enable the student to retake an exam to try to get above the 60-point cutoff. If the student fails to do so, the grade is changed to 4 (fail) automatically at the end of the semester. On the first day of the next semester, the grades from each subject (where points were entered into the system) are also converted to the ECTS scale for international use like this:
**Digital Library of India** Digital Library of India: Digital Library of India, initially hosted by the Indian Institute of Science, CDAC Noida, and IIIT-Hyderabad during the 2000s, working in partnership with the Million Book Project, provides free access to many books in English and Indian languages. The scanning of Indian language books has created an opportunity for developing Indian language optical character recognition (OCR) software. The publications are mainly in PDF or QuickTime format. Because of copyright laws, the texts are all out of copyright and therefore not sources for current information, but rather useful for history and background. Digital Library of India: As of 2016, DLI had scanned 550,603 titles. Representative titles include: Ancient India, McCrindle J. W.. 1885. Ancient Indian Polity, Aiyangar K. V. Rangaswami. 1935. History of the Parsis Vol-I, Karaka Dosabhai Framji. 1884. A Treatise on Kala-Azar, Brahmachari Upendranath. 1928. "Aligarh kee taleemi tehreek", Khwaja Ghulamus Sayyedain, 1931 "Makateeb-e-Sanai" by Professor Nazir Ahmed, 1962. Books in Urdu and Persian are also available. Examples include "Aligarh kee taaleemi tehreek" by Khwaja Ghulamus Sayyedain and Makateeb-e-Sanai by Professor Nazir Ahmad. The DLI website has not been operational, for maintenance reasons, since 2017. The contents are available from archive.org.
**11-Deoxycortisol** 11-Deoxycortisol: 11-Deoxycortisol, also known as cortodoxone (INN), cortexolone as well as 17α,21-dihydroxyprogesterone or 17α,21-dihydroxypregn-4-ene-3,20-dione, is an endogenous glucocorticoid steroid hormone, and a metabolic intermediate towards cortisol. It was first described by Tadeusz Reichstein in 1938 as Substance S, and thus has also been referred to as Reichstein's Substance S or Compound S. Function: 11-Deoxycortisol acts as a glucocorticoid, though it is less potent than cortisol. 11-Deoxycortisol is synthesized from 17α-hydroxyprogesterone by 21-hydroxylase and is converted to cortisol by 11β-hydroxylase. Function: 11-Deoxycortisol in mammals has limited biological activity and mainly acts as a metabolic intermediate within the glucocorticoid pathway, leading to cortisol. In sea lamprey, a member of the agnathans that evolved more than 500 million years ago, 11-deoxycortisol is the major and final glucocorticoid, with mineralocorticoid activity. 11-Deoxycortisol also takes part, by binding to specific corticosteroid receptors, in intestinal osmoregulation in sea lamprey at metamorphosis, during which they develop seawater tolerance before downstream migration. Sea lamprey do not have the 11β-hydroxylase enzyme (CYP11B1) that converts 11-deoxycortisol to cortisol and 11-deoxycorticosterone to corticosterone in mammals. This indicates that a complex and highly specific corticosteroid signaling pathway evolved at least 500 million years ago with the arrival of the earliest vertebrates. The absence of cortisol and corticosterone in sea lampreys suggests that the 11β-hydroxylase enzyme may not have been present early in vertebrate evolution. Clinical significance: 11-Deoxycortisol in mammals has limited glucocorticoid activity, but it is the direct precursor of the major mammalian glucocorticoid, cortisol. As a result, the level of 11-deoxycortisol is measured to diagnose impaired cortisol synthesis, to identify the enzyme deficiency that causes the impairment along the pathway to cortisol, and to differentiate adrenal disorders. In 11β-hydroxylase deficiency, 11-deoxycortisol and 11-deoxycorticosterone levels increase, and the excess of 11-deoxycorticosterone leads to mineralocorticoid-based hypertension (as opposed to 21-hydroxylase deficiency, in which patients have low blood pressure from a lack of mineralocorticoids). In 11β-hydroxylase deficiency, 11-deoxycortisol can also be converted to androstenedione in a pathway that could explain the increase in androstenedione levels in this condition. In 21-hydroxylase deficiency, 11-deoxycortisol levels are low. History: In 1934, biochemist Tadeusz Reichstein, working in Switzerland, began research on extracts from animal adrenal glands in order to isolate physiologically active compounds. He published results of his findings along the way. By 1944, he had already isolated and elucidated the chemical structure of 29 pure substances. He assigned the newly found substances names consisting of the word "Substance" and a letter of the Latin alphabet. In 1938, he published an article about "Substance R" and "Substance S" describing their chemical structures and properties. Since about 1955, Substance S has been known as 11-deoxycortisol. In the 1930s and 1940s, clinicians were discovering many uses for the newly discovered hormones; however, only minute quantities could be extracted from animal organs. Chemists were looking for ways to produce these hormones on a larger industrial scale.
History: In 1949, American research chemist Percy Lavon Julian, in looking for ways to produce cortisone, announced the synthesis of the Compound S, from the cheap and readily available pregnenolone (synthesized from the soybean oil sterol stigmasterol).On April 5, 1952, biochemist Durey Peterson and microbiologist Herbert Murray at Upjohn, published the first report of a breakthrough fermentation process for the microbial 11α-oxygenation of steroids (e.g. progesterone) in a single step by common molds of the order Mucorales. History: 11α-oxygenation of Compound S produces 11α-hydrocortisone, which can be chemically oxidized to cortisone, or converted by further chemical steps to 11β-hydrocortisone (cortisol).
**Inlays and onlays (bookbinding)** Inlays and onlays (bookbinding): In bookbinding, inlays and onlays are pieces of leather adhered to the cover of a book, usually differing in color, grain, or both from the main covering leather. While they are complementary techniques, and may appear similar in their final forms, they are distinct in how they are constructed. Inlays: Leather inlays, which are similar in form to inlays in woodworking, are shaped pieces of leather the same thickness as the covering leather on a book. A piece of leather the same shape, size, and thickness as the inlay is removed from the covering leather, and the inlay is placed into the resulting space. The edge of the inlay can be tooled, in which case the edge of the inlay is beveled, with the skin side of the leather slightly larger than the flesh side. This gives a smooth, well-supported surface for the impressions of the finishing tools. Onlays: Often mistaken for inlays, onlays are thin pieces of leather (often less than 0.2 mm) which are adhered over the covering leather on a book. Because of their thinness, onlays are not noticeably raised from the surface of the covering leather. Onlays are adhered to the covering leather with paste or PVA, and their edges are usually, though not universally, tooled over in order to hide any minor irregularities. They are often added to the spine of a book, containing the title and author's name.
**Carnitine O-palmitoyltransferase** Carnitine O-palmitoyltransferase: Carnitine O-palmitoyltransferase (also called carnitine palmitoyltransferase) is a mitochondrial transferase enzyme (EC 2.3.1.21) involved in the metabolism of palmitoylcarnitine into palmitoyl-CoA. A related transferase is carnitine acyltransferase. Human forms: There are four different forms of CPT in humans: CPT1A – associated with Carnitine palmitoyltransferase I deficiency CPT1B CPT1C CPT2 – associated with carnitine palmitoyltransferase II deficiency
**Screen heating** Screen heating: The method of screening has been linked to the early Egyptians, who used the process to separate basic minerals by using a mesh with equal openings. The idea was passed down through the ages and developed into a series of European patents around 1924, when electrically heating the wire cloth of screens was first considered. This was the first attempt at eliminating screen blinding (when the material being processed (clay, dirt, etc.) clogs the holes in the screen) by using heat, and it proved that the surface tension responsible for blinding was reduced when heat was introduced. Screen heating: The first commercially available screen heating system was developed in 1947 by F.R. Hannon & Sons (now Hanco International), and Thomas W. Hannon obtained a patent for his electrically heated screen construction and method. This was just in time, as industries were beginning to expand coming out of World War II. The first applications of the screen heating system were limited almost exclusively to the clay industry, though it did not take long for other applications, in a variety of industries, to be identified. Screen heating: Another significant invention for screen heating was created by Thomas W. Hannon (for Hanco International). This was for screen-related clamp bars (also known as a Wedge Grip) that are used when mounting a screen on frame members. The clamp bars secured a screen to a tensioning rail, which in turn improved the electrical connection between the two components. In the years following, copper and copper cables were implemented in an attempt to improve the system. Silicon-bonded fiberglass was also added to the insulation of screen boxes. For a short period, flexible shunts were also used, but a major deficiency was found in attaching the stationary transformer to a screen that vibrated. The Invention of the Flux-Power Screen Heating System: In exploring ways to properly attach a transformer to a vibrating screen, Hanco International pioneered the Flux-Power Screen Heating System. The invention came about in 1961, created by Thomas W. Hannon, and it was unlike anything of its time. The design eliminated the use of physical connections as seen in earlier screen heating systems. Omitting the mechanical connection also eliminates the possibility of primary winding failures that come with short circuits on the secondary of the transformer, while eliminating wear and fatigue on the screen itself. This design also allowed the transformer to be encapsulated and protected from the elements and dust. The Invention of the Flux-Power Screen Heating System: When processing basic mineral deposits (clay, shale, limestone, etc.), Flux-Powered Screen Heating Systems are ideal. Heating the screen lessens the surface tension from moisture, which is a main cause of stickiness against cloth mediums. Introducing heat allows maximum contact and keeps the screen clear, allowing finer material separations and more precise sizing. This, in turn, adds to efficiency. Flux-Powered Screen Heating Systems have been used and proven effective in screening fertilizers (in handling phosphoric, potash and sulfate rock), foods, chemicals, and in recovering coal for power.
**Anosodiaphoria** Anosodiaphoria: Anosodiaphoria is a condition in which a person who has a brain injury seems indifferent to the existence of their impairment. Anosodiaphoria is specifically used in association with indifference to paralysis. It is a somatosensory agnosia, or a sign of neglect syndrome. It might be specifically associated with defective functioning of the frontal lobe of the right hemisphere. Joseph Babinski first used the term anosodiaphoria in 1914 to describe a disorder of the body schema in which patients verbally acknowledge a clinical problem (such as hemiparesis) but fail to be concerned about it. Anosodiaphoria follows a stage of anosognosia, in which there may be verbal, explicit denial of the illness; after several days to weeks, the lack of emotional concern develops. Indifference is different from denial because it implies a lack of caring on the part of the patient, who otherwise acknowledges his or her deficit. Causes: A few possible explanations for anosodiaphoria exist: the patient may be aware of the deficit but not fully comprehend it or its significance for functioning, or the condition may be related to an affective communication disorder and defective arousal. These emotional disorders cannot account for the verbal, explicit denial of illness seen in anosognosia. Other explanations include reduced emotional experience, impaired emotional communication, alexithymia, behavioral abnormalities, dysexecutive syndrome, and frontal lobe dysfunction. Neurology: Anosodiaphoria occurs after stroke. It has been reported in 27% of patients with acute right-hemisphere strokes but in only 2% of those with left-hemisphere strokes. Anosodiaphoria is thought to be related to unilateral neglect, a condition often found after damage to the non-dominant (usually the right) hemisphere of the cerebral cortex in which patients seem unable to attend to, or sometimes comprehend, anything on a certain side of their body (usually the left). The frontal lobe is thought to be the primary area for the lack of emotional insight seen in anosodiaphoria, such as in frontotemporal dementia. A 2011 study by Mendez and Shapira found that people with frontotemporal dementia also had a loss of insight more properly described as "frontal anosodiaphoria", a lack of concern for proper self-appraisal. Patients were found to have a lack of emotional updating, or concern for having an illness; an absence of emotional self-referent tagging of information about their disorder, which the authors suggest may arise from disease in the ventromedial prefrontal cortex and anterior cingulate-anterior insula area, especially on the right. Treatment: Indifference to illness may have an adverse impact on a patient's engagement in neurological, cognitive and physical rehabilitation. Patients are not likely to engage in rehabilitation for a condition about which they are indifferent. Although anosognosia often resolves in days to weeks after stroke, anosodiaphoria often persists. Therefore, the therapist has to be creative in their rehabilitation approach in order to maintain the interest of the patient.
**C++ string handling** C++ string handling: The C++ programming language has support for string handling, mostly implemented in its standard library. The language standard specifies several string types, some inherited from C, some designed to make use of the language's features, such as classes and RAII. The most-used of these is std::string. Since the initial versions of C++ had only the "low-level" C string handling functionality and conventions, multiple incompatible designs for string handling classes have been designed over the years and are still used instead of std::string, and C++ programmers may need to handle multiple conventions in a single application. History: The std::string type has been the main string datatype in standard C++ since 1998, but it was not always part of C++. From C, C++ inherited the convention of using null-terminated strings that are handled by a pointer to their first element, and a library of functions that manipulate such strings. In modern standard C++, a string literal such as "hello" still denotes a null-terminated array of characters. Using C++ classes to implement a string type offers several benefits: automated memory management, a reduced risk of out-of-bounds accesses, and more intuitive syntax for string comparison and concatenation. Therefore, it was strongly tempting to create such a class. Over the years, C++ application, library and framework developers produced their own, incompatible string representations, such as the one in AT&T's Standard Components library (the first such implementation, 1983) or the CString type in Microsoft's MFC. While std::string standardized strings, legacy applications still commonly contain such custom string types and libraries may expect C-style strings, making it "virtually impossible" to avoid using multiple string types in C++ programs and requiring programmers to decide on the desired string representation ahead of starting a project. In a 1991 retrospective on the history of C++, its inventor Bjarne Stroustrup called the lack of a standard string type (and some other standard types) in C++ 1.0 the worst mistake he made in its development; "the absence of those led to everybody re-inventing the wheel and to an unnecessary diversity in the most fundamental classes". History: Implementation issues The various vendors' string types have different implementation strategies and performance characteristics. In particular, some string types use a copy-on-write strategy, where an operation such as the assignment b = a does not actually copy the content of a to b; instead, both strings share their contents and a reference count on the content is incremented. The actual copying is postponed until a mutating operation, such as appending a character to either string, makes the strings' contents differ. Copy-on-write can make major performance changes to code using strings (making some operations much faster and some much slower). Though std::string no longer uses it, many (perhaps most) alternative string libraries still implement copy-on-write strings. History: Some string implementations store 16-bit or 32-bit code points instead of bytes; this was intended to facilitate processing of Unicode text. However, it means that conversion to these types from std::string or from arrays of bytes is dependent on the "locale" and can throw exceptions.
Any processing advantages of 16-bit code units vanished when the variable-width UTF-16 encoding was introduced (though there are still advantages if you must communicate with a 16-bit API such as the Windows API). Qt's QString is an example. Third-party string implementations also differed considerably in the syntax to extract or compare substrings, or to perform searches in the text. Standard string types: The std::string class has been the standard representation for a text string since C++98. The class provides some typical string operations like comparison, concatenation, find and replace, and a function for obtaining substrings. An std::string can be constructed from a C-style string, and a C-style string can also be obtained from one. The individual units making up the string are of type char, at least (and almost always) 8 bits each. In modern usage these are often not "characters", but parts of a multibyte character encoding such as UTF-8. Standard string types: The copy-on-write strategy was deliberately allowed by the initial C++ Standard for std::string because it was deemed a useful optimization, and it was used by nearly all implementations. However, there were mistakes; in particular, operator[] returned a non-const reference in order to make it easy to port C in-place string manipulations (such code often assumed one byte per character, so this may not have been a good idea). Because operator[] can be used to modify the string, a copy-on-write implementation had to make a copy whenever it was called, even though it is almost always used only to examine the string and not modify it. This caused some implementations to abandon copy-on-write. It was also discovered that the overhead in multi-threaded applications due to the locking needed to examine or change the reference count was greater than the overhead of copying small strings on modern processors (especially for strings smaller than the size of a pointer). The optimization was finally disallowed in C++11, with the result that even passing a std::string by value as an argument to a function must be expected to perform a full copy of the string into newly allocated memory. The common idiom to avoid such copying is to pass the string as a const reference. C++17 added a new string_view class that is only a pointer and length to read-only data and makes passing arguments far faster than either of the above approaches (the three conventions are illustrated in the sketch at the end of this article). Related classes: std::string is a typedef for a particular instantiation of the std::basic_string template class. Its definition is found in the <string> header. Thus string provides basic_string functionality for strings having elements of type char. There is a similar class std::wstring, which consists of wchar_t, and is most often used to store UTF-16 text on Windows and UTF-32 on most Unix-like platforms. The C++ standard, however, does not impose any interpretation as Unicode code points or code units on these types and does not even guarantee that a wchar_t holds more bits than a char. To resolve some of the incompatibilities resulting from wchar_t's properties, C++11 added two new classes: std::u16string and std::u32string (made up of the new types char16_t and char32_t), whose code units are 16 and 32 bits wide, respectively, on all platforms. Standard string types: C++11 also added new string literals of 16-bit and 32-bit "characters" and syntax for putting Unicode code points into null-terminated (C-style) strings. A basic_string is guaranteed to be specializable for any type with a char_traits struct to accompany it.
As of C++11, only the char, wchar_t, char16_t and char32_t specializations are required to be implemented. A basic_string is also a Standard Library container, and thus the Standard Library algorithms can be applied to the code units in strings. Standard string types: Critiques The design of std::string has been held up as an example of monolithic design by Herb Sutter, who reckons that of the 103 member functions on the class in C++98, 71 could have been decoupled without loss of implementation efficiency.
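The passing conventions and basic_string typedefs discussed above can be illustrated with a short sketch. This is a minimal illustration, not taken from any particular codebase; the function names and example strings are invented for the purpose.

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <string_view>

// Passing by value: since C++11 this must be expected to copy the string's contents.
std::size_t count_by_value(std::string s) { return s.size(); }

// The common idiom: pass by const reference to avoid the copy.
std::size_t count_by_const_ref(const std::string& s) { return s.size(); }

// C++17: std::string_view is only a pointer and a length to read-only data,
// so it is cheap to pass and also accepts C-style strings directly.
std::size_t count_by_view(std::string_view s) { return s.size(); }

int main() {
    std::string greeting = "hello";        // constructed from a C-style string literal
    const char* c_str = greeting.c_str();  // a C-style string obtained back from it

    std::cout << count_by_value(greeting) << ' '
              << count_by_const_ref(greeting) << ' '
              << count_by_view(greeting) << ' '
              << count_by_view(c_str) << '\n';  // prints "5 5 5 5"

    // std::string and its relatives are instantiations of the basic_string template, roughly:
    //   typedef std::basic_string<char>     string;
    //   typedef std::basic_string<wchar_t>  wstring;
    //   typedef std::basic_string<char16_t> u16string;  // since C++11
    //   typedef std::basic_string<char32_t> u32string;  // since C++11
}
```

Passing by const reference avoids the copy that pass-by-value would make, while string_view additionally accepts plain C-style strings without constructing a std::string at all.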
**Normustine** Normustine: Normustine, also known as bis(2-chloroethyl)carbamic acid, is a nitrogen mustard and alkylating antineoplastic agent (i.e., chemotherapy agent). It is a metabolite of a number of antineoplastic agents that have been developed for the treatment of tumors, including estramustine phosphate, alestramustine, cytestrol acetate, and ICI-85966 (stilbostat), of which only the first, estramustine phosphate, has actually been marketed.
**Indium(III) chloride** Indium(III) chloride: Indium(III) chloride is the chemical compound with the formula InCl3. This salt is a white, flaky solid with applications in organic synthesis as a Lewis acid. It is also the most available soluble derivative of indium. This is one of three known indium chlorides. Synthesis and structure: Being a relatively electropositive metal, indium reacts quickly with chlorine to give the trichloride. Indium trichloride is very soluble and deliquescent. A synthesis has been reported using an electrochemical cell in a mixed methanol-benzene solution. Like AlCl3 and TlCl3, InCl3 crystallizes as a layered structure consisting of a close-packed chloride arrangement containing layers of octahedrally coordinated In(III) centers, a structure akin to that seen in YCl3. In contrast, GaCl3 crystallizes as dimers containing Ga2Cl6. Molten InCl3 conducts electricity, whereas AlCl3 does not as it converts to the molecular dimer, Al2Cl6. Reactions: InCl3 is a Lewis acid and forms complexes with donor ligands, L, of the type InCl3L, InCl3L2 and InCl3L3. For example, with the chloride ion it forms tetrahedral InCl4−, trigonal bipyramidal InCl52−, and octahedral InCl63−. In diethyl ether solution, InCl3 reacts with lithium hydride, LiH, to form LiInH4. This unstable compound decomposes below 0 °C, and is reacted in situ in organic synthesis as a reducing agent and to prepare tertiary amine and phosphine complexes of InH3. Trimethylindium, InMe3, can be produced by reacting InCl3 in diethyl ether solution either with the Grignard reagent MeMgI or with methyllithium, LiMe. Triethylindium can be prepared in a similar fashion but with the Grignard reagent EtMgBr. Reactions:
InCl3 + 3 LiMe → Me3In + 3 LiCl (in OEt2)
InCl3 + 3 MeMgI → Me3In + 3 MgClI (in OEt2)
InCl3 + 3 EtMgBr → Et3In + 3 MgBrCl (in OEt2)
InCl3 reacts with indium metal at high temperature to form the lower valent indium chlorides In5Cl9, In2Cl3 and InCl. Catalyst in chemistry: Indium chloride is a Lewis acid catalyst in organic reactions such as Friedel-Crafts acylations and Diels-Alder reactions. An example of the latter is the multicomponent reaction of N,N'-dimethylbarbituric acid, benzaldehyde and ethyl vinyl ether, which proceeds at room temperature with 1 mol% catalyst loading in an acetonitrile-water solvent mixture. The first step is a Knoevenagel condensation between the barbituric acid and the aldehyde; the second step is an inverse electron-demand Diels-Alder reaction. With the catalyst, the reported chemical yield is 90% and the proportion of the trans isomer is 70%. Without the catalyst, the yield drops to 65% with 50% trans product.
**Target hardening** Target hardening: Target hardening, also referred to simply as hardening when made clear by the context, is a term used by police officers, those working in security, and the military referring to the strengthening of the security of a building or installation in order to protect it in the event of attack or reduce the risk of theft. It is believed that a "strong, visible defense will deter or delay an attack".In terms of business and home security, target hardening is one of the suite of protective measures that are included in crime prevention through environmental design. This can include ensuring all doors and windows are sourced and fitted in such a way that they can resist forcible and surreptitious intruder attack, adding hard barriers and landscapes that resist vehicle and pedestrian intrusion, adding fences, walls and hostile planting. All of these are greatly assisted by removing or pruning any trees or bushes that could offer suitable hiding places or could be used to climb to a higher level of the property. However, for a business, taking target hardening too far can send the wrong message out to potential customers.In military or counter-terrorism terms, target hardening refers to ensuring strategic or tactical assets are secured against adversary attack.Other more specific terms associated with target hardening include hostile vehicle mitigation and "blast hardening".
**Insulin-induced gene 1 protein** Insulin-induced gene 1 protein: Insulin induced gene 1, also known as INSIG1, is a protein which in humans is encoded by the INSIG1 gene.INSIG1 is short for insulin-induced gene 1; it is located on chromosome 7 (7q36). This human gene encodes for a transmembrane protein of 277 amino acids with probably 6 transmembrane domains. It is localized in the endoplasmic reticulum (ER) and seems to be expressed in all tissues, especially in liver. This gene is called an insulin-induced gene because the molecule insulin can regulate it. Importantly, the protein encoded by this gene plays a critical role in regulating cholesterol concentrations in cells. Function: INSIG1 plays an important role in the SREBP-mediated regulation of cholesterol biosynthesis: by binding to the sterol-sensing domain of SCAP (SREBP cleavage activating protein) it makes the SCAP/SREBP complex stay longer in the ER, thus prohibiting SCAP from carrying activated SREBP to the golgi complex. This ultimately blocks SREBP from acting as a transcription factor for the SRE in the promoter region of the HMG-CoA-reductase gene and results in a decreased expression of HMG-CoA-reductase. Function: INSIG1 also binds to the sterol-sensing domain of HMG-CoA-reductase, resulting in the enzyme's increased degradation.Both functions require the binding of INSIG1 protein via the same site. Function: There are two other proteins whose sterol-binding sites show a great similarity to the ones of SCAP and HMG-CoA-reductase and who might thus be regulated by INSIG1 as well: Niemann-Pick disease type C1 protein, which participates in the intracellular movement of cholesterol Patched, the receptor for Hedgehog, a protein that contains covalently bound cholesterolOxysterols regulate cholesterol homeostasis through liver X receptor (LXR) and sterol regulatory element-binding protein (SREBP) mediated signaling pathway. This protein binds to the sterol-sensing domains of SREBP cleavage-activating protein (SCAP) and HMG CoA reductase, and is essential for the sterol-mediated trafficking of the two proteins. Alternatively spliced transcript variants encoding distinct isoforms have been observed. Regulation: INSIG1 is regulated by insulin and highly expressed in liver. Sequence (277 AA): MPRLHDHFWS CSCAHSARRR GPPRASTAGL PPKVGEMINV SVSGPSLLAA HGAPDADPAP RGRSAAMSGP EPGSPYPNTW HHRLLQRSLV LFSVGVVLAL VLNLLQIQRN VTLFPEEVIA TIFSSAWWVP PCCGTAAAVV GLLYPCIDSH LGEPHKFKRE WASVMRCIAV FVGINHASAK LDFANNVQLS LTLAALSLGL WWTFDRSRSG LGLGITIAFL ATLITQFLVY NGVYQYTSPD FLYIRSWLPC IFFSGGVTVG NIGRQLAMGV PEKPHSD Synonyms: CL-6, INSIG-1, Insulin-induced gene 1 protein, MGC1405 (source: iHOP) Interactions: INSIG1 has been shown to interact with SREBF2.
**1883 Cleveland Blues season** 1883 Cleveland Blues season: The 1883 Cleveland Blues finished the season at 55–42, fourth place in the National League. Regular season: [Season standings, record vs. opponents, roster, and player batting and pitching statistics tables are not reproduced here.]
**Plantar cuneonavicular ligaments** Plantar cuneonavicular ligaments: The plantar cuneonavicular ligaments are fibrous bands that connect the plantar surface of the navicular bone to the adjacent plantar surfaces of the three cuneiform bones.
**Blocco Automatico a Correnti Codificate** Blocco Automatico a Correnti Codificate: Blocco automatico a correnti codificate (BACC or BAcc, automatic block with codified currents) is a signalling block system used in Italy on railway lines with 3 kV DC electrification. The track circuits used to detect the presence of a train also transmit coded signals to the trains, which are used for train protection and cab signalling. Train protection systems that use BAcc are RS4 Codici, RS9 Codici and SCMT. Codes: The information is conveyed by superposition of two amplitude-modulated alternating currents in the rails (with carrier frequencies of 50 Hz and 178 Hz, respectively). Receiver coils in front of the first axle of a locomotive or control car are used to detect the signal. Codes: The frequency of the modulating signal encodes the signal aspect. While RS4 Codici is a simple cab signalling system that only requires the driver to acknowledge any change in the aspect of the next signal, the RS9 on-board subsystem (SSB) also continuously monitors the train speed and computes braking curves according to the train's length, mass, and braking ability. Codes: RS4 Codici uses a single 50 Hz alternating current. All codes that do not use the 178 Hz carrier are used identically by both RS4 and RS9 Codici; thus, RS9 is backward-compatible with RS4 Codici. An alarm is sounded in the cab if the train exceeds the speed setpoint by more than 3 km/h. If the train exceeds the speed setpoint by more than 5 km/h or misses the designated stopping place, the system applies emergency brakes.
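The overspeed supervision described above amounts to two thresholds over the current speed setpoint. The following is a minimal sketch of that logic only; the type and function names are hypothetical and do not correspond to any real RS9 or SCMT implementation.

```cpp
#include <iostream>

// Hypothetical supervision outcome for an RS9-style on-board unit.
enum class Supervision { None, CabAlarm, EmergencyBrake };

// Overspeed rules as described above: a cab alarm above setpoint + 3 km/h,
// an emergency brake above setpoint + 5 km/h or when the stopping place is missed.
Supervision supervise(double speed_kmh, double setpoint_kmh, bool missed_stop) {
    if (missed_stop || speed_kmh > setpoint_kmh + 5.0) return Supervision::EmergencyBrake;
    if (speed_kmh > setpoint_kmh + 3.0) return Supervision::CabAlarm;
    return Supervision::None;
}

int main() {
    std::cout << (supervise(154.0, 150.0, false) == Supervision::CabAlarm) << ' '
              << (supervise(156.0, 150.0, false) == Supervision::EmergencyBrake) << '\n';  // prints "1 1"
}
```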
**Magic: The Gathering formats** Magic: The Gathering formats: Magic: The Gathering formats are various ways in which the Magic: The Gathering collectible card game can be played. Each format provides rules for deck construction and gameplay, with many confining the pool of permitted cards to those released in a specified group of Magic card sets. The Wizards Play Network (WPN; formerly known as the DCI), the governing body that oversees official Magic competitive play, categorizes its tournament formats into Constructed and Limited. Additionally, there are many casual formats, with Commander being one of the most popular formats of the game. Overview: Formats are divided into two main categories by the Wizards Play Network: Tournament and Casual. The term "sanctioned" refers to formats that the Wizards Play Network allows to be run at official events. Officially sanctioned events can also add additional rules such as disallowing proxy cards. A number of other formats have been designed by Wizards of the Coast or by players themselves for custom gameplay or reduced investment cost; these are known as casual formats. Some casual formats utilize rules or sets of cards that differ from those used in sanctioned tournament play. One of the most popular formats of Magic is the Commander format, which is technically a casual sanctioned format. In 2015, Wizards of the Coast officially sanctioned many casual formats, including "Invent Your Own Format", for use at Friday Night Magic events. Formats can further be divided by whether they are Constructed or Limited formats. Constructed formats require decks to be made prior to participation, with players allowed to use any tournament-legal cards they possess. Sanctioned Constructed formats include Standard, Pioneer, Modern, Legacy, and Vintage. Limited formats, in contrast, utilize a restricted and unknown pool of cards, usually formed by opening Magic products. Limited competitions require players to select cards and build decks on the fly within the tournament itself. The primary two sanctioned Limited formats are Sealed Deck and Booster Draft.
Cards banned in a specific format may not be used in decks for that format. Cards restricted in a specific format may have only one copy in a deck, including the sideboard. Tournament formats: Standard The Standard (originally called "Type 2") format was introduced in 1995 and became the flagship format in the constructed deck tournament scene. It is also the format most commonly found at Friday Night Magic tournaments, played weekly at many hobby shops. A variation of the format called Arena Standard is used for online play through Magic: The Gathering Arena. This format generally consists of the most recent standard set (expansion/core set) releases. Sets are included in the standard format for up to two years, with the four oldest sets being removed from the format in the fall "rotation"; thus the number of sets included in the standard format is at its lowest immediately after the rotation and increases as new sets are released until the oldest sets are rotated out again the following fall. Under the previous rules, Standard consisted of the three to four most recent "block" releases plus any core sets released in the same period, with the oldest blocks rotating out when the first set of a new block was released. As of mid-2023, the Standard card pool includes: Innistrad: Midnight Hunt, Innistrad: Crimson Vow, Kamigawa: Neon Dynasty, Streets of New Capenna, Dominaria United, The Brothers' War, Phyrexia: All Will Be One, March of the Machine, and March of the Machine: The Aftermath.
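As a rough illustration of the general Constructed deck-construction rules listed above (a minimum of 60 cards, and no more than four copies of any card except basic lands), here is a minimal sketch. The function name and card names are invented for the example, and format-specific banned or restricted lists and sideboard checks are deliberately omitted.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// Check the generic Constructed deck rules described above: at least 60 cards,
// and no more than four copies of any single card name, except basic lands.
bool is_legal_constructed_deck(const std::vector<std::string>& deck) {
    static const std::set<std::string> basics =
        {"Plains", "Island", "Swamp", "Mountain", "Forest", "Wastes"};

    if (deck.size() < 60) return false;  // minimum deck size

    std::map<std::string, int> counts;
    for (const auto& card : deck) ++counts[card];

    for (const auto& [name, count] : counts)
        if (count > 4 && basics.count(name) == 0)  // basic lands are exempt from the limit
            return false;
    return true;
}

int main() {
    std::vector<std::string> deck(36, "Forest");  // basic lands, any number allowed
    for (int i = 0; i < 6; ++i)
        deck.insert(deck.end(), 4, "Spell " + std::to_string(i));  // invented card names, 4 copies each
    std::cout << is_legal_constructed_deck(deck) << '\n';  // prints 1
}
```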
Modern has one of the richest metas of all, boasting many decks of different color combinations and archetypes". Tournament formats: Pioneer Pioneer was created in the autumn of 2019. The rules for card legality are similar to Modern, consisting of cards that were released into the Standard format starting with a given expansion set. For Pioneer, the first legal expansion set is Return to Ravnica. The cutoff was made as it is the first expansion released after Modern was made an official format. Tournament formats: Like other constructed formats, Pioneer maintains its own banned list. Tournament formats: Historic In 2019, a "MTG Arena-first format" was officially announced. The new Historic format was created as a way for players to use cards that are available on Arena, but are not currently legal in the Standard format due to rotation, ban, or other reasons. The three ways that cards join the historic format are: appearing in a standard-legal set, appearing in supplemental sets released on Arena (such as the non-standard set Jumpstart ), or added via 15-20 card sets called Historic Anthologies. Like other constructed formats, Historic maintains its own banned list. Tournament formats: The Historic format was featured as the format of the Pro Tour event, The Mythic Invitational taking place September 10–13, 2020. Tournament formats: Legacy Legacy allows cards from all sets (known as an "Eternal" format). It maintains a curated ban list based on power level reasons. The format evolved from Type 1.5, which allowed cards from all sets and maintained a banned list corresponding to Vintage: all cards banned or restricted in the old Type 1 were banned in Type 1.5. The modern Legacy format began in 2006, as the DCI separated Legacy's banned list from Vintage and banned many new cards to reduce the power level of the format.Wizards has supported the format with Grand Prix events and the release of preconstructed Legacy decks on Magic Online in November 2010. The first Legacy Grand Prix was Grand Prix Philadelphia in 2005.Legacy format allows various cards that other formats would ban quickly, with a relatively small ban list for all of the cards that would be usable in it. Tournament formats: Vintage The Vintage format (formerly known as Type 1) is another Eternal constructed format. Vintage maintains a small banned list and a larger restricted list. Unlike in the other formats, the WPN does not ban cards in Vintage for power level reasons. Cards banned in Vintage are those that either involve ante, manual dexterity (Falling Star, Chaos Orb), or could hinder event rundown (Shahrazad and Conspiracy cards). Cards that raise power level concerns are instead restricted to a maximum of one copy per deck. The one exception to this was Lurrus of the Dream Den, which could be cast from outside the game and thus could not meaningfully be restricted; Lurrus was unbanned after a rule change in 2021. Vintage is currently the only format in which cards are restricted.Because of the expense in acquiring the old cards to play competitive Vintage, many Vintage tournaments are unsanctioned and permit players to use a certain number of proxy cards. These are treated as stand-ins of existing cards and are not normally permitted in tournaments sanctioned by the WPN. Dot eSports highlighted that "Vintage is Legacy, except you can add one copy of cards on the 'restricted list' (the 50 or so most powerful cards in the game). 
These decks, quite simply, are the most powerful things in Magic and are insanely fast and lethal. They’re also prohibitively expensive". Tournament formats: Pauper Pauper is a Magic variant in which card legality is based on rarity. Any cards that either have been printed as common in paper format or appeared as common in a Magic Online set at least once are legal. A variant format is Pauper Standard which is Standard but only with common cards. Destructoid commented that the Pauper format is being picked up by "many local gaming stores".The format was originally an official format exclusive for Magic Online on December 1, 2008, using Magic Online's own rarity list for pre-7th Edition cards appearing in the Master’s Edition series, though some paper Pauper events have been run on that list. After it became a sanctioned format in June 2019, all paper and digital sets were put into consideration instead. In January 2022, Wizards of the Coast announced additional support for the format via the newly formed Pauper Format Panel; the panel is led by Gavin Verhey, a senior designer at Wizards of the Coast, along with "six Magic players and personalities from the Pauper community". This panel will provide play recommendations, such as removing or unbanning cards, and will focus on the "health of the format". Tournament formats: Limited Limited formats are so-called because they require players to build their decks from a more limited pool of cards than Constructed formats. Limited formats require players to open a specified number of Magic products, they then must work exclusively with the cards that came from that product. Due to the nature of Limited formats, players cannot build their decks in advance of the tournament and must build their deck within the tournament itself.The three sanctioned Limited formats are: Sealed Deck: in Sealed Deck tournaments, each player receives six booster packs to build "the best 40-card deck they can". Depending on which sets are to be used in a sealed deck event, the distribution of packs can vary greatly. For example, a Magic 2010 sealed deck event consists of six Magic 2010 boosters, but a sanctioned Shards of Alara block sealed deck event consists of two Shards of Alara, two Conflux, and two Alara Reborn booster packs. Tournament formats: Booster Draft: in a booster draft, several players (usually eight) are seated around a table and each player is given three booster packs. Each player opens a pack, selects a card from it and passes the remaining cards to their left. Each player then selects one of the remaining cards from the pack that was just passed to them, and passes the remaining cards to the left again. This continues until all of the cards are depleted. The process is repeated with the second and third packs, except that the cards are passed to the right in the second pack. Players then build decks out of any cards that they selected during the drafting and add as many basic lands as they choose. Each deck built this way must have a minimum of 40 cards, including basic lands. Tournament formats: Rochester Draft: a booster draft variant that was commonly used as a format in Pro Tour and Grand Prix. Although it is still a sanctioned format, the format "at a competitive level is remarkably rare". In 2018, Wizards of the Coast ran the Silver Showcase which was an invite-only Rochester Draft held before the 25th Anniversary Pro Tour. 
The format differs from traditional booster draft in that packs are opened one at a time and are laid out for each player to see. Players openly pick one card from the pack in turn. Once each player has picked a card from the booster pack, the draft order reverses so that the last player to draft a card from the pack takes the next draft pick and then passes the pack back the way it came. Once each player has opened a booster and followed this process, the final player to open a booster opens their next booster and the draft pick order is reversed. The process is repeated until each player has opened three booster packs each and all the cards in those packs have been drafted.The following rules apply to all current sanctioned Limited formats: Limited decks must contain a minimum of 40 cards. There is no maximum deck size, but the player must be able to shuffle their deck unassisted. Tournament formats: Players are not restricted to four of any one card in Limited tournament play. Tournament formats: Any drafted or opened cards not used in a player's Limited deck function as his or her sideboard. Players may request additional basic land cards (not including Snow-covered lands and wastes, which only appear in specific sets) for their sideboard. There are no restrictions on the number of cards a player may exchange this way as long as the main deck contains at least forty cards. Cards do not need to be exchanged on a one-for-one basis. Tournament formats: Sanctioned Multiplayer Traditionally, Magic is a game that is played between two players, however, it is also possible to play with multiple players. Despite the existence of numerous multiplayer formats, Two-Headed Giant is currently the only multiplayer format that has been officially sanctioned by the WPN. Tournament formats: Two-Headed Giant (2HG): a team game where pairs of players share turns and life totals. Each player has their own separate deck and plays independently of their teammate, however, teammates share the goal of defeating the opposing team. The 2HG format can be used to play Constructed or Limited games. In Constructed 2HG, no cards can be used by both members of the team, except basic land cards. In June 2005, rules for handling multiplayer games were added to the official rulebook, and 2HG team play became the first multiplayer format to be sanctioned by the DCI. The first Two-Headed Giant Grand Prix was Grand Prix Amsterdam in 2007. The first and thus far only Pro Tour to be held under the Two-Headed Giant format was Pro Tour San Diego in 2007. On June 8, 2018, Battlebond was released as the first Two-Headed Giant-focused booster set. Casual formats: Casual play groups and even Wizards of the Coast have developed many alternative formats for playing the game. These formats are designed to accommodate larger numbers of players, to allow two or more players to work together as a team, or create specific requirements for deck construction. Not all formats are officially sanctioned formats. However, many of these variants are popular in tournament play, though not all have support from Wizards of the Coast. Several casual formats have been implemented in Magic: The Gathering Online and Magic: The Gathering Arena. Casual formats: Jan Švelch, in the academic journal Analog Game Studies, highlighted that "along the way, players themselves started creating their own formats and even more actively influencing the life of the game. Wizards of the Coast [...] 
have embraced some of these community formats by releasing cards made especially for such formats. [...] Many of these emergent formats address the more controversial aspects of the official and sanctioned Magic formats, for example the rather high barrier of entry for new players and the high level of competitiveness. [...] Some communities maintain unofficial formats through regular updates of rules and a banlist whenever new sets are released or when particular metagames converge around a small number of extremely efficient decks. [...] Creation of such community formats and their consequent commercialization by publishers can also be seen as a manifestation of fan labor in which fans create value which is later capitalized on by the official producers". Casual formats: Casual Constructed As with sanctioned formats, most casual formats can be categorized into Constructed or Limited formats. Casual constructed formats include: Peasant: a format similar to Pauper (where only common cards are legal), in Peasant, a deck may contain up to 5 uncommon cards and the rest must be common. Peasant Magic was created by Rob Baranowski who felt that players with limited access to cards should still have an opportunity for competitive play. Tournaments for this format have taken place at Gen Con since 2001. However, the original banned list is considered to be outdated and most tournaments are played by the rules of the largest active Peasant community. Casual formats: Frontier: a format developed by Japanese stores Hareruya and BigMagic in 2016. It is similar to Modern in its deck construction rules, but with a later start date; card sets are legal from Magic 2015 onwards. Casual formats: Singleton: a format where players are allowed to use only one of each card instead of the usual limit of four. This variation is also known as "Legendary" (in Magic, before the Magic 2014 Core Set rule change, there could only be one of any legend card in the game), or "Restricted" (tournament formats with a restricted list insist that decks have no more than one of those cards) Magic. The "Elder Dragon Highlander (EDH)" variation became the Commander format. Some versions of this format require that the decks have a minimum of 100 cards, ban sideboards, and institute a special rule for mulligans with hands having either too many or too few lands. In temporary events in Magic: The Gathering Arena, it's possible to play in the Singleton format. Casual formats: Tribal Wars: a constructed casual format in which one-third of every deck must be of a single creature type. Common tribes in Magic include elves, goblins, and merfolk. Certain cards are banned in the Magic Online variant of Tribal Wars that would be overly swingy against known enemy Tribal decks, such as Circle of Solace or Engineered Plague. Gladiator: introduced during the COVID-19 pandemic and the cessation of live events, Gladiator is a casual constructed, singleton format that is specific to Magic: The Gathering Arena. Casual formats: Premodern: a constructed format which only allows for typically "old-bordered" cards printed from 4th Edition up to Scourge (prior to the 1st modern-legal set, 8th Edition - hence the "Premodern" name). Certain powerful combo-enabling cards are banned, such as Entomb or Necropotence, but a number of decks and strategies from past Standard/Type 2 and Extended formats can be seen in tournaments, such as Goblins, the Rock, Landstill, etc. 
However, up-to-date Magic: The Gathering rules are enforced as opposed to past implementations : among other things, for example, extra mana left in the pool does not cause mana burns and combat damage does not go on the stack. Casual formats: Casual Limited Limited casual formats include all the sanctioned formats as well. Formats include: Cube Draft: a booster draft variant in which the pool of cards is a predetermined set of cards chosen for the purpose of drafting them. The pool of cards is known as a Cube and usually contains a minimum of 360 cards to accommodate an eight-player booster draft. The cards used in a Cube are usually unique so that no card appears more than once in a draft. Typically, the card pool is an amalgamation of powerful cards from throughout the history of Magic, although the card pool can be whatever theme is desired. The Cube Draft format has been sanctioned by Magic Online in 2012, albeit for limited time runs. Cube Draft was first used as a format at the 2012 Magic Players Championship. Casual formats: Back Draft: a draft variant where each player tries to build the worst deck possible, because each player gives another player that deck to play in the tournament. To avoid mana problems, players choose what lands to add in the deck after they are "backdrafted". Scoring is usually done where a player gains a point each time the deck they play with wins and each time the deck they built loses. Casual formats: Reject Rare Draft: is a format in which each player donates 45 rare cards (the same number as in 3 regular boosters) and then drafts as normal. The rares are "donated", as everyone takes home the deck they draft and no attempt is made to return the rares to the original owners, as all the rares donated must be able to be categorized as an "unplayable" rare occasionally printed by MTG for any number of reasons. Hence "reject rare draft". This variant was developed at Neutral Ground, a gaming store owned by Brian David-Marshall, a columnist for Wizards and noted commentator in the Magic world. Casual formats: Type 4 (or Limited Infinity): in this format players randomly draft a 45 card deck from a large card pool (similar to a cube draft) without knowing the cards included in their deck. Players get infinite mana but are only allowed 1 spell per turn (1 each turn, their own and 1 during each opponent's turn). A starting hand is 5 cards. Casual formats: Casual Multiplayer The majority of multiplayer formats are casual formats, with Two-Headed Giant being the only multiplayer format to ever be sanctioned. Many formats can be adapted for multiple players, however, some formats are designed specifically for play with multiple players. Multiplayer formats include: Free-For-All: the simplest format where players sit in a circle and vie with those around them to be the final surviving player. Sometimes restrictions are added on who can be attacked in large free-for-alls - e.g. a player can only attack players sitting next to them. Casual formats: Assassin: in this format players are randomly assigned "targets" to defeat. Assassins and targets are selected by picking out pairs of cards (such as two forests two mountains two plains etc.) According to the number of players. Each player is dealt one type of card which is placed face up next to player. The other cards are shuffled and dealt face down (this is their target). Each player may only attack the target assigned to them. 
Players score points for delivering the finishing blow to their assigned target as well as for being the last survivor. Defeating another player grants you their "contract", and thus a new target to attack. If a player is dealt their matching card, then they are considered rogue and may target any player. Casual formats: Emperor: in this format two teams, each generally composed of three players, play to ensure their central player (the "Emperor") outlasts the other. A team wins the game when the opposing Emperor has been eliminated, it does not matter if that team has any other players left on the team. Teams can either be pre determined or randomly decided. After teams have been selected Emperors are decided in the same fashion. Range of influence is a standard rule enforced upon each emperor during a game. It is widely debated what a fair range of influence is and should be discussed before the match. (Example: An Emperor with a ROI of 1 can only cast spells and abilities as far as 1 player to his left or right. A ROI of 2 enables targeting of 2 players left or right. This effectively allows emperors to use harmful spells on non emperor enemy players) Another rule worth noting is all creatures gain a tap ability that reads "Target Teammate gains control of this creature." Summoning sickness affects use of this rule. If a player leaves the game for any reason all of their permanents leave the game as well regardless of who controls them. A variant is a game with five players per team (an Emperor, two Generals, and two Lieutenants). Casual formats: Commander The Commander format launched in 2011, which was derived from a fan-created format known as "Elder Dragon Highlander (EDH)"; the format uses 100 card singleton decks (no duplicates except basic lands), a starting life total of 40, and features a "Commander" or "General". The Commander must be a legendary creature (with some exceptional cases, namely Planeswalkers with text that specifically states they can be your Commander), and all cards in the deck can only have mana symbols on them from the Commander's colors. The Commander is not included in one's library; it is visible to all players in the "command" zone and can be played as if it was in one's hand. Whenever it is put into a graveyard or exiled, the Commander's owner may choose to put it back into the "command" zone instead, and playing it afterwards will cost 2 more uncolored mana (and so on if this repeats). If a player takes 21 combat damage from any one commander, that player loses the game regardless of life total (a rule to bring games to an eventual halt and somewhat keep lifegain in check). The format has its own official banned list. The format "supports two to six players, sometimes more".Since 2011, Wizards of the Coast has released a product line containing preconstructed Commander decks. However, the format is still maintained by the Commander Rules Committee which is run independently of Wizards of the Coast. In 2021, Dot eSports highlighted that "Commander has become one of the biggest formats in Magic over the past five years, even leading to Wizards of the Coast dubbing 2020 as 'The Year of Commander.' The format is a boon for novice and experienced deckbuilders to craft thematic decks centered around Magic’s over 1,200 Legendary creatures". 
Charlie Hall, for Polygon, commented in 2020 that "many Magic players see creating a Commander deck as the ultimate expression of a player’s skill, and of their ability to use their personal collection of cards to its fullest. The Commander format embodies the game’s reputation for competition, but also for storytelling". Jason Coles, for Dicebreaker, wrote that Commander is "possibly the most popular format in all of Magic: The Gathering [...]. It’s a fun format that generally features groups of up to four players duking it out and trying to keep each other in check". Casual formats: Oathbreaker The Oathbreaker format launched in 2023, which was derived from a fan-created variation of Commander. It was created by the Weirdcards Charitable Club, a Minnesota-based gaming group, "around 2017" before becoming an officially supported format by Wizards of the Coast in March 2023. This format is free-for-all multiplayer with three to five players who each start with 20 life; the winner is the last standing player. Each player builds a 58-card singleton deck along with selecting an "Oathbreaker (a planeswalker card) and a Signature Spell (an instant or sorcery) that matches the color identity of the Oathbreaker" to go with the deck. "The Oathbreaker and the Signature Spell start in the command zone at the beginning of each game, and can be cast during the game at their normal costs, plus an additional two mana for each time they have been cast. Both of these cards return to the command zone if they would go to the graveyard". Casual formats: Variant Magic: The Gathering products Wizards of the Coast have released a number of official products creating new, or supporting existing, casual formats. Below is a list of the formats these products were created for. Casual formats: Vanguard This variant was designed specifically for social play. Each player has a special card that affects the game. These cards change the players' starting life total and cards in hand, and have additional effects as well. Vanguard initially began with special oversized Vanguard cards, released as part of various promotions. Only four sets of avatar cards were made before the product was discontinued. The cards featured depicted major characters from the storyline of Magic, including Gerrard Capashen, Karn and Squee. A new version of Vanguard was eventually added to Magic Online, with a player's avatar filling the role of the oversized physical cards. Players are given a standard set of avatars and can receive more as entry and high-finishing prizes in release events. New avatars are regularly added as new sets of Magic cards are released, each depicting a card from the set. The wider availability online, combined with occasional tournaments, has made online Vanguard more of a success than its physical predecessor. Casual formats: One recent addition to the regular Vanguard format is Momir Basic, which involves the Momir Avatar, which allows a player to discard a land card to get a random creature into play. All Momir Basic Decks are constructed entirely of basic land. Casual formats: Planar Magic In September 2009, Wizards of the Coast released the Planechase product. The product was designed to allow players to play the new casual 'Planar Magic' format. The format can be played with two or more players. Each player requires a traditional Magic deck and a 'planar deck' of plane cards, players also need a 'planar die'. 
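Both Commander and Oathbreaker apply the same escalating command-zone cost described above: each time a card is recast from the command zone it costs two more mana than the previous time. A minimal sketch of that arithmetic, with an invented function name:

```cpp
#include <iostream>

// Cost to cast a card from the command zone: its normal cost plus
// two additional mana for each time it has already been cast from there.
int command_zone_cost(int base_mana_value, int previous_casts) {
    return base_mana_value + 2 * previous_casts;
}

int main() {
    // An invented commander with mana value 4: 4, then 6, then 8, ...
    for (int casts = 0; casts < 3; ++casts)
        std::cout << command_zone_cost(4, casts) << ' ';
    std::cout << '\n';  // prints "4 6 8"
}
```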
The first player turns over a plane card from the top of their planar deck and that card affects the game as specified on the card. The current plane card only changes when the specific symbol on the planar dice is rolled. Casual formats: In 2012, Wizards announced that they would be making a new set of Planechase game packs. They were released on June 1, 2012. Archenemy In June 2010, Wizards of the Coast released the Archenemy product. The product allowed players to play a new multiplayer casual format designed by Wizards of the Coast. The format is designed for four players with one player taking the role of the Archenemy and the other three players creating a team to play against the Archenemy. Casual formats: Each player plays with a traditional Magic deck, however, the Archenemy also possess a 'scheme deck' of 20 oversized cards. During the first main phase of the Archenemy's turn they turn over a card from their Scheme deck and use its effect. The effects of scheme cards are usually powerful to allow the Archenemy a greater chance of defeating their three opponents. The Archenemy starts at 40 life while all other players start at the traditional 20 life. The Archenemy also always takes the first turn and draws a card at the beginning of this turn. The Archenemy's opponents share a turn, as in the Two-Headed Giant format, however they play individually and cannot share resources.The Archenemy wins the game by defeating each member of the opposing team, whilst the opposing team wins if they defeat the Archenemy. Casual formats: Brawl Brawl format is a variant format of Commander developed by WotC staff Gerritt Turner which launched in 2018. Brawl utilizes all cards that are currently legal in Standard and has a rotation schedule similar to that of Standard. While similar to traditional Commander, deck size is limited to 60 cards and each player starts with 30 health. The format is commonly played as a sanctioned event on Magic: The Gathering Online and on MTG Arena. It was a highly requested addition to MTG Arena but the "variant never took off on paper". The physical format was not well-received by the players due to a "shortage of preconstructed decks" and the resale price of individual cards. Casual formats: Other casual formats Various alternative rules can be used to govern the construction of decks. Some of these variants have become so popular that unsanctioned tournaments have taken place at various Magic tournaments and gaming-oriented conventions such as Gen Con. Casual formats: All-Play is an unofficial format using a 100 card deck. All 5 basic land cards are available at the side. On your turn you can play a land from one of the piles. During draw phase you draw 2 cards and discard 1 to the scrap pile. Hand limit is 3 cards and other players play off the same deck. Each player has their own discard pile and the scrap pile is not counted. If you trigger a scry you do it next turn before your draw step. Casual formats: Artisan is a Magic Arena-specific constructed format. All cards must be either common or uncommon rarity. Artisan events may use either Standard or Historic format legality for cards. Desolation is an unofficial format where all cards are banned besides cards that were once banned and/or restricted in sanctioned formats. The format is played in two different versions, one that allows cards printed in unglued, unhinged, and unstable, and another one that does not titled "Candy Desolation" and "Classic Desolation" respectively. 
Casual formats: Game of Thrones is an unofficial format based on "Kings" of a six-person to eight-person free for all where you have 6-8 cards that represent the following characters 2 Kings (King of the North and King of the South), 1-2 Knights (who are secretly assigned to a king), and 1-2 Usurpers. You give out the cards secretly to each of the six-eight players. Their role is decided by the card they have. When the game starts the two kings reveal his or her role and she or he rolls against the other king to see who goes first. The goal of the game for each class is different. As the king, if you are the last one alive you win the game so your goal is to kill the other king, the knight or knights can or can not be alive for this win condition. The Knight's role is to keep their king alive and kill the other king. The Usurpers role is to kill any king in order to become the king. Should this happen, the Usurper becomes the king, adds 10 life to their life total, and the "killed" king becomes the new Usurper with a total of fifteen life. The knight or knights who were assigned to the previous king are now reassigned to the new king. There are a number of special rules for this format. 1) anyone can block for the king but no one can block for anyone else. 2) Everyone is allowed a free mulligan and free scry. 3)The king who rolls the highest amount always goes first. 5) The King's life starts 10 life higher than the knights and the usurpers. The point of the game is to play as the roles and figure out each others roles while trying to protect your king and kill the other king. Milling and commander damage still apply. Casual formats: Horde Magic is a cooperative multiplayer variant of Magic. The Allied players face off against the Horde deck, which is automatically controlled. The Horde automatically casts a semi-random number of creatures and effects from it every turn, then attacks with everything possible. The default flavor of the Horde are mindless attacking zombies. The Horde has no life total, but damage to it reduces its library of cards. If the players can survive until the Horde runs out of cards, they win. Casual formats: King is an unofficial format of a five-person free for all where you have 5 cards that represent the following characters King, Assassin, Guard, and Rogue. The assassin has two cards in the 5 pile. You give out the cards secretly to each of the five players. Their role is decided by the card they have. When the game starts the king reveals his or her role and she or he goes first. The goal of the game for each class is different. As the king, if you are the last one alive you win the game, the guard can be alive for this condition. The guards only role is to keep the king alive and kill the assassins and rogue. The assassins jobs are to kill the king, if at any time in the game the king loses and an assassin is alive they win. The rogue's job is to kill everyone without the assassins winning the game. When a person loses during this format they reveal their card and their character. Another role is possible for this format for more players: the jester. The jester wants to be killed by anyone, and is pretending to be an assassin. If the jester is killed he/she reveals their card. They then take the card of the person who killed him and their life total. Their board state remains the same. There are a number of special rules for this format. 1) anyone can block for the king but no one can block for anyone else. 
2) Everyone is allowed a free mulligan and a free scry. 3) The king always goes first. 4) There are no alternate win conditions, because the point of the game is to play the roles and find each other, which any alternate win condition would defeat. This rule does not apply to milling or commander damage. Casual formats: Mental Magic is a format in which cards may be played as any card in the game with the same mana cost. Casual formats: Mini-Magic is a constructed variant where decks are built with a maximum card limit of 15 and a maximum hand size of 3. Because of the small deck size, the state-based action causing a player to lose when they attempt to draw a card from their empty library is ignored. Select cards are banned in this format due to their heightened power level given the limited deck size. Alternatively, the format may be drafted using a single booster pack per person; this is known as Mini-Master or Pack Wars. Casual formats: Pack War is a format in which two players each open two booster packs (without looking at the contents), set aside the token or advertisement cards, and add 3 of each type of basic land. The players then play a best two-of-three match. Several game stores supporting this unofficial format then award a booster pack from one of the sets in Standard to the winner (assuming the four other booster packs were purchased at the store that day). Casual formats: Penny Dreadful is an unofficial Magic Online budget format whose legality rules include only cards that cost 0.02 tickets (roughly one penny). Old Frame is a format where only cards that were originally printed between Alpha and Onslaught are allowed. This format intends to recreate 2003 Vintage. The main differences are that Portal sets and Starter 1999 are allowed, the format is played with contemporary rules and errata, and the banned and restricted list is regularly modified as needed. Old School is a format where only cards that were printed in 1993 and 1994 (the first 2 years of Magic) are allowed. There are many different variations, often with different rules set regionally by a playgroup or a local tournament organizer. QL Magic is a variant where players can only play with cards with the old Magic card frames, in contrast with the Modern format. As such, the format allows players to play with cards from Alpha through Onslaught block. The format also uses an older version of the rules based on the Sixth Edition rules. Roploplo is a format where only cards that were printed in 1993 and 1994 (the first 2 years of Magic) are allowed; it is played as 80-card singleton multiplayer with a commander that is legendary or mono-colored, under the Paladin ruleset (Mind Twist, Library of Alexandria and City in a Bottle banned). Retired formats: Extended: The Extended format was created in 1997 and contained more sets than Standard / Type II, but fewer sets than Vintage / Type 1. In 1997, it consisted of cards from The Dark and forward, and any edition of the basic set from Revised and forward, which at the time included Chronicles, as well as all promotional cards. By 2002, it changed to consist of the last six-to-eight years of sets, rotating every three years. In 2008, the format was changed to a flat last seven years of sets, with a rotation each year. In 2010, the format was changed again to consist of only the last four years of blocks and core sets. With each autumn set release, one year's worth of sets rotates out of the format. Any additional sets released between rotations are automatically added to this format's card pool.
The new system was implemented to reduce the format's card pool, with the intention that this would make the format more understandable and attractive to play. On July 22, 2013, Wizards of the Coast announced that the Extended format would be retired, with the final sanctioned events occurring on October 8, 2013. Retired formats: Block Constructed: The Block Constructed format uses only the cards from a single block of Magic sets. Magic sets from Mirage to Khans of Tarkir have come in groups of three sets known as blocks. Block Constructed formats, and blocks themselves, usually take the name of the first set in the block. For example, the Ravnica Block Constructed format consists of Ravnica: City of Guilds, Guildpact, and Dissension. Only cards that were printed in the sets in the appropriate block can be used in Block Constructed formats. The Lorwyn and Shadowmoor blocks were a minor exception, as they were two mini-blocks of two sets each that were combined to make the Lorwyn-Shadowmoor Block Constructed format. After 2015's Battle for Zendikar, blocks came to consist of only two sets. Although Wizards of the Coast still sanctioned Block Constructed events, no major events such as the Grand Prix or Pro Tour used the format after that point, and the company played down the importance of the format. The format itself would be dropped in April 2018, when blocks were no longer used for Standard sets. Retired formats: Prismatic: In Prismatic or 5-Color, players must build very large decks of at least 250 cards and accommodate a minimum number of cards of each color. This format was first developed by Kurt Hahn and several other players in the Milwaukee area in 1999–2000. 5-Color was managed by the 5CRC (5-Color Ruling Council), which, while not affiliated with Wizards of the Coast, organized tournaments, had its own list of banned and restricted cards, and had a world championship held at Gen Con. It also supported ante cards, an initial component of the rules for Magic that has since been deprecated. When Magic Online was under development, this format was requested by many users, and it was added as "Prismatic" with slight differences. An additional "big deck" mulligan was also standard online, allowing players to compensate for hands with too many or too few lands. However, the 5CRC eventually stopped sanctioning tournaments and changed leadership, and the Magic Online Prismatic format was discontinued due to lack of interest in 2015.
**Blount's disease** Blount's disease: Blount's disease (or Blount disease) is a growth disorder of the tibia (shin bone) which causes the lower leg to angle inward, resembling a bowleg. It is also known as "tibia vara". Cause: Blount disease is a growth disorder of the shin bone which causes the lower leg to angle inward, resembling a bowleg. It can present in boys under 4 years of age in both legs, or in adolescents, usually on one side. Causes are thought to be genetic and environmental, such as obesity, African-American lineage, and early walking. Diagnosis: Differential diagnosis: Lower extremity deformities in rickets can closely mimic those produced by Blount's disease. To differentiate between rickets and Blount's disease, it is important to correlate the clinical picture with laboratory findings such as calcium, phosphorus and alkaline phosphatase levels, in addition to the X-ray appearance. Bone deformities in rickets have a reasonable likelihood of correcting over time, while this is not the case with Blount's disease. Nevertheless, both disorders may need surgical intervention in the form of bone osteotomy or, more commonly, guided growth surgery. Osteochondrodysplasias, or genetic bone diseases, can cause lower extremity deformities similar to Blount's disease. The clinical appearance and the characteristic radiographic findings are important to confirm the diagnosis. Treatment: Children who develop severe bowing before the age of 3 may be treated with knee-ankle-foot orthoses. However, bracing may fail, or bowing may not be detected until the child is older. Bracing should be started by 3 years of age. In some cases, surgery may be performed. Blount disease is one of the 8 severe comorbidities of severe obesity (BMI >35), which are an indication for bariatric surgery in children per a 2019 policy statement of the American Academy of Pediatrics. Treatment: The other severe comorbidities are: obstructive sleep apnea (Apnea-Hypopnea Index >5), type 2 diabetes mellitus, idiopathic intracranial hypertension (IIH), nonalcoholic steatohepatitis, SCFE, GERD, and hypertension. Etymology: Blount disease is named after Walter Putnam Blount (1900–1992), an American pediatric orthopedic surgeon, who described it in 1937. It has also been known as Mau-Nilsonne Syndrome, after C. Mau and H. Nilsonne, who published early case reports of the condition. It is today considered an acquired disease of the proximal tibial metaphysis rather than an epiphyseal dysplasia or osteochondrosis.
**Allmyapps** Allmyapps: Allmyapps was an application store for Microsoft Windows. The service allowed users to install, update and organize over 1,500 PC applications. History: Allmyapps developed an application manager for Ubuntu before entering Microsoft's IDEEs program and receiving 1 million euros from Elaia Partners in 2010. The company launched the first Windows application store as a beta version at LeWeb in December 2010, for which it won the Startup Pitch competition. In December 2011, Allmyapps announced that they had 2.5 million registered users. On October 21, 2014, Allmyapps was bought by ironSource. Features: Allmyapps supported the Windows 8, Windows 7, Windows Vista and Windows XP operating systems. The service let users initiate the installation of applications from its website and from its desktop client. The desktop client could detect software already installed on the computer in order to update it via the application store and to save the list online as a backup.
**Skewbald** Skewbald: Skewbald is a colour pattern of horses. A skewbald horse has a coat made up of white patches on a non-black base coat, such as chestnut, bay, or any colour besides black. Skewbald horses which are bay and white (bay is a reddish-brown colour with black mane and tail) are sometimes called tricoloured. These horses usually have pink skin under white markings and dark skin under non-white areas. Other than colour, it is similar in appearance to the piebald pattern. Some animals also exhibit colouration of the irises of the eye that match the surrounding skin (blue eyes for white skin, brown for dark). The underlying genetic cause is related to a condition known as leucism. The term is also used to describe spotting patterns in various other animals, such as goats. Horses: Terminology: In British equestrian use, skewbald and piebald (black-and-white) are together known as coloured, and the white markings are called patches. In North American equestrian usage, the term for all large-spotted colouring is pinto, and the markings are called spots. The specialized term paint refers specifically to a breed of horse with American Quarter Horse or Thoroughbred bloodlines in addition to being spotted, whereas pinto refers to a spotted horse of any breed. Americans usually describe the colouration of a pinto literally: black-and-white, chestnut-and-white, or bay-and-white. Horses: Genetics: Genetically, a skewbald horse begins most commonly with a chestnut base coat colour (called "red" by geneticists), or some other set of colour genes other than black. The horse then has an allele for one of three basic spotting patterns overlaying the base colour. The most common coloured spotting pattern is called tobiano, and is a dominant gene. Tobiano creates spots that are large and rounded, usually with a somewhat vertical orientation, with white that usually crosses the back of the horse, white on the legs, and a head that is mostly dark. Three less common spotting genes are frame overo, splash overo, and sabino. The frame and splash overo genes create a mostly dark, jagged spotting with a horizontal orientation, white on the head, but dark or minimally marked legs. The sabino pattern can be very minimal, usually adding white that runs up the legs onto the belly or flanks, "lacy" or roaning at the edge of the white, plus white on the head that either extends past the eye, over the chin, or both. The genetics of overo and sabino are not yet fully understood, but they can appear in the offspring of two solid-coloured parents, whereas a tobiano must always have at least one tobiano parent (an illustrative cross is sketched below). Other animals: The term is also applied sometimes to goats and dogs, among other animals. Both piebald and skewbald are used in dog and goat breeding.
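Because tobiano is dominant, a simple Mendelian cross shows why a tobiano foal needs at least one tobiano parent. The following is a minimal textbook-style illustration using hypothetical allele symbols (To for the dominant tobiano allele, to for its absence); it is not taken from the article above.

```latex
% Cross of a heterozygous tobiano (To/to) with a solid, non-tobiano horse (to/to)
\begin{array}{c|cc}
      & To    & to    \\ \hline
to    & To/to & to/to \\
to    & To/to & to/to \\
\end{array}
\qquad\Rightarrow\qquad
P(\text{tobiano foal}) = \tfrac{2}{4} = 50\%
```

A to/to x to/to cross can never produce a To/to foal, which is why a tobiano must have at least one tobiano parent, whereas the incompletely understood overo and sabino patterns can surface from two apparently solid parents.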
**Hair plexus** Hair plexus: A hair plexus or root hair plexus is a special group of nerve fiber endings that serves as a very sensitive mechanoreceptor for touch sensation. Each hair plexus forms a network around a hair follicle and is a receptor, which means it sends and receives nerve impulses to and from the brain when the hair moves. These sensory nerve fiber endings, found in skin with hair, create a plexus around each hair follicle. They are mechanoreceptors conveying touch sensation, specifically crude touch and pressure sensation conveyed through the spinocervical tract, which is located in the posterior part of the lateral funiculus and terminates in the lateral cervical nucleus. The plexus acts as a receptor.
**Cardiopulmonary resuscitation** Cardiopulmonary resuscitation: Cardiopulmonary resuscitation (CPR) is an emergency procedure consisting of chest compressions, often combined with artificial ventilation (mouth-to-mouth), in an effort to manually preserve intact brain function until further measures are taken to restore spontaneous blood circulation and breathing in a person who is in cardiac arrest. It is recommended for those who are unresponsive with no breathing or abnormal breathing, for example, agonal respirations. CPR involves chest compressions for adults between 5 cm (2.0 in) and 6 cm (2.4 in) deep and at a rate of 100 to 120 per minute. The rescuer may also provide artificial ventilation by either exhaling air into the subject's mouth or nose (mouth-to-mouth resuscitation) or using a device that pushes air into the subject's lungs (mechanical ventilation). Current recommendations place emphasis on early and high-quality chest compressions over artificial ventilation; a simplified CPR method involving only chest compressions is recommended for untrained rescuers. With children, however, 2015 American Heart Association guidelines indicate that doing only compressions may actually result in worse outcomes, because such problems in children normally arise from respiratory issues rather than from cardiac ones, given their young age. The chest compression to breathing ratio is set at 30 to 2 in adults. Cardiopulmonary resuscitation: CPR alone is unlikely to restart the heart. Its main purpose is to restore a partial flow of oxygenated blood to the brain and heart. The objective is to delay tissue death and to extend the brief window of opportunity for a successful resuscitation without permanent brain damage. Administration of an electric shock to the subject's heart, termed defibrillation, is usually needed to restore a viable, or "perfusing", heart rhythm. Defibrillation is effective only for certain heart rhythms, namely ventricular fibrillation or pulseless ventricular tachycardia, rather than asystole or pulseless electrical activity, which usually require the treatment of underlying conditions to restore cardiac function. Early shock, when appropriate, is recommended. CPR may succeed in inducing a heart rhythm that may be shockable. In general, CPR is continued until the person has a return of spontaneous circulation (ROSC) or is declared dead. Medical uses: CPR is indicated for any person unresponsive with no breathing or breathing only in occasional agonal gasps, as it is most likely that they are in cardiac arrest.: S643  If a person still has a pulse but is not breathing (respiratory arrest), artificial ventilation may be more appropriate, but, due to the difficulty people have in accurately assessing the presence or absence of a pulse, CPR guidelines recommend that lay persons should not be instructed to check the pulse, while giving healthcare professionals the option to check a pulse. In those with cardiac arrest due to trauma, CPR is considered futile but still recommended. Correcting the underlying cause, such as a tension pneumothorax or pericardial tamponade, may help. Pathophysiology: CPR is used on people in cardiac arrest to oxygenate the blood and maintain a cardiac output to keep vital organs alive. Blood circulation and oxygenation are required to transport oxygen to the tissues. The physiology of CPR involves generating a pressure gradient between the arterial and venous vascular beds; CPR achieves this via multiple mechanisms.
The brain may sustain damage after blood flow has been stopped for about four minutes and irreversible damage after about seven minutes. Typically, if blood flow ceases for one to two hours, body cells die. Therefore, in general, CPR is effective only if performed within seven minutes of the stoppage of blood flow. The heart also rapidly loses the ability to maintain a normal rhythm. Low body temperatures, as sometimes seen in near-drownings, prolong the time the brain survives. Following cardiac arrest, effective CPR enables enough oxygen to reach the brain to delay brain stem death, and allows the heart to remain responsive to defibrillation attempts. Methods: In 2010, the American Heart Association and International Liaison Committee on Resuscitation updated their CPR guidelines.: S640  The importance of high quality CPR (sufficient rate and depth without excessively ventilating) was emphasized.: S640  The order of interventions was changed for all age groups except newborns from airway, breathing, chest compressions (ABC) to chest compressions, airway, breathing (CAB).: S642  An exception to this recommendation is for those believed to be in a respiratory arrest (airway obstruction, drug overdose, etc.).: S642  The most important aspects of CPR are: few interruptions of chest compressions, a sufficient speed and depth of compressions, completely relaxing pressure between compressions, and not ventilating too much. It is unclear if a few minutes of CPR before defibrillation results in different outcomes than immediate defibrillation. Methods: Compressions with rescue breaths: A normal CPR procedure uses chest compressions and ventilations (rescue breaths). However, ventilations may be omitted by untrained rescuers aiding adults who suffer a cardiac arrest. The chest compressions push on the lower half of the bone in the middle of the chest (the sternum), and the rescue breaths are given by pinching the victim's nose and blowing air mouth-to-mouth. If the victim is a baby, the rescuer compresses the chest with only two fingers and gives the ventilations by covering the baby's mouth and nose at the same time with their own mouth. A general compression-to-ventilation ratio of 30:2 is recommended for victims of all ages (a continual cycle of 30 rhythmic chest compressions followed by 2 rescue breaths).: 8  As an exception to the normal compression-to-ventilation ratio of 30:2, if at least two trained rescuers are present and the victim is a child, a ratio of 15:2 is preferred.: 8  And, according to the AHA 2015 Guidelines, the ratio in newborns is 30:2 if one rescuer is present and 15:2 if two rescuers are present.: S647  With an advanced airway in place, such as an endotracheal tube or laryngeal mask airway, artificial ventilation should occur without pauses in compressions, at a rate of 1 breath every 6 to 8 seconds (8–10 ventilations per minute). For all victims, the compression rate is at least 100 compressions per minute.: 8  Recommended compression depth in adults and children is 5 cm (2 inches), and in infants it is 4 cm (1.6 inches).: 8  In adults, rescuers should use two hands for the chest compressions (one on top of the other), while in children one hand can be enough, and with babies the rescuer must use only two fingers. There exist plastic shields and respirator masks that can be placed between the mouths of the rescuer and the victim during rescue breaths, to improve the seal and to avoid infections.
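To see how the rate and ratio figures above fit together in practice, here is a rough, illustrative timing calculation; the pause of a few seconds allowed for the two rescue breaths is an assumption made only for the arithmetic and is not a figure quoted in the guidelines.

```latex
% Approximate duration of one 30:2 cycle at about 110 compressions per minute
t_{\text{compressions}} \approx \frac{30}{110\ \text{min}^{-1}} \approx 0.27\ \text{min} \approx 16\ \text{s}
\qquad
t_{\text{cycle}} \approx 16\ \text{s} + 2 \times 2.5\ \text{s} \approx 21\ \text{s}
\qquad
5\, t_{\text{cycle}} \approx 105\ \text{s} \approx 2\ \text{min}
```

On these rough numbers, about five 30:2 cycles take roughly two minutes, which is about the interval often suggested for rescuers to swap roles so that compression quality does not fall off with fatigue.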
Methods: In some cases, the patient has experienced one of the failures in the rhythm of the heart (ventricular fibrillation or ventricular tachycardia) that can be corrected with the electric shock of a defibrillator. It is then important that someone call for a defibrillator and use it, which is straightforward, because the common models of defibrillator (AEDs) are automatic portable machines that guide the user through the process with recorded voice instructions, analyze the victim, and apply the correct shocks if they are needed. Defibrillators also come with written instructions that explain how to use them step by step. Methods: The recommended order of normal cardiopulmonary resuscitation is 'CAB': first 'Chest' (chest compressions), followed by 'Airway' (attempt to open the airway by performing a head tilt and a chin lift), and 'Breathing' (rescue breaths).: S642  However, as of 2010, the Resuscitation Council (UK) was still recommending an 'ABC' order if the victim is a child. It can be difficult to determine the presence or absence of a pulse, so the pulse check has been removed for common providers and should not be performed for more than 10 seconds by healthcare providers.: 8  Compression only: For adults with cardiac arrest, compression-only (hands-only or cardiocerebral resuscitation) CPR, which involves chest compressions without artificial ventilation, is recommended as the method of choice for the untrained rescuer or those who are not proficient, as it is easier to perform and instructions are easier to give over a phone.: S643 : 8  In adults with out-of-hospital cardiac arrest, compression-only CPR by the lay public has an equal or higher success rate than standard CPR. It is hoped that the use of compression-only delivery will increase the chances of the lay public delivering CPR. Compression-only CPR is not as good for children, who are more likely to have cardiac arrest from respiratory causes. Two reviews have found that compression-only CPR had no more success than no CPR whatsoever.: S646  Rescue breaths for children and especially for babies should be relatively gentle. Either a ratio of compressions to breaths of 30:2 or 15:2 was found to have better results for children. Both children and adults should receive a hundred chest compressions per minute. Other exceptions besides children include cases of drownings and drug overdose. In both these cases, compressions and rescue breaths are recommended if the bystander is trained and is willing to do so. As per the American Heart Association, the beat of the Bee Gees song "Stayin' Alive" provides an ideal rhythm in terms of beats per minute to use for hands-only CPR, which is 104 beats per minute. One can also hum Queen's "Another One Bites the Dust", which is 110 beats per minute and contains a memorable repeating drum pattern. For those in cardiac arrest due to non-heart related causes and in people less than 20 years of age, standard CPR is superior to compression-only CPR. Methods: Prone CPR: Standard CPR is performed with the victim in the supine position. Prone CPR, or reverse CPR, is performed on a victim in the prone position, lying on the chest. This is achieved by turning the head to the side and compressing the back.
Due to the head being turned, the risk of vomiting and complications caused by aspiration pneumonia may be reduced. The American Heart Association's current guideline recommends performing CPR in the supine position, and limits prone CPR to situations where the patient cannot be turned. Methods: Pregnancy: During pregnancy, when a woman is lying on her back, the uterus may compress the inferior vena cava and thus decrease venous return. It is therefore recommended that the uterus be pushed to the woman's left. This can be done by placing a pillow or towel under her right hip so that she is on an angle of 15–30 degrees, and making sure her shoulders are flat to the ground. If this is not effective, healthcare professionals should consider emergency resuscitative hysterotomy. Methods: Family presence: Evidence generally supports family being present during CPR. This includes CPR for children. Methods: Other: Interposed abdominal compressions may be beneficial in the hospital environment. There is no evidence of benefit pre-hospital or in children. Cooling during CPR is being studied, as results are currently unclear on whether or not it improves outcomes. Internal cardiac massage is manual squeezing of the exposed heart itself, carried out through a surgical incision into the chest cavity, usually when the chest is already open for cardiac surgery. Methods: Active compression-decompression methods using mechanical decompression of the chest have not been shown to improve outcome in cardiac arrest. CPR for Heart Attack: Airway: If you have been trained in CPR, after performing the 30 chest compressions you can open the child's airway using the head-tilt chin-lift technique. 1. Place one hand on the child's forehead and gently tilt the head back. 2. With the fingers of the other hand, gently lift the chin forward to open the airway. Use of devices: Defibrillators: Defibrillators produce electric shocks (defibrillation) that can restore the victim's normal heart function. Use of devices: Nevertheless, they are only indicated for some arrhythmias (abnormal heart rhythms), specifically ventricular fibrillation (VF) and pulseless ventricular tachycardia. Defibrillation is not indicated if the patient is conscious or has a normal pulse. Defibrillation is also not indicated if the heart has completely stopped, as in asystole or pulseless electrical activity (PEA); in those cases normal CPR is used to oxygenate the brain until the heart function can be restored. Improperly given electrical shocks can cause dangerous arrhythmias, such as ventricular fibrillation (VF). The standard defibrillation device, designed for rapid use outside medical centres, is the automated external defibrillator (AED), a portable machine of small size (similar to a briefcase) that can be used by any user with no previous training. The machine plays recorded voice instructions that guide the user through the defibrillation process. It also checks the victim's condition and automatically applies electric shocks at the correct level if they are needed. Other models are semi-automatic and require the user to push a button before an electric shock is produced. Use of devices: The defibrillation process is simple, and defibrillators come with written instructions that explain it step by step.
There are several devices for improving CPR, but only defibrillators (as of 2010) have been found better than standard CPR for an out-of-hospital cardiac arrest. Devices for timing CPR: Timing devices can feature a metronome (an item carried by many ambulance crews) to assist the rescuer in achieving the correct rate. Some units can also give timing reminders for performing compressions, ventilating and changing operators. Use of devices: Devices for assisting in manual CPR: Mechanical chest compression devices have not been found to be better than standard manual compressions. Their use is reasonable in situations where manual compressions are not safe to perform, such as in a moving vehicle. Audible and visual prompting may improve the quality of CPR and prevent the decrease of compression rate and depth that naturally occurs with fatigue, and to address this potential improvement, a number of devices have been developed to help improve CPR technique. Use of devices: These items can be devices to be placed on top of the chest, with the rescuer's hands going over the device and a display or audio feedback giving information on depth, force or rate, or they can come in a wearable format such as a glove. Several published evaluations show that these devices can improve the performance of chest compressions. As well as their use during actual CPR on a cardiac arrest victim, which relies on the rescuer carrying the device with them, these devices can also be used as part of training programs to improve basic skills in performing correct chest compressions. Use of devices: Devices for providing automatic CPR: Mechanical CPR has not seen as much use as mechanical ventilation; however, use in the prehospital setting is increasing. Devices on the market include the LUCAS device, developed at the University Hospital of Lund, and AutoPulse. Both use straps around the chest to secure the patient. The first generation of the LUCAS uses a gas-driven piston and motor-driven constricting band, while later versions are battery operated. There are several advantages to automated devices: they allow rescuers to focus on performing other interventions; they do not fatigue and begin to perform less effective compressions, as humans do; they are able to perform effective compressions in limited-space environments such as air ambulances, where manual compressions are difficult; and they allow ambulance workers to be strapped in safely rather than standing over a patient in a speeding vehicle. However, the disadvantages are cost to purchase, time to train emergency personnel to use them, interruption to CPR to implement, potential for incorrect application and the need for multiple device sizes. Several studies have shown little or no improvement in survival rates but acknowledge the need for more study. Use of devices: Mobile apps for providing CPR instructions: To support training and incident management, mobile apps have been published on the largest app markets. An evaluation of 61 available apps has revealed that a large number do not follow international guidelines for basic life support and many apps are not designed in a user-friendly way. As a result, the Red Cross updated and endorsed its emergency preparedness application, which uses pictures, text and videos to assist the user. Use of devices: The UK Resuscitation Council has an app, called Lifesaver, which shows how to perform CPR. Effectiveness: CPR oxygenates the body and brain, which favours later defibrillation and advanced life support.
Even in the case of a "non-shockable" rhythm, such as pulseless electrical activity (PEA) where defibrillation is not indicated, effective CPR is no less important. Used alone, CPR will result in few complete recoveries, though the outcome without CPR is almost uniformly fatal. Studies have shown that immediate CPR followed by defibrillation within 3–5 minutes of sudden VF cardiac arrest dramatically improves survival. In cities such as Seattle, where CPR training is widespread and defibrillation by EMS personnel follows quickly, the survival rate is about 20 percent for all causes and as high as 57 percent for a witnessed "shockable" arrest. In cities such as New York, without those advantages, the survival rate is only 5 percent for witnessed shockable arrest. Similarly, in-hospital CPR is more successful when arrests are witnessed or are in the ICU or in patients wearing heart monitors, where the arrests are noticed immediately, as shown in the table and graph later in this article. Effectiveness: * AED data here exclude health facilities and nursing homes, where patients are sicker than average. Effectiveness: In adults, compression-only CPR by bystanders appears to be better than chest compressions with rescue breathing. Compression-only CPR may be less effective in children than in adults, as cardiac arrest in children is more likely to have a non-cardiac cause. In a 2010 prospective study of cardiac arrest in children (age 1–17), for arrests with a non-cardiac cause, provision by bystanders of conventional CPR with rescue breathing yielded a favorable neurological outcome at one month more often than did compression-only CPR (OR 5.54). For arrests with a cardiac cause in this cohort, there was no difference between the two techniques (OR 1.20). This is consistent with American Heart Association guidelines for parents. When done by trained responders, 30 compressions interrupted by two breaths appears to have a slightly better result than continuous chest compressions with breaths being delivered while compressions are ongoing. Measurement of end-tidal carbon dioxide during CPR reflects cardiac output and can predict chances of ROSC. In a study of in-hospital CPR from 2000 to 2008, 59% of CPR survivors lived over a year after hospital discharge and 44% lived over 3 years. Consequences: Performing CPR is advised as an urgent intervention for a person who is not breathing and who would therefore certainly die without it. Survival rates: In US hospitals in 2017, 26% of patients who received CPR survived to hospital discharge.: e381, e390  In 2017 in the US, outside hospitals, 16% of people whose cardiac arrest was witnessed survived to hospital discharge. Since 2003, widespread cooling of patients after CPR and other improvements have raised survival and reduced mental disabilities. Consequences: Organ donation: Organ donation is usually made possible by CPR, even if CPR does not save the patient. If there is a return of spontaneous circulation (ROSC), all organs can be considered for donation. If the patient does not achieve ROSC, and CPR continues until an operating room is available, the kidneys and liver can still be considered for donation. Consequences: 1,000 organs per year in the US are transplanted from patients who had CPR. Donations can be taken from 40% of patients who have ROSC and later become brain dead. Up to 8 organs can be taken from each donor, and an average of 3 organs are taken from each patient who donates organs.
Consequences: Mental abilities: Mental abilities are about the same for survivors before and after CPR for 89% of patients, based on before and after counts of 12,500 US patients' Cerebral-Performance Category (CPC) codes in a 2000–2009 study of CPR in hospitals. 1% more survivors were in comas than before CPR. 5% more needed help with daily activities. 5% more had moderate mental problems and could still be independent. For CPR outside hospitals, a Copenhagen study of 2,504 patients in 2007–2011 found 21% of survivors developed moderate mental problems but could still be independent, and 11% of survivors developed severe mental problems, so they needed daily help. Two patients out of 2,504 went into comas (0.1% of patients, or 2 out of 419 survivors, 0.5%), and the study did not track how long the comas lasted. Most people in comas start to recover in 2–3 weeks. 2018 guidelines on disorders of consciousness say it is no longer appropriate to use the term "permanent vegetative state." Mental abilities can continue to improve in the six months after discharge, and in subsequent years. For long-term problems, brains form new paths to replace damaged areas. Consequences: Injuries: Injuries from CPR vary. 87% of patients are not injured by CPR. Overall, injuries are caused in 13% (2009–12 data) of patients, including broken sternum or ribs (9%), lung injuries (3%), and internal bleeding (3%). Consequences: The internal injuries counted here can include heart contusion, hemopericardium, upper airway complications, damage to the abdominal viscera (lacerations of the liver and spleen), fat emboli, and pulmonary complications (pneumothorax, hemothorax, lung contusions). Most injuries did not affect care; only 1% of those given CPR received life-threatening injuries from it. Broken ribs are present in 3% of those who survive to hospital discharge, and 15% of those who die in the hospital, for an average rate of 9% (2009–12 data) to 8% (1997–99). Consequences: In the 2009–12 study, 20% of survivors were older than 75. A study in the 1990s found 55% of CPR patients who died before discharge had broken ribs, and a study in the 1960s found 97% did; training and experience levels have improved. Lung injuries were caused in 3% of patients and other internal bleeding in 3% (2009–12). Consequences: Bones heal in 1–2 months. The costal cartilage also breaks in an unknown number of additional cases, which can sound like breaking bones. The type and frequency of injury can be affected by factors such as sex and age. A 1999 Austrian study of CPR on cadavers, using a machine which alternately compressed the chest then pulled it outward, found a higher rate of sternal fractures in female cadavers (9 of 17) than male (2 of 20), and found the risk of rib fractures rose with age, though they did not say how much. Consequences: Children and infants have a low risk of rib fractures during CPR, with an incidence less than 2%, although, when they do occur, they are usually anterior and multiple. Where CPR is performed in error by a bystander, on a person not in cardiac arrest, around 2% have injury as a result (although 12% experienced discomfort). A 2004 overview said, "Chest injury is a price worth paying to achieve optimal efficacy of chest compressions. Cautious or faint-hearted chest compression may save bones in the individual case but not the patient's life." Other side effects: The most common side effect is vomiting, which necessitates clearing the mouth so patients do not breathe it in.
Consequences: It happened in 16 of 35 CPR efforts in a 1989 study in King County, WA, USA. Consequences: Survival differences, based on prior illness, age or location: The American Heart Association guidelines say that survival rates below 1% are "futility," but all groups have better survival than that. Even among very sick patients at least 10% survive: a study of CPR in a sample of US hospitals from 2001 to 2010, where overall survival was 19%, found 10% survival among cancer patients, 12% among dialysis patients, 14% over age 80, 15% among blacks, 17% for patients who lived in nursing homes, 19% for patients with heart failure, and 25% for patients with heart monitoring outside the ICU. Consequences: Another study, of advanced cancer patients, found the same 10% survival mentioned above. Consequences: A study of Swedish patients in 2007–2015 with ECG monitors found 40% survived at least 30 days after CPR at ages 70–79, 29% at ages 80–89, and 27% above age 90. An earlier study of Medicare patients in hospitals 1992–2005, where overall survival was 18%, found 13% survival in the poorest neighborhoods, 12% survival over age 90, 15% survival among ages 85–89, and 17% survival among ages 80–84. Consequences: Swedish patients 90 years or older had 15% survival to hospital discharge, 80–89 had 20%, and 70–79 had 28%. A study of King County, WA patients who had CPR outside hospitals in 1999–2003, where 34% survived to hospital discharge overall, found that among patients with 4 or more major medical conditions, 18% survived; with 3 major conditions, 24% survived; and 33% of those with 2 major medical conditions survived. Nursing home residents' survival has been studied by several authors, and is measured annually by the Cardiac Arrest Registry to Enhance Survival (CARES). CARES reports CPR results from a catchment area of 115 million people, including 23 state-wide registries, and individual communities in 18 other states as of 2019. CARES data show that in health care facilities and nursing homes where AEDs are available and used, survival rates are double the average survival found in nursing homes overall. Geographically, there is wide variation state-to-state in survival after CPR in US hospitals, from 40% in Wyoming to 20% in New York, so there is room for good practices to spread, raising the averages. Consequences: For CPR outside hospitals, survival varies even more across the US, from 3% in Omaha to 45% in Seattle in 2001. This study only counted heart rhythms which can respond to defibrillator shocks (ventricular fibrillation or tachycardia). Consequences: A major reason for the variation has been delay in some areas between the call to emergency services and the departure of medics, and then arrival and treatment. Delays were caused by lack of monitoring and by a staffing mismatch: people are recruited as firefighters even though most of the emergency calls they are assigned to are medical, so staff resisted and delayed on the medical calls. Building codes have cut the number of fires, but staff still think of themselves as firefighters. Consequences: Dysthanasia: In some instances CPR can be considered a form of dysthanasia. Prevalence: Chance of receiving CPR: Various studies show that in out-of-home cardiac arrest, bystanders in the US attempt CPR between 14% and 45% of the time, with a median of 32%. Globally, rates of bystander CPR are reported to be as low as 1% and as high as 44%. However, the effectiveness of this CPR is variable, and the studies suggest only around half of bystander CPR is performed correctly.
One study found that members of the public who had received CPR training in the past often lack the skills and confidence needed to save lives. The report's authors suggested that better training is needed to improve the willingness to respond to cardiac arrest. Factors that influence bystander CPR in out-of-hospital cardiac arrest include: affordable training. Prevalence: Target CPR training to family members of potential cardiac arrest victims. CPR classes should be simplified and shortened. Offer reassurance and education about CPR. Provide clearer information about legal implications for specific regions. Prevalence: Focus on reducing the stigma and fears around providing bystander CPR. There is a relation between age and the chance of CPR being commenced. Younger people are far more likely to have CPR attempted on them before the arrival of emergency medical services. Bystanders more commonly administer CPR when in public than when at the person's home, although health care professionals are responsible for more than half of out-of-hospital resuscitation attempts. People with no connection to the person are more likely to perform CPR than are members of their family. There is also a clear relation between the cause of arrest and the likelihood of a bystander initiating CPR. Laypersons are most likely to give CPR to younger people in cardiac arrest in a public place when it has a medical cause; those in arrest from trauma, exsanguination or intoxication are less likely to receive CPR. It is believed that there is a higher chance that CPR will be performed if the bystander is told to perform only the chest compression element of the resuscitation. The first formal study into gender bias in receiving CPR from the public versus professionals was conducted by the American Heart Association and the National Institutes of Health (NIH), and examined nearly 20,000 cases across the U.S. The study found that women are six percent less likely than men to receive bystander CPR when in cardiac arrest in a public place, citing the disparity as "likely due to the fear of being falsely accused of sexual assault." Chance of receiving CPR in time: CPR is likely to be effective only if commenced within 6 minutes after blood flow stops, because permanent brain cell damage occurs when fresh blood infuses the cells after that time; the cells of the brain become dormant in as little as 4–6 minutes in an oxygen-deprived environment and therefore cannot survive the reintroduction of oxygen in a traditional resuscitation. Research using cardioplegic blood infusion resulted in a 79.4% survival rate with cardiac arrest intervals of 72±43 minutes; by comparison, traditional methods achieve a 15% survival rate in this scenario. Further research is needed to determine what role CPR, defibrillation, and new advanced gradual resuscitation techniques will have with this new knowledge. A notable exception is cardiac arrest that occurs in conjunction with exposure to very cold temperatures. Hypothermia seems to protect by slowing down metabolic and physiologic processes, greatly decreasing the tissues' need for oxygen. There are cases where CPR, defibrillation, and advanced warming techniques have revived victims after substantial periods of hypothermia.
Society and culture: Portrayed effectiveness: CPR is often severely misrepresented in movies and television as being highly effective in resuscitating a person who is not breathing and has no circulation. A 1996 study published in the New England Journal of Medicine showed that CPR success rates in television shows were 75% for immediate circulation, and 67% survival to discharge. This gives the general public an unrealistic expectation of a successful outcome. When educated on the actual survival rates, the proportion of patients over 60 years of age desiring CPR should they have a cardiac arrest drops from 41% to 22%. Society and culture: Training and stage CPR: It is dangerous to perform CPR on a person who is breathing normally. These chest compressions create significant local blunt trauma, risking bruising or fracture of the sternum or ribs. If a patient is not breathing, these risks still exist but are dwarfed by the immediate threat to life. For this reason, training is always done with a mannequin, such as the well-known Resusci Anne model. The portrayal of CPR technique on television and film often is purposely incorrect. Actors simulating the performance of CPR may bend their elbows while appearing to compress, to prevent force from reaching the chest of the actor portraying the patient. Society and culture: Self-CPR hoax: A form of "self-CPR" termed "cough CPR" was the subject of a hoax chain e-mail entitled "How to Survive a Heart Attack When Alone," which wrongly cited "ViaHealth Rochester General Hospital" as the source of the technique. Rochester General Hospital has denied any connection with the technique. "Cough CPR" in the sense of resuscitating oneself is impossible because a prominent symptom of cardiac arrest is unconsciousness, which makes coughing impossible. The American Heart Association (AHA) and other resuscitation bodies do not endorse "cough CPR", which the AHA terms a misnomer as it is not a form of resuscitation. The AHA does recognize a limited legitimate use of the coughing technique: "This coughing technique to maintain blood flow during brief arrhythmias has been useful in the hospital, particularly during cardiac catheterization. In such cases the patient's ECG is monitored continuously, and a physician is present." When coughing is used on trained and monitored patients in hospitals, it has been shown to be effective only for 90 seconds. Society and culture: Learning from film: In at least one case, it has been alleged that CPR learned from a film was used to save a person's life. In April 2011, it was claimed that nine-year-old Tristin Saghin saved his sister's life by administering CPR to her after she fell into a swimming pool, using only the knowledge of CPR that he had gleaned from a motion picture, Black Hawk Down. Society and culture: Hands-only CPR portrayal: Less than 1/3 of those people who experience a cardiac arrest at home, work or in a public location have CPR performed on them. Most bystanders are worried that they might do something wrong. On October 28, 2009, the American Heart Association and the Ad Council launched a hands-only CPR public service announcement and website as a means to address this issue. In July 2011, new content was added to the website, including a digital app that helps a user learn how to perform hands-only CPR. History: In the 19th century, Doctor H. R.
Silvester described a method (the Silvester method) of artificial ventilation in which the patient is laid on their back, and their arms are raised above their head to aid inhalation and then pressed against their chest to aid exhalation. The Holger Nielsen technique of artificial respiration, introduced by the Dane Holger Nielsen in the early 20th century, was another manual method: the patient was placed face down (prone) with the head resting on the hands, and the rescuer knelt at the patient's head, alternately pressing on the upper back to force exhalation and lifting the patient's arms at the elbows to aid inhalation. It was not until the middle of the 20th century that the wider medical community started to recognize and promote artificial ventilation in the form of mouth-to-mouth resuscitation combined with chest compressions as a key part of resuscitation following cardiac arrest. The combination was first seen in a 1962 training video called "The Pulse of Life" created by James Jude, Guy Knickerbocker, and Peter Safar. Jude and Knickerbocker, along with William Kouwenhoven and Joseph S. Redding, had recently discovered the method of external chest compressions, whereas Safar had worked with Redding and James Elam to prove the effectiveness of mouth-to-mouth resuscitation. The first effort at testing the technique was performed on a dog by Redding, Safar and JW Pearson. Soon afterward, the technique was used to save the life of a child. Their combined findings were presented at the annual Maryland Medical Society meeting on September 16, 1960, in Ocean City, and gained widespread acceptance over the following decade, helped by the video and speaking tour they undertook. Peter Safar wrote the book ABC of Resuscitation in 1957. In the U.S., it was first promoted as a technique for the public to learn in the 1970s. Mouth-to-mouth resuscitation was combined with chest compressions based on the assumption that active ventilation is necessary to keep circulating blood oxygenated, and the combination was accepted without comparing its effectiveness with chest compressions alone. However, research in the 2000s demonstrated that assumption to be in error, resulting in the American Heart Association's acknowledgment of the effectiveness of chest compressions alone (see Compression only in this article). CPR methods continued to advance, with developments in the 2010s including an emphasis on constant, rapid heart stimulation and a de-emphasis on the respiration aspect. Studies have shown that people who had rapid, constant compression-only CPR are 22% more likely to survive than those receiving conventional CPR that included breathing. Because people tend to be reluctant to do mouth-to-mouth resuscitation, chest-only CPR nearly doubles the chances of survival overall, by increasing the odds of receiving CPR in the first place. On animals: It is feasible to perform CPR on animals, including cats and dogs. The principles and practices are similar to CPR for humans, except that resuscitation is usually done through the animal's nose, not the mouth.
CPR should only be performed on unconscious animals to avoid the risk of being bitten; a conscious animal would not require chest compressions. Animals, depending on species, may have a lower bone density than humans, and so CPR can cause bones to become weakened after it is performed. Research: Cerebral performance category (CPC) scores are used as a research tool to describe "good" and "poor" outcomes. Level 1 is conscious and alert with normal function. Level 2 is only slight disability. Level 3 is moderate disability. Level 4 is severe disability. Level 5 is comatose or persistent vegetative state. Level 6 is brain dead or death from other causes.
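As a concrete illustration of how the CPC scale might be handled in an outcomes dataset, here is a minimal Python sketch; the grouping of CPC 1–2 as "good" and 3–5 as "poor" is a convention commonly used in resuscitation research and is assumed here, not stated in the text above.

```python
# Minimal sketch of the Cerebral Performance Category (CPC) scale as data.
# Assumption: CPC 1-2 counted as "good" outcome, 3-5 as "poor" (a common research
# convention; the article text above does not specify the cutoff).
CPC_DESCRIPTIONS = {
    1: "Conscious and alert with normal function",
    2: "Only slight disability",
    3: "Moderate disability",
    4: "Severe disability",
    5: "Comatose or persistent vegetative state",
    6: "Brain dead or death from other causes",
}

def outcome_label(cpc: int) -> str:
    """Classify a CPC code as 'good', 'poor', or 'dead/brain dead'."""
    if cpc in (1, 2):
        return "good"
    if cpc in (3, 4, 5):
        return "poor"
    if cpc == 6:
        return "dead/brain dead"
    raise ValueError(f"Unknown CPC code: {cpc}")

if __name__ == "__main__":
    for code, description in CPC_DESCRIPTIONS.items():
        print(code, outcome_label(code), "-", description)
```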
**Carbon nanofiber** Carbon nanofiber: Carbon nanofibers (CNFs), vapor grown carbon fibers (VGCFs), or vapor grown carbon nanofibers (VGCNFs) are cylindrical nanostructures with graphene layers arranged as stacked cones, cups or plates. Carbon nanofibers with graphene layers wrapped into perfect cylinders are called carbon nanotubes. Introduction: Carbon has a high level of chemical bonding flexibility, which lends itself to the formation of a number of stable organic and inorganic molecules. Elemental carbon has a number of allotropes (variants) including diamond, graphite, and fullerenes. Though they all consist of elemental carbon, their properties vary widely. This underscores the versatility of CNFs, which are notable for their thermal, electrical, electromagnetic shielding, and mechanical property enhancements. As carbon is readily available at low cost, CNFs are popular additives to composite materials. CNFs are very small, existing at the nanometer scale. An atom is between 0.1 and 0.5 nm across, so specialized microscopic techniques such as scanning tunneling microscopy and atomic force microscopy are required to examine the properties of CNFs. Synthesis: Catalytic chemical vapor deposition (CCVD), or simply CVD, with variants such as thermal and plasma-assisted, is the dominant commercial technique for the fabrication of VGCF and VGCNF. Here, gas-phase molecules are decomposed at high temperatures and carbon is deposited in the presence of a transition metal catalyst on a substrate, where subsequent growth of the fiber around the catalyst particles is realized. In general, this process involves separate stages such as gas decomposition, carbon deposition, fiber growth, fiber thickening, graphitization, and purification, and results in hollow fibers. The nanofiber diameter depends on the catalyst size. The CVD process for the fabrication of VGCF generally falls into two categories: 1) fixed-catalyst process (batch), and 2) floating-catalyst process (continuous). Synthesis: In the batch process developed by Tibbetts, a mixture of hydrocarbon/hydrogen/helium was passed over a mullite (crystalline aluminum silicate) substrate with fine iron catalyst particle deposits, maintained at 1000 °C. The hydrocarbon used was methane at a concentration of 15% by volume. Fiber growth of several centimeters was achieved in just 10 minutes with a gas residence time of 20 seconds. In general, fiber length can be controlled by the gas residence time in the reactor. Gravity and the direction of the gas flow typically affect the direction of the fiber growth. The continuous or floating-catalyst process was patented earlier by Koyama and Endo and was later modified by Hatano and coworkers. This process typically yields VGCF with sub-micrometre diameters and lengths of a few to 100 µm, which accords with the definition of carbon nanofibers. They utilized organometallic compounds dissolved in a volatile solvent like benzene that would yield a mixture of ultrafine catalyst particles (5–25 nm in diameter) in hydrocarbon gas as the temperature rose to 1100 °C. In the furnace, the fiber growth initiates on the surface of the catalyst particles and continues until catalyst poisoning occurs by impurities in the system. In the fiber growth mechanism described by Baker and coworkers, only the part of the catalyst particle exposed to the gas mixture contributes to the fiber growth, and the growth stops as soon as the exposed part is covered, i.e. the catalyst is poisoned.
The catalyst particle remains buried in the growth tip of the fiber at a final concentration of about a few parts per million. At this stage, fiber thickening takes place. The most commonly used catalyst is iron, often treated with sulfur, hydrogen sulfide, etc. to lower the melting point and facilitate its penetration into the pores of carbon and hence to produce more growth sites. Fe/Ni, Ni, Co, Mn, Cu, V, Cr, Mo, Pd, MgO, and Al2O3 are also used as catalysts. Acetylene, ethylene, methane, natural gas, and benzene are the most commonly used carbonaceous gases. Often carbon monoxide (CO) is introduced into the gas flow to increase the carbon yield through reduction of possible iron oxides in the system. In 2017, a research group at Tsinghua University reported the epitaxial growth of aligned, continuous, catalyst-free carbon nanofiber from a carbon nanotube template. The fabrication process includes thickening of continuous carbon nanotube films by gas-phase pyrolytic carbon deposition and further graphitization of the carbon layer by high-temperature treatment. Due to the epitaxial growth mechanism, the fiber features superior properties including low density, high mechanical strength, high electrical conductivity, and high thermal conductivity. Safety: The Occupational Safety and Health Act (United States) (1970) was a driving force behind many of the changes made regarding safety in the workplace over the last few decades. One small group of the numerous substances to be regulated by this act is carbon nanofibers (CNF). While still an active area of research, there have been studies conducted that indicate health risks associated with carbon nanotubes (CNT) and CNF that pose greater hazards than their bulk counterparts. One of the primary hazards of concern associated with CNT and CNF is respiratory damage such as pulmonary inflammation, granuloma, and fibrosis. It is important to note, however, that these findings were observed in mice, and that it is currently unknown whether the same effects would be observed in humans. Nonetheless, these studies have given cause for an attempt to minimize exposure to these nanoparticles. A separate study conducted prior to the 2013 annual Society of Toxicology meeting aimed to identify potential carcinogenic effects associated with multi-walled carbon nanotubes (MWCNT). The findings indicated that, in the presence of an initiator chemical, the MWCNTs caused a much greater incidence of tumors in mice. There was no indication of increased presence of tumors in the absence of the initiator chemical, however. Further studies are needed for this scenario. One of the major hurdles in identifying hazards associated with CNF is the diversity of fibers that exist. Some of the contributing factors to this diversity include shape, size, and chemical composition. One exposure standard (2015) states that the acceptable limit for CNT and CNF exposure is 1 μg/m3 of respirable size fraction elemental carbon (8-hour time-weighted average; an illustrative calculation of such an average is sketched below). This standard was based on information gathered from 14 sites whose samples were analyzed by transmission electron microscopy (TEM). A recent safety data sheet (SDS) for CNF (revised in 2016) lists the nanofibers as an eye irritant, and states that they cause single-exposure respiratory system organ toxicity. Smaller CNF possess a greater potential for forming dust clouds when handled. As such, great care must be taken when handling CNF.
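For context on the exposure limit quoted above, an 8-hour time-weighted average (TWA) is computed by weighting each measured concentration by the time spent at that concentration and dividing by the 8-hour shift. The concentrations and durations in the worked example below are hypothetical and are not taken from the cited standard.

```latex
% General 8-hour time-weighted average
\mathrm{TWA} = \frac{\sum_i C_i\, t_i}{8\ \text{h}}
\qquad
% Hypothetical example: 2 h at 2 \mu g/m^3 and 6 h at 0.5 \mu g/m^3
\mathrm{TWA} = \frac{(2)(2) + (0.5)(6)}{8}\ \mu\text{g/m}^3 = \frac{7}{8}\ \mu\text{g/m}^3 \approx 0.9\ \mu\text{g/m}^3
```

In this hypothetical case the day's TWA of roughly 0.9 μg/m3 would fall just under the 1 μg/m3 limit mentioned above, even though the instantaneous concentration exceeded that limit for part of the shift.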
The recommended personal protective equipment (PPE) for handling CNF includes nitrile gloves, particle respirators, and nanomaterial-impervious clothing (dependent on workplace conditions). In addition to exposure controls while working with CNF, safe storage conditions are also important in minimizing the risks associated with CNF. Safe CNF storage entails keeping the fibers away from oxidizing agents and open flames. Under fire conditions, CNF form hazardous decomposition products, though the exact nature of these decomposition products is not currently known. Apart from carcinogenicity and organ toxicity, toxicological data for CNF is currently rather limited. Applications: Researchers are using nanofibers to deliver therapeutic drugs. They have developed an elastic material that is embedded with needle-like carbon nanofibers. The material is intended to be used as balloons which are inserted next to diseased tissue and then inflated. When the balloon is inflated, the carbon nanofibers penetrate diseased cells and deliver therapeutic drugs. Researchers at MIT have used carbon nanofibers to make lithium ion battery electrodes that show four times the storage capacity of current lithium ion batteries. Researchers are using nanofibers to make sensors that change color as they absorb chemical vapors. They plan to use these sensors to show when the absorbing material in a gas mask becomes saturated. Applications: The unique structure of these porous carbon nanofibers resulted in good electrochemical performance, such as high reversible capacity and good cycle stability, when they were used as anodes for rechargeable lithium-ion batteries. Further market development will depend on material availability at reasonable prices. Bulk production of high-purity carbon nanofibers (CNFs) at low cost has been achieved by a catalytic chemical vapor deposition (CCVD) process. Unlike catalytic synthesis, electrospinning polyacrylonitrile (PAN) followed by stabilization and carbonization has become a straightforward and convenient route to make continuous carbon nanofibers. Applications: Field electron emission sources Field electron emission (also known as field emission (FE) and electron field emission) is emission of electrons induced by an electrostatic field. The most common context is field emission from a solid surface into vacuum. However, field emission can take place from solid or liquid surfaces, into vacuum, air, a fluid, or any non-conducting or weakly conducting dielectric. The field-induced promotion of electrons from the valence to the conduction band of semiconductors (the Zener effect) can also be regarded as a form of field emission. Applications: Composite materials Scanning probe microscopy tips Scanning probe microscopy (SPM) is a branch of microscopy that forms images of surfaces using a physical probe that scans the specimen. Applications: Carrier material for various catalysts in petrochemistry In vertically-aligned arrays, a platform for gene delivery. (See Impalefection) Impalefection is a method of gene delivery using nanomaterials, such as carbon nanofibers, carbon nanotubes, or nanowires. Needle-like nanostructures are synthesized perpendicular to the surface of a substrate. Plasmid DNA containing the gene intended for intracellular delivery is attached to the nanostructure surface. A chip with arrays of these needles is then pressed against cells or tissue. Cells that are impaled by nanostructures can express the delivered gene(s). 
Applications: For electrode materials Oil spill remediation Oil spill remediation: The process for the manufacture of a carbon-carbon composite material comprises the steps of treating a carbonaceous carrier material with a metal-containing catalyst material, where the metal is capable of forming nanosize carbon structures, and growing nanosize carbon structures by means of a chemical vapor deposition method on the treated carrier in a gas atmosphere comprising a carbon-containing gas, followed by an optional surface modification step. This process allows porosity, hydrodynamical properties and surface chemistry to be optimized independently of each other, which is particularly beneficial in respect of the use of the composite for water purification. Carbon black-based composites are particularly useful for filler applications. History: One of the first technical records concerning carbon nanofibers is probably a patent dated 1889 on the synthesis of filamentous carbon by Hughes and Chambers. They utilized a methane/hydrogen gaseous mixture and grew carbon filaments through gas pyrolysis and subsequent carbon deposition and filament growth. The true appreciation of these fibers, however, came much later when their structure could be analyzed by electron microscopy. The first electron microscopy observations of carbon nanofibers were performed in the early 1950s by the Soviet scientists Radushkevich and Lukyanovich, who published a paper in the Soviet Journal of Physical Chemistry showing hollow graphitic carbon fibers that are 50 nanometers in diameter. Early in the 1970s, the Japanese researcher Morinobu Endo, now the director of the Institute of Carbon Science and Technology at Shinshu University, reported the discovery of carbon nanofibers, including that some were shaped as hollow tubes. He also succeeded in manufacturing VGCF with a diameter of 1 µm and a length of above 1 mm. Later, in the early 1980s, Tibbetts in the USA and Benissad in France continued to perfect the VGCF fabrication process. In the USA, deeper studies focusing on the synthesis and properties of these materials for advanced applications were led by R. Terry K. Baker. They were motivated by the need to inhibit the growth of carbon nanofibers because of the persistent problems caused by accumulation of the material in a variety of commercial processes, especially in the particular field of petroleum processing. In 1991, the Japanese researcher Sumio Iijima, while working at NEC, synthesized hollow carbon molecules and determined their crystal structure. The following year, these molecules were called "carbon nanotubes" for the first time. VGCNF is produced through essentially the same manufacturing process as VGCF, only the diameter is typically less than 200 nm. Several companies around the globe are actively involved in the commercial scale production of carbon nanofibers, and new engineering applications are being developed for these materials intensively, the latest being a carbon nanofiber-bearing porous composite for oil spill remediation.
**RapidSMS** RapidSMS: RapidSMS is a web framework based on the Django web framework which extends the logic and capabilities of Django to communicate with SMS messages. Initial development was done by UNICEF's Innovation Unit for use in mobile data collection and polls. A side effect of the work was pygsm, a Python library for interacting with GSM modems, including cell phones which handle the Hayes command set. The software has been deployed in numerous countries, including Senegal, Mauritania, Uganda, Somalia, Zambia, Kenya, Nigeria, Malawi, and Ethiopia. Awards: Columbia University and UNICEF won the 2008 USAID Development 2.0 Challenge for their work with RapidSMS in Malawi. In 2009, UNICEF won the Gov2.0 Summit Award in the 'Government as a provider' category for their work with RapidSMS in Malawi. Frog Design won two IDSA IDEA Awards (Gold in the Social Impact Design category and Silver in the Design Strategy category) at the 2012 International Design Excellence Awards for their work with UNICEF on Project Mwana. In 2010, Matt Berg was chosen by Time Magazine as one of the 100 most influential people of the year for his work with RapidSMS and ChildCount. In 2013, Christopher Fabian and Erica Kochi were selected by Time Magazine to be on the Time 100 list of the 100 most influential people in the world for their work with RapidSMS at UNICEF. Projects: RapidSMS is the basis for a few notable projects: mTrac, a disease surveillance and drug tracking system developed by UNICEF and the World Health Organization in Uganda, is one of only a handful of mHealth projects being scaled up nationally. In August 2012, it was featured in "The Wireless Issue" of Time Magazine. U-Report, one of the largest SMS social networks of community crowd-sourced volunteer reporters in the world, with approximately 200,000 registered users in Uganda as of April 2013, reporting on development issues and engaging directly with national and local government through the platform. Birth Registration, a system developed by UNICEF and Timba Objects with RapidSMS that is used for birth registration nationwide in Nigeria. Projects: Project Mwana uses RapidSMS to improve early infant diagnosis of HIV and post-natal follow-up and care. Project Mwana was developed by UNICEF and Frog Design and has been deployed in Zambia and Malawi. Project Mwana won two IDSA IDEA Awards (Gold in the Social Impact Design category and Silver in the Design Strategy category) at the 2012 International Design Excellence Awards. Projects: RapidSMS MCH is a system for monitoring pregnancy and reducing bottlenecks in communication associated with maternal and newborn deaths in Rwanda. The project was developed by UNICEF and Pivot Access. ChildCount, developed by the Millennium Villages Project and deployed in Kenya. Matt Berg was chosen by Time Magazine as one of the 100 most influential people of the year for his work with RapidSMS and ChildCount. RapidAndroid, developed by UNICEF and Dimagi, is a port of RapidSMS to the Android operating system. Jokko, developed by UNICEF, Dimagi, and Tostan to help teach literacy in West Africa. Textonic, developed by students in Clay Shirky's 2009 Design For UNICEF course at New York University's Interactive Telecommunications Program to extend RapidSMS to use Amazon.com's Amazon Mechanical Turk service.
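To give a flavour of the SMS-handling logic RapidSMS layers on top of Django, below is a minimal, hedged sketch using the keyword-handler pattern from the framework's contrib handlers. The import path and method names follow one common RapidSMS layout and may differ between releases; the keyword and replies are invented purely for illustration.

```python
# Hedged sketch of a RapidSMS keyword handler. The import path and the
# help()/handle()/respond() API are assumptions based on the contrib
# handlers and may vary between RapidSMS versions.
from rapidsms.contrib.handlers import KeywordHandler


class PingHandler(KeywordHandler):
    """Replies to incoming SMS messages that start with the keyword 'ping'."""

    keyword = "ping"  # hypothetical keyword chosen for illustration

    def help(self):
        # Called when the keyword arrives with no further text.
        self.respond("Send PING to check that the system is up.")

    def handle(self, text):
        # Called when the keyword arrives followed by additional text.
        self.respond("pong")
```

In a deployment, such a handler would be registered through the project's Django settings so that messages arriving from a configured backend (for example, a GSM modem driven via pygsm) are routed to it.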
**Mediated quasi-interaction** Mediated quasi-interaction: Mediated quasi-interaction is a concept in communication science that describes a monological interaction between people which is oriented towards an indefinite range of potential recipients. It involves a fundamental asymmetry between producers and receivers. Examples of mediated quasi-interaction include television, radio, newspapers and other forms of mass media. History: The concept was developed by sociologist John Brookshire Thompson, a professor at the University of Cambridge and a fellow of Jesus College. The concept was first documented in his book “The Rise of Mediated Interaction”, which was published in 1995 in Cambridge, UK. Thompson developed a conceptual framework for the analysis of the forms of action and interaction created by the media. He wanted to focus on the types of interactional situation created by the mass media. He also wanted the analytical framework to examine some of the interactional features of the social relationships established by the media. He created the concept as part of his theory of interaction. The three-part theory consists of face-to-face interaction, mediated interaction and mediated quasi-interaction. In face-to-face interaction, people share time and space, since they are co-present; in mediated interaction, the sending of the message and its reception are separated in time and space. Basic premises and approach: Mediated quasi-interaction is monological in character and involves the production of symbolic forms for an indefinite range of potential recipients. Mediated quasi-interaction creates a certain kind of social situation in which individuals are linked together in a process of communication and symbolic exchange. It is a structured situation in which some individuals are engaged primarily in producing symbolic forms for others who are not physically present, while others are involved primarily in receiving symbolic forms produced by people to whom they cannot respond, but with whom they can form bonds of friendship, affection or loyalty. Mediated quasi-interaction is based on the social relations established by media of mass communication. Because genuine interactivity is impossible with mass media, mediated quasi-interaction is a simulated interaction. It is typical for the mass media to try to simulate interpersonal communication and to personalize their communication (e.g. call-ins). Another focus of mediated quasi-interaction is its space-time constitution. It is described as a separation of contexts with extended availability in time and space. Mediated quasi-interaction can also be combined with other forms of interaction, such as face-to-face interaction. For example, people can sit in a room together and have a discussion while they are watching television. Similarly, a television program may involve face-to-face interaction between a panel and an audience, although the program remains primarily a form of mediated quasi-interaction.
**Complete blood count** Complete blood count: A complete blood count (CBC), also known as a full blood count (FBC), is a set of medical laboratory tests that provide information about the cells in a person's blood. The CBC indicates the counts of white blood cells, red blood cells and platelets, the concentration of hemoglobin, and the hematocrit (the volume percentage of red blood cells). The red blood cell indices, which indicate the average size and hemoglobin content of red blood cells, are also reported, and a white blood cell differential, which counts the different types of white blood cells, may be included. Complete blood count: The CBC is often carried out as part of a medical assessment and can be used to monitor health or diagnose diseases. The results are interpreted by comparing them to reference ranges, which vary with sex and age. Conditions like anemia and thrombocytopenia are defined by abnormal complete blood count results. The red blood cell indices can provide information about the cause of a person's anemia such as iron deficiency and vitamin B12 deficiency, and the results of the white blood cell differential can help to diagnose viral, bacterial and parasitic infections and blood disorders like leukemia. Not all results falling outside of the reference range require medical intervention. Complete blood count: The CBC is usually performed by an automated hematology analyzer, which counts cells and collects information on their size and structure. The concentration of hemoglobin is measured, and the red blood cell indices are calculated from measurements of red blood cells and hemoglobin. Manual tests can be used to independently confirm abnormal results. Approximately 10–25% of samples require a manual blood smear review, in which the blood is stained and viewed under a microscope to verify that the analyzer results are consistent with the appearance of the cells and to look for abnormalities. The hematocrit can be determined manually by centrifuging the sample and measuring the proportion of red blood cells, and in laboratories without access to automated instruments, blood cells are counted under the microscope using a hemocytometer. Complete blood count: In 1852, Karl Vierordt published the first procedure for performing a blood count, which involved spreading a known volume of blood on a microscope slide and counting every cell. The invention of the hemocytometer in 1874 by Louis-Charles Malassez simplified the microscopic analysis of blood cells, and in the late 19th century, Paul Ehrlich and Dmitri Leonidovich Romanowsky developed techniques for staining white and red blood cells that are still used to examine blood smears. Automated methods for measuring hemoglobin were developed in the 1920s, and Maxwell Wintrobe introduced the Wintrobe hematocrit method in 1929, which in turn allowed him to define the red blood cell indices. A landmark in the automation of blood cell counts was the Coulter principle, which was patented by Wallace H. Coulter in 1953. The Coulter principle uses electrical impedance measurements to count blood cells and determine their sizes; it is a technology that remains in use in many automated analyzers. Further research in the 1970s involved the use of optical measurements to count and identify cells, which enabled the automation of the white blood cell differential. Purpose: Blood is composed of a fluid portion, called plasma, and a cellular portion that contains red blood cells, white blood cells and platelets. 
The complete blood count evaluates the three cellular components of blood. Some medical conditions, such as anemia or thrombocytopenia, are defined by marked increases or decreases in blood cell counts. Changes in many organ systems may affect the blood, so CBC results are useful for investigating a wide range of conditions. Because of the amount of information it provides, the complete blood count is one of the most commonly performed medical laboratory tests.The CBC is often used to screen for diseases as part of a medical assessment. It is also called for when a healthcare provider suspects a person has a disease that affects blood cells, such as an infection, a bleeding disorder, or some cancers. People who have been diagnosed with disorders that may cause abnormal CBC results or who are receiving treatments that can affect blood cell counts may have a regular CBC performed to monitor their health, and the test is often performed each day on people who are hospitalized. The results may indicate a need for a blood or platelet transfusion.The complete blood count has specific applications in many medical specialties. It is often performed before a person undergoes surgery to detect anemia, ensure that platelet levels are sufficient, and screen for infection, as well as after surgery, so that blood loss can be monitored. In emergency medicine, the CBC is used to investigate numerous symptoms, such as fever, abdominal pain, and shortness of breath, and to assess bleeding and trauma. Blood counts are closely monitored in people undergoing chemotherapy or radiation therapy for cancer, because these treatments suppress the production of blood cells in the bone marrow and can produce severely low levels of white blood cells, platelets and hemoglobin. Regular CBCs are necessary for people taking some psychiatric drugs, such as clozapine and carbamazepine, which in rare cases can cause a life-threatening drop in the number of white blood cells (agranulocytosis). Because anemia during pregnancy can result in poorer outcomes for the mother and her baby, the complete blood count is a routine part of prenatal care; and in newborn babies, a CBC may be needed to investigate jaundice or to count the number of immature cells in the white blood cell differential, which can be an indicator of sepsis.The complete blood count is an essential tool of hematology, which is the study of the cause, prognosis, treatment, and prevention of diseases related to blood. The results of the CBC and smear examination reflect the functioning of the hematopoietic system—the organs and tissues involved in the production and development of blood cells, particularly the bone marrow. For example, a low count of all three cell types (pancytopenia) can indicate that blood cell production is being affected by a marrow disorder, and a bone marrow examination can further investigate the cause. Abnormal cells on the blood smear might indicate acute leukemia or lymphoma, while an abnormally high count of neutrophils or lymphocytes, in combination with indicative symptoms and blood smear findings, may raise suspicion of a myeloproliferative disorder or lymphoproliferative disorder. 
Examination of the CBC results and blood smear can help to distinguish between causes of anemia, such as nutritional deficiencies, bone marrow disorders, acquired hemolytic anemias and inherited conditions like sickle cell anemia and thalassemia.The reference ranges for the complete blood count represent the range of results found in 95% of apparently healthy people. By definition, 5% of results will always fall outside this range, so some abnormal results may reflect natural variation rather than signifying a medical issue. This is particularly likely if such results are only slightly outside the reference range, if they are consistent with previous results, or if there are no other related abnormalities shown by the CBC. When the test is performed on a relatively healthy population, the number of clinically insignificant abnormalities may exceed the number of results that represent disease. For this reason, professional organizations in the United States, United Kingdom and Canada recommend against pre-operative CBC testing for low-risk surgeries in individuals without relevant medical conditions. Repeated blood draws for hematology testing in hospitalized patients can contribute to hospital-acquired anemia and may result in unnecessary transfusions. Procedure: The sample is collected by drawing blood into a tube containing an anticoagulant—typically EDTA—to stop its natural clotting. The blood is usually taken from a vein, but when this is difficult it may be collected from capillaries by a fingerstick, or by a heelprick in babies. Testing is typically performed on an automated analyzer, but manual techniques such as a blood smear examination or manual hematocrit test can be used to investigate abnormal results. Cell counts and hemoglobin measurements are performed manually in laboratories lacking access to automated instruments. Procedure: Automated On board the analyzer, the sample is agitated to evenly distribute the cells, then diluted and partitioned into at least two channels, one of which is used to count red blood cells and platelets, the other to count white blood cells and determine the hemoglobin concentration. Some instruments measure hemoglobin in a separate channel, and additional channels may be used for differential white blood cell counts, reticulocyte counts and specialized measurements of platelets. The cells are suspended in a fluid stream and their properties are measured as they flow past sensors in a technique known as flow cytometry. Hydrodynamic focusing may be used to isolate individual cells so that more accurate results can be obtained: the diluted sample is injected into a stream of low-pressure fluid, which causes the cells in the sample to line up in single file through laminar flow. Procedure: To measure the hemoglobin concentration, a reagent chemical is added to the sample to destroy (lyse) the red cells in a channel separate from that used for red blood cell counts. On analyzers that perform white blood cell counts in the same channel as hemoglobin measurement, this permits white blood cells to be counted more easily. Hematology analyzers measure hemoglobin using spectrophotometry and are based on the linear relationship between the absorbance of light and the amount of hemoglobin present. Chemicals are used to convert different forms of hemoglobin, such as oxyhemoglobin and carboxyhemoglobin, to one stable form, usually cyanmethemoglobin, and to create a permanent colour change. 
The absorbance of the resulting colour, when measured at a specific wavelength—usually 540 nanometres—corresponds with the concentration of hemoglobin.Sensors count and identify the cells in the sample using two main principles: electrical impedance and light scattering. Impedance-based cell counting operates on the Coulter principle: cells are suspended in a fluid carrying an electric current, and as they pass through a small opening (an aperture), they cause decreases in current because of their poor electrical conductivity. The amplitude of the voltage pulse generated as a cell crosses the aperture correlates with the amount of fluid displaced by the cell, and thus the cell's volume, while the total number of pulses correlates with the number of cells in the sample. The distribution of cell volumes is plotted on a histogram, and by setting volume thresholds based on the typical sizes of each type of cell, the different cell populations can be identified and counted.In light scattering techniques, light from a laser or a tungsten-halogen lamp is directed at the stream of cells to collect information about their size and structure. Cells scatter light at different angles as they pass through the beam, which is detected using photometers. Forward scatter, which refers to the amount of light scattered along the beam's axis, is mainly caused by diffraction of light and correlates with cellular size, while side scatter (light scattered at a 90-degree angle) is caused by reflection and refraction and provides information about cellular complexity.Radiofrequency-based methods can be used in combination with impedance. These techniques work on the same principle of measuring the interruption in current as cells pass through an aperture, but since the high-frequency RF current penetrates into the cells, the amplitude of the resulting pulse relates to factors like the relative size of the nucleus, the nucleus's structure, and the amount of granules in the cytoplasm. Small red cells and cellular debris, which are similar in size to platelets, may interfere with the platelet count, and large platelets may not be counted accurately, so some analyzers use additional techniques to measure platelets, such as fluorescent staining, multi-angle light scatter and monoclonal antibody tagging.Most analyzers directly measure the average size of red blood cells, which is called the mean cell volume (MCV), and calculate the hematocrit by multiplying the red blood cell count by the MCV. Some measure the hematocrit by comparing the total volume of red blood cells to the volume of blood sampled, and derive the MCV from the hematocrit and red blood cell count. The hemoglobin concentration, the red blood cell count and the hematocrit are used to calculate the average amount of hemoglobin within each red blood cell, the mean corpuscular hemoglobin (MCH); and its concentration, the mean corpuscular hemoglobin concentration (MCHC). Another calculation, the red blood cell distribution width (RDW), is derived from the standard deviation of the mean cell volume and reflects variation in cellular size. Procedure: After being treated with reagents, white blood cells form three distinct peaks when their volumes are plotted on a histogram. These peaks correspond roughly to populations of granulocytes, lymphocytes, and other mononuclear cells, allowing a three-part differential to be performed based on cell volume alone. 
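The red cell values derived above (hematocrit, MCH, MCHC and RDW) are simple arithmetic on the measured quantities. The sketch below applies those relationships to illustrative (not reference) inputs; the variable names are chosen here for clarity rather than taken from any particular analyzer.

```python
# Hedged sketch of the red cell index calculations described above.
# Input values are illustrative only.
rbc = 4.5e12         # red blood cell count, cells per litre
mcv_fl = 90.0        # mean cell volume, femtolitres (1 fL = 1e-15 L)
hgb_g_per_l = 140.0  # hemoglobin, grams per litre
mcv_sd_fl = 12.0     # standard deviation of red cell volume, femtolitres

# Hematocrit: red cell count multiplied by mean cell volume.
hct = rbc * mcv_fl * 1e-15          # fraction of blood volume

# Mean corpuscular hemoglobin: average hemoglobin per red cell.
mch_pg = hgb_g_per_l / rbc * 1e12   # picograms per cell

# Mean corpuscular hemoglobin concentration: hemoglobin per unit red cell volume.
mchc_g_per_l = hgb_g_per_l / hct

# Red cell distribution width: variation in cell size relative to the MCV.
rdw_percent = mcv_sd_fl / mcv_fl * 100

print(f"HCT {hct:.1%}, MCH {mch_pg:.1f} pg, "
      f"MCHC {mchc_g_per_l:.0f} g/L, RDW {rdw_percent:.1f}%")
```

With these illustrative inputs the hematocrit comes out at about 40%, the MCH at about 31 pg and the MCHC at about 346 g/L.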
More advanced analyzers use additional techniques to provide a five- to seven-part differential, such as light scattering or radiofrequency analysis, or the use of dyes to stain specific chemicals inside cells—for example, nucleic acids, which are found in higher concentrations in immature cells, or myeloperoxidase, an enzyme found in cells of the myeloid lineage. Basophils may be counted in a separate channel where a reagent destroys other white cells and leaves basophils intact. The data collected from these measurements is analyzed and plotted on a scattergram, where it forms clusters that correlate with each white blood cell type. Another approach to automating the differential count is the use of digital microscopy software, which uses artificial intelligence to classify white blood cells from photomicrographs of the blood smear. The cell images are displayed to a human operator, who can manually re-classify the cells if necessary. Most analyzers take less than a minute to run all the tests in the complete blood count. Because analyzers sample and count many individual cells, the results are very precise. However, some abnormal cells may not be identified correctly, requiring manual review of the instrument's results and identification, by other means, of abnormal cells that the instrument could not categorize. Procedure: Point-of-care testing Point-of-care testing refers to tests conducted outside of the laboratory setting, such as at a person's bedside or in a clinic. This method of testing is faster and uses less blood than conventional methods, and does not require specially trained personnel, so it is useful in emergency situations and in areas with limited access to resources. Commonly used devices for point-of-care hematology testing include the HemoCue, a portable analyzer that uses spectrophotometry to measure the hemoglobin concentration of the sample, and the i-STAT, which derives a hemoglobin reading by estimating the concentration of red blood cells from the conductivity of the blood. Hemoglobin and hematocrit can be measured on point-of-care devices designed for blood gas testing, but these measurements sometimes correlate poorly with those obtained through standard methods. There are simplified versions of hematology analyzers designed for use in clinics that can provide a complete blood count and differential. Procedure: Manual The tests can be performed manually when automated equipment is not available or when the analyzer results indicate that further investigation is needed. Automated results are flagged for manual blood smear review in 10–25% of cases, which may be due to abnormal cell populations that the analyzer cannot properly count, internal flags generated by the analyzer that suggest the results could be inaccurate, or numerical results that fall outside set thresholds. To investigate these issues, blood is spread on a microscope slide, stained with a Romanowsky stain, and examined under a microscope. The appearance of the red and white blood cells and platelets is assessed, and qualitative abnormalities are reported if present. Changes in the appearance of red blood cells can have considerable diagnostic significance—for example, the presence of sickle cells is indicative of sickle cell disease, and a high number of fragmented red blood cells (schistocytes) requires urgent investigation as it can suggest a microangiopathic hemolytic anemia. 
In some inflammatory conditions and in paraprotein disorders like multiple myeloma, high levels of protein in the blood may cause red blood cells to appear stacked together on the smear, which is termed rouleaux. Some parasitic diseases, such as malaria and babesiosis, can be detected by finding the causative organisms on the blood smear, and the platelet count can be estimated from the blood smear, which is useful if the automated platelet count is inaccurate. To perform a manual white blood cell differential, the microscopist counts 100 cells on the blood smear and classifies them based on their appearance; sometimes 200 cells are counted. This gives the percentage of each type of white blood cell, and by multiplying these percentages by the total number of white blood cells, the absolute number of each type of white cell can be obtained. Manual counting is subject to sampling error because so few cells are counted compared with automated analysis, but it can identify abnormal cells that analyzers cannot, such as the blast cells seen in acute leukemia. Clinically significant features like toxic granulation and vacuolation can also be ascertained from microscopic examination of white blood cells. The hematocrit can be performed manually by filling a capillary tube with blood, centrifuging it, and measuring the percentage of the blood that consists of red blood cells. This is useful in some conditions that can cause automated hematocrit results to be incorrect, such as polycythemia (a highly elevated red blood cell count) or severe leukocytosis (a highly elevated white blood cell count, which interferes with red blood cell measurements by causing white blood cells to be counted as red cells). Procedure: Red and white blood cells and platelets can be counted using a hemocytometer, a microscope slide containing a chamber that holds a specified volume of diluted blood. The hemocytometer's chamber is etched with a calibrated grid to aid in cell counting. The cells seen in the grid are counted and divided by the volume of blood examined, which is determined from the number of squares counted on the grid, to obtain the concentration of cells in the sample. Manual cell counts are labour-intensive and inaccurate compared to automated methods, so they are rarely used except in laboratories that do not have access to automated analyzers. To count white blood cells, the sample is diluted using a fluid containing a compound that lyses red blood cells, such as ammonium oxalate, acetic acid, or hydrochloric acid. Sometimes a stain is added to the diluent that highlights the nuclei of white blood cells, making them easier to identify. Manual platelet counts are performed in a similar manner, although some methods leave the red blood cells intact. Using a phase-contrast microscope, rather than a light microscope, can make platelets easier to identify. The manual red blood cell count is rarely performed, as it is inaccurate and other methods such as hemoglobinometry and the manual hematocrit are available for assessing red blood cells; but if it is necessary to do so, red blood cells can be counted in blood that has been diluted with saline. Hemoglobin can be measured manually using a spectrophotometer or colorimeter. To measure hemoglobin manually, the sample is diluted using reagents that destroy red blood cells to release the hemoglobin. Other chemicals are used to convert different types of hemoglobin to one form, allowing it to be easily measured. 
The solution is then placed in a measuring cuvette and the absorbance is measured at a specific wavelength, which depends on the type of reagent used. A reference standard containing a known amount of hemoglobin is used to determine the relationship between the absorbance and the hemoglobin concentration, allowing the hemoglobin level of the sample to be measured.In rural and economically disadvantaged areas, available testing is limited by access to equipment and personnel. At primary care facilities in these regions, testing may be limited to examination of red cell morphology and manual measurement of hemoglobin, while more complex techniques like manual cell counts and differentials, and sometimes automated cell counts, are performed at district laboratories. Regional and provincial hospitals and academic centres typically have access to automated analyzers. Where laboratory facilities are not available, an estimate of hemoglobin concentration can be obtained by placing a drop of blood on a standardized type of absorbent paper and comparing it to a colour scale. Procedure: Quality control Automated analyzers have to be regularly calibrated. Most manufacturers provide preserved blood with defined parameters and the analyzers are adjusted if the results are outside defined thresholds. To ensure that results continue to be accurate, quality control samples, which are typically provided by the instrument manufacturer, are tested at least once per day. The samples are formulated to provide specific results, and laboratories compare their results against the known values to ensure the instrument is functioning properly. For laboratories without access to commercial quality control material, an Indian regulatory organization recommends running patient samples in duplicate and comparing the results. A moving average measurement, in which the average results for patient samples are measured at set intervals, can be used as an additional quality control technique. Assuming that the characteristics of the patient population remain roughly the same over time, the average should remain constant; large shifts in the average value can indicate instrument problems. The MCHC values are particularly useful in this regard.In addition to analyzing internal quality control samples with known results, laboratories may receive external quality assessment samples from regulatory organizations. While the purpose of internal quality control is to ensure that analyzer results are reproducible within a given laboratory, external quality assessment verifies that results from different laboratories are consistent with each other and with the target values. The expected results for external quality assessment samples are not disclosed to the laboratory. External quality assessment programs have been widely adopted in North America and western Europe, and laboratories are often required to participate in these programs to maintain accreditation. Logistical issues may make it difficult for laboratories in under-resourced areas to implement external quality assessment schemes. Included tests: The CBC measures the amounts of platelets and red and white blood cells, along with the hemoglobin and hematocrit values. Red blood cell indices—MCV, MCH and MCHC—which describe the size of red blood cells and their hemoglobin content, are reported along with the red blood cell distribution width (RDW), which measures the amount of variation in the sizes of red blood cells. 
A white blood cell differential, which enumerates the different types of white blood cells, may be performed, and a count of immature red blood cells (reticulocytes) is sometimes included. Included tests: Red blood cells, hemoglobin, and hematocrit Red blood cells deliver oxygen from the lungs to the tissues and on their return carry carbon dioxide back to the lungs where it is exhaled. These functions are mediated by the cells' hemoglobin. The analyzer counts red blood cells, reporting the result in units of 10⁶ cells per microlitre of blood (× 10⁶/μL) or 10¹² cells per litre (× 10¹²/L), and measures their average size, which is called the mean cell volume and expressed in femtolitres or cubic micrometres. By multiplying the mean cell volume by the red blood cell count, the hematocrit (HCT) or packed cell volume (PCV), a measurement of the percentage of blood that is made up of red blood cells, can be derived; and when the hematocrit is performed directly, the mean cell volume may be calculated from the hematocrit and red blood cell count. Hemoglobin, measured after the red blood cells are lysed, is usually reported in units of grams per litre (g/L) or grams per decilitre (g/dL). Assuming that the red blood cells are normal, there is a constant relationship between hemoglobin and hematocrit: the hematocrit percentage is approximately three times the hemoglobin value in g/dL, plus or minus three. This relationship, called the rule of three, can be used to confirm that CBC results are correct (a short illustration appears at the end of this passage). Two other measurements are calculated from the red blood cell count, the hemoglobin concentration, and the hematocrit: the mean corpuscular hemoglobin and the mean corpuscular hemoglobin concentration. These parameters describe the hemoglobin content of each red blood cell. The MCH and MCHC can be confusing; in essence the MCH is a measure of the average amount of hemoglobin per red blood cell. The MCHC gives the average proportion of the cell that is hemoglobin. The MCH does not take into account the size of the red blood cells whereas the MCHC does. Collectively, the MCV, MCH, and MCHC are referred to as the red blood cell indices. Changes in these indices are visible on the blood smear: red blood cells that are abnormally large or small can be identified by comparison to the sizes of white blood cells, and cells with a low hemoglobin concentration appear pale. Another parameter is calculated from the initial measurements of red blood cells: the red blood cell distribution width or RDW, which reflects the degree of variation in the cells' size. Included tests: An abnormally low hemoglobin, hematocrit, or red blood cell count indicates anemia. Anemia is not a diagnosis on its own, but it points to an underlying condition affecting the person's red blood cells. General causes of anemia include blood loss, production of defective red blood cells (ineffective erythropoiesis), decreased production of red blood cells (insufficient erythropoiesis), and increased destruction of red blood cells (hemolytic anemia). Anemia reduces the blood's ability to carry oxygen, causing symptoms like tiredness and shortness of breath. If the hemoglobin level falls below thresholds based on the person's clinical condition, a blood transfusion may be necessary. An increased number of red blood cells, leading to an increase in the hemoglobin and hematocrit, is called polycythemia. Dehydration or use of diuretics can cause a "relative" polycythemia by decreasing the amount of plasma compared to red cells. 
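Following on from the rule of three mentioned above, the relationship lends itself to a short consistency check. This is a hedged sketch with hypothetical values; real laboratories apply it alongside other delta and validity checks.

```python
# Hedged sketch of the "rule of three" consistency check described above:
# the hematocrit (%) should be roughly three times the hemoglobin (g/dL),
# within about three percentage points, when red cells are normal.
def rule_of_three_ok(hgb_g_per_dl: float, hct_percent: float,
                     tolerance: float = 3.0) -> bool:
    """Return True if hematocrit and hemoglobin agree under the rule of three."""
    return abs(hct_percent - 3.0 * hgb_g_per_dl) <= tolerance

# Hypothetical results: Hgb 14.0 g/dL with HCT 42% agrees; HCT 50% does not.
print(rule_of_three_ok(14.0, 42.0))  # True
print(rule_of_three_ok(14.0, 50.0))  # False
```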
A true increase in the number of red blood cells, called absolute polycythemia, can occur when the body produces more red blood cells to compensate for chronically low oxygen levels in conditions like lung or heart disease, or when a person has abnormally high levels of erythropoietin, a hormone that stimulates production of red blood cells. In polycythemia vera, the bone marrow produces red cells and other blood cells at an excessively high rate. Evaluation of red blood cell indices is helpful in determining the cause of anemia. If the MCV is low, the anemia is termed microcytic, while anemia with a high MCV is called macrocytic anemia. Anemia with a low MCHC is called hypochromic anemia. If anemia is present but the red blood cell indices are normal, the anemia is considered normochromic and normocytic. The term hyperchromia, referring to a high MCHC, is generally not used. Elevation of the MCHC above the upper reference value is rare, mainly occurring in conditions such as spherocytosis, sickle cell disease and hemoglobin C disease. An elevated MCHC can also be a false result from conditions like red blood cell agglutination (which causes a false decrease in the red blood cell count, elevating the MCHC) or highly elevated amounts of lipids in the blood (which causes a false increase in the hemoglobin result). Microcytic anemia is typically associated with iron deficiency, thalassemia, and anemia of chronic disease, while macrocytic anemia is associated with alcoholism, folate and B12 deficiency, use of some drugs, and some bone marrow diseases. Acute blood loss, hemolytic anemia, bone marrow disorders, and various chronic diseases can result in anemia with a normocytic blood picture. The MCV serves an additional purpose in laboratory quality control. It is relatively stable over time compared to other CBC parameters, so a large change in MCV may indicate that the sample was drawn from the wrong patient. Included tests: A low RDW has no clinical significance, but an elevated RDW represents increased variation in red blood cell size, a condition known as anisocytosis. Anisocytosis is common in nutritional anemias such as iron deficiency anemia and anemia due to vitamin B12 or folate deficiency, while people with thalassemia may have a normal RDW. Based on the CBC results, further steps can be taken to investigate anemia, such as a ferritin test to confirm the presence of iron deficiency, or hemoglobin electrophoresis to diagnose a hemoglobinopathy such as thalassemia or sickle cell disease. Included tests: White blood cells White blood cells defend against infections and are involved in the inflammatory response. A high white blood cell count, which is called leukocytosis, often occurs in infections, inflammation, and states of physiologic stress. It can also be caused by diseases that involve abnormal production of blood cells, such as myeloproliferative and lymphoproliferative disorders. A decreased white blood cell count, termed leukopenia, can lead to an increased risk of acquiring infections, and occurs in treatments like chemotherapy and radiation therapy and many conditions that inhibit the production of blood cells. Sepsis is associated with both leukocytosis and leukopenia. The total white blood cell count is usually reported in cells per microlitre of blood (/μL) or 10⁹ cells per litre (× 10⁹/L). In the white blood cell differential, the different types of white blood cells are identified and counted. 
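As described in the manual-differential passage above, each cell type's percentage is multiplied by the total white cell count to give its absolute number. A minimal sketch of that conversion, with hypothetical values, is shown below.

```python
# Hedged sketch: converting white cell differential percentages into
# absolute counts, as described for the manual differential above.
# All values are hypothetical.
total_wbc_per_l = 7.5e9  # total white blood cell count, cells per litre

differential_percent = {
    "neutrophils": 60.0,
    "lymphocytes": 30.0,
    "monocytes": 6.0,
    "eosinophils": 3.0,
    "basophils": 1.0,
}

# Absolute count = percentage of the differential x total white cell count.
absolute_counts = {
    cell_type: total_wbc_per_l * pct / 100.0
    for cell_type, pct in differential_percent.items()
}

for cell_type, count in absolute_counts.items():
    print(f"{cell_type}: {count / 1e9:.2f} x 10^9/L")
```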
The results are reported as a percentage and as an absolute number per unit volume. Five types of white blood cells—neutrophils, lymphocytes, monocytes, eosinophils, and basophils—are typically measured. Some instruments report the number of immature granulocytes, which is a classification consisting of precursors of neutrophils; specifically, promyelocytes, myelocytes and metamyelocytes. Other cell types are reported if they are identified in the manual differential. Differential results are useful in diagnosing and monitoring many medical conditions. For example, an elevated neutrophil count (neutrophilia) is associated with bacterial infection, inflammation, and myeloproliferative disorders, while a decreased count (neutropenia) may occur in individuals who are undergoing chemotherapy or taking certain drugs, or who have diseases affecting the bone marrow. Neutropenia can also be caused by some congenital disorders and may occur transiently after viral or bacterial infections in children. People with severe neutropenia and clinical signs of infection are treated with antibiotics to prevent potentially life-threatening disease. Included tests: An increased number of band neutrophils—young neutrophils that lack segmented nuclei—or immature granulocytes is termed left shift and occurs in sepsis and some blood disorders, but is normal in pregnancy. An elevated lymphocyte count (lymphocytosis) is associated with viral infection and lymphoproliferative disorders like chronic lymphocytic leukemia; elevated monocyte counts (monocytosis) are associated with chronic inflammatory states; and the eosinophil count is often increased (eosinophilia) in parasitic infections and allergic conditions. An increased number of basophils, termed basophilia, can occur in myeloproliferative disorders like chronic myeloid leukemia and polycythemia vera. The presence of some types of abnormal cells, such as blast cells or lymphocytes with neoplastic features, is suggestive of a hematologic malignancy. Included tests: Platelets Platelets play an essential role in clotting. When the wall of a blood vessel is damaged, platelets adhere to the exposed surface at the site of injury and plug the gap. Simultaneous activation of the coagulation cascade results in the formation of fibrin, which reinforces the platelet plug to create a stable clot. A low platelet count, known as thrombocytopenia, may cause bleeding if severe. It can occur in individuals who are undergoing treatments that suppress the bone marrow, such as chemotherapy or radiation therapy, or taking certain drugs, such as heparin, that can induce the immune system to destroy platelets. Thrombocytopenia is a feature of many blood disorders, like acute leukemia and aplastic anemia, as well as some autoimmune diseases. If the platelet count is extremely low, a platelet transfusion may be performed. Thrombocytosis, meaning a high platelet count, may occur in states of inflammation or trauma, as well as in iron deficiency, and the platelet count may reach exceptionally high levels in people with essential thrombocythemia, a rare blood disease. The platelet count can be reported in units of cells per microlitre of blood (/μL), 10³ cells per microlitre (× 10³/μL), or 10⁹ cells per litre (× 10⁹/L). Included tests: The mean platelet volume (MPV) measures the average size of platelets in femtolitres. 
It can aid in determining the cause of thrombocytopenia; an elevated MPV may occur when young platelets are released into the bloodstream to compensate for increased destruction of platelets, while decreased production of platelets due to dysfunction of the bone marrow can result in a low MPV. The MPV is also useful for differentiating between congenital diseases that cause thrombocytopenia. The immature platelet fraction (IPF) or reticulated platelet count is reported by some analyzers and provides information about the rate of platelet production by measuring the number of immature platelets in the blood. Included tests: Other tests Reticulocyte count Reticulocytes are immature red blood cells, which, unlike the mature cells, contain RNA. A reticulocyte count is sometimes performed as part of a complete blood count, usually to investigate the cause of a person's anemia or evaluate their response to treatment. Anemia with a high reticulocyte count can indicate that the bone marrow is producing red blood cells at a higher rate to compensate for blood loss or hemolysis, while anemia with a low reticulocyte count may suggest that the person has a condition that reduces the body's ability to produce red blood cells. When people with nutritional anemia are given nutrient supplementation, an increase in the reticulocyte count indicates that their body is responding to the treatment by producing more red blood cells. Hematology analyzers perform reticulocyte counts by staining red blood cells with a dye that binds to RNA and measuring the number of reticulocytes through light scattering or fluorescence analysis. The test can be performed manually by staining the blood with new methylene blue and counting the percentage of red blood cells containing RNA under the microscope. The reticulocyte count is expressed as an absolute number or as a percentage of red blood cells. Included tests: Some instruments measure the average amount of hemoglobin in each reticulocyte; a parameter that has been studied as an indicator of iron deficiency in people who have conditions that interfere with standard tests. The immature reticulocyte fraction (IRF) is another measurement produced by some analyzers which quantifies the maturity of reticulocytes: cells that are less mature contain more RNA and thus produce a stronger fluorescent signal. This information can be useful in diagnosing anemias and evaluating red blood cell production following anemia treatment or bone marrow transplantation. Included tests: Nucleated red blood cells During their formation in bone marrow, and in the liver and spleen in fetuses, red blood cells contain a cell nucleus, which is usually absent in the mature cells that circulate in the bloodstream. Nucleated red blood cells are normal in newborn babies, but when detected in children and adults, they indicate an increased demand for red blood cells, which can be caused by bleeding, some cancers and anemia. Most analyzers can detect these cells as part of the differential cell count. High numbers of nucleated red cells can cause a falsely high white cell count, which will require adjusting. Included tests: Other parameters Advanced hematology analyzers generate novel measurements of blood cells which have shown diagnostic significance in research studies but have not yet found widespread clinical use. For example, some types of analyzers produce coordinate readings indicating the size and position of each white blood cell cluster. 
These parameters (termed cell population data) have been studied as potential markers for blood disorders, bacterial infections and malaria. Analyzers that use myeloperoxidase staining to produce differential counts can measure white blood cells' expression of the enzyme, which is altered in various disorders. Some instruments can report the percentage of red blood cells that are hypochromic in addition to reporting the average MCHC value, or provide a count of fragmented red cells (schistocytes), which occur in some types of hemolytic anemia. Because these parameters are often specific to particular brands of analyzers, it is difficult for laboratories to interpret and compare results. Reference ranges: The complete blood count is interpreted by comparing the output to reference ranges, which represent the results found in 95% of apparently healthy people. These ranges, derived statistically from samples of the tested population, vary with sex and age. On average, adult females have lower hemoglobin, hematocrit, and red blood cell count values than males; the difference lessens, but is still present, after menopause. CBC results for children and newborn babies differ from those of adults. Newborns' hemoglobin, hematocrit, and red blood cell count are extremely high to compensate for low oxygen levels in the womb and for the high proportion of fetal hemoglobin, which is less effective at delivering oxygen to tissues than mature forms of hemoglobin, inside their red blood cells. The MCV is also increased, and the white blood cell count is elevated with a preponderance of neutrophils. The red blood cell count and related values begin to decline shortly after birth, reaching their lowest point at about two months of age and increasing thereafter. The red blood cells of older infants and children are smaller, with a lower MCH, than those of adults. In the pediatric white blood cell differential, lymphocytes often outnumber neutrophils, while in adults neutrophils predominate. Other differences between populations may affect the reference ranges: for example, people living at higher altitudes have higher hemoglobin, hematocrit, and RBC results, and people of African heritage have lower white blood cell counts on average. The type of analyzer used to run the CBC affects the reference ranges as well. Reference ranges are therefore established by individual laboratories based on their own patient populations and equipment. Limitations: Some medical conditions or problems with the blood sample may produce inaccurate results. If the sample is visibly clotted, which can be caused by poor phlebotomy technique, it is unsuitable for testing, because the platelet count will be falsely decreased and other results may be abnormal. Samples stored at room temperature for several hours may give falsely high readings for MCV (mean corpuscular volume), because red blood cells swell as they absorb water from the plasma; and platelet and white blood cell differential results may be inaccurate in aged specimens, as the cells degrade over time. Limitations: Samples drawn from individuals with very high levels of bilirubin or lipids in their plasma (referred to as an icteric sample or a lipemic sample, respectively) may show falsely high readings for hemoglobin, because these substances change the colour and opacity of the sample, which interferes with hemoglobin measurement. 
This effect can be mitigated by replacing the plasma with saline.Some individuals produce an antibody that causes their platelets to form clumps when their blood is drawn into tubes containing EDTA, the anticoagulant typically used to collect CBC samples. Platelet clumps may be counted as single platelets by automated analyzers, leading to a falsely decreased platelet count. This can be avoided by using an alternative anticoagulant such as sodium citrate or heparin.Another antibody-mediated condition that can affect complete blood count results is red blood cell agglutination. This phenomenon causes red blood cells to clump together because of antibodies bound to the cell surface. Red blood cell aggregates are counted as single cells by the analyzer, leading to a markedly decreased red blood cell count and hematocrit, and markedly elevated MCV and MCHC (mean corpuscular hemoglobin concentration). Often, these antibodies are only active at room temperature (in which case they are called cold agglutinins), and the agglutination can be reversed by heating the sample to 37 °C (99 °F). Samples from people with warm autoimmune hemolytic anemia may exhibit red cell agglutination that does not resolve on warming.While blast and lymphoma cells can be identified in the manual differential, microscopic examination cannot reliably determine the cells' hematopoietic lineage. This information is often necessary for diagnosing blood cancers. After abnormal cells are identified, additional techniques such as immunophenotyping by flow cytometry can be used to identify markers that provide additional information about the cells. History: Before automated cell counters were introduced, complete blood count tests were performed manually: white and red blood cells and platelets were counted using microscopes. The first person to publish microscopic observations of blood cells was Antonie van Leeuwenhoek, who reported on the appearance of red cells in a 1674 letter to the Proceedings of the Royal Society of London. Jan Swammerdam had described red blood cells some years earlier, but did not publish his findings at the time. Throughout the 18th and 19th centuries, improvements in microscope technology such as achromatic lenses allowed white blood cells and platelets to be counted in unstained samples.The physiologist Karl Vierordt is credited with performing the first blood count. His technique, published in 1852, involved aspirating a carefully measured volume of blood into a capillary tube and spreading it onto a microscope slide coated with egg white. After the blood dried, he counted every cell on the slide; this process could take more than three hours to complete. The hemocytometer, introduced in 1874 by Louis-Charles Malassez, simplified the microscopic counting of blood cells. Malassez's hemocytometer consisted of a microscope slide containing a flattened capillary tube. Diluted blood was introduced to the capillary chamber by means of a rubber tube attached to one end, and an eyepiece with a scaled grid was attached to the microscope, permitting the microscopist to count the number of cells per volume of blood. In 1877, William Gowers invented a hemocytometer with a built-in counting grid, eliminating the need to produce specially calibrated eyepieces for each microscope. History: In the 1870s, Paul Ehrlich developed a staining technique using a combination of an acidic and basic dye that could distinguish different types of white blood cells and allow red blood cell morphology to be examined. 
Dmitri Leonidovich Romanowsky improved on this technique in the 1890s, using a mixture of eosin and aged methylene blue to produce a wide range of hues not present when either of the stains was used alone. This became the basis for Romanowsky staining, the technique still used to stain blood smears for manual review.The first techniques for measuring hemoglobin were devised in the late 19th century, and involved visual comparisons of the colour of diluted blood against a known standard. Attempts to automate this process using spectrophotometry and colorimetry were limited by the fact that hemoglobin is present in the blood in many different forms, meaning that it could not be measured at a single wavelength. In 1920, a method to convert the different forms of hemoglobin to one stable form (cyanmethemoglobin or hemiglobincyanide) was introduced, allowing hemoglobin levels to be measured automatically. The cyanmethemoglobin method remains the reference method for hemoglobin measurement and is still used in many automated hematology analyzers.Maxwell Wintrobe is credited with the invention of the hematocrit test. In 1929, he undertook a PhD project at the University of Tulane to determine normal ranges for red blood cell parameters, and invented a method known as the Wintrobe hematocrit. Hematocrit measurements had previously been described in the literature, but Wintrobe's method differed in that it used a large tube that could be mass-produced to precise specifications, with a built-in scale. The fraction of red blood cells in the tube was measured after centrifugation to determine the hematocrit. The invention of a reproducible method for determining hematocrit values allowed Wintrobe to define the red blood cell indices. History: Research into automated cell counting began in the early 20th century. A method developed in 1928 used the amount of light transmitted through a diluted blood sample, as measured by photometry, to estimate the red blood cell count, but this proved inaccurate for samples with abnormal red blood cells. Other unsuccessful attempts, in the 1930s and 1940s, involved photoelectric detectors attached to microscopes, which would count cells as they were scanned. In the late 1940s, Wallace H. Coulter, motivated by a need for better red blood cell counting methods following the bombing of Hiroshima and Nagasaki, attempted to improve on photoelectric cell counting techniques. His research was aided by his brother, Joseph R. Coulter, in a basement laboratory in Chicago. Their results using photoelectric methods were disappointing, and in 1948, after reading a paper relating the conductivity of blood to its red blood cell concentration, Wallace devised the Coulter principle—the theory that a cell suspended in a conductive medium generates a drop in current proportional to its size as it passes through an aperture.That October, Wallace built a counter to demonstrate the principle. Owing to financial constraints, the aperture was made by burning a hole through a piece of cellophane from a cigarette package. Wallace filed a patent for the technique in 1949, and in 1951 applied to the Office of Naval Research to fund the development of the Coulter counter. Wallace's patent application was granted in 1953, and after improvements to the aperture and the introduction of a mercury manometer to provide precise control over sample size, the brothers founded Coulter Electronics Inc. in 1958 to market their instruments. 
The Coulter counter was initially designed for counting red blood cells, but with later modifications it proved effective for counting white blood cells. Coulter counters were widely adopted by medical laboratories.The first analyzer able to produce multiple cell counts simultaneously was the Technicon SMA 4A−7A, released in 1965. It achieved this by partitioning blood samples into two channels: one for counting red and white blood cells and one for measuring hemoglobin. However, the instrument was unreliable and difficult to maintain. In 1968, the Coulter Model S analyzer was released and gained widespread use. Similarly to the Technicon instrument, it used two different reaction chambers, one of which was used for the red cell count, and one of which was used for the white blood cell count and hemoglobin determination. The Model S also determined the mean cell volume using impedance measurements, which allowed the red blood cell indices and hematocrit to be derived. Automated platelet counts were introduced in 1970 with Technicon's Hemalog-8 instrument and were adopted by Coulter's S Plus series analyzers in 1980.After basic cell counting had been automated, the white blood cell differential remained a challenge. Throughout the 1970s, researchers explored two methods for automating the differential count: digital image processing and flow cytometry. Using technology developed in the 1950s and 60s to automate the reading of Pap smears, several models of image processing analyzers were produced. These instruments would scan a stained blood smear to find cell nuclei, then take a higher resolution snapshot of the cell to analyze it through densitometry. They were expensive, slow, and did little to reduce workload in the laboratory because they still required blood smears to be prepared and stained, so flow cytometry-based systems became more popular, and by 1990, no digital image analyzers were commercially available in the United States or western Europe. These techniques enjoyed a resurgence in the 2000s with the introduction of more advanced image analysis platforms using artificial neural networks.Early flow cytometry devices shot beams of light at cells in specific wavelengths and measured the resulting absorbance, fluorescence or light scatter, collecting information about the cells' features and allowing cellular contents such as DNA to be quantified. One such instrument—the Rapid Cell Spectrophotometer, developed by Louis Kamentsky in 1965 to automate cervical cytology—could generate blood cell scattergrams using cytochemical staining techniques. Leonard Ornstein, who had helped to develop the staining system on the Rapid Cell Spectrophotometer, and his colleagues later created the first commercial flow cytometric white blood cell differential analyzer, the Hemalog D. Introduced in 1974, this analyzer used light scattering, absorbance and cell staining to identify the five normal white blood cell types in addition to "large unidentified cells", a classification that usually consisted of atypical lymphocytes or blast cells. The Hemalog D could count 10,000 cells in one run, a marked improvement over the manual differential. In 1981, Technicon combined the Hemalog D with the Hemalog-8 analyzer to produce the Technicon H6000, the first combined complete blood count and differential analyzer. 
This analyzer was unpopular with hematology laboratories because it was labour-intensive to operate, but in the late 1980s to early 1990s similar systems were widely produced by other manufacturers such as Sysmex, Abbott, Roche and Beckman Coulter.
**ZMODEM** ZMODEM: ZMODEM is an inline file transfer protocol developed by Chuck Forsberg in 1986, in a project funded by Telenet in order to improve file transfers on their X.25 network. In addition to dramatically improved performance compared to older protocols, ZMODEM offered restartable transfers, auto-start by the sender, an expanded 32-bit CRC, and control character quoting supporting 8-bit clean transfers, allowing it to be used on networks that would not pass control characters. ZMODEM: In contrast to most transfer protocols developed for bulletin board systems (BBSs), ZMODEM was not directly based on, nor compatible with, the seminal XMODEM. Many variants of XMODEM had been developed in order to address one or more of its shortcomings, and most remained backward compatible and would successfully complete transfers with "classic" XMODEM implementations. This list includes Forsberg's own YMODEM. ZMODEM: ZMODEM eschewed backward compatibility in favor of producing a radically improved protocol. It performed as well as or better than any of the high-performance varieties of XMODEM, did so over links that previously didn't work at all, like X.25, or had poor performance, like Telebit modems, and included useful features found in few or no other protocols. ZMODEM became extremely popular on bulletin board systems (BBS) in the early 1990s, becoming a standard as widespread as XMODEM had been before it. Improvements: Streaming Generally, file transfer protocols break down a file into a series of packets, and then send them one at a time to the receiver. The main portion of the packet, the payload, is a certain number of bytes from the file being sent. After the payload comes a checksum or cyclic redundancy check (CRC) that can be used to determine if the payload was received correctly. If the packet is received correctly, the receiver sends an ACK message and the sender then starts sending the next packet. Improvements: The telephone system introduces a small delay known as latency that interferes with this process. Even if the receiver sends the ACK immediately, the delay in the phone lines means there will always be some time before the sender receives it and sends the next packet. As modem speeds increase, this delay represents a larger and larger number of packets that could have been sent during the wait, decreasing the channel efficiency. Improvements: XMODEM used 128-byte payloads with a three-byte header and one-byte checksum for a total of 132 bytes per packet. In the era of 300 bit/s modems, a packet took about four seconds to send, and typical latencies were on the order of 1⁄10 of a second, so the performance overhead was not significant. As speeds increase, the problem becomes more pronounced; at 2400 bit/s a packet takes about 1⁄2 second to send, so about 1⁄5 of the available bandwidth is wasted waiting for ACKs. At 9600 bit/s a packet requires only 0.13 seconds to send, so about 1⁄2 of the bandwidth is wasted. Improvements: One solution to this problem is the use of a sliding window. These protocols address latency by allowing the sender to continue sending a number of packets without waiting for an ACK. The number of packets that it allows to continue is the "window", which was typically between two and sixteen packets in most implementations. A number of new versions of XMODEM with sliding window support appeared in the early 1980s.
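The arithmetic behind these figures is simple to reproduce. The snippet below is an illustrative back-of-the-envelope calculation, not part of any protocol specification; the 10-bits-per-byte framing and the 0.1-second ACK delay are assumptions chosen to match the numbers quoted above.

```python
def stop_and_wait_overhead(bit_rate, packet_bytes=132, latency_s=0.1,
                           bits_per_byte=10):
    """Fraction of line time a stop-and-wait sender spends idle.

    packet_bytes: 128-byte payload + 3-byte header + 1-byte checksum.
    bits_per_byte: 10 assumes 8-N-1 async framing (start + 8 data + stop).
    latency_s: assumed delay before the ACK arrives (an illustrative value).
    """
    send_time = packet_bytes * bits_per_byte / bit_rate
    return latency_s / (send_time + latency_s)

for rate in (300, 2400, 9600):
    print(rate, f"{stop_and_wait_overhead(rate):.0%}")
# roughly: 300 bit/s -> ~2%, 2400 bit/s -> ~15%, 9600 bit/s -> ~42%
```

A sliding window of w packets hides this delay as long as sending w packets takes at least as long as the ACK takes to return, which is why a window of two to sixteen packets was enough for ordinary phone-line latencies but not for the much longer delays of services like X.25 or PC Pursuit.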
Improvements: Sliding windows are useful for latencies on the order of several packet lengths, which is the case for XMODEM on conventional phone lines. However, it is not enough to address longer latencies found on overseas phone calls or X.25 services such as PC Pursuit, where the latencies are on the order of a second or longer. In other cases, where the reverse channel was much slower than the sending one, as was the case for Telebit or US Robotics modems, even the small number of ACKs might overwhelm the return channel and cause the transfer to pause. Improvements: ZMODEM addressed these problems by removing the need for ACKs at all, allowing the sender to send data continually as long as the receiver detected no errors. Only NAKs had to be sent, if and only if there was a problem. Since ZMODEM was often used on links with built-in error correction, like X.25, the receiver would often not send a single message back to the sender. As a result, the system would send the entire file in a continual stream, and ZMODEM referred to itself as a "streaming protocol". Improvements: ZMODEM's performance was so improved over previous common protocols that it generally replaced even special protocols such as YMODEM-g, which included no error correction at all and instead relied on error-free links maintained by the modems. Although YMODEM-g was faster, the lack of other features such as restartable transfers made it less appealing. Improvements: Restart XMODEM, and most protocols based on it, managed packet order by prefixing the data with a packet number from 1 to 255. Windowed versions used this packet number to indicate which packets had been received properly, or specify one that had not. Since the packets were 128 bytes long, this meant the maximum amount of data that could be transferred before the packet numbers rolled over was 32 kB. Improvements: ZMODEM replaced the packet number with the actual location in the file, indicated by a 32-bit number. This allowed it to send NAK messages that re-wound the transfer to the point of failure, regardless of how long the file might be. This same feature was also used to re-start transfers if they failed or were deliberately interrupted. In this case, the receiver would look to see how much data had been previously received and then send a NAK with that location, automatically triggering the sender to start from that point. Improvements: Auto-start Auto-starting simplified management by allowing the sending machine to start the transfer. Previously the user had to first request the file from the sender, placing it into a "waiting" state, then return to their local program and invoke a command to start the transfer. With auto-transfer, they simply requested the file, the sender would then automatically trigger the transfer in the user's program. Variations: A number of modified versions of ZMODEM appeared. ZedZap was a variant of ZMODEM with 8 kbyte blocks for better performance on high-speed modems. LeechZmodem was a mischievous ZMODEM variant (among similar XMODEM and YMODEM derivatives) that cheated BBS download quotas. A backwards compatible extension of ZMODEM with 32 kbyte and 64 kbyte block lengths was created by ADONTEC in 2002 and 2007 to increase performance on high-speed error free connections like ISDN or TCP/IP networks. Variations: The most notable ZMODEM implementations were from Chuck Forsberg's Omen Technology, Inc. These included DSZ (DOS Send ZMODEM), GSZ (Graphical Send ZMODEM), and the ubiquitous (l)rzsz for Unix variants. 
Variations: In more current times, the developers of Synchronet have created a modern X/Y/ZMODEM implementation named SEXYZ, loosely based on the zmtx/zmrx package, which runs natively on Windows and Unix variants, supports long filenames and faster, more reliable data transfers. The ZMODEM implementation from SEXYZ has also been incorporated into the SyncTERM project. Synchronet, SEXYZ, and SyncTERM are all open-source, cross-platform, BBS-centric projects. Variations: Forsberg himself collected a number of improvements into ZMODEM-90. The first of these is MobyTurbo, which removed control quoting to further improve performance, about 15%. Even on networks that "eat" control characters, ZMODEM-90 can be tailored to quote only those characters the network actually eats, as opposed to every possible one. A similar improvement allows ZMODEM-90 to work on 7-bit networks, whereas earlier protocols (with the notable exception of Kermit) had all demanded 8-bits to one degree or another. Finally, ZMODEM-90 includes a basic run-length encoding compression system to further improve performance on uncompressed files. Limitations: Some of the ZMODEM packets (e.g. ZACK, ZRPOS) embed a byte-offset within the transferred file as a 32-bit unsigned integer. This design limits the feasibility of ZMODEM to only reliably transfer files that are under 4GB in size. Even though the protocol could permit it, the reference (l)rzsz implementation cannot encode arbitrary non-control characters (e.g. '~') which are often used by TCP/IP connection programs like telnet and ssh as client-side "terminal escape" characters. Users must disable the terminal escape feature to achieve reliable transfers over these kinds of links, e.g. ssh -e none user@hostname.
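The 4 GB ceiling mentioned above follows directly from carrying the file position in a 32-bit unsigned field. The sketch below is illustrative only: it is not the exact ZMODEM frame layout, but it shows how a resume offset round-trips through a four-byte field and silently wraps for files of 4 GiB or more.

```python
import struct

def encode_position(offset):
    """Pack a file offset into a 4-byte little-endian field, as a 32-bit
    position field (such as the one carried by ZRPOS/ZACK) would hold it."""
    return struct.pack("<I", offset & 0xFFFFFFFF)

def decode_position(raw):
    return struct.unpack("<I", raw)[0]

resume_at = 1_500_000_000                             # bytes the receiver already has
print(decode_position(encode_position(resume_at)))    # round-trips unchanged

too_big = 5 * 1024**3                                 # a 5 GiB offset...
print(decode_position(encode_position(too_big)))      # ...wraps around modulo 2**32
```

Resuming a transfer then amounts to the receiver reporting how many bytes it already holds and the sender seeking to that offset before streaming the rest, which is the crash-recovery behaviour described under Improvements above.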
**Berge's theorem** Berge's theorem: In graph theory, Berge's theorem states that a matching M in a graph G is maximum (contains the largest possible number of edges) if and only if there is no augmenting path (a path that starts and ends on free (unmatched) vertices, and alternates between edges in and not in the matching) with M. It was proven by the French mathematician Claude Berge in 1957 (though already observed by Petersen in 1891 and Kőnig in 1931). Proof: To prove Berge's theorem, we first need a lemma. Take a graph G and let M and M′ be two matchings in G. Let G′ be the graph resulting from the symmetric difference of M and M′, i.e. (M - M′) ∪ (M′ - M). Each connected component of G′ is one of the following: an isolated vertex; an even cycle whose edges alternate between M and M′; or a path whose edges alternate between M and M′, with distinct endpoints. The lemma can be proven by observing that each vertex in G′ can be incident to at most 2 edges: one from M and one from M′. Graphs in which every vertex has degree at most 2 consist of isolated vertices, cycles, and paths. Furthermore, each path and cycle in G′ must alternate between M and M′. In order for a cycle to do this, it must have an equal number of edges from M and M′, and therefore be of even length. Proof: Let us now prove the contrapositive of Berge's theorem: G has a matching larger than M if and only if G has an augmenting path. Clearly, an augmenting path P of G can be used to produce a matching M′ that is larger than M: just take M′ to be the symmetric difference of P and M (M′ contains exactly those edges of G that appear in exactly one of P and M). Hence, the reverse direction follows. Proof: For the forward direction, let M′ be a matching in G larger than M. Consider D, the symmetric difference of M and M′. Observe that D consists of paths and even cycles (by the lemma above). Since M′ is larger than M, D contains a component that has more edges from M′ than from M. Such a component is a path in G that starts and ends with an edge from M′, so it is an augmenting path. Corollaries: Corollary 1 Let M be a maximum matching and consider an alternating chain such that the edges in the path alternate between being and not being in M. If the alternating chain is a cycle or a path of even length starting on an unmatched vertex, then a new maximum matching M′ can be found by interchanging the edges found in M and not in M. For example, if the alternating chain is (m1, n1, m2, n2, ...), where mi ∈ M and ni ∉ M, interchanging them would make all ni part of the new matching and make all mi not part of the matching. Corollaries: Corollary 2 An edge is considered "free" if it belongs to some maximum matching but does not belong to all maximum matchings. An edge e is free if and only if, in an arbitrary maximum matching M, edge e belongs to an even alternating path starting at an unmatched vertex or to an alternating cycle. By the first corollary, if edge e is part of such an alternating chain, then a new maximum matching, M′, must exist and e would exist either in M or M′, and therefore be free. Conversely, if edge e is free, then e is in some maximum matching M but not in some other maximum matching M′. Since e is not part of both M and M′, it must still exist after taking the symmetric difference of M and M′. The symmetric difference of M and M′ results in a graph consisting of isolated vertices, even alternating cycles, and alternating paths.
Suppose to the contrary that e belongs to some odd-length path component. Then one of M and M′ must have one fewer edge than the other in this component, meaning that the component as a whole is an augmenting path with respect to that matching. But an augmenting path can be used to construct a larger matching (as shown in the reverse direction above), so that matching (whether M or M′) cannot be a maximum matching, which contradicts the assumption that both M and M′ are maximum. So, since e cannot belong to any odd-length path component, it must be in either an alternating cycle or an even-length alternating path.
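The augmenting step in the proof is easy to make concrete. The sketch below is a minimal, illustrative Python implementation (not taken from the source): it searches for an alternating path between two free vertices and then augments the matching by taking the symmetric difference with the path's edges. The naive depth-first search used here is adequate for small examples such as the one shown; a general-purpose maximum-matching algorithm (e.g. Edmonds' blossom algorithm) is needed to handle odd cycles robustly.

```python
def find_augmenting_path(vertices, edges, matching):
    """Return an M-augmenting path as a list of vertices, or None."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    mate = {}
    for u, v in matching:
        mate[u], mate[v] = v, u
    free = {v for v in vertices if v not in mate}

    def dfs(v, need_matched, visited):
        # Extend an alternating path ending at v; the next edge must be
        # a matching edge exactly when need_matched is True.
        for w in adj[v]:
            if w in visited or (mate.get(v) == w) != need_matched:
                continue
            if not need_matched and w in free:
                return [v, w]                 # reached another free vertex
            tail = dfs(w, not need_matched, visited | {w})
            if tail is not None:
                return [v] + tail
        return None

    for s in free:                            # augmenting paths start at free vertices
        path = dfs(s, need_matched=False, visited={s})
        if path is not None:
            return path
    return None


def augment(matching, path):
    """Symmetric difference of the matching with the path's edges."""
    path_edges = {frozenset(e) for e in zip(path, path[1:])}
    return [tuple(e) for e in {frozenset(e) for e in matching} ^ path_edges]


# Tiny example: the path graph a-b-c-d with the non-maximum matching {bc}.
V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d")]
M = [("b", "c")]
p = find_augmenting_path(V, E, M)   # ['a', 'b', 'c', 'd']
print(p, augment(M, p))             # augmented matching has two edges: ab and cd
```

Repeatedly augmenting until no augmenting path remains yields a maximum matching, which is exactly the content of the theorem.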
**Pi Arietis** Pi Arietis: Pi Arietis, Latinized from π Arietis, is the Bayer designation for a multiple star system in the northern constellation of Aries. Based upon parallax measurements made during the Hipparcos mission, this system is approximately 800 light-years (250 parsecs) distant from Earth and has an apparent visual magnitude of 5.21. This is bright enough to be faintly seen with the naked eye. Pi Arietis: The primary member of this system is a massive, B-type main sequence star with a stellar classification of B6 V. It is a close spectroscopic binary with an orbital period of 3.854 days, an eccentricity of 0.04, and a combined visual magnitude of 5.30. At an angular separation of 3.28 arcseconds is a magnitude 8.46 A-type main sequence star with a classification of A0 Vp. Finally, a fourth member of the system is a magnitude 11.0 F-type main sequence star with a classification of F8 V at an angular separation of 25.2 arcseconds from the primary. Name: This star, together with δ Ari, ε Ari, ζ Ari, and ρ3 Ari, formed Al Bīrūnī's Al Buṭain (ألبطين), the dual of Al Baṭn, the Belly. According to the catalogue of stars in the Technical Memorandum 33-507 - A Reduced Star Catalog Containing 537 Named Stars, Al Buṭain was the title for five stars: δ Ari as Botein, π Ari as Al Buṭain I, ρ3 Ari as Al Buṭain II, ε Ari as Al Buṭain III and ζ Ari as Al Buṭain IV. In Chinese, 左更 (Zuǒ Gēng), meaning Official in Charge of the Forest, refers to an asterism consisting of π Arietis, ν Arietis, μ Arietis, ο Arietis and σ Arietis. Consequently, the Chinese name for π Arietis itself is 左更五 (Zuǒ Gēng wǔ, English: the Fifth Star of Official in Charge of the Forest).
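The quoted distance is the straightforward reciprocal of the measured parallax. The snippet below only illustrates that conversion; the ~4 mas parallax plugged in is an assumed value chosen to be consistent with the stated 250 pc / 800 ly distance, not a figure taken from the article.

```python
def parallax_to_distance(parallax_mas):
    """Distance from annual parallax: d[pc] = 1000 / p[mas]."""
    parsecs = 1000.0 / parallax_mas
    light_years = parsecs * 3.2616       # 1 parsec is about 3.2616 light-years
    return parsecs, light_years

pc, ly = parallax_to_distance(4.0)       # assumed ~4 mas parallax, illustrative only
print(f"{pc:.0f} pc ≈ {ly:.0f} ly")      # ~250 pc, ~815 ly
```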
**DATATRIEVE** DATATRIEVE: DATATRIEVE is a database query and report writer tool originally from Digital Equipment Corporation. It runs on the OpenVMS operating system, as well as several PDP-11 operating systems. DATATRIEVE's command structure is nearly plain English, and it is an early example of a Fourth Generation Language (4GL). Overview: DATATRIEVE works against flat files, indexed files, and databases. Such data files are described using record definitions stored in the Common Data Dictionary (CDD), or in RMS files. DATATRIEVE is used at many OpenVMS installations. History: DATATRIEVE was developed in the late 1970s and early 1980s by a team of software engineers at DEC's Central Commercial Engineering facilities in Merrimack and Nashua, New Hampshire, under database architect Jim Starkey. Many of the project's engineers went on to highly visible careers in database management and other software disciplines. Version 1 for the PDP-11 was released in 1977; VAX DATATRIEVE was released in 1981 as part of the VAX Information Architecture. DATATRIEVE adopted the wombat as its notional mascot; the program's help file responded to “HELP WOMBAT” with factual information about real-world wombats. Examples of DATATRIEVE usage: DATATRIEVE queries and commands approach plain English sentence structure, though they would not be considered natural language, since a precise sentence structure must be used:

DTR> FOR FAMILIES WITH NUMBER_KIDS = 2
CON> PRINT KID_NAME, AGE OF KIDS WITH AGE GT 20

DATATRIEVE can also be used to modify data:

DTR> FOR FAMILIES MODIFY EACH_KID OF FIRST 1 KIDS
Enter KID_NAME:

DATATRIEVE can also cross multiple datasets, creating joined data views:

DTR> PRINT NAME, TYPE, PRICE OF
CON> YACHTS CROSS OWNERS OVER TYPE
**Acidic oxide** Acidic oxide: An acidic oxide is an oxide that either produces an acidic solution upon addition to water, or acts as an acceptor of hydroxide ions, effectively functioning as a Lewis acid. Acidic oxides will typically have a low pKa and may be inorganic or organic. A commonly encountered acidic oxide, carbon dioxide produces an acidic solution (via the formation of carbonic acid) when dissolved. The acidity of an oxide can be reasonably predicted from its constituent elements: less electronegative elements tend to form basic oxides such as sodium oxide and magnesium oxide, whereas more electronegative elements tend to produce acidic oxides, as seen with carbon dioxide and phosphorus pentoxide. Some oxides, like aluminium oxide, are amphoteric. Acidic oxides are of environmental concern: sulfur and nitrogen oxides are considered air pollutants, as they react with atmospheric water vapour to produce acid rain. Examples: Carbon dioxide is an illustrative example of the Lewis acidity of an acidic oxide: CO2 + 2 OH− ⇌ HCO3− + OH− ⇌ CO32− + H2O. This property is a key reason for keeping alkali chemicals well sealed from the atmosphere, as long-term exposure to carbon dioxide in the air can degrade the material. Carbon dioxide is also the anhydride of carbonic acid: H2CO3 → H2O + CO2. Other examples include chromium trioxide, which reacts with water to form chromic acid; dinitrogen pentoxide, which reacts with water to form nitric acid; and manganese heptoxide, which reacts with water to form permanganic acid. Further examples: Aluminium oxide Aluminium oxide (Al2O3) is an amphoteric oxide; it can act as a base or an acid. For example, with a base, different aluminate salts will be formed: Al2O3 + 2 NaOH + 3 H2O → 2 NaAl(OH)4. Silicon dioxide Silicon dioxide is an acidic oxide. It will react with strong bases to form silicate salts. Silicon dioxide is the anhydride of silicic acid: Si(OH)4 → 2 H2O + SiO2. Phosphorus oxides Phosphorus(III) oxide reacts with water to form phosphorous acid: P4O6 + 6 H2O → 4 H3PO3. Phosphorus(V) oxide reacts with water to give phosphoric acid: P4O10 + 6 H2O → 4 H3PO4. Sulfur oxides Sulfur dioxide reacts with water to form the weak acid sulfurous acid: SO2 + H2O → H2SO3. Sulfur trioxide forms the strong acid sulfuric acid with water: SO3 + H2O → H2SO4. This reaction is important in the manufacture of sulfuric acid. Further examples: Chlorine oxides Chlorine(I) oxide reacts with water to form hypochlorous acid, a very weak acid: Cl2O + H2O → 2 HOCl. Chlorine(VII) oxide reacts with water to form perchloric acid, a strong acid: Cl2O7 + H2O → 2 HClO4. Iron oxides Iron(II) oxide is the anhydride of the aqueous ferrous ion: [Fe(H2O)6]2+ → FeO + 2 H+ + 5 H2O. Chromium oxides Chromium trioxide is the anhydride of chromic acid: H2CrO4 → H2O + CrO3. Vanadium oxides Vanadium trioxide is the anhydride of vanadous acid: 2 V(OH)3 → 3 H2O + V2O3. Vanadium pentoxide is the anhydride of vanadic acid: 2 H3VO4 → 3 H2O + V2O5.
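To make the "produces an acidic solution" claim concrete, the sketch below estimates the pH of water equilibrated with carbon dioxide. The dissolved-CO2 concentration and the apparent first dissociation constant used here are illustrative, order-of-magnitude assumptions, not values taken from the article.

```python
import math

# Illustrative constants (assumed): dissolved CO2 under roughly
# atmospheric conditions, and the apparent first dissociation constant
# of carbonic acid (CO2(aq) + H2O <=> H+ + HCO3-).
co2_molar = 1.3e-5      # mol/L of dissolved CO2
ka1 = 4.5e-7

# For a weak acid with [H+] ~ [HCO3-] and little CO2 consumed:
# Ka1 ~ [H+]^2 / [CO2], so [H+] ~ sqrt(Ka1 * [CO2]).
h_plus = math.sqrt(ka1 * co2_molar)
print(f"pH ≈ {-math.log10(h_plus):.1f}")   # ≈ 5.6, the familiar pH of unpolluted rain
```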
**Chronic neutrophilic leukemia** Chronic neutrophilic leukemia: Chronic neutrophilic leukemia (CNL) is a rare myeloproliferative neoplasm that features a persistent neutrophilia in peripheral blood, myeloid hyperplasia in bone marrow, hepatosplenomegaly, and the absence of the Philadelphia chromosome or a BCR/ABL fusion gene. Signs and symptoms: The most common clinical finding is hepatosplenomegaly. Pruritus, gout, and mucocutaneous bleeding are occasionally seen. Cause: The cause of CNL is currently unknown. An association between CNL and multiple myeloma has been suggested based on the observation of myeloma in 20% of CNL cases. However, a clonal genetic abnormality has not been detected in these myeloma-associated cases of CNL, raising the possibility that the neutrophilia is a reaction to the neoplastic myeloma cells. The postulated cell of origin is a limited-potential, marrow-derived stem cell. Genetics: The majority (90%) of cases have no detectable cytogenetic abnormalities. Most importantly, the Philadelphia chromosome and other BCR/ABL fusion genes are not detected. Diagnosis: Laboratory findings Peripheral blood neutrophilia (> 25 × 10⁹/L) with myeloid precursors (promyelocytes, myelocytes, metamyelocytes) comprising less than 5% of leukocytes. Sites of involvement Peripheral blood, bone marrow, spleen, and liver are most common, but any organ or tissue can be infiltrated by neutrophils. Diagnosis: Bone marrow biopsy On both the bone marrow aspirate and the core biopsy, a hypercellular marrow with an increased myeloid:erythroid ratio of 20:1 or greater is seen. Myelocytes and neutrophils are increased, while blasts and promyelocytes are not increased. Due to the myeloproliferative nature of the disease, an increase in megakaryocytes and erythroid precursors may be observed, but dyspoiesis is not seen in any cell lineage. Also, reticulin fibrosis is rare. There is a reported association between CNL and multiple myeloma, so the bone marrow biopsy may show evidence of a plasma cell dyscrasia with increased numbers of atypical plasma cells. Diagnosis: Spleen Splenic infiltrates are typically found only in the red pulp. Liver Hepatic infiltrates can be found in the sinusoids, the portal triad regions, or both. Immunophenotype No distinct immunophenotypic abnormality for CNL has been described; see the 2013 OHSU findings on the CSF3R gene, mutation p.T618I. Epidemiology: This is a rare disease, with fewer than 100 cases reported. Of these cases, an equal male:female ratio was observed, with cases typically seen in older adults.
**Fluid (web browser)** Fluid (web browser): Fluid is a WebKit2-based site-specific browser (SSB) for Mac OS X created by Todd Ditchendorf. Its original WebKit-based version was compared to Mozilla Prism and mentioned in Lifehacker, TechCrunch, 43 Folders, the 37 Signals blog, and on InfoWorld as a way to make web applications more like native desktop applications. 1.0 milestone: On May 1, 2011, Fluid 1.0 was released with a completely new codebase. Fluid Apps created with previous versions of Fluid cannot be updated via software update and SSBs have to be re-created with Fluid 1.0 (to transition to version 1.0 and later). While version 1.0 is still a free app, a Fluid License can be purchased which will unlock extra features (some previously included by default in previous versions). On July 4, 2011, version 1.2 was released and featured compatibility with Mac OS X 10.7 Lion. 2.0 milestone: In July 2018, Fluid underwent another rewrite to take advantage of Apple's newer WebKit2 API with process separation, with the same licensing terms as 1.x versions. Subsequent minor versions restored feature support and added support for Dark Mode.
**Quarter period** Quarter period: In mathematics, the quarter periods K(m) and iK′(m) are special functions that appear in the theory of elliptic functions. The quarter periods K and iK′ are given by

K(m) = \int_0^{\pi/2} \frac{d\theta}{\sqrt{1 - m \sin^2 \theta}}

and

iK'(m) = iK(1 - m).

When m is a real number with 0 < m < 1, both K and K′ are real numbers. By convention, K is called the real quarter period and iK′ is called the imaginary quarter period. Any one of the numbers m, K, K′, or K′/K uniquely determines the others. These functions appear in the theory of Jacobian elliptic functions; they are called quarter periods because the elliptic functions sn u and cn u are periodic functions with periods 4K and 4iK′. However, the sn function is also periodic with a smaller period (in terms of the absolute value) than 4iK′, namely 2iK′. Notation: The quarter periods are essentially the elliptic integral of the first kind, obtained by making the substitution k² = m. In this case, one writes K(k) instead of K(m), understanding that the difference between the two depends notationally on whether k or m is used. This notational difference has spawned a terminology to go with it: m is called the parameter; m₁ = 1 − m is called the complementary parameter; k is called the elliptic modulus; k′ is called the complementary elliptic modulus, where k′² = m₁; α is the modular angle, where k = sin α; and π/2 − α is the complementary modular angle. Note that m = sin² α and m₁ = cos² α. Notation: The elliptic modulus can be expressed in terms of the quarter periods as k = ns(K + iK′) and k′ = dn K, where ns and dn are Jacobian elliptic functions. The nome q is given by q = e^{−π K′/K}. The complementary nome is given by q₁ = e^{−π K/K′}. The real quarter period can be expressed as a Lambert series involving the nome:

K = \frac{\pi}{2} + 2\pi \sum_{n=1}^{\infty} \frac{q^n}{1 + q^{2n}}.

Additional expansions and relations can be found on the page for elliptic integrals.
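A small numerical check, sketched below under the assumption that SciPy is available, computes K and K′ from the parameter m, forms the nome, and compares the Lambert series above against the directly evaluated integral.

```python
import numpy as np
from scipy.special import ellipk   # complete elliptic integral of the first kind, K(m)

m = 0.5                            # parameter, 0 < m < 1
K = ellipk(m)                      # real quarter period K(m)
K_prime = ellipk(1.0 - m)          # K'(m) = K(1 - m)

q = np.exp(-np.pi * K_prime / K)   # nome
q1 = np.exp(-np.pi * K / K_prime)  # complementary nome

# Truncated Lambert-series evaluation of K from the nome
n = np.arange(1, 200)
K_series = np.pi / 2 + 2 * np.pi * np.sum(q**n / (1 + q**(2 * n)))

print(K, K_series)                 # the two values agree closely (~1.8541 for m = 0.5)
print(q, q1)
```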
**Mouse models of breast cancer metastasis** Mouse models of breast cancer metastasis: Breast cancer metastatic mouse models are experimental approaches in which mice are genetically manipulated to develop a mammary tumor leading to distant focal lesions of mammary epithelium created by metastasis. Mammary cancers in mice can be caused by genetic mutations that have been identified in human cancer. This means models can be generated based upon molecular lesions consistent with the human disease. Breast cancer metastasis: Metastasis is a process of migration of tumour cells from the primary cancer site to a distant location where the cancer cells form secondary tumors. Metastatic breast cancer represents the most devastating attribute of cancer and it is considered an advanced-stage event. Human breast cancer metastasizes to multiple distant organs such as the brain, lungs, bones and liver. Breast cancer metastasis: Genetic diversity between primary and metastatic tumor The classical theory developed in the early 70's anticipated that metastasis is due to genetically determined subpopulations in primary tumours. The genetic variance between metastatic foci is significant for only particular locus and within specific cell populations or only one-cell population shows differences and some loci are divergent only in one cell subpopulation. This explains the concept of tumour heterogeneity and the order of genetic events during tumor evolution. Many of the genes driving the growth at primary site can determine the dissemination and colonization at the ectopic site. Breast cancer is consensually considered genetically and clinically as a heterogeneous disease, in that it reflects the heterogeneity of the normal breast tissue at its origin17873350. A number of discrete genetic events have to occur in order to enable individual tumor cells that have the capacity to grow at an ectopic site. The metastatic progression depends on the regulation of developmental programs and environmental events. The metastatic potential of sub populations within mouse mammary cells is now considered as relatively an early event and dissemination occurs at the same time of pre invasive or micro-invasive lesions. The genetic profiles of primary and metastatic lesions in breast carcinomas show a large extent of clonal pertinence between lesions. There are various patterns of prevalence of genetic mutations in the genomes of primary breast tumour and its metastases. It also confirms the genetic heterogeneity between the primary neoplasm of breast cancer patients and their respective metastases. Breast cancer metastasis: Genes involved in organ specific metastasis Breast cancer phenotypes periodically express genes in metastasis that are indispensable for the metastatic process. Metastatic diversity is mediated by the activation of genes that act as coupling to organ-specific growth. The growth of lesions at the ectopic site depends on multiple complex interactions between metastatic cells and host homeostatic mechanisms. Lethal protein-protein interactions at the metastatic site aid the survival of adapted cells. Generating mouse models of breast cancer: Targeted expression of oncogenes in mouse mammary epithelial cells is a way of modeling human breast cancer. Mutation or over expression of oncogenes can be kept under controlled expression in a very specific cellular context rather than throughout the organism. Another way to model human breast cancer is done through the targeted inhibition of a tumor suppressor gene. 
Mice in genetic research In 1909, Clarence C. Little developed the first inbred strain, the DBA (Dilute, brown non-Agouti) mouse. In 1915, N.M Haldane identified first linkage in mouse between Albino mice and pink eye dilution on chromosome seven. In 1921, C57BL became one of the most widely used mice in genetics and was the first strain to have its genome sequenced. In 1982, Palmiter and Brinster implanted a foreign gene into fertilized egg, finally generating the first transgenic mice genetically engineered to express dominant oncogenes. In 1982, the stimulation of expression from the MMTV-LTR (Mouse mammary tumor virus-Long terminal repeat) was done by multiple rounds of pregnancy and lactation to evaluate the relevance of a cellular proto-oncogene, c-myc. Generating mouse models of breast cancer: Human and mouse: a genomic comparison Genetic studies of common diseases in humans suffer significant limitations for practical and ethical reasons. Human cell lines can be used to model disease but it is difficult to study processes at the tissue level, within an organ or across the entire body. Mice can be a good representation of diseases in humans because:. Generating mouse models of breast cancer: There are close similarities of physiology, development and cell biology between mice and humans. Humans and mice both have around 30,000 protein-coding genes. The number of mouse genes without a corresponding human homologue is less than 1%. 90% of the human and mouse genomes are syntenic. 40% of both human and mouse genomes can be aligned at the nucleotide level. Mice have relatively short gestation periods. Mice take a brief time to reach sexual maturity. Mice have large litter sizes. Generating mouse models of breast cancer: The availability of hundreds of mutations affecting almost every tissue and aspect of development.Mice may not be an ideal model for breast cancer. This is mainly due to the lack of precision in many of the models. When looking at metastasis, it is difficult to determine the precise location as well as its frequency. Another issue revolves around the epithelial sub types and the inability to specifically target them when targeting a mutation. An example of this would be determining the development of tumors in K14-Cre BRCA2 mice. In a standard case, the excision of BRCA2 resulted in no tumorgenesis, but if p53 was mutated and inactivated, tumorgenesis would occur. Therefore, there is not a definitive answer in terms of the origin of the tumor, due to the extra mutation in p53. Generating mouse models of breast cancer: Metastatic mouse mammary carcinoma cell lines Various mouse mammary carcinoma cell lines, like 4T1 and TS/A, are metastatic in syngeneic immunocompetent mice and can be used to identify genes and pathways involved in the metastatic process. Generating mouse models of breast cancer: Simple tumor transplantation models Transplantation of tumor cells into immunodeficient mice is a tool to study breast cancer and its metastatic effects. The transplantation occurs as either allotransplants or xenographic transplants. Commonly, human cells are inoculated in an immunocompromised murine recipient. Inoculating cells through intra ductal transplantations, by cleared mammary fat pad injections or by transplantations into the tail vein. 
Different organs can be seeded with breast cancer cells depending on the route of injection: cardiac injection seeds bone, tail vein injection seeds the lungs, splenic injection seeds the liver, and carotid artery injection seeds the brain. Tumor tissue transplant models The immunodeficient mice typically used are NOD/SCID mice (non-obese diabetic/severe combined immunodeficient). These mutations allow for the integration of new xenograft tissue. The mice must first have their mammary fat pads humanized by injecting human telomerase-immortalized mammary stromal fibroblasts (RMF/EG fibroblasts) into the mammary fat pads. Without this injection, the human mammary epithelial cells engrafted onto the pad are unable to colonize and grow. The RMF/EG fibroblasts must then be irradiated to allow the expression of key proteins and growth factors. After 4 weeks of development, the newly engrafted human mammary epithelial cells have expanded within the fat pad. Generating mouse models of breast cancer: Genetically engineered mice to study metastasis Genetically engineered mice are constructed to model human phenotypes and pathologies. Mutant mice may carry transgenes introduced by different methods: a bacteria-derived tetracycline-inducible system permitting genes to be switched on or off (the Tet-On/Tet-Off system); targeted mutations made by knocking in a gene or knocking out a sequence using the Cre-Lox recombination system; introduction of retroviral mutations; and introduction of chemically induced mutations. Transgenic mouse models of breast cancer The mice undergoing the process of transgenesis are known as transgenic mice. A basic transgene has a promoter region, a protein-coding sequence, an intron, and a stop codon. Mouse mammary tumor virus (MMTV) is a retrovirus known to act as a promoter of breast tumors once activated. MMTV is a heritable somatic mutagen whose target range is limited. It harbors a regulatory DNA sequence called the long terminal repeat (LTR), which promotes steroid-hormone-inducible transcription. Tumorigenesis induced by the mouse mammary tumor virus can also occur by integration of the viral genome; the sites of integration have been found to be critical genes for cellular regulation. Generating mouse models of breast cancer: Whey acidic protein (WAP) is another common promoter used to generate mouse mammary cancer models. For a list of other mammary gland specific promoters and mouse models see. Generating mouse models of breast cancer: MMTV-PyMT MMTV-PyMT is a model of breast cancer metastasis in which the MMTV-LTR is used to drive the expression of mammary gland specific polyomavirus middle T-antigen, leading to a rapid development of highly metastatic tumors. MMTV-PyMT is the most commonly used model for the study of mammary tumor progression and metastasis. MMTV-PyMT mice are then crossbred with other genetically modified mice to generate various types of breast cancer models, including: PI3K/Akt signalling in metastasis can be demonstrated in MMTV-PyMT; Akt1−/− mice. Generating mouse models of breast cancer: The chemoattractive paracrine loop of colony-stimulating factor-1 (CSF-1) and EGF ligands between tumor-associated macrophages (TAMs) and tumor cells, and the resulting lung metastasis, can be studied by crossing MMTV-PyMT mice with Csf-1−/− mice. The role of an innate and adaptive immune response in assisting metastasis can be studied in MMTV-PyMT; Rag1−/− mice in which CD4+ T cells are selectively lost. Interleukin-4 (IL4) lacking model of MMTV-PyMT; IL4−/− mice.
Role of the adhesion molecule CD44 in lung metastasis. Conditional ablation in MMTV-PyMT breast cancer cells has been done to reveal pro-metastatic actions of the angiogenic factors, Vascular endothelial growth factor A (VEGF-A). The role of autocrine transforming growth factor beta 1(TGF-β1) signaling on motility and survival in PymT cells derived from an MMTV-PymT mouse mammary cancer. Others are MMTV-PyMT; uPA-/- and MMTV-PyMT; MEKK1-/-. MMTV-HER2/neu The MMTV-LTR can also be used to promote receptor tyrosine-protein kinase ErbB2 to transform the mouse mammary epithelium. ErbB2 is an oncogene amplified and overexpressed in around 20% of human breast cancers. The mice harbouring this oncogene develop multifocal adenocarcinomas with lung metastases at about 15 weeks after pregnancy. Generating mouse models of breast cancer: To create a more accurate representation of HER2 gene mutations, researchers have fused the mouse gene containing neu and a rat gene containing neu. This addresses the issue in terms of modeling the amplification of HER2 in mice development. In the non-fused mouse, the mammary gland would revert to a near virgin, but with this addition the mammary gland maintained the developed function. Generating mouse models of breast cancer: Bi-transgenic models Mouse models having two transgenes are called bi transgenic. To check the cooperation of two oncogenes, Tim Stewert and group made the first bi-transgenic mouse models in 1987, MMTV-Myc and MMTV- Ras mice were crossed with a resulting acceleration in tumorigenesis. Expression of TGFβ in the breast cancer cells of MMTV-ErbB2; MMTV-TGFβ double-transgenic mice can induce higher levels of circulating tumor cells and lung metastasis. Ras gene can be combined with rtTA (reverse tetracycline transactivator) to generate bi-transgenic inducible mouse model through tetracycline-controlled transcriptional activation e.g. mice carrying TetO-KrasG12D (TOR) and MMTV-rtTA (MTB), comes with the transgene expressing the reverse tetracycline transactivator (rtTA) in mammary epithelial cells. Generating mouse models of breast cancer: Tri-transgenic models Tri-transgenic mouse models constitute of more than two genes. Multiple combinations and genetic modifications are made in such a way that either one or all the genes are put into a continuously expressed status, or in a controlled fashion to activate them at different time points. For example, TOM( TetO-myc); TOR; MTB mice, where both the myc (M) and ras (R) genes are under the control of tetracycline operators. They can also both be activated or deactivated by adding doxycycline. Other combinations in this respect are TOM; Kras; MTB, where myc can be induced and uninduced at various time points while Kras is in continuous expressed state, and myc; TOR; MTB model is vice versa. Applications of genetically modified mice to study metastasis: Metastatic cascade can be studied by keeping the gene activation under control or by adding a reporter gene e.g. Beta actin GFP (Green fluorescent protein) or RFP (Red fluorescent protein). Applications of genetically modified mice to study metastasis: Identification of genes that regulate metastasis By knocking in/knocking out specific genes by homologous recombination, the extent of metastasis can be measured and new target genes identification can be achieved e.g. a gene that consistently regulates metastatic behavior of cancer cells is TGF-β1. 
Acute ablation of TGF-β signaling in MMTV-PyMT mammary tumor cells leads to a five-fold increase in lung metastasis. Applications of genetically modified mice to study metastasis: Certain enhancer regions can also be analyzed and can be determined to be a crucial part of cell proliferation e.g. an enhancing region that is associated with a cancer critical gene p53 which was determined via CRISPR-Cas9. Applications of genetically modified mice to study metastasis: Lineage tracing in metastasis models The quantitative lineage-tracing strategies have proven to be successful in resolving cell fates in normal epithelial tissues either using tissue –specific or stem-cell-specific transgenes. To conduct an inducible lineage-tracing experiment two components must be engineered into the mouse genome: a switch and a reporter. The switch is commonly a drug-regulated form of the bacterial enzyme Cre-recombinase. This enzyme recognizes specific sequences, called LoxP sites. Proteins that are capable of enhancing the identification of labeled cells or a specific population in unlabelled cells are encoded by the reporter transgenes. After harvesting all the ten mouse mammary glands from the transgenic mice, single cell suspension is usually made and transplanted either in tail vein of non transgenic recipient mice or in cleared fat pad of non-transgenic mice repopulating the mammary fat pad. These cells are then followed in the blood stream, lungs, bone marrow and liver to look for the favorable site of metastasis.these transgenic cells can be traced according to their special features of either fluorescence or induced by placing the recipients on doxycycline food. Applications of genetically modified mice to study metastasis: Circulating tumor cells Another tool to study breast cancer metastasis is to look for circulating tumor cells in transgenic mice e.g. MMTV-PyMT mice can respond to various therapies in shedding tumor cells in the blood leading to lung metastasis. Not only in blood but cells can be detected in bone marrow e.g. cytokeratin-positive cells in the bone marrow of MMTV-pyMT and MMTV-Neu transgenic mice were identified but not in the wild type controls. Applications of genetically modified mice to study metastasis: Limitations In the absence of specific markers for mammary cells, models with genetic marking of tumor cells gives the best experimental advantage, however the low volume of peripheral blood that can be obtained from live animals limits the application of this technique. In vivo imaging of metastatic mouse models: Transgenic mouse models can be imaged by various non-invasive techniques. In vivo imaging of metastatic mouse models: Bioluminescence imaging Bioluminescence imaging relies on the detection of light produced by the enzymatic oxidation of an exogenous substrate. The substrate luciferin, is oxidized to oxyluciferin in the presence of luciferase and emits light, which can be detected using an IVIS system such as a Xenogen machine. Dissociated mammary cells from MMTV-PyMT: IRES: Luc; MTB (Internal ribosome entry site: Luciferin) animals (which were not exposed to doxycycline) can be injected into the lateral tail veins of immunodeficient mice on a doxycycline-free diet. No bioluminescence signal will be observed in the lungs of recipient mice until they are given doxycycline food. Bioluminescence can then be detected in the chest within 2 weeks of the start of doxycycline exposure. Luciferase is injected just before taking the images. 
In vivo imaging of metastatic mouse models: Fluorescent imaging Intravital microscopy with multi photon excitation is a technique to visualize genetically engineered cells directly in vivo. Multi step metastatic cascades can be visualized by labelling with unique fluorescent colour under fluorescence microscope. Radioisotopic imaging Positron emission tomography (PET), single photon emission computed tomography (SPECT) and computed tomography (CT) have been used to compare the efficiency of these in vivo imaging for detecting lesions at an early stage and to evaluate the response to chemotherapy. In vivo imaging of metastatic mouse models: MRI Imaging Magnetic resonance imaging requires the use of nano-particles(liposomes) and an MRI contrast agent called gadolinium. The particles were then placed in vesicles via a polycarbonate membrane filter. The nano-particles are injected into the metastases evolved mice, and left there for twenty-four hours. These mice are then scanned, and in the imaging software there are accumulations of these particles in certain areas where cells have metastasized.
**Cancrinite** Cancrinite: Cancrinite is a complex carbonate and silicate of sodium, calcium and aluminium with the formula Na6Ca2[(CO3)2|Al6Si6O24]·2H2O. It is classed as a member of the feldspathoid group of minerals, which resemble the alkali feldspars but are poor in silica. Yellow, orange, pink, white or even blue, it has a vitreous or pearly luster, a hardness of 5–6, and an uneven conchoidal fracture. It is unusual among the silicate minerals in that it will effervesce with hydrochloric acid due to the associated carbonate ions. Cancrinite: Found originally in 1839 in the Ural Mountains, it is named after Georg von Cancrin, a Russian minister of finance.
**Backhoe loader** Backhoe loader: A backhoe loader, also called a loader backhoe, loader excavator, digger in layman's terms, or colloquially shortened to backhoe within the industry, is a heavy equipment vehicle that consists of a tractor-like unit fitted with a loader-style shovel/bucket on the front and a backhoe on the back. Due to its (relatively) small size and versatility, backhoe loaders are very common in urban engineering and small construction projects (such as building a small house, fixing urban roads, etc.) as well as developing countries. This type of machine is similar to and derived from what is now known as a TLB (Tractor-Loader-Backhoe), which is to say, an agricultural tractor fitted with a front loader and rear backhoe attachment. Backhoe loader: The true development of the backhoe actually began in 1947 by the inventors that started the Wain-Roy Corporation of Hubbardston, Massachusetts. In 1947 Wain-Roy Corporation developed and tested the first actual backhoes. In April 1948 Wain-Roy Corporation sold the very first all hydraulic backhoes, mounted to a Ford Model 8N tractor, to the Connecticut Light and Power Company for the sum of $705. History: Evolving in parallel to development in the U.S., backhoes were first produced in the UK in 1953 by JCB, but it was just a prototype. The world's first backhoe loader with factory warranty was introduced in the U.S. by J.I. Case in 1957. Their Model 320 was the world's first serial backhoe loader. Although based on a tractor, a backhoe loader was and is almost never called a tractor when both the loader and the backhoe are permanently attached. Backhoe loaders are also not generally used for towing and usually do not have a power take-off (PTO) as often this is used to drive the hydraulic pump operating the attachments. When the backhoe is permanently attached, the machine usually has a seat that can swivel to the rear to face the hoe controls. Removable backhoe attachments almost always have a separate seat on the attachment itself. History: In Britain and Ireland they are commonly referred to simply as JCBs; they are popularly called "JCB" in India. In the United States, they are often referred to as "backhoes", although the term 'backhoe' only refers to one component. In Russia they are referred as excavator-loaders. History: In 1970, Hy-Dynamic, now a division of Bucyrus-Erie, manufacturer of the Dynahoe, was the first company to incorporate a four-wheel drive system into their backhoe loaders, allowing these models to go over almost any terrain with little difficulty. Since the backhoe was invented, several companies such as Caterpillar and John Deere have changed the backhoe's back arm to be slightly curved like that of an excavator, which can allow more maneuverability. Use: Backhoe loaders are very common and can be used for a wide variety of tasks: construction, small demolitions, light transportation of building materials, powering building equipment, digging holes/excavation, landscaping, breaking asphalt, and paving roads. Often, the backhoe bucket can also be replaced with powered attachments such as a breaker, grapple, auger, or a stump grinder. Enhanced articulation of attachments can be achieved with intermediate attachments such as the tiltrotator. Many backhoes feature quick coupler (quick-attach) mounting systems and auxiliary hydraulic circuits for simplified attachment mounting, increasing the machine's utilization on the job site. 
Some loader buckets have a retractable bottom or "clamshell", enabling it to empty its load more quickly and efficiently. Retractable-bottom loader buckets are also often used for grading and scraping. The front assembly may be a removable attachment or permanently mounted. Use: Because digging while on tires intrinsically causes the machine to rock, and the swinging weight of the backhoe could cause the vehicle to tip, most backhoe loaders use hydraulic outriggers or stabilizers at the rear when digging and lower the loader bucket for additional stability. This means that the bucket must be raised and the outriggers retracted when the vehicle needs to change positions, reducing efficiency. For this reason many companies offer miniature tracked excavators, which sacrifice the loader function and ability to be driven from site to site, for increased digging efficiency. Use: Their relatively small frame and precise control make backhoe-loaders very useful and common in urban engineering projects such as construction and repairs in areas too small for larger equipment. Their versatility and compact size makes them one of the most popular urban construction vehicles. For larger projects, a tracked excavator is generally used. In recent years, small compact tractors have become very popular with private homeowners. Subcompact tractors, the size between a compact tractor and lawn tractor, are also often sold in backhoe loader setup, sometimes with a belly-mounted mower also included. These tractors offer private homeowners the ability to perform minor excavation projects. In popular culture: Scoop from Bob the Builder is a yellow Backhoe Loader Diggs from Construction Site is a blue Backhoe Loader
**Reality–virtuality continuum** Reality–virtuality continuum: The virtuality continuum is a continuous scale ranging between the completely virtual, a virtuality, and the completely real, reality. The reality–virtuality continuum therefore encompasses all possible variations and compositions of real and virtual objects. It has been described as a concept in new media and computer science, but in fact it could be considered a matter of anthropology. The concept was first introduced by Paul Milgram.The area between the two extremes, where both the real and the virtual are mixed, is called mixed reality. This in turn is said to consist of both augmented reality, where the virtual augments the real, and augmented virtuality, where the real augments the virtual. Overview: This continuum has been extended into a two-dimensional plane of virtuality and mediality. Taxonomy of reality, virtuality, mediality. The origin R denotes unmodified reality. A continuum across the virtuality axis, V, includes reality augmented with graphics (augmented reality), as well as graphics augmented by reality (augmented virtuality). However, the taxonomy also includes modification of reality or virtuality or any combination of these. The mediality axis denotes changes. The modification is denoted by moving up the mediality axis. Further up this axis, for example, we can find mediated reality, mediated virtuality, or any combination of these. Further up and to the right, we have virtual worlds that are responsive to a severely modified version of reality. The virtuality continuum has grown and progressed past labels such as computer science and new media. As the concept has much to do with the way in which humans continue to change how they communicate; the way in which identities form and the way in which they interact to and within the world; it is more accurately described as a subject within anthropology.Changes in attitudes towards and the increase in availability of technology and media have changed and progressed the way it is used. One to one (SMS), one to many (email), and many to many (chat rooms), have become ingrained in society. The use of such items have made once clear distinctions like online and offline obsolete, and the distinctions between reality and virtuality have become blurred as people are incorporating and relying heavily upon virtuality within their everyday personal realities.Daniel Miller and Don Slater are prominent researchers pursuing the concept of the virtuality continuum and the media and its effect on communities, especially in the Caribbean, most notably Trinidad and Jamaica. Overview: Steve Woolgar is another researcher who has established four rules of virtuality. These are: The way in which media and technology affect people relies on their non-information communication technology (ICT) related background which may include gender, age, social status, income amongst others. Risks and fears in regards to new media and technology are unevenly socially distributed. Advancements in media and technology supplement rather than replace existing activities in reality. New media and technology tends to create new kinds of localism rather than furthering globalization.
**Stereopsis recovery** Stereopsis recovery: Stereopsis recovery, also recovery from stereoblindness, is the phenomenon of a stereoblind person gaining partial or full ability of stereo vision (stereopsis). Stereopsis recovery: Recovering stereo vision as far as possible has long been established as an approach to the therapeutic treatment of stereoblind patients. Treatment aims to recover stereo vision in very young children, as well as in patients who had acquired but lost their ability for stereopsis due to a medical condition. In contrast, this aim has normally not been present in the treatment of those who missed out on learning stereopsis during their first few years of life. In fact, the acquisition of binocular and stereo vision was long thought to be impossible unless the person acquired this skill during a critical period in infancy and early childhood. This hypothesis normally went unquestioned and formed the basis for the therapeutic approaches to binocular disorders for decades, but it has been called into doubt in recent years. In particular, since studies on stereopsis recovery began to appear in scientific journals and it became publicly known that neuroscientist Susan R. Barry achieved stereopsis well into adulthood, that assumption is in retrospect considered to have held the status of a scientific dogma. Very recently, there has been a rise in scientific investigations into stereopsis recovery in adults and youths who have had no stereo vision before. While it has now been shown that an adult may gain stereopsis, it is currently not yet possible to predict how likely a stereoblind person is to do so, nor is there general agreement on the best therapeutic procedure. The possible implications for the treatment of children with infantile esotropia are also still under study. Clinical management of strabismus and stereoblindness: In cases of acquired strabismus with double vision (diplopia), it is the long-established state of the art to aim at curing the double vision and at the same time recovering the patient's earlier ability for stereo vision. For example, a patient may have had full stereo vision but later developed diplopia due to a medical condition and lost stereo vision. In this case, medical interventions, including vision therapy and strabismus surgery, may remove the double vision and recover the stereo vision which had temporarily been absent in the patient. Clinical management of strabismus and stereoblindness: Also, when children with congenital (infantile) strabismus (e.g. infantile esotropia) receive strabismus surgery within the first few years of their life, this goes along with the hope that they may yet develop their full potential for binocular vision, including stereopsis. Clinical management of strabismus and stereoblindness: In contrast, in a case where a child's eyes are straightened surgically after the age of about five or six years and the child had no opportunity to develop stereo vision in early childhood, the clinical expectation is normally that this intervention will lead to cosmetic improvements but not to stereo vision. Conventionally, no follow-up for stereopsis was performed in such cases. Clinical management of strabismus and stereoblindness: For instance, one author summarized the accepted scientific view of the time with the words: "Stereopsis will never be obtained unless amblyopia is treated, the eyes are aligned, and binocular fusion and function are achieved before the critical period for stereopsis ends.
Clinical data suggest that this occurs before 24 months of age,[...] but we do not know exactly when it occurs, because crucial pieces of basic science information are missing." For purposes of illustration, a book of doctors' handouts for patients, written for the general public and published in 2002, summarizes the limitations, in terms that were at the time fully accepted as the medical state of the art, as follows: "If an adult has a childhood strabismus that was never treated, it is too late to improve any amblyopia or depth perception, so the goal may be simply cosmetic – to make the eyes appear to be properly aligned – though sometimes treatment does enlarge the extent of side vision." It has only very recently been accepted that the therapeutic approach was based on an unquestioned notion that has since been referred to as a "myth" or "dogma". Clinical management of strabismus and stereoblindness: Recently, however, stereopsis recovery is known to have occurred in a number of adults. While this has in some cases occurred after visual exercises or spontaneous visual experiences, the medical community's view of strabismus surgery has recently also become more optimistic with regard to outcomes in terms of binocular function and possibly stereopsis. As one author states: "The majority of adults will experience some improvement in binocular function after strabismus surgery even if the strabismus has been longstanding. Most commonly this takes the form of an expansion of binocular visual fields; however, some patients may also regain stereopsis." Scientific investigations on residual neural plasticity in adulthood now also include studies on the recovery of stereopsis. It is now a matter of active scientific investigation under which conditions and to what degree binocular fusion and stereo vision can be acquired in adulthood, especially if the person is not known to have had any preceding experience of stereo vision, and how outcomes may depend on the patient's history of therapeutic interventions. Examples and case studies: Stereopsis recovery has been reported to have occurred in a few adults as a result of either medical treatments, including strabismus surgery and vision therapy, or spontaneously after a stereoscopic 3D cinema experience. Examples and case studies: Personal reports in Fixing My Gaze: The most renowned case of regained stereopsis is that of neuroscientist Susan R. Barry, who had had alternating infantile esotropia with diplopia, but no amblyopia, underwent three surgical corrections in childhood without achieving binocular vision at the time, and recovered from stereoblindness in adult age after vision therapy with optometrist Theresa Ruggiero. Barry's case has been reported on by neurologist Oliver Sacks. David H. Hubel, winner of the 1981 Nobel Prize in Physiology or Medicine with Torsten Wiesel for their discoveries concerning information processing in the visual system, also commented positively on her case. In 2009, Barry published the book Fixing My Gaze: A Scientist's Journey into Seeing in Three Dimensions, reporting on her own and several other cases of stereopsis recovery. In Fixing My Gaze, Barry gives a detailed description of her surprise, elation and subsequent experiences when her stereo vision suddenly set in. Examples and case studies: Hubel wrote of her book: Her book includes reports of further persons who have had similar experiences with stereopsis recovery.
Barry cites the personal experiences of several persons, including a man, an artist, who described his experience of seeing with stereopsis as being able to see "one hundred more times negative space"; a woman who had been amblyopic before seeing in 3D, who described how empty space now "looks and feels palpable, tangible—alive!"; a woman who had been strabismic since age two and saw in 3D after taking vision therapy, who stated that "The coolest thing is the feeling you get being 'in the dimension'"; a woman who felt quite alarmed at the experience of suddenly seeing roadside trees and signs looming towards her; and two women who experienced an abrupt onset of stereo vision with a wide-angled view of the world, the first stating: "I was able to take in so much more of the room than I did before" and the second: "It was very dramatic as my peripheral vision suddenly filled in on both sides". Common to Barry and at least one person she reported on is the finding that their mental representation of space also changed after they acquired stereo vision: even with one eye closed, they feel that they see "more" than they did with one eye closed before recovering stereopsis. Examples and case studies: Further cases in the media: Apart from Barry, another formerly stereoblind adult whose acquired ability for stereopsis has received media attention is neuroscientist Bruce Bridgeman, professor of psychology and psychobiology at the University of California, Santa Cruz, who had grown up nearly stereoblind and acquired stereo vision spontaneously in 2012, at the age of 67, when watching the 3D movie Hugo with polarizing 3D glasses. The scene suddenly appeared to him in depth, and the ability to see the world in stereo stayed with him even after leaving the cinema. Examples and case studies: Other first-person accounts: Michael Thomas has described the experience of instantaneous onset of three-dimensional vision at the age of 69 in a public Facebook post. Recent scientific investigations: There is a growing recent body of scientific literature on investigations into the recovery of stereopsis in adults, which started to appear shortly before Oliver Sacks's publication in The New Yorker drew public attention to Barry's discovery. A number of scientific publications have systematically assessed patients' post-surgical stereopsis, whereas other studies have investigated the effects of eye training procedures. Recent scientific investigations: Post-surgical stereopsis: Certain conditions are known to be a prerequisite for stereo vision, for instance, that the amount of horizontal deviation, if any is present, needs to be small. In several studies it has been recognized that surgery to correct strabismus can have the effect of improving binocular function. One of these studies, published in 2003, explicitly concluded: "We found that improvement in binocularity, including stereopsis, can be obtained in a substantial portion of adults."
That article was published together with a discussion of the results among peers in which the scientific and social implications of the medical treatment were addressed, for example concerning the long-term relevance of stereopsis, the importance of avoiding diplopia, the necessity of predictable outcomes, and psychosocial and socioeconomic relevance. Among the investigations into post-surgical stereopsis is a publication of 2005 that reported on a total of 43 adults over 18 years of age who had surgical correction after having lived with constant horizontal strabismus for more than 10 years with no previous surgery or stereopsis, and with visual acuity of 20/40 or better in the deviating eye as well; in this group, stereopsis was present in 80% of exotropes and 31% of esotropes, with the recovery of stereopsis and stereoacuity being uncorrelated with the number of years the deviation had persisted. A study published in 2006 included, alongside an extensive review of investigations on stereopsis recovery over the preceding decades, a re-evaluation of all those patients who had had congenital or early-onset strabismus with a large constant horizontal divergence and had undergone strabismus surgery in the years 1997–1999 in a given clinic, excluding those who had a history of neurologic or systemic diseases or organic retinal diseases. Among the resulting 36 subjects aged 6–30 years, many had regained binocular vision (56% according to an evaluation with Bagolini striated glasses, 39% with the Titmus test, 33% with the Worth 4-dot test, and 22% with the Random dot E test) and 57% had stereoacuity of 200 seconds of arc or better, leading to the conclusion that some degree of stereopsis can be achieved even in cases of infantile or early-childhood strabismus. Another study found that some chronically strabismic adults with good vision could recover fusion and stereopsis by means of surgical alignment. In contrast, in a study of a group of 17 adults and older children of at least 8 years of age, all of whom received strabismus surgery and post-operative evaluation after long-standing untreated infantile esotropia, most showed binocular fusion when tested with Bagolini lenses and an increased visual field, but none demonstrated stereo fusion or stereopsis. Stereoacuity is limited by the visual acuity of the eyes, and in particular by the visual acuity of the weaker eye. That is, the more a patient's vision in either eye is degraded compared to the 20/20 vision standard, the lower the prospects of improving or regaining stereo vision, unless visual acuity itself were improved by other means. Strabismus surgery itself does not improve visual acuity. Recent scientific investigations: Stereopsis following training procedures: Orthoptic exercises have proven to be effective for reducing symptoms in patients with convergence insufficiency and decompensating exophoria by improving the near-point convergence of the eyes that is necessary for binocular fusion. Experiments on monkeys, published in 2007, revealed improvements in stereoacuity in monkeys who, after having been raised with binocular deprivation through prisms for the first two years, were exposed to extensive psychophysical training. Their stereo vision recovered in part, but remained far more limited than that of normally raised monkeys. Recent scientific investigations: Scientists at the University of California, Berkeley have stated that perceptual learning appears to play an important role.
One investigation, published in 2011, reported on a study of human stereopsis recovery using perceptual learning which was inspired by Barry's work. In this study, a small number of subjects who had initially been stereoblind or stereoanomalous recovered stereopsis using perceptual learning exercises. Alongside the scientific assessment of the extent of recovery, the subjective outcomes are also described: After achieving stereopsis, our observers reported that the depth "popped out," which they found very helpful and joyful in their everyday life. The anisometropic observer GD noticed "a surge in depth" one day when shopping in a supermarket. While playing table tennis, she feels that she is able to track a ping-pong ball more accurately and therefore can play better. Strabismic observer AB is more confident now when walking down stairs because she can judge the depth of the steps better. Strabismics AB, DP, and LR are able to enjoy 3D movies for the first time, and strabismic GJ finds it easier to catch a fly ball while playing baseball. In a follow-up study, the authors pointed out that the stereopsis recovered following perceptual learning was more limited in resolution and precision compared to normal subjects' stereopsis. Dennis M. Levi was awarded the 2011 Charles F. Prentice Medal of the American Academy of Optometry for this work. There have been several attempts to make use of modern technology for enhanced binocular eye training, in particular for treating amblyopia and interocular suppression. In some cases these modern techniques have improved patients' stereoacuity. Very early technology-enhanced vision therapy efforts included the cheiroscope, a haploscope in which left- and/or right-eye images can be blended into view over a drawing pad, and the subject may be given a task such as reproducing a line image presented to one eye. However, historically these approaches were not developed much further and they were not put to widespread use. Recent systems are based on dichoptic presentation of the elements of a video game or virtual reality such that each eye receives different signals of the virtual world, which the player's brain must combine in order to play successfully. Recent scientific investigations: One of the earliest systems of this kind was proposed by a research group at the University of Nottingham with the aim of treating amblyopia, using virtual reality masks or commercially available 3D shutter glasses. The group has also worked to develop perceptual learning training protocols that specifically target the deficit in stereoacuity to allow the recovery of normal stereo function even in adulthood. Another system of dichoptic presentation for binocular vision therapy has been proposed by researchers of the Research Institute of the McGill University Health Centre. Using a modified version of the puzzle video game Tetris, the interocular suppression of patients with amblyopia was successfully treated with dichoptic training in which certain parameters of the training material were systematically adapted over the course of four weeks. Clinical supervision of such procedures is required to ensure that double vision does not occur. Most of the patients who underwent this treatment gained improved visual acuity in the weaker eye, and some also showed increased stereoacuity.
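The dichoptic presentation described above is essentially a rendering policy: different game elements, at different contrast levels, are routed to each eye so that the player can only succeed by combining both images. The sketch below is a simplified illustration of that general idea, not the actual McGill or Nottingham software; the element split, the contrast parameters, and the function names are all hypothetical.

```python
# Illustrative sketch of dichoptic presentation for binocular training.
# The split of game elements and the contrast-rebalancing rule are
# hypothetical simplifications of the approach described in the text.

from dataclasses import dataclass, field


@dataclass
class DichopticFrame:
    left_eye: list = field(default_factory=list)   # elements shown only to the left eye
    right_eye: list = field(default_factory=list)  # elements shown only to the right eye


def build_frame(falling_pieces, settled_pieces, amblyopic_eye="left",
                amblyopic_contrast=1.0, fellow_contrast=0.3):
    """Route game elements to the two eyes and attach a per-eye contrast.

    Task-critical elements (the falling pieces) go to the weaker eye at full
    contrast; the rest of the scene goes to the fellow eye at reduced contrast,
    so the brain must fuse both images to play. As suppression decreases,
    fellow_contrast would gradually be raised toward 1.0.
    """
    frame = DichopticFrame()
    weak = frame.left_eye if amblyopic_eye == "left" else frame.right_eye
    strong = frame.right_eye if amblyopic_eye == "left" else frame.left_eye
    weak.extend((piece, amblyopic_contrast) for piece in falling_pieces)
    strong.extend((piece, fellow_contrast) for piece in settled_pieces)
    return frame


if __name__ == "__main__":
    frame = build_frame(["T-piece"], ["floor", "stack"], fellow_contrast=0.4)
    print("left eye :", frame.left_eye)
    print("right eye:", frame.right_eye)
```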
Another study performed at the same institute showed that dichoptic training can be more effective in adults than the more conventional amblyopia treatment of an eye patch. For this investigation, 18 adults played Tetris for one hour each day, half of the group wearing eye patches and the other half playing a dichoptic version of the game. After two weeks, the group who played dichoptically showed a significant improvement of vision in the weaker eye and in stereoacuity; the eye patch group had moderate improvements, which increased substantially after they, too, were subsequently given the dichoptic training. Dichoptic-based perceptual learning therapy, presented by means of a head-mounted display, is also suitable for amblyopic children, as it improves both the amblyopic eye's visual acuity and stereo function. The researchers at McGill University have shown that one to three weeks of playing a dichoptic video game for one to two hours on a hand-held device "can improve acuity and restore binocular function, including stereopsis in adults". Furthermore, it has been suggested that these effects can be enhanced by anodal transcranial direct current stimulation (tDCS). Together with Levi of the University of California, Berkeley, scientists at the University of Rochester have made further developments in terms of virtual reality computer games, which have shown some promise in improving both monocular and binocular vision in human subjects. Recent scientific investigations: Game developer James Blaha, who developed his own crowd-funded version of a dichoptic VR game for the Oculus Rift together with Manish Gupta and is continuing to experiment with the game, experienced stereopsis for the first time using his game. In 2011, two cases of adults with anisometropic amblyopia were reported whose visual acuity and stereoacuity improved due to learning-based therapies. There are indications that the suppression of binocularity in amblyopic subjects is due to a suppression mechanism that prevents the amblyopic brain from learning to see. It has been suggested that desuppression and neuroplasticity may be favored by specific conditions that are commonly associated with perceptual learning tasks and video game playing, such as a heightened requirement of attention, a prospect of reward, a feeling of enjoyment, and a sense of flow. Health care policy matters: Health insurers routinely review therapies in terms of clinical effectiveness in view of the existing scientific literature, benefit, risk and cost. Even if individual cases of recovery exist, a treatment is only considered effective from this point of view if there is sufficient likelihood that it will predictably improve outcomes. Health care policy matters: In this context, the medical coverage policy of the global health services organization Cigna "does not cover vision therapy, optometric training, eye exercises or orthoptics because they are considered experimental, investigational or unproven for any indication including the management of visual disorders and learning disabilities", based on a bibliographic review published by Cigna which concludes that "insufficient evidence exists in the published, peer-reviewed literature to conclude that vision therapy is effective for the treatment of any of the strabismic disorders except preoperative prism adaptation for acquired esotropia". Similarly, the U.S.
managed health care company Aetna offers vision therapy only in contracts with supplemental coverage and limits its prescriptions to a number of conditions that are explicitly specified in a list of vision disorders.
**CPMulator** CPMulator: CPMulator is a program to emulate the CP/M operating system under x86 DOS. The program was developed in 1984 by Keystone Software Development. The company was owned and operated by Jay Sprenkle. The NEC V20 processor released that year was guaranteed to be hardware compatible with the Intel 8088. After reviewing the instruction timing of the math operations and the instruction-addressing hardware, it was determined that it could slightly speed up existing 8088-based IBM PC machines. Keystone Software started advertising "PC Speedup Kits" in PCWeek magazine. The CPU was socketed in IBM PCs, so it could easily be replaced. In practice, most programs received about a 5% speed increase, but math-intensive programs improved much more. One customer reported that his Monte Carlo simulation of a nuclear reactor was so much faster that he "double checked the results because he couldn't believe it was finished." CPMulator was developed after the release of the V20. The processor was also able to emulate the Intel 8080 instruction set in hardware. This opened the possibility of running older code on the new IBM machines. CPMulator was designed to modify CP/M binaries to make them run as if they were native 8088 DOS programs. The code to put the CPU in emulation mode was prefixed to each CP/M executable. Any calls to the CP/M operating system were intercepted and translated to DOS operating system calls. The program would leave 8080 emulation mode, make the operating system call, translate the results to CP/M conventions, return to emulation mode, and continue the original program. CPMulator: The product went out of production after AT-class machines became prevalent and NEC produced no pin-for-pin-compatible V-series version of the 80286 processor.
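The interception-and-translation loop described above can be sketched in outline. The code below is only an illustration of the general idea, not Keystone's actual implementation; the dispatch table and the stub classes are hypothetical, although the two example function numbers (CP/M BDOS functions 2 and 9, and DOS INT 21h functions 02h and 09h) do correspond to real, closely matching services on the two systems.

```python
# Illustrative outline of CP/M-to-DOS system-call translation as described
# above. The structure and the stub classes are hypothetical; only the two
# example function numbers correspond to real CP/M BDOS / DOS INT 21h calls.

class Emulated8080:
    """Stand-in for the V20's 8080 emulation state (hypothetical)."""
    def __init__(self):
        self.reg_c = 0x09      # BDOS function number (register C)
        self.reg_de = 0x0100   # address of a '$'-terminated string (DE)
        self.reg_a = 0
        self.in_8080_mode = True

    def leave_8080_mode(self):
        self.in_8080_mode = False

    def enter_8080_mode(self):
        self.in_8080_mode = True


class DosServices:
    """Stand-in for issuing DOS INT 21h calls (hypothetical)."""
    def int21h(self, ah, dx):
        print(f"INT 21h AH={ah:#04x} DX={dx:#06x}")
        return 0


# CP/M programs call address 0005h with a function number in register C;
# DOS programs use INT 21h with a function number in AH.
CPM_TO_DOS = {
    0x02: 0x02,  # console output
    0x09: 0x09,  # print '$'-terminated string
}


def handle_bdos_call(cpu, dos):
    """Intercept a CP/M BDOS call and service it with the DOS equivalent."""
    dos_function = CPM_TO_DOS.get(cpu.reg_c)
    if dos_function is None:
        raise NotImplementedError(f"BDOS function {cpu.reg_c:#04x} not translated")
    cpu.leave_8080_mode()                    # switch the V20 back to 8088 mode
    result = dos.int21h(ah=dos_function, dx=cpu.reg_de)
    cpu.reg_a = result & 0xFF                # CP/M returns results in A
    cpu.enter_8080_mode()                    # resume the original 8080 program


if __name__ == "__main__":
    handle_bdos_call(Emulated8080(), DosServices())
```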
**Terminal ballistics** Terminal ballistics: Terminal ballistics (also known as wound ballistics) is a sub-field of ballistics concerned with the behavior and effects of a projectile when it hits and transfers its energy to a target. Bullet design (as well as the velocity of impact) largely determines the effectiveness of penetration. General: The concept of terminal ballistics can be applied to any projectile striking a target. Much of the topic specifically regards the effects of small arms fire striking live targets, and a projectile's ability to incapacitate or eliminate a target. Common factors include bullet weight, composition, velocity, and shape. Firearm projectiles: Classes of bullets There are three basic classes of bullets: Those designed to maximize accuracy at varying ranges Those designed to maximize damage to a target (by penetrating as deeply as possible) Those designed to avoid over-penetration of a target. This is done by deformation (to control the depth to which the bullet penetrates) which, as a by-product, causes more damage inside the wound. This class may limit penetration by either expanding or fragmenting. Firearm projectiles: Target shooting For short-range target shooting, typically on ranges up to 50 meters, or 55 yards, with low-powered ammunition like a .22 long rifle, aerodynamics is relatively unimportant, and velocities are low compared to velocities attained by full-powered ammunition. As long as a bullet's weight is balanced, it will not tumble; its shape is thus unimportant for purposes of its aerodynamics. For shooting at paper targets, bullets that will punch a perfect hole through the target —called wadcutters— are preferred. They have a very flat front, often with a relatively sharp edge along the perimeter, which punches out a hole equal to or almost equal to its diameter, thus enabling unambiguous scoring of the target. Since cutting the edge of a target ring will result in a higher score, accuracy to within fractions of an inch is desirable. Magazine-fed pistols tend not to reliably feed wadcutters because of their angular shape. To address this, the semi-wadcutter is often used. The semi-wadcutter consists of a conical section that comes to a smaller flat point and a thin sharp shoulder at the base of the cone. The flat point punches a hole, and the shoulder opens it up cleanly. For steel targets, the concern is to provide enough force to knock over the target while minimizing the damage to the target. A soft lead bullet, or jacketed hollow-point bullet, or soft-point bullet will flatten out on impact (if the velocity at impact is sufficient to make it deform), spreading the impact over a larger area of the target, allowing more total force to be applied without damaging the steel target. Firearm projectiles: There are also specialized bullets designed for use in long-range precision target shooting with high-powered rifles. The designs vary somewhat from manufacturer to manufacturer. Research in the 1950s by the U.S. Air Force discovered that bullets are more stable in flight for longer distances and more resistant to crosswinds if the center of gravity is biased to the rear of the center of pressure. The MatchKing bullet is an open-tip match design with a tiny aperture in the jacket at the point of the bullet and a hollow air space under the point of the bullet, whereas previous conventional bullets had a lead core that went all the way up to the point.The U.S. military now issues ammunition to snipers that use bullets of this type. 
In 7.62×51mm NATO, M852 Match and M118LR ammunition are issued, both of which use Sierra MatchKing bullets; in 5.56×45mm NATO, those U.S. Navy and U.S. Marine snipers who use accurized M16-type rifles are issued the Mk 262 Mod 0 cartridge developed jointly by Black Hills Ammunition and Crane Naval Surface Warfare Center. Firearm projectiles: For ultra-long-range precision target shooting with high-powered rifles and military sniping, radically designed very-low-drag (VLD) bullets are available that are generally produced out of rods of mono-metal alloys on CNC lathes. The driving force behind these projectiles is the wish to enhance the practical maximum effective range beyond normal standards. To achieve this, the bullets have to be very long, and normal cartridge overall lengths often have to be exceeded. Common rifling twist rates also often have to be tightened to stabilize very long projectiles. Such commercially nonexistent cartridges are termed "wildcats". The use of a wildcat-based (ultra) long-range cartridge demands the use of a custom or customized rifle with an appropriately cut chamber and a fast-twist bore. Firearm projectiles: Maximum penetration: For use against armored targets, or large, tough game animals, penetration is the most important consideration. Focusing the largest amount of kinetic energy and projectile mass on the smallest possible area of the target provides the greatest penetration. Bullets for maximum penetration are designed to resist deformation on impact and usually are made of lead that is covered in a copper, brass, or mild steel jacket (some are even solid copper or bronze alloy). The jacket completely covers the front of the bullet, although often the rear is left with exposed lead (this is a manufacturing consideration: the jacket is formed first, and the lead is swaged in from the rear). Firearm projectiles: For penetrating substances significantly harder than jacketed lead, the lead core is supplemented with or replaced by a harder material, such as hardened steel. Military armor-piercing small arms ammunition is made with a copper-jacketed steel core; the steel resists deformation better than the usual soft lead core, leading to greater penetration. The current NATO 5.56mm SS109 (M855) bullet uses a steel-tipped lead core to improve penetration, the steel tip providing resistance to deformation for armor piercing, and the heavier lead core (25% heavier than the previous bullet, the M193) providing increased sectional density for better penetration in soft targets. For larger, higher-velocity calibers, such as tank guns, hardness is of secondary importance to density; such projectiles are normally sub-caliber penetrators made from tungsten carbide, tungsten hard alloy, or depleted uranium, fired in a light aluminum or magnesium alloy (or, in some cases, carbon fiber) sabot. Firearm projectiles: Many modern tank guns are smoothbore, not rifled, because practical rifling twists can only stabilize projectiles, such as an Armour-Piercing Capped, Ballistic Capped (APCBC) shot, with a length-to-diameter ratio of up to about 5:1, and also because the rifling adds friction, reducing the velocity and thus the total force it is possible to achieve. To get the maximum force on the smallest area, modern anti-tank rounds have aspect ratios of 10:1 or more. Since these cannot be stabilized by rifling, they are built instead like large darts, with fins providing the stabilizing force instead of rifling.
These subcaliber rounds, called Armor-Piercing Fin-Stabilized Discarding Sabot (APFSDS), are held in place in the bore by sabots. The sabot is a light material that transfers the pressure of the charge to the penetrator, then is discarded when the round leaves the barrel. Firearm projectiles: Controlled penetration: The final category of bullets is that intended to control penetration so as not to harm anything behind the target. Such bullets are used primarily for hunting and civilian antipersonnel purposes; they are not generally used by the military, since the use of expanding bullets in international conflicts is prohibited by the Hague Convention and because these bullets have less chance of penetrating modern body armor. These bullets are designed to increase their surface area on impact, thus creating greater drag and limiting the travel through the target. A desirable side effect is that the expanded bullet makes a larger hole, increasing tissue damage and speeding up incapacitation. Firearm projectiles: While a bullet that penetrates through-and-through tends to cause more profuse bleeding, allowing a game animal to be blood-trailed more easily, in some applications preventing exit from the rear of the target is more desirable. A perforating bullet can continue on (likely not coaxial to the original trajectory due to target deflection) and might cause unintended damage or injury. Firearm projectiles: Flat point: The simplest maximum-disruption bullet is one with a wide, flat tip. This increases the effective surface area, as rounded bullets can allow tissues to "flow" around the edges. Flat points also increase drag during flight, which decreases the depth to which the bullet penetrates. Flat-point bullets, with fronts of up to 90% of the overall bullet diameter, are usually designed for use against large or dangerous game. They are often made of unusually hard alloys, are longer and heavier than normal for their caliber, and even include exotic materials such as tungsten to increase their sectional density. Firearm projectiles: These bullets are designed to penetrate deeply through muscle and bone while causing a wound channel of nearly the full diameter of the bullet, and to penetrate deeply enough to reach vital organs from any shooting angle and at a far enough range. One of the hunting applications of the flat-point bullet is large game such as bear, hunted with a handgun in .44 Magnum or a larger caliber. More common than hunting is its use in a defensive "bear gun" carried by outdoorsmen. The disadvantage of flat-point bullets is the reduction in aerodynamic performance; the flat point induces much drag, leading to significantly reduced velocities at long range. Firearm projectiles: Expanding: More effective on lighter targets are the expanding bullets, the hollow-point bullet and the soft-point bullet. These are designed to use the hydraulic pressure of muscle tissue to expand the bullet. The hollow point peels back into several connected pieces (sometimes referred to as petals due to their appearance), causing the bullet to create a larger area of permanent damage. The hollow point fills with body tissue and fluids on impact, then expands as the bullet continues to have matter pushed into it. This process is informally called mushrooming, as the ideal result is a shape that resembles a mushroom—a cylindrical base, topped with a wide surface where the tip of the bullet has peeled back to expose more area while traveling through a body.
For the purposes of aerodynamic efficiency (an exposed hollow point otherwise creates drag), the tip of the hollow-point will often be capped with a pointed polymer 'nose', which may also aid expansion by functioning as a piston upon impact, pushing the hollow point open. A copper-plated hollow-point loaded in a .44 Magnum, for example, with an original weight of 240 grains (15.55 g) and a diameter of 0.43 inch (11 mm) might mushroom on impact to form a rough circle with a diameter of 0.70 inches (18 mm) and a final weight of 239 grains (15.48 g). This is excellent performance; almost the entire weight is retained, the diameter increases by roughly 63%, and the frontal surface area consequently more than doubles (to about 2.65 times its original value). Penetration of the hollow-point would be less than half that of a similar nonexpanding bullet, and the resulting wound or permanent cavity would be much wider. Firearm projectiles: It might seem that if the whole purpose of a maximum disruption round is to expand to a larger diameter, it would make more sense to start out with the desired diameter rather than relying on the somewhat inconsistent results of expansion on impact. While there is merit to this (there is a strong following of the .45 ACP, as compared to the .40 S&W and the 0.355-inch-diameter 9×19mm, for just this reason), there are also significant downsides. A larger-diameter bullet is going to have significantly more drag than a smaller-diameter bullet of the same mass, which means long-range performance will be significantly degraded. A larger-diameter bullet also means more space is required to store the ammunition, which means either bulkier guns or smaller magazine capacities. The common trade-off when comparing .45 ACP, .40 S&W, and 9×19mm pistols is a 7- to 14-round capacity in the .45 ACP vs. a 10- to 16-round capacity in the .40 S&W vs. a 13- to 19-round capacity in the 9×19mm. Although several .45-caliber pistols are available with high-capacity magazines (Para Ordnance being one of the first in the late 1980s), many people find the wide grip required uncomfortable and difficult to use. Especially where the military requirement of a nonexpanding round is concerned, there is fierce debate over whether it is better to have fewer, larger bullets for enhanced terminal effects, or more, smaller bullets for an increased number of potential target hits. Firearm projectiles: Fragmenting: This class of projectile is designed to break apart on impact whilst being of a construction more akin to that of an expanding bullet. Fragmenting bullets are usually constructed like the hollow-point projectiles described above, but with deeper and larger cavities. They may also have thinner copper jackets in order to reduce their overall integrity. These bullets are typically fired at high velocities to maximize their fragmentation upon impact. In contrast to a hollow-point, which attempts to stay in one large piece, retaining as much weight as possible whilst presenting the most surface area to the target, a fragmenting bullet is intended to break up into many small pieces almost instantly. Firearm projectiles: This means that all the kinetic energy from the bullet is transferred to the target in a very short period of time. The most common application of this bullet is the shooting of vermin, such as prairie dogs. The effect of these bullets is quite dramatic, often resulting in the animal being blown apart upon impact. However, in larger game, fragmenting ammunition provides inadequate penetration of vital organs to ensure a clean kill; instead, a "splash wound" may result.
This also limits the practical use of this ammunition to supersonic (rifle) rounds, which have a high enough kinetic energy to ensure a lethal hit. The two main advantages of this ammunition are that it is very humane, as a hit almost anywhere on most small vermin will ensure an instant kill, and that the relatively low-mass bullet fragments pose a very low risk of ricochet or of penetrating unintended secondary targets. Fragmenting bullets should not be confused with frangible bullets (see below). Firearm projectiles: Also used are bullets similar to hollow-point bullets or soft-point bullets whose cores and/or jackets are deliberately weakened to cause deformation or fragmentation upon impact. The Warsaw Pact 5.45×39mm M74 assault rifle round exemplifies a trend that is becoming common in the era of high-velocity, small-caliber military rounds. The 5.45×39mm uses a steel-jacketed bullet with a two-part core, the rear being lead and the front being steel with an air pocket foremost. Upon impact, the unsupported tip deforms, bending the bullet nose into a slight "L" shape. This causes the bullet to tumble in the tissue, thus increasing its effective frontal surface area by traveling sideways more often than not. Firearm projectiles: This does not violate the Hague Convention, as it specifically mentions bullets that expand or flatten in the body. The NATO SS109 also tends to bend at the steel/lead junction, but with its weaker jacket, it fragments into many dozens of pieces. NATO 7.62 mm ball ammunition manufactured by some countries, such as Germany and Sweden, is also known to fragment due to jacket construction. Firearm projectiles: Frangible: The last category of expanding bullets is frangible bullets. These are designed to break upon impact, which results in a huge increase in surface area. The most common of these bullets are made of small-diameter lead pellets, placed in a thin copper shell, and held in place by an epoxy or similar binding agent. On impact, the epoxy shatters and the copper shell opens up; the individual lead balls then spread out in a wide pattern and, due to their low mass-to-surface-area ratio, stop very quickly. Similar bullets are made out of sintered metals, which turn to powder upon impact. These bullets are usually restricted to pistol cartridges and rifle cartridges intended for use at very short ranges, as the nonhomogeneous cores tend to cause inaccuracies that, while acceptable at short ranges, are not acceptable for the long ranges at which some rifles are used. Firearm projectiles: By far the most common use of frangible ammunition is for training by shooting steel targets at close range. While one may be at risk of being injured by fragments of standard solid lead bullets when shooting steel at close range, the powder that frangible bullets disintegrate into upon impact poses a very low risk to the shooter. This becomes irrelevant when shooting at longer ranges, because it is unlikely that fragments created by the impact of any type of bullet on a steel target will travel more than 50–100 yards; in these long-range cases it is of more value to use bullets that fly identically to those to be used in real situations than to mitigate the possible risks of bullet fragments and ricochets, so frangible bullets are typically not used. One interesting use of the sintered metal rounds is in shotguns in hostage rescue situations; the sintered metal round is used at near-contact range to shoot the lock mechanism out of doors.
The resulting metal powder will immediately disperse after knocking out the door lock and cause little or no damage to the occupants of the room. Frangible rounds are also used by armed security agents on aircraft. The concern is not depressurization (a bullet hole will not depressurize an airliner), but over-penetration and damage to vital electrical or hydraulic lines, or injury to an innocent bystander by a bullet that travels completely through a target's body instead of stopping in the body. Firearm projectiles: Large caliber: The purpose of firing a large-caliber projectile is not always the same. For example, one might need to create disorganization within enemy troops, create casualties within enemy troops, eliminate the functioning of an enemy tank, or destroy an enemy bunker. Different purposes of course require different projectile designs. Firearm projectiles: Many large-caliber projectiles are filled with a high explosive which, when detonated, shatters the shell casing, producing thousands of high-velocity fragments and an accompanying sharply rising blast overpressure. More rarely, others are used to release chemical or biological agents, either on impact or when over the target area; designing an appropriate fuse is a difficult task that lies outside the realm of terminal ballistics. Firearm projectiles: Other large-caliber projectiles use bomblets (sub-munitions), which are released by the carrier projectile at a required height or time above their target. For US artillery ammunition, these projectiles are called Dual-Purpose Improved Conventional Munition (DPICM); a 155 mm M864 DPICM projectile, for example, contains a total of 72 shaped-charge fragmentation bomblets. The use of multiple bomblets over a single HE projectile allows for a denser and less wasteful fragmentation field to be produced. If a bomblet strikes an armored vehicle, there is also a chance that the shaped charge (if used) will penetrate and disable the vehicle. A negative factor in their use is that any bomblets that fail to function go on to litter the battlefield in a highly sensitive and lethal state, causing casualties long after the cessation of conflict. International conventions tend to forbid or restrict the use of this type of projectile. Firearm projectiles: Some anti-armor projectiles use what is known as a shaped charge to defeat their target. Shaped charges have been used ever since it was discovered that a block of high explosives with letters engraved in it created perfect impressions of those letters when detonated against a piece of metal. A shaped charge is an explosive charge with a hollow lined cavity at one end and a detonator at the other. It operates by the detonating high explosive collapsing the (often copper) liner into itself. Part of the collapsing liner goes on to form a constantly stretching jet of material traveling at hypersonic speed. When detonated at the correct standoff from the armor, the jet violently forces its way through the target's armor. Firearm projectiles: Contrary to popular belief, the jet of a copper-lined shaped charge is not molten, although it is heated to about 500 °C. This misconception is due to the metal's fluid-like behavior, which is caused by the massive pressures produced during the detonation of the explosive causing the metal to flow plastically. When used in the anti-tank role, a projectile that uses a shaped-charge warhead is known by the acronym HEAT (high-explosive anti-tank).
Firearm projectiles: Shaped charges can be defended against by the use of explosive reactive armor (ERA) or complex composite armor arrays. ERA uses a high explosive sandwiched between two relatively thin (normally metallic) plates. The explosive is detonated when struck by the shaped charge's jet; the detonating explosive sandwich forces the two plates apart, lowering the jet's penetration by interfering with and disrupting it. A disadvantage of using ERA is that each plate can protect against only a single strike, and the resulting explosion can be extremely dangerous to nearby personnel and lightly armoured structures. Tank-fired HEAT projectiles are slowly being replaced, for the attack of heavy armour, by so-called "kinetic energy" (KE) penetrators. It is these most primitive (in shape) projectiles that are hardest to defend against. Protecting against a KE penetrator requires an enormous thickness of steel or a complex armour array. They also produce a much larger-diameter hole than a shaped charge and hence produce a far more extensive behind-armour effect. KE penetrators are most effective when constructed of a dense, tough material formed into a long, narrow, arrow- or dart-like projectile. Firearm projectiles: Tungsten and depleted uranium alloys are often used as the penetrator material. The length of the penetrator is limited by its ability to withstand launch forces whilst in the bore and shear forces along its length at impact.
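The advantage of the long, narrow penetrator described above follows from the principle stated earlier: focusing kinetic energy on the smallest possible frontal area. The short check below uses round, hypothetical figures (a 5 kg projectile at 1700 m/s) that are not data for any specific round; it only illustrates how much more the same energy is concentrated by a sub-caliber rod than by a full-caliber shot.

```python
# Illustrative comparison of kinetic-energy concentration, using round,
# hypothetical figures (not data for any specific projectile), to show why
# long, narrow, dense penetrators are favoured.
import math

mass_kg = 5.0          # hypothetical penetrator mass
velocity_ms = 1700.0   # hypothetical muzzle velocity

kinetic_energy_j = 0.5 * mass_kg * velocity_ms ** 2   # about 7.2 MJ


def energy_per_area(diameter_m: float) -> float:
    """Kinetic energy divided by frontal (cross-sectional) area, in J/m^2."""
    area = math.pi * (diameter_m / 2) ** 2
    return kinetic_energy_j / area


for label, diameter in [("full-caliber 120 mm shot", 0.120),
                        ("sub-caliber 25 mm long rod", 0.025)]:
    print(f"{label}: {energy_per_area(diameter) / 1e9:.1f} GJ per square metre")
```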
**Linear search problem** Linear search problem: In computational complexity theory, the linear search problem is an optimal search problem introduced by Richard E. Bellman and independently considered by Anatole Beck. The problem: "An immobile hider is located on the real line according to a known probability distribution. A searcher, whose maximal velocity is one, starts from the origin and wishes to discover the hider in minimal expected time. It is assumed that the searcher can change the direction of his motion without any loss of time. It is also assumed that the searcher cannot see the hider until he actually reaches the point at which the hider is located and the time elapsed until this moment is the duration of the game." The problem is to find the hider in the shortest time possible. Generally, since the hider could be on either side of the searcher and an arbitrary distance away, the searcher has to oscillate back and forth, i.e., the searcher has to go a distance x1 in one direction, return to the origin, and go a distance x2 in the other direction, etc. (the length of the n-th step being denoted by xn). (However, an optimal solution need not have a first step and could start with an infinite number of small 'oscillations'.) This problem is usually called the linear search problem, and a search plan is called a trajectory. It has attracted much research, some of it quite recent. The linear search problem for a general probability distribution is unsolved. However, there exists a dynamic programming algorithm that produces a solution for any discrete distribution, and also an approximate solution, for any probability distribution, with any desired accuracy. The linear search problem was solved by Anatole Beck and Donald J. Newman (1970) as a two-person zero-sum game. Their minimax trajectory is to double the distance at each step, and the optimal strategy is a mixture of trajectories that increase the distance by some fixed constant. This solution gives search strategies that are not sensitive to assumptions concerning the distribution of the target. Thus, it also presents an upper bound for a worst-case scenario. This solution was obtained in the framework of an online algorithm by Shmuel Gal, who also generalized this result to a set of concurrent rays. The best online competitive ratio for search on the line is 9, but it can be reduced to 4.6 by using a randomized strategy. Demaine et al. gave an online solution with a turn cost. These results were rediscovered in the 1990s by computer scientists as the cow path problem.
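The doubling strategy mentioned above is easy to simulate. The sketch below is an illustrative check of the competitive ratio of 9, under the usual worst-case assumption that the hider sits just beyond a turning point; the function names and loop bounds are mine, not from the literature.

```python
# Illustrative simulation of the doubling ("cow path") strategy described above.
# The searcher alternates sides, going 1, 2, 4, 8, ... units before turning back.
# Competitive ratio = (total distance walked) / (distance to the hider).

def doubling_search_cost(target: float, target_side: int) -> float:
    """Total distance walked until the hider at |target| on target_side is found."""
    walked = 0.0
    step, side = 1.0, +1          # first excursion: 1 unit to the right
    while True:
        if side == target_side and step >= abs(target):
            return walked + abs(target)       # found on this excursion
        walked += 2 * step                    # go out 'step' and come back
        step *= 2                             # double the excursion length
        side = -side                          # switch direction


if __name__ == "__main__":
    # Worst cases sit just beyond a turning point on the side just searched.
    worst = 0.0
    for k in range(1, 20):
        d = 2 ** k + 1e-9                     # just beyond the 2^k turning point
        ratio = doubling_search_cost(d, +1) / d
        worst = max(worst, ratio)
    print(f"worst observed competitive ratio: {worst:.3f}")   # approaches 9
```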
**Bicalicene** Bicalicene: Bicalicene is a polycyclic hydrocarbon with chemical formula C16H8, composed of two cyclopentadiene and two cyclopropene rings linked into a larger eight-membered ring. There are two isomers: cis-bicalicene and trans-bicalicene. It is a dimer of calicene. Synthesis: Bicalicene is prepared by treatment of 1,2-bis(tert-butylthio)-3,3-dichlorocyclopropene with cyclopentadiene anion, followed by desulfurizing stannylation with tributyltin hydride, and then treatment with silica gel. Properties: trans-Bicalicene is a polycyclic aromatic hydrocarbon, which is unusual for a 16 π-electron ring system. Viewed as a unified ring structure, Hückel's rule predicts it would be anti-aromatic (4n π electrons). Instead, however, the molecule has a dominant, partially delocalized, charge-separated structure consisting of four independently aromatic (4n+2 π-electron) rings: two as cyclopropenyl cations (two π electrons each) and two as cyclopentadienyl anions (six π electrons each). cis-Bicalicene, by contrast, is an antiaromatic hydrocarbon. A resonance structure with four aromatic rings, analogous to the one that makes the trans isomer stable, would suffer from destabilizing charge effects, and other resonance structures have 4n rather than 4n+2 π electrons in at least one ring.
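The aromaticity argument above is simply Hückel electron counting applied ring by ring. The snippet below is a small worked check of the counts quoted in the text; the helper function is only illustrative.

```python
# Worked Hückel-rule check of the ring electron counts quoted above.

def satisfies_huckel(pi_electrons: int) -> bool:
    """True if the count equals 4n + 2 for some non-negative integer n."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0


rings = {
    "cyclopropenyl cation": 2,     # two such rings in trans-bicalicene
    "cyclopentadienyl anion": 6,   # two such rings in trans-bicalicene
}

for name, count in rings.items():
    print(f"{name}: {count} pi electrons, aromatic by 4n+2: {satisfies_huckel(count)}")

total = 2 * rings["cyclopropenyl cation"] + 2 * rings["cyclopentadienyl anion"]
print(f"total pi electrons across the four rings: {total}")          # 16
print(f"16 as a single unified ring satisfies 4n+2: {satisfies_huckel(16)}")  # False (4n, anti-aromatic)
```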
**Hydroxymethylation** Hydroxymethylation: Hydroxymethylation is a chemical reaction that installs the CH2OH group. The transformation can be implemented in many ways and applies to both industrial and biochemical processes. Hydroxymethylation with formaldehyde: A common method for hydroxymethylation involves the reaction of formaldehyde with active C-H and N-H bonds: R3C-H + CH2O → R3C-CH2OH; R2N-H + CH2O → R2N-CH2OH. A typical active C-H bond is provided by a terminal acetylene or the alpha protons of an aldehyde. In industry, hydroxymethylation of acetaldehyde with formaldehyde is used in the production of pentaerythritol (see the overall equation below). P-H bonds are also prone to reaction with formaldehyde. Tetrakis(hydroxymethyl)phosphonium chloride ([P(CH2OH)4]Cl) is produced in this way from phosphine (PH3). Hydroxymethylation in demethylation: 5-Methylcytosine is a common epigenetic marker. Its methyl group is modified by oxidation in a process called hydroxymethylation: RCH3 + O → RCH2OH. This oxidation is thought to be a prelude to removal, regenerating cytosine. Representative reactions: A two-step hydroxymethylation of aldehydes involves methylenation followed by hydroboration-oxidation: RCHO + Ph3P=CH2 → RCH=CH2 + Ph3PO; RCH=CH2 + R2BH → RCH2-CH2BR2; RCH2-CH2BR2 + H2O2 → RCH2-CH2OH + "HOBR2". Silylmethyl Grignard reagents are nucleophilic reagents for hydroxymethylation of ketones: R2C=O + ClMgCH2SiR'3 → R2C(OMgCl)CH2SiR'3; R2C(OMgCl)CH2SiR'3 + H2O + H2O2 → R2C(OH)CH2OH + "HOSiR'3". Reactions of hydroxymethylated compounds: A common reaction of hydroxymethylated compounds is further reaction with a second equivalent of an active X-H bond: hydroxymethylation, X-H + CH2O → X-CH2OH; crosslinking, X-H + X-CH2OH → X-CH2-X + H2O. This pattern is illustrated by the use of formaldehyde in the production of various polymers and resins from phenol-formaldehyde condensations (Bakelite, Novolak, and calixarenes). Similar crosslinking occurs in urea-formaldehyde resins. Reactions of hydroxymethylated compounds: The hydroxymethylation of N-H and P-H bonds can often be reversed by base. This reaction is illustrated by the preparation of tris(hydroxymethyl)phosphine: [P(CH2OH)4]Cl + NaOH → P(CH2OH)3 + H2O + H2C=O + NaCl. When conducted in the presence of chlorinating agents, hydroxymethylation leads to chloromethylation, as illustrated by the Blanc chloromethylation. Related: Hydroxyethylation involves the installation of the CH2CH2OH group, as practiced in ethoxylation. Aminomethylation is often effected with Eschenmoser's salt, [(CH3)2NCH2]OTf.
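The industrial pentaerythritol synthesis mentioned above proceeds by three successive hydroxymethylations of acetaldehyde followed by a crossed Cannizzaro reduction of the remaining aldehyde group. The equations below are standard textbook chemistry, written here with sodium hydroxide as the base (calcium hydroxide is also commonly used in practice); they are included only as a worked illustration of the hydroxymethylation pattern.

CH3CHO + 3 CH2O → (HOCH2)3CCHO (three successive hydroxymethylations)
(HOCH2)3CCHO + CH2O + NaOH → C(CH2OH)4 + HCO2Na (crossed Cannizzaro reduction)
Overall: CH3CHO + 4 CH2O + NaOH → C(CH2OH)4 + HCO2Na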
**Party spokesperson** Party spokesperson: A party spokesperson (also known as a party spokesman or party spokeswoman) is any member of a political party (at any regional level of the party structure) who is charged by the leaders of the party with communicating the party's position on specific portfolios. Party spokespersons largely feature in political parties of parliamentary systems. Party spokespersons can also be assisted in their duties by deputy or assistant spokespersons in the same portfolio. Party spokesperson: In Canada, non-government party spokespersons and their deputies are known as party critics and deputy party critics, respectively. Parliamentary party spokespersons: Spokespersons of a ruling party are coterminous with their roles as ministers in the government cabinet, and spokespersons of the leading opposition party (usually in Westminster-system parliaments, where it is called the "Official Opposition") are coterminous with their roles as shadow ministers in the shadow cabinet; both are usually called "frontbenchers". A minor parliamentary/legislative party (be it in or out of coalition with a government cabinet or official opposition shadow cabinet) may have its own set of spokespersons and respective portfolios, although they are often treated during parliamentary debates with lesser courtesy than the government or official opposition's cabinets; in Ireland, for example, all parliamentary parties with at least 7 elected members have their own front benches, while those with fewer than 7 elected members must agree with other independent MPs to form a technical group in order to gain speaking rights. Non-parliamentary party spokespersons: Non-parliamentary parties or parties with very few elected members of parliament (that is, not enough to effectively spread policy communication duties) may also have their own non-parliamentary spokespersons and respective portfolios, despite not possessing speaking rights in parliament (or sometimes, as in extra-parliamentary opposition, abstaining from seeking office). They are more likely to speak for the party to media outlets or other organizations.
**Sodium phosphates** Sodium phosphates: A sodium phosphate is a generic variety of salts of sodium (Na+) and phosphate (PO43−). Phosphate also forms families of condensed anions, including di-, tri-, tetra-, and polyphosphates. Most of these salts are known in both anhydrous (water-free) and hydrated forms. The hydrates are more common than the anhydrous forms. Uses: Sodium phosphates have many applications in food and for water treatment. For example, sodium phosphates are often used as emulsifiers (as in processed cheese), thickening agents, and leavening agents for baked goods. They are also used to control the pH of processed foods. They are also used in medicine for constipation and to prepare the bowel for medical procedures. They are also used in detergents for softening water and as an efficient anti-rust solution. Adverse effects: Sodium phosphates are popular in commerce in part because they are inexpensive and because they are nontoxic at normal levels of consumption. However, oral sodium phosphates, when taken at high doses for bowel preparation for colonoscopy, may in some individuals carry a risk of kidney injury in the form of phosphate nephropathy. There are several oral phosphate formulations which are prepared extemporaneously. Oral phosphate prep drugs have been withdrawn in the United States, although evidence of causality is equivocal. Since safe and effective replacements for phosphate purgatives are available, several medical authorities have recommended general disuse of oral phosphates. Monophosphates: Three families of sodium monophosphates are common: those derived from orthophosphate (PO43−), hydrogen phosphate (HPO42−), and dihydrogen phosphate (H2PO4−). Some of the best known salts are shown in the following table. Di- and polyphosphates: In addition to these phosphates, sodium forms a number of useful salts with pyrophosphates (also called diphosphates), triphosphates and high polymers. Of these salts, those of the diphosphates are particularly common commercially. Beyond the diphosphates, sodium salts of triphosphates (e.g., sodium triphosphate) and tetraphosphates are known. The cyclic polyphosphates, called metaphosphates, include the trimer sodium trimetaphosphate and the tetramer sodium tetrametaphosphate, Na3P3O9 and Na4P4O12, respectively. Di- and polyphosphates: Polymeric sodium phosphates are formed upon heating mixtures of NaH2PO4 and Na2HPO4, which induces a condensation reaction. The specific polyphosphate generated depends on the details of the heating and annealing. One derivative is the glassy (i.e., amorphous) Graham's salt. It is a linear polyphosphate with the average formula NaO(NaPO3)Na2. Crystalline high-molecular-weight polyphosphates include Kurrol's salt and Maddrell's salt (CAS#10361-03-2). These species have the formula [NaPO3]n[NaPO3(OH)]2, where n can be as great as 2000. In terms of their structures, these polymers consist of PO3− "monomers", with the chains terminated by protonated phosphates.
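The condensation described above, in which heating links phosphate units with loss of water, can be written out for the simplest products, the pyrophosphates. The two balanced equations below are standard textbook chemistry and are given only as a worked illustration of the general pattern; as noted in the text, the specific polyphosphate actually obtained depends on the heating and annealing conditions.

2 NaH2PO4 → Na2H2P2O7 + H2O (heating, giving disodium pyrophosphate)
2 Na2HPO4 → Na4P2O7 + H2O (heating, giving tetrasodium pyrophosphate)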
**Betaxolol** Betaxolol: Betaxolol is a selective beta1 receptor blocker used in the treatment of hypertension and angina. Being selective for beta1 receptors, it typically has fewer systemic side effects than non-selective beta-blockers, for example, not causing bronchospasm (mediated by beta2 receptors) as timolol may. Betaxolol also shows greater affinity for beta1 receptors than metoprolol. In addition to its effect on the heart, betaxolol reduces the pressure within the eye (intraocular pressure). This effect is thought to be caused by reducing the production of the liquid (called the aqueous humor) within the eye. The precise mechanism of this effect is not known. The reduction in intraocular pressure reduces the risk of damage to the optic nerve and loss of vision in patients with elevated intraocular pressure due to glaucoma. Betaxolol: It was patented in 1975 and approved for medical use in 1983. Medical uses: Hypertension: Betaxolol is most commonly taken orally, alone or with other medications, for the management of essential hypertension. It is a cardioselective beta blocker, targeting beta-1 adrenergic receptors found in the cardiac muscle. Blood pressure is decreased as blood vessels relax and blood flow improves. Medical uses: Glaucoma: Ophthalmic betaxolol is an available treatment for primary open-angle glaucoma (POAG) and ocular hypertension. Betaxolol effectively prevents the increase of intracellular calcium that leads to increased production of the aqueous humor. In the context of open-angle glaucoma, increased aqueous humor produced by the ciliary bodies increases intraocular pressure, causing degeneration of retinal ganglion cells and the optic nerve. Furthermore, betaxolol, following topical application, is able to protect retinal neurones from excitotoxicity or ischemia-reperfusion, providing a neuroprotective effect. This is thought to be attributable to its capacity to attenuate neuronal calcium and sodium influx. Contraindications: hypersensitivity to the drug; patients with sinus bradycardia, heart block greater than first degree, cardiogenic shock, or overt cardiac failure. Side effects: The adverse side effects of betaxolol can be categorized into local and systemic effects. The local effects include: transient irritation (20–40% of patients); burning; pruritus, or general itching; punctate keratitis; and blurry vision. Systemically, patients taking betaxolol might experience: bradycardia; hypotension; fatigue; sexual impotence; hair loss; confusion; headache; dizziness; bronchospasm at higher doses; cardiac problems such as arrhythmia, bundle branch block, myocardial infarction, sinus arrest, and congestive heart failure; mental effects such as depression, disorientation, vertigo, and sleepwalking; rhinitis; dysuria; metabolic side effects such as an increase in LDL cholesterol levels; and masking of the symptoms of hypoglycemia in diabetic patients. History: Betaxolol was approved by the U.S. Food and Drug Administration (FDA) for ocular use as a 0.5% solution (Betoptic) in 1985 and as a 0.25% solution (Betoptic S) in 1989. Society and culture: Brand names: Brand names include Betoptic, Betoptic S, Lokren, and Kerlone.
**Endodontic files and reamers** Endodontic files and reamers: Endodontic files and reamers are surgical instruments used by dentists when performing root canal treatment. These tools are used to clean and shape the root canal, with the concept being to perform complete chemomechanical debridement of the root canal to the length of the apical foramen. Preparing the canal in this way facilitates the chemical disinfection to a satisfactory length but also provides a shape conducive to obturation (filling of the canal). Hand files: Hand files can provide tactile sensation when cleaning or shaping root canals. This allows the dentist to feel changes in resistance or angulation, which can help determine curvature, calcification and/or changes in anatomy, which two-dimensional radiographs may not always identify. This information can help determine strategies or avoid complications before moving on to rotary instruments. K-type files The cutting edge of K-type files is made up of twisted squares of stainless steel alloy. The K-flex file differs in that it has a rhomboid-shaped cross-section and increased flexibility compared to traditional K-files. C-type files C-files are stiffer than K-files, and are recommended for calcified canals and ones that are curved and narrow. Hand files: Nickel-titanium files Nickel-titanium is a superelastic alloy which can undergo greater stresses than stainless steel; files made from it therefore have a reduced risk of fracture. It also has the characteristic of 'shape memory' which allows it to return to its initial shape through heating after strain. This reduces the risk of deformation within the root canal as forces of compression and tension are absent. Hand files: The superelasticity allows an increase in taper (between 4–8%) compared to stainless steel. This allows an adequate taper of the root canal which takes less time to prepare than with stainless steel and requires fewer files. The superelasticity also means the risk of zipping and apical transportation is reduced. Many nickel-titanium files are available. The files can be used within rotary systems or manually for a higher level of control. Techniques for use Watch winding and circumferential filing technique The file is used in a forwards and backwards motion, as if watch winding, with slight apical pressure. This allows the file to effectively debride the canal dentine by moving slowly down the canal. For K-type files, once the file has reached the desired working length, a push-and-pull action is used around the circumference of the canal, while only maintaining contact with the canal wall on the outstroke to minimise a debris blockage apically. Hand files: The balanced force technique This is the most widely used technique and is especially good for working with curved canals. Files used for this technique need to have a non-cutting tip and be flexible. The file is rotated 60 degrees clockwise in the canal until a slight resistance is felt. The file is then rotated 360 degrees anticlockwise to pick up in its flutes the dentine cut during the first rotation. This should be done no more than three times before the file is removed and cleaned and the canal system irrigated before reinsertion. Hand files: Hedstrom files The cross-section of a Hedstrom file (H-file) is made up of a continuous sequence of cones. They are very sharp with a cutting tip. Their use in a push-pull fashion results in a high level of debridement on removal from the root canal.
They should not be rotated more than 30 degrees as they are narrow and vulnerable to fracture. They are also used for removal of root canal filling materials, e.g. gutta percha, during secondary root canal treatment. Hand files: Barbed broach This file is used to remove pulp tissue (extirpation) during root canal treatment. There are sharp barbs on the file to engage the pulp tissue and remove it efficiently. These files are not used to shape the root canal system (RCS). Standardisation of instruments (ISO): The handles of the ISO instruments are colour coded and are available in three different lengths of 21mm, 25mm and 31mm, where the extra length is a non-cutting shaft. This extra length is particularly useful for posterior teeth where access and visibility are impaired. Standardisation of instruments (ISO): ISO files are made of stainless steel. This can be useful in smaller files (<20), but larger files have increased rigidity which can result in procedural errors. At smaller sizes the files can be pre-curved, which is a major advantage for the debridement of roots with sharp curvatures. Their rigidity also has an advantage in calcified root canals in the initial stages of debridement. The ISO stainless steel files on the market today include K-Flex, K-Flexofile and Hedström, where the tip size and taper are standardised. Standardisation of instruments (ISO): ISO normed hand files have a standardised taper of 2%, which equates to a 0.02mm increase in diameter per mm of file. This standardised taper allows you to calculate the diameter of any given stainless steel file at any given point, since the 2% taper means that the diameter increases by 0.02mm for every 1mm moved along the file in a coronal direction. The most apical point of any file is deemed D0, so moving coronally on the file by 1mm brings you to D1 and so on, up to D16, as there is a 16mm cutting surface on all files. Standardisation of instruments (ISO): For example, an ISO K file size 25 has a D0 value of 0.25mm diameter at its tip. If you were to move 6mm coronally on this file from D0, the cross-sectional diameter would be: 0.25mm + (6mm × 0.02mm) = 0.37mm. Protaper series The range of files is available in hand and rotary versions. The first files in the series are termed SX, S1 and S2. These are used to improve access to the canals by first creating a coronal flare in the crown-down technique. SX files: D0 value of 0.19mm S1 files: D0 value of 0.17mm S2 files: D0 value of 0.20mm. SX files are typically used first as they are shorter in overall length (19mm) and so are good in cases of restricted space. The canal is prepped in the coronal 2/3 with these files as part of the crown-down technique. Standardisation of instruments (ISO): After this, files named F1, F2, F3 etc. are used with increasing D0 values. These are used to shape the canal. Standardisation of instruments (ISO): F1 files: D0 value of 0.20mm F2 files: D0 value of 0.25mm F3 files: D0 value of 0.30mm, etc. Between each of these finishing files, you should recapitulate the canal using the corresponding (same D0 value) K file. This prevents procedural errors, confirms the canal remains patent and prevents dentine swarf build-up within the canal. Complete copious irrigation in between each file. Rotary files: The introduction of nickel titanium in dentistry has allowed rotary systems to be used to prepare root canals safely and predictably. Rotary instrumentation is known to have an improved cutting efficiency when compared with hand filing techniques.
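To make the D-value arithmetic above concrete, here is a minimal Python sketch (the helper name and the worked numbers are purely illustrative, not part of the ISO standard) that computes the cross-sectional diameter of a standard 2%-taper file at any point along its 16mm cutting surface:

```python
def iso_file_diameter(tip_size: int, mm_from_tip: float, taper: float = 0.02) -> float:
    """Diameter (in mm) of a standard ISO hand file at a point `mm_from_tip`
    millimetres coronal of D0, assuming the standardised 2% taper
    (0.02 mm increase in diameter per mm of file)."""
    d0 = tip_size / 100.0          # ISO size 25 -> 0.25 mm diameter at the tip (D0)
    return d0 + mm_from_tip * taper

# Worked example from the text: a size 25 K file, 6 mm coronal of D0
print(iso_file_diameter(25, 6))    # 0.37 (mm)
```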
It is advisable to use a dedicated electric endodontic motor where torque and speed can be easily controlled dependent on the system chosen. Despite the advantages of rotary systems, it is always recommended to create a glide path with hand files in each canal prior to rotary instrumentation. There are numerous rotary files available on the market, including a variety of systems from different manufacturers. Rotary files: Reciprocating systems Reciprocating systems involve rotation of the file in both anti-clockwise and clockwise directions. This is similar to the ‘balanced force’ mechanism used with hand files. When the file is used in an anti-clockwise direction, it engages dentine and is quickly followed by a clockwise turn before re-engaging the root canal wall and shearing the dentine. Benefits of a reciprocating system include: Reduced risk of cyclical failure Reduced risk of torsional failure Simple protocol with single file (small, regular or large based on canal size) therefore more cost effective Self-adjusting files Self-adjusting file systems have been developed to overcome complications that arise due to complex anatomy and canal configurations. These files are used in a rotary hand piece and consist of a flexible, thin NiTi lattice with a hollow centre that adapt three-dimensionally to the shape of a given root canal, including its cross section. The files are operated with vibratory in-and-out motion, with continuous irrigation of disinfectant delivered by a peristaltic pump through the hollow file. A uniform layer of dentin is removed from the whole circumference of the root canal, thus achieving the main goals of root canal treatment while preserving the remaining root dentin. The 3D scrubbing effect of the file, combined with the fresh irrigant, result in clean canals, which in turn facilitate better obturation. More effective disinfection of flat-oval root canals is another goal that is simultaneously attained. Rotary files: D-files D files are a selection of bespoke rotary files that are commonly used in re-treatment cases for the efficient removal of gutta percha. They are used in sequence to remove the coronal (D1), mid (D2) and apical (D3) ⅓ root filling material more efficiently before the final shaping with conventional instruments. D1 is 16mm in length with a cutting end tip to engage the filling material in the canal. D2 and D3 are 18mm and 22mm in length respectively, both are non end cutting and aim to not remove remaining dentine from canal walls in the process. Single use legislation (in the UK): In 2007, new legislation documenting the possible risk of prion disease transmission via endodontic files/reamers during root canal treatment was published via the BDJ. The conclusions made were such that there was no significant risk associated but the implementation of single use instruments was introduced to take all possible precautions. This was primarily due to the shape and relative surface area of the files making thorough disinfection and sterilisation very difficult. Mechanisms of failure: Instrumentation of the root canal systems (RCS) can lead to procedural errors including ledging, zipping, canal perforation and apex transportation all of which can be somewhat successfully resolved through further manual corrective techniques. However, file separation whereby the instrument breaks in the canal, is the most concerning and problematic procedural error, with fractured endodontic instruments being the most commonly found object in the RCS. 
The incidence of file fracture has been found to range between 0.25% and 6% of cases. File separation will create an obstruction within the canal, preventing adequate cleaning and shaping of the canal at and beyond the obstruction as well as causing under-filling of the RCS. This may ultimately lead to endodontic failure, depending on the location at which the file fractured in the RCS. Mechanisms of failure: The causes of instrument fracture can be divided into different factors: operator/technique, anatomy and instrument. Cyclic fatigue i.e. the lack of flexibility of the instruments when negotiating particularly curved canals. Mechanisms of failure: The more curved the canal, the greater the cyclical fatigue placed on the instrument, as it is undergoing repetitive tensile and compressive stresses upon rotation no matter the flexibility of the alloy. Pre-curving of the stainless steel files for canal negotiation will work-harden them, rendering them more brittle and therefore more likely to fracture. Such files should also not be twisted in an anticlockwise manner, as this may also lead to brittle fracture, especially when there is increased torque. NiTi files have been designed with increased flexibility for canal negotiation; however, this does not entirely eliminate the risk of file separation. NiTi files undergo cyclic fatigue due to a change in the crystalline structure of the file whilst under stress, resulting in the alloy becoming more brittle. Mechanisms of failure: Flexural fatigue i.e. overuse of the file. Mechanisms of failure: It is safe to assume that the more a file is used, the greater the risk of separation. However, one cannot dictate a specific number of times for use nor predict when a file is going to fracture. The introduction of single use files has reduced this risk somewhat, yet it is vital to regularly inspect the files upon removal from canals for damage. The problem comes when files separate without there being any visible sign of damage. Mechanisms of failure: Torsional fatigue Torque relates to the force needed for an instrument to carry on rotating upon encountering frictional forces. A file may bind to the wall of the root canal apically when the diameter of the file is larger than that of the canal, causing friction. If rotational forces are still in motion, torque may reach a critical level and the file will fracture. The torque generated in smaller canals will be greater than in larger canals, as files will bind to the canal walls more readily through friction. The greater the diameter of the instrument, the more force it can withstand (despite needing increased torque); however, the less resistant it becomes to cyclic fatigue. Torsional fatigue can be somewhat limited through creation of a glide path and adopting the Crown-Down technique in a bid to reduce frictional forces. Mechanisms of failure: Intrinsic file defects Beware of surface defects arising from the manufacture of the files, which can propagate under fatigue by creating stress concentrations and ultimately lead to fracture. This holds especially true for NiTi files, which are manufactured via milling of alloy blanks using CAD-CAM, as opposed to twisting of the blanks as with stainless steel. Deeper cutting flutes will also create stress concentrations. Mechanisms of failure: Operator related fracture File failure could be attributed to the skill and chosen technique used for instrumentation by the operator.
It is more often the way in which an instrument is used, rather than the number of times it has been used, that causes fracture, e.g. due to overloading. Aggressively inserting instruments into canals should be avoided, as this will increase the friction created between the canal walls and the file. Evidence shows that hand instrumentation will result in a lower risk of file fracture compared with rotary instrumentation; this may be attributed to the increased rotational speed of rotary systems, which enhances the effects of cyclic fatigue. Therefore, when using electric motors with rotary instruments, a low speed and low torque concept is recommended. Mechanisms of failure: Minimising the risk of separation: well-angled radiographs to determine canal curvature (though these give a 2D representation of a 3D system); access cavity design (straight-line access) and a glide path; a Crown-Down instrumentation sequence to minimise friction; wet canals for lubrication (but beware of the risk of corrosion to stainless steel instruments from irrigants used in canals, e.g. EDTA or sodium hypochlorite); regular file inspection before and during instrumentation; and setting electric motors at low torque (follow manufacturer instructions for recommended speed and torque).
**Calico cat** Calico cat: A calico cat (US English) is a domestic cat of any breed with a tri-color coat. The calico cat is most commonly thought of as being 25% to 75% white with large orange and black patches; however, they may have other colors in their patterns. Sometimes a variation occurs with cream and grey patches that is called a muted calico. Calicoes are almost exclusively female except under rare genetic conditions. Calico cat: A calico cat is not to be confused with a tortoiseshell, which has a black undercoat and a mostly mottled coat of black/red or blue/cream with relatively few to no white markings. However, outside North America, the calico pattern is more commonly called tortoiseshell and white. In the province of Quebec, Canada, they are sometimes called chatte d'Espagne (French for '(female) cat of Spain'). Other names include brindle, tricolor cat, mikeneko (三毛猫) (Japanese for 'triple fur cat'), samsaek goyangi (삼색 고양이) (Korean for 'three colored cat'), and lapjeskat (Dutch for 'patches cat'). Calicoes with diluted coloration (blue tortoiseshell and white) have been called calimanco or clouded tiger. Occasionally, the tri-color calico coloration is combined with a tabby patterning, called tortoiseshell tabby with white. This calico-patched tabby may be referred to as caliby or torbico. Calico cat: Derived from a colorful printed Calico fabric, when the term "calico" is applied to cats it refers only to a color pattern of the fur, not to a cat breed or any reference to any other traits, such as their eyes. Formal standards set by professional and show animal breeders limit the breeds among which they permit registration of cats with calico coloration; those breeds are the Manx cat, American Shorthair, Maine Coon, British Shorthair, Persian cat, Arabian Mau, Japanese Bobtail, Exotic Shorthair, Siberian, Turkish Van, Turkish Angora, and Norwegian Forest cat. Calico cat: Because the genetic determination of coat colors in calico cats is linked to the X chromosome, calicoes are nearly always female, with one color linked to the maternal X chromosome and a second color linked to the paternal X chromosome. In most cases, males are only one color (for instance, black) as they have only one X chromosome. Male calicoes can happen when a male cat has two X chromosomes (Klinefelter syndrome, with XXY sex chromosomes and generally they are sterile); the condition is a chimera, with two different cell types; or, rarely, when some skin cells of the developing kitten spontaneously mutate. Calico cat: Some calico cats, called "dilute calicoes", may be lighter in color overall. Dilutes are distinguished by having grey (known as blue), cream, and gold colors instead of the typical colors along with the white. History: The tri-color coat characteristic of calico cats does not define any breed, but occurs incidentally in cats who express a range of color patterns; accordingly, the effect has no definitive historical background. However, the existence of patches in calico cats was traced to a certain degree by Neil Todd in a study determining the migration of domesticated cats along trade routes in Europe and Northern Africa. The proportion of cats having the orange mutant gene found in calicoes was traced to the port cities along the Mediterranean in Greece, France, Spain, and Italy, originating from Egypt.The calico has been Maryland's state cat since 1 October 2001. 
Calico cats were chosen as the state cat because their white, black, and orange coloring is in harmony with the coloring of the Baltimore oriole (the state bird) and the Baltimore checkerspot butterfly (the state insect). Etymology: The fabric called "calico" was originally from the city of Calicut in southwestern India. Printed calico was imported into the United States from Lancashire, England, in the 1780s, and a linguistic separation occurred there. While Europe maintained the word calico for the fabric, in the US it was used to refer to the printed design or pattern. These colorful, small-patterned printed fabrics gave rise to the use of the word calico to describe a cat coat of tri-color; "calico" as an adjective being synonymous to "mottled" or "resembling printed calico". Genetics: In genetic terms, calico cats resemble tortoiseshells in most ways, except the tortoiseshell has a black undercoat and the calico has a white undercoat. One anomaly is that, as a rule of thumb the larger the areas of white, the fewer and larger the patches of ginger and dark or tabby coat. In contrast, a non-white-spotted tortoiseshell usually has small patches of color or even something resembling a salt-and-pepper sprinkling. This reflects the genetic effects on relative speeds of migration of melanocytes and X-inactivation in the embryo.Serious study of calico cats apparently began in 1948 when Murray Barr and his graduate student E. G. Bertram noticed dark, drumstick-shaped masses inside the nuclei of nerve cells of female cats, but not in male cats. These dark masses became known as Barr bodies. In 1959, Japanese cell biologist Susumu Ohno determined the Barr bodies were X chromosomes. In 1961, Mary Lyon proposed the concept of X-inactivation: when one of the two X chromosomes inside a female mammal shuts off. She observed this in the coat color patterns of mice. There are two different alleles in Calico cats, one received from each parent, that can determine their fur coloration: each allele is responsible for either orange or black fur. Typically, each allele received would create a solid coat of black and orange fur, but, with Calico cats, Lyonization (commonly known as X-inactivation), occurs at random, which makes for the very distinct fur coat.Calico cats are almost always female because the locus of the gene for the orange/non-orange coloring is on the X chromosome. In the absence of other influences, such as color inhibition that causes white fur, the alleles present in those orange loci determine whether the fur is orange or not. Female cats, like all female placental mammals, normally have two X chromosomes. In contrast, male placental mammals, including chromosomally stable male cats, have one X and one Y chromosome. Since the Y chromosome does not have any locus for the orange gene, it is not possible for a normal XY male cat to have both orange and non-orange genes together, which is what typically results in tortoiseshell or calico coloring.One rare genetic exception resulting in a male calico is when faulty cell division leaves an extra X chromosome in one of the gametes that produced the male cat. That extra X then is reproduced in each of his cells, a condition referred to as XXY, or Klinefelter syndrome. 
Such a combination of chromosomes could produce tortoiseshell or calico markings in the affected male, in the same way as XX chromosomes produce them in the female.All but approximately one in three thousand of the rare calico or tortoiseshell male cats are sterile because of the chromosome abnormality and breeders reject any exceptions for stud purposes because they generally are of poor physical quality and fertility. Even in the rare cases where a male calico is healthy and fertile, most cat registries will not accept them as show animals.As Sue Hubble stated in her book Shrinking the Cat: Genetic Engineering Before We Knew About Genes, The mutation that gives male cats a ginger-colored coat and females ginger, tortoiseshell, or calico coats produced a particularly telling map. The orange mutant gene is found only on the X, or female, chromosome. As with humans, female cats have paired sex chromosomes, XX, and male cats have XY sex chromosomes. The female cat, therefore, can have the orange mutant gene on one X chromosome and the gene for a black coat on the other. The piebald gene is on a different chromosome. If expressed, this gene codes for white, or no color, and is dominant over the alleles that code for a certain color (i.e. orange or black), making the white spots on calico cats. If that is the case, those several genes will be expressed in a blotchy coat of the tortoiseshell or calico kind. But the male, with his single X chromosome, has only one of that particular coat-color gene: he can be not-ginger or he can be ginger (although some modifier genes can add a bit of white here and there), but unless he has a chromosomal abnormality he cannot be a calico cat. Genetics: Currently, it has been very difficult to reproduce the fur patterns of calico cats by cloning. Penelope Tsernoglou wrote that this "...is due to an effect called x-linked inactivation which involves the random inactivation of one of the X chromosomes. Since all female mammals have two X chromosomes, one might wonder if this phenomenon could have a more widespread impact on cloning in the future."The study of Calico cats may have provided significant findings relating to physiological differences between female and male mammals. Folklore: Cats with calico coloration are believed to bring good luck in the folklore of many cultures. In Germany, the word for a cat with calico coloring is "Glückskatze" or "lucky cat". In the United States, calicoes sometimes are referred to as money cats. In Japan, Maneki-neko figures depict calico cats, bringing good luck. Japanese sailors often kept a calico as their ship's cat to protect against misfortune at sea. Literature: In the late nineteenth century, Eugene Field published "The Duel", a poem for children also known as "The Gingham Dog and the Calico Cat".
**Argentine Sign Language** Argentine Sign Language: Argentine Sign Language (Spanish: Lengua de señas argentina; LSA) is used in Argentina. Deaf people attend separate schools, and use local sign languages out of class. A manual alphabet for spelling Spanish has been developed.
**Call girl** Call girl: A call girl or female escort is a prostitute who (unlike a street walker) does not display her profession to the general public, nor does she usually work in an institution like a brothel, although she may be employed by an escort agency. The client must make an appointment, usually by calling a telephone number. Call girls often advertise their services in small ads in magazines and via the Internet, although an intermediary advertiser, such as an escort agency, may be involved in promoting escorts, while, less often, some may be handled by a pimp. Call girls may work either incall, where the client comes to them, or outcall, where they go to the client. Some porn stars are known to escort as well. Internet: Many call girl agencies and independent call girls have their own websites. The internet has become the main medium through which customers find their desired escort. Generally, a picture of the woman is provided, and sometimes, the type of sexual services she offers.
**Triangle of auscultation** Triangle of auscultation: The triangle of auscultation is a relative thinning of the musculature of the back, situated along the medial border of the scapula, which allows for improved listening to the lungs. Boundaries: It has the following boundaries: medially, by the inferior portion of the trapezius; inferiorly, by the latissimus dorsi; laterally, by the medial border of the scapula. The superficial floor of the triangle is formed by the lateral portion of the erector spinae muscles. Deep to these muscles are the osseous portions of the 6th and 7th ribs and the internal and external intercostal muscles. Clinical significance: The triangle of auscultation is useful for assessment by pulmonary auscultation and for thoracic procedures. Due to the relative thinning of the musculature of the back in the triangle, the posterior thoracic wall is closer to the skin surface, making respiratory sounds audible more clearly with a stethoscope. On the left side, the cardiac orifice of the stomach lies deep to the triangle. In the days before X-rays were discovered, the sound of swallowed liquids was auscultated over this triangle to confirm an oesophageal tumour. Clinical significance: To better expose the floor of the triangle, made up of the posterior thoracic wall at the 6th and 7th intercostal space, the patient is asked to fold their arms across their chest, laterally rotating the scapulae, while bending forward at the trunk, somewhat resembling the fetal position. Clinical significance: The triangle of auscultation can be used as a surgical approach path. It can also be used for applying a nerve block known as the rhomboid intercostal block, which can be used to relieve pain after rib fractures or a thoracotomy. This nerve block is usually achieved by injection of the local anesthetic agent into the fascial plane between the rhomboid muscle and the intercostal muscles.
**Forward algorithm** Forward algorithm: The forward algorithm, in the context of a hidden Markov model (HMM), is used to calculate a 'belief state': the probability of a state at a certain time, given the history of evidence. The process is also known as filtering. The forward algorithm is closely related to, but distinct from, the Viterbi algorithm. Forward algorithm: The forward and backward algorithms should be placed within the context of probability as they appear to simply be names given to a set of standard mathematical procedures within a few fields. For example, neither "forward algorithm" nor "Viterbi" appear in the Cambridge encyclopedia of mathematics. The main observation to take away from these algorithms is how to organize Bayesian updates and inference to be efficient in the context of directed graphs of variables (see sum-product networks). Forward algorithm: For an HMM, this belief state is written as p(xt|y1:t). Here x(t) is the hidden state, which is abbreviated as xt, and y1:t are the observations 1 to t. The backward algorithm complements the forward algorithm by taking into account the future history if one wanted to improve the estimate for past times. This is referred to as smoothing, and the forward/backward algorithm computes p(xt|y1:T) for 1<t<T. Thus, the full forward/backward algorithm takes into account all evidence. Note that a belief state can be calculated at each time step, but doing this does not, in a strict sense, produce the most likely state sequence, but rather the most likely state at each time step, given the previous history. In order to achieve the most likely sequence, the Viterbi algorithm is required. It computes the most likely state sequence given the history of observations, that is, the state sequence that maximizes p(x0:t|y0:t). History: The forward algorithm is one of the algorithms used to solve the decoding problem. Since the development of speech recognition and pattern recognition and related fields like computational biology which use HMMs, the forward algorithm has gained popularity. Algorithm: The goal of the forward algorithm is to compute the joint probability p(xt,y1:t), where for notational convenience we have abbreviated x(t) as xt and (y(1),y(2),...,y(t)) as y1:t. Computing p(xt,y1:t) directly would require marginalizing over all possible state sequences {x1:t−1}, the number of which grows exponentially with t. Instead, the forward algorithm takes advantage of the conditional independence rules of the hidden Markov model (HMM) to perform the calculation recursively. Algorithm: To demonstrate the recursion, let αt(xt) = p(xt,y1:t) = ∑xt−1 p(xt,xt−1,y1:t). Using the chain rule to expand p(xt,xt−1,y1:t), we can then write αt(xt) = ∑xt−1 p(yt|xt,xt−1,y1:t−1) p(xt|xt−1,y1:t−1) p(xt−1,y1:t−1). Because yt is conditionally independent of everything but xt, and xt is conditionally independent of everything but xt−1, this simplifies to αt(xt) = p(yt|xt) ∑xt−1 p(xt|xt−1) αt−1(xt−1). Thus, since p(yt|xt) and p(xt|xt−1) are given by the model's emission distributions and transition probabilities, one can quickly calculate αt(xt) from αt−1(xt−1) and avoid incurring exponential computation time. The initial condition is set as some prior probability over x0, α0(x0) = p(x0), which sums to 1 over x0.
Algorithm: Once the joint probability αt(xt) = p(xt,y1:t) has been computed using the forward algorithm, we can easily obtain the related joint probability p(y1:t) as αt = p(y1:t) = ∑xt p(xt,y1:t) = ∑xt αt(xt) and the required conditional probability p(xt|y1:t) as p(xt|y1:t) = p(xt,y1:t)/p(y1:t) = αt(xt)/αt. Once the conditional probability has been calculated, we can also find the point estimate of xt. For instance, the MAP estimate of xt is given by x^tMAP = arg maxxt αt(xt), while the MMSE estimate of xt is given by x^tMMSE = E[xt|y1:t] = ∑xt xt p(xt|y1:t) = (1/αt) ∑xt xt αt(xt). The forward algorithm is easily modified to account for observations from variants of the hidden Markov model as well, such as the Markov jump linear system. Example: This example concerns inferring the possible states of the weather from the observed condition of seaweed. We have observations of seaweed for three consecutive days as dry, damp, and soggy, in order. The possible states of weather can be sunny, cloudy, or rainy. In total, there can be 27 such weather sequences. Exploring all such possible state sequences is computationally very expensive. To reduce this complexity, the forward algorithm comes in handy, where the trick lies in using the conditional independence of the sequence steps to calculate partial probabilities, αt(xt) = p(xt,y1:t) = p(yt|xt) ∑xt−1 p(xt|xt−1) αt−1(xt−1), as shown in the above derivation. Hence, we can calculate the probabilities as the product of the appropriate observation/emission probability p(yt|xt) (the probability of observing yt given state xt at time t) with the sum of probabilities of reaching that state at time t, calculated using the transition probabilities. This reduces the complexity of the problem from searching the whole state-sequence space to just using the previously computed α's and the transition probabilities. Applications of the algorithm: The forward algorithm is mostly used in applications that need us to determine the probability of being in a specific state when we know about the sequence of observations. We first calculate the probabilities over the states computed for the previous observation and use them for the current observations, and then extend it out for the next step using the transition probability table. The approach basically caches all the intermediate state probabilities so they are computed only once. This helps us to compute a fixed state path. The process is also called posterior decoding. Applications of the algorithm: The algorithm computes probability much more efficiently than the naive approach, which very quickly ends up in a combinatorial explosion. Together, they can provide the probability of a given emission/observation at each position in the sequence of observations. It is from this information that a version of the most likely state path is computed ("posterior decoding"). Applications of the algorithm: The algorithm can be applied wherever we can train a model as we receive data using Baum-Welch or any general EM algorithm. The Forward algorithm will then tell us about the probability of data with respect to what is expected from our model. One of the applications can be in the domain of Finance, where it can help decide on when to buy or sell tangible assets. Applications of the algorithm: It can have applications in all fields where we apply Hidden Markov Models. The popular ones include Natural language processing domains like tagging part-of-speech and speech recognition. Recently it is also being used in the domain of Bioinformatics.
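The recursion and normalization steps above translate directly into a few lines of code. The following is a minimal NumPy sketch for the seaweed/weather example; the prior, transition and emission probabilities are made-up illustrative values (the text does not specify any), and the function name forward is arbitrary:

```python
import numpy as np

states = ["sunny", "cloudy", "rainy"]
obs_names = ["dry", "damp", "soggy"]

prior = np.array([0.6, 0.3, 0.1])                 # p(x0): illustrative placeholder values
trans = np.array([[0.7, 0.2, 0.1],                # p(x_t | x_{t-1}), rows = previous state
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])
emit  = np.array([[0.6, 0.3, 0.1],                # p(y_t | x_t), rows = state, cols = observation
                  [0.3, 0.4, 0.3],
                  [0.1, 0.3, 0.6]])

def forward(observations):
    """Return alpha[t, i] = p(x_t = i, y_1:t) for each time step t."""
    alpha = np.zeros((len(observations), len(states)))
    alpha[0] = prior * emit[:, observations[0]]          # initialisation with the prior
    for t in range(1, len(observations)):
        # recursion: alpha_t(x_t) = p(y_t|x_t) * sum_{x_{t-1}} p(x_t|x_{t-1}) * alpha_{t-1}(x_{t-1})
        alpha[t] = emit[:, observations[t]] * (alpha[t - 1] @ trans)
    return alpha

obs = [0, 1, 2]                   # dry, damp, soggy on three consecutive days
alpha = forward(obs)
evidence = alpha[-1].sum()        # p(y_1:t)
filtered = alpha[-1] / evidence   # p(x_t | y_1:t), the belief state at the final step
map_state = states[int(np.argmax(filtered))]
print(filtered, map_state)
```

Note that only the three alpha vectors are ever computed, rather than the 27 possible weather sequences mentioned in the example.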
Applications of the algorithm: The forward algorithm can also be applied to perform weather speculation. We can have an HMM describing the weather and its relation to the state of observations for a few consecutive days (some examples could be dry, damp, soggy, sunny, cloudy, rainy, etc.). We can consider calculating the probability of observing any sequence of observations recursively given the HMM. We can then calculate the probability of reaching an intermediate state as the sum of all possible paths to that state. Thus the partial probabilities for the final observation will hold the probability of reaching those states going through all possible paths. Variants of the algorithm: Hybrid Forward Algorithm: A variant of the Forward Algorithm called the Hybrid Forward Algorithm (HFA) can be used for the construction of radial basis function (RBF) neural networks with tunable nodes. The RBF neural network is constructed by the conventional subset selection algorithms. The network structure is determined by combining both the stepwise forward network configuration and the continuous RBF parameter optimization. It is used to efficiently and effectively produce a parsimonious RBF neural network that generalizes well. This is achieved through simultaneous network structure determination and parameter optimization on the continuous parameter space. HFA tackles the mixed integer hard problem using an integrated analytic framework, leading to improved network performance and reduced memory usage for the network construction. Variants of the algorithm: Forward Algorithm for Optimal Control in Hybrid Systems: This variant of the Forward algorithm is motivated by the structure of manufacturing environments that integrate process and operations control. We derive a new property of the optimal state trajectory structure which holds under a modified condition on the cost function. This allows us to develop a low-complexity, scalable algorithm for explicitly determining the optimal controls, which can be more efficient than the Forward Algorithm. Variants of the algorithm: Continuous Forward Algorithm: A continuous forward algorithm (CFA) can be used for nonlinear modelling and identification using radial basis function (RBF) neural networks. The proposed algorithm performs the two tasks of network construction and parameter optimization within an integrated analytic framework, and offers two important advantages. First, the model performance can be significantly improved through continuous parameter optimization. Secondly, the neural representation can be built without generating and storing all candidate regressors, leading to significantly reduced memory usage and computational complexity. Complexity: The complexity of the forward algorithm is Θ(n·m^2), where m is the number of hidden or latent states (like the weather states in the example above) and n is the length of the sequence of the observed variable. This is a clear reduction from the ad hoc method of exploring all possible state sequences, which has a complexity of Θ(n·m^n). Software: The Hidden Markov Model R package contains functionality for computing and retrieving the forward procedure; the momentuHMM R package provides tools for using and inferring HMMs; the GHMM library for Python; the hmm package, a Haskell library for HMMs, implements the forward algorithm; and a library for Java contains machine learning and artificial intelligence algorithm implementations.
**Spaghettieis** Spaghettieis: Spaghettieis (German pronunciation: [ʃpaˈɡɛtiˌaɪs]), or spaghetti ice cream, is a German ice cream dish made to resemble a plate of spaghetti. In the dish, vanilla ice cream is extruded through a modified Spätzle press or potato ricer, giving it the appearance of spaghetti. It is then placed over whipped cream and topped with strawberry sauce (to simulate tomato sauce) and either coconut flakes, grated almonds, or white chocolate shavings to represent the parmesan cheese. Besides the usual dish with strawberry sauce, one may also find variations like ice cream with dark chocolate and nuts, simulating Spaghetti Carbonara instead of Spaghetti Bolognese. History: Spaghettieis was created by Dario Fontanella in the late 1960s in Mannheim, Germany. Fontanella recalls serving his innovative creation to children who broke into tears because they wanted ice cream and not a plate of spaghetti. He received the "Bloomaulorden", a medal bestowed by the city of Mannheim, in 2014.For many years, the dish was not well known outside Germany, and could only be found at some gelaterias and specialty ice cream parlors, special events, and hotels and restaurants around the world. Recently, Spaghettieis has begun to appear as a novelty in more restaurants and has had some attention on social media.
**Arason invariant** Arason invariant: In mathematics, the Arason invariant is a cohomological invariant associated to a quadratic form of even rank and trivial discriminant and Clifford invariant over a field k of characteristic not 2, taking values in H3(k,Z/2Z). It was introduced by Arason (1975, Theorem 5.7). The Rost invariant is a generalization of the Arason invariant to other algebraic groups. Definition: Suppose that W(k) is the Witt ring of quadratic forms over a field k and I is the ideal of forms of even dimension. The Arason invariant is a group homomorphism from I3 to the Galois cohomology group H3(k,Z/2Z). It is determined by the property that on the 8-dimensional diagonal form with entries 1, −a, −b, ab, −c, ac, bc, −abc (the 3-fold Pfister form «a, b, c») it is given by the cup product of the classes of a, b, c in H1(k,Z/2Z) = k*/k*2. The Arason invariant vanishes on I4, and it follows from the Milnor conjecture proved by Voevodsky that it is an isomorphism from I3/I4 to H3(k,Z/2Z).
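For readability, the defining property just described can also be written symbolically; this is only a restatement of the sentence above, with e3 used as an assumed name for the Arason invariant:

```latex
e_3\bigl(\langle\langle a,b,c\rangle\rangle\bigr)
  = (a)\cup(b)\cup(c) \in H^3(k,\mathbb{Z}/2\mathbb{Z}),
\qquad (a),(b),(c) \in H^1(k,\mathbb{Z}/2\mathbb{Z}) = k^{*}/k^{*2}.
```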
**Comparative genomic hybridization** Comparative genomic hybridization: Comparative genomic hybridization (CGH) is a molecular cytogenetic method for analysing copy number variations (CNVs) relative to ploidy level in the DNA of a test sample compared to a reference sample, without the need for culturing cells. The aim of this technique is to quickly and efficiently compare two genomic DNA samples arising from two sources, which are most often closely related, because it is suspected that they contain differences in terms of either gains or losses of either whole chromosomes or subchromosomal regions (a portion of a whole chromosome). This technique was originally developed for the evaluation of the differences between the chromosomal complements of solid tumor and normal tissue, and has an improved resolution of 5–10 megabases compared to the more traditional cytogenetic analysis techniques of giemsa banding and fluorescence in situ hybridization (FISH) which are limited by the resolution of the microscope utilized.This is achieved through the use of competitive fluorescence in situ hybridization. In short, this involves the isolation of DNA from the two sources to be compared, most commonly a test and reference source, independent labelling of each DNA sample with fluorophores (fluorescent molecules) of different colours (usually red and green), denaturation of the DNA so that it is single stranded, and the hybridization of the two resultant samples in a 1:1 ratio to a normal metaphase spread of chromosomes, to which the labelled DNA samples will bind at their locus of origin. Using a fluorescence microscope and computer software, the differentially coloured fluorescent signals are then compared along the length of each chromosome for identification of chromosomal differences between the two sources. A higher intensity of the test sample colour in a specific region of a chromosome indicates the gain of material of that region in the corresponding source sample, while a higher intensity of the reference sample colour indicates the loss of material in the test sample in that specific region. A neutral colour (yellow when the fluorophore labels are red and green) indicates no difference between the two samples in that location.CGH is only able to detect unbalanced chromosomal abnormalities. This is because balanced chromosomal abnormalities such as reciprocal translocations, inversions or ring chromosomes do not affect copy number, which is what is detected by CGH technologies. CGH does, however, allow for the exploration of all 46 human chromosomes in single test and the discovery of deletions and duplications, even on the microscopic scale which may lead to the identification of candidate genes to be further explored by other cytological techniques.Through the use of DNA microarrays in conjunction with CGH techniques, the more specific form of array CGH (aCGH) has been developed, allowing for a locus-by-locus measure of CNV with increased resolution as low as 100 kilobases. This improved technique allows for the aetiology of known and unknown conditions to be discovered. History: The motivation underlying the development of CGH stemmed from the fact that the available forms of cytogenetic analysis at the time (giemsa banding and FISH) were limited in their potential resolution by the microscopes necessary for interpretation of the results they provided. 
Furthermore, giemsa banding interpretation has the potential to be ambiguous and therefore has lowered reliability, and both techniques require high labour inputs which limits the loci which may be examined.The first report of CGH analysis was by Kallioniemi and colleagues in 1992 at the University of California, San Francisco, who utilised CGH in the analysis of solid tumors. They achieved this by the direct application of the technique to both breast cancer cell lines and primary bladder tumors in order to establish complete copy number karyotypes for the cells. They were able to identify 16 different regions of amplification, many of which were novel discoveries.Soon after in 1993, du Manoir et al. reported virtually the same methodology. The authors painted a series of individual human chromosomes from a DNA library with two different fluorophores in different proportions to test the technique, and also applied CGH to genomic DNA from patients affected with either Downs syndrome or T-cell prolymphocytic leukemia as well as cells of a renal papillary carcinoma cell line. It was concluded that the fluorescence ratios obtained were accurate and that differences between genomic DNA from different cell types were detectable, and therefore that CGH was a highly useful cytogenetic analysis tool.Initially, the widespread use of CGH technology was difficult, as protocols were not uniform and therefore inconsistencies arose, especially due to uncertainties in the interpretation of data. However, in 1994 a review was published which described an easily understood protocol in detail and the image analysis software was made available commercially, which allowed CGH to be utilised all around the world. History: As new techniques such as microdissection and degenerate oligonucleotide primed polymerase chain reaction (DOP-PCR) became available for the generation of DNA products, it was possible to apply the concept of CGH to smaller chromosomal abnormalities, and thus the resolution of CGH was improved.The implementation of array CGH, whereby DNA microarrays are used instead of the traditional metaphase chromosome preparation, was pioneered by Solinas-Tolodo et al. in 1997 using tumor cells and Pinkel et al. in 1998 by use of breast cancer cells. This was made possible by the Human Genome Project which generated a library of cloned DNA fragments with known locations throughout the human genome, with these fragments being used as probes on the DNA microarray. Now probes of various origins such as cDNA, genomic PCR products and bacterial artificial chromosomes (BACs) can be used on DNA microarrays which may contain up to 2 million probes. Array CGH is automated, allows greater resolution (down to 100 kb) than traditional CGH as the probes are far smaller than metaphase preparations, requires smaller amounts of DNA, can be targeted to specific chromosomal regions if required and is ordered and therefore faster to analyse, making it far more adaptable to diagnostic uses. Basic methods: Metaphase slide preparation The DNA on the slide is a reference sample, and is thus obtained from a karyotypically normal man or woman, though it is preferential to use female DNA as they possess two X chromosomes which contain far more genetic information than the male Y chromosome. Phytohaemagglutinin stimulated peripheral blood lymphocytes are used. 1mL of heparinised blood is added to 10ml of culture medium and incubated for 72 hours at 37 °C in an atmosphere of 5% CO2. 
Colchicine is added to arrest the cells in mitosis; the cells are then harvested and treated with hypotonic potassium chloride and fixed in 3:1 methanol/acetic acid. One drop of the cell suspension should then be dropped onto an ethanol-cleaned slide from a distance of about 30 cm; optimally, this should be carried out at room temperature at humidity levels of 60–70%. Slides should be evaluated by visualisation using a phase contrast microscope: minimal cytoplasm should be observed, chromosomes should not overlap, should be 400–550 bands long with no separated chromatids, and finally should appear dark rather than shiny. Slides then need to be air dried overnight at room temperature, and any further storage should be in groups of four at −20 °C with either silica beads or nitrogen present to maintain dryness. Different donors should be tested as hybridization may be variable. Commercially available slides may be used, but should always be tested first. Basic methods: Isolation of DNA from test tissue and reference tissue Standard phenol extraction is used to obtain DNA from test or reference (karyotypically normal individual) tissue, which involves the combination of Tris-Ethylenediaminetetraacetic acid and phenol with aqueous DNA in equal amounts. This is followed by separation by agitation and centrifugation, after which the aqueous layer is removed and further treated using ether, and finally ethanol precipitation is used to concentrate the DNA. Extraction may also be completed using commercially available DNA isolation kits which are based on affinity columns. Preferentially, DNA should be extracted from fresh or frozen tissue as this will be of the highest quality, though it is now possible to use archival material which is formalin fixed or paraffin wax embedded, provided the appropriate procedures are followed. 0.5-1 μg of DNA is sufficient for the CGH experiment, though if the desired amount is not obtained DOP-PCR may be applied to amplify the DNA; however, in this case it is important to apply DOP-PCR to both the test and reference DNA samples to improve reliability. Basic methods: DNA labelling Nick translation is used to label the DNA and involves cutting DNA and substituting nucleotides labelled with fluorophores (direct labelling) or with biotin or digoxigenin so that fluorophore-conjugated antibodies can be added later (indirect labelling). It is then important to check the fragment lengths of both test and reference DNA by gel electrophoresis, as they should be within the range of 500kb-1500kb for optimum hybridization. Basic methods: Blocking Unlabelled Life Technologies Corporation's Cot-1 DNA (placental DNA enriched with repetitive sequences of length 50bp-100bp) is added to block normal repetitive DNA sequences, particularly at centromeres and telomeres, as these sequences, if detected, may reduce the fluorescence ratio and cause gains or losses to escape detection. Basic methods: Hybridization 8–12μl of each of labelled test and labelled reference DNA are mixed and 40 μg Cot-1 DNA is added, then precipitated and subsequently dissolved in 6μl of hybridization mix, which contains 50% formamide to decrease the DNA melting temperature and 10% dextran sulphate to increase the effective probe concentration in a saline sodium citrate (SSC) solution at a pH of 7.0. Denaturation of the slide and probes is carried out separately.
The slide is submerged in 70% formamide/2xSSC for 5–10 minutes at 72 °C, while the probes are denatured by immersion in a water bath at 80 °C for 10 minutes and are immediately added to the metaphase slide preparation. This reaction is then covered with a coverslip and left for two to four days in a humid chamber at 40 °C. The coverslip is then removed and 5-minute washes are applied: three using 2xSSC at room temperature, one at 45 °C with 0.1xSSC and one using TNT at room temperature. The reaction is then preincubated for 10 minutes, followed by a 60-minute incubation at 37 °C, three more 5-minute washes with TNT, then one with 2xSSC at room temperature. The slide is then dried using an ethanol series of 70%/96%/100% before counterstaining with DAPI (0.35 μg/ml), for chromosome identification, and sealing with a coverslip. Basic methods: Fluorescence visualisation and imaging A fluorescence microscope with the appropriate filters for the DAPI stain as well as the two fluorophores utilised is required for visualisation, and these filters, such as narrow band pass filters, should also minimise the crosstalk between the fluorophores. The microscope must provide uniform illumination without chromatic variation, be appropriately aligned and have a "plan" type of objective which is apochromatic and gives a magnification of x63 or x100. The image should be recorded using a camera with a spatial resolution of at least 0.1 μm at the specimen level and give an image of at least 600x600 pixels. The camera must also be able to integrate the image for at least 5 to 10 seconds, with a minimum photometric resolution of 8 bits. Dedicated CGH software is commercially available for the image processing step, and is required to subtract background noise, remove and segment materials not of chromosomal origin, normalize the fluorescence ratio, and carry out interactive karyotyping and chromosome scaling to standard length. A "relative copy number karyotype", which presents chromosomal areas of deletions or amplifications, is generated by averaging the ratios of a number of high quality metaphases and plotting them along an ideogram, a diagram identifying chromosomes based on banding patterns. Interpretation of the ratio profiles is conducted using either fixed or statistical thresholds (confidence intervals). When using confidence intervals, gains or losses are identified when the 95% confidence interval of the fluorescence ratio does not contain 1.0. Basic methods: Extra notes Extreme care must be taken to avoid contamination at any step involving DNA, especially with the test DNA, as contamination of the sample with normal DNA will skew results closer to 1.0, and thus abnormalities may go undetected. FISH, PCR and flow cytometry experiments may be employed to confirm results. Array comparative genomic hybridization: Array comparative genomic hybridization (also microarray-based comparative genomic hybridization, matrix CGH, array CGH, aCGH) is a molecular cytogenetic technique for the detection of chromosomal copy number changes on a genome-wide and high-resolution scale. Array CGH compares the patient's genome against a reference genome and identifies differences between the two genomes, and hence locates regions of genomic imbalances in the patient, utilizing the same principles of competitive fluorescence in situ hybridization as traditional CGH. Array comparative genomic hybridization: With the introduction of array CGH, the main limitation of conventional CGH, a low resolution, is overcome.
In array CGH, the metaphase chromosomes are replaced by cloned DNA fragments (+100–200 kb) of which the exact chromosomal location is known. This allows the detection of aberrations in more detail and, moreover, makes it possible to map the changes directly onto the genomic sequence. Array CGH has proven to be a specific, sensitive, fast and high-throughput technique, with considerable advantages compared to other methods used for the analysis of DNA copy number changes, making it more amenable to diagnostic applications. Using this method, copy number changes at a level of 5–10 kilobases of DNA sequences can be detected. As of 2006, even high-resolution CGH (HR-CGH) arrays can accurately detect structural variations (SV) at a resolution of 200 bp. This method allows one to identify new recurrent chromosome changes such as microdeletions and duplications in human conditions such as cancer and birth defects due to chromosome aberrations. Array comparative genomic hybridization: Methodology Array CGH is based on the same principle as conventional CGH. In both techniques, DNA from a reference (or control) sample and DNA from a test (or patient) sample are differentially labelled with two different fluorophores and used as probes that are cohybridized competitively onto nucleic acid targets. In conventional CGH, the target is a reference metaphase spread. In array CGH, these targets can be genomic fragments cloned in a variety of vectors (such as BACs or plasmids), cDNAs, or oligonucleotides. Figure 2 is a schematic overview of the array CGH technique. DNA from the sample to be tested is labeled with a red fluorophore (Cyanine 5) and a reference DNA sample is labeled with a green fluorophore (Cyanine 3). Equal quantities of the two DNA samples are mixed and cohybridized to a DNA microarray of several thousand evenly spaced cloned DNA fragments or oligonucleotides, which have been spotted in triplicate on the array. After hybridization, digital imaging systems are used to capture and quantify the relative fluorescence intensities of each of the hybridized fluorophores. The resulting ratio of the fluorescence intensities is proportional to the ratio of the copy numbers of DNA sequences in the test and reference genomes. If the intensities of the fluorochromes are equal on one probe, this region of the patient's genome is interpreted as having an equal quantity of DNA in the test and reference samples; if there is an altered Cy3:Cy5 ratio this indicates a loss or a gain of the patient DNA at that specific genomic region. Array comparative genomic hybridization: Technological approaches to array CGH Array CGH has been implemented using a wide variety of techniques. Therefore, some of the advantages and limitations of array CGH are dependent on the technique chosen. Array comparative genomic hybridization: The initial approaches used arrays produced from large insert genomic DNA clones, such as BACs. The use of BACs provides sufficiently intense signals to detect single-copy changes and to locate aberration boundaries accurately. However, initial DNA yields of isolated BAC clones are low and DNA amplification techniques are necessary. These techniques include ligation-mediated polymerase chain reaction (PCR), degenerate primer PCR using one or several sets of primers, and rolling circle amplification. Arrays can also be constructed using cDNA.
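As a rough illustration of how the per-probe intensity ratios described above are turned into gain/loss calls, here is a minimal Python sketch; the log2-ratio threshold of ±0.3 and the intensity values are arbitrary illustrative choices, not figures from the text:

```python
import numpy as np

def call_copy_number(test_intensity, ref_intensity, log2_threshold=0.3):
    """Turn per-probe test (e.g. Cy5) and reference (e.g. Cy3) intensities
    into simple gain / loss / no-change calls via the log2 ratio.
    The +/-0.3 default threshold is an arbitrary illustrative cut-off."""
    ratio = np.log2(np.asarray(test_intensity, dtype=float) /
                    np.asarray(ref_intensity, dtype=float))
    calls = np.full(ratio.shape, "no change", dtype=object)
    calls[ratio > log2_threshold] = "gain in test DNA"
    calls[ratio < -log2_threshold] = "loss in test DNA"
    return ratio, calls

# Three hypothetical probes: balanced, duplicated and deleted in the test sample
ratio, calls = call_copy_number([1000, 1950, 480], [1020, 1000, 1010])
print(ratio.round(2), calls)
```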
These arrays currently yield a high spatial resolution, but the number of cDNAs is limited by the genes that are encoded on the chromosomes, and their sensitivity is low due to cross-hybridization. This results in the inability to detect single-copy changes on a genome-wide scale. The latest approach is spotting the arrays with short oligonucleotides. The number of available oligos is almost unlimited, and the processing is rapid, cost-effective, and easy. Although oligonucleotides do not have the sensitivity to detect single-copy changes, averaging of ratios from oligos that map next to each other on the chromosome can compensate for the reduced sensitivity. It is also possible to use arrays which have overlapping probes so that specific breakpoints may be uncovered. Array comparative genomic hybridization: Design approaches There are two approaches to the design of microarrays for CGH applications: whole genome and targeted. Array comparative genomic hybridization: Whole genome arrays are designed to cover the entire human genome. They often include clones that provide extensive coverage across the genome; some arrays offer contiguous coverage within the limits of the genome. Whole-genome arrays have been constructed mostly for research applications and have proven their outstanding worth in gene discovery. They are also very valuable in screening the genome for DNA gains and losses at an unprecedented resolution. Targeted arrays are designed for a specific region (or regions) of the genome for the purpose of evaluating that targeted segment. A targeted array may be designed to study a specific chromosome or chromosomal segment or to identify and evaluate specific DNA dosage abnormalities in individuals with suspected microdeletion syndromes or subtelomeric rearrangements. The crucial goal of a targeted microarray in medical practice is to provide clinically useful results for diagnosis, genetic counseling, prognosis, and clinical management of unbalanced cytogenetic abnormalities. Applications: Conventional Conventional CGH has been used mainly for the identification of chromosomal regions that are recurrently lost or gained in tumors, as well as for the diagnosis and prognosis of cancer. This approach can also be used to study chromosomal aberrations in fetal and neonatal genomes. Furthermore, conventional CGH can be used in detecting chromosomal abnormalities and has been shown to be efficient in diagnosing complex abnormalities associated with human genetic disorders. Applications: In cancer research CGH data from several studies of the same tumor type show consistent patterns of non-random genetic aberrations. Some of these changes appear to be common to various kinds of malignant tumors, while others are more tumor specific. For example, gains of chromosomal regions 1q, 3q and 8q, as well as losses of 8p, 13q, 16q and 17p, are common to a number of tumor types, such as breast, ovarian, prostate, renal and bladder cancer (Figure 3). Other alterations, such as 12p and Xp gains in testicular cancer, 13q gain and 9q loss in bladder cancer, 14q loss in renal cancer and Xp loss in ovarian cancer, are more specific, and might reflect the unique selection forces operating during cancer development in different organs. Array CGH is also frequently used in research and diagnostics of B cell malignancies, such as chronic lymphocytic leukemia. Applications: Chromosomal aberrations Cri du Chat (CdC) is a syndrome caused by a partial deletion of the short arm of chromosome 5.
Several studies have shown that conventional CGH is suitable to detect the deletion, as well as more complex chromosomal alterations. For example, Levy et al. (2002) reported an infant with a cat-like cry, the hallmark of CdC, but with an indistinct karyotype. CGH analysis revealed a loss of chromosomal material from 5p15.3, confirming the diagnosis clinically. These results demonstrate that conventional CGH is a reliable technique in detecting structural aberrations and, in specific cases, may be more efficient in diagnosing complex abnormalities. Applications: Array CGH Array CGH applications are mainly directed at detecting genomic abnormalities in cancer. However, array CGH is also suitable for the analysis of DNA copy number aberrations that cause human genetic disorders. That is, array CGH is employed to uncover deletions, amplifications, breakpoints and ploidy abnormalities. Earlier diagnosis is of benefit to the patient, as they may undergo appropriate treatments and counseling to improve their prognosis. Applications: Genomic abnormalities in cancer Genetic alterations and rearrangements occur frequently in cancer and contribute to its pathogenesis. Detecting these aberrations by array CGH provides information on the locations of important cancer genes and can have clinical use in diagnosis, cancer classification and prognostication. However, not all of the losses of genetic material are pathogenetic, since some DNA material is physiologically lost during the rearrangement of immunoglobulin subgenes. In a recent study, array CGH has been implemented to identify regions of chromosomal aberration (copy-number variation) in several mouse models of breast cancer, leading to identification of cooperating genes during myc-induced oncogenesis. Array CGH may also be applied not only to the discovery of chromosomal abnormalities in cancer, but also to the monitoring of the progression of tumors. Differentiation between metastatic and mild lesions is also possible using FISH once the abnormalities have been identified by array CGH. Applications: Submicroscopic aberrations Prader–Willi syndrome (PWS) is a paternal structural abnormality involving 15q11-13, while a maternal aberration in the same region causes Angelman syndrome (AS). In both syndromes, the majority of cases (75%) are the result of a 3–5 Mb deletion of the PWS/AS critical region. These small aberrations cannot be detected using cytogenetics or conventional CGH, but can be readily detected using array CGH. As a proof of principle, Vissers et al. (2003) constructed a genome-wide array with a 1 Mb resolution to screen three patients with known, FISH-confirmed microdeletion syndromes, including one with PWS. In all three cases, the abnormalities, ranging from 1.5 to 2.9 Mb, were readily identified. Thus, array CGH was demonstrated to be a specific and sensitive approach in detecting submicroscopic aberrations. Applications: When using overlapping microarrays, it is also possible to uncover breakpoints involved in chromosomal aberrations. Applications: Prenatal genetic diagnosis Though not yet a widely employed technique, the use of array CGH as a tool for preimplantation genetic screening is becoming an increasingly popular concept. It has the potential to detect CNVs and aneuploidy in eggs, sperm or embryos which may contribute to failure of the embryo to successfully implant, miscarriage or conditions such as Down syndrome (trisomy 21).
This makes array CGH a promising tool to reduce the incidence of life-altering conditions and improve success rates of IVF attempts. The technique involves whole genome amplification from a single cell, which is then used in the array CGH method. It may also be used in couples carrying chromosomal translocations such as balanced reciprocal translocations or Robertsonian translocations, which have the potential to cause chromosomal imbalances in their offspring. Limitations of CGH and array CGH: A main disadvantage of conventional CGH is its inability to detect structural chromosomal aberrations without copy number changes, such as mosaicism, balanced chromosomal translocations, and inversions. CGH can also only detect gains and losses relative to the ploidy level. In addition, chromosomal regions with short repetitive DNA sequences are highly variable between individuals and can interfere with CGH analysis. Therefore, repetitive DNA regions like centromeres and telomeres need to be blocked with unlabeled repetitive DNA (e.g. Cot-1 DNA) and/or can be omitted from screening. Furthermore, the limited resolution of conventional CGH is a major practical problem that restricts its clinical applications. Although CGH has proven to be a useful and reliable technique in the research and diagnostics of both cancer and human genetic disorders, the applications involve only gross abnormalities. Because of the limited resolution of metaphase chromosomes, aberrations smaller than 5–10 Mb cannot be detected using conventional CGH. Limitations of CGH and array CGH: For the detection of such abnormalities, a high-resolution technique is required. Limitations of CGH and array CGH: Array CGH overcomes many of these limitations. Array CGH is characterized by a high resolution, its major advantage with respect to conventional CGH. The standard resolution varies between 1 and 5 Mb, but can be improved to approximately 40 kb by supplementing the array with extra clones. However, as in conventional CGH, the main disadvantage of array CGH is its inability to detect aberrations that do not result in copy number changes; it is also limited in its ability to detect mosaicism. The level of mosaicism that can be detected depends on the sensitivity and spatial resolution of the clones. At present, the detection limit is a rearrangement present in approximately 50% of the cells. For the detection of such abnormalities, other techniques, such as SKY (spectral karyotyping) or FISH, still have to be used.
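To see why low-level mosaicism is so hard to pick up, the back-of-the-envelope sketch below (illustrative arithmetic, not taken from the cited literature) estimates the test/reference ratio expected when only a fraction of the cells carry a single-copy gain; at the roughly 50% level quoted above, the ratio shifts by only about 0.25 from the normal value of 1.0, which is easily lost in measurement noise.

```python
def expected_ratio(fraction_abnormal, abnormal_copies=3, normal_copies=2):
    """Average test/reference copy-number ratio when only part of the
    test-sample cells carry the aberration (diploid reference assumed)."""
    test_mean = fraction_abnormal * abnormal_copies + (1 - fraction_abnormal) * normal_copies
    return test_mean / normal_copies

for f in (1.0, 0.5, 0.2):
    print(f"{int(f * 100):3d}% abnormal cells -> expected ratio {expected_ratio(f):.2f}")
# 100% -> 1.50, 50% -> 1.25, 20% -> 1.10
```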
**3C 75** 3C 75: 3C 75 (also called 3C75) is a binary black hole system in the dumbbell-shaped galaxy NGC 1128 in the galaxy cluster Abell 400. It has four relativistic jets, two coming from each accreting supermassive black hole. It is travelling at 1200 kilometers per second, causing the jets to be swept back. 3C 75 may be X-ray source 2A 0252+060 (1H 0253+058, XRS 02522+060).
**Academic dress of the University of Cambridge** Academic dress of the University of Cambridge: The University of Cambridge has a long tradition of academic dress, which it traditionally refers to as academical dress. Almost every degree which is awarded by the university has its own distinct gown in addition to having its own hood. Undergraduates wear college gowns which have subtle differences enabling the wearer's college to be determined. Academic dress is worn quite often in Cambridge on formal, and sometimes informal, occasions, and there are a number of rules and customs governing when and how it is worn. Black gowns (undress) are worn at less formal events, while on special days (such as the days of General Admission to Degrees) full academical dress is worn, consisting of gown, hood and headdress, with Doctors in festal dress. The university's officials also have ancient forms of academic dress, unique to the university. When academic dress is worn: Most undergraduates buy or borrow a gown in their first week at Cambridge for the purpose of matriculation, which is the formal ceremony of enrolment in the university. It is more common to buy a gown, especially at the more traditional colleges, as the number of occasions on which it is worn quickly repays the investment. Gowns are often recycled between 'generations', as new graduate students in turn need to upgrade their gowns at the start of the year. When academic dress is worn: In most colleges, gowns are worn to Formal Hall (formal dinner, held every night in some colleges, once or twice a term in a few others) and to Chapel. Various College events also demand academic dress; for example, the Trinity College regulations for members in statu pupillari specify that certain senior members of college (such as the Dean) prefer students to wear a gown when addressing them in their official capacity (often when having been "deaned" for breaking the College Rules). The extent to which these rules apply varies greatly from college to college, with some dispensing with academic attire even for formal hall. When academic dress is worn: On special occasions, fuller academic dress is used, including hoods. Gowns are always worn with a hood to graduation ceremonies at the Senate House, and the university sets out strict rules regarding which gown and hood a graduating student should wear, and with what. Hoods may also be worn when attending chapel with Choir Dress or a surplice. Components of Cambridge academic dress: When wearing full academic dress, an individual wears the gown, hood and headdress of the highest degree s/he has already received from the University of Cambridge. One who does not hold a Cambridge degree (such as an undergraduate, or a graduate of another university) normally wears a gown according to his or her status in Cambridge, i.e., undergraduate, BA status or MA status (see below), without the strings which are attached to the gowns of Cambridge graduates. Graduates of other universities may wear the academic dress of those universities on 'scarlet days', unless they are university officials or participating in a degree ceremony, but this has only been permitted since 1998. A graduand (someone about to be presented for a degree) wears the full Cambridge academic dress of the highest status degree that s/he already holds. Graduands who do not already hold a Cambridge degree wear the gown appropriate to their status in the university, along with the hood of the degree to which they are about to be admitted.
Undergraduates, who do not yet hold a degree, wear their undergraduate gowns, with the hood of the degree that they are about to receive. Thus, for example, an undergraduate graduating to a BA degree wears an undergraduate gown and a BA hood. A holder of a BA from Cambridge graduating to a PhD wears a BA gown and hood, whereas a graduate of another university graduating to a PhD wears a BA or MA status gown and PhD hood. Components of Cambridge academic dress: Medical graduates completing their clinical years wear the gown and hood of the B.Chir degree. This is because the B.Chir degree is conferred in absentia as soon as the list of people passing the Final M.B. examination is posted outside Senate House. This was to prevent the necessity for a 'double graduation' ceremony. It is common practice for students to hire the B.Chir academic dress, rather than purchase it, since it is superseded by the M.B academic dress post graduation. Where students have completed both pre-clinical and clinical years at Cambridge, many do not purchase the M.B academic dress, hiring it for occasions requiring academic dress (Alumni Formal Hall etc.) as this is superseded a couple of years later when the automatic Oxbridge MA is granted. Medical students graduate at the end of the third year with a Cambridge BA, and for this ceremony are treated as any other students graduating with a BA. Components of Cambridge academic dress: The full list of degrees and their order of seniority is given in the Ordinances of the university: roughly speaking, in descending order of rank the degrees are higher doctorates such as the DD or ScD, the PhD and other initial doctorates such as the MD or EdD, master's degrees, and finally bachelor's degrees. These different groups of degrees are distinguished by different academic dress. Components of Cambridge academic dress: Gowns The gowns in use at Cambridge, like those generally used throughout the UK but not the US, are open-fronted. The main types seen are the undergraduate gown, Bachelor of Arts gown and Master of Arts gown, though the sleeves of graduates' gowns are adorned with various patterns that indicate the exact degree or degrees that they possess, and allow this to be determined even when hoods are not being worn. In addition, for Scarlet days, Doctors (either of Philosophy, or one of the more senior doctorates) wear special dress gowns, distinguished by the use of scarlet. Components of Cambridge academic dress: Undergraduates All undergraduate gowns resemble knee-length versions of the BA gown, and the basic gown is black, reaching down to just below the knees with an open pointed sleeve and the forearm seam left open. Most colleges' gowns include minor variations on this pattern, such as sleeve decorations. The most distinct differences are the blue colour of the undergraduate gowns of Trinity and Caius and the blue facings of Selwyn. Illustrations and descriptions of the various collegiate gowns are available from the university's Heraldic and Genealogical Society website. Components of Cambridge academic dress: BA and MA The two most common graduate gowns in Cambridge are the BA gown and the MA gown. Unlike in most other universities, except Oxford and Trinity College Dublin, no bachelor's degree save the BA is awarded. 
All undergraduates at Cambridge traditionally graduated with a BA degree after three years, although it is now common for many graduates in scientific subjects to also obtain a master's degree, such as an MEng or MSci, after a further year of study and to graduate to both degrees at once. Components of Cambridge academic dress: As in Oxford and Dublin, BAs are automatically entitled to proceed to the degree of Master of Arts after a period of time (see also Master of Arts (Oxbridge and Dublin)). In Cambridge, this period is six years from the end of the first term after matriculation, provided this is at least two years from the award of the BA; BAs are thus eligible for the MA at the first graduation ceremony in the seventh calendar year after matriculation. Components of Cambridge academic dress: The BA gown is a long black stuff (cloth) gown with long pointed sleeves to the wrists, with the forearm seam left open from near the shoulder to around 3–4" from the wrist. The gown is gathered at the back in a yoke, and falls down to just below the knees. The BA hood is of black cloth, bound and half-lined in white fur, which by regulation is artificial. The MA gown is similar to the BA gown, except that it has "boot" sleeves, which are long, rectangular and closed at the ends, with a crescent cut out of each sleeve-end which curves at the top (unlike the Oxford MA gown), and a horizontal arm-slit just above the elbow. It falls down to calf length (slightly longer than the BA gown) and may be made of silk. The MA hood is of black silk lined in white silk. Components of Cambridge academic dress: Other master's degree gowns vary from subject to subject at Cambridge; for example, the Master of Engineering (MEng) and M.Sci. gowns are the standard MA gown but with a circle of cord on each sleeve, and a corresponding hood is worn. The MPhil gown is the same as the MSci gown, but instead of an embroidered wheel, it has two buttons connected by a vertical cord running from the sleeve slit to the shoulder. Components of Cambridge academic dress: Persons without a Cambridge degree (including those with a degree from another university) wear a "BA status" or "MA status" gown, which is identical to a BA or MA gown but (nominally) with the "strings" (black ribbons attached inside the shoulder) removed or hidden from view. The BA status gown is for those aged under twenty-four, while the MA status gown is for those aged twenty-four or over. (The rationale is that Cambridge students would usually join the university at 18, obtain their BA after three years, at 21, and their MA after a further three years, at 24.) Doctors: Doctors in Cambridge have two forms of academic dress: undress and full dress (or scarlet). Scarlet is worn on formal college and university occasions, and so-called Scarlet Days (see below). Components of Cambridge academic dress: The undress gown or black gown is similar to the MA gown (for PhD, MD, VetMD, BusD, EngD, EdD, LittD, ScD and in practice DD) or is a 'lay-type' gown similar to that worn by King's Counsel (LLD, MedScD, MusD). Different doctorates are distinguished from each other and from the plain MA gown by different arrangements of lace on the sleeves, facings or flap collar. The DD traditionally had a gown with sleeves gathered at the wrists, like those on American gowns, which are called 'bishop's sleeves' or 'pudding sleeves'. Components of Cambridge academic dress: Undress gowns may be made of silk or stuff. The gown may be worn with a doctor's hood.
The PhD hood, the one most commonly seen, is made of black corded silk lined with scarlet cloth, and other 'lower' doctorates have variants on the same scheme with different colours lining the black hood. The hoods of higher doctors are made of red cloth and lined with silk in the faculty colour (scarlet for letters, pink shot light blue for science, light-cherry for laws, mid-cherry for medicine, dove grey for divinity). The MusD hood is of cream damask lined with dark cherry satin. Components of Cambridge academic dress: The full dress or scarlet gown differs for each doctorate. For PhDs, and also MD, VetMD, BusD, EngD and EdD who share the same scarlet gown, there are two versions of the robe. The traditional version is the same as the MA gown (in theory, though not in practice, the silk version), with the addition of a broad red cloth stripe down each side at the front. The alternative version (authorised in 2006 but commonly used without authorisation before then) uses detachable facings on an undress PhD gown, which is distinguished from the MA gown by doctors' lace on the sleeves that is not found on the traditional festal PhD gown. For the higher doctorates other than MusD (DD, LLD, ScD, LittD, and MedScD), the scarlet gown is made of scarlet cloth and has open sleeves that hang long at the back. The linings of the sleeves and the facings are in silk of the colour of the hood lining. At the sleeve front, the lining is turned outwards and is fixed in position by a twisted cord and button. The MusD gown is of cream damask, with much shorter sleeves, lined and faced with deep cherry satin. Components of Cambridge academic dress: Scarlet day Scarlet day is the term used in the University of Cambridge to designate those days on which Doctors are required to wear the festal form of academic dress. They are so called because of the scarlet elements in the gowns and hoods of the festal full dress worn by Doctors as opposed to the everyday undress black gowns. On these days it is also permitted for members of the university to wear the academic dress of other Universities from which they have obtained degrees.The ordinances of the university set out the following days as scarlet days: Christmas Day, Easter Day, Ascension Day, Whitsunday, Trinity Sunday, All Saints' Day, the day appointed for the Commemoration of Benefactors, and on the days of General Admission to Degrees. In addition, the Vice-Chancellor may prescribe other scarlet days throughout the year, for example on days of national celebration. Components of Cambridge academic dress: The individual colleges each have a few scarlet days of their own as well, such as on the day of the college's Saint. Components of Cambridge academic dress: Hoods Hoods are worn on the back as an indicator of academic status. These are of the distinctive Cambridge Full shape. The hood consists of a cape (known also as the 'tippet'), cowl and liripipe. The neckband of a hood is of the outer colour, with no edging of the lining material. The corners of tippets are square. The design of hoods as set by University Ordinances Chapter II is below. Components of Cambridge academic dress: If a student already holds one or more degrees from the University of Cambridge, the above rules do not apply. Instead, they wear the hood of the degree that they previously received. For example, when a MPhil student graduates, but already holds a BA from Cambridge, then they would wear the BA hood again. 
Components of Cambridge academic dress: Headdresses A form of a black hat known as a square cap (also mortarboard) is worn or carried. Properly, it is worn outdoors and carried indoors, except by people acting in an official capacity who customarily continue to wear it indoors. Although in practice few people wear (or even carry) a cap nowadays, they are nominally still required for graduates at the university; caps ceased to be compulsory for undergraduates in 1943 due to a shortage during the Second World War, and, after bringing them back for degree ceremonies in the Senate House only, were finally made entirely optional for undergraduates in 1953, though they are still not permitted to wear any other head covering with a gown. Components of Cambridge academic dress: With their festal gowns, Doctors of Divinity wear a black velvet cap, and Doctors in other Faculties wear a wide-brimmed round velvet bonnet with gold string and tassels, known as a Tudor bonnet, instead of a mortarboard, though they may choose to wear a square cap with a festal gown if they are taking part in a ceremony in the Senate House. Components of Cambridge academic dress: Sub-fusc Sub-fusc means "of a dark/dusky colour", and refers to the clothes worn with full academic dress in Cambridge although the university officially does not use this term. Generally, this involves a dark suit and white shirt, collar, bands and bow tie for men (who must also wear black socks), and a dark suit and white blouse for women. The rules for dress on graduation for women also specify that women's attire must have long sleeves. Although only male graduands are required to wear white ties and bands by a regulation, nothing prevents female graduands doing so too if they wear a properly collared shirt. Components of Cambridge academic dress: In place of sub-fusc, members of Her Majesty's Forces have in the past been allowed to wear their service uniform, persons in holy orders their clerical dress, and national dress has been worn, together with the appropriate gown and hood. Currently as of 2007, national dress is no longer accepted as an alternative to sub-fusc. The proctors have discretion to waive the part of the regulations concerning dark clothes and white tie on 'reasonable grounds'. Components of Cambridge academic dress: Notably, the rules governing Cambridge sub-fusc are less detailed and less strict than those prevailing at Oxford. In particular, bands may be worn at Oxford only by high officers of the university and by doctors on certain occasions. Some women are required to wear bands with a black tie (in practice a black velvet ribbon) while others holding certain offices are permitted the alternative of a white bow tie with their bands. At Cambridge, there are only strict sub-fusc rules in Statutes and Ordinances for graduation ceremonies, at which the rules are enforced strictly by the proctors. Persons who are incorrectly dressed may be prevented from graduating in person, and their Praelector or Presenter may be fined. Academic dress for officials of the University: The Chancellor The chancellor of the university wears on ceremonial occasions a black silk gown with a long train, decorated with gold lace, similar to the gown of the Lord Chancellor. 
Academic dress for officials of the University: Persons presenting for, or conferring, degrees The Vice-Chancellor (or his/her deputy) when conferring degrees, and presenters of graduands for higher doctorates (Doctor of Divinity, Doctor of Law, Doctor of Science, Doctor of Medical Science, Doctor of Letters, Doctor of Music), wear a scarlet cappa clausa, or "closed cope" of scarlet cloth with an ermine hood and trimmings, as shown in the image to the right. Cambridge is the only university in the world apart from the University of the South in America to retain the cappa clausa as part of its academic dress. Academic dress for officials of the University: College praelectors, and all other presenters, wear the academical dress of their highest Cambridge degree when presenting graduands. Proctors The Proctors in Cambridge are formally responsible for the discipline of junior members of the university. In addition, they have various ceremonial and administrative duties, with which they are, in practice, mainly occupied. Academic dress for officials of the University: In both Oxford and Cambridge, the Proctors could formerly be seen patrolling the streets after dark with the university Constables, or bulldogs, who wore top hats in Cambridge and bowler hats in Oxford. These traditions have now ceased, although the Proctors are still responsible for posting various disciplinary notices (e.g. highlighting the restriction on undergraduates' possession of motor cars) around the Colleges. Their Constables continue to wear top hats and cloaks on ceremonial occasions. Academic dress for officials of the University: The Proctors wear the academic dress of a Master of Arts, but with a distinctive ruff which is like a cape or very short mantle over the gown. This is known as 'Congregation dress' and is worn with the MA hood over it. When attending church, the hood is worn "squared," meaning that the hood is first flattened then worn over the shoulders like a cape (Masters of Trinity College also wear their hood squared at their installation, but the previous two and incumbent did not do so for unexplained reasons, so it is assumed this tradition is no longer observed). The method of arranging this dress has been handed down, as has a pattern "ruff", from Proctor to Proctor; but nowadays the repositories of such traditions are more often the Proctors' men, who, in these matters, perform the offices which judges expect of their clerks. Academic dress for officials of the University: Esquire Bedells The Esquire Bedells in Cambridge are ceremonial officers. The Senior Esquire Bedell is required to be familiar with all details of academical dress at the university. When carrying out the duties of his or her office, an Esquire Bedell is required to wear the academical dress of a Master of Arts. Other officials Other officials such as the Orator wear the academic dress appropriate to their degree.
**Bethe–Feynman formula** Bethe–Feynman formula: The Bethe–Feynman efficiency formula, a simple method for calculating the yield of a fission bomb, was first derived in 1943 after development in 1942. Aspects of the formula are speculated to be secret restricted data. Related formula: with a = internal energy per gram, b = growth rate and c = sphere radius, a ≈ (bc)²f. A numerical coefficient would then be included to create the Bethe–Feynman formula, increasing accuracy by more than an order of magnitude: Eff = (1/(γ − 1)) · E₂ · α_max² · R_crit² · δ(1 + 3δ/2)(1 − δ), where γ is the thermodynamic exponent of a photon gas, E₂ is the prompt energy density of the fuel, α is V_n (neutron velocity) / λ_mfp_tot (total reaction mean free path), R_crit is the critical radius and δ is the excess supercritical radius (R_core − R_crit) / R_crit.
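As a minimal sketch of how the δ-dependent part of this expression behaves, the snippet below evaluates δ from the definition given above and the dimensionless factor δ(1 + 3δ/2)(1 − δ) for a few placeholder radii (illustrative values only, not real device parameters); the factor grows quickly as the core is driven further past criticality.

```python
def excess_supercritical_radius(r_core, r_crit):
    """delta = (R_core - R_crit) / R_crit, as defined in the article."""
    return (r_core - r_crit) / r_crit

def delta_factor(delta):
    """Dimensionless delta-dependent factor from the efficiency formula as given."""
    return delta * (1 + 1.5 * delta) * (1 - delta)

r_crit = 1.0                       # arbitrary unit; only the ratio matters for delta
for r_core in (1.05, 1.10, 1.20):  # placeholder core radii slightly above critical
    d = excess_supercritical_radius(r_core, r_crit)
    print(f"delta = {d:.2f}  ->  delta factor = {delta_factor(d):.4f}")
```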
**Ulnar dysplasia** Ulnar dysplasia: Ulnar dysplasia, also known as ulnar longitudinal deficiency, ulnar club hand or ulnar aplasia/hypoplasia, is a rare congenital malformation which consists of an underdeveloped or missing ulna, causing an ulnar deviation of the entire wrist. The muscles and nerves in the hand may be missing or unbalanced. In severe cases, ulnar digits (e.g. ring and pinky finger) may be missing. Sometimes, radial dysplasia occurs alongside this malformation. This condition occurs in 1 in 100,000 live births. Sometimes, other orthopedic problems occur alongside this malformation, such as scoliosis. Types: There are four types of ulnar dysplasia: Type 1: The mildest type of ulnar dysplasia. The ulna is slightly shorter than average and there is a barely noticeable wrist deviation. Type 2: The ulna is moderately to severely smaller than normal. The radius is deviated and so is the hand. Type 3: The ulna is completely missing. The radius is even more deviated, causing a severe ulnar deviation of the hand. Types: Type 4: The most severe type of ulnar dysplasia; the ulna is completely missing and the wrist is severely deviated. The elbow bones are fused together, so the elbow has reduced mobility.
**Electronic counter-countermeasure** Electronic counter-countermeasure: Electronic counter-countermeasures (ECCM) is a part of electronic warfare which includes a variety of practices that attempt to reduce or eliminate the effect of electronic countermeasures (ECM) on electronic sensors aboard vehicles, ships and aircraft, and on weapons such as missiles. ECCM is also known as electronic protective measures (EPM), chiefly in Europe. In practice, EPM often means resistance to jamming. A more detailed description defines it as the electronic warfare operations taken by a radar to offset the enemy's countermeasures. History: Ever since electronics have been used in battle in an attempt to gain superiority over the enemy, effort has been spent on techniques to reduce the effectiveness of those electronics. More recently, sensors and weapons are being modified to deal with this threat. One of the most common types of ECM is radar jamming or spoofing. This originated with the Royal Air Force's use of what they codenamed Window during World War II, which Americans referred to as chaff. It was first used during the Hamburg raid on July 24-25, 1943. The night fighters sent against the raid carried radars with prong antennae protruding from their noses, giving them a range of four miles in a 70-degree cone. Jamming also may have originated with the British during World War II, when they began jamming German radio communications. These efforts included the successful British disruption of German Luftwaffe navigational radio beams. In perhaps the first example of ECCM, the Germans increased their radio transmitter power in an attempt to 'burn through' or override the British jamming, which, by necessity of the jammer being airborne or further away, produced weaker signals. This is still one of the primary methods of ECCM today. For example, modern airborne jammers are able to identify incoming radar signals from other aircraft and send them back with random delays and other modifications in an attempt to confuse the opponent's radar set, making the 'blip' jump around wildly and become impossible to range. More powerful airborne radars mean that it is possible to 'burn through' the jamming at much greater ranges by overpowering the jamming energy with the actual radar returns. The Germans were not able to overcome the chaff spoofing very successfully and had to work around it (by guiding the aircraft to the target area and then having them visually acquire the targets). History: Today, more powerful electronics with smarter software for operation of the radar might be able to better discriminate between a moving target like an aircraft and an almost stationary target like a chaff bundle. The technology powering modern sensors and seekers owes much of its success to the ECCM designed into it. Today, electronic warfare is composed of ECM, ECCM and electronic reconnaissance/intelligence (ELINT) activities. Examples of electronic counter-countermeasures include the American Big Crow program, which served as a Bear bomber simulator and a standoff jammer. It was a modified Air Force NKC-135A and was built to provide the capability and flexibility to conduct varied and precise electronic warfare experiments. Throughout its 20-year existence, the U.S. government developed and installed over 3,143 electronic counter-countermeasures on its array of weapons. There is also the BAMS Project, which has been funded by the Belgian government since 1982.
This system, together with advanced microelectronics, also provided secure voice, data, and text communications under the most severe electronic warfare conditions. Specific ECCM techniques: The following are some examples of EPM (other than simply increasing the fidelity of sensors through techniques such as increasing power or improving discrimination): ECM detection Sensor logic may be programmed to be able to recognize attempts at spoofing (e.g., aircraft dropping chaff during terminal homing phase) and ignore them. Even more sophisticated applications of ECCM might be to recognize the type of ECM being used, and be able to cancel out the signal. Specific ECCM techniques: Pulse compression by "chirping", or linear frequency modulation One of the effects of the pulse compression technique is boosting the apparent signal strength as perceived by the radar receiver. The outgoing radar pulses are chirped, that is, the frequency of the carrier is varied within the pulse, much like the sound of a cricket chirping. When the pulse reflects off a target and returns to the receiver, the signal is processed to add a delay as a function of the frequency. This has the effect of "stacking" the pulse so it seems stronger, but shorter in duration, to further processors. The effect can increase the received signal strength to above that of noise jamming. Similarly, jamming pulses (used in deception jamming) will not typically have the same chirp, so will not benefit from the increase in signal strength. Specific ECCM techniques: Frequency hopping Frequency agility ("frequency hopping") may be used to rapidly switch the frequency of the transmitted energy, and to receive only that frequency during the receiving time window. This foils jammers which cannot detect this switch in frequency quickly enough or predict the next hop frequency, and switch their own jamming frequency accordingly during the receiving time window. The most advanced jamming techniques have a very wide and fast frequency range, and might possibly jam out an antijammer. This method is also useful against barrage jamming in that it forces the jammer to spread its jamming power across multiple frequencies in the jammed system's frequency range, reducing its power in the actual frequency used by the equipment at any one time. The use of spread-spectrum techniques allows signals to be spread over a wide enough spectrum to make jamming of such a wideband signal difficult. Specific ECCM techniques: Sidelobe blanking Radar jamming can be effective from directions other than the direction the radar antenna is currently aimed. When jamming is strong enough, the radar receiver can detect it from a relatively low-gain sidelobe. The radar, however, will process signals as if they were received in the main lobe. Therefore, jamming can be seen in directions other than where the jammer is located. To combat this, an omnidirectional antenna is used for a comparison signal. By comparing the signal strength as received by both the omnidirectional and the (directional) main antenna, signals can be identified that are not from the direction of interest. These signals are then ignored. Specific ECCM techniques: Polarization Polarization can be used to filter out unwanted signals, such as jamming. If a jammer and receiver do not have the same polarization, the jamming signal will incur a loss that reduces its effectiveness. The four basic polarizations are linear horizontal, linear vertical, right-hand circular, and left-hand circular.
The signal loss inherent in a cross-polarized (transmitter different from receiver) pair is 3 dB for dissimilar types, and 17 dB for opposites. Specific ECCM techniques: Aside from power loss to the jammer, radar receivers can also benefit from using two or more antennas of differing polarization and comparing the signals received on each. This can effectively eliminate all jamming of the wrong polarization, although sufficiently strong jamming may still obscure the actual signal. Specific ECCM techniques: Radiation homing Another practice of ECCM is to program sensors or seekers to detect attempts at ECM and possibly even to take advantage of them. For example, some modern fire-and-forget missiles like the Vympel R-77 and the AMRAAM are able to home in directly on sources of radar jamming if the jamming is too powerful to allow them to find and track the target normally. This mode, called "home-on-jam", actually makes the missile's job easier. Some missile seekers actually target the enemy's radiation sources, and are therefore called "anti-radiation missiles" (ARMs). The jamming in this case effectively becomes a beacon announcing the presence and location of the transmitter. This makes the use of such ECM a difficult decision – it may serve to obscure an exact location from non-ARMs, but in doing so it must put the jamming vehicle at risk of being targeted and hit by ARMs.
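As an illustration of the pulse-compression idea described earlier in this section, the sketch below (a simplified numerical demonstration, not any fielded radar algorithm) builds a linear-frequency-modulated pulse and applies a matched filter: the chirped echo "stacks" into a short, high peak, while a pulse of the same length and amplitude without the correct chirp does not, which is why deception jamming lacking the chirp gains little from the processing.

```python
import numpy as np

fs = 1e6                      # sample rate: 1 MHz (illustrative)
T = 1e-3                      # pulse length: 1 ms
t = np.arange(0, T, 1 / fs)
f0, f1 = 0.0, 100e3           # chirp sweeps 0 -> 100 kHz across the pulse
k = (f1 - f0) / T             # sweep rate

chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))   # transmitted (and echoed) chirp
plain = np.cos(2 * np.pi * 50e3 * t)                     # same-length pulse without the chirp

matched = chirp[::-1]          # matched filter for a real chirp: time-reversed replica

echo_out = np.convolve(chirp, matched, mode="same")
fake_out = np.convolve(plain, matched, mode="same")

print("peak after matched filtering, chirped echo:", round(echo_out.max(), 1))
print("peak after matched filtering, un-chirped pulse:", round(fake_out.max(), 1))
# The chirped echo compresses into a far larger, narrower peak than the un-chirped pulse.
```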
**Blocked milk duct** Blocked milk duct: A blocked milk duct (sometimes also called a plugged or clogged milk duct) is a blockage of one or more ducts carrying milk to the nipple for the purpose of breastfeeding an infant, and it can cause mastitis. The symptoms are a tender, localised lump in one breast, with redness in the skin over the lump. The cause of a blocked milk duct is the failure to remove milk from part of the breast. This may be due to infrequent breastfeeding, poor attachment, tight clothing or trauma to the breast. Sometimes the duct to one part of the breast is blocked by thickened milk. A blocked milk duct can be managed by improving the removal of milk and correcting the underlying cause. Causes: Blocked milk ducts are a common breastfeeding problem and can occur for a number of reasons: When the infant does not latch properly Wearing a tight bra or tight clothing can restrict the breasts and put pressure on them, leading to a blocked milk duct A bad or weak pump could lead to a drainage issue When the breast milk is not removed regularly, the milk can back up and create a blockage A nipple bleb can also block the milk duct When the body produces milk in overabundance, it can engorge the breast and hence lead to a blockage Other reasons include fatigue, overexercise, dehydration and weaning. Symptoms: A blocked milk duct has the following common symptoms: Low fever and breast infection Pain in a particular side of the breast Swollen or tender lump in the breast Slower milk flow A small white blister on the nipple called a milk bleb Swelling or redness of the breast Areas of the breast that are hot or warm to touch The infant may feel fussy when feeding from the affected breast Treatment: The most effective treatment against blocked milk ducts is to empty the affected breasts by frequent breastfeeding or pumping. Numerous other treatment approaches have been suggested; however, there is insufficient clinical research to determine their effectiveness. Treatments that have been studied but have no strong evidence for or against their use: A gentle massage of the affected breast Sometimes after gentle massage over the lump, a string of the thickened milk comes out through the nipple, followed by a stream of milk, and rapid relief of the blocked duct. Treatment: Ensuring a correct positioning and latching of the baby Wearing loose clothing items that do not bind the breasts Applying warm compresses Drinking a specialized herbal tea Acupuncture Gua-Sha Proteolytic enzymes A blocked milk duct can result from a nipple bleb. Both of these can lead to mastitis.
**Flight hours** Flight hours: Flight time or block time is an aviation term referring to the total amount of time spent piloting aircraft, and serves as the primary measure of a pilot's experience. Flight time is defined by the International Civil Aviation Organization (ICAO) as "The total time from the moment an aeroplane first moves for the purpose of taking off until the moment it finally comes to rest at the end of the flight", and thus includes time spent taxiing and performing pre-flight checks on the ground, provided the engine is running. It is colloquially referred to as "blocks to blocks" or "chocks to chocks" time. In commercial aviation, this means the time from pushing back at the departure gate to arriving at the destination gate. Air time is defined as "the time from the moment an aircraft leaves the surface until it comes into contact with the surface at the next point of landing". For gliders without self-launch capability, flight time "commences when the glider is towed for the purpose of flight and ends when the glider comes to rest after landing." For helicopters, ICAO defines "flight time" as "The total time from the moment a helicopter's rotor blades start turning until the moment the helicopter finally comes to rest at the end of a flight and the rotor blades are stopped." Recording flight time: Most government licensing regulations have specific flight hour requirements, as do virtually all airline job listings. Consequently, all pilots maintain a logbook, which is a legal document. In commercial aviation, flight time is recorded to the nearest minute. In general aviation it is often rounded to the nearest 5 minutes or recorded in decimal rounded to the nearest 0.1 hour, which corresponds to the resolution of a typical Hobbs meter, an odometer-like instrument installed in most light aircraft. Pilots record many details about their flight time, such as whether a flight occurred during the day or at night, in a single- or multi-engine aircraft, in visual or instrument conditions, and the pilot's role during the flight. Legal decisions: In the United States, time spent de-icing between taxi and takeoff is considered flight time, even if the engines are shut down. If an aircraft becomes unserviceable during taxi, and a replacement aircraft is used, time spent taxiing the first aircraft is still included in the total flight time.
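A small sketch of the decimal-hour bookkeeping described above (a hypothetical logbook helper, not taken from any regulation or commercial product) converts block time in minutes to tenths of an hour, the resolution of a typical Hobbs meter.

```python
def block_time_decimal_hours(minutes: int) -> float:
    """Convert block time in minutes to hours rounded to the nearest 0.1 h."""
    return round(minutes / 60, 1)

# Pushback at 09:12, on blocks at 11:05 -> 113 minutes of block time
print(block_time_decimal_hours(113))   # 1.9 hours, at Hobbs-meter resolution
```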