**Mike Xie** Mike Xie: Yi Min "Mike" Xie is a Distinguished Professor and Director of the Centre for Innovative Structures and Materials (CISM) within the School of Engineering at RMIT University. Early life and education: Xie was born in China and attended Shanghai Jiao Tong University, obtaining a Bachelor's degree in Engineering Mechanics. He later studied at Swansea University, where he received a PhD in Computational Mechanics. Career: Xie moved to Australia and joined the University of Sydney in the 1990s, where he carried out research on the Evolutionary Structural Optimization (ESO) method, which has become a popular approach in topology optimization. He was appointed a Lecturer at Victoria University and was promoted to Senior Lecturer, then Associate Professor and eventually Professor. He moved to RMIT University in the early 2000s as Professor and served as Head of Civil and Infrastructure Engineering in the following years. Honours: He was elected a Fellow of the Australian Academy of Technology and Engineering in 2011. In 2017, he received the Clunies-Ross Award from the Australian Academy of Technology and Engineering and, in the same year, was awarded the AGM Michell Medal by Engineers Australia. He was awarded an Australian Laureate Fellowship by the Australian Research Council in 2019. In the 2019 Queen's Birthday Honours List, he was appointed a Member of the Order of Australia (AM) for "significant service to higher education, and to civil engineering". He was awarded the Victoria Prize for Science and Innovation in 2020.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Molecular Conceptor** Molecular Conceptor: The Molecular Conceptor Learning Series, produced by Synergix Ltd, is an interactive computer-based learning suite that teaches the principles and techniques used in everyday drug discovery. The series comprises five modules, each of which is designed to give students and professionals in the drug discovery field comprehensive training for drug design challenges. The modules are: Medicinal chemistry; Drug design; Cheminformatics; Structural bioinformatics; and Practical Drug Discovery: Case Studies. The Molecular Conceptor Learning Series is an educational resource centered on computer-aided drug design. It covers a range of topics, progressing from foundational principles to more advanced discussions. Through interactive 3D technology, practical examples, and numerous case studies, the series aims to explain the concepts, methodologies, and techniques relevant to drug discovery. Experts in drug discovery have contributed their expertise, insights, and experience, enabling the Synergix Ltd team to develop the series. It is designed to cater to both students and professionals within the life sciences sector, equipping them with the knowledge and skills required to navigate modern drug discovery. The concept: The Molecular Conceptor Learning Series aims to bring together, in a ready-digested format, the knowledge surrounding the skills, techniques and approaches used by the drug discovery team as a whole. It provides the information a medicinal chemist needs to analyze, understand and make informed decisions concerning the design of a drug, enabling them to contribute effectively to the drug discovery process. Contents: The full series is organized into 10 main volumes, each divided into a number of chapters, with each chapter tackling a different aspect of drug design. The volumes are as follows: Drug Discovery; Analog Design and Molecular Mimicry; Synthesis and Library Design; Protein Structure and Modeling; Structure-Based Design; Cheminformatics; Ligand-Based Design; QSAR and Chemometrics; Molecular Basis of Drugs; Peptidomimetics; and General Topics. Molecular Conceptor, Version 1, was first released in December 2001 with 600 pages. Since then the software has grown, and in October 2010 Version 2.14 was released with more than 5,000 pages.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nephrology** Nephrology: Nephrology (from Greek nephros "kidney", combined with the suffix -logy, "the study of") is a specialty of adult internal medicine and pediatric medicine that concerns the study of the kidneys, specifically normal kidney function (renal physiology) and kidney disease (renal pathophysiology), the preservation of kidney health, and the treatment of kidney disease, from diet and medication to renal replacement therapy (dialysis and kidney transplantation). The word "renal" is an adjective meaning "relating to the kidneys", and its roots are French or late Latin. Some hold that "renal" and "nephro" should be replaced with "kidney" in scientific writing, as in "kidney medicine" (instead of nephrology) or "kidney replacement therapy", while other experts have advocated preserving the use of "renal" and "nephro" where appropriate, including in "nephrology" and "renal replacement therapy", respectively. Nephrology also studies systemic conditions that affect the kidneys, such as diabetes and autoimmune disease, and systemic diseases that occur as a result of kidney disease, such as renal osteodystrophy and hypertension. A physician who has undertaken additional training and become certified in nephrology is called a nephrologist. Nephrology: The term "nephrology" was first used in about 1960, from the French "néphrologie" proposed by Prof. Jean Hamburger in 1953, itself from the Greek νεφρός / nephrós (kidney). Before then, the specialty was usually referred to as "kidney medicine". Scope: Nephrology concerns the diagnosis and treatment of kidney diseases, including electrolyte disturbances and hypertension, and the care of those requiring renal replacement therapy, including dialysis and renal transplant patients. The word "dialysis" dates from the mid-19th century: via Latin from the Greek word "dialusis", from "dialuein" (split, separate), from "dia" (apart) and "luein" (set free). In other words, dialysis replaces the primary (excretory) function of the kidney: it separates (and removes) excess toxins and water from the blood, placing them in the urine. Many diseases affecting the kidney are systemic disorders not limited to the organ itself, and may require special treatment. Examples include acquired conditions such as systemic vasculitides (e.g. ANCA vasculitis) and autoimmune diseases (e.g. lupus), as well as congenital or genetic conditions such as polycystic kidney disease. Patients are referred to nephrology specialists after a urinalysis, for various reasons, such as acute kidney injury, chronic kidney disease, hematuria, proteinuria, kidney stones, hypertension, and disorders of acid/base or electrolytes. Nephrologist: A nephrologist is a physician who specializes in the care and treatment of kidney disease. Becoming a nephrologist requires additional training to gain expertise and advanced skills. Nephrologists may provide care to people without kidney problems and may work in general/internal medicine, transplant medicine, immunosuppression management, intensive care medicine, clinical pharmacology, perioperative medicine, or pediatric nephrology. Nephrologists may further sub-specialise in dialysis, kidney transplantation, home therapies (home dialysis), cancer-related kidney diseases (onco-nephrology), structural kidney diseases (uro-nephrology), procedural nephrology or other non-nephrology areas as described above.
Nephrologist: Procedures a nephrologist may perform include native kidney and transplant kidney biopsy, dialysis access insertion (temporary vascular access lines, tunnelled vascular access lines, peritoneal dialysis access lines), fistula management (angiographic or surgical fistulogram and plasty), and bone biopsy. Bone biopsies are now unusual. Training: India: To become a nephrologist in India, one has to complete an MBBS (5½ years), followed by an MD/DNB (3 years) in either medicine or paediatrics, followed by a DM/DNB (3 years) course in either nephrology or paediatric nephrology. Nephrologist: Australia and New Zealand Nephrology training in Australia and New Zealand typically includes completion of a medical degree (Bachelor of Medicine, Bachelor of Surgery: 4–6 years), internship (1 year), Basic Physician Training (3 years minimum), successful completion of the Royal Australasian College of Physicians written and clinical examinations, and Advanced Physician Training in Nephrology (3 years). The training pathway is overseen and accredited by the Royal Australasian College of Physicians, though the application process varies across states. Completion of a post-graduate degree (usually a PhD) in a nephrology research interest (3–4 years) is optional but increasingly common. Finally, many Australian and New Zealand nephrologists participate in career-long professional and personal development through bodies such as the Australian and New Zealand Society of Nephrology and the Transplant Society of Australia and New Zealand. Nephrologist: United Kingdom In the United Kingdom, nephrology (often called renal medicine) is a subspecialty of general medicine. A nephrologist has completed medical school, foundation year posts (FY1 and FY2) and core medical training (CMT), and has passed the Membership of the Royal College of Physicians (MRCP) examination before competing for a National Training Number (NTN) in renal medicine and entering specialty training (ST). The typical specialty training period (during which the doctor is called a registrar, or an ST) is five years and leads to a Certificate of Completion of Training (CCT) in both renal medicine and general (internal) medicine. In those five years, trainees usually rotate yearly between hospitals in a region (known as a deanery). They are then accepted onto the Specialist Register of the General Medical Council (GMC). Specialty trainees often interrupt their clinical training to obtain research degrees (MD/PhD). After achieving the CCT, the registrar (ST) may apply for a permanent post as Consultant in Renal Medicine. Some Consultants subsequently practise nephrology alone; others combine it with Intensive Care (ICU), General (Internal) Medicine or Acute Medicine. Nephrologist: United States Nephrology training can be accomplished through one of two routes. The first is an internal medicine pathway leading to an Internal Medicine/Nephrology specialty, sometimes known as "adult nephrology". The second pathway is through Pediatrics, leading to a specialty in Pediatric Nephrology. In the United States, after medical school adult nephrologists complete a three-year residency in internal medicine followed by a two-year (or longer) fellowship in nephrology. In contrast, a pediatric nephrologist completes a three-year pediatric residency after medical school, or a four-year Combined Internal Medicine and Pediatrics residency, followed by a three-year fellowship in Pediatric Nephrology.
Once training is satisfactorily completed, the physician is eligible to take the American Board of Internal Medicine (ABIM) or American Osteopathic Board of Internal Medicine (AOBIM) nephrology examination. Nephrologists must be certified by one of these boards; to qualify for the examination, the physician must fulfill the board's requirements for education and training in nephrology. A physician who passes the examination becomes a certified nephrology specialist. Typically, nephrologists also need two to three years of training in an ACGME- or AOA-accredited fellowship in nephrology. Nearly all programs train nephrologists in continuous renal replacement therapy; fewer than half in the United States train in the provision of plasmapheresis. Only pediatric-trained physicians are able to train in pediatric nephrology, while internal medicine (adult) trained physicians may enter general (adult) nephrology fellowships. Diagnosis: History and physical examination are central to the diagnostic workup in nephrology. The history typically includes the present illness, family history, general medical history, diet, medication use, drug use and occupation. The physical examination typically includes an assessment of volume state, blood pressure, heart, lungs, peripheral arteries, joints, abdomen and flank. A rash may be relevant too, especially as an indicator of autoimmune disease. Diagnosis: Examination of the urine (urinalysis) allows a direct assessment for possible kidney problems, which may be suggested by the appearance of blood in the urine (hematuria), protein in the urine (proteinuria), pus cells in the urine (pyuria) or cancer cells in the urine. A 24-hour urine collection was formerly used to quantify daily protein loss (see proteinuria), urine output, creatinine clearance or electrolyte handling by the renal tubules. It is now more common to measure protein loss from a small random sample of urine. Diagnosis: Basic blood tests can be used to check the concentration of hemoglobin, white cells, platelets, sodium, potassium, chloride, bicarbonate, urea, creatinine, albumin, calcium, magnesium, phosphate, alkaline phosphatase and parathyroid hormone (PTH) in the blood. All of these may be affected by kidney problems. The serum creatinine concentration is the most important blood test, as it is used to estimate kidney function, expressed as the creatinine clearance or estimated glomerular filtration rate (GFR). Diagnosis: Patients with long-term kidney disease are advised to keep an up-to-date list of their medications and their latest blood test results, especially the blood creatinine level. In the United Kingdom, blood tests can be monitored online by the patient through a website called RenalPatientView. More specialized tests can be ordered to discover or link certain systemic diseases to kidney failure, such as infections (hepatitis B, hepatitis C), autoimmune conditions (systemic lupus erythematosus, ANCA vasculitis), paraproteinemias (amyloidosis, multiple myeloma) and metabolic diseases (diabetes, cystinosis). Structural abnormalities of the kidneys are identified with imaging tests. These may include medical ultrasonography (ultrasound), computed axial tomography (CT), scintigraphy (nuclear medicine), angiography or magnetic resonance imaging (MRI). Diagnosis: In certain circumstances, less invasive testing may not provide a certain diagnosis. Where a definitive diagnosis is required, a biopsy of the kidney (renal biopsy) may be performed.
This typically involves the insertion, under local anaesthetic and ultrasound or CT guidance, of a core biopsy needle into the kidney to obtain a small sample of kidney tissue. The kidney tissue is then examined under a microscope, allowing direct visualization of the changes occurring within the kidney. Additionally, the pathology findings may stage a problem affecting the kidney, allowing some degree of prognostication. In some circumstances, kidney biopsy will also be used to monitor response to treatment and identify early relapse. A transplant kidney biopsy may also be performed to look for rejection of the kidney. Treatment: Treatments in nephrology can include medications, blood products, surgical interventions (urology, vascular or surgical procedures), renal replacement therapy (dialysis or kidney transplantation) and plasma exchange. Kidney problems can have a significant impact on quality and length of life, and so psychological support, health education and advance care planning play key roles in nephrology. Treatment: Chronic kidney disease is typically managed with treatment of causative conditions (such as diabetes), avoidance of substances toxic to the kidneys (nephrotoxins like radiologic contrast and non-steroidal anti-inflammatory drugs), antihypertensives, diet and weight modification, and planning for end-stage kidney failure. Impaired kidney function has systemic effects on the body. An erythropoietin-stimulating agent (ESA) may be required to ensure adequate production of red blood cells, activated vitamin D supplements and phosphate binders may be required to counteract the effects of kidney failure on bone metabolism, and blood volume and electrolyte disturbances may need correction. Diuretics (such as furosemide) may be used to correct fluid overload, and alkalis (such as sodium bicarbonate) can be used to treat metabolic acidosis. Treatment: Autoimmune and inflammatory kidney disease, such as vasculitis or transplant rejection, may be treated with immunosuppression. Commonly used agents are prednisone, mycophenolate, cyclophosphamide, ciclosporin, tacrolimus, everolimus, thymoglobulin and sirolimus. Newer so-called "biologic drugs", or monoclonal antibodies, are also used in these conditions and include rituximab, basiliximab and eculizumab. Blood products, including intravenous immunoglobulin, and a process known as plasma exchange can also be employed. Treatment: When the kidneys are no longer able to sustain the demands of the body, end-stage kidney failure is said to have occurred. Without renal replacement therapy, death from kidney failure will eventually result. Dialysis is an artificial method of replacing some kidney function to prolong life. Renal transplantation replaces kidney function by inserting into the body a healthier kidney from an organ donor and inducing immunologic tolerance of that organ with immunosuppression. At present, renal transplantation is the most effective treatment for end-stage kidney failure, although its worldwide availability is limited by the lack of donor organs. Generally speaking, kidneys from living donors are "better" than those from deceased donors, as they last longer. Treatment: Most kidney conditions are chronic, and so long-term follow-up with a nephrologist is usually necessary. In the United Kingdom, care may be shared with the patient's primary care physician, called a General Practitioner (GP). Organizations: The world's first society of nephrology was the French "Société de Pathologie Rénale".
Its first president was Jean Hamburger, and its first meeting was in Paris in February 1949. In 1959, Hamburger also founded the "Société de Néphrologie" as a continuation of the older society. The UK's Renal Association, the second society of nephrologists, was founded in 1950; its first president was Arthur Osman, and it met for the first time in London on 30 March 1950. The Società di Nefrologia Italiana was founded in 1957 and was the first national society to incorporate the term nephrologia (nephrology) into its name. Organizations: The word "nephrology" appeared for the first time in a conference held on 1–4 September 1960, the "Premier Congrès International de Néphrologie" in Evian and Geneva, the first meeting of the International Society of Nephrology (ISN). The first day (1 September 1960) was in Geneva and the next three (2–4 September 1960) were in Evian, France. The early history of the ISN is described by Robinson and Richet in 2005 and the later history by Barsoum in 2011. The ISN is the largest global society representing medical professionals engaged in advancing kidney care worldwide. Organizations: In the US, the National Kidney Foundation, founded in 1964, is a national organization representing patients and professionals who treat kidney diseases. Founded in 1966, the American Society of Nephrology (ASN) is the world's largest professional society devoted to the study of kidney disease. The American Nephrology Nurses' Association (ANNA), founded in 1969, promotes excellence in and appreciation of nephrology nursing to make a positive difference for patients with kidney disease. The American Association of Kidney Patients (AAKP) is a non-profit, patient-centric group focused on improving the health and well-being of CKD and dialysis patients. The National Renal Administrators Association (NRAA), founded in 1977, is a national organization that represents and supports independent and community-based dialysis providers. The American Kidney Fund directly provides financial support to patients in need, as well as participating in health education and prevention efforts. The American Society of Diagnostic and Interventional Nephrology (ASDIN) is the main organization of interventional nephrologists; other organizations, such as CIDA and VASA, deal with dialysis vascular access. The Renal Support Network (RSN) is a nonprofit, patient-focused, patient-run organization that provides non-medical services to those affected by chronic kidney disease (CKD). Organizations: In the United Kingdom, the UK National Kidney Federation and Kidney Care UK (previously known as the British Kidney Patient Association, BKPA) represent patients, while the Renal Association represents renal physicians and works closely with the National Service Framework for kidney disease. There is an international office in Brussels, Belgium.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Determination of equilibrium constants** Determination of equilibrium constants: Equilibrium constants are determined in order to quantify chemical equilibria. When an equilibrium constant K is expressed as a concentration quotient, $K = \frac{[\mathrm{S}]^\sigma [\mathrm{T}]^\tau \cdots}{[\mathrm{A}]^\alpha [\mathrm{B}]^\beta \cdots}$, it is implied that the activity quotient is constant. For this assumption to be valid, equilibrium constants must be determined in a medium of relatively high ionic strength. Where this is not possible, consideration should be given to possible activity variation. Determination of equilibrium constants: The equilibrium expression above is a function of the concentrations [A], [B] etc. of the chemical species in equilibrium. The equilibrium constant value can be determined if any one of these concentrations can be measured. The general procedure is that the concentration in question is measured for a series of solutions with known analytical concentrations of the reactants. Typically, a titration is performed with one or more reactants in the titration vessel and one or more reactants in the burette. Knowing the analytical concentrations of reactants initially in the reaction vessel and in the burette, all analytical concentrations can be derived as a function of the volume (or mass) of titrant added. Determination of equilibrium constants: The equilibrium constants may be derived by best-fitting of the experimental data with a chemical model of the equilibrium system. Experimental methods: There are four main experimental methods. For less commonly used methods, see Rossotti and Rossotti. In all cases the range can be extended by using the competition method. An example of the application of this method can be found in palladium(II) cyanide. Experimental methods: Potentiometric measurements A free concentration [A] or activity {A} of a species A is measured by means of an ion-selective electrode such as the glass electrode. If the electrode is calibrated using activity standards, it is assumed that the Nernst equation applies in the form $E = E^0 + \frac{RT}{nF}\ln\{\mathrm{A}\}$, where $E^0$ is the standard electrode potential. When buffer solutions of known pH are used for calibration, the meter reading will be a pH. Experimental methods: $\mathrm{pH} = \frac{nF\,(E^0 - E)}{2.303\,RT}$ At 298 K, 1 pH unit is approximately equal to 59 mV. When the electrode is calibrated with solutions of known concentration, by means of a strong acid–strong base titration, for example, a modified Nernst equation is assumed: $E = E^0 + s\,\log_{10}[\mathrm{A}]$, where s is an empirical slope factor. A solution of known hydrogen ion concentration may be prepared by standardization of a strong acid against borax. Constant-boiling hydrochloric acid may also be used as a primary standard for hydrogen ion concentration. Experimental methods: Range and limitations The most widely used electrode is the glass electrode, which is selective for the hydrogen ion. It is suitable for all acid–base equilibria. log10 β values between about 2 and 11 can be measured directly by potentiometric titration using a glass electrode. This enormous range of stability constant values (ca. 10² to 10¹¹) is possible because of the logarithmic response of the electrode. The limitations arise because the Nernst equation breaks down at very low or very high pH. Experimental methods: When a glass electrode is used to obtain the measurements on which the calculated equilibrium constants depend, the precision of the calculated parameters is limited by secondary effects such as variation of liquid junction potentials in the electrode. In practice it is virtually impossible to obtain a precision for log β better than ±0.001.
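The electrode calibration just described amounts to fitting, and then inverting, a straight line in the logarithm of the concentration. The sketch below illustrates only that arithmetic; the intercept E0, the slope s and the example reading are invented values, not data from the text, and in practice they would be obtained from a calibration titration.

```python
# Minimal sketch of using a calibrated glass electrode (invented numbers).
# E0 and s would come from a strong acid - strong base calibration titration.
E0 = 400.0   # mV, intercept of the modified Nernst equation E = E0 + s*log10[H+]
s = 59.2     # mV per decade of [H+]; the theoretical slope is ~59.16 mV at 298 K

def hydrogen_ion_concentration(E_mV):
    """Invert E = E0 + s*log10[H+] to recover the free hydrogen-ion concentration."""
    return 10.0 ** ((E_mV - E0) / s)

# A reading of 163.2 mV then corresponds to [H+] of about 1e-4 mol/L (p[H] ~ 4).
print(hydrogen_ion_concentration(163.2))
```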
Spectrophotometric measurements Absorbance: It is assumed that the Beer–Lambert law applies: $A = l \sum \varepsilon c$, where l is the optical path length, ε is a molar absorbance at unit path length and c is a concentration. More than one of the species may contribute to the absorbance. In principle absorbance may be measured at one wavelength only, but in present-day practice it is common to record complete spectra. Experimental methods: Range and limitations An upper limit on log10 β of 4 is usually quoted, corresponding to the precision of the measurements, but it also depends on how intense the effect is. Spectra of contributing species should be clearly distinct from each other. Fluorescence (luminescence) intensity: It is assumed that the scattered light intensity is a linear function of species' concentrations. Experimental methods: $I = \sum \varphi c$, where φ is a proportionality constant. Range and limitations The magnitude of the constant φ may be higher than the value of the molar extinction coefficient, ε, for a species; when this is so, the detection limit for that species will be lower. At high solute concentrations, fluorescence intensity becomes non-linear with respect to concentration due to self-absorption of the scattered radiation. NMR chemical shift measurements Chemical exchange is assumed to be rapid on the NMR time-scale. An observed chemical shift is the mole-fraction-weighted average of the shifts δ of the nuclei in the contributing species: $\bar{\delta} = \frac{\sum x_i \delta_i}{\sum x_i}$ Example: the pKa of the hydroxyl group in citric acid has been determined from 13C chemical shift data to be 14.4. Neither potentiometry nor ultraviolet–visible spectroscopy could be used for this determination. Range and limitations Limited precision of chemical shift measurements also puts an upper limit of about 4 on log10 β. The method is limited to diamagnetic systems, and 1H NMR cannot be used with solutions of compounds in 1H2O. Calorimetric measurements Simultaneous measurement of K and ΔH for 1:1 adducts is routinely carried out using isothermal titration calorimetry. Extension to more complex systems is limited by the availability of suitable software. Range and limitations Insufficient evidence is currently available. The competition method The competition method may be used when a stability constant value is too large to be determined by a direct method. It was first used by Schwarzenbach in the determination of the stability constants of complexes of EDTA with metal ions. For simplicity, consider the determination of the stability constant $K_{AB}$ of a binary complex, AB, of a reagent A with another reagent B, $K_{AB} = \frac{[\mathrm{AB}]}{[\mathrm{A}][\mathrm{B}]}$, where [X] represents the concentration, at equilibrium, of a species X in a solution of given composition. A ligand C is chosen which forms a weaker complex with A; its stability constant, $K_{AC}$, is small enough to be determined by a direct method. For example, in the case of EDTA complexes, A is a metal ion and C may be a polyamine such as diethylenetriamine. $K_{AC} = \frac{[\mathrm{AC}]}{[\mathrm{A}][\mathrm{C}]}$ The stability constant, K, for the competition reaction $\mathrm{AC} + \mathrm{B} \rightleftharpoons \mathrm{AB} + \mathrm{C}$ can be expressed as $K = \frac{[\mathrm{AB}][\mathrm{C}]}{[\mathrm{AC}][\mathrm{B}]}$ It follows that $K_{AB} = K \times K_{AC}$, so the value of the stability constant $K_{AB}$ may be derived from the experimentally determined values of K and $K_{AC}$.
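Numerically, the competition method is just addition of logarithmic constants, as in the toy calculation below. The values shown are invented for illustration and do not come from the text.

```python
# Hypothetical example values (not taken from the text):
log_K_AC = 10.7    # constant of the weaker complex AC, measurable directly
log_K_comp = 7.6   # constant K of the competition reaction AC + B <=> AB + C

# Since K_AB = K * K_AC, the logarithms simply add:
log_K_AB = log_K_comp + log_K_AC
print(f"log10 K_AB = {log_K_AB:.1f}")   # 18.3, far beyond the direct potentiometric range
```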
Computational methods: It is assumed that the collected experimental data comprise a set of data points. At each ith data point, the analytical concentrations of the reactants, $T_A(i)$, $T_B(i)$ etc., are known along with a measured quantity, $y_i$, that depends on one or more of these analytical concentrations. A general computational procedure has four main components: definition of a chemical model of the equilibria; calculation of the concentrations of all the chemical species in each solution; refinement of the equilibrium constants; and model selection. The value of the equilibrium constant for the formation of a 1:1 complex, such as a host–guest species, may be calculated with a dedicated spreadsheet application, Bindfit: in this case step 2 can be performed with a non-iterative procedure and the pre-programmed routine Solver can be used for step 3. Computational methods: The chemical model The chemical model consists of a set of chemical species present in solution, both the reactants added to the reaction mixture and the complex species formed from them. Denoting the reactants by A, B, ..., each complex species is specified by the stoichiometric coefficients that relate the particular combination of reactants forming it: $p\,\mathrm{A} + q\,\mathrm{B} + \cdots \rightleftharpoons \mathrm{A}_p\mathrm{B}_q\cdots$, $\beta_{pq\cdots} = \frac{[\mathrm{A}_p\mathrm{B}_q\cdots]}{[\mathrm{A}]^p[\mathrm{B}]^q\cdots}$ When using general-purpose computer programs, it is usual to use cumulative association constants, as shown above. Electrical charges are not shown in general expressions such as this and are often omitted from specific expressions, for simplicity of notation. In fact, electrical charges have no bearing on the equilibrium processes other than that there is a requirement for overall electrical neutrality in all systems. Computational methods: With aqueous solutions the concentrations of proton (hydronium ion) and hydroxide ion are constrained by the self-dissociation of water, $\mathrm{H_2O} \rightleftharpoons \mathrm{H^+} + \mathrm{OH^-}$, $K_W' = \frac{[\mathrm{H^+}][\mathrm{OH^-}]}{[\mathrm{H_2O}]}$ With dilute solutions the concentration of water is assumed constant, so the equilibrium expression is written in the form of the ionic product of water, $K_W = [\mathrm{H^+}][\mathrm{OH^-}]$. Computational methods: When both H+ and OH− must be considered as reactants, one of them is eliminated from the model by specifying that its concentration be derived from the concentration of the other. Usually the concentration of the hydroxide ion is given by $[\mathrm{OH^-}] = \frac{K_W}{[\mathrm{H^+}]}$, in which case the equilibrium constant for the formation of hydroxide has a stoichiometric coefficient of −1 with regard to the proton and zero for the other reactants. This has important implications for all protonation equilibria in aqueous solution and for hydrolysis constants in particular. Computational methods: It is quite usual to omit from the model those species whose concentrations are considered negligible. For example, it is usually assumed that there is no interaction between the reactants and/or complexes and the electrolyte used to maintain constant ionic strength or the buffer used to maintain constant pH. These assumptions may or may not be justified. Also, it is implicitly assumed that there are no other complex species present. When complexes are wrongly ignored a systematic error is introduced into the calculations. Computational methods: Equilibrium constant values are usually estimated initially by reference to data sources. Speciation calculations A speciation calculation is one in which the concentrations of all the species in an equilibrium system are calculated, knowing the analytical concentrations, $T_A$, $T_B$ etc., of the reactants A, B etc. This means solving a set of nonlinear mass-balance equations, $T_A = [\mathrm{A}] + \sum_1^{nk} p\,\beta_{pq\cdots}[\mathrm{A}]^p[\mathrm{B}]^q\cdots$, $T_B = [\mathrm{B}] + \sum_1^{nk} q\,\beta_{pq\cdots}[\mathrm{A}]^p[\mathrm{B}]^q\cdots$, etc.,
for the free concentrations [A], [B] etc. When the pH (or equivalent e.m.f., E) is measured, the free concentration of hydrogen ions, [H], is obtained from the measured value as $10^{-\mathrm{pH}}$ or as $10^{E\,nF/(2.303\,RT)}$, and only the free concentrations of the other reactants are calculated. The concentrations of the complexes are derived from the free concentrations via the chemical model. Computational methods: Some authors include the free reactant terms in the sums by declaring identity (unit) β constants, for which the stoichiometric coefficient is 1 for the reactant concerned and zero for all other reactants. For example, with 2 reagents, the mass-balance equations assume the simpler form $T_A = \sum_0^{nk} p\,\beta_{pq}[\mathrm{A}]^p[\mathrm{B}]^q$, $T_B = \sum_0^{nk} q\,\beta_{pq}[\mathrm{A}]^p[\mathrm{B}]^q$, with $\beta_{10} = \beta_{01} = 1$. In this manner, all chemical species, including the free reactants, are treated in the same way, having been formed from the combination of reactants that is specified by the stoichiometric coefficients. Computational methods: In a titration system the analytical concentrations of the reactants at each titration point are obtained from the initial conditions and the burette concentrations and volumes. The analytical (total) concentration of a reactant R at the ith titration point is given by $T_R = \frac{R_0 + v_i[\mathrm{R}]}{v_0 + v_i}$, where $R_0$ is the initial amount of R in the titration vessel, $v_0$ is the initial volume, [R] is the concentration of R in the burette and $v_i$ is the volume added. The burette concentration of a reactant not present in the burette is taken to be zero. Computational methods: In general, solving these nonlinear equations presents a formidable challenge because of the huge range over which the free concentrations may vary. At the beginning, values for the free concentrations must be estimated. Then, these values are refined, usually by means of Newton–Raphson iterations. The logarithms of the free concentrations may be refined rather than the free concentrations themselves; refining the logarithms has the added advantage of automatically imposing a non-negativity constraint on the free concentrations. Once the free reactant concentrations have been calculated, the concentrations of the complexes are derived from them and the equilibrium constants. Computational methods: Note that the free reactant concentrations can be regarded as implicit parameters in the equilibrium constant refinement process. In that context the values of the free concentrations are constrained by forcing the conditions of mass-balance to apply at all stages of the process. Computational methods: Equilibrium constant refinement The objective of the refinement process is to find equilibrium constant values that give the best fit to the experimental data. This is usually achieved by minimising an objective function, U, by the method of non-linear least-squares. First the residuals are defined as $r_i = y_i^{\mathrm{obs}} - y_i^{\mathrm{calc}}$. Then the most general objective function is given by $U = \sum_i \sum_j r_i W_{ij} r_j$ The matrix of weights, W, should ideally be the inverse of the variance–covariance matrix of the observations. It is rare for this to be known. However, when it is, the expectation value of U is one, which means that the data are fitted within experimental error. Most often only the diagonal elements are known, in which case the objective function simplifies to $U = \sum_i W_{ii} r_i^2$, with $W_{ij} = 0$ when j ≠ i. Unit weights, $W_{ii} = 1$, are often used but, in that case, the expectation value of U is the root mean square of the experimental errors.
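Before the minimisation itself is described, the speciation step above can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not one of the programs cited under Implementations below: the two-reagent model (complexes AB and AB2), the log β values and all names are invented for the example, and the Jacobian is formed numerically rather than analytically.

```python
import numpy as np

# Hypothetical two-reagent model: A + B <=> AB and A + 2B <=> AB2.
# The cumulative constants (log10 beta) are invented for illustration.
log_beta = {"AB": 4.0, "AB2": 7.0}
stoich = {"AB": (1, 1), "AB2": (1, 2)}      # (p, q) coefficients in A and B

def complex_concentrations(free):
    """Concentrations of the complexes from the free concentrations [A], [B]."""
    A, B = free
    return {name: 10.0 ** log_beta[name] * A ** p * B ** q
            for name, (p, q) in stoich.items()}

def mass_balance_residuals(free, totals):
    """Residuals of the mass-balance equations for T_A and T_B."""
    A, B = free
    c = complex_concentrations(free)
    TA = A + sum(p * c[name] for name, (p, _) in stoich.items())
    TB = B + sum(q * c[name] for name, (_, q) in stoich.items())
    return np.array([TA - totals[0], TB - totals[1]])

def solve_speciation(totals, tol=1e-10, max_iter=200):
    """Newton-Raphson on the logarithms of the free concentrations, which
    automatically keeps them positive, as noted in the text."""
    x = np.log(np.asarray(totals) / 10.0)   # crude starting estimates
    for _ in range(max_iter):
        r = mass_balance_residuals(np.exp(x), totals)
        if np.all(np.abs(r) < tol * np.asarray(totals)):
            break
        # Numerical Jacobian of the residuals with respect to ln[A], ln[B]
        J = np.empty((2, 2))
        for k in range(2):
            dx = np.zeros(2)
            dx[k] = 1e-7
            J[:, k] = (mass_balance_residuals(np.exp(x + dx), totals) - r) / 1e-7
        x -= np.linalg.solve(J, r)
    return np.exp(x)

free_A, free_B = solve_speciation(totals=(1e-3, 2e-3))
print(free_A, free_B, complex_concentrations((free_A, free_B)))
```

In the general-purpose programs this solve is repeated at every data point in every refinement cycle, with the equilibrium constants updated between cycles.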
Computational methods: The minimization may be performed using the Gauss–Newton method. Firstly, the objective function is linearised by approximating it as a first-order Taylor series expansion about an initial parameter set, p. Computational methods: $U = U_0 + \sum_i \frac{\partial U}{\partial p_i}\,\delta p_i$ The increments $\delta p_i$ are added to the corresponding initial parameters such that U is less than $U_0$. At the minimum the derivatives $\partial U/\partial p_i$, which are simply related to the elements of the Jacobian matrix, J, $J_{jk} = \frac{\partial y_j^{\mathrm{calc}}}{\partial p_k}$, where $p_k$ is the kth parameter of the refinement, are equal to zero. One or more equilibrium constants may be parameters of the refinement. However, the measured quantities (see above) represented by y are not expressed in terms of the equilibrium constants, but in terms of the species concentrations, which are implicit functions of these parameters. Therefore, the Jacobian elements must be obtained using implicit differentiation. Computational methods: The parameter increments δp are calculated by solving the normal equations, derived from the condition that ∂U/∂p = 0 at the minimum. Computational methods: $(J^T W J)\,\delta p = J^T W\,r$ The increments δp are added iteratively to the parameters, $p^{n+1} = p^n + \delta p$, where n is an iteration number. The species concentrations and $y^{\mathrm{calc}}$ values are recalculated at every data point. The iterations are continued until no significant reduction in U is achieved, that is, until a convergence criterion is satisfied. If, however, the updated parameters do not result in a decrease of the objective function, that is, if divergence occurs, the increment calculation must be modified. The simplest modification is to use a fraction, f, of the calculated increment, so-called shift-cutting. Computational methods: $p^{n+1} = p^n + f\,\delta p$ In this case, the direction of the shift vector, δp, is unchanged. With the more powerful Levenberg–Marquardt algorithm, on the other hand, the shift vector is rotated towards the direction of steepest descent by modifying the normal equations, $(J^T W J + \lambda I)\,\delta p = J^T W\,r$, where λ is the Marquardt parameter and I is an identity matrix. Other methods of handling divergence have been proposed. A particular issue arises with NMR and spectrophotometric data. For the latter, the observed quantity is absorbance, A, and the Beer–Lambert law can be written as $A_\lambda = l \sum (\varepsilon_{pq\cdots})_\lambda\, c_{pq\cdots}$ Computational methods: It can be seen that, assuming that the concentrations, c, are known, the absorbance, A, at a given wavelength, λ, and path length, l, is a linear function of the molar absorptivities, ε. With a 1 cm path length this can be written in matrix notation as $A = C\,\varepsilon$. There are two approaches to the calculation of the unknown molar absorptivities. (1) The ε values are considered parameters of the minimization and the Jacobian is constructed on that basis. However, the ε values themselves are calculated at each step of the refinement by linear least-squares, $\varepsilon = (C^T C)^{-1} C^T A$, using the refined values of the equilibrium constants to obtain the speciation. The matrix $(C^T C)^{-1} C^T$ is an example of a pseudo-inverse. Computational methods: Golub and Pereyra showed how the pseudo-inverse can be differentiated so that parameter increments for both molar absorptivities and equilibrium constants can be calculated by solving the normal equations. Computational methods: (2) The Beer–Lambert law is written as $A_\lambda = C\,\varepsilon_\lambda$ and the unknown molar absorptivities of all "coloured" species are found by using the non-iterative method of linear least-squares, one wavelength at a time.
The calculations are performed once every refinement cycle, using the stability constant values obtained at that refinement cycle to calculate the species concentration values in the matrix C. Parameter errors and correlation In the region close to the minimum of the objective function, U, the system approximates to a linear least-squares system, for which $p = (J^T W J)^{-1} J^T W\, y^{\mathrm{obs}}$ Therefore, the parameter values are (approximately) linear combinations of the observed data values, and the errors on the parameters, p, can be obtained by error propagation from the observations, $y^{\mathrm{obs}}$, using the linear formula. Let the variance–covariance matrix for the observations be denoted by $\Sigma^y$ and that of the parameters by $\Sigma^p$. Then $\Sigma^p = (J^T W J)^{-1} J^T W\, \Sigma^y\, W^T J\, (J^T W J)^{-1}$ When $W = (\Sigma^y)^{-1}$, this simplifies to $\Sigma^p = (J^T W J)^{-1}$. In most cases the errors on the observations are uncorrelated, so that $\Sigma^y$ is diagonal. Computational methods: If so, each weight should be the reciprocal of the variance of the corresponding observation. For example, in a potentiometric titration, the weight at a titration point, k, can be given by $W_k = \frac{1}{\sigma_E^2 + \left(\frac{\partial E}{\partial v}\right)_k^2 \sigma_v^2}$, where $\sigma_E$ is the error in electrode potential or pH, $(\partial E/\partial v)_k$ is the slope of the titration curve and $\sigma_v$ is the error on added volume. Computational methods: When unit weights are used ($W = I$, $p = (J^T J)^{-1} J^T y$) it is implied that the experimental errors are uncorrelated and all equal: $\Sigma^y = \sigma^2 I$, where $\sigma^2$ is known as the variance of an observation of unit weight, and I is an identity matrix. In this case $\sigma^2$ is approximated by $\sigma^2 = \frac{U}{n_d - n_p}$, where U is the minimum value of the objective function and $n_d$ and $n_p$ are the number of data and parameters, respectively. Computational methods: $\Sigma^p = \frac{U}{n_d - n_p}(J^T J)^{-1}$ In all cases, the variance of the parameter $p_i$ is given by $\Sigma^p_{ii}$ and the covariance between parameters $p_i$ and $p_j$ is given by $\Sigma^p_{ij}$. Standard deviation is the square root of variance. These error estimates reflect only random errors in the measurements. The true uncertainty in the parameters is larger due to the presence of systematic errors, which, by definition, cannot be quantified. Computational methods: Derived constants When cumulative constants have been refined it is often useful to derive stepwise constants from them. The general procedure is to write down the defining expressions for all the constants involved and then to equate concentrations. For example, suppose that one wishes to derive the pKa for removing one proton from a tribasic acid, LH3, such as citric acid. The cumulative association constants are defined by $\beta_{11} = \frac{[\mathrm{LH}^{2-}]}{[\mathrm{L}^{3-}][\mathrm{H}^+]}$, $\beta_{12} = \frac{[\mathrm{LH_2}^{-}]}{[\mathrm{L}^{3-}][\mathrm{H}^+]^2}$ and $\beta_{13} = \frac{[\mathrm{LH_3}]}{[\mathrm{L}^{3-}][\mathrm{H}^+]^3}$. The stepwise association constant for the formation of LH3 is given by $K = \frac{[\mathrm{LH_3}]}{[\mathrm{LH_2}^{-}][\mathrm{H}^+]}$ Substituting the expressions for the concentrations of LH3 and LH2− into this equation gives $K = \frac{\beta_{13}[\mathrm{L}^{3-}][\mathrm{H}^+]^3}{\beta_{12}[\mathrm{L}^{3-}][\mathrm{H}^+]^2\,[\mathrm{H}^+]} = \frac{\beta_{13}}{\beta_{12}}$, and since pKa = −log10(1/K), its value is given by $\mathrm{p}K_{a1} = \log_{10}\beta_{13} - \log_{10}\beta_{12}$; similarly, $\mathrm{p}K_{a2} = \log_{10}\beta_{12} - \log_{10}\beta_{11}$ and $\mathrm{p}K_{a3} = \log_{10}\beta_{11}$. Note the reverse numbering for pK and log β. When calculating the error on the stepwise constant, the fact that the cumulative constants are correlated must be accounted for. By error propagation, for $K = \beta_{13}/\beta_{12}$, $\left(\frac{\sigma_K}{K}\right)^2 = \left(\frac{\sigma_{\beta_{13}}}{\beta_{13}}\right)^2 + \left(\frac{\sigma_{\beta_{12}}}{\beta_{12}}\right)^2 - 2\,\frac{\operatorname{cov}(\beta_{12},\beta_{13})}{\beta_{12}\,\beta_{13}}$ and $\sigma_{\log_{10}K} \approx 0.434\,\frac{\sigma_K}{K}$. Note that even though the observations may be uncorrelated, the parameters are always correlated. Model selection Once a refinement has been completed, the results should be checked to verify that the chosen model is acceptable. Generally speaking, a model is acceptable when the data are fitted within experimental error, but there is no single criterion to use to make the judgement. The following should be considered.
Computational methods: The objective function When the weights have been correctly derived from estimates of experimental error, the expectation value of $U/(n_d - n_p)$ is 1. It is therefore very useful to estimate experimental errors and derive some reasonable weights from them, as this is an absolute indicator of the goodness of fit. When unit weights are used, it is implied that all observations have the same variance, and $U/(n_d - n_p)$ is expected to be equal to that variance. Computational methods: Parameter errors One would want the errors on the stability constants to be roughly commensurate with experimental error. For example, with pH titration data, if pH is measured to 2 decimal places, the errors of log10 β should not be much larger than 0.01. In exploratory work where the nature of the species present is not known in advance, several different chemical models may be tested and compared. There will be models where the uncertainties in the best estimate of an equilibrium constant may be somewhat or even significantly larger than σpH, especially with those constants governing the formation of comparatively minor species, but the decision as to how large is acceptable remains subjective. The decision process as to whether or not to include comparatively uncertain equilibria in a model, and the comparison of competing models in general, can be made objective and has been outlined by Hamilton. Computational methods: Distribution of residuals At the minimum in U the system can be approximated to a linear one; the residuals in the case of unit weights are related to the observations by $r = y^{\mathrm{obs}} - J(J^T J)^{-1} J^T\, y^{\mathrm{obs}}$ The symmetric, idempotent matrix $J(J^T J)^{-1} J^T$ is known in the statistics literature as the hat matrix, H. Thus, $r = (I - H)\,y^{\mathrm{obs}}$ and $M^r = (I - H)\,M^y\,(I - H)$, where I is an identity matrix and $M^r$ and $M^y$ are the variance–covariance matrices of the residuals and observations, respectively. This shows that even though the observations may be uncorrelated, the residuals are always correlated. Computational methods: The diagram at the right shows the result of a refinement of the stability constants of Ni(Gly)+, Ni(Gly)2 and Ni(Gly)3− (where GlyH = glycine). The observed values are shown as blue diamonds and the species concentrations, as a percentage of the total nickel, are superimposed. The residuals are shown in the lower box. The residuals are not distributed as randomly as would be expected. This is due to the variation of liquid junction potentials and other effects at the glass/liquid interfaces. Those effects are very slow compared to the rate at which equilibrium is established. Computational methods: Physical constraints Some physical constraints are usually incorporated in the calculations. For example, all the concentrations of free reactants and species must have positive values, and association constants must have positive values. With spectrophotometric data the calculated molar absorptivity (or emissivity) values should all be positive. Most computer programs do not impose this constraint on the calculations. Computational methods: Chemical constraints When determining the stability constants of metal–ligand complexes, it is common practice to fix ligand protonation constants at values that have been determined using data obtained from metal-free solutions. Hydrolysis constants of metal ions are usually fixed at values which were obtained using ligand-free solutions.
When determining the stability constants for ternary complexes, MpAqBr, it is common practice to fix the values for the corresponding binary complexes, Mp′Aq′ and Mp′′Bq′′, at values which have been determined in separate experiments. Use of such constraints reduces the number of parameters to be determined, but may result in the calculated errors on refined stability constant values being under-estimated. Computational methods: Other models If the model is not acceptable, a variety of other models should be examined to find one that best fits the experimental data, within experimental error. The main difficulty is with the so-called minor species. These are species whose concentration is so low that the effect on the measured quantity is at or below the level of error in the experimental measurement. The constant for a minor species may prove impossible to determine if there is no means to increase the concentration of the species. Implementations: Some simple systems are amenable to spreadsheet calculations. A large number of general-purpose computer programs for equilibrium constant calculation have been published; bibliographies are available elsewhere. The most frequently used programs are: for potentiometric data, Hyperquad, BEST, PSEQUAD and ReactLab pH PRO; for spectrophotometric data, HypSpec, SQUAD, Specfit and ReactLab EQUILIBRIA; for NMR data, HypNMR and EQNMR; and for calorimetric data, HypΔH and Affinimeter. Commercial isothermal titration calorimeters are usually supplied with software with which an equilibrium constant and standard formation enthalpy for the formation of a 1:1 adduct can be obtained. Some software for handling more complex equilibria may also be supplied.
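As an illustration of the simplest case mentioned above, the kind of 1:1 host–guest fit performed by tools such as Bindfit or ITC vendor software can be sketched with a generic non-linear least-squares routine. This is only a sketch under invented assumptions: the concentrations, shifts, noise level and the "true" value K = 500 L/mol are synthetic, and SciPy's general-purpose curve_fit stands in for the dedicated packages named in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic 1:1 host-guest NMR titration at fixed host concentration.
# All numbers are invented for illustration.
H0 = 1e-3                               # total host concentration, mol/L
G0 = np.linspace(0.0, 1e-2, 12)         # total guest concentrations, mol/L

def hg_concentration(G0, K, H0=H0):
    """Equilibrium [HG] for H + G <=> HG, from the exact quadratic solution."""
    s = H0 + G0 + 1.0 / K
    return 0.5 * (s - np.sqrt(s * s - 4.0 * H0 * G0))

def delta_obs(G0, K, d_free, d_delta):
    """Fast-exchange NMR shift: mole-fraction-weighted average of free and bound host."""
    x_bound = hg_concentration(G0, K) / H0
    return d_free + d_delta * x_bound

# Generate noisy synthetic "observations" with K = 500 L/mol.
rng = np.random.default_rng(1)
y_obs = delta_obs(G0, 500.0, 3.20, 0.85) + rng.normal(0.0, 0.002, G0.size)

# Non-linear least-squares refinement of K and the two shift parameters.
p_opt, p_cov = curve_fit(delta_obs, G0, y_obs, p0=[100.0, 3.0, 1.0],
                         bounds=([1.0, 0.0, 0.0], [1e7, 10.0, 10.0]))
print(f"fitted K = {p_opt[0]:.0f} L/mol, sigma(K) = {np.sqrt(p_cov[0, 0]):.0f}")
```

The covariance matrix returned by the fit plays the role of Σp described in the Parameter errors section, and σ(K) is the square root of its leading diagonal element.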
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ANLN** ANLN: Anillin is a conserved protein implicated in cytoskeletal dynamics during cellularization and cytokinesis. The ANLN gene in humans and the scraps gene in Drosophila encode anillin. Anillin was first isolated in 1989 from embryos of Drosophila melanogaster, where it was identified as an F-actin-binding protein. Six years later, the anillin gene was cloned from cDNA originating from a Drosophila ovary. Staining with an anti-anillin (Antigen 8) antibody showed that anillin localizes to the nucleus during interphase and to the contractile ring during cytokinesis. These observations agree with further research that found anillin in high concentrations near the cleavage furrow, coinciding with RhoA, a key regulator of contractile ring formation. The name anillin originates from the Spanish word anillo, meaning "ring", a reference to the observed enrichment of anillins at the contractile ring during cytokinesis. Anillins are also enriched at other actomyosin rings, most notably those at the leading edge of the Drosophila embryo during cellularization. These actomyosin rings invaginate to separate all nuclei from one another in the syncytial blastoderm. Structure: Anillin has a unique multi-domain structure. At the N-terminus, there is an actin- and myosin-binding domain. At the C-terminus, there is a PH domain, which is conserved and essential for anillin functionality. The human anillin cDNA, located on chromosome 7, encodes a 1,125-amino-acid protein with a predicted molecular mass of 124 kD and a pI of 8.1. The mouse anillin gene is located on chromosome 9. There are also numerous anillin-like protein homologues found outside of metazoans. In Schizosaccharomyces pombe (fission yeast), there are Mid1p and Mid2p. These two anillin-like proteins do not have any overlap in their functions. Mid1p has been characterized as a key regulator of cytokinesis, responsible for arranging contractile ring assembly and positioning. Mid2p acts later in cytokinesis to organize septins during septation, the invagination of inner membranes, outer membranes, and the cell wall that occurs in order to separate daughter cells completely. Saccharomyces cerevisiae (budding yeast) also has two anillin-like proteins, Boi1p and Boi2p, which localize to the nucleus and the contractile ring at the bud neck, respectively. They are essential for cell growth and bud formation. Function: Anillin is required for the faithfulness of cytokinesis, and its F-actin-, myosin-, and septin-binding domains implicate it in actomyosin cytoskeletal organization. In agreement with this, anillin-mutant cells have disrupted contractile rings. Additionally, it is hypothesized that anillin couples the actomyosin cytoskeleton to microtubules by binding MgcRacGAP/CYK-4/RacGAP50C. Anillins have also been shown to organize the actomyosin cytoskeleton into the syncytial structures observed in Drosophila embryos and C. elegans gonads. ANI-1 and ANI-2 (proteins homologous to anillin) are essential for embryonic viability in both organisms. ANI-1 is required for cortical ruffling, pseudocleavage, and all contractile events that occur in embryos prior to mitosis; it is also crucial for segregation of polar bodies during meiosis. ANI-2 functions in the maintenance of the structure of the central core of the cytoplasm, the rachis, during oogenesis.
ANI-2 ensures that oocytes do not disconnect prematurely from the rachis; premature disconnection leads to the generation of embryos of varying sizes. In vitro experiments suggest that anillin drives myosin-independent actin contractility. Binding Partners: Actin Anillin specifically binds F-actin, rather than G-actin, and binding of F-actin by anillin only occurs during cell division. Anillin also bundles actin filaments together and drives their relative sliding. This contractile behavior is independent of myosin and ATP and may couple with actin filament disassembly. Amino acids 258-340 are sufficient and necessary for F-actin binding in Drosophila, but amino acids 246-371 are necessary to bundle actin filaments. The ability of anillin to bind and bundle actin is conserved across many species. It is hypothesized that by regulating actin bundling, anillin increases the efficiency of actomyosin contractility during cell division. Both anillin and F-actin are found in contractile structures. They are recruited independently to the contractile ring, but F-actin increases the efficiency of anillin targeting. Anillin may also be involved in promoting the polymerization of F-actin by stabilizing the formin mDia2 in an active form. Binding Partners: Myosin Anillin interacts directly with non-muscle myosin II and interacts indirectly with myosin via F-actin. Residues 142-254 (near the N-terminus) are essential for anillin binding myosin in Xenopus. The interaction of anillin and myosin is also dependent on phosphorylation of the myosin light chain. The interaction of myosin and anillin does not seem to serve in recruitment, but rather in the organization of myosin. In Drosophila, anillin is necessary to organize myosin into rings in the cellularization front. Depletion of anillin in Drosophila and humans leads to changes in the spatial and temporal stability of myosin during cytokinesis. In C. elegans, ANI-1 organizes myosin into foci during cytokinesis and the establishment of polarity, whereas ANI-2 is required for the maintenance of the myosin-rich contractile lining of oogenic gonads. Binding Partners: Septins Septin localization during cytokinesis and cellularization is dependent on its association with anillin. The direct interaction between anillin and septins was first shown by the interaction seen between Xenopus anillin and a minimal reconstituted heterooligomer of human septins 2, 6, and 7. The ability of anillin to bind septins depends on the C-terminal domain, which contains a terminal PH domain and an upstream sequence known as the "Anillin Homology" (AH) domain. Binding Partners: Rho The AH domain of human anillin is essential for its interaction with RhoA. Depletion of RhoA halts contractile ring assembly and ingression, whereas anillin depletion leads to a less severe phenotype in which the contractile ring forms and ingresses partially. Depletion of anillin in Drosophila spermatocytes greatly reduces the localization of Rho and F-actin to equatorial regions. Ect2: Anillin interacts with Ect2, further supporting the idea that anillin stabilizes RhoA localization, since Ect2 is an activator of RhoA. The interaction between anillin and Ect2 occurs independently of RhoA; it is essential for the GEF activity of Ect2 and requires the AH domain of anillin and the PH domain of Ect2. Cyk-4: Drosophila anillin interacts with Cyk-4, a central spindle protein, indicating that anillin may have a role in determining the division plane during cytokinesis.
In anillin-depleted larval cells, the central spindle does not extend to the cortex, and human anillin-depleted cells show improperly positioned and distorted central spindles. Binding Partners: Microtubules Anillin was first isolated from Drosophila by harnessing its interactions with both F-actin and microtubules. Furthermore, anillin-rich structures that form after Latrunculin A treatment of Drosophila cells localize to the plus-ends of microtubules. The interaction between anillin and microtubules suggests that anillin may serve as a signaling factor that relays the position of the mitotic spindle to the cortex to ensure appropriate contractile ring formation during cytokinesis. Regulation: Anillins in metazoans are heavily phosphorylated; however, the kinases responsible for the phosphorylation are currently unknown. In humans and Drosophila, anillins are recruited to the equatorial cortex in a RhoA-dependent manner. This recruitment is independent of other cytoskeletal Rho targets such as myosin, F-actin, and Rho-kinase. It has been observed that anillin proteolysis is triggered after mitotic exit by the Anaphase Promoting Complex (APC). Regulation: Most anillins are sequestered in the nucleus during interphase, but there are exceptions – Drosophila anillins in the early embryo, C. elegans ANI-1 in early embryos, C. elegans ANI-2 in oogenic gonads, and Mid2p in fission yeast. The anillins that are not sequestered during interphase suggest that anillins may also regulate cytoskeletal dynamics outside the contractile ring during cytokinesis. Role in Diseases: Anillin is critical for cell division and therefore for development and homeostasis in metazoans. In recent years, the expression level of anillin has been shown to correlate with the metastatic potential of human tumours. In colorectal cancer, expression levels of anillin are higher in tumours, and when anillin was over-expressed in HT29 cells, a classical colorectal cancer cell line, the cells showed faster replication kinetics due to the lengthening of the G2/M phase. Increasing the expression of anillin also led to greater invasiveness and migration in numerous colorectal cancer cell lines. The hypothesis from such observations is that anillin promotes EMT and cell migration through cytoskeletal remodeling, leading to enhanced proliferation, invasion, and mobility of tumour cells.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Instant pudding** Instant pudding: Instant pudding is an instant food product that is manufactured in powder form and used to create puddings and pie filling. It is produced using sugar, flavoring agents and thickeners as primary ingredients, and can be used in some baked goods. Manufacturing: Many flavors of instant pudding are produced; sugar, a flavoring agent, and thickeners are the primary ingredients, and instant chocolate pudding mix is manufactured using cocoa. A key ingredient in instant pudding is gelatinized starch, a dried instant starch that readily absorbs liquids, which causes the pudding to gel when mixed with milk. Additional ingredients sometimes used as thickeners include gums that are soluble in cold water, such as carrageenans and alginates. Phosphate salts are sometimes used, which contribute to the gelling of the finished product. Some Jell-O brand instant puddings are vegan, such as the vanilla, lemon, banana crème, and pistachio flavors. Manufacturers include Kraft Foods and Jel Sert, whose products are marketed under the Jell-O and Royal brand names, respectively. Manufacturing: Nutrition information One serving (one-quarter of a box) of dry Jell-O chocolate-flavored instant pudding contains 110 calories, 430 mg sodium, 8 g carbohydrate, 18 g sugars, and 1 g of dietary fiber. It also contains 4% of the daily Recommended Dietary Allowance of iron. Instant pudding mixes are produced in non-fat and sugar-free varieties. Preparation: Instant pudding is typically prepared by combining the powdered mix with milk and mixing the ingredients. Puddings may be cooled in a refrigerator before serving. Uses: In addition to being eaten as-is and used as a pie filling, instant pudding can be used in baked goods such as cakes and cookies. Instant pudding added to cake mix can result in a denser and moister cake compared to cakes prepared without it, but it can also cause a cake to fall or shrink more as it cools than a cake prepared without the pudding would; using a small amount of instant pudding lessens this shrinkage compared to using a whole box. Cookies prepared using instant pudding may be moister than those made without it.
**Math Mysteries** Math Mysteries: Math Mysteries is a collection of five math-related educational video games for the Windows and Macintosh platforms, developed and published by Tom Snyder Productions. The games were designed to fit the NCTM standards at their time of development. The series consists of Math Mysteries: Measurements, Math Mysteries: Whole Numbers, Math Mysteries: Fractions, Math Mysteries: Advanced Whole Numbers and Math Mysteries: Advanced Fractions. Development: Educational goals The series focuses on aiding students who struggle with mathematical problems. Products come with two CDs. One is the Whole Class CD, which allows teachers to configure specific skills for their students. Students learn to understand problems, collect vital information, work in groups and find the answers to math problems. The other is the Mystery CD, which allows students to explore independently and reinforce the skills they learned in class.
**Recruitment advertising** Recruitment advertising: Recruitment advertising, also known as Recruitment communications and Recruitment agency, includes all communications used by an organization to attract talent to work within it. Recruitment advertisements may be the first impression of a company for many job seekers. In turn, the strength of employer branding in job postings can directly impact interest in job openings. Recruitment advertising: Recruitment advertisements typically have a uniform layout per HRXML standards and may contain the following elements: the job title heading and location an explanatory paragraph describing the company, including the employer branding a job description entry qualifications the remuneration package (not always provided by the employer) further details and from where application forms may be soughtWhen faced with hiring many roles, corporate employers have many channels and options to choose from. The employer may: Deploy job distribution efforts to free and or paid sources Increase promotion of the employer brand Deploy search engine optimization (SEO) efforts for employer career sites and jobs Increase social media outreach Retain a search firm Partner with a contingency search firm Retain a recruitment process outsourcing organization Use a candidate fulfillment service Retain a recruitment advertising agency Retain a specialist interactive recruitment advertising agency Leverage old media to advertise their openings (print, radio and television) Leverage job boards Leverage new media Invest in additional internal resourcesEach of these channels has its benefits and many firms will use a mix of some or all of the above options. Recruitment advertising: The use of a specialist recruitment advertising agency enables organizations to receive professional advice on media, design and copywriting specifically related to the recruitment process. This may enable employer's advertisements to stand out in relevant publications to build their employer brand. Employer advertisers are also now able to use microsites to post job content, allowing job postings to be more creative with minimal copy, although it is a common understanding by search engine optimization firms that detailed, relevant content is necessary for successful optimization efforts. Recruitment advertising has now developed into a specialty service where most leading organizations hire agencies for their expertise. Recruitment advertising: The methodologies for recruiting talent are evolving. For example, sites have been developed for freelancers to bid on advertised jobs. These sites are normally free to join, but the agency will take between 10% and 25% of applicants' earnings.
**Asymptotic distribution** Asymptotic distribution: In mathematics and statistics, an asymptotic distribution is a probability distribution that is in a sense the "limiting" distribution of a sequence of distributions. One of the main uses of the idea of an asymptotic distribution is in providing approximations to the cumulative distribution functions of statistical estimators. Definition: A sequence of distributions corresponds to a sequence of random variables $Z_i$ for $i = 1, 2, \ldots$. In the simplest case, an asymptotic distribution exists if the probability distribution of $Z_i$ converges to a probability distribution (the asymptotic distribution) as $i$ increases: see convergence in distribution. A special case of an asymptotic distribution is when the sequence of random variables is always zero, that is, $Z_i = 0$ as $i$ approaches infinity. Here the asymptotic distribution is a degenerate distribution, corresponding to the value zero. Definition: However, the most usual sense in which the term asymptotic distribution is used arises where the random variables $Z_i$ are modified by two sequences of non-random values. Thus if $Y_i = (Z_i - a_i)/b_i$ converges in distribution to a non-degenerate distribution for two sequences $\{a_i\}$ and $\{b_i\}$, then $Z_i$ is said to have that distribution as its asymptotic distribution. If the distribution function of the asymptotic distribution is $F$ then, for large $n$, the following approximations hold: $P\!\left(\tfrac{Z_n - a_n}{b_n} \le x\right) \approx F(x)$ and $P(Z_n \le z) \approx F\!\left(\tfrac{z - a_n}{b_n}\right)$. Definition: If an asymptotic distribution exists, it is not necessarily true that any one outcome of the sequence of random variables is a convergent sequence of numbers. It is the sequence of probability distributions that converges. Central limit theorem: Perhaps the most common distribution to arise as an asymptotic distribution is the normal distribution. In particular, the central limit theorem provides an example where the asymptotic distribution is the normal distribution. Central limit theorem: Central limit theorem Suppose $\{X_1, X_2, \ldots\}$ is a sequence of i.i.d. random variables with $E[X_i] = \mu$ and $\operatorname{Var}[X_i] = \sigma^2 < \infty$. Let $S_n$ be the average of $\{X_1, \ldots, X_n\}$. Then as $n$ approaches infinity, the random variables $\sqrt{n}(S_n - \mu)$ converge in distribution to a normal $N(0, \sigma^2)$. The central limit theorem gives only an asymptotic distribution. As an approximation for a finite number of observations, it provides a reasonable approximation only when close to the peak of the normal distribution; it requires a very large number of observations to stretch into the tails. Central limit theorem: Local asymptotic normality Local asymptotic normality is a generalization of the central limit theorem. It is a property of a sequence of statistical models, which allows this sequence to be asymptotically approximated by a normal location model, after a rescaling of the parameter. An important example when the local asymptotic normality holds is in the case of independent and identically distributed sampling from a regular parametric model; this is just the central limit theorem. Central limit theorem: Barndorff-Nielsen & Cox provide a direct definition of asymptotic normality.
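The convergence described here is easy to check numerically. The following is a minimal simulation sketch (not part of the original article; the Exponential(1) distribution, sample size and replication count are arbitrary choices for illustration): it standardizes sample averages as $\sqrt{n}(S_n - \mu)/\sigma$ and compares their empirical distribution with the limiting standard normal CDF.

```python
# Minimal illustration of the central limit theorem as an asymptotic distribution.
# Illustrative sketch only: the Exponential(1) distribution, n and reps are arbitrary.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0        # mean and standard deviation of Exponential(1)
n, reps = 1_000, 5_000      # sample size and number of replications

# Draw `reps` samples of size n and form the standardized averages sqrt(n)*(S_n - mu)/sigma.
samples = rng.exponential(scale=1.0, size=(reps, n))
z = np.sqrt(n) * (samples.mean(axis=1) - mu) / sigma

def std_normal_cdf(x: float) -> float:
    """CDF of the N(0, 1) limiting (asymptotic) distribution."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# The empirical distribution of z should be close to the normal CDF at every point.
for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"x = {x:+.1f}   empirical P(Z <= x) = {np.mean(z <= x):.3f}   F(x) = {std_normal_cdf(x):.3f}")
```

With these settings the empirical values typically agree with $F(x)$ to roughly two decimal places, which is the sense in which the normal distribution serves as the asymptotic distribution of the sample mean.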
**Crème de cassis** Crème de cassis: Crème de cassis (French pronunciation: [kʁɛm də kasis]) (also known as Cassis liqueur) is a sweet, dark red liqueur made from blackcurrants. Several cocktails are made with crème de cassis, including the popular wine cocktail, kir. It may also be served as an after-dinner liqueur or as a frappé. Ingredients: It is made from blackcurrants that are crushed and soaked in alcohol, with sugar subsequently added. The quality of crème de cassis depends upon the variety of fruit used, the content of the berries, and the production process. Origin and production: The modern version of the beverage first appeared in 1841, when it displaced "ratafia de cassis", which had been produced in prior centuries. Origin and production: While crème de cassis is a specialty of Burgundy, it is also made in Anjou, England, Luxembourg, Alberta, Quebec, Vermont and Tasmania. In 1979, Germany attempted to restrict its import on the grounds that its alcohol content was too low. The European Court of Justice found this to be a breach of trade, in Rewe-Zentral AG v Bundesmonopolverwaltung für Branntwein. In 2015, the new protected geographical indication (PGI) "Crème de Cassis de Bourgogne" was approved. Promoted by a syndicate of fruit producers and liqueur companies from Burgundy, this "Crème de Cassis de Bourgogne" guarantees the Burgundian origin and the minimum quantity of berries used in its production, essentially the variety Noir de Bourgogne. If the berries come specifically from Dijon, the capital of Burgundy, the label may say "Crème de Cassis de Dijon" instead. Sales: Nearly 16 million litres (4.2 million US gallons) of crème de cassis are produced annually in France. It is consumed mostly in France but is also exported. In popular culture: It is a favourite drink of the fictional detective Hercule Poirot.
**Biosynthesis of doxorubicin** Biosynthesis of doxorubicin: Doxorubicin (DXR) is a 14-hydroxylated version of daunorubicin, the immediate precursor of DXR in its biosynthetic pathway. Daunorubicin is more abundantly found as a natural product because it is produced by a number of different wild-type strains of Streptomyces. In contrast, only one known non-wild-type strain, Streptomyces peucetius subspecies caesius ATCC 27952, was initially found to be capable of producing the more widely used doxorubicin. This strain was created by Arcamone et al. in 1969 by mutating a strain producing daunorubicin, but not DXR, at least in detectable quantities. Subsequently, Hutchinson's group showed that under special environmental conditions, or by the introduction of genetic modifications, other strains of Streptomyces can produce doxorubicin. His group has also cloned many of the genes required for DXR production, although not all of them have been fully characterized. In 1996, Strohl's group discovered, isolated and characterized dox A, the gene encoding the enzyme that converts daunorubicin into DXR. By 1999, they produced recombinant Dox A, a cytochrome P450 oxidase, and found that it catalyzes multiple steps in DXR biosynthesis, including steps leading to daunorubicin. This was significant because it became clear that all daunorubicin-producing strains have the necessary genes to produce DXR, the much more therapeutically important of the two. Hutchinson's group went on to develop methods to improve the yield of DXR from the fermentation process used in its commercial production, not only by introducing Dox A-encoding plasmids, but also by introducing mutations to deactivate enzymes that shunt DXR precursors to less useful products, for example baumycin-like glycosides. Some triple mutants that also over-expressed Dox A were able to double the yield of DXR. This is of more than academic interest because at that time DXR cost about $1.37 million per kg and annual production in 1999 was 225 kg. More efficient production techniques have brought the price down to $1.1 million per kg for the non-liposomal formulation. Although DXR can be produced semi-synthetically from daunorubicin, the process involves electrophilic bromination and multiple steps and the yield is poor. Since daunorubicin is produced by fermentation, it would be ideal if the bacteria could complete DXR synthesis more effectively. Overview: The anthracycline skeleton of doxorubicin (DXR) is produced by a Type II polyketide synthase (PKS) in Streptomyces peucetius. First, a 21-carbon decaketide chain (Fig 1. (1)) is synthesized from a single 3-carbon propionyl group from propionyl-CoA, and nine 2-carbon units derived from nine sequential (iterative) decarboxylative condensations of malonyl-CoA. Each malonyl-CoA unit contributes a 2-carbon ketide unit to the growing polyketide chain. Each addition is catalyzed by the "minimal PKS" consisting of an acyl carrier protein (ACP), a ketosynthase (KS)/chain length factor (CLF) heterodimer and a malonyl-CoA:ACP acyltransferase (MAT) (refer to top of Figure 1). Overview: This process is very similar to fatty acid synthesis by fatty acid synthases and to Type I polyketide synthesis. But, in contrast to fatty acid synthesis, the keto groups of the growing polyketide chain are not modified during chain elongation and they are not usually fully reduced.
In contrast to Type I PKS systems, the synthetic enzymes (KS, CLF, ACP and AT) are not attached covalently to each other, and may not even remain associated during each step of the polyketide chain synthesis. Overview: After the 21-carbon decaketide chain of DXR is completed, successive modifications are made to eventually produce a tetracyclic anthracycline aglycone (without glycoside attached). The daunosamine amino sugar, activated by addition of thymidine diphosphate (TDP), is created in another series of reactions. It is joined to the anthracycline aglycone and further modifications are done to produce first daunorubicin and then DXR. Overview: There are at least 3 gene clusters important to DXR biosynthesis: the dps genes, which specify the enzymes required for the linear polyketide chain synthesis and its first cyclizations; the dnr cluster, which is responsible for the remaining modifications of the anthracycline structure; and the dnm genes, which are involved in synthesis of the amino sugar daunosamine. Additionally, there is a set of "self resistance" genes to reduce the toxic impact of the anthracycline on the producing organism. One mechanism is a membrane pump that causes efflux of the DXR out of the cell (drr loci). Since these complex molecules are only advantageous under specific conditions, and require a lot of energy to produce, their synthesis is tightly regulated. Polyketide Chain Synthesis: Doxorubicin is synthesized by a specialized polyketide synthase. Polyketide Chain Synthesis: The initial event in DXR synthesis is the selection of the propionyl-CoA starter unit and its decarboxylative addition to a two-carbon ketide unit, derived from malonyl-CoA, to produce the five-carbon β-ketovaleryl ACP. The five-carbon diketide is delivered by the ACP to the cysteine sulfhydryl group at the KS active site, by thioester exchange, and the ACP is released from the chain. The free ACP picks up another malonate group from malonyl-CoA, also by thioester exchange, with release of the CoA. The ACP brings the new malonate to the active site of the KS where it is decarboxylated, possibly with the help of the CLF subunit, and joined to produce a 7-carbon triketide, now anchored to the ACP (see top of Figure 1). Again the ACP hands the chain off to the KS subunit and the process is repeated iteratively until the decaketide is completed. Polyketide Chain Synthesis: In most Type II systems the initiating event is delivery by ACP of an acetate unit, derived from acetyl-CoA, to the active site of the ketosynthase (KS) subunit of the KS/CLF heterodimer. The default mode for Type II PKS systems is the incorporation of acetate as the primer unit, and that holds true for the DXR "minimal PKS". In other words, the action of KS/CLF/ACP (Dps A, B and G) from this system will not produce 21-carbon decaketides, but 20-carbon decaketides instead, because acetate is the “preferred” starter. The process of specifying propionate is not completely understood, but it is clear that it depends on an additional protein, Dps C, which may be acting as a ketosynthase or acyltransferase selective for propionyl-CoA, and possibly Dps D makes a contribution. A dedicated MAT has been found to be dispensable for polyketide production under in vitro conditions. The PKS may "borrow" the MAT from its own fatty acid synthase and this may be the primary way ACP receives its malonate group in DXR biosynthesis. Additionally, there is excellent evidence that "self-malonylation" is an inherent characteristic of Type II ACPs.
In summary, a given Type II PKS may provide its own MAT(s), it may borrow one from the FAS, or its ACP may "self-malonylate". Polyketide Chain Synthesis: It is unknown whether the same KS/CLF/ACP ternary complex chaperones the growth of a full-length polyketide chain through the entire catalytic cycle, or whether the ACP dissociates after each condensation reaction. A 2.0-Å resolution structure of the actinorhodin KS/CLF, which is very similar to the dps KS/CLF, shows polyketides being elongated inside an amphipathic tunnel formed at the interface of the KS and CLF subunits. The tunnel is about 17 Å long and one side has many charged amino acid residues which appear to stabilize the carbonyl groups of the chain, while the other side is hydrophobic. This structure explains why both subunits are necessary for chain elongation and how the reactive growing chain is protected from random spontaneous reactions until it is positioned properly for orderly cyclization. The structure also suggests a mechanism for chain length regulation. Amino acid side groups extend into the tunnel and act as "gates". A couple of particularly bulky residues may be impassable to the chain, causing termination. Modifications to tunnel residues based on this structure were able to alter the chain length of the final product. The final condensation causes the polyketide chain to "buckle", allowing an intramolecular attack by the C-12 methylene carbanion, generated by enzyme-catalyzed proton removal and stabilized by electrostatic interactions in the tunnel, on the C-7 carbonyl (see 3 in Figure 1). This tunnel-aided intramolecular aldol condensation provides the first cyclization while the chain is still in the tunnel. The same C-7/C-12 attack occurs in the biosynthesis of DXR, in a similar fashion. Conversion to 12-deoxyaklanonic acid: The 21-carbon decaketide is converted to 12-deoxyaklanonic acid (5), the first free, easily isolated intermediate in DXR biosynthesis, in three steps. These steps are catalyzed by the final three enzymes in the dps gene cluster and are considered part of the polyketide synthase. Conversion to 12-deoxyaklanonic acid: While the decaketide is still associated with the KS/CLF heterodimer, the 9-carbonyl group is reduced by Dps E, the 9-ketoreductase, using NADPH as the reducing agent/hydride donor. Dps F, the "1st ring cyclase"/aromatase, is very specific and is in the family of C-7/C-12 cyclases that require prior C-9 keto-reduction. These two reactions are thought to occur while the polyketide chain is still partially in the KS/CLF tunnel, and it is not known what finally cleaves the chain from its covalent link to the KS or ACP. If the Dps F cyclase is inactivated by mutations or gene deletions, the chain will cyclize spontaneously in random fashion. Thus, Dps F is thought to "chaperone" or help fold the polyketide to ensure non-random cyclization, a reaction that is energetically favorable and leads to subsequent dehydration and resultant aromatization. Next, Dps Y regioselectively promotes formation of the next two carbon-carbon bonds and then catalyzes dehydration leading to aromatization of one of the rings to give (5). Conversion to ε-rhodomycinone: The next reactions are catalyzed by enzymes originating from the dnr gene cluster. Dnr G, a C-12 oxygenase (see (5) for numbering), introduces a keto group using molecular oxygen.
It is an "anthrone-type oxygenase", also called a quinone-forming monooxygenase, many of which are important 'tailoring enzymes' in the biosynthesis of several types of aromatic polyketide antibiotics. They have no cofactors: no flavins, metals or energy sources. Their mechanism is poorly understood but may involve a "protein radical". Aklanonic acid (6), a quinone, is the product. Dnr C, aklanonic acid O-methyltransferase, methylates the carboxylic acid end of the molecule, forming an ester, using S-adenosyl methionine (SAM) as the cofactor/methyl group donor. The product is aklanonic acid methyl ester (7). The methyl group is removed later, but it serves to activate the adjacent methylene bridge, facilitating its attack on the terminal carbonyl group, a reaction catalyzed by DnrD. Conversion to ε-rhodomycinone: Dnr D, the fourth ring cyclase (AAME cyclase), catalyzes an intramolecular aldol addition reaction. No cofactors are required and neither aromatization nor dehydration occurs. A simple base-catalyzed mechanism is proposed. The product is aklaviketone (8). Dnr H, aklaviketone reductase, stereospecifically reduces the 17-keto group of the new fourth ring to a 17-OH group to give aklavinone (9). This introduces a new chiral center, and NADPH is a cofactor. Dnr F, aklavinone-11-hydroxylase, is a FAD monooxygenase that uses NADPH to activate molecular oxygen for subsequent hydroxylation. ε-rhodomycinone (10) is the product. Conversion to doxorubicin: Dnr S, daunosamine glycosyltransferase, catalyzes the addition of the TDP-activated glycoside, L-daunosamine-TDP, to ε-rhodomycinone to give rhodomycin D (Figure 2). The release of TDP drives the reaction forward. The enzyme has sequence similarity to glycosyltransferases of the other "unusual sugars" added to Type II PKS aromatic products. Dnr P, rhodomycin D methylesterase, removes the methyl group added previously by DnrC. It initially served to activate the adjacent methylene bridge, and after that it prevented its carboxyl group from leaving the C-10 carbon (see Fig 2). Had the carboxyl group not been esterified prior to the fourth ring cyclization, its departure as CO2 would have been favored by the formation of a bicyclic aromatic system. After C-7 reduction and glycosylation, the C-8 methylene bridge is no longer activated for deprotonation, thereby making aromatization less likely. Note that the non-isolable intermediate, with numbering, is the 3rd molecule in Figure 2. The numbering system is very odd and a vestige of early nomenclature. The decarboxylation of the intermediate occurs spontaneously, or by the influence of Dnr P, giving 13-deoxycarminomycin. Conversion to doxorubicin: A crystal structure, with bound products, of aclacinomycin methylesterase, an enzyme with 53% sequence homology to Dnr P, from Streptomyces purpurascens, has been solved. It is able to catalyze the same reaction and uses a classic Ser-His-Asp catalytic triad, with serine acting as the nucleophile and Gly-Met providing stabilization of the transition state by forming an "oxyanion hole". The active site amino acids are almost entirely the same as in Dnr P, and the mechanism is almost certainly identical. Conversion to doxorubicin: Although Dox A is shown next in the biosynthetic scheme (Figure 2), Dnr K, carminomycin 4-O-methyltransferase, is able to O-methylate the 4-hydroxyl group of any of the glycosides in Figure 2. A 2.35 Å resolution crystal structure of the enzyme with bound products has recently been solved.
The orientation of the products is consistent with an SN2 mechanism of methyl transfer. Site-directed mutagenesis of the potential acid/base residues in the active site did not affect catalysis, leading to the conclusion that Dnr K most likely acts as an entropic enzyme, in that rate enhancement is mainly due to orientational and proximity effects. This is in contrast to most other O-methyltransferases, where acid/base catalysis has been demonstrated to be an essential contribution to rate enhancement. Conversion to doxorubicin: Dox A catalyzes three successive oxidations in Streptomyces peucetius. Deficient DXR production is not primarily due to low levels of, or malfunctioning, Dox A, but to the many products diverted away from the pathway shown in Figure 2. Each of the glycosides is a potential target of shunt enzymes, not shown, some of which are products of the dnr gene cluster. Mutations of these enzymes do significantly boost DXR production. In addition, Dox A has a very low kcat/Km value for C-14 oxidation (130/M) compared to C-13 oxidation (up to 22,000/M for some substrates). Genetic manipulation to overexpress Dox A has also increased yields, particularly if the genes for the shunt enzymes are inactivated simultaneously. Conversion to doxorubicin: Dox A is a cytochrome P-450 monooxygenase that has broad substrate specificity, catalyzing anthracycline hydroxylation at C-13 and C-14 (Figure 2). The enzyme has an absolute requirement for molecular oxygen and NADPH. Initially, two successive oxidations are done at C-13, followed by a single oxidation of C-14 that converts daunorubicin to doxorubicin.
**Thetical grammar** Thetical grammar: Thetical grammar forms one of the two domains of discourse grammar, the other domain being sentence grammar. The building blocks of thetical grammar are theticals, that is, linguistic expressions which are interpolated in, or juxtaposed to, clauses or sentences but syntactically, semantically and, typically, prosodically independent from these structures. The two domains are associated with contrasting principles of designing texts: Whereas sentence grammar is essentially restricted to the structure of sentences in a propositional format, thetical grammar concerns the overall contours of discourse beyond the sentence, thereby being responsible for a higher level of discourse production. An example: The following example, taken from the Comprehensive Grammar of English, illustrates the main characteristics of thetical grammar. a. They considered Miss Hartley a very good teacher. An example: b. They considered Miss Hartley, a very good teacher.The phrase a very good teacher is a complement of the sentence in (a.), that is, it is part of the syntax of the sentence; in the framework of discourse grammar, it is therefore classified as belonging to sentence grammar. In (b.), by contrast, the same phrase (but printed in italics) is not part of the syntax; it is syntactically independent from the rest of the sentence, commonly classified as a non-restrictive appositive. And it is also different in other ways: Whereas in (a.) it is part of the prosody of the sentence, in (b) it is separated from the preceding clause by a tone unit boundary in spoken English and by a comma in written English. And third, there is also a difference in meaning: Whereas the meaning of a very good teacher in (a.) is determined by its syntactic function as a complement of the sentence, it is fairly independent from the sentence meaning in (b.); the former meaning has therefore been called restrictive and the latter non-restrictive. An example: The phrase a very good teacher in (b) is classified as belonging to thetical grammar, that is, as a thetical. Theticals are defined in the following way: They are syntactically unattached, they are typically set off prosodically from the rest of the utterance, their meaning is non-restrictive, they tend to be positionally mobile, and their internal structure is built on principles of sentence grammar but can be elliptic. Principles and concepts: Sentence grammar is organized in terms of propositional concepts and clauses and their combination. Thetical grammar, by contrast, concerns the linguistic discourse beyond the sentence, its functions relate to the situation of discourse, most of all to the organization of texts, speaker-hearer interaction, and attitudes of the speaker. The domain of thetical grammar includes but is not restricted to what in other works is referred to variously as parentheticals, syntactic non-clausal units, extra-clausal constituents, disjuncts, or supplements. Paradigm examples of theticals are formulae of social exchange (Good morning!, please), vocatives (Waiter!), interjections (ouch!, wow!), and discourse markers (if you will, you know, now, well), but theticals also include a virtually unlimited pool of other expressions that are produced spontaneously, like a very good teacher in the example of (b) above. Principles and concepts: While being separate in principle, thetical grammar interacts in multiple ways with sentence grammar in shaping linguistic discourse. 
The main way of interaction is via cooptation, an operation whereby chunks of sentence grammar such as clauses, phrases, words, or any other units are deployed for use in thetical grammar.
**Small extension node** Small extension node: The small extension node (SEN) is part of a US military communication system known as Mobile Subscriber Equipment (MSE). A SEN is composed of two shelters, a switching shelter and a line-of-sight (LOS) radio terminal shelter. History: Prior to the advent of the satellite-based Joint Network Node (JNN), the United States Army used a system known as Mobile Subscriber Equipment (MSE) in order to provide tactical battlefield communications. MSE is a Line-Of-Sight (LOS) terrestrial-based communications system limited by terrain and distance. MSE is still in use in limited quantities. It was developed as a direct replacement for the multichannel communications telephone switching system used from the 1960s to the late 1980s. History: The MSE SEN's primary role is to provide tactical telephone and data network communications to the battlefield. Capabilities include integration with existing and backwards-compatible interfaces for older branch exchanges, as well as field radio integration. This integration allows the SEN to provide communications on the battlefield as well as in support of civilian disaster-relief communications. Description: A SEN switching shelter contains switching, multiplexing, and communications security (COMSEC) equipment for secure digital voice and data communications. A single switching shelter is mounted on the back of a HMMWV, power is provided by a 10 kW diesel generator, and the SEN is operated by a team of up to six soldiers. To provide communications for a Corps area, the Signal Battalion would deploy forty SENs, amongst Node Centers, Large Extension Nodes, and Radio Access Units. Description: The current switch is designated AN/TTC-48, with a suffix to identify each of the ten versions in operation - (V)1, (V)2, A(V)1, A(V)2, B(V)1, B(V)2, C(V)1, C(V)2, C(V)3, and C(V)4. The (V)1 provides 26 digital lines and 10 digital trunks and the (V)2 provides 41 digital lines and 13 digital trunks. Both versions interface at various levels with the MSE Area Communication Systems through cable, via line of sight or via tactical satellite terminal.
**Electro-diesel multiple unit** Electro-diesel multiple unit: An electro-diesel multiple unit (EDMU) or bi-mode multiple unit (BMU) is a form of a multiple unit that can be powered either by electric power picked up from the overhead lines or third rail (like an electric multiple unit – EMU) or by using an onboard diesel engine, driving an electric generator, which produces AC or DC electric power (like a diesel-electric multiple unit – DEMU). List of BMUs: Asia China Two variants of the China Railway CR200J-SG Fuxing (HXD1D-J and FXN3-J) high-speed train are electro-diesel (bi-mode) multiple units specifically designed for plateau operation. They are HXD1D-J manufactured by CRRC Zhuzhou Locomotive and FXN3-J manufactured by CRRC Dalian. Two variants are served on Sichuan–Tibet railway. Oceania Australia NSW TrainLink Regional Train Project is building 117 bi-mode carriages (1.5 kV DC) to replace its diesel fleet of interstate and regional XPT, Xplorer and Endeavour trainsets to be delivered by CAF from 2023. Europe France Bombardier has built dual-mode variants of its AGC series for the French operator SNCF; the electricity is collected by means of a pantograph. B 81500 – multiple unit trains using 1.5 kV DC catenary. In service since 2005. B 82500 – multiple unit trains using both 1.5 kV DC and 25 kV AC catenary. In service since October 2007. Alstom Règiolis (B 83500, B 84500, B 85000, B 85900). In service since April 2014 (B 83500 and B 84500). Italy BTR 813, first electro-diesel version of the Stadler Flirt, service in the Valle d'Aosta region since October 2019. Netherlands 18 bi-mode units of the Stadler WINK have entered service in 2021 with Arriva Netherlands. In addition to electric and diesel propulsion, these trains can also run on battery power. Norway 14 bi-modal variants of the Stadler Flirt trains, called Norske Tog Class 76 in Norway, held by the state owned Norske Tog and operated by the line operator SJ AB (as SJ Nord) entering service in 2021. This class is used on the regional train services around Trondheim. Poland Newag Impuls is offered in electro-diesel (hybrid) version. The first units were delivered to West Pomeranian Voivodeship and started regular revenue service in early 2021. Russia DT1 (ДТ1) Russian-gauge multiple unit. In service since 2009. United Kingdom Electro-diesel multiple units whose electricity source is 25 kV 50 Hz AC overhead line include: British Rail Class 800 – high-speed multiple unit for use on Great Western Railway and London North Eastern Railway inter-city services. In service since October 2017. British Rail Class 802 – high-speed multiple unit for use on Great Western Railway, Hull Trains and TransPennine Express inter-city services. In service since August 2018. British Rail Class 805 – high-speed multiple unit for use on West Coast Main Line Avanti West Coast services to Shrewsbury and the North Wales Coast Line, replacing Class 221s British Rail Class 768 – multiple unit converted from Class 319 (with external dual-system/voltage support), for use by Rail Operations Group on parcel services. British Rail Class 769 – multiple unit converted from Class 319 (with external dual-system/voltage support), for use on Great Western Railway, Northern and Transport for Wales regional services. Introduction from December 2019. British Rail Class 755 – multiple unit for use on Greater Anglia regional services. In service since July 2019 (755/4).
**Alopecia totalis** Alopecia totalis: Alopecia totalis is the loss of all hair on the head and face. Its causes are unclear, but believed to be autoimmune. Research suggests there may be a genetic component linked to developing alopecia totalis; the presence of DRB1*0401 and DQB1*0301, both of which are human leukocyte antigens (HLA), were found to be associated with long-standing alopecia totalis. Treatment: Methotrexate and corticosteroids are proposed treatments.Scalp cooling has specifically been used to prevent alopecia in docetaxel chemotherapy, although it has been found prophylactic in other regimens as well. Treatment effects may take time to resolve, with one study showing breast cancer survivors wearing wigs up to 2 years after chemotherapy.
**Montana Formation** Montana Formation: The Montana Formation is a geologic formation in Montana. It preserves fossils dating back to the Cretaceous period.
**Dutasteride/tamsulosin** Dutasteride/tamsulosin: Dutasteride/tamsulosin, sold under the brand name Jalyn among others, is a medication produced by GlaxoSmithKline for the treatment of adult male symptomatic benign prostatic hyperplasia (BPH). It is a combination of two previously existing medications: dutasteride, brand name Avodart, and tamsulosin, brand name Flomax. It contains 0.5 mg of dutasteride and 0.4 mg of tamsulosin hydrochloride.Jalyn was the result of the CombAT (Combination of Avodart and Tamsulosin) trial of 2008. It was approved by the U.S. Food and Drug Administration (FDA) on June 14, 2010. In June 2011, the FDA approved a label change to warn of "Increased Risk of High-grade Prostate Cancer" from Jalyn.
**Backchannel (blog)** Backchannel (blog): Backchannel is an online magazine that publishes in-depth stories on technology-related news. Numerous prominent journalists have been recruited to write for the site, including Steven Levy, Andrew Leonard, Susan P. Crawford, Virginia Heffernan, Doug Menuez, Peter Diamandis, Jessi Hempel, and many others. In addition, Backchannel has interviewed many notable figures, such as Demis Hassabis of Google DeepMind and Orrin Hatch of the Republican Party. Publication: Backchannel began as an in-house publication on the Medium website. In 2016, Backchannel was purchased by Condé Nast. In 2017, it was announced that Backchannel would be moving off Medium and be hosted by Wired, while remaining editorially independent.
**5-HT3 receptor** 5-HT3 receptor: The 5-HT3 receptor belongs to the Cys-loop superfamily of ligand-gated ion channels (LGICs) and therefore differs structurally and functionally from all other 5-HT receptors (5-hydroxytryptamine, or serotonin receptors) which are G protein-coupled receptors. This ion channel is cation-selective and mediates neuronal depolarization and excitation within the central and peripheral nervous systems.As with other ligand gated ion channels, the 5-HT3 receptor consists of five subunits arranged around a central ion conducting pore, which is permeable to sodium (Na), potassium (K), and calcium (Ca) ions. Binding of the neurotransmitter 5-hydroxytryptamine (serotonin) to the 5-HT3 receptor opens the channel, which, in turn, leads to an excitatory response in neurons. The rapidly activating, desensitizing, inward current is predominantly carried by sodium and potassium ions. 5-HT3 receptors have a negligible permeability to anions. They are most closely related by homology to the nicotinic acetylcholine receptor. Structure: The 5-HT3 receptor differs markedly in structure and mechanism from the other 5-HT receptor subtypes, which are all G-protein-coupled. A functional channel may be composed of five identical 5-HT3A subunits (homopentameric) or a mixture of 5-HT3A and one of the other four 5-HT3B, 5-HT3C, 5-HT3D, or 5-HT3E subunits (heteropentameric). It appears that only the 5-HT3A subunits form functional homopentameric channels. All other subunit subtypes must heteropentamerize with 5-HT3A subunits to form functional channels. Additionally, there has not currently been any pharmacological difference found between the heteromeric 5-HT3AC, 5-HT3AD, 5-HT3AE, and the homomeric 5-HT3A receptor. N-terminal glycosylation of receptor subunits is critical for subunit assembly and plasma membrane trafficking. The subunits surround a central ion channel in a pseudo-symmetric manner (Fig.1). Each subunit comprises an extracellular N-terminal domain which comprises the orthosteric ligand-binding site; a transmembrane domain consisting of four interconnected alpha helices (M1-M4), with the extracellular M2-M3 loop involved in the gating mechanism; a large cytoplasmic domain between M3 and M4 involved in receptor trafficking and regulation; and a short extracellular C-terminus (Fig.1). Whereas extracellular domain is the site of action of agonists and competitive antagonists, the transmembrane domain contains the central ion pore, receptor gate, and principle selectivity filter that allows ions to cross the cell membrane. Human and mouse genes: The genes encoding human 5-HT3 receptors are located on chromosomes 11 (HTR3A, HTR3B) and 3 (HTR3C, HTR3D, HTR3E), so it appears that they have arisen from gene duplications. The genes HTR3A and HTR3B encode the 5-HT3A and 5-HT3B subunits and HTR3C, HTR3D and HTR3E encode the 5-HT3C, 5-HT3D and 5-HT3E subunits. HTR3C and HTR3E do not seem to form functional homomeric channels, but when co-expressed with HTR3A they form heteromeric complex with decreased or increased 5-HT efficacies. The pathophysiological role for these additional subunits has yet to be identified.The human 5-HT3A receptor gene is similar in structure to the mouse gene which has 9 exons and is spread over ~13 kb. Four of its introns are exactly in the same position as the introns in the homologous α7-acetylcholine receptor gene, clearly showing their evolutionary relationship. Human and mouse genes: Expression. 
The 5-HT3C, 5-HT3D and 5-HT3E genes tend to show peripherally restricted pattern of expression, with high levels in the gut. In human duodenum and stomach, for example, 5-HT3C and 5-HT3E mRNA might be greater than for 5-HT3A and 5-HT3B. Polymorphism. In patients treated with chemotherapeutic drugs, certain polymorphism of the HTR3B gene could predict successful antiemetic treatment. This could indicate that the 5-HTR3B receptor subunit could be used as biomarker of antiemetic drug efficacy. Tissue distribution: The 5-HT3 receptor is expressed throughout the central and peripheral nervous systems and mediates a variety of physiological functions. On a cellular level, it has been shown that postsynaptic 5-HT3 receptors mediate fast excitatory synaptic transmission in rat neocortical interneurons, amygdala, and hippocampus, and in ferret visual cortex. 5-HT3 receptors are also present on presynaptic nerve terminals. There is some evidence for a role in modulation of neurotransmitter release, but evidence is inconclusive. Effects: When the receptor is activated to open the ion channel by agonists, the following effects are observed: CNS: nausea and vomiting center in brain stem, anxiety, as well as anticonvulsant and pro-nociceptive activity. PNS: neuronal excitation (in autonomic, nociceptive neurons), emesis Agonists: Agonists for the receptor include: Cereulide 2-methyl-5-HT Alpha-Methyltryptamine Bufotenin Chlorophenylbiguanide Ethanol Ibogaine Phenylbiguanide Quipazine RS-56812: Potent and selective 5-HT3 partial agonist, 1000x selectivity over other serotonin receptors SR-57227 Varenicline YM-31636 S 21007 (SAR c.f. CGS-12066A) Antagonists: Antagonists for the receptor (sorted by their respective therapeutic application) include: Antiemetics AS-8112 Granisetron Ondansetron Tropisetron Gastroprokinetics Alosetron Batanopride Metoclopramide (high doses) Renzapride Zacopride M1, the major active metabolite of mosapride Antidepressants Mianserin Mirtazapine Vortioxetine Antipsychotics Clozapine Olanzapine Quetiapine Antimalarials Quinine Chloroquine Mefloquine Others 3-Tropanyl indole-3-carboxylate Cannabidiol (CBD) Delta-9-Tetrahydrocannabinol Lamotrigine (epilepsy and bipolar disorder) Memantine (Alzheimer's disease medication) Menthol Thujone Positive Allosteric Modulators: These agents are not agonists at the receptor, but increase the affinity or efficacy of the receptors for an agonist: Indole Derivatives 5-chloroindole Small Organic Anaesthetics Ethanol Chloroform Halothane Isoflurane Discovery: Identification of the 5-HT3 receptor did not take place until 1986, lacking selective pharmacological tools. However, with the discovery that the 5-HT3 receptor plays a prominent role in chemotherapy- and radiotherapy-induced vomiting, and the concomitant development of selective 5-HT3 receptor antagonists to suppress these side effects aroused intense interest from the pharmaceutical industry and therefore the identification of 5-HT3 receptors in cell lines and native tissues quickly followed.
**Smart Move (FIRST)** Smart Move (FIRST): Smart Move is the name of the 2009-10 FIRST Lego League challenge, released September 3, 2009. It is centered on transport and how to make new and more efficient forms. Project: Teams were tasked with identifying a transportation problem, whether it be local or worldwide. They then had to create an innovative solution to the problem and share it with others. Gameplay: The table performance portion of Smart Move is played on a 4 ft by 8 ft field rimmed by wood boards. At competition, two of these fields are placed together to form an 8 ft square. In each 2+1⁄2-minute match, a team competes on each field with their robot to earn up to 400 points manipulating the mission models. One of the mission models, the Yellow Bridge, straddles both fields in the center. Both teams can earn points from completing this mission. The touch penalty objects are warning beacon models. All 8 are worth 10 points each if they are upright on the field, but are removed from play every time the robot is touched outside of base. Missions: Gain Access To Places (choose one) The robot needs to be in one of these positions when the match ends: TARGET SPOT - Parked with its drive wheels or treads touching the round target. Value: 25 points. YELLOW BRIDGE DECK - Parked with its drive wheels or treads touching your yellow bridge decking, but not touching any red decking or the mat. Value: 20 points. VEHICLE SHARING - Parked with its drive wheels or treads touching the red bridge decking, but not touching the mat. Value: 25 points. Gain Access to Things ACCESS MARKERS - Access markers need to be in their “down” position. Value: 25 points each. LOOPS - Required Condition: Loops need to be in Base. Value: 10 points each. BONUSES - If all three gray loops have reached Base, a team member may take one red loop into Base by hand. If all three red loops have reached Base, a team member may take one loop of any color into Base by hand. Avoid Impacts WARNING BEACONS - Warning beacons need to be upright (square to the mat). Value: 10 points each. Missions: ALSO: Warning beacons are the touch penalty objects for Smart Move. This means each time a team member touches his vehicle while it's completely out of Base, the referee removes one upright beacon. The beacons are removed in order from south to north, then from west to east. If there are no upright beacons at the time of the touch, there is no penalty. Missions: SENSOR WALLS - Sensor walls need to be upright (square to the mat). To count, each upright wall needs a "down" access marker, up to four walls. Value: 10 points each, max 40. Survive Impacts SENSOR WALLS - No (zero) sensor walls are upright. Value: 40 points. VEHICLE IMPACT TEST - The truck needs to no longer touch the ramp's red stopper beam. The robot needs to be completely out of Base when it releases the truck, otherwise the referee removes two upright warning beacons (in the same manner as two touch penalties). Value: 20 points. SINGLE PASSENGER RESTRAINT TEST - The crash‐test figure needs to be aboard the robot for the entire match. The first time the robot is without the figure, the referee removes the figure. Any reasonable constraint system is allowed. Value: 15 points. MULTIPLE PASSENGER SAFETY TEST - All four people are sitting or standing in or on a transport device of the team's design and some portion of that object is in the round target area. Value: 10 points.
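To make the arithmetic of the mission values concrete, here is a deliberately simplified score calculator (an illustrative sketch, not official FIRST LEGO League scoring software: it covers only a subset of the missions above, ignores touch penalties and several required conditions, and all function and parameter names are invented for the example).

```python
# Deliberately simplified Smart Move score sketch (illustrative only; covers a
# subset of missions, ignores touch penalties and most required conditions, and
# all parameter names are invented for this example).

PARKING_POINTS = {"target_spot": 25, "yellow_bridge": 20, "vehicle_sharing": 25, "none": 0}

def smart_move_score(parking="none", access_markers_down=0, loops_in_base=0,
                     beacons_upright=0, sensor_walls_upright=0,
                     truck_released=False, figure_aboard_all_match=False):
    score = PARKING_POINTS[parking]                 # end-of-match robot position (choose one)
    score += 25 * access_markers_down               # access markers in their "down" position
    score += 10 * loops_in_base                     # loops that reached Base
    score += 10 * beacons_upright                   # warning beacons still upright
    if sensor_walls_upright == 0:
        score += 40                                 # "Survive Impacts": no walls left upright
    else:
        score += 10 * min(sensor_walls_upright, 4)  # "Avoid Impacts": 10 each, max 40
                                                    # (ignores the down-access-marker condition)
    score += 20 if truck_released else 0            # vehicle impact test
    score += 15 if figure_aboard_all_match else 0   # single passenger restraint test
    return score

# Example: a strong run under these simplified rules.
print(smart_move_score(parking="target_spot", access_markers_down=4, loops_in_base=6,
                       beacons_upright=8, sensor_walls_upright=0,
                       truck_released=True, figure_aboard_all_match=True))
```

The example call corresponds to a run that parks on the target spot, lowers four access markers, returns six loops, keeps all eight beacons upright, knocks down every sensor wall, releases the truck and keeps the crash-test figure aboard.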
**PDGFA** PDGFA: Platelet-derived growth factor subunit A is a protein that in humans is encoded by the PDGFA gene.The protein encoded by this gene is a member of the platelet-derived growth factor family. The four members of this family are mitogenic factors for cells of mesenchymal origin and are characterized by a motif of eight cysteines. This gene product can exist either as a homodimer or as a heterodimer with the platelet-derived growth factor beta polypeptide, where the dimers are connected by disulfide bonds. Studies using knockout mice have shown cellular defects in oligodendrocytes, alveolar smooth muscle cells, and Leydig cells in the testis; knockout mice die either as embryos or shortly after birth. Two splice variants have been identified for this gene.
**Site-specific browser** Site-specific browser: A site-specific browser (SSB) is a software application that is dedicated to accessing pages from a single source (site) on a computer network such as the Internet or a private intranet. SSBs typically simplify the more complex functions of a web browser by excluding the menus, toolbars and browser GUI associated with functions that are external to the workings of a single site. These applications are typically started by a desktop icon which is usually a favicon.Site-specific browsers are often implemented through the use of existing application frameworks such as Gecko, WebKit, Microsoft's Internet Explorer (the underlying layout engines, specifically Trident and JScript) and Opera's Presto. SSBs built upon these frameworks allow web applications and social networking tools to start with desktop icons launching in a manner similar to standard non-browser applications. Some technologies, including Adobe's AIR and JavaFX use specialized development kits that can create cross-platform SSBs. Since version 6.0, the Curl platform has offered detached applets Archived 2011-07-08 at the Wayback Machine and the EmbeddedBrowserGraphic class which can be used as an SSB on the desktop. Applications: One early example of an SSB is MacDICT, a Mac OS 9 application that accessed various web sites to define, translate, or find synonyms for words typed into a text box. A more current example is WeatherBug Desktop, which is a standalone client accessing information also available at the weatherbug.com website but configured to display real-time weather data for a user-specified location. Applications: The first general purpose SSB is believed to be Bubbles which launched late 2005 on the Windows platform and later coined the term "Site Specific Extensions" for SSB userscripts and introduced the SSB Javascript API. Applications: On 2 September 2008, the Google Chrome web browser was released for Windows. Although Chrome is a full featured browser, it also contains a "Create application shortcut" menu item that adds the ability to create a stand-alone SSB window for any site. This is similar to Mozilla Prism (formerly WebRunner), now discontinued, but which is available as an add-on to the Firefox browser version 3.Examples of applications of SSBs in various situations include: Social networking: dedicated application to access and use sites such as Facebook, MySpace, Twitter, or personal blog pages Email: dedicated to webmail sites such as Gmail, Hotmail, or Yahoo! Mail Business: customer relationship management (CRM) or ERP client for sites such as Salesforce.com, specific web/browser hybrid implementations such as Elements SBM or intranet pages from suites like those sold by Oracle or SAP Mapping: SSB specific to maps from providers like Google Maps, Mapquest, or Yahoo! Maps Retail: desktop portal to major retailers that are accessed frequently or consumer services such as Carfax or CNET Mobile applications As of 2019, Firefox and Google Chrome on Android and Safari on iOS allow the creation of site-specific browsers for progressive web applications (PWAs). 
Software: Utilities that produce site-specific browsers: WebCatalog (macOS/Windows/Linux, isolated cookie storage) Chromeless (macOS, isolated cookie storage, discontinued) Fluid (Mac OS X only, isolated cookie storage) Epichrome (Mac OS only, discontinued) Unite (Mac OS only) Coherence (Mac OS only) Google Chrome (Available for Windows, Mac, and Linux: "Application shortcut" feature, though not entirely sandboxed like Mozilla Prism) (feature modified, "Create shortcut...", possibly sometimes unavailable, as of 2020) ICE (Linux only, developed for Peppermint OS) Mailplane (Mac OS only) Mozilla Prism (cross-platform, Flash-compatible, and true application isolation (e.g., cookies); discontinued) GNOME Web ("Install Site as Web Application" feature) Microsoft Edge Internet Explorer 9 and higher Wavebox (Available for Windows, Mac, and Linux) Hermit (Available for Android only) iOS Safari: Share --> Add to Home Screen. Software: NoScript's ABE module with rules like "Site x.com y.net / Accept from x.com y.net / Deny" and "Site * / Deny". Rich web application platforms: JavaFX 2.0 Adobe AIR Curl RIA platform Microsoft Silverlight. Widget engines: Opera Widgets
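The "application shortcut" behaviour described for Chromium-based browsers can also be reproduced from a small script. The sketch below is an illustration rather than something taken from the article: it relies on Chromium/Chrome's long-standing --app and --user-data-dir command-line switches, and the browser binary name and profile directory are assumptions that will differ between systems.

```python
# Minimal site-specific browser launcher (illustrative sketch). The browser path
# and profile directory below are assumptions, not values from the article.
import subprocess
from pathlib import Path

BROWSER = "google-chrome"                              # assumed binary name on Linux
PROFILE = Path.home() / ".local/share/ssb-example"     # hypothetical profile directory

def launch_ssb(url: str) -> subprocess.Popen:
    """Open `url` in a chromeless, single-site window with its own profile."""
    PROFILE.mkdir(parents=True, exist_ok=True)
    return subprocess.Popen([
        BROWSER,
        f"--app={url}",                # open without the normal browser UI
        f"--user-data-dir={PROFILE}",  # isolated cookies and local storage
    ])

if __name__ == "__main__":
    launch_ssb("https://example.com")
```

Using a dedicated --user-data-dir gives the same kind of isolated cookie storage that several of the utilities listed above advertise.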
**Breast cancer classification** Breast cancer classification: Breast cancer classification divides breast cancer into categories according to different schemes criteria and serving a different purpose. The major categories are the histopathological type, the grade of the tumor, the stage of the tumor, and the expression of proteins and genes. As knowledge of cancer cell biology develops these classifications are updated. Breast cancer classification: The purpose of classification is to select the best treatment. The effectiveness of a specific treatment is demonstrated for a specific breast cancer (usually by randomized, controlled trials). That treatment may not be effective in a different breast cancer. Some breast cancers are aggressive and life-threatening, and must be treated with aggressive treatments that have major adverse effects. Other breast cancers are less aggressive and can be treated with less aggressive treatments, such as lumpectomy. Breast cancer classification: Treatment algorithms rely on breast cancer classification to define specific subgroups that are each treated according to the best evidence available. Classification aspects must be carefully tested and validated, such that confounding effects are minimized, making them either true prognostic factors, which estimate disease outcomes such as disease-free or overall survival in the absence of therapy, or true predictive factors, which estimate the likelihood of response or lack of response to a specific treatment.Classification of breast cancer is usually, but not always, primarily based on the histological appearance of tissue in the tumor. A variant from this approach, defined on the basis of physical exam findings, is that inflammatory breast cancer (IBC), a form of ductal carcinoma or malignant cancer in the ducts, is distinguished from other carcinomas by the inflamed appearance of the affected breast, which correlates with increased cancer aggressivity. Schemes or aspects: Overview Breast cancers can be classified by different schemata. Each of these aspects influences treatment response and prognosis. Description of a breast cancer would optimally include all of these classification aspects, as well as other findings, such as signs found on physical exam. A full classification includes histopathological type, grade, stage (TNM), receptor status, and the presence or absence of genes as determined by DNA testing: Histopathology. Although breast cancer has many different histologies, the considerable majority of breast cancers are derived from the epithelium lining the ducts or lobules, and are classified as mammary ductal carcinoma. Carcinoma in situ is proliferation of cancer cells within the epithelial tissue without invasion of the surrounding tissue. In contrast, invasive carcinoma invades the surrounding tissue. Perineural and/or lymphovascular space invasion is usually considered as part of the histological description of a breast cancer, and when present may be associated with more aggressive disease. Schemes or aspects: Grade. Grading focuses on the appearance of the breast cancer cells compared to the appearance of normal breast tissue. Normal cells in an organ like the breast become differentiated, meaning that they take on specific shapes and forms that reflect their function as part of that organ. Cancerous cells lose that differentiation. In cancer, the cells that would normally line up in an orderly way to make up the milk ducts become disorganized. Cell division becomes uncontrolled. 
Cell nuclei become less uniform. Pathologists describe cells as well differentiated (low-grade), moderately differentiated (intermediate-grade), and poorly differentiated (high-grade) as the cells progressively lose the features seen in normal breast cells. Poorly differentiated cancers have a worse prognosis. Schemes or aspects: Stage. The TNM classification for staging breast cancer is based on the size of the cancer where it originally started in the body and the locations to which it has travelled. These cancer characteristics are described as the size of the tumor (T), whether or not the tumor has spread to the lymph nodes (N) in the armpits, neck, and inside the chest, and whether the tumor has metastasized (M) (i.e. spread to a more distant part of the body). Larger size, nodal spread, and metastasis have a larger stage number and a worse prognosis. The main stages are: Stage 0 which is in situ disease or Paget's disease of the nipple. Stage 0 is a pre-cancerous or marker condition, either ductal carcinoma in situ (DCIS) or lobular carcinoma in situ (LCIS). Schemes or aspects: Stages 1–3 are within the breast or regional lymph nodes. Stage 4 is a metastatic cancer. Metastatic breast cancer has a less favorable prognosis. Schemes or aspects: Receptor status. Cells have receptors on their surface and in their cytoplasm and nucleus. Chemical messengers such as hormones bind to receptors, and this causes changes in the cell. Breast cancer cells may or may not have many different types of receptors, the three most important in the present classification being: estrogen receptor (ER), progesterone receptor (PR), and HER2/neu. Cells with or without these receptors are called ER positive (ER+), ER negative (ER-), PR positive (PR+), PR negative (PR-), HER2 positive (HER2+), and HER2 negative (HER2-). Cells with none of these receptors are called basal-like or triple negative. HER2-low has some HER2 proteins on the cell surface, but not enough to be classified as HER2-positive. Trastuzumab deruxtecan is the first approved therapy by the US Food and Drug Administration (FDA) targeted to people with the HER2-low breast cancer subtype. Schemes or aspects: DNA-based classification. Understanding the specific details of a particular breast cancer may include looking at the cancer cell DNA or RNA by several different laboratory approaches. When specific DNA mutations or gene expression profiles are identified in the cancer cells this may guide the selection of treatments, either by targeting these changes, or by predicting from these alterations which non-targeted therapies are most effective. Schemes or aspects: Other classification approaches. Computer models such as Adjuvant can combine the various classification aspects according to validated algorithms and present visually appealing graphics that assist in treatment decisions. The USC/Van Nuys prognostic index (VNPI) classifies ductal carcinoma in situ (DCIS) into dissimilar risk categories that may be treated accordingly. The choice of which treatment to receive can be substantially influenced by comorbidity assessments. Familial breast cancers may potentially undergo dissimilar treatment (such as mastectomy). Histopathology: Histopathologic classification is based upon characteristics seen upon light microscopy of biopsy specimens. They can broadly be classified into: Carcinoma in situ . This group constitutes about 15-30% of breast biopsies, more so in countries with high coverage of breast screening programs. 
These have a favorable prognosis, with 5-year survival rates of 97-99%. Histopathology: Invasive carcinoma. This group constitutes the other 70-85%. The most common type in this group is invasive ductal carcinoma, representing about 80% of invasive carcinomas. In the US, 55% of breast cancers are invasive ductal carcinoma. Invasive lobular carcinoma represents about 10% of invasive carcinomas, and 5% of all breast cancers in the US. The overall 5-year survival rate for both invasive ductal carcinoma and invasive lobular carcinoma was approximately 85% in 2003. Ductal carcinoma in situ, on the other hand, is in itself harmless, although if untreated, approximately 60% of these low-grade DCIS lesions will become invasive over the course of 40 years of follow-up. Histopathology: WHO classification The 2012 World Health Organization (WHO) classification of tumors of the breast, which includes benign (generally harmless) tumors and malignant (cancerous) tumors, specifies the recommended pathological types. Grade: The grading of a cancer in the breast depends on the microscopic similarity of breast cancer cells to normal breast tissue, and classifies the cancer as well differentiated (low-grade), moderately differentiated (intermediate-grade), and poorly differentiated (high-grade), reflecting progressively less normal appearing cells that have a worsening prognosis. Although grading is fundamentally based on how biopsied, cultured cells behave, in practice the grading of a given cancer is derived by assessing the cellular appearance of the tumor. The closer the appearance of the cancer cells to normal cells, the slower their growth and the better the prognosis. If cells are not well differentiated, they will appear immature, will divide more rapidly, and will tend to spread. Well differentiated is given a grade of 1, moderate is grade 2, while poor or undifferentiated is given a higher grade of 3 or 4 (depending upon the scale used). Grade: The Nottingham system is recommended for breast cancer grading. The Nottingham system is also called the Bloom–Richardson–Elston system (BRE), or the Elston-Ellis modification of the Scarff-Bloom-Richardson grading system. It grades breast carcinomas by adding up scores for tubule formation, nuclear pleomorphism, and mitotic count, each of which is given 1 to 3 points. The scores for each of these three criteria are then added together to give an overall final score and corresponding grade. It is not applicable to medullary carcinomas, which are histologically high-grade by definition, while being clinically low-grade if lymph nodes are negative. It is also not applicable to metaplastic carcinomas. The grading criteria are as follows: Tubule formation This parameter assesses what percent of the tumor forms normal duct structures. In cancer, there is a breakdown of the mechanisms that cells use to attach to each other and communicate with each other, to form tissues such as ducts, so the tissue structures become less orderly. Grade: Note: The overall appearance of the tumor has to be considered. Grade: 1 point: tubular formation in more than 75% of the tumor (it may in addition be termed "majority of tumor") 2 points: tubular formation in 10 to 75% of the tumor ("moderate") 3 points: tubular formation in less than 10% of the tumor ("little or none") Nuclear pleomorphism This parameter assesses whether the cell nuclei are uniform like those in normal breast duct epithelial cells, or whether they are larger, darker, or irregular (pleomorphic). 
In cancer, the mechanisms that control genes and chromosomes in the nucleus break down, and irregular nuclei and pleomorphic changes are signs of abnormal cell reproduction. Grade: Note: The cancer areas having cells with the greatest cellular abnormalities should be evaluated. Grade: 1 point: nuclei with minimal or mild variation in size and shape 2 points: nuclei with moderate variation in size and shape 3 points: nuclei with marked variation in size and shape Mitotic count This parameter assesses how many mitotic figures (dividing cells) the pathologist sees in 10 high-power microscope fields. One of the hallmarks of cancer is that cells divide uncontrollably. The more cells that are dividing, the worse the cancer. Grade: Note: Mitotic figures are counted only at the periphery of the tumor, and counting should begin in the most mitotically active areas. Overall grade The scores for each of these three criteria are added together to give a final overall score and a corresponding grade as follows: 3-5: Grade 1 tumor (well-differentiated). Best prognosis. 6-7: Grade 2 tumor (moderately differentiated). Medium prognosis. 8-9: Grade 3 tumor (poorly differentiated). Worst prognosis. Lower-grade tumors, with a more favorable prognosis, can be treated less aggressively, and have a better survival rate. Higher-grade tumors are treated more aggressively, and their intrinsically worse survival rate may warrant the adverse effects of more aggressive medications. Stage: Staging is the process of determining how much cancer there is in the body and where it is located. The underlying purpose of staging is to describe the extent or severity of an individual's cancer, and to bring together cancers that have similar prognosis and treatment. Staging of breast cancer is one aspect of breast cancer classification that assists in making appropriate treatment choices, when considered along with other classification aspects such as estrogen receptor and progesterone receptor levels in the cancer tissue, the human epidermal growth factor receptor 2 (HER2/neu) status, menopausal status, and the person's general health. Staging information that is obtained prior to surgery, for example by mammography, x-rays and CT scans, is called clinical staging, and staging by surgery is known as pathological staging. Stage: Pathologic staging is more accurate than clinical staging, but clinical staging is the first and sometimes the only staging type. For example, if clinical staging reveals stage IV disease, extensive surgery may not be helpful, and (appropriately) incomplete pathological staging information will be obtained. The American Joint Committee on Cancer (AJCC) and the International Union Against Cancer (UICC) recommend TNM staging, which is a two-step procedure. Their TNM system, which they now develop jointly, first classifies cancer by several factors, T for tumor, N for nodes, M for metastasis, and then groups these TNM factors into overall stages. 
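Before turning to the TNM details, the Nottingham scoring described above can be summarized in a minimal sketch: the three components are each scored 1 to 3 and the sum maps to a grade. The function below is illustrative only, not a clinical tool.

```python
# Illustrative summary of Nottingham (Bloom-Richardson-Elston) grading:
# tubule formation, nuclear pleomorphism and mitotic count are each scored 1-3,
# and the sum (3-9) maps to an overall grade. Not for clinical use.

def nottingham_grade(tubule_score, pleomorphism_score, mitotic_score):
    """Return (total score, grade) from the three component scores (each 1-3)."""
    for s in (tubule_score, pleomorphism_score, mitotic_score):
        if s not in (1, 2, 3):
            raise ValueError("each component score must be 1, 2 or 3")
    total = tubule_score + pleomorphism_score + mitotic_score
    if total <= 5:
        grade = 1   # well differentiated, best prognosis
    elif total <= 7:
        grade = 2   # moderately differentiated
    else:
        grade = 3   # poorly differentiated, worst prognosis
    return total, grade

# Example: moderate tubule formation (2), marked pleomorphism (3), few mitoses (1).
print(nottingham_grade(2, 3, 1))   # -> (6, 2)
```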
Stage: Primary Tumor (T) Tumor – The tumor values (TX, T0, Tis, T1, T2, T3 or T4) depend on the cancer at the primary site of origin in the breast, as follows: TX: the primary tumor cannot be assessed. T0: no evidence of primary tumor. Tis: ductal carcinoma in situ (DCIS), lobular carcinoma in situ (LCIS) or Paget's disease. T1: less than 2 cm. T1a: 0.1 to 0.5 cm. T1b: 0.5 to 1.0 cm. T1c: 1.0 to 2.0 cm. T2: 2 to 5 cm. T3: larger than 5 cm. T4: T4a, chest wall involvement; T4b, skin involvement; T4c, both 4a and 4b; T4d, inflammatory breast cancer, a clinical circumstance where typical skin changes involve at least a third of the breast. Stage: Regional Lymph Nodes (N) Lymph Node – The lymph node values (NX, N0, N1, N2 or N3) depend on the number, size and location of breast cancer cell deposits in various regional lymph nodes, such as the armpit (axillary lymph nodes), the collar area (supraclavicular lymph nodes), and inside the chest (internal mammary lymph nodes). The armpit is designated as having three levels: level I is the low axilla, and is below or outside the lower edge of the pectoralis minor muscle; level II is the mid-axilla, which is defined by the borders of the pectoralis minor muscle; and level III, or high (apical) axilla, which is above the pectoralis minor muscle. Each stage is as follows: N0: There is some nuance to the official definitions for N0 disease, which includes: N0(i+): isolated tumor cell clusters (ITC), which are small clusters of cells not greater than 0.2 mm, or single tumor cells, or a cluster of fewer than 200 cells in a single histologic cross-section, whether detected by routine histology or immunohistochemistry. Stage: N0(mol-): regional lymph nodes have no metastases histologically, but have positive molecular findings (RT-PCR). N1: Metastases in 1-3 axillary lymph nodes and/or in internal mammary nodes; and/or in clinically negative internal mammary nodes with micrometastasis, or macrometastasis on sentinel lymph node biopsy. N1mi: Micrometastasis, that is, lymph node deposits of at least 0.2 mm or more than 200 cells, but none larger than 2.0 mm. At least one carcinoma focus over 2.0 mm is called "lymph node metastasis". If one node qualifies as metastasis, all other nodes, even with smaller foci, are counted as metastases as well. N2: Fixed/matted ipsilateral axillary nodes. Stage: N3: N3a – ipsilateral infraclavicular nodes; N3b – ipsilateral internal mammary nodes; N3c – ipsilateral supraclavicular nodes. Distant Metastases (M) M0: No clinical or radiographic evidence of distant metastases. M0(i+): Molecularly or microscopically detected tumor cells in circulating blood, bone marrow or non-regional nodal tissue, no larger than 0.2 mm, and without clinical or radiographic evidence or symptoms or signs of metastases, and which, perhaps counter-intuitively, does not change the stage grouping, as staging for M0(i+) is done according to the T and N values. M1: Distant detectable metastases as determined by classic clinical and radiographic means, and/or metastases that are histologically larger than 0.2 mm. Stage: Overall stage A combination of T, N and M, as follows: Stage 0: Tis. Stage I: T1N0. Stage II: T2N0, T3N0, T0N1, T1N1, or T2N1. Stage III: invasion into skin and/or ribs, matted lymph nodes; T3N1, T0N2, T1N2, T2N2, T3N2, any T N3, T4 any N; locally advanced breast cancer. Stage IV: M1; advanced breast cancer. Staging and prognosis The impact of different stages on outcome can be appreciated in the following table, taken from patient data in the 2013-2015 period, and using the AJCC 8th edition for staging. 
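The grouping of T, N and M values into an overall stage listed above can be expressed as a simple lookup. The sketch below follows only the coarse groupings named in this article; the real AJCC system also uses substages and, in its 8th edition, biomarkers.

```python
# Simplified sketch of grouping TNM values into an overall stage, following the
# coarse groupings listed above. The real AJCC system has substages (e.g. IIA/IIB)
# and, in its 8th edition, also incorporates grade and receptor status.

def overall_stage(t, n, m):
    """Return a coarse overall stage from simplified T (0-4 or 'is'), N (0-3), M (0-1)."""
    if m == 1:
        return "IV"                     # any distant metastasis is stage IV
    if t == "is" and n == 0:
        return "0"                      # carcinoma in situ
    if n == 3 or t == 4 or n == 2 or (t == 3 and n >= 1):
        return "III"                    # locally advanced disease
    if (t == 2 and n <= 1) or (t == 3 and n == 0) or (t in (0, 1) and n == 1):
        return "II"
    if t == 1 and n == 0:
        return "I"
    return "unclassified in this sketch"

print(overall_stage(1, 0, 0))     # -> "I"
print(overall_stage(3, 0, 0))     # -> "II"
print(overall_stage(2, 2, 0))     # -> "III"
print(overall_stage("is", 0, 1))  # -> "IV" (M1 takes precedence)
```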
That table does not show the influence of important additional factors such as estrogen receptor (ER) or HER2/neu receptor status, and does not reflect the impact of newer treatments. Stage: Previous editions Although TNM classification is an internationally agreed system, it has gradually evolved through its different editions; the dates of publication and of adoption for use of AJCC editions are summarized in the table in this article; past editions are available from AJCC for web download. Several factors are important when reviewing reports for individual breast cancers or when reading the medical literature, and applying staging data. Stage: It is crucial to be aware that the TNM system criteria have varied over time, sometimes fairly substantially, according to the different editions that AJCC and UICC have released. Readers are assisted by the provision in the table of direct links to the breast cancer chapters of these various editions. Stage: As a result, a given stage may have quite a different prognosis depending on which staging edition is used, independent of any changes in diagnostic methods or treatments, an effect that can contribute to "stage migration". For example, differences in the 1998 and 2003 categories resulted in many cancers being assigned differently, with apparent improvement in survival rates. As a practical matter, reports often use the staging edition that was in place when the study began, rather than the date of acceptance or publication. However, it is worth checking whether the author updated the staging system during the study, or modified the usual classification rules for specific use in the investigation. Stage: A different effect on staging arises from evolving technologies that are used to assign patients to particular categories, such that increasingly sensitive methods tend to cause individual cancers to be reassigned to higher stages, making it improper to compare that cancer's prognosis to the historical expectations for that stage. Finally, a further important consideration is the effect of improving treatments over time. Previous editions featured three metastatic values (MX, M0 and M1), which referred respectively to absence of adequate information, the confirmed absence, or the presence of breast cancer cells in locations other than the breast and regional lymph nodes, such as bone, brain, or lung. AJCC has provided web-accessible poster versions of the current versions of these copyrighted TNM descriptors and groups, and readers should refer to that up-to-date, accurate information, or to the National Cancer Institute (NCI) or National Comprehensive Cancer Network sites, which reprint these with AJCC permission. For accurate, complete, current details refer to the accessible copyrighted documentation from AJCC, or to the authorized documentation from NCI or NCCN; for past editions refer to AJCC. Receptor status: The receptor status of breast cancers has traditionally been identified by immunohistochemistry (IHC), which stains the cells based on the presence of estrogen receptors (ER), progesterone receptors (PR) and HER2. This remains the most common method of testing for receptor status, but DNA multi-gene expression profiles can categorize breast cancers into molecular subtypes that generally correspond to IHC receptor status; one commercial source is the BluePrint test, as discussed in the following section. 
Receptor status: Receptor status is a critical assessment for all breast cancers, as it determines the suitability of using targeted treatments such as tamoxifen and/or trastuzumab. These treatments are now some of the most effective adjuvant treatments of breast cancer. Estrogen receptor positive (ER+) cancer cells depend on estrogen for their growth, so they can be treated with drugs to reduce either the effect of estrogen (e.g. tamoxifen) or the actual level of estrogen (e.g. aromatase inhibitors), and generally have a better prognosis. Generally, prior to modern treatments, HER2+ cancers had a worse prognosis; however, HER2+ cancer cells respond to drugs such as the monoclonal antibody trastuzumab (in combination with conventional chemotherapy), and this has improved the prognosis significantly. Conversely, triple negative cancer (i.e. no positive receptors), lacking targeted treatments, now has a comparatively poor prognosis. Androgen receptor is expressed in 80-90% of ER+ breast cancers and 40% of "triple negative" breast cancers. Activation of androgen receptors appears to suppress breast cancer growth in ER+ cancer, while in ER- breast cancer it appears to act as a growth promoter. Efforts are underway to utilize this as a prognostic marker and treatment target. Receptor status: Molecular subtype Receptor status was traditionally considered by reviewing each individual receptor (ER, PR, HER2) in turn, but newer approaches look at these together, along with the tumor grade, to categorize breast cancer into several conceptual molecular classes that have different prognoses and may have different responses to specific therapies. DNA microarrays have assisted this approach, as discussed in the following section. Proposed molecular subtypes include: Basal-like: ER-, PR- and HER2-; also called triple negative breast cancer (TNBC). Most BRCA1 breast cancers are basal-like TNBC. Receptor status: Luminal A: ER+ and low grade Luminal B: ER+ but often high grade Luminal ER-/AR+ (overlapping with apocrine and so-called molecular apocrine): a recently identified androgen-responsive subtype which may respond to antihormonal treatment with bicalutamide ERBB2/HER2-amplified: has overexpressed HER2/neu Normal breast-like Claudin-low: a more recently described class; often triple-negative, but distinct in that there is low expression of cell-cell junction proteins including E-cadherin, and frequently there is infiltration with lymphocytes. DNA classification: Traditional DNA classification Traditional DNA classification was based on the general observation that cells that are dividing more quickly have a worse prognosis, and relied on either the presence of protein Ki67 or the percentage of cancer cell DNA in S phase. These methods, and scoring systems that used DNA ploidy, are used much less often now, as their predictive and prognostic power was less substantial than other classification schemes such as the TNM stage. In contrast, modern DNA analyses are increasingly relevant in defining underlying cancer biology and in helping choose treatments. DNA classification: HER2/neu HER2/neu status can be analyzed by fluorescent in-situ hybridization (FISH) assays. Some commentators prefer this approach, claiming a higher correlation than receptor immunohistochemistry with response to trastuzumab, a targeted therapy, but guidelines permit either testing method. DNA classification: DNA microarrays Background DNA microarrays have compared normal cells to breast cancer cells and found differences in the expression of hundreds of genes. 
Although the significance of many of those genetic differences is unknown, independent analyses by different research groups have found that certain groups of genes have a tendency to co-express. These co-expressing clusters have included hormone receptor-related genes, HER2-related genes, a group of basal-like genes, and proliferation genes. As might therefore be anticipated, there is considerable similarity between the receptor and microarray classifications, but assignment of individual tumors is by no means identical. By way of illustration, some analyses have suggested that approximately 75% of receptor-classified triple-negative breast cancers (TNBC) have the expected basal-like DNA expression profile, and a similar 75% of tumors with a typical basal-like DNA expression profile are receptor-classified TNBC as well. Put differently, about 25% of the tumors defined as TNBC or basal-like by one classification are excluded from the alternative classification's results. Which classification scheme (receptor IHC or DNA expression profile) more reliably assorts particular cancers to effective therapies is under investigation. DNA classification: Several commercially marketed DNA microarray tests analyze clusters of genes and may help decide which possible treatment is most effective for a particular cancer. The use of these assays in breast cancers is supported by Level II evidence or Level III evidence. No tests have been verified by Level I evidence, which is rigorously defined as being derived from a prospective, randomized controlled trial where patients who used the test had a better outcome than those who did not. Acquiring extensive Level I evidence would be clinically and ethically challenging. However, several validation approaches are being actively pursued. DNA classification: Numerous genetic profiles have been developed. The most heavily marketed are: Oncotype DX is supported by Level II evidence, was originally designed for use in estrogen receptor (ER) positive tumors, and has been endorsed by the American Society of Clinical Oncology (ASCO) and the NCCN. MammaPrint is supported only by Level III evidence, can be performed on estrogen receptor (ER) positive and negative tumors, and has FDA approval. DNA classification: Two other tests also only have Level III evidence: Theros and MapQuant Dx. These multigene assays, some partially and some completely commercialized, have been scientifically reviewed to compare them with other standard breast cancer classification methods such as grade and receptor status. Although these gene-expression profiles look at different individual genes, they seem to classify a given tumor into similar risk groups and thus provide concordant predictions of outcome. Although there is considerable evidence that these tests can refine the treatment decisions in a meaningful proportion of breast cancers, they are fairly expensive; proposed selection criteria for which particular tumors may benefit by being interrogated by these assays remain controversial, particularly with lymph node positive cancers. One review characterized these genetic tests collectively as adding "modest prognostic information for patients with HER2-positive and triple-negative tumors, but when measures of clinical risk are equivocal (e.g., intermediate expression of ER and intermediate histologic grade), these assays could guide clinical decisions". 
DNA classification: Oncotype DX Oncotype DX assesses 16 cancer-related genes and 5 normal comparator reference genes, and is therefore sometimes known as the 21-gene assay. It was designed for use in estrogen receptor (ER) positive tumors. The test is run on formalin-fixed, paraffin-embedded tissue. Oncotype results are reported as a Recurrence Score (RS), where a higher RS is associated with a worse prognosis, referring to the likelihood of recurrence without treatment. In addition to that prognostic role, a higher RS is also associated with a higher probability of response to chemotherapy, which is termed a positive predictive factor. DNA classification: These results suggest not only that Oncotype stratifies estrogen-receptor positive breast cancer into different prognostic groups, but also that cancers with a particularly favorable Oncotype DX microarray result tend to derive minimal benefit from adjuvant chemotherapy, and so it may be appropriate to choose to avoid side effects from that additional treatment. As an additional example, a neoadjuvant clinical treatment program that included initial chemotherapy followed by surgery and subsequent additional chemotherapy, radiotherapy, and hormonal therapy found a strong correlation of the Oncotype classification with the likelihood of a complete response (CR) to the presurgical chemotherapy. Since high-risk features may already be evident in many high-risk cancers, for example hormone-receptor negativity or HER-2 positive disease, the Oncotype test may especially improve the risk assessment that is derived from routine clinical variables in intermediate-risk disease. Results from both the US and internationally suggest that Oncotype may assist in treatment decisions. Oncotype DX has been endorsed by the American Society of Clinical Oncology (ASCO) and the National Comprehensive Cancer Network (NCCN). The NCCN Panel considers the 21-gene assay as an option when evaluating certain tumors to assist in estimating likelihood of recurrence and benefit from chemotherapy, emphasizing that the recurrence score should be used along with other breast cancer classification elements when stratifying risk. Oncotype fulfilled all California Technology Assessment Forum (CTAF) criteria in October 2006. The U.S. Food and Drug Administration (FDA) does not mandate approval of this class of tests if they are performed at a single, company-operated laboratory. Genomic Health, which developed Oncotype DX, offers the test under these so-called "home brew" rules and, accordingly, to that extent the Oncotype DX assay is not specifically FDA approved. DNA classification: MammaPrint and BluePrint The MammaPrint gene pattern is a commercial-stage 70-gene panel marketed by Agendia that was developed in patients under age 55 years who had lymph node negative breast cancers (N0). The commercial test is marketed for use in breast cancer irrespective of estrogen receptor (ER) status. The test is run on formalin-fixed, paraffin-embedded tissue. MammaPrint traditionally used rapidly frozen tissue, but a room-temperature molecular fixative is available for use within 60 minutes of obtaining fresh tissue samples. A summary of clinical trials using MammaPrint is included in the MammaPrint main article. The available evidence for MammaPrint was reviewed by the California Technology Assessment Forum (CTAF) in June 2010; the written report indicated that MammaPrint had not yet fulfilled all CTAF criteria. 
MammaPrint has 5 FDA clearances and is the only FDA-cleared microarray assay available. To be eligible for the MammaPrint gene expression profile, a breast cancer should have the following characteristics: stage 1 or 2, tumor size less than 5.0 cm, estrogen receptor positive (ER+) or estrogen receptor negative (ER-). In the US, the tumor should also be lymph node negative (N0), but internationally the test may be performed if the lymph node status is negative or positive with up to 3 nodes. One method of assessing the molecular subtype of a breast cancer is by BluePrint, a commercial-stage 80-gene panel marketed by Agendia, either as a standalone test, or combined with the MammaPrint gene profile. DNA classification: Other DNA assays and choice of treatment The choice of established chemotherapy medications, if chemotherapy is needed, may also be affected by DNA assays that predict relative resistance or sensitivity. Topoisomerase II (TOP2A) expression predicts whether doxorubicin is relatively useful. Expression of genes that regulate tubulin may help predict the activity of taxanes. DNA classification: Various molecular pathway targets and DNA results are being incorporated in the design of clinical trials of new medicines. Specific genes such as p53, NME1, BRCA and PIK3CA/Akt may be associated with responsiveness of the cancer cells to innovative research pharmaceuticals. BRCA1 and BRCA2 polymorphic variants can increase the risk of breast cancer, and these cancers tend to express a profile of genes, such as p53, in a pattern that has been called "BRCA-ness." Cancers arising from BRCA1 and BRCA2 mutations, as well as other cancers that share a similar "BRCA-ness" profile, including some basal-like receptor triple negative breast cancers, may respond to treatment with PARP inhibitors such as olaparib. Combining these newer medicines with older agents such as 6-Thioguanine (6TG) may overcome the resistance that can arise in BRCA cancers to PARP inhibitors or platinum-based chemotherapy. mTOR inhibitors such as everolimus may show more effect in PIK3CA/Akt e9 mutants than in e20 mutants or wild types. DNA methylation patterns can epigenetically affect gene expression in breast cancer and may contribute to some of the observed differences between genetic subtypes. Tumors overexpressing the Wnt signaling pathway co-receptor low-density lipoprotein receptor-related protein 6 (LRP6) may represent a distinct subtype of breast cancer and a potential treatment target. Numerous clinical investigations looked at whether testing for variant genotype polymorphic alleles of several genes could predict whether or not to prescribe tamoxifen; this was based on possible differences in the rate of conversion of tamoxifen to the active metabolite, endoxifen. Although some studies had suggested a potential advantage from CYP2D6 testing, data from two large clinical trials found no benefit. Testing for the CYP2C19*2 polymorphism gave counterintuitive results. The medical utility of potential biomarkers of tamoxifen responsiveness such as HOXB13, PAX2, and estrogen receptor (ER) alpha and beta isoform interactions with SRC3 has yet to be fully defined. Other classification approaches: Computer models Computer models consider several traditional factors concurrently to derive individual survival predictions and calculations of potential treatment benefits. The validated algorithms can present visually appealing graphics that assist in treatment decisions. 
In addition, other classifications of breast cancers exist, and no uniform system has been consistently adopted worldwide. Adjuvant! is based on US cohorts and presents colored bar charts that display information that may assist in decisions regarding systemic adjuvant therapies. Successful validation was seen with Canadian and Dutch cohorts. Adjuvant! seemed less applicable to a British cohort, and accordingly PREDICT is being developed in the United Kingdom. Other classification approaches: Other immunohistochemical tests Among the immunohistochemical tests that may further stratify prognosis, BCL2 has shown promise in preliminary studies. Van Nuys prognostic index The USC/Van Nuys prognostic index (VNPI) classifies ductal carcinoma in situ (DCIS) into different risk categories that may be treated accordingly.
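The receptor-based surrogate groupings for the molecular subtypes discussed above can be sketched roughly as follows; true molecular subtyping uses gene-expression profiling, and the cut-offs and labels in this sketch are simplified for illustration.

```python
# Rough IHC-based surrogate for the molecular subtypes discussed above.
# Real subtype assignment uses gene-expression data; this mapping is illustrative.

def surrogate_subtype(er_positive, pr_positive, her2_positive, high_grade):
    """Map ER/PR/HER2 status and grade to an approximate subtype label."""
    if not er_positive and not pr_positive and not her2_positive:
        return "triple negative / basal-like"
    if her2_positive and not er_positive:
        return "HER2-enriched"
    if er_positive and high_grade:
        return "luminal B (approximate)"
    if er_positive:
        return "luminal A (approximate)"
    return "unclassified in this sketch"

print(surrogate_subtype(er_positive=True, pr_positive=True,
                        her2_positive=False, high_grade=False))  # luminal A
print(surrogate_subtype(False, False, False, True))              # triple negative
```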
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Marginal product of labor** Marginal product of labor: In economics, the marginal product of labor (MPL) is the change in output that results from employing an added unit of labor. It is a feature of the production function, and depends on the amounts of physical capital and labor already in use. Definition: The marginal product of a factor of production is generally defined as the change in output resulting from a unit or infinitesimal change in the quantity of that factor used, holding all other input usages in the production process constant. The marginal product of labor is then the change in output (Y) per unit change in labor (L). In discrete terms the marginal product of labor is ΔY/ΔL. In continuous terms, the MPL is the first derivative of the production function: ∂Y/∂L. Graphically, the MPL is the slope of the production function. Examples: There is a factory which produces toys. When there are no workers in the factory, no toys are produced. When there is one worker in the factory, six toys are produced per hour. When there are two workers in the factory, eleven toys are produced per hour. There is a marginal product of labor of five when there are two workers in the factory compared to one. When the marginal product of labor is increasing, this is called increasing marginal returns. However, as the number of workers increases, the marginal product of labor may not increase indefinitely. When not scaled properly, the marginal product of labor may go down when the number of employees goes up, creating a situation known as diminishing marginal returns. When the marginal product of labor becomes negative, it is known as negative marginal returns. Marginal costs: The marginal product of labor is directly related to costs of production. Costs are divided between fixed and variable costs. Fixed costs are costs that relate to the fixed input, capital, or rK, where r is the rental cost of capital and K is the quantity of capital. Variable costs (VC) are the costs of the variable input, labor, or wL, where w is the wage rate and L is the amount of labor employed. Thus, VC = wL. Marginal cost (MC) is the change in total cost per unit change in output, or ∆C/∆Q. In the short run, production can be varied only by changing the variable input. Thus only variable costs change as output increases: ∆C = ∆VC = ∆(wL). Marginal cost is ∆(Lw)/∆Q. Now, ∆L/∆Q is the reciprocal of the marginal product of labor (∆Q/∆L). Therefore, marginal cost is simply the wage rate w divided by the marginal product of labor: MC = ΔVC/ΔQ, ΔVC = wΔL, and ΔL/ΔQ (the change in the quantity of labor needed to effect a one-unit change in output) = 1/MPL; therefore MC = w/MPL. Thus, if the marginal product of labor is rising then marginal costs will be falling, and if the marginal product of labor is falling then marginal costs will be rising (assuming a constant wage rate). Relation between MPL and APL: The average product of labor (APL) is the total product of labor divided by the number of units of labor employed, or Q/L. The average product of labor is a common measure of labor productivity. The APL curve is shaped like an inverted “u”. At low production levels the APL tends to increase as additional labor is added. The primary reason for the increase is specialization and division of labor. At the point the APL reaches its maximum value APL equals the MPL. Beyond this point the APL falls. Relation between MPL and APL: During the early stages of production MPL is greater than APL. When the MPL is above the APL the APL will increase. 
Eventually the MPL reaches its maximum value at the point of diminishing returns. Beyond this point MPL will decrease. However, at the point of diminishing returns the MPL is still above the APL, and APL will continue to increase until MPL equals APL. When MPL is below APL, APL will decrease. Relation between MPL and APL: Graphically, the APL curve can be derived from the total product curve by drawing secants from the origin that intersect (cut) the total product curve. The slope of each secant line equals the average product of labor, Q/L, so the slope of the secant at each intersection marks a point on the average product curve. The slope increases until the line reaches a point of tangency with the total product curve. This point marks the maximum average product of labor. It also marks the point where MPL (which is the slope of the total product curve) equals the APL (the slope of the secant). Beyond this point the slope of the secants becomes progressively smaller as APL declines. The MPL curve intersects the APL curve from above at the maximum point of the APL curve. Thereafter, the MPL curve is below the APL curve. Diminishing marginal returns: The falling MPL is due to the law of diminishing marginal returns. The law states, "as units of one input are added (with all other inputs held constant) a point will be reached where the resulting additions to output will begin to decrease; that is marginal product will decline." The law of diminishing marginal returns applies regardless of whether the production function exhibits increasing, decreasing, or constant returns to scale. The key factor is that the variable input is being changed while all other factors of production are being held constant. Under such circumstances diminishing marginal returns are inevitable at some level of production. Diminishing marginal returns differs from diminishing returns. Diminishing marginal returns means that the marginal product of the variable input is falling. Diminishing returns occur when the marginal product of the variable input is negative: that is, when a unit increase in the variable input causes total product to fall. At the point that diminishing returns begin, the MPL is zero. MPL, MRPL and profit maximization: The general rule is that a firm maximizes profit by producing that quantity of output where marginal revenue equals marginal cost. The profit maximization issue can also be approached from the input side. That is, what is the profit-maximizing usage of the variable input? To maximize profits the firm should increase usage "up to the point where the input's marginal revenue product equals its marginal costs". So, mathematically the profit-maximizing rule is MRPL = MCL. The marginal profit per unit of labor equals the marginal revenue product of labor minus the marginal cost of labor, or MπL = MRPL − MCL. A firm maximizes profits where MπL = 0. MPL, MRPL and profit maximization: The marginal revenue product is the change in total revenue per unit change in the variable input, here assumed to be labor. That is, MRPL = ∆TR/∆L. MRPL is the product of marginal revenue and the marginal product of labor, or MRPL = MR × MPL. Derivation: MR = ∆TR/∆Q and MPL = ∆Q/∆L, so MRPL = MR × MPL = (∆TR/∆Q) × (∆Q/∆L) = ∆TR/∆L. Example: Assume that the production function is Q = 90L − L², the wage rate is $30 per unit of labor, and the output price is $40 per unit. Then MPL = 90 − 2L and MRPL = 40(90 − 2L) = 3600 − 80L. Applying MRPL = MCL (the profit-maximizing rule) gives 3600 − 80L = 30, so 80L = 3570 and L = 44.625, which is the profit-maximizing number of workers. 
Substituting back into the production function, Q = 90L − L² = 90(44.625) − (44.625)² = 4016.25 − 1991.39 = 2024.86. Thus, the profit-maximizing output is 2024.86 units; the units might be given in thousands, so the quantity need not be discrete. MPL, MRPL and profit maximization: The profit is Π = TR − TC, where TC = MCL × L (the marginal cost of labor is the wage paid for each worker, so total cost is obtained by multiplying it by the quantity of labor, not by the quantity of products): TR = 40 × 2024.86 = 80994.4, TC = 30 × 44.625 = 1338.75, and Π = 79655.65. Some might be confused by the fractional value 44.625, as intuition suggests that labor should be discrete. Remember, however, that labor is actually a time measure as well; the fraction can be thought of as a worker not working the entire hour. Marginal productivity ethics: In the aftermath of the marginal revolution in economics, a number of economists including John Bates Clark and Thomas Nixon Carver sought to derive an ethical theory of income distribution based on the idea that workers were morally entitled to receive a wage exactly equal to their marginal product. In the 20th century, marginal productivity ethics found few supporters among economists, being criticised not only by egalitarians but by economists associated with the Chicago school, such as Frank Knight (in The Ethics of Competition), and the Austrian School, such as Leland Yeager. However, marginal productivity ethics were defended by George Stigler. Marginal productivity ethics: A Review of Economics and Economic Methodology argues against paying workers their marginal product, in favor of pay equal to the amount of their labor input; this is known as the labor theory of value. Marx characterizes the value of labor as a relationship between the person and things and how the perceived exchange of products is viewed socially. Researchers Alejandro Valle Baeza and Blanca Gloria Martínez González compared productivity levels across countries measured by marginal productivity and by labor value. They found that across countries marginal productivity is more widely used than labor value, but when they measured productivity based on labor value, "productivity changes not only because of savings in both living labor and means of production, but it is also modified by changes in the productivity of these means of production."
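The worked example above can be reproduced in a few lines; the sketch below simply restates the same production function, wage, and output price and recomputes the profit-maximizing labor, output, and profit (small differences from the article's figures are due to rounding).

```python
# Illustrative sketch of the worked MPL / MRPL example above.
# Production function: Q(L) = 90L - L^2, output price P = 40, wage w = 30.

P = 40.0   # output price per unit
w = 30.0   # wage per unit of labor (marginal cost of labor, MCL)

def output(L):
    """Total product of labor: Q = 90L - L^2."""
    return 90 * L - L ** 2

def mpl(L):
    """Marginal product of labor: dQ/dL = 90 - 2L."""
    return 90 - 2 * L

# Profit-maximizing rule: MRPL = MCL, i.e. P * MPL(L) = w
# 40 * (90 - 2L) = 30  ->  3600 - 80L = 30  ->  L = 3570 / 80
L_star = (P * 90 - w) / (P * 2)          # = 44.625 workers
Q_star = output(L_star)                  # ≈ 2024.86 units
profit = P * Q_star - w * L_star         # ≈ 79655.6

print(f"L* = {L_star}, Q* = {Q_star:.2f}, profit = {profit:.2f}")
```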
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Climbing rope** Climbing rope: A climbing rope is a rope that is used in climbing. It is a critical part of an extensive chain of protective equipment (which also includes climbing harnesses, anchors, belay devices, and carabiners) used by climbers to help prevent potentially fatal fall-related accidents. Climbing rope: Climbing ropes must meet very strict requirements so that they do not break in the event of an accidental fall. However, they also need to be light, flexible for knotting, and resistant to chafing over sharp and rough rocks; all that in all possible weather conditions. Although ropes made of natural fibres such as hemp and flax were used in the early days of alpinism, modern climbing uses kernmantle ropes made of a core of nylon or other synthetic material intertwined in a special way, surrounded by a separate sheath woven over it. The main strength of the rope is in the core, and the sheath of the rope represents only a small fraction of the overall strength of the rope. Climbing rope: Climbing ropes can be classified into three categories according to their elasticity: static, semi-static, and dynamic ropes. Static rope: Static ropes are ropes that are specifically designed for little or no stretch. As a result, they are unable to absorb large shocks. They should therefore not be used to protect a climber against a fall. On the other hand, they are particularly strong and can withstand a large load under static load. They find their application in fixed ropes, zip lines and shuttles. Semi-static rope: Semi-static ropes have limited stretch. They can absorb small shocks and are also statically loaded yet very strong. However, these ropes may not be used to protect climbers from falling. They are used as fixed ropes, for rescue operations, and in caving. Dynamic rope: Dynamic ropes are used in sport climbing. They are sufficiently stretchable to safely absorb a fall. However, they are relatively weak in static loads and therefore should not be used for zip lines and amusement rides. A falling climber quickly develops enormous kinetic energy. This energy is released as soon as the climber stops falling. Some of this energy goes to the belay chain, the rest is split between the belayer and the climber. The rigid parts of the belay chain are strong, but only absorb a limited amount of energy. The human body can also only handle a limited amount of force on the body (the so-called catch or impact value) without incurring a back injury. Dynamic ropes therefore are designed to stretch by a limited amount to catch falls. By stretching, a large part of the energy generated is captured, so that the final impact force for a single rope is less than 12 kN, under testing conditions as defined in the CE standards. UIAA rules mandate that stretching be less than 40%. Dynamic ropes can be single ropes, half ropes, and twin ropes, each with different specifications. Dry rope: Dry ropes are ropes that have been treated to repel water. To achieve a UIAA Water Repellent grade, a rope must not absorb more than 5% of the rope's weight. This is in contrast to non-treated ropes, which can absorb up to 50% of the rope's weight in water. The dry treatment prevents dirt and other particulates from getting into the rope, extending the rope life. However, the dry treatment will wear off with extended use. Dry ropes are more expensive than non-treated ropes, so they are typically saved for ice climbing or wet weather. 
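As a rough illustration of the energy argument above, the following simplified sketch estimates the kinetic energy of a fall and the average force implied by arresting it over a given amount of rope stretch; it ignores rope dynamics, friction, and the belayer, so it is not a rope-design calculation.

```python
# Simplified, illustrative fall-arrest estimate (not a rope-design calculation).
# Assumes all fall energy is absorbed by rope stretch at a constant average force.

g = 9.81  # gravitational acceleration, m/s^2

def fall_energy(mass_kg, fall_m):
    """Energy released by a fall of the given height, in joules."""
    return mass_kg * g * fall_m

def average_arrest_force(mass_kg, fall_m, stretch_m):
    """Average force if the fall energy is absorbed over `stretch_m` of rope stretch."""
    return fall_energy(mass_kg, fall_m) / stretch_m  # newtons

# Example: an 80 kg climber falling 4 m, arrested over 1 m of rope stretch.
energy = fall_energy(80, 4)                    # ≈ 3139 J
force = average_arrest_force(80, 4, 1.0)       # ≈ 3.1 kN average
print(f"energy ≈ {energy:.0f} J, average force ≈ {force/1000:.1f} kN")
```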
Maintenance: Ropes must be inspected regularly, and retired from use if significantly damaged or worn.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Velar ejective fricative** Velar ejective fricative: The velar ejective fricative is a type of consonantal sound, used in some spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨xʼ⟩. Features: Features of the velar ejective fricative: Its manner of articulation is fricative, which means it is produced by constricting air flow through a narrow channel at the place of articulation, causing turbulence. Its place of articulation is velar, which means it is articulated with the back of the tongue (the dorsum) at the soft palate. Its phonation is voiceless, which means it is produced without vibrations of the vocal cords. It is an oral consonant, which means air is allowed to escape through the mouth only. The airstream mechanism is ejective (glottalic egressive), which means the air is forced out by pumping the glottis upward.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BTG2** BTG2: Protein BTG2, also known as BTG family member 2 or NGF-inducible anti-proliferative protein PC3 or NGF-inducible protein TIS21, is a protein that in humans is encoded by the BTG2 gene (B-cell translocation gene 2) and in other mammals by the homologous Btg2 gene. This protein controls cell cycle progression and proneural gene expression by acting as a transcription coregulator that enhances or inhibits the activity of transcription factors. BTG2: The protein BTG2 is the human homolog of the PC3 (pheochromocytoma cell 3) protein in rat and of the Tis21 (tetradecanoyl phorbol acetate-inducible sequence 21) protein in mouse. Tis21 had been originally isolated as a sequence induced by TPA in mouse fibroblasts, whereas PC3 was originally isolated as a sequence induced at the beginning of neuron differentiation; BTG2 was then isolated in human cells as a sequence induced by p53 and DNA damage. The protein encoded by the gene BTG2 (which is the official name assigned to the gene PC3/Tis21/BTG2) is a member of the BTG/Tob family (which comprises six proteins: BTG1, BTG2/PC3/Tis21, BTG3/ANA, BTG4/PC3B, Tob1/Tob and Tob2). This family has structurally related proteins that appear to have antiproliferative properties. In particular, the BTG2 protein has been shown to negatively control a cell cycle checkpoint at the G1 to S phase transition in fibroblasts and neuronal cells by direct inhibition of the activity of the cyclin D1 promoter. Regulator of neuron differentiation: A number of studies in vivo have shown that BTG2 expression is associated with the neurogenic asymmetric division in neural progenitor cells. Tis21-GFP has been used as a neurogenic marker because it is not expressed until neurogenesis begins, is present in almost all early-born neurons, and interacts with neuron-producing intermediate progenitor cells. Moreover, when directly overexpressed in vivo in neural progenitor cells, BTG2 induces their differentiation. In fact, in the neuronal PC12 cell line BTG2 is not able to trigger differentiation by itself, but only to synergize with NGF, while in vivo BTG2 is fully able to induce differentiation of progenitor cells, i.e., during embryonic development in the neuroblasts of the neural tube and in granule precursors of the cerebellum, as well as in adult progenitor cells of the dentate gyrus and of the subventricular zone. Notably, it has recently been shown that BTG2 is essential for the differentiation of new neurons, using a BTG2 knockout mouse. BTG2 is thus a pan-neural gene required for the development of the new neurons generated during adulthood, in the two neurogenic regions of adult brain, i.e., the hippocampus and the subventricular zone. Such a requirement for BTG2 in neuron maturation is consistent with the fact that during brain development BTG2 is expressed in the proliferating neuroblasts of the ventricular zone of the neural tube, and to a lower extent in the differentiating neuroblasts of the mantle zone; postnatally it is expressed in cerebellar precursors mainly in the proliferating regions of the neuroepithelium (i.e., in the external granular layer), and in the hippocampus in proliferating and differentiating progenitor cells. The pro-differentiative action of BTG2 appears to be consequent not only to inhibition of cell cycle progression but also to a BTG2-dependent activation of proneural genes in neural progenitor cells. 
In fact, BTG2 activates proneural genes by associating with the promoter of Id3, a key inhibitor of proneural gene activity, and by negatively regulating its activity. BTG2 is a transcriptional cofactor, given that it has been shown to associate with, and regulate, the promoters not only of Id3 but also of cyclin D1 and RAR-β, as part of transcriptional complexes. It has been shown that when the differentiation of new neurons of the hippocampus - a brain region important for learning and memory - is either accelerated or delayed by means of overexpression or deletion of BTG2, respectively, spatial and contextual memory is heavily altered. This suggests that the time the young neurons spend in different states of neuronal differentiation is critical for their ultimate function in learning and memory, and that BTG2 may play a role in the timing of recruitment of the new neuron into memory circuits. In conclusion, the main action of Btg2 on neural progenitor cells of the dentate gyrus and subventricular zone during adult neurogenesis is the positive control of their terminal differentiation (see reviews). During the early postnatal development of the cerebellum, Btg2 is mainly required to control the migration and differentiation of the precursor cells of cerebellar granule neurons. In contrast, BTG1, the closest homolog to Btg2, appears to negatively regulate the proliferation of adult stem cells in the dentate gyrus and subventricular zone, maintaining the stem cell pool in quiescence and preserving it from depletion. BTG1 is also necessary to limit the proliferative expansion of cerebellar precursor cells, as without BTG1 the adult cerebellum is larger and unable to coordinate motor activity. Medulloblastoma suppressor: BTG2 has been shown to inhibit medulloblastoma, a highly aggressive tumor of the cerebellum, by inhibiting the proliferation and triggering the differentiation of the precursors of cerebellar granule neurons. This demonstration was obtained by overexpressing BTG2 in a mouse model of medulloblastoma presenting activation of the sonic hedgehog pathway (heterozygous for the gene Patched1). More recently, it has been shown that the ablation of BTG2 greatly enhances the medulloblastoma frequency by inhibiting the migration of cerebellar granule neuron precursors. This impairment of migration of the precursors of cerebellar granule neurons forces them to remain at the surface of the cerebellum, where they continue to proliferate, becoming targets of transforming insults. The impairment of migration of the precursors of cerebellar granule neurons (GCPs) depends on the inhibition of expression of the chemokine CXCL3 consequent to ablation of BTG2. In fact, the transcription of CXCL3 is directly regulated by BTG2, and CXCL3 is able to cell-autonomously induce the migration of cerebellar granule precursors. Treatment with CXCL3 prevents the growth of medulloblastoma lesions in a Shh-type mouse model of medulloblastoma. Thus, CXCL3 is a target for medulloblastoma therapy. Interactions: BTG2 has been shown to interact with PRMT1, HOXB9, CNOT8, HDAC1, HDAC4 and HDAC9. It has also been studied with Pax6 and Tbr2 when observing the role of Tis21 in neurogenic divisions.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gadget** Gadget: A gadget is a mechanical device or any ingenious article. Gadgets are sometimes referred to as gizmos. History: The etymology of the word is disputed. The word first appears as a reference to an 18th-century tool in glassmaking that was developed as a spring pontil. As stated in the glass dictionary published by the Corning Museum of Glass, a gadget is a metal rod with a spring clip that grips the foot of a vessel and so avoids the use of a pontil. Gadgets were first used in the late 18th century. According to the Oxford English Dictionary, there is anecdotal evidence for the use of "gadget" as a placeholder name for a technical item whose precise name one can't remember since the 1850s, with Robert Brown's 1886 book Spunyarn and Spindrift, A sailor boy’s log of a voyage out and home in a China tea-clipper containing the earliest known usage in print. A widely circulated story holds that the word gadget was "invented" when Gaget, Gauthier & Cie, the company behind the repoussé construction of the Statue of Liberty (1886), made a small-scale version of the monument and named it after their firm; however, this contradicts the evidence that the word was already used before in nautical circles, and the fact that it did not become popular, at least in the USA, until after World War I. Other sources cite a derivation from the French gâchette, which has been applied to various pieces of a firing mechanism, or the French gagée, a small tool or accessory. The October 1918 issue of Notes and Queries contains a multi-article entry on the word "gadget" (12 S. iv. 187). H. Tapley-Soper of The City Library, Exeter, writes: A discussion arose at the Plymouth meeting of the Devonshire Association in 1916 when it was suggested that this word should be recorded in the list of local verbal provincialisms. Several members dissented from its inclusion on the ground that it is in common use throughout the country; and a naval officer who was present said that it has for years been a popular expression in the service for a tool or implement, the exact name of which is unknown or has for the moment been forgotten. I have also frequently heard it applied by motor-cycle friends to the collection of fitments to be seen on motor cycles. 'His handle-bars are smothered in gadgets' refers to such things as speedometers, mirrors, levers, badges, mascots, &c., attached to the steering handles. The 'jigger' or short-rest used in billiards is also often called a 'gadget'; and the name has been applied by local platelayers to the 'gauge' used to test the accuracy of their work. In fact, to borrow from present-day Army slang, 'gadget' is applied to 'any old thing.' The usage of the term in military parlance extended beyond the navy. In the book "Above the Battle" by Vivian Drake, published in 1918 by D. Appleton & Co., of New York and London, being the memoirs of a pilot in the British Royal Flying Corps, there is the following passage: "Our ennui was occasionally relieved by new gadgets -- "gadget" is the Flying Corps slang for invention! Some gadgets were good, some comic and some extraordinary." By the second half of the twentieth century, the term "gadget" had taken on the connotations of compactness and mobility. 
In the 1965 essay "The Great Gizmo" (a term used interchangeably with "gadget" throughout the essay), the architectural and design critic Reyner Banham defines the item as: A characteristic class of US products––perhaps the most characteristic––is a small self-contained unit of high performance in relation to its size and cost, whose function is to transform some undifferentiated set of circumstances to a condition nearer human desires. The minimum of skills is required in its installation and use, and it is independent of any physical or social infrastructure beyond that by which it may be ordered from catalogue and delivered to its prospective user. A class of servants to human needs, these clip-on devices, these portable gadgets, have coloured American thought and action far more deeply––I suspect––than is commonly understood. Other uses: The first atomic bomb, tested at the Trinity site, was nicknamed "the gadget" by the scientists of the Manhattan Project. Application gadgets: In the software industry, "gadget" refers to computer programs that provide services without needing an independent application to be launched for each one, but instead run in an environment that manages multiple gadgets. There are several implementations based on existing software development techniques, like JavaScript, form input, and various image formats. Proprietary formats include Google Desktop, Google Gadgets, Microsoft Gadgets, the AmigaOS Workbench and the Apple Dashboard widget software. Application gadgets: The earliest documented use of the term gadget in the context of software engineering was in 1985 by the developers of AmigaOS, the operating system of the Amiga computers (intuition.library and also later gadtools.library). It denotes what other technological traditions call a GUI widget—a control element in a graphical user interface. This naming convention has remained in continuing use (as of 2008) since then. Application gadgets: The X11 window system 'Intrinsics' also defines gadgets and their relationship to widgets (buttons, labels etc.). The gadget was a windowless widget which was supposed to improve the performance of the application by reducing the memory load on the X server. A gadget would use the Window id of its parent widget and had no children of its own. It is not known whether other software companies are explicitly drawing on that inspiration when featuring the word in names of their technologies or simply referring to the generic meaning. The word widget is older in this context. In the movie "Back to School" from 1986 by Alan Metter, there is a scene where an economics professor, Dr. Barbay, wants to start, for educational purposes, a fictional company that produces "widgets": "It's a fictional product."
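The windowless-gadget idea can be sketched conceptually as a control that borrows its parent's window instead of owning one; the classes below are invented for illustration and do not use the actual X11 Intrinsics API.

```python
# Conceptual sketch: widgets own a server-side window; gadgets borrow the parent's.
# Names and structure are illustrative, not the actual X11 Intrinsics API.

import itertools

_window_ids = itertools.count(1)  # stand-in for server-side window allocation

class Widget:
    """A control that owns its own window (one server resource per control)."""
    def __init__(self, parent=None):
        self.parent = parent
        self.window_id = next(_window_ids)   # each widget costs a window

class Gadget:
    """A windowless control: drawing and events go through the parent's window."""
    def __init__(self, parent):
        self.parent = parent
        self.window_id = parent.window_id    # no extra window is created

root = Widget()
button_widget = Widget(parent=root)   # allocates a second window
button_gadget = Gadget(parent=root)   # reuses the root's window

print(button_widget.window_id != root.window_id)  # True: separate window
print(button_gadget.window_id == root.window_id)  # True: shares parent's window
```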
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quarkonium** Quarkonium: In particle physics, quarkonium (from quark and -onium, pl. quarkonia) is a flavorless meson whose constituents are a heavy quark and its own antiquark, making it both a neutral particle and its own antiparticle. The name "quarkonium" is analogous to positronium, the bound state of electron and anti-electron. The particles are short-lived due to matter-antimatter annihilation. Light quarks: Light quarks (up, down, and strange) are much less massive than the heavier quarks, and so the physical states actually seen in experiments (η, η′, and π0 mesons) are quantum mechanical mixtures of the light quark states. The much larger mass differences between the charm and bottom quarks and the lighter quarks result in states that are well defined in terms of a quark–antiquark pair of a given flavor. Heavy quarks: Examples of quarkonia are the J/ψ meson (the ground state of charmonium, cc) and the ϒ meson (bottomonium, bb). Because of the high mass of the top quark, toponium (θ meson) does not exist, since the top quark decays through the electroweak interaction before a bound state can form (a rare example of a weak process proceeding more quickly than a strong process). Usually, the word "quarkonium" refers only to charmonium and bottomonium, and not to any of the lighter quark–antiquark states. Heavy quarks: Charmonium In the following table, the same particle can be named with the spectroscopic notation or with its mass. In some cases excitation series are used: ψ′ is the first excitation of ψ (which, for historical reasons, is called the J/ψ particle); ψ″ is a second excitation, and so on. That is, names in the same cell are synonymous. Heavy quarks: Some of the states are predicted, but have not been identified; others are unconfirmed. The quantum numbers of the X(3872) particle have been measured recently by the LHCb experiment at CERN. This measurement shed some light on its identity, excluding the third option among the three envisioned, which are: a charmonium hybrid state, a D⁰D*⁰ molecule, and a candidate for the 1¹D₂ state. In 2005, the BaBar experiment announced the discovery of a new state: Y(4260). CLEO and Belle have since corroborated these observations. At first, Y(4260) was thought to be a charmonium state, but the evidence suggests more exotic explanations, such as a D "molecule", a 4-quark construct, or a hybrid meson. Heavy quarks: Notes: [*] Needs confirmation. [†] Interpretation as a 1−− charmonium state not favored. [‡] Predicted, but not yet identified. Bottomonium In the following table, the same particle can be named with the spectroscopic notation or with its mass. Some of the states are predicted, but have not been identified; others are unconfirmed. Heavy quarks: Notes: [*] Preliminary results. Confirmation needed. The ϒ(1S) state was discovered by the E288 experiment team, headed by Leon Lederman, at Fermilab in 1977, and was the first particle containing a bottom quark to be discovered. On 21 December 2011, the χb2(3P) state was the first particle discovered in the Large Hadron Collider; the discovery article was first posted on arXiv. In April 2012, Tevatron's DØ experiment confirmed the result in a paper published in Physical Review D. Heavy quarks: The J = 1 and J = 2 states were first resolved by the CMS experiment in 2018. Toponium The theta meson has not been observed and is not expected to be, as top quarks decay too fast to form mesons in nature (and be detected). 
QCD and quarkonium: The computation of the properties of mesons in quantum chromodynamics (QCD) is a fully non-perturbative one. As a result, the only general method available is a direct computation using lattice QCD (LQCD) techniques. However, for heavy quarkonium, other techniques are also effective. QCD and quarkonium: The light quarks in a meson move at relativistic speeds, since the mass of the bound state is much larger than the mass of the quark. However, the speed of the charm and the bottom quarks in their respective quarkonia is sufficiently small for relativistic effects in these states to be much reduced. It is estimated that the velocity, v, is roughly 0.3 times the speed of light for charmonia and roughly 0.1 times the speed of light for bottomonia. The computation can then be approximated by an expansion in powers of v/c and v²/c². This technique is called non-relativistic QCD (NRQCD). QCD and quarkonium: NRQCD has also been quantized as a lattice gauge theory, which provides another technique for LQCD calculations to use. Good agreement with the bottomonium masses has been found, and this provides one of the best non-perturbative tests of LQCD. For charmonium masses the agreement is not as good, but the LQCD community is actively working on improving their techniques. Work is also being done on calculations of such properties as widths of quarkonia states and transition rates between the states. QCD and quarkonium: An early, but still effective, technique uses models of the effective potential to calculate masses of quarkonium states. In this technique, one uses the fact that the motion of the quarks that comprise the quarkonium state is non-relativistic to assume that they move in a static potential, much like non-relativistic models of the hydrogen atom. One of the most popular potential models is the so-called Cornell (or funnel) potential: V(r) = −a/r + br, where r is the effective radius of the quarkonium state, and a and b are parameters. QCD and quarkonium: This potential has two parts. The first part, a/r, corresponds to the potential induced by one-gluon exchange between the quark and its anti-quark, and is known as the Coulombic part of the potential, since its 1/r form is identical to the well-known Coulombic potential induced by the electromagnetic force. QCD and quarkonium: The second part, br, is known as the confinement part of the potential, and parameterizes the poorly understood non-perturbative effects of QCD. Generally, when using this approach, a convenient form for the wave function of the quarks is taken, and then a and b are determined by fitting the results of the calculations to the masses of well-measured quarkonium states. Relativistic and other effects can be incorporated into this approach by adding extra terms to the potential, much as is done for the model hydrogen atom in non-relativistic quantum mechanics. QCD and quarkonium: This form was derived from QCD up to terms of order Λ_QCD³ r² by Sumino (2003). It is popular because it allows for accurate predictions of quarkonium parameters without a lengthy lattice computation, and provides a separation between the short-distance Coulombic effects and the long-distance confinement effects that can be useful in understanding the quark/anti-quark force generated by QCD. Quarkonia have been suggested as a diagnostic tool of the formation of the quark–gluon plasma: both disappearance and enhancement of their formation, depending on the yield of heavy quarks in the plasma, can occur.
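As a rough numerical illustration of the Cornell potential discussed above, the sketch below simply tabulates the Coulombic and confining contributions over a range of separations. It works in natural units (r in GeV⁻¹, V in GeV); the parameter values a and b are ballpark placeholders chosen for illustration, not the fitted values referred to in the text.

```c
/* Minimal sketch: tabulate the Cornell potential V(r) = -a/r + b*r
 * in natural units (r in 1/GeV, V in GeV).  The values of a and b below
 * are illustrative placeholders, not fitted quarkonium parameters. */
#include <stdio.h>

int main(void) {
    const double a = 0.5;    /* Coulombic coefficient, dimensionless (assumed) */
    const double b = 0.18;   /* confinement slope in GeV^2 (assumed)           */

    printf("%10s %12s %12s %12s\n", "r [1/GeV]", "-a/r [GeV]", "b*r [GeV]", "V(r) [GeV]");
    for (double r = 0.5; r <= 8.0; r += 0.5) {
        double coulomb = -a / r;   /* short-distance one-gluon-exchange part */
        double confine =  b * r;   /* long-distance linear confinement part  */
        printf("%10.2f %12.3f %12.3f %12.3f\n", r, coulomb, confine, coulomb + confine);
    }
    return 0;
}
```

At small r the −a/r term dominates (the Coulombic regime), while at large r the br term grows without bound, which is the confinement behaviour described above.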
**Think aloud protocol** Think aloud protocol: A think-aloud (or thinking aloud) protocol is a method used to gather data in usability testing in product design and development, in psychology and a range of social sciences (e.g., reading, writing, translation research, decision making, and process tracing). Description: Think-aloud protocols involve participants thinking aloud as they are performing a set of specified tasks. Participants are asked to say whatever comes into their mind as they complete the task. This might include what they are looking at, thinking, doing, and feeling. This gives observers insight into the participant's cognitive processes (rather than only their final product), to make thought processes as explicit as possible during task performance. In a formal research protocol, all verbalizations are transcribed and then analyzed. In a usability testing context, observers are asked to take notes of what participants say and do, without attempting to interpret their actions and words, and especially noting places where they encounter difficulty. Test sessions may be completed on participants' own devices or in a more controlled setting. Sessions are often audio- and video-recorded so that developers can go back and refer to what participants did and how they reacted. History: The think-aloud method was introduced in the usability field by Clayton Lewis while he was at IBM, and is explained in Task-Centered User Interface Design: A Practical Introduction by Lewis and John Rieman. The method was developed based on the techniques of protocol analysis by K. Ericsson and H. Simon. However, there are some significant differences between the way Ericsson and Simon propose that protocols be conducted and how they are actually conducted by usability practitioners, as noted by Ted Boren and Judith Ramey. These differences arise from the specific needs and context of usability testing; practitioners should be aware of these differences and adjust their method to meet their needs while still collecting valid data. For example, they may need to prompt for additional information more often than Ericsson and Simon would allow, but should take care not to influence what participants say and do. Process: A typical procedure of think-aloud protocols would include: Design the study and write the guide: Determine the number and type of participants for the study; generally, five participants are sufficient. The next step is to write a guide that asks the participants to complete the intended tasks, with clear step-by-step instructions. In the script, there should be reminders to participants to say their thoughts out loud while performing tasks. Process: Recruit participants: The team should set up a screener to determine participants' eligibility. After contacting the person of interest and setting up meeting details such as time and location, the team could also provide additional information to help the participant better prepare for the activity. Conduct think-aloud protocol: After stating the purpose and asking for consent, the team should proceed by giving instructions to the participant. Ask open-ended questions and follow-up questions. The team should avoid asking leading questions or giving clues. Process: Analyze the findings and summarize insights: The team should use notes taken during the sessions to generate insights and to find common patterns.
Based on the findings, the design team could then decide which directions to take action on. As Kuusela and Paul state, the think-aloud protocol can be distinguished into two different types of experimental procedures. The first is the concurrent think-aloud protocol, collected during the task. The second is the retrospective think-aloud protocol, gathered after the task as the participant walks back through the steps they took previously, often prompted by a video recording of themselves. There are benefits and drawbacks to each approach, but in general a concurrent protocol may be more complete, while a retrospective protocol has less chance to interfere with task performance. Nonetheless, some concurrent protocols have not produced such interference effects, suggesting that it may be possible to optimize both completeness and authenticity of verbal reports. Benefits: The think-aloud method allows researchers to discover what users genuinely think of a design. Related method: A related but slightly different data-gathering method is the talk-aloud protocol. This involves participants only describing their actions but not other thoughts. This method is thought to be more objective in that participants merely report how they go about completing a task rather than interpreting or justifying their actions (see the standard works by Ericsson & Simon).
**GameDay (software)** GameDay (software): GameDay is a software program that allows sports fans to track games with live stats. For Major League Baseball, it was introduced in 2002, a year after all team sites were migrated to MLB.com. Today the software provides improved features such as camera angles and pitch speed, as well as pitch angle and break. It also contains a news ticker. On Yahoo.com, this is known as GameChannel.
**Uzbek Soviet Encyclopedia** Uzbek Soviet Encyclopedia: The Uzbek Soviet Encyclopedia (Uzbek: Oʻzbek sovet ensiklopediyasi, OʻzSE in Latin script, Ўзбек совет энциклопедияси, ЎзСЭ in Cyrillic script; Russian: Узбекская советская энциклопедия, УзСЭ) is the largest and most comprehensive encyclopedia in the Uzbek language, comprising 14 volumes. It is the first general-knowledge encyclopedia in Uzbek. The Uzbek Soviet Encyclopedia was printed in the Cyrillic script. Although the encyclopedia contained some articles translated from the Russian-language Great Soviet Encyclopedia, its coverage of topics skewed towards Uzbek interests. History: The Uzbek Soviet Encyclopedia was published in Tashkent from 1971 to 1980 by the Uzbek Soviet Encyclopedia Publishing House. Doctor Ibrohim Moʻminov, a member of the Academy of Sciences of Uzbekistan, was the chief editor of volumes one through nine. Komiljon Zufarov was the chief editor of volumes ten through fourteen. The Uzbek Soviet Encyclopedia was not available in Russian. Content: The Uzbek Soviet Encyclopedia is a comprehensive source of knowledge in social and economic studies and in the applied sciences. A major value of the encyclopedia is its comprehensive information about the USSR in general and the Uzbek SSR in particular. Every aspect of life in Soviet Uzbekistan is systematically presented, including history, economy, science, art, and culture. There are comprehensive biographies of prominent Uzbek cultural and scientific figures who are not as well known outside of Uzbekistan. Content: The Uzbek Soviet Encyclopedia contains extensive writings on Sufism, and generally positive coverage of Uzbek Sufi philosophers such as Khoja Akhmet Yassawi. The encyclopedia initially criticized anti-Soviet writers such as Abdulrauf Fitrat and Choʻlpon as bourgeois nationalists, but these figures were rehabilitated during glasnost.
**Tetraphenyllead** Tetraphenyllead: Tetraphenyllead is an organolead compound with the chemical formula (C6H5)4Pb or PbPh4. It is a white solid. Preparation: Tetraphenyllead can be produced by the reaction of phenylmagnesium bromide and lead chloride in diethyl ether. This is the method P. Pfeiffer and P. Truskier used to first prepare tetraphenyllead, in 1904: 4 (C6H5)MgBr + 2 PbCl2 → Pb(C6H5)4 + Pb + 4 MgBrCl (in Et2O) Reactions: An ethanolic solution of hydrogen chloride reacts with tetraphenyllead, replacing some of the phenyl groups with chlorine atoms: Pb(C6H5)4 + HCl → Pb(C6H5)3Cl + C6H6 (in ethanol) Pb(C6H5)3Cl + HCl → Pb(C6H5)2Cl2 + C6H6 (in ethanol) Just like tetrabutyllead, tetraphenyllead reacts explosively with sulfur at 150 °C, producing diphenyl sulfide and lead sulfide: Pb(C6H5)4 + 3 S → PbS + 2 S(C6H5)2 Tetraphenyllead reacts with iodine in chloroform to produce triphenyllead iodide.
**FAM** FAM: Fam or FAM is a colloquial term for 'family and friend' or an acronym of 'friend and mate' especially for intimate friends. It may also refer to: People: Anthony Famiglietti (born 1978), American athlete Fam Ekman (born 1946), Swedish-Norwegian children's writer and illustrator Fam Irvoll (born 1980), Norwegian fashion designer Konstantin Fam (born 1972), Russian filmmaker Sport: Football Association of Malawi Football Association of Malaysia Football Association of Maldives Fútbol Americano de México Media: Filipinas, Ahora Mismo, a Philippine radio show Fam (TV series), an American television sitcom that debuted in 2019 Other uses: Fam Islands in Indonesia Fam language Acronyms: Fat acceptance movement, a social movement Federal Air Marshal, in the United States Federation of Associations of Maharashtra, an Indian trade association Fertility awareness method, a set of medical practices File Alteration Monitor, a UNIX system software Filipino American Museum, in New York City Fitchburg Art Museum, in Massachusetts, United States Fluorescein amidite, a chemical Foreign Affairs Manual, published by the United States Department of State Free Aceh Movement in Indonesia Free and Accepted Masons, a fraternal organisation Fuzzy associative matrix, a fuzzy logic term Mexican Air Force (Spanish: Fuerza Aérea Mexicana)
**Signedness** Signedness: In computing, signedness is a property of data types representing numbers in computer programs. A numeric variable is signed if it can represent both positive and negative numbers, and unsigned if it can only represent non-negative numbers (zero or positive numbers). As signed numbers can represent negative numbers, they lose a range of positive numbers that can only be represented with unsigned numbers of the same size (in bits), because roughly half the possible values are non-positive values, whereas the respective unsigned type can dedicate all the possible values to the positive number range. For example, a two's complement signed 16-bit integer can hold the values −32768 to 32767 inclusively, while an unsigned 16-bit integer can hold the values 0 to 65535. For this sign representation method, the leftmost bit (most significant bit) denotes whether the value is negative (0 for positive or zero, 1 for negative). In programming languages: For most architectures, there is no signed–unsigned type distinction in the machine language. Nevertheless, arithmetic instructions usually set different CPU flags, such as the carry flag for unsigned arithmetic and the overflow flag for signed. Those values can be taken into account by subsequent branch or arithmetic commands. In programming languages: The C programming language, along with its derivatives, implements signedness for all integer data types, as well as for the character type. For integers, the unsigned modifier defines the type to be unsigned. The default integer signedness is signed, but can be set explicitly with the signed modifier. By contrast, the C standard declares signed char, unsigned char, and char to be three distinct types, but specifies that all three must have the same size and alignment. Further, char must have the same numeric range as either signed char or unsigned char, but the choice of which depends on the platform. Integer literals can be made unsigned with the U suffix. For example, 0xFFFFFFFF gives −1, but 0xFFFFFFFFU gives 4,294,967,295 for 32-bit code. In programming languages: Compilers often issue a warning when comparisons are made between signed and unsigned numbers or when one is cast to the other. These are potentially dangerous operations, as the ranges of the signed and unsigned types are different.
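The following minimal C program illustrates the 16-bit ranges quoted above and the mixed signed/unsigned comparison pitfall that compilers warn about; the exact converted value printed depends on the width of unsigned int on the platform.

```c
/* Demonstrates signed vs. unsigned ranges and a mixed comparison pitfall.
 * C99 or later; the converted value printed depends on the platform's
 * unsigned int width. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    printf("signed 16-bit range:   %d to %d\n", INT16_MIN, INT16_MAX);
    printf("unsigned 16-bit range: 0 to %u\n", (unsigned)UINT16_MAX);

    int          a = -1;
    unsigned int b = 1u;
    /* In the comparison below, a is converted to unsigned int and becomes a
     * very large value, so the "obvious" result -1 < 1 is not what happens. */
    if (a < b)
        printf("-1 < 1u evaluated as true\n");
    else
        printf("-1 < 1u evaluated as false (a converted to %u)\n", (unsigned)a);
    return 0;
}
```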
**IBM 421** IBM 421: The IBM 421 accounting machine saw use in the 1960s. The largely mechanical IBM 421 read 80-column punch cards and could print upper-case letters of the alphabet, the decimal digits 0 to 9, a period (.), and plus and minus signs. IBM 421: The operation of the 421 was directed by the use of a removable control panel and a carriage tape. By means of the control panel, any column of the card could be wired to any print column, by means of a wire link (the end terminals of which were manually inserted into slots in the control panel). After manual wiring, the control panel was inserted in the side of the machine, and a hand-operated lever moved the control panel so that the wire links made contact with corresponding terminals in the machine. IBM 421: The 421 had 64 positions of memory, typically used to store data from a leading punch card. There were also three external program switches (Minor, Major and Super Major) that were used to alter the function of the plug board Program Selectors. IBM 421: IBM 421 uses included: tabulating (listing) punch cards, calculating totals, and calculating grand totals. 421s sold in the UK could total pre-1970 currency with twenty shillings to the pound and 12 pennies to the shilling. A UK 1964-onwards example of commercial use was 421s in multiple South Eastern Electric Board locations calculating and printing the quarterly electricity bill (in pounds, shillings and pence) for each of its thousands of customers after the 421 had read three punched cards for each customer: a name and address card, the "old" meter reading card, and the "new" meter reading card. A 421 could be cable-attached to a "Summary" or "Gang Punch" (IBM 514?) to punch cards with summary totals calculated by the 421. IBM 421: The printing speed was about 100 lines per minute. When calculating totals, printing was suppressed and the machine read cards faster, at about 150 cards per minute. IBM 421: There was one type bar for each print column. Each type bar had every character in the available set. When printing a column of a card, the type bar was raised until the desired character was in position at the current line, and then the type bar was hit by a "hammer", thus impressing the character onto the paper. The entire line was printed simultaneously (that is, all hammers struck simultaneously). IBM 421: An interesting use of the machine was the evaluation of polynomials.
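The article does not say how the 421 was set up to evaluate polynomials. One standard technique on purely additive tabulating equipment, sketched below as a hypothetical illustration rather than documented 421 practice, is the method of finite differences, which reduces polynomial evaluation to repeated addition.

```latex
% Hypothetical illustration: evaluating p(x) = x^2 by finite differences.
% The second difference of a quadratic is constant (\Delta^2 p = 2), so each
% new value is obtained purely by addition: first add \Delta^2 p to the
% running first difference, then add that first difference to p.
\begin{array}{c|ccccc}
x          & 0 & 1 & 2 & 3 & 4  \\ \hline
p(x)=x^2   & 0 & 1 & 4 & 9 & 16 \\
\Delta p   &   & 1 & 3 & 5 & 7  \\
\Delta^2 p &   &   & 2 & 2 & 2
\end{array}
```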
**Arp 220** Arp 220: Arp 220 is the result of a collision between two galaxies which are now in the process of merging. It is the 220th object in Halton Arp's Atlas of Peculiar Galaxies. Features: Arp 220 is the closest Ultraluminous Infrared Galaxy (ULIRG) to Earth, at 250 million light years away. Its energy output was discovered by IRAS to be dominated by the far-infrared part of the spectrum. It is often regarded as the prototypical ULIRG and has been the subject of much study as a result. Features: Most of its energy output is thought to be the result of a massive burst of star formation, or starburst, probably triggered by the merging of two smaller galaxies. HST observations of Arp 220 in 2002 and 1997, taken in visible light with the ACS, and in infrared light with NICMOS, revealed more than 200 huge star clusters in the central part of the galaxy. Features: The most massive of these clusters contains enough material to equal about 10 million suns. Features: X-ray observations by the Chandra and XMM-Newton satellites have shown that Arp 220 probably includes an active galactic nucleus (AGN) at its core, which raises interesting questions about the link between galaxy mergers and AGN, since it is believed that galactic mergers often trigger starbursts, and may also give rise to the supermassive black holes that appear to power AGN. Features: Luminous far-infrared objects like Arp 220 have been found in surprisingly large numbers by sky surveys at submillimetre wavelengths using instruments such as the Submillimetre Common-User Bolometer Array (SCUBA) at the James Clerk Maxwell Telescope (JCMT). Arp 220 and other relatively local ULIRGs are being studied as equivalents of this kind of object. Astronomers from the Arecibo Observatory have detected organic molecules in the galaxy. Arp 220 contains at least two bright maser sources: an OH megamaser and a water maser. In October 2011, astronomers spotted a record-breaking seven supernovae, all found at the same time in Arp 220. The merging of the two galaxies started around 700 million years ago.
**Impedance (accelerator physics)** Impedance (accelerator physics): Impedance in accelerator physics is a quantity that characterizes the self-interaction of a charged particle beam, mediated by the beam environment, such as the vacuum chamber, RF cavities, and other elements encountered along the accelerator or storage ring. Definition in terms of wakefunction: The impedance is defined as the Fourier transform of the wakefunction: Z0∥(ω) = ∫−∞∞ (dz/c) e^(−iωz/c) W0′(z). From this expression and the fact that the wake function is real, one can derive the property Z∥*(ω) = Z∥(−ω). Important sources of impedance: The impedance is defined at all positions along the beam trajectory. The beam travels through a vacuum chamber. Substantial impedance is generated in transitions, where the shape of the beam pipe changes. The RF cavities are another important source. Impedance models: In the absence of detailed geometric modeling, one can use various models to represent different aspects of the accelerator beam pipe structure. One such model is the broadband resonator. For the longitudinal case, one has Z∥(ω) = Rs [1 − iQ(ωr/ω − ω/ωr)] / [1 + Q²(ωr/ω − ω/ωr)²], with Rs the shunt impedance, Q the quality factor, and ωr the resonant frequency. Resistive Wall Given a circular beam pipe of radius b and conductivity σ, the impedance is given by Z(ω) = [(1 − i)/(cb)] √(ω/(2πσ)). The corresponding longitudinal wakefield is approximately given by W(s) = [q/(2πb)] √(c/σ) / s^(3/2). The transverse wake-function from the resistive wall is given by W(s) ≈ 1/s^(1/2). Effect of Impedance on beam: The impedance acts back on the beam and can cause a variety of effects, often considered deleterious for accelerator functioning. In general, impedance effects are classified under the category of "collective effects", due to the fact that the whole beam must be considered together, and not just a single particle. The whole beam may, however, cause particular changes in the dynamics of individual particles, such as tune shifts and coupling. Whole-beam changes include emittance growth and instabilities that can lead to beam loss.
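As a small numerical illustration of the broadband resonator model above, the following sketch evaluates Z∥(ω) on a frequency grid. The parameter values (Rs, Q, ωr) are arbitrary assumptions chosen only to show the shape of the real and imaginary parts, with the peak of Re Z at ω = ωr.

```c
/* Minimal sketch: broadband-resonator longitudinal impedance
 *   Z(w) = Rs * (1 - i*Q*x) / (1 + Q^2*x^2),  with  x = wr/w - w/wr.
 * Parameter values below are illustrative assumptions, not from the article. */
#include <stdio.h>
#include <complex.h>

static double complex resonator_impedance(double w, double Rs, double Q, double wr) {
    double x = wr / w - w / wr;                 /* detuning term */
    return Rs * (1.0 - I * Q * x) / (1.0 + Q * Q * x * x);
}

int main(void) {
    const double Rs = 1.0e3;   /* shunt impedance, ohms (assumed)             */
    const double Q  = 1.0;     /* quality factor typical of a broadband model */
    const double wr = 2.0e9;   /* resonant angular frequency, rad/s (assumed) */

    for (double w = 2.0e8; w <= 4.0e9; w += 2.0e8) {
        double complex Z = resonator_impedance(w, Rs, Q, wr);
        printf("w = %.2e rad/s   Re Z = %8.2f   Im Z = %8.2f\n", w, creal(Z), cimag(Z));
    }
    return 0;
}
```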
**Copine** Copine: In molecular biology, copines are a group of proteins whose human members include CPNE1, CPNE4, CPNE6, and CPNE8. These are highly conserved, calcium-dependent membrane proteins found in a variety of eukaryotes. The domain structure of these 55 kDa proteins suggests that they may have a role in membrane trafficking in some prokaryotes as well as eukaryotes. Copines contain two C2 domains, which play a role in signal transduction by binding to calcium, phospholipids, or polyphosphates. Both domains are located at the N-terminal portion of the protein, which is not the case for most other double C2 domain proteins, and their role is most similar to that carried out by proteins that exhibit a single C2 domain. The core domain, located at the C-terminal part of the copine, has a unique and conserved primary sequence. The function of the core domain is still uncertain; however, researchers believe it has a similar function to the "A domain" in integrins. This similarity in function involves serving as a binding site for target proteins, and is supported by evidence that the copine core domain exhibits secondary and tertiary structures comparable to the integrin A domain.
**SrcML** SrcML: srcML (source M L) is a document-oriented XML representation of source code. It was created in a collaborative effort between Michael L. Collard and Jonathan I. Maletic. The abbreviation, srcML, is short for Source Markup Language. srcML wraps source code (text) with information from the Abstract Syntax Tree or AST (tags) into a single XML document. All original text is preserved so that the original source code document can be recreated from the srcML markup. The only exception is the possibility of newline normalization. The purpose of srcML is to provide full access to the source code at the lexical, documentary, structural, and syntactic levels. The format also provides easy support for fact extraction and transformation. It is supported by the srcML toolkit maintained on the srcML website and has been shown to perform scalable, lightweight fact extraction and transformation. srcML toolkit: The srcML toolkit consists of the command-line program called srcml, which translates from source code to srcML when provided a code file on the command line, or translates from srcML to source code when a srcML archive is provided on the command line. The program also supports direct queries and transformations of srcML archives using tools like XPath, XSLT, and RELAX NG. The srcML toolkit is actively maintained and currently supports C, C++, C#, and Java. srcML format: The srcML format consists of all text from the original source code file plus XML tags. Specifically, the text is wrapped with srcML elements that indicate the syntactic structure of the code. In short, this explicitly identifies all syntactic structures in the code. The tags used in srcML are listed out below along with the category they fall within. srcML uses XML namespaces. Below is a list of the prefixes used to denote each namespace, and the namespaces themselves. Note: for a srcML archive, the entire project will be contained within a single root unit element, and each individual file will be contained as a unit element within the root unit element. Single file conversion: The following shows how srcml can be used on single files. The following example converts the C++ file main.cpp to the srcML file main.cpp.xml: srcml main.cpp -o main.cpp.xml The following command will extract the source code from the file main.cpp.xml and place it into the C++ file main.cpp: srcml main.cpp.xml -o main.cpp Project conversion: The following shows how srcml can be used with an entire project: The following example converts the project 'project' to the srcML file project.xml: srcml project -o project.xml The following command will extract the source code files from the file project.xml and place them into the directory project: srcml --to-dir project project.xml Program transformation with srcML: srcML allows the use of most if not all current XML APIs and tools to write transformations. It also allows for the use of XSLT directly, using the argument --xslt={name}.xsl on the srcml2src command. Using srcML's markup with XSLT allows the user to apply program transformations on an XML-like structure (srcML) to obtain transformed XML, which can then be written back to its source code representation using the srcml2src tool.
The application of srcML to program transformation is explained, in detail, by Collard et al. The following command will run the XSLT program program.xsl on the srcML archive project.xml: srcml --xslt program.xsl project.xml Fact extraction with srcML: In its simplest form, fact extraction using srcML leverages XPath in order to address parts of the srcML document and pull information about various entities or characteristics of the source code. Of course, it is not limited to this. Any standard XML API may be used. The application of srcML to fact extraction is explained, in detail, by Kagdi et al. Tags defined by srcML include cpp:directive, cpp:file, cpp:include, cpp:define, cpp:undef, cpp:line, cpp:if, cpp:ifdef, cpp:ifndef, cpp:else, cpp:elif, cpp:endif, cpp:then, cpp:pragma, and cpp:error, as well as literal, operator, and modifier. (An example of creating a srcML archive from an entire software project is shown above under Project conversion.) Fact extraction with srcML: The following command runs an XPath expression on a srcML archive project.xml: srcml --xpath "xpath" project.xml Work is being done on providing convenient extension functions. Source Code Difference Analysis with srcML: srcML brings a lot of advantages to doing difference analysis on source code. One of these advantages is the ability to query for differences between specific sections of a codebase as well as across versions of the same codebase. The application of srcML for difference analysis is explained, in detail, by Maletic et al. Examples: As an example of how srcML is used, here is an XPath expression that could be used to find all classes in a source document: //src:class Another example might be finding all comments within functions: /src:function//src:comment Due to the fact that srcML is based on XML, all XML tools can be used with srcML, which provides rich functionality.
**De novo synthesis** De novo synthesis: In chemistry, de novo synthesis (from Latin 'from the new') refers to the synthesis of complex molecules from simple molecules such as sugars or amino acids, as opposed to recycling after partial degradation. For example, nucleotides are not needed in the diet as they can be constructed from small precursor molecules such as formate and aspartate. Methionine, on the other hand, is needed in the diet because while it can be degraded to and then regenerated from homocysteine, it cannot be synthesized de novo. Nucleotide: De novo pathways of nucleotides do not use free bases: adenine (abbreviated as A), guanine (G), cytosine (C), thymine (T), or uracil (U). The purine ring is built up one atom or a few atoms at a time and attached to ribose throughout the process. Pyrimidine ring is synthesized as orotate and attached to ribose phosphate and later converted to common pyrimidine nucleotides. Cholesterol: Cholesterol is an essential structural component of animal cell membranes. Cholesterol also serves as a precursor for the biosynthesis of steroid hormones, bile acid and vitamin D. In mammals cholesterol is either absorbed from dietary sources or is synthesized de novo. Up to 70-80% of de novo cholesterol synthesis occurs in the liver, and about 10% of de novo cholesterol synthesis occurs in the small intestine. Cancer cells require cholesterol for cell membranes, so cancer cells contain many enzymes for de novo cholesterol synthesis from acetyl-CoA. Fatty-acid (de novo lipogenesis): De novo lipogenesis (DNL) is the process by which carbohydrates (primarily, especially after a high-carbohydrate meal) from the circulation are converted into fatty acids, which can be further converted into triglycerides or other lipids. Acetate and some amino acids (notably leucine and isoleucine) can also be carbon sources for DNL.Normally, de novo lipogenesis occurs primarily in adipose tissue. But in conditions of obesity, insulin resistance, or type 2 diabetes de novo lipogenesis is reduced in adipose tissue (where carbohydrate-responsive element-binding protein (ChREBP) is the major transcription factor) and is increased in the liver (where sterol regulatory element-binding protein 1 (SREBP-1c) is the major transcription factor). ChREBP is normally activated in the liver by glucose (independent of insulin). Obesity and high-fat diets cause levels of carbohydrate-responsive element-binding protein in adipose tissue to be reduced. By contrast, high blood levels of insulin, due to a high carbohydrate meal or insulin resistance, strongly induces SREBP-1c expression in the liver. The reduction of adipose tissue de novo lipogenesis, and the increase in liver de novo lipogenesis due to obesity and insulin resistance leads to fatty liver disease. Fatty-acid (de novo lipogenesis): Fructose consumption (in contrast to glucose) activates both SREBP-1c and ChREBP in an insulin independent manner. Although glucose can be converted into glycogen in the liver, fructose invariably increases de novo lipogenesis in the liver, elevating plasma triglycerides, more than glucose. Moreover, when equal amounts of glucose or fructose sweetened beverages are consumed, the fructose beverage not only causes a greater increase in plasma triglycerides, but causes a greater increase in abdominal fat.DNL is elevated in non-alcoholic fatty liver disease (NAFLD), and is a hallmark of the disease. 
Compared with healthy controls, patients with NAFLD have an average 3.5-fold increase in DNL. De novo fatty-acid synthesis is regulated by two important enzymes, namely acetyl-CoA carboxylase and fatty acid synthase. The enzyme acetyl-CoA carboxylase is responsible for introducing a carboxyl group to acetyl-CoA, yielding malonyl-CoA. Then, the enzyme fatty-acid synthase is responsible for turning malonyl-CoA into a fatty-acid chain. De novo fatty-acid synthesis is largely inactive in human cells, since the diet is the major source of fatty acids. In mice, de novo FA synthesis increases in WAT with exposure to cold temperatures, which might be important for maintenance of circulating TAG levels in the bloodstream, and to supply FA for thermogenesis during prolonged cold exposures. DNA: De novo DNA synthesis refers to the synthetic creation of DNA rather than assembly or modification of natural precursor template DNA sequences. Initial oligonucleotide synthesis is followed by artificial gene synthesis, and finally by a process of cloning, error correction, and verification, which often involves cloning the genes into plasmids in Escherichia coli or yeast. Primase is an RNA polymerase, and it can add a primer to an existing strand awaiting replication. DNA polymerase cannot add primers and therefore needs primase to add the primer de novo.
**Accommodative convergence** Accommodative convergence: Accommodative convergence is that portion of the range of inward rotation of both eyes (i.e. convergence) that occurs in response to an increase in optical power for focusing by the crystalline lens (i.e. accommodation). When the human eye engages the accommodation system to focus on a near object, a signal is automatically sent to the extraocular muscles that are responsible for turning the eyes inward. This is helpful for maintaining single, clear, and comfortable vision during reading or similar near tasks. However, errors in this relationship can cause problems, such as hyperopic individuals having a tendency toward crossed eyes because of the overexertion of their accommodation system. Accommodative convergence: Clinically, accommodative convergence is measured as a ratio of convergence, measured in prism diopters, to accommodation, measured in diopters of near demand. The patient is instructed to make a near target perfectly clear, and their phoria is measured as the focusing demand on the eye is changed with lenses. Accommodative convergence: To determine the stimulus AC/A, the denominator refers to the value of the stimulus, whereas to determine the response AC/A, the actual accommodation elicited is the denominator. Determination of the response AC/A shows an increase in AC/A mainly after 40 years of age, whereas assessment of the stimulus AC/A does not show a change in AC/A with increasing age. Whether there is a significant increase in the response AC/A before age 40 is unclear. Research on convergence accommodation (CA) shows a decrease in CA/C, whether measured by response or stimulus methods, with increasing age.
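As a hypothetical worked example of the gradient measurement described above (all numbers invented for illustration): suppose the near phoria is 6Δ of exophoria through the distance correction and 10Δ of exophoria when +1.00 D lenses are added. The 1.00 D reduction in accommodative demand reduced accommodative convergence by 4Δ, so

```latex
\text{AC/A} \;=\; \frac{\Delta\,\text{phoria}}{\Delta\,\text{accommodation}}
          \;=\; \frac{(-6\,\Delta) - (-10\,\Delta)}{1.00\ \text{D}}
          \;=\; 4\ \Delta/\text{D}
```

where exophoria is written as a negative phoria value.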
**Between a Rock and a Hard Place** Between a Rock and a Hard Place: Between a rock and a hard place, or simply a rock and a hard place, is an expression meaning having to choose between two difficult options. It may also refer to: Literature: A Rock and a Hard Place, a 1988 Vietnam War novel by David Sherman A Rock and a Hard Place: One Boy's Triumphant Story, a 1993 memoir considered to be a literary hoax, supposedly by Anthony Godby Johnson Between a Rock and a Hard Place (book), a 2004 autobiography by Aron Ralston Music: Between a Rock and a Hard Place (Australian Crawl album), 1985 Between a Rock and a Hard Place (Artifacts album), 1994 A Rock and a Hard Place, a song by Sisters of Mercy from their 1985 album First and Last and Always "Rock and a Hard Place", a 1989 single by the Rolling Stones "(Between A) Rock and a Hard Place", a song by Cutting Crew from their 1989 album The Scattering "Rock and a Hard Place" (Bailey Zimmerman song), a 2022 song by Bailey Zimmerman Television: "A Rock and a Hard Place", a 1997 episode of Hercules: The Legendary Journeys "Between a Rock and a Hard Place", a 2009 episode of Make It or Break It "Rock and a Hard Place", a 2013 episode of Supernatural "Rock and Hard Place", a 2022 episode of Better Call Saul "A Rock and a Hard Place", a 1980 episode of the Incredible Hulk live-action series with Bill Bixby and Lou Ferrigno.
**1001 Crystal Mazes Collection** 1001 Crystal Mazes Collection: 1001 Crystal Mazes Collection is a logic puzzle game developed by Teyon for Nintendo DSiWare. It was available in the Nintendo DSi Shop for 500 Nintendo DSi Points. Gameplay: 1001 Crystal Mazes Collection is a jewel logic game in which a player pushes colorful crystals around a maze to their target destinations. The game gets more challenging with each level. Gameplay: The player can choose from 1001 mazes. There is a coin with an image of a girl or a boy that shows a player's current position and can be moved using either the directional pad or the stylus (by touching arrows visible on the touchscreen). The player can push a crystal in front of them when they move. Only one element can be moved at a time, and the positions of the walls cannot be changed. When a crystal is pushed into a corner, or two of them are aligned next to each other along a wall, they can no longer be moved, which causes the player to lose. The number of crystals visible on the screen is different in each maze. The simplest ones contain 3-4 elements, while in the most difficult mazes players have to move over 30 crystals. Reception: 1001 Crystal Mazes Collection received an overall score of 7/10 from IGN and four out of ten stars from Nintendo Life.
**Equational prover** Equational prover: EQP, an abbreviation for equational prover, is an automated theorem proving program for equational logic, developed by the Mathematics and Computer Science Division of the Argonne National Laboratory. It was one of the provers used for solving a longstanding problem posed by Herbert Robbins, namely, whether all Robbins algebras are Boolean algebras.
**Panel PC** Panel PC: A panel PC, also known as a panel-mounted computer, touch panel PC or industrial panel PC, is a combined industrial PC and computer monitor, so that the entire computer can be mounted in any manner available for mounting a display alone. It eliminates the need for a separate space for the computer. A panel PC is typically ruggedized for use in industrial or high-traffic settings, and industrial panel PCs are built for higher-dependability applications. Mounting: Panel PCs can have a range of mounting options: panel mounting, VESA mount (Flat Display Mounting Interface), rackmount, or DIN rail mount. Panel PCs often come with mounting brackets or flanges for direct installation onto a panel or a cutout in an enclosure such as an electrical enclosure. The enclosure can be an electrical cabinet, control panel, or machinery cabinet. Cooling is a consideration when mounting a panel PC into an electrical enclosure or rack. Applications: Panel PCs are commonly used in industrial automation, manufacturing, process control and machinery control applications. Construction and features: A panel PC may include a range of computer ports and connectivity options such as serial ports, EtherNet/IP, CAN bus, and Modbus. A panel PC typically has a touchscreen (touch panel PC), enabling users to interact with the computer directly on the display. This eliminates the need for separate input devices, such as a keyboard or a computer mouse. Panel PCs come in various display sizes, ranging from as small as 6 inches up to larger sizes such as 24 inches. Heavier-duty panel PC models are sealed to IP67 standards to be waterproof at the front panel, and some models are explosion-proof for installation in hazardous environments.
**Stereotactic radiation therapy** Stereotactic radiation therapy: Stereotactic radiation therapy (SRT), also called stereotactic external-beam radiation therapy and stereotaxic radiation therapy, is a type of external radiation therapy that uses special equipment to position the patient and precisely deliver radiation to a tumor. The total dose of radiation is divided into several smaller doses given over several days. Stereotactic radiation therapy is used to treat brain tumors and other brain disorders. It is also being studied in the treatment of other types of cancer, such as lung cancer. What differentiates stereotactic from conventional radiotherapy is the precision with which it is delivered. There are multiple systems available, some of which use specially designed frames that physically attach to the patient's skull, while newer, more advanced techniques use thermoplastic masks and highly accurate imaging systems to locate the patient. The end result is the delivery of high doses of radiation with sub-millimetre accuracy. Stereotactic radiation therapy: Stereotactic external-beam radiation therapy, sometimes called SBRT, is now being used to treat small cell lung cancer and sarcomas that have metastasized to the lungs. The high doses used in thoracic SBRT can sometimes cause adverse effects ranging from mild rib fatigue and transient esophagitis to fatal events such as pneumonitis or hemorrhage. Stereotactic ablative radiotherapy administers very high doses of radiation, using several beams of various intensities aimed at different angles to precisely target the tumor(s) in the lungs. The images taken from CAT scans and MRIs are used to design a four-dimensional, customized treatment plan that determines each beam's intensity and positioning. The goal is to deliver the highest possible dose of radiation to kill the cancer while minimizing exposure to healthy organs. Since sarcomas often metastasize to the lungs, this treatment is an effective tool in fighting the progression of the disease.
**Tests of special relativity** Tests of special relativity: Special relativity is a physical theory that plays a fundamental role in the description of all physical phenomena, as long as gravitation is not significant. Many experiments played (and still play) an important role in its development and justification. The strength of the theory lies in its unique ability to correctly predict to high precision the outcome of an extremely diverse range of experiments. Repeats of many of those experiments are still being conducted with steadily increased precision, with modern experiments focusing on effects such as at the Planck scale and in the neutrino sector. Their results are consistent with the predictions of special relativity. Collections of various tests were given by Jakob Laub, Zhang, Mattingly, Clifford Will, and Roberts/Schleif. Special relativity is restricted to flat spacetime, i.e., to all phenomena without significant influence of gravitation. The latter lies in the domain of general relativity, and the corresponding tests of general relativity must be considered. Experiments paving the way to relativity: The predominant theory of light in the 19th century was that of the luminiferous aether, a stationary medium in which light propagates in a manner analogous to the way sound propagates through air. By analogy, it follows that the speed of light is constant in all directions in the aether and is independent of the velocity of the source. Thus an observer moving relative to the aether must measure some sort of "aether wind" even as an observer moving relative to air measures an apparent wind. Experiments paving the way to relativity: First-order experiments Beginning with the work of François Arago (1810), a series of optical experiments had been conducted, which should have given a positive result for magnitudes of first order in v/c (i.e., of (v/c)¹) and which thus should have demonstrated the relative motion of the aether. Yet the results were negative. An explanation was provided by Augustin Fresnel (1818) with the introduction of an auxiliary hypothesis, the so-called "dragging coefficient", that is, matter is dragging the aether to a small extent. This coefficient was directly demonstrated by the Fizeau experiment (1851). It was later shown that all first-order optical experiments must give a negative result due to this coefficient. In addition, some electrostatic first-order experiments were conducted, again with negative results. In general, Hendrik Lorentz (1892, 1895) introduced several new auxiliary variables for moving observers, demonstrating why all first-order optical and electrostatic experiments had produced null results. For example, Lorentz proposed a location variable by which electrostatic fields contract in the line of motion and another variable ("local time") by which the time coordinates for moving observers depend on their current location. Experiments paving the way to relativity: Second-order experiments The stationary aether theory, however, would give positive results when the experiments are precise enough to measure magnitudes of second order in v/c (i.e., of (v/c)²). Albert A. Michelson conducted the first experiment of this kind in 1881, followed by the more sophisticated Michelson–Morley experiment in 1887. Two rays of light, traveling for some time in different directions, were brought to interfere, so that different orientations relative to the aether wind should lead to a displacement of the interference fringes.
But the result was negative again. The way out of this dilemma was the proposal by George Francis FitzGerald (1889) and Lorentz (1892) that matter is contracted in the line of motion with respect to the aether (length contraction). That is, the older hypothesis of a contraction of electrostatic fields was extended to intermolecular forces. However, since there was no theoretical reason for that, the contraction hypothesis was considered ad hoc. Experiments paving the way to relativity: Besides the optical Michelson–Morley experiment, its electrodynamic equivalent was also conducted, the Trouton–Noble experiment. By that it should be demonstrated that a moving condenser must be subjected to a torque. In addition, the Experiments of Rayleigh and Brace intended to measure some consequences of length contraction in the laboratory frame, for example the assumption that it would lead to birefringence. Though all of those experiments led to negative results. (The Trouton–Rankine experiment conducted in 1908 also gave a negative result when measuring the influence of length contraction on an electromagnetic coil.)To explain all experiments conducted before 1904, Lorentz was forced to again expand his theory by introducing the complete Lorentz transformation. Henri Poincaré declared in 1905 that the impossibility of demonstrating absolute motion (principle of relativity) is apparently a law of nature. Experiments paving the way to relativity: Refutations of complete aether drag The idea that the aether might be completely dragged within or in the vicinity of Earth, by which the negative aether drift experiments could be explained, was refuted by a variety of experiments. Oliver Lodge (1893) found that rapidly whirling steel disks above and below a sensitive common path interferometric arrangement failed to produce a measurable fringe shift. Experiments paving the way to relativity: Gustaf Hammar (1935) failed to find any evidence for aether dragging using a common-path interferometer, one arm of which was enclosed by a thick-walled pipe plugged with lead, while the other arm was free. The Sagnac effect showed that aether wind caused by earth drag cannot be demonstrated. The existence of the aberration of light was inconsistent with aether drag hypothesis. Experiments paving the way to relativity: The assumption that aether drag is proportional to mass and thus only occurs with respect to Earth as a whole was refuted by the Michelson–Gale–Pearson experiment, which demonstrated the Sagnac effect through Earth's motion.Lodge expressed the paradoxical situation in which physicists found themselves as follows: "...at no practicable speed does ... matter [have] any appreciable viscous grip upon the ether. Atoms must be able to throw it into vibration, if they are oscillating or revolving at sufficient speed; otherwise they would not emit light or any kind of radiation; but in no case do they appear to drag it along, or to meet with resistance in any uniform motion through it." Special relativity: Overview Eventually, Albert Einstein (1905) drew the conclusion that established theories and facts known at that time only form a logical coherent system when the concepts of space and time are subjected to a fundamental revision. 
For instance: Maxwell-Lorentz's electrodynamics (independence of the speed of light from the speed of the source), the negative aether drift experiments (no preferred reference frame), Moving magnet and conductor problem (only relative motion is relevant), the Fizeau experiment and the aberration of light (both implying modified velocity addition and no complete aether drag). The result is special relativity theory, which is based on the constancy of the speed of light in all inertial frames of reference and the principle of relativity. Here, the Lorentz transformation is no longer a mere collection of auxiliary hypotheses but reflects a fundamental Lorentz symmetry and forms the basis of successful theories such as Quantum electrodynamics. Special relativity offers a large number of testable predictions, such as: Fundamental experiments The effects of special relativity can phenomenologically be derived from the following three fundamental experiments: Michelson–Morley experiment, by which the dependence of the speed of light on the direction of the measuring device can be tested. It establishes the relation between longitudinal and transverse lengths of moving bodies. Special relativity: Kennedy–Thorndike experiment, by which the dependence of the speed of light on the velocity of the measuring device can be tested. It establishes the relation between longitudinal lengths and the duration of time of moving bodies. Special relativity: Ives–Stilwell experiment, by which time dilation can be directly tested. From these three experiments and by using the Poincaré-Einstein synchronization, the complete Lorentz transformation follows, with γ = 1/√(1 − v²/c²) being the Lorentz factor. Besides the derivation of the Lorentz transformation, the combination of these experiments is also important because they can be interpreted in different ways when viewed individually. For example, isotropy experiments such as Michelson-Morley can be seen as a simple consequence of the relativity principle, according to which any inertially moving observer can consider himself as at rest. Therefore, by itself, the MM experiment is compatible with Galilean-invariant theories like emission theory or the complete aether drag hypothesis, which also contain some sort of relativity principle. However, when other experiments that exclude the Galilean-invariant theories are considered (i.e. the Ives–Stilwell experiment, various refutations of emission theories and refutations of complete aether dragging), Lorentz-invariant theories and thus special relativity are the only theories that remain viable. Special relativity: Constancy of the speed of light Interferometers, resonators Modern variants of Michelson-Morley and Kennedy–Thorndike experiments have been conducted in order to test the isotropy of the speed of light. Contrary to Michelson-Morley, the Kennedy-Thorndike experiments employ different arm lengths, and the evaluations last several months. In that way, the influence of different velocities during Earth's orbit around the Sun can be observed. Laser, maser and optical resonators are used, reducing the possibility of any anisotropy of the speed of light to the 10⁻¹⁷ level.
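For reference, the complete Lorentz transformation referred to in the passage on the three fundamental experiments, for a boost with velocity v along the x-axis, is:

```latex
t' = \gamma\left(t - \frac{v x}{c^{2}}\right), \qquad
x' = \gamma\,(x - v t), \qquad
y' = y, \qquad
z' = z, \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```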
In addition to terrestrial tests, Lunar Laser Ranging experiments have also been conducted as a variation of the Kennedy–Thorndike experiment. Another type of isotropy experiment is the Mössbauer rotor experiments of the 1960s, by which the anisotropy of the Doppler effect on a rotating disc can be observed by using the Mössbauer effect (those experiments can also be utilized to measure time dilation, see below). Special relativity: No dependence on source velocity or energy Emission theories, according to which the speed of light depends on the velocity of the source, can conceivably explain the negative outcome of aether drift experiments. It wasn't until the mid-1960s that the constancy of the speed of light was definitively shown by experiment, since in 1965, J. G. Fox showed that the effects of the extinction theorem rendered the results of all experiments previous to that time inconclusive, and therefore compatible with both special relativity and emission theory. More recent experiments have definitely ruled out the emission model: the earliest were those of Filippas and Fox (1964), using moving sources of gamma rays, and Alväger et al. (1964), which demonstrated that photons didn't acquire the speed of the high-speed decaying mesons which were their source. In addition, the de Sitter double star experiment (1913) was repeated by Brecher (1977) under consideration of the extinction theorem, ruling out a source dependence as well. Observations of gamma-ray bursts also demonstrated that the speed of light is independent of the frequency and energy of the light rays. Special relativity: One-way speed of light A series of one-way measurements were undertaken, all of them confirming the isotropy of the speed of light. However, only the two-way speed of light (from A to B back to A) can unambiguously be measured, since the one-way speed depends on the definition of simultaneity and therefore on the method of synchronization. The Einstein synchronization convention makes the one-way speed equal to the two-way speed. However, there are many models having isotropic two-way speed of light, in which the one-way speed is anisotropic by choosing different synchronization schemes. They are experimentally equivalent to special relativity because all of these models include effects like time dilation of moving clocks, that compensate any measurable anisotropy. However, of all models having isotropic two-way speed, only special relativity is acceptable for the overwhelming majority of physicists, since all other synchronizations are much more complicated, and those other models (such as Lorentz ether theory) are based on extreme and implausible assumptions concerning some dynamical effects, which are aimed at hiding the "preferred frame" from observation. Special relativity: Isotropy of mass, energy, and space Clock-comparison experiments (periodic processes and frequencies can be considered as clocks) such as the Hughes–Drever experiments provide stringent tests of Lorentz invariance. They are not restricted to the photon sector, as Michelson-Morley is, but directly determine any anisotropy of mass, energy, or space by measuring the ground state of nuclei. Upper limits on such anisotropies of 10⁻³³ GeV have been provided. Thus these experiments are among the most precise verifications of Lorentz invariance ever conducted.
Special relativity: Time dilation and length contraction The transverse Doppler effect and consequently time dilation was directly observed for the first time in the Ives–Stilwell experiment (1938). In modern Ives-Stilwell experiments in heavy ion storage rings using saturated spectroscopy, the maximum measured deviation of time dilation from the relativistic prediction has been limited to ≤ 10⁻⁸. Other confirmations of time dilation include Mössbauer rotor experiments in which gamma rays were sent from the middle of a rotating disc to a receiver at the edge of the disc, so that the transverse Doppler effect can be evaluated by means of the Mössbauer effect. By measuring the lifetime of muons in the atmosphere and in particle accelerators, the time dilation of moving particles was also verified. On the other hand, the Hafele–Keating experiment confirmed the resolution of the twin paradox, i.e. that a clock moving from A to B back to A is retarded with respect to the initial clock. However, in this experiment the effects of general relativity also play an essential role. Special relativity: Direct confirmation of length contraction is hard to achieve in practice since the dimensions of the observed particles are vanishingly small. However, there are indirect confirmations; for example, the behavior of colliding heavy ions can only be explained if their increased density due to Lorentz contraction is considered. Contraction also leads to an increase of the intensity of the Coulomb field perpendicular to the direction of motion, whose effects have already been observed. Consequently, both time dilation and length contraction must be considered when conducting experiments in particle accelerators. Special relativity: Relativistic momentum and energy Starting in 1901, a series of measurements was conducted, aimed at demonstrating the velocity dependence of the mass of electrons. The results actually showed such a dependency, but the precision necessary to distinguish between competing theories was disputed for a long time. Eventually, it was possible to definitely rule out all competing models except special relativity. Special relativity: Today, special relativity's predictions are routinely confirmed in particle accelerators such as the Relativistic Heavy Ion Collider. For example, the increase of relativistic momentum and energy is not only precisely measured but also necessary to understand the behavior of cyclotrons and synchrotrons etc., by which particles are accelerated to near the speed of light. Special relativity: Sagnac and Fizeau Special relativity also predicts that two light rays traveling in opposite directions around a spinning closed path (e.g. a loop) require different flight times to come back to the moving emitter/receiver (this is a consequence of the independence of the speed of light from the velocity of the source, see above). This effect was actually observed and is called the Sagnac effect. Currently, the consideration of this effect is necessary for many experimental setups and for the correct functioning of GPS. Special relativity: If such experiments are conducted in moving media (e.g. water, or glass optical fiber), it is also necessary to consider Fresnel's dragging coefficient, as demonstrated by the Fizeau experiment. Although this effect was initially understood as giving evidence of a nearly stationary aether or a partial aether drag, it can easily be explained with special relativity by using the velocity composition law.
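A worked illustration of the muon time-dilation measurements mentioned above (the speed is chosen only for illustration; the muon's proper lifetime is about 2.2 μs): for a muon travelling at v = 0.98c,

```latex
\gamma = \frac{1}{\sqrt{1 - 0.98^{2}}} \approx 5.0,
\qquad
\tau_{\text{lab}} = \gamma\,\tau_{0} \approx 5.0 \times 2.2\,\mu\text{s} \approx 11\,\mu\text{s},
```

so the muon survives, on average, about five times longer in the laboratory frame than in its own rest frame, which is what allows atmospheric muons to reach the ground in significant numbers.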
Special relativity: Test theories Several test theories have been developed to assess a possible positive outcome in Lorentz violation experiments by adding certain parameters to the standard equations. These include the Robertson-Mansouri-Sexl framework (RMS) and the Standard-Model Extension (SME). RMS has three testable parameters with respect to length contraction and time dilation. From that, any anisotropy of the speed of light can be assessed. On the other hand, SME includes many Lorentz violation parameters, not only for special relativity, but for the Standard model and General relativity as well; thus it has a much larger number of testable parameters. Special relativity: Other modern tests Due to the developments concerning various models of Quantum gravity in recent years, deviations of Lorentz invariance (possibly following from those models) are again the target of experimentalists. Because "local Lorentz invariance" (LLI) also holds in freely falling frames, experiments concerning the weak Equivalence principle belong to this class of tests as well. The outcomes are analyzed by test theories (as mentioned above) like RMS or, more importantly, by SME. Special relativity: Besides the mentioned variations of Michelson–Morley and Kennedy–Thorndike experiments, Hughes–Drever experiments are continuing to be conducted for isotropy tests in the proton and neutron sector. To detect possible deviations in the electron sector, spin-polarized torsion balances are used. Time dilation is confirmed in heavy ion storage rings, such as the TSR at the MPIK, by observation of the Doppler effect of lithium, and those experiments are valid in the electron, proton, and photon sector. Other experiments use Penning traps to observe deviations of cyclotron motion and Larmor precession in electrostatic and magnetic fields. Possible deviations from CPT symmetry (whose violation represents a violation of Lorentz invariance as well) can be determined in experiments with neutral mesons, Penning traps and muons, see Antimatter Tests of Lorentz Violation. Astronomical tests are conducted in connection with the flight time of photons, where Lorentz violating factors could cause anomalous dispersion and birefringence leading to a dependency of photons on energy, frequency or polarization. With respect to threshold energy of distant astronomical objects, but also of terrestrial sources, Lorentz violations could lead to alterations in the standard values for the processes following from that energy, such as Vacuum Cherenkov radiation, or modifications of synchrotron radiation. Neutrino oscillations (see Lorentz-violating neutrino oscillations) and the speed of neutrinos (see measurements of neutrino speed) are being investigated for possible Lorentz violations. Other candidates for astronomical observations are the Greisen–Zatsepin–Kuzmin limit and Airy disks. The latter is investigated to find possible deviations of Lorentz invariance that could drive the photons out of phase. Observations in the Higgs sector are under way.
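As a rough sketch of how the test-theory approach described above is used in practice, the RMS framework expands its three parameters around their special-relativistic values; in one common convention (notation varies between authors), the classic experiments constrain combinations that all vanish for special relativity:

```latex
% RMS expansion parameters in one common convention; special relativity corresponds to
% \alpha = -\tfrac{1}{2}, \quad \beta = \tfrac{1}{2}, \quad \delta = 0.
\text{Michelson--Morley:} \quad \beta - \delta - \tfrac{1}{2} = 0, \qquad
\text{Kennedy--Thorndike:} \quad \alpha - \beta + 1 = 0, \qquad
\text{Ives--Stilwell:} \quad \alpha + \tfrac{1}{2} = 0
```

A nonzero measured value for any of these combinations would indicate a Lorentz violation within the RMS parametrization.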
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Atmospheric instability** Atmospheric instability: Atmospheric instability is a condition where the Earth's atmosphere is considered to be unstable and as a result local weather is highly variable through distance and time. Atmospheric stability is a measure of the atmosphere's tendency to discourage vertical motion, and vertical motion is directly correlated to different types of weather systems and their severity. In unstable conditions, a lifted parcel of air will be warmer than the surrounding air. Because it is warmer, it is less dense and is prone to further ascent. Atmospheric instability: In meteorology, instability can be described by various indices such as the Bulk Richardson Number, lifted index, K-index, convective available potential energy (CAPE), the Showalter index, and the Vertical Totals index. These indices, as well as atmospheric instability itself, involve temperature changes through the troposphere with height, or lapse rate. Effects of atmospheric instability in moist atmospheres include thunderstorm development, which over warm oceans can lead to tropical cyclogenesis, and turbulence. In dry atmospheres, inferior mirages, dust devils, steam devils, and fire whirls can form. Stable atmospheres can be associated with drizzle, fog, increased air pollution, a lack of turbulence, and undular bore formation. Forms: There are two primary forms of atmospheric instability: Convective instability Dynamic instability (fluid mechanics). Under convective instability, thermal mixing through convection in the form of warm air rising leads to the development of clouds and possibly precipitation or convective storms. Dynamic instability is produced through the horizontal movement of air and the physical forces it is subjected to, such as the Coriolis force and pressure gradient force. Dynamic lifting and mixing produces cloud, precipitation and storms, often on a synoptic scale. Cause of instability: Whether or not the atmosphere has stability depends partially on the moisture content. In a very dry troposphere, a temperature decrease with height of less than 9.8 °C per kilometer of ascent indicates stability, while greater changes indicate instability. This lapse rate is known as the dry adiabatic lapse rate. In a completely moist troposphere, a temperature decrease with height of less than 6 °C per kilometer of ascent indicates stability, while greater changes indicate instability. In the range between 6 °C and 9.8 °C of temperature decrease per kilometer of ascent, the term conditionally unstable is used. Indices used for its determination: Lifted Index The lifted index (LI), usually expressed in kelvins, is the temperature difference between the temperature of the environment Te(p) and an air parcel lifted adiabatically Tp(p) at a given pressure height in the troposphere, usually 500 hPa (mb). When the value is positive, the atmosphere (at the respective height) is stable, and when the value is negative, the atmosphere is unstable. Thunderstorms are expected with values below −2, and severe weather is anticipated with values below −6. Indices used for its determination: K Index The K index is derived arithmetically: K-index = (850 hPa temperature – 500 hPa temperature) + 850 hPa dew point – 700 hPa dew point depression The temperature difference between 850 hPa (5,000 feet (1,500 m) above sea level) and 500 hPa (18,000 feet (5,500 m) above sea level) is used to parameterize the vertical temperature lapse rate.
Indices used for its determination: The 850 hPa dew point provides information on the moisture content of the lower atmosphere. The vertical extent of the moist layer is represented by the difference of the 700 hPa temperature (10,000 feet (3,000 m) above sea level) and 700 hPa dew point. Indices used for its determination: CAPE and CIN Convective available potential energy (CAPE), sometimes, simply, available potential energy (APE), is the amount of energy a parcel of air would have if lifted a certain distance vertically through the atmosphere. CAPE is effectively the positive buoyancy of an air parcel and is an indicator of atmospheric instability, which makes it valuable in predicting severe weather. CIN, convective inhibition, is effectively negative buoyancy, expressed B-; the opposite of convective available potential energy (CAPE), which is expressed as B+ or simply B. As with CAPE, CIN is usually expressed in J/kg but may also be expressed as m²/s², as the values are equivalent. In fact, CIN is sometimes referred to as negative buoyant energy (NBE). Indices used for its determination: Convective instability is a form of fluid instability found in thermally stratified atmospheres in which a colder fluid overlies a warmer one. When an air mass is unstable, the element of the air mass that is displaced upwards is accelerated by the pressure differential between the displaced air and the ambient air at the (higher) altitude to which it was displaced. This usually creates vertically developed clouds from convection, due to the rising motion, which can eventually lead to thunderstorms. Such instability can also be created by other phenomena, such as a cold front: even if the air is cooler at the surface, there can still be warmer air in the mid-levels that can rise into the upper levels. However, if not enough water vapor is present, condensation cannot occur, and storms, clouds, and rain will not form. Indices used for its determination: Bulk Richardson Number The Bulk Richardson Number (BRN) is a dimensionless number relating vertical stability and vertical wind shear (generally, stability divided by shear). It represents the ratio of thermally-produced turbulence and turbulence generated by vertical shear. Practically, its value determines whether convection is free or forced. High values indicate unstable and/or weakly sheared environments; low values indicate weak instability and/or strong vertical shear. Generally, values in the range of around 10 to 45 suggest environmental conditions favorable for supercell development. Indices used for its determination: Showalter index The Showalter index is a dimensionless number computed by taking a parcel at the 850 hPa level, lifting it dry adiabatically up to saturation and then moist adiabatically up to the 500 hPa level, and subtracting the resulting parcel temperature from the observed 500 hPa temperature. If the value is negative, then the lower portion of the atmosphere is unstable, with thunderstorms expected when the value is below −3. The application of the Showalter index is especially helpful when there is a cool, shallow air mass below 850 hPa that conceals the potential convective lifting. However, the index will underestimate the potential convective lifting if there are cool layers that extend above 850 hPa, and it does not consider diurnal radiative changes or moisture below 850 hPa. (A small worked sketch of some of these indices appears at the end of this article.) Effects: Stable atmosphere Stable conditions, such as during a clear and calm night, will cause pollutants to become trapped near ground level.
Drizzle occurs within a moist air mass when it is stable. Air within a stable layer is not turbulent. Conditions associated with a marine layer, a stable atmosphere common on the west side of continents near cold water currents, lead to overnight and morning fog. Undular bores can form when a low level boundary such as a cold front or outflow boundary approaches a layer of cold, stable air. The approaching boundary will create a disturbance in the atmosphere producing a wave-like motion, known as a gravity wave. Although the undular bore waves appear as bands of clouds across the sky, they are transverse waves, and are propelled by the transfer of energy from an oncoming storm and are shaped by gravity. The ripple-like appearance of this wave resembles the disturbance in the water when a pebble is dropped into a pond or when a moving boat creates waves in the surrounding water. The object displaces the water or medium the wave is travelling through and the medium moves in an upward motion. However, because of gravity, the water or medium is pulled back down and the repetition of this cycle creates the transverse wave motion. Effects: Unstable atmosphere Within an unstable layer in the troposphere, the lifting of air parcels will occur, and continue for as long as the nearby atmosphere remains unstable. Once overturning through the depth of the troposphere occurs (with convection being capped by the relatively warmer, more stable layer of the stratosphere), deep convective currents lead to thunderstorm development when enough moisture is present. Over warm ocean waters and within a region of the troposphere with light vertical wind shear and significant low level spin (or vorticity), such thunderstorm activity can grow in coverage and develop into a tropical cyclone. Over hot surfaces during warm days, unstable dry air can lead to significant refraction of the light within the air layer, which causes inferior mirages. When winds are light, dust devils can develop on dry days within a region of instability at ground level. Small-scale, tornado-like circulations can occur over or near any intense surface heat source, which would have significant instability in its vicinity. Those that occur near intense wildfires are called fire whirls, which can spread a fire beyond its previous bounds. A steam devil is a rotating updraft that involves steam or smoke. They can form from smoke issuing from a power plant smokestack. Hot springs and warm lakes are also suitable locations for a steam devil to form, when cold arctic air passes over the relatively warm water.
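Tying together the index definitions above, here is a minimal sketch of how the K index, the lifted index, and CAPE might be computed from sounding values; the function names and example numbers are hypothetical, and a real calculation would derive the parcel temperatures from full parcel thermodynamics rather than supplying them directly:

```python
# Minimal sketch of three instability indices from sounding values (hypothetical numbers).
# Temperatures and dew points in degC, heights in metres, virtual temperatures in kelvins.
import numpy as np

def k_index(t850, t500, td850, t700, td700):
    """K index = (T850 - T500) + Td850 - (T700 - Td700)."""
    return (t850 - t500) + td850 - (t700 - td700)

def lifted_index(t500_env, t500_parcel):
    """Lifted index = environmental minus lifted-parcel temperature at 500 hPa.
    Negative values indicate instability; below -2, thunderstorms are expected."""
    return t500_env - t500_parcel

def cape(z, tv_parcel, tv_env):
    """CAPE = integral of g * (Tv_parcel - Tv_env) / Tv_env dz over buoyant layers (J/kg).
    Integrating only the negative portion instead would give (minus) the CIN."""
    g = 9.81
    buoyancy = g * (tv_parcel - tv_env) / tv_env
    return np.trapz(np.clip(buoyancy, 0.0, None), z)

# Hypothetical sounding values:
print(k_index(t850=22.0, t500=-12.0, td850=16.0, t700=8.0, td700=2.0))   # 44 -> very moist/unstable
print(lifted_index(t500_env=-12.0, t500_parcel=-8.0))                    # -4 -> unstable

z = np.array([0, 1000, 2000, 4000, 6000, 8000], dtype=float)
tv_env = np.array([300, 293, 286, 272, 258, 244], dtype=float)
tv_parcel = np.array([300, 294, 288, 275, 261, 245], dtype=float)
print(round(cape(z, tv_parcel, tv_env)))                                 # CAPE in J/kg
```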
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Knitting needle** Knitting needle: A knitting needle or knitting pin is a tool in hand-knitting to produce knitted fabrics. They generally have a long shaft and taper at their end, but they are not nearly as sharp as sewing needles. Their purpose is two-fold. The long shaft holds the active (unsecured) stitches of the fabric, to prevent them from unravelling, whereas the tapered ends are used to form new stitches. Most commonly, a new stitch is formed by inserting the tapered end through an active stitch, catching a loop (also called a bight) of fresh yarn and drawing it through the stitch; this secures the initial stitch and forms a new active stitch in its place. In specialized forms of knitting the needle may be passed between active stitches being held on another needle, or indeed between/through inactive stitches that have been knit previously. Knitting needle: The size of a needle is described first by its diameter and secondly by its length. The size of the new stitch is determined in large part by the diameter of the knitting needle used to form it, because that affects the length of the yarn-loop drawn through the previous stitch. Thus, large stitches can be made with large needles, whereas fine knitting requires fine needles. In most cases, the knitting needles being used in hand-knitting are of the same diameter; however, in uneven knitting, needles of different sizes may be used. Larger stitches may also be made by wrapping the yarn more than once around the needles with every stitch. The length of a needle determines how many stitches it can hold at once; for example, very large projects such as a shawl with hundreds of stitches might require a longer needle than a small project such as a scarf or bootie. Various sizing systems for needles are in common use. Types: Single-pointed needles The most widely recognized form of needle is the single-pointed needle. It is a slender, straight stick tapered to a point at one end, with a knob at the other end to prevent stitches from slipping off. Such needles are always used in pairs and are usually 10-16 inches (25.4–40.6 cm) long but, due to the compressibility of knitted fabrics, may be used to knit pieces significantly wider. The knitting of new stitches occurs only at the tapered ends. Fictional depictions of knitting in movies, television programs, animation, and comic strips almost always show knitting done on straight needles. Both Wallace and Gromit and Monty Python, for example, show this type of knitting. Types: Double-pointed needles The oldest type of needle is the straight double-pointed needle. Double-pointed needles are tapered at both ends, which allows them to be knit from either end. They are typically used (and sold) in sets of four and five, and are commonly used for circular knitting. Since the invention of the circular needle, they have been most commonly used to knit smaller tube-shaped pieces such as sleeves, collars, and socks. Usually two needles are active while the others hold the remaining stitches. Double-pointed needles are somewhat shorter than single-pointed or circular needles, and are usually used in the 13–20 cm length range, although they are also made longer. Types: Double-pointed needles are depicted in a number of 14th-century oil paintings, typically called Knitting Madonnas, depicting Mary knitting with double-pointed needles (Rutt, 2003). 
A cable needle is a special type of double-pointed needle that is typically very short and used to hold a very small number of stitches temporarily while the knitter is forming a cable pattern. They are often U-shaped, or have a U-shaped bend, to keep the held stitches from falling off while the primary needle is being used. Types: Circular needles The first US patent for a circular needle was issued in 1918, although in Europe they may have been used a little earlier. Circulars are composed of two pointed, straight tips connected by a flexible cable and may be used both for knitting flat and for knitting in the round. The two tapered ends, typically 4–5 inches (10.5–13 cm) long, are rigid, allowing for easy knitting, and are connected by the flexible strand (usually made of nylon or coated wire). The tips may be permanently connected to the cable and made in overall lengths from 9 inches (23 cm) to 60 inches (150 cm), or composed of cables and interchangeable tips. This allows various lengths and diameters to be combined into many different sizes of needles, allowing for a great variety of needs to be met by relatively few component pieces. The ability to work from either end of one needle is convenient in several types of knitting, such as slip-stitch versions of double knitting. Types: In using circulars to knit flat pieces of fabric, the two ends are used just as two separate needles would be. The knitter holds one tip in each hand and knits straight across the width of the fabric, turns the work, and knits or purls back the other way. Using circular needles has some advantages: for example, the weight of the fabric is more evenly distributed, and therefore less taxing, on the arms and wrists of the knitter, and the length of the cable may be longer than would be practical with rigid needles, since the cable and fabric rest in the lap of the knitter rather than extending straight out past the arms. Types: Because knitting in the round (commonly referred to as ITR) is done entirely with the knit stitch, stockinette stitch requires no purl rows, which is often perceived to be one of the greatest benefits of ITR. Knitting ITR with circulars is done in a spiral, the same way as using double-pointed needles (usually called DPNs). Additionally, circulars eliminate the need to continually switch from one needle to the next, and there is no possibility of stitches falling off the back end of the needles, as may happen when using DPNs. Much larger tubes may be knit ITR, too, helping items to be completed more quickly. Construction of garments such as sweaters may be greatly simplified when knitting ITR, since the finishing steps of sewing a back, two fronts, and two sleeves of a sweater together may be almost entirely eliminated in neck-down ITR knitting. Types: Knitting educator and authority Elizabeth Zimmermann helped popularize knitting ITR specifically with circular needles. Types: Numerous techniques have been devised for the production of narrow tubular knitting on circular needles. One common method is to use two needles in place of the four or five double-pointed needles traditionally used, while a newer technique is to use one circular needle that is significantly longer than the circumference of the item being knitted. This technique is known as Magic Loop and has recently become a popular method of producing tubular knitting, as only one needle is required.
The Guinness World Record for knitting with the largest knitting needles: The current holder of this title is Elizabeth "Betsy" Bond, a British art student and creator of the world's largest knitting needles, which are 14 feet long. To achieve the world record in 2018, Bond needed to knit at least 10 stitches and 10 rows of yarn with her needles. The yarn she used for the feat was made of 35 pounds of machine knitted, hand-twisted cotton material. She beat the previous record holder, Julia Hopson of Penzance in Cornwall. Julia had knitted a tension square of ten stitches and ten rows in stocking stitch using knitting needles that were 6.5 cm in diameter and 3.5 metres long. Needle materials: In addition to common wood and metal needles, antique knitting needles were sometimes made from tortoiseshell, ivory and walrus tusks; these materials are now banned due to their impact on endangered species, and needles made from them are virtually impossible to find. There is, however, a now-vintage style of needle which appears to be tortoiseshell but is actually made from a celluloid, sometimes known as shellonite. These needles were made in Australia, but are no longer manufactured. Modern knitting needles are made of bamboo, aluminium, steel, wood, plastic, glass, casein and carbon fibers. Needle storage: A tall, cylindrical container with padding on the bottom to keep the points sharp can store straight needles neatly. Fabric or plastic cases similar to cosmetic bags or a chef's knife bag allow straight needles to be stored together yet separated by size, then rolled to maximize space. Circular needles may be stored with the cables coiled in cases made specifically for this purpose, or hung dangling from a hanger device with cables straight. If older circulars with nylon or plastic cables are coiled for storage, it may be necessary to soak them in hot water for a few minutes to get them to uncoil and relax for ease of use. Most recently manufactured cables eliminate this problem and may be stored coiled without any difficulty. Care must be taken not to kink the metal cables of older circulars, as these kinks will not come out and may damage or snag yarn as it is knit. Needle gauge: A needle gauge makes it possible to determine the size of a knitting needle. Some may also be used to gauge the size of crochet hooks. Most needles come with the size written on them, but with use and time, the label often wears off, and many needles (like double-pointed needles) tend not to be labelled. Needle gauge: Needle gauges can be made of any material, but are often made of metal and plastic. They tend to be about 3 by 5 inches. There are holes of various sizes through which the needles are passed to determine which hole they fit best, and often a ruler along the edge for determining the tension (also called gauge) of a sample. Needle sizes and conversions: In the UK, the metric system is used. Previously, needle 'numbers' were the Standard Wire Gauge designation of the wire from which metal needles were made. The origin of the numbering system is uncertain, but it is thought that needle numbers were based on the number of increasingly fine dies that the wire had to be drawn through. This meant thinner needles had a larger number. Needle sizes and conversions: In the current US system, the opposite is true: smaller numbers indicate smaller needles. There is an "old US system" that is divided into standard and steel needles, the latter being fine lace needles.
Occasionally, older lace patterns will refer to these smaller needles in the old measurement system. Finally, there was a system used in continental Europe that predated the metric system. It is largely obsolete, but some older or reprinted patterns call for pins in these sizes.
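To illustrate the inverse relationship between the old UK (wire-gauge-based) numbering and the current US numbering described above, here is a small lookup sketch; only a few commonly published, approximate equivalents are included, and actual sizes should be checked against a needle gauge or a full conversion chart:

```python
# A few commonly published (approximate) needle size equivalents, keyed by metric diameter.
# Note the opposite ordering: old UK numbers fall as the needle gets thicker,
# while current US numbers rise.
SIZES_MM_TO_UK_US = {
    2.0:  {"uk_old": 14, "us": 0},
    3.25: {"uk_old": 10, "us": 3},
    4.0:  {"uk_old": 8,  "us": 6},
    5.0:  {"uk_old": 6,  "us": 8},
    6.0:  {"uk_old": 4,  "us": 10},
}

def describe(mm):
    entry = SIZES_MM_TO_UK_US.get(mm)
    if entry is None:
        return f"{mm} mm: no entry in this small table"
    return f"{mm} mm  ~  old UK {entry['uk_old']}  ~  US {entry['us']}"

for mm in sorted(SIZES_MM_TO_UK_US):
    print(describe(mm))
```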
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**VDAC2** VDAC2: Voltage-dependent anion-selective channel protein 2 is a protein that in humans is encoded by the VDAC2 gene on chromosome 10. This protein is a voltage-dependent anion channel and shares high structural homology with the other VDAC isoforms. VDACs are generally involved in the regulation of cell metabolism, mitochondrial apoptosis, and spermatogenesis. Additionally, VDAC2 participates in cardiac contractions and pulmonary circulation, which implicates it in cardiopulmonary diseases. VDAC2 also mediates immune response to infectious bursal disease (IBD). Structure: The three VDAC isoforms in humans are highly conserved, particularly with respect to their 3D structure. VDACs form a wide β-barrel structure, inside which the N-terminus resides to partially close the pore. The sequence of the VDAC2 isoform contains an abundance of cysteines, which allow for the formation of disulfide bridges and, ultimately, affect the flexibility of the β-barrel. VDACs also contain a mitochondrial targeting sequence for the protein's translocation to the outer mitochondrial membrane. In particular, VDAC2 possesses an N-terminus that is 11 residues longer than those of the other two isoforms. Function: VDAC2 belongs to the mitochondrial porin family and is expected to share similar biological functions to the other VDAC isoforms. VDACs generally are involved in cellular energy metabolism by transporting ATP and other small ions and metabolites across the outer mitochondrial membrane. In mammalian cardiomyocytes, VDAC2 promotes mitochondrial transport of calcium ions in order to power cardiac contractions. In addition, VDACs form part of the mitochondrial permeability transition pore (MPTP) and, thus, facilitate cytochrome C release, leading to apoptosis. VDACs have also been observed to interact with pro- or antiapoptotic proteins, such as Bcl-2 family proteins and kinases, and so may contribute to apoptosis independently from the MPTP. VDAC2 in particular has demonstrated a protective effect in cells undergoing mitochondrial apoptosis, and may even confer protection during aging. Furthermore, VDACs have been linked to spermatogenesis, sperm maturation, motility, and fertilization. Though all VDAC isoforms are ubiquitously expressed, VDAC2 is found mainly in the sperm outer dense fiber (ODF), where it is hypothesized to promote proper assembly and maintenance of sperm flagella. It also localizes to the acrosomal membrane of the sperm, where it putatively mediates calcium ion transmembrane transport. Clinical significance: The VDAC2 protein belongs to a group of mitochondrial membrane channels involved in translocation of adenine nucleotides through the outer membrane. These channels may also function as a mitochondrial binding site for hexokinase and glycerol kinase. The VDAC is an important constituent in apoptotic signaling and oxidative stress, most notably as part of the mitochondrial death pathway and cardiac myocyte apoptosis signaling. Programmed cell death is a distinct genetic and biochemical pathway essential to metazoans. An intact death pathway is required for successful embryonic development and the maintenance of normal tissue homeostasis. Apoptosis has proven to be tightly interwoven with other essential cell pathways.
The identification of critical control points in the cell death pathway has yielded fundamental insights for basic biology, as well as provided rational targets for new therapeutics. During normal embryologic processes, during cell injury (such as ischemia-reperfusion injury during heart attacks and strokes), or during the development and progression of cancer, an apoptotic cell undergoes structural changes including cell shrinkage, plasma membrane blebbing, nuclear condensation, and fragmentation of the DNA and nucleus. This is followed by fragmentation into apoptotic bodies that are quickly removed by phagocytes, thereby preventing an inflammatory response. It is a mode of cell death defined by characteristic morphological, biochemical and molecular changes. It was first described as a "shrinkage necrosis", and then this term was replaced by apoptosis to emphasize its role opposite mitosis in tissue kinetics. In later stages of apoptosis the entire cell becomes fragmented, forming a number of plasma membrane-bounded apoptotic bodies which contain nuclear and/or cytoplasmic elements. The ultrastructural appearance of necrosis is quite different, the main features being mitochondrial swelling, plasma membrane breakdown and cellular disintegration. Apoptosis occurs in many physiological and pathological processes. It plays an important role during embryonal development as programmed cell death and accompanies a variety of normal involutional processes in which it serves as a mechanism to remove "unwanted" cells. Clinical significance: The VDAC2 protein has been implicated in cardioprotection against ischemia-reperfusion injury, such as during ischemic preconditioning of the heart. Although a large burst of reactive oxygen species (ROS) is known to lead to cell damage, a moderate release of ROS from the mitochondria, which occurs during nonlethal short episodes of ischemia, can play a significant triggering role in the signal transduction pathways of ischemic preconditioning, leading to reduction of cell damage. It has even been observed that during this release of reactive oxygen species, VDAC2 plays an important role in the mitochondrial cell death pathway transduction, thereby regulating apoptotic signaling and cell death. Clinical significance: The VDAC2 protein has been linked to persistent pulmonary hypertension of the newborn (PPHN), which causes significant neonatal morbidity and mortality, due to its role as a major regulator of endothelial nitric oxide synthase (eNOS) in the pulmonary endothelium. eNOS has been attributed with regulating NOS activity in response to physiological stimuli, which is vital to maintain NO production for proper blood circulation to the lungs. As a result, VDAC2 is significantly involved in pulmonary circulation and may become a therapeutic target for treating diseases such as pulmonary hypertension. VDAC2 may also serve an immune function, as it has been hypothesized to detect and induce apoptosis in cells infected by the IBD virus. IBD, the avian equivalent of HIV, can compromise birds' immune systems and even cause fatal injury to the lymphoid organ. Studies of this process indicate that VDAC2 interacts with the viral protein VP5 to mediate cell death. Interactions: VDAC2 has been shown to interact with: BAK Parkin eNOS
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Positive illusions** Positive illusions: Positive illusions are unrealistically favorable attitudes that people have towards themselves or towards people who are close to them. Positive illusions are a form of self-deception or self-enhancement that feel good; maintain self-esteem; or avoid discomfort, at least in the short term. There are three general forms: inflated assessment of one's own abilities, unrealistic optimism about the future, and an illusion of control. The term "positive illusions" originates in a 1988 paper by Taylor and Brown. "Taylor and Brown's (1988) model of mental health maintains that certain positive illusions are highly prevalent in normal thought and predictive of criteria traditionally associated with mental health." There are controversies about the extent to which people reliably demonstrate positive illusions, as well as whether these illusions are beneficial to the people who have them. Types: In the above-average effect, people regard themselves more positively than they regard others and less negatively than others regard them. Positive attributes are judged to be more descriptive of themselves than of an average person, whereas negative ones are judged to be less descriptive of themselves than of an average person. Although it is statistically impossible for most people to be superior to their peers, rather than being equally aware of their strengths and weaknesses, people tend to be more aware of their strengths and much less aware of their weaknesses. This effect has been widely recognized across traits and abilities including driving, parenting, leadership, teaching, ethics, and general health. This effect is also evident in memory; most people also tend to perceive their ability to remember as better than it actually is. The illusion of control is an exaggerated assessment of the individual's personal control over environmental circumstances such as the roll of dice or the flip of a coin. Optimism bias is a tendency for people to overestimate their likelihood of experiencing a wide variety of pleasant events, such as enjoying their first job or having a gifted child, and somewhat underestimate their risk of succumbing to negative events, such as getting divorced or falling victim to a chronic disease. This illusory nature of optimism is also evident in people's underestimation of the time needed for a variety of tasks. Origins: Like many forms of human perception, self-perception is prone to illusion. Positive illusions have been commonly understood as one of the apparent effects of self-enhancement, a desire to maximize the positivity of one's self-views and a function of boosting self-esteem. It might be due to the desire to see oneself more favorably relative to one's peers. These kinds of self-serving attributions seemed to be displayed by positive self-viewers only. In fact, the negative-viewers were found to display the opposite pattern. Research suggests that there may be some genetic contributions to the ability to develop positive illusions. Early environment also plays an important role, in that people are more able to develop these positive beliefs in nurturing environments than in harsh ones. Alternative explanations involve dimensions like the easiness and commonness of the tasks. In addition, tasks that shifted attention from the self to the comparative target stopped people from being overly optimistic. Culture also plays a significant role in positive illusions.
Although it is easy to document positive illusions in individualistic Western cultures, people in collectivist East Asian cultures are much less likely to self-enhance and, indeed, are often self-effacing instead. Most studies find that people tend to have inflated views of themselves. The research indicates that the relationship between people's self-evaluations and objective assessments is relatively weak. One explanation for this is that most people only have mild positive illusions. However, recent studies provide evidence of significant individual differences in the strength of people's positive illusions. Therefore, some people may have extremely inflated self-views, some mild, and some very little, so that when examined across a population the effect appears weak. Benefits and liabilities: Positive illusions can have advantages and disadvantages for the individual, and there is a controversy over whether they are evolutionarily adaptive. The illusions may have direct health benefits by helping the person cope with stress, or by promoting work towards success. On the other hand, unrealistically positive expectations may prevent people from taking sensible preventive action for medical risks. Research in 2001 provided evidence that people who have positive illusions may have both short-term benefits and long-term costs. Specifically, self-enhancement is not correlated with academic success or graduation rates in college. Benefits and liabilities: Mental health Taylor and Brown's Social Psychological Model of mental health has assumed that positive beliefs would be tied to psychological well-being, and that positive self-evaluations, even unrealistic, would promote good mental health. The reference to well-being here means the ability to feel good about oneself, to be creative and/or productive in one's work, to form satisfying relationships with other people and to effectively combat stress when necessary. Positive illusions are particularly useful for helping people to get through major stressful events or traumas, such as life-threatening illnesses or serious accidents. People who are able to develop or maintain their positive beliefs in the face of these potential setbacks tend to cope more successfully with them, and show less psychological distress than those less able. For example, psychological research shows that cancer survivors often report a higher quality of life than people who have never had cancer at all. This could be physiologically protective because they have been able to use the traumatic experience to evoke an increased sense of meaning and purpose. This relates to the concept of psychological resilience or an individual's ability to cope with challenges and stress. Self-enhancement was found to be correlated with resilience in the face of the 9/11 tragedy among participants who were in or near the towers. People also hold positive illusions because such beliefs often enhance their productivity and persistence with tasks on which they might otherwise give up. When people believe they can achieve a difficult goal, this expectation often creates a sense of energy and excitement, resulting in more progress than would otherwise have been the case. Benefits and liabilities: Positive illusions can be argued to be adaptive because they enable people to feel hopeful in the face of uncontrollable risks. In addition, there seems to be a relationship between illusions and positive mood.
Studies have found that the direction of this relationship is that positive illusions cause positive mood states. However, more recent findings have shown that all forms of illusion, positive or not, were associated with more depressive symptoms, and various other studies reject the link between positive illusions and mental health, well-being or life satisfaction, maintaining that accurate perception of reality is compatible with happiness. When studying the link between self-esteem and positive illusions, Compton (1992) identified a group which possessed high self-esteem without positive illusions, and found that these individuals were not depressed, neurotic, psychotic, maladjusted, or personality disordered, thus concluding that positive illusions are not necessary for high self-esteem. Compared to the group with positive illusions and high self-esteem, the nonillusional group with high self-esteem was higher on self-criticism and personality integration and lower on psychoticism. A meta-analysis of 118 studies including 7,013 subjects found that slightly more studies supported the idea of depressive realism, but these studies were poorer in quality, used non-clinical samples, were more readily generalized, used self-reports instead of interviews, and measured depressive realism through attentional bias or judgment of contingency; methods such as recall of feedback and evaluation of performance showed results counter to depressive realism. Another recent meta-analysis supports Taylor and Brown's central claim. Its results indicate that different forms of self-enhancement are positively linked to personal adjustment (high subjective well-being and low depressiveness). Self-enhancement was not only linked to self-ratings of personal adjustment, but also to adjustment ratings made by informants (including clinical experts). Furthermore, self-enhancement was also a longitudinal predictor of personal adjustment. Benefits and liabilities: Physical health Apart from having better psychological adjustment with more active coping, the ability to develop and sustain positive beliefs in the face of setbacks has its health benefits. Research with men who had HIV, or were already diagnosed with AIDS, has shown that those who held unrealistically positive assessments of their ability to control their health conditions took longer to develop symptoms and experienced a slower course of illness, as well as other positive cognitive outcomes, such as acceptance of the loss. Benefits and liabilities: Potential liabilities There are several potential risks that may arise if people hold positive illusions about their personal qualities and likely outcomes. First of all, they might set themselves up for unpleasant surprises for which they are ill-prepared when their overly optimistic beliefs are disconfirmed. They may also have to tackle the consequences thereafter. However, research suggests that, for the most part, these adverse outcomes do not occur. People's beliefs are more realistic at times when realism serves them particularly well: for example, when initially making plans, when accountability is likely, or following negative feedback from the environment. Following a setback or failure, all is still not lost, as people's overly positive beliefs may be used again in a new undertaking. A second risk is that people who hold positive illusions will set goals or undertake courses of action which are more likely to produce failure than success. This concern appears to be largely without basis.
Research shows that when people are deliberating future courses of action for themselves, such as whether to take a particular job or go to graduate school, their perceptions are fairly realistic, but they can become overly optimistic when they turn to implementing their plans. Although there is no guarantee that one's realistic prediction would turn out to be accurate, the shift from realism to optimism may provide the fuel needed to bring potentially difficult tasks from conception to fruition. A third risk is that positive self-perceptions may have social costs. A specific source of evidence of the self-serving pattern in ability assessment examined the use of idiosyncratic definitions of traits and abilities. The authors suggested that the social costs occur when one's definition of ability is perceived to be the only one relevant to achievement outcomes. In other words, whenever people fail to recognise that other plausible definitions of ability are relevant for success, estimates of their future well-being will be overstated. Benefits and liabilities: A fourth risk is that it may be harmful to realize that one's actual competence does not match up to one's illusions. This can be harmful to the ego and result in actually performing worse in settings such as college. Although positive illusions may have short-term benefits, they come with long-term costs. Positive illusions have been linked with decreasing levels of self-esteem and well-being, as well as narcissism and lower academic achievement among students. Negative counterparts: Although more academic attention has focused on positive illusions, there are systematic negative illusions that are revealed under slightly different circumstances. For example, while college students rate themselves as more likely than average to live to 70, they believe they are less likely than average to live to 100. People regard themselves as above average on easy tasks such as riding a bicycle but below average on difficult tasks like riding a unicycle. In 2007 Moore named the latter effect the "worse-than-average effect". In general, people overestimate their relative standing when their absolute standing is high and underestimate it when their absolute standing is low. Mitigation: Depressive realism suggests that depressed people actually have a more realistic view of themselves and the world than mentally healthy people. The nature of depression seems to play a role in diminishing positive illusions. For example, individuals who are low in self-esteem, slightly depressed, or both, are more balanced in self-perceptions. Likewise, these mildly depressed individuals have been found to be less prone to overestimating their control over events and to assessing future circumstances in a biased fashion. However, these findings may not be because depressed people have fewer illusions than people who are not depressed. Studies such as Dykman et al. (1989) show that depressed people believe they have no control in situations where they actually do, so their perception is not more accurate overall. It might also be that the pessimistic bias of depressives results in "depressive realism" when, for example, measuring estimation of control when there is none, as proposed by Allan et al. (2007). Also, Msetfi et al. (2005) and Msetfi et al.
(2007) found that, when replicating Alloy and Abramson's findings, the overestimation of control in nondepressed people only showed up when the interval was long enough, implying that this is because they take more aspects of a situation into account than their depressed counterparts. Mitigation: Two hypotheses have been stated in the literature with regard to avoiding the drawbacks of positive illusions: firstly, by keeping the illusions small enough to take full advantage of their benefits; and secondly, by setting them aside when making important decisions. According to Roy Baumeister, a small amount of positive distortion may be optimal. He hypothesizes that those who fall within this optimal margin of illusion may have the best mental health.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Categorical variable** Categorical variable: In statistics, a categorical variable (also called qualitative variable) is a variable that can take on one of a limited, and usually fixed, number of possible values, assigning each individual or other unit of observation to a particular group or nominal category on the basis of some qualitative property. In computer science and some branches of mathematics, categorical variables are referred to as enumerations or enumerated types. Commonly (though not in this article), each of the possible values of a categorical variable is referred to as a level. The probability distribution associated with a random categorical variable is called a categorical distribution. Categorical variable: Categorical data is the statistical data type consisting of categorical variables or of data that has been converted into that form, for example as grouped data. More specifically, categorical data may derive from observations made of qualitative data that are summarised as counts or cross tabulations, or from observations of quantitative data grouped within given intervals. Often, purely categorical data are summarised in the form of a contingency table. However, particularly when considering data analysis, it is common to use the term "categorical data" to apply to data sets that, while containing some categorical variables, may also contain non-categorical variables. Categorical variable: A categorical variable that can take on exactly two values is termed a binary variable or a dichotomous variable; an important special case is the Bernoulli variable. Categorical variables with more than two possible values are called polytomous variables; categorical variables are often assumed to be polytomous unless otherwise specified. Discretization is treating continuous data as if it were categorical. Dichotomization is treating continuous data or polytomous variables as if they were binary variables. Regression analysis often treats category membership with one or more quantitative dummy variables. Examples of categorical variables: Examples of values that might be represented in a categorical variable: The roll of a six-sided die: possible outcomes are 1,2,3,4,5, or 6. Demographic information of a population: gender, disease status. The blood type of a person: A, B, AB or O. The political party that a voter might vote for, e. g. Green Party, Christian Democrat, Social Democrat, etc. The type of a rock: igneous, sedimentary or metamorphic. The identity of a particular word (e.g., in a language model): One of V possible choices, for a vocabulary of size V. Notation: For ease in statistical processing, categorical variables may be assigned numeric indices, e.g. 1 through K for a K-way categorical variable (i.e. a variable that can express exactly K possible values). In general, however, the numbers are arbitrary, and have no significance beyond simply providing a convenient label for a particular value. In other words, the values in a categorical variable exist on a nominal scale: they each represent a logically separate concept, cannot necessarily be meaningfully ordered, and cannot be otherwise manipulated as numbers could be. Instead, valid operations are equivalence, set membership, and other set-related operations. Notation: As a result, the central tendency of a set of categorical variables is given by its mode; neither the mean nor the median can be defined. 
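The last-name example developed in the next paragraph can be mirrored in a short sketch (the names and counts used here are hypothetical) showing which operations are meaningful for a nominal variable and which are not:

```python
# Sketch of the operations that are (and are not) meaningful for a nominal variable.
from collections import Counter

names = ["Smith", "Johnson", "Smith", "Lee", "Johnson", "Smith"]

# Valid operations: equivalence, set membership, counting, mode.
print(names[0] == names[1])            # equivalence -> False
print("Lee" in names)                  # set membership -> True
counts = Counter(names)                # counting occurrences of each category
print(counts["Smith"])                 # -> 3
print(counts.most_common(1)[0][0])     # mode -> "Smith"

# Not meaningful: arithmetic on the labels themselves.
# sum(names) raises TypeError, and an "average name" has no defined meaning.
# "Smith" < "Johnson" only compares the labels' lexicographic encoding (False here),
# a property of the chosen writing system, not of the underlying categories.
print("Smith" < "Johnson")             # False: string ordering only
```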
As an example, given a set of people, we can consider the set of categorical variables corresponding to their last names. We can consider operations such as equivalence (whether two people have the same last name), set membership (whether a person has a name in a given list), counting (how many people have a given last name), or finding the mode (which name occurs most often). However, we cannot meaningfully compute the "sum" of Smith + Johnson, or ask whether Smith is "less than" or "greater than" Johnson. As a result, we cannot meaningfully ask what the "average name" (the mean) or the "middle-most name" (the median) is in a set of names. Notation: This ignores the concept of alphabetical order, which is a property that is not inherent in the names themselves, but in the way we construct the labels. For example, if we write the names in Cyrillic and consider the Cyrillic ordering of letters, we might get a different result of evaluating "Smith < Johnson" than if we write the names in the standard Latin alphabet; and if we write the names in Chinese characters, we cannot meaningfully evaluate "Smith < Johnson" at all, because no consistent ordering is defined for such characters. However, if we do consider the names as written, e.g., in the Latin alphabet, and define an ordering corresponding to standard alphabetical order, then we have effectively converted them into ordinal variables defined on an ordinal scale. Number of possible values: Categorical random variables are normally described statistically by a categorical distribution, which allows an arbitrary K-way categorical variable to be expressed with separate probabilities specified for each of the K possible outcomes. Such multiple-category categorical variables are often analyzed using a multinomial distribution, which counts the frequency of each possible combination of numbers of occurrences of the various categories. Regression analysis on categorical outcomes is accomplished through multinomial logistic regression, multinomial probit or a related type of discrete choice model. Number of possible values: Categorical variables that have only two possible outcomes (e.g., "yes" vs. "no" or "success" vs. "failure") are known as binary variables (or Bernoulli variables). Because of their importance, these variables are often considered a separate category, with a separate distribution (the Bernoulli distribution) and separate regression models (logistic regression, probit regression, etc.). As a result, the term "categorical variable" is often reserved for cases with 3 or more outcomes, sometimes termed a multi-way variable in opposition to a binary variable. Number of possible values: It is also possible to consider categorical variables where the number of categories is not fixed in advance. As an example, for a categorical variable describing a particular word, we might not know in advance the size of the vocabulary, and we would like to allow for the possibility of encountering words that we haven't already seen. Standard statistical models, such as those involving the categorical distribution and multinomial logistic regression, assume that the number of categories is known in advance, and changing the number of categories on the fly is tricky. In such cases, more advanced techniques must be used. An example is the Dirichlet process, which falls in the realm of nonparametric statistics. 
In such a case, it is logically assumed that an infinite number of categories exist, but at any one time most of them (in fact, all but a finite number) have never been seen. All formulas are phrased in terms of the number of categories actually seen so far rather than the (infinite) total number of potential categories in existence, and methods are created for incremental updating of statistical distributions, including adding "new" categories. Categorical variables and regression: Categorical variables represent a qualitative method of scoring data (i.e. representing categories or group membership). These can be included as independent variables in a regression analysis or as dependent variables in logistic regression or probit regression, but must be converted to quantitative data in order to be able to analyze the data. One does so through the use of coding systems. Analyses are conducted such that only g − 1 groups are coded (g being the number of groups). This minimizes redundancy while still representing the complete data set, as no additional information would be gained from coding the total g groups: for example, when coding gender (where g = 2: male and female), if we only code females everyone left over would necessarily be males. In general, the group that one does not code for is the group of least interest. There are three main coding systems typically used in the analysis of categorical variables in regression: dummy coding, effects coding, and contrast coding. The regression equation takes the form of Y = bX + a, where b is the slope and gives the weight empirically assigned to an explanator, X is the explanatory variable, and a is the Y-intercept, and these values take on different meanings based on the coding system used. The choice of coding system does not affect the F or R2 statistics. However, one chooses a coding system based on the comparison of interest, since the interpretation of b values will vary. Categorical variables and regression: Dummy coding Dummy coding is used when there is a control or comparison group in mind. One is therefore analyzing the data of one group in relation to the comparison group: a represents the mean of the control group and b is the difference between the mean of the experimental group and the mean of the control group. It is suggested that three criteria be met for specifying a suitable control group: the group should be a well-established group (e.g. should not be an “other” category), there should be a logical reason for selecting this group as a comparison (e.g. the group is anticipated to score highest on the dependent variable), and finally, the group's sample size should be substantive and not small compared to the other groups. In dummy coding, the reference group is assigned a value of 0 for each code variable, the group of interest for comparison to the reference group is assigned a value of 1 for its specified code variable, while all other groups are assigned 0 for that particular code variable. The b values should be interpreted such that the experimental group is being compared against the control group. Therefore, yielding a negative b value would entail the experimental group having scored less than the control group on the dependent variable. To illustrate this, suppose that we are measuring optimism among several nationalities and we have decided that French people would serve as a useful control.
If we are comparing them against Italians, and we observe a negative b value, this would suggest Italians obtain lower optimism scores on average. Categorical variables and regression: As an example of dummy coding, take French as the control group, with C1, C2, and C3 respectively being the code variables for Italian, German, and Other (neither French nor Italian nor German): each of these three groups is coded 1 on its own code variable and 0 on the others, while French is coded 0 on all three (a small worked sketch of these coding schemes appears at the end of this article). Effects coding In the effects coding system, data are analyzed through comparing one group to all other groups. Unlike dummy coding, there is no control group. Rather, the comparison is being made at the mean of all groups combined (a is now the grand mean). Therefore, one is not looking for data in relation to another group but rather one is seeking data in relation to the grand mean. Effects coding can either be weighted or unweighted. Weighted effects coding is simply calculating a weighted grand mean, thus taking into account the sample size in each variable. This is most appropriate in situations where the sample is representative of the population in question. Unweighted effects coding is most appropriate in situations where differences in sample size are the result of incidental factors. The interpretation of b is different for each: in unweighted effects coding b is the difference between the mean of the experimental group and the grand mean, whereas in the weighted situation it is the mean of the experimental group minus the weighted grand mean. In effects coding, we code the group of interest with a 1, just as we would for dummy coding. The principal difference is that we code −1 for the group we are least interested in. Since we continue to use a g − 1 coding scheme, it is in fact the −1 coded group that will not produce data, hence the fact that we are least interested in that group. A code of 0 is assigned to all other groups. Categorical variables and regression: The b values should be interpreted such that the experimental group is being compared against the mean of all groups combined (or weighted grand mean in the case of weighted effects coding). Therefore, yielding a negative b value would entail the coded group having scored less than the mean of all groups on the dependent variable. Using our previous example of optimism scores among nationalities, if the group of interest is Italians, observing a negative b value suggests they obtain a lower optimism score. Categorical variables and regression: In an effects-coded version of the same example, Other is the group of least interest: it is coded −1 on every code variable, while French, Italian, and German are each coded 1 on their own code variable and 0 on the others. Categorical variables and regression: Contrast coding The contrast coding system allows a researcher to directly ask specific questions. Rather than having the coding system dictate the comparison being made (i.e., against a control group as in dummy coding, or against all groups as in effects coding) one can design a unique comparison catering to one's specific research question. This tailored hypothesis is generally based on previous theory and/or research. The hypotheses proposed are generally as follows: first, there is the central hypothesis which postulates a large difference between two sets of groups; the second hypothesis suggests that within each set, the differences among the groups are small. Through its a priori focused hypotheses, contrast coding may yield an increase in power of the statistical test when compared with the less directed previous coding systems. Certain differences emerge when we compare our a priori coefficients between ANOVA and regression.
Unlike when used in ANOVA, where it is at the researcher's discretion whether they choose coefficient values that are either orthogonal or non-orthogonal, in regression, it is essential that the coefficient values assigned in contrast coding be orthogonal. Furthermore, in regression, coefficient values must be either in fractional or decimal form. They cannot take on interval values. Categorical variables and regression: The construction of contrast codes is restricted by three rules: The sum of the contrast coefficients per each code variable must equal zero. The difference between the sum of the positive coefficients and the sum of the negative coefficients should equal 1. Coded variables should be orthogonal.Violating rule 2 produces accurate R2 and F values, indicating that we would reach the same conclusions about whether or not there is a significant difference; however, we can no longer interpret the b values as a mean difference. Categorical variables and regression: To illustrate the construction of contrast codes consider the following table. Coefficients were chosen to illustrate our a priori hypotheses: Hypothesis 1: French and Italian persons will score higher on optimism than Germans (French = +0.33, Italian = +0.33, German = −0.66). This is illustrated through assigning the same coefficient to the French and Italian categories and a different one to the Germans. The signs assigned indicate the direction of the relationship (hence giving Germans a negative sign is indicative of their lower hypothesized optimism scores). Hypothesis 2: French and Italians are expected to differ on their optimism scores (French = +0.50, Italian = −0.50, German = 0). Here, assigning a zero value to Germans demonstrates their non-inclusion in the analysis of this hypothesis. Again, the signs assigned are indicative of the proposed relationship. Categorical variables and regression: Nonsense coding Nonsense coding occurs when one uses arbitrary values in place of the designated “0”s “1”s and “-1”s seen in the previous coding systems. Although it produces correct mean values for the variables, the use of nonsense coding is not recommended as it will lead to uninterpretable statistical results. Categorical variables and regression: Embeddings Embeddings are codings of categorical values into low-dimensional real-valued (sometimes complex-valued) vector spaces, usually in such a way that ‘similar’ values are assigned ‘similar’ vectors, or with respect to some other kind of criterion making the vectors useful for the respective application. A common special case are word embeddings, where the possible values of the categorical variable are the words in a language and words with similar meanings are to be assigned similar vectors. Categorical variables and regression: Interactions An interaction may arise when considering the relationship among three or more variables, and describes a situation in which the simultaneous influence of two variables on a third is not additive. Interactions may arise with categorical variables in two ways: either categorical by categorical variable interactions, or categorical by continuous variable interactions. Categorical by categorical variable interactions This type of interaction arises when we have two categorical variables. In order to probe this type of interaction, one would code using the system that addresses the researcher's hypothesis most appropriately. The product of the codes yields the interaction. 
One may then calculate the b value and determine whether the interaction is significant. Categorical variables and regression: Categorical by continuous variable interactions Simple slopes analysis is a common post hoc test used in regression which is similar to the simple effects analysis in ANOVA, used to analyze interactions. In this test, we are examining the simple slopes of one independent variable at specific values of the other independent variable. Such a test is not limited to use with continuous variables, but may also be employed when the independent variable is categorical. We cannot simply choose values to probe the interaction as we would in the continuous variable case because of the nominal nature of the data (i.e., in the continuous case, one could analyze the data at high, moderate, and low levels assigning 1 standard deviation above the mean, at the mean, and at one standard deviation below the mean respectively). In our categorical case we would use a simple regression equation for each group to investigate the simple slopes. It is common practice to standardize or center variables to make the data more interpretable in simple slopes analysis; however, categorical variables should never be standardized or centered. This test can be used with all coding systems.
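To make the coding schemes above concrete, here is a minimal numerical sketch. The optimism scores are invented purely for illustration (they are not taken from any study); the dummy codes and the contrast coefficients follow the French/Italian/German/Other example and the two a priori hypotheses given earlier in this entry.

```python
# Minimal sketch of dummy coding and contrast coding; the optimism scores are
# hypothetical values invented for this illustration.
import numpy as np

scores = {                       # hypothetical optimism scores per nationality
    "French":  [6.0, 7.0, 8.0],  # control / reference group
    "Italian": [5.0, 6.0, 4.0],
    "German":  [7.0, 8.0, 9.0],
    "Other":   [6.0, 5.0, 7.0],
}
coded_groups = ["Italian", "German", "Other"]   # g - 1 = 3 dummy code variables

# Build the design matrix: a leading 1 for the intercept (a), then C1..C3.
rows, y = [], []
for group, values in scores.items():
    dummies = [1.0 if group == g else 0.0 for g in coded_groups]
    for v in values:
        rows.append([1.0] + dummies)
        y.append(v)

coef, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
a, b = coef[0], coef[1:]
print("a = mean of the French control group:", round(a, 2))
for g, bg in zip(coded_groups, b):
    # Each b is that group's mean minus the French mean; negative means lower optimism.
    print(f"b for {g}: {bg:+.2f}")

# Contrast coding: the two a priori hypotheses from the text (three groups here).
C1 = np.array([+0.33, +0.33, -0.66])   # Hypothesis 1: French & Italian vs. German
C2 = np.array([+0.50, -0.50,  0.00])   # Hypothesis 2: French vs. Italian
print("each contrast sums to zero:", np.isclose(C1.sum(), 0), np.isclose(C2.sum(), 0))
print("contrasts are orthogonal:", np.isclose(np.dot(C1, C2), 0))
```

Running the sketch recovers the control-group mean as the intercept and each group's mean difference as its b value, and confirms that the listed contrast coefficients sum to zero and are mutually orthogonal.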
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DevOps toolchain** DevOps toolchain: A DevOps toolchain is a set or combination of tools that aid in the delivery, development, and management of software applications throughout the systems development life cycle, as coordinated by an organisation that uses DevOps practices. Generally, DevOps tools fit into one or more activities, which supports specific DevOps initiatives: Plan, Create, Verify, Package, Release, Configure, Monitor, and Version Control. Toolchains: In software, a toolchain is the set of programming tools that is used to perform a complex software development task or to create a software product, which is typically another computer program or a set of related programs. In general, the tools forming a toolchain are executed consecutively so the output or resulting environment state of each tool becomes the input or starting environment for the next one, but the term is also used when referring to a set of related tools that are not necessarily executed consecutively.As DevOps is a set of practices that emphasizes the collaboration and communication of both software developers and other information technology (IT) professionals, while automating the process of software delivery and infrastructure changes, its implementation can include the definition of the series of tools used at various stages of the lifecycle; because DevOps is a cultural shift and collaboration between development and operations, there is no one product that can be considered a single DevOps tool. Instead a collection of tools, potentially from a variety of vendors, are used in one or more stages of the lifecycle. Stages of DevOps: Plan Plan is composed of two things: "define" and "plan". This activity refers to the business value and application requirements. Specifically "Plan" activities include: Production metrics, objects and feedback Requirements Business metrics Update release metrics Release plan, timing and business case Security policy and requirementA combination of the IT personnel will be involved in these activities: business application owners, software development, software architects, continual release management, security officers and the organization responsible for managing the production of IT infrastructure. Stages of DevOps: Create Create is composed of the building, coding, and configuring of the software development process. The specific activities are: Design of the software and configuration Coding including code quality and performance Software build and build performance Release candidateTools and vendors in this category often overlap with other categories. Because DevOps is about breaking down silos, this is reflective in the activities and product solutions. Stages of DevOps: Verify Verify is directly associated with ensuring the quality of the software release; activities designed to ensure code quality is maintained and the highest quality is deployed to production. The main activities in this are: Acceptance testing Regression testing Security and vulnerability analysis Performance Configuration testingSolutions for verify related activities generally fall under four main categories: Test automation, Static analysis, Test Lab, and Security. Stages of DevOps: Packaging Packaging refers to the activities involved once the release is ready for deployment, often also referred to as staging or Preproduction / "preprod". 
This often includes tasks and activities such as: Approval/preapprovals Package configuration Triggered releases Release staging and holding Release Release related activities include schedule, orchestration, provisioning and deploying software into production and targeted environment. The specific Release activities include: Release coordination Deploying and promoting applications Fallbacks and recovery Scheduled/timed releasesSolutions that cover this aspect of the toolchain include application release automation, deployment automation and release management. Stages of DevOps: Configure Configure activities fall under the operation side of DevOps. Once software is deployed, there may be additional IT infrastructure provisioning and configuration activities required. Specific activities including: Infrastructure storage, database and network provisioning and configuring Application provision and configuration.The main types of solutions that facilitate these activities are continuous configuration automation, configuration management, and infrastructure as code tools. Stages of DevOps: Monitor Monitoring is an important link in a DevOps toolchain. It allows IT organization to identify specific issues of specific releases and to understand the impact on end-users. A summary of Monitor related activities are: Performance of IT infrastructure End-user response and experience Production metrics and statisticsInformation from monitoring activities often impacts Plan activities required for changes and for new release cycles. Stages of DevOps: Version Control Version Control is an important link in a DevOps toolchain and a component of software configuration management. Version Control is the management of changes to documents, computer programs, large web sites, and other collections of information. A summary of Version Control related activities are: Non-linear development Distributed development Compatibility with existent systems and protocols Toolkit-based designInformation from Version Control often supports Release activities required for changes and for new release cycles.
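As a loose illustration of the "executed consecutively" idea described above, where the output or resulting state of each tool becomes the input of the next, the following sketch chains a few placeholder stage functions named after the activities listed in this entry. The functions are hypothetical stand-ins, not real tools or vendor products.

```python
# Hypothetical sketch of a toolchain as consecutively executed stages: each stage
# receives the accumulated state and returns it enriched for the next stage.
from functools import reduce

def plan(state):    return {**state, "requirements": ["feature A"]}
def create(state):  return {**state, "artifact": "app-1.0.tar.gz"}
def verify(state):  return {**state, "tests_passed": True}
def package(state): return {**state, "release_candidate": "app-1.0-rc1"}
def release(state): return {**state, "deployed_to": "production"}
def monitor(state): return {**state, "p95_latency_ms": 120}

toolchain = [plan, create, verify, package, release, monitor]
final_state = reduce(lambda state, stage: stage(state), toolchain, {})
print(final_state)
```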
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Turkey ham** Turkey ham: Turkey ham is a ready-to-eat, processed meat made from cooked or cured turkey meat, water and other ingredients such as binders. Turkey ham products contain no pork products. Several companies in the United States produce turkey ham and market it under various brand names. It was invented circa 1975 by Jennie-O, which first introduced it to consumers that year. Around January 1980, the American Meat Institute tried to ban use of the term "turkey ham" for products that are composed solely of turkey and contain no pork. Turkey ham may also be used as a substitute for bacon where religious restrictions forbid the consumption of pork. Overview: Turkey ham is a processed meat product made primarily from cooked or cured turkey meat and water, formed into the shape of a ham and often sold pre-sliced. It is a ready-to-eat product that can be consumed cold or heated. Overview: Production Turkey ham is produced from turkey meat such as cured turkey thigh meat and other meat from the animals, which can be machine-deboned. Despite the product's name, turkey ham products do not contain any ham or pork products. Some turkey ham products are manufactured with added water, which adds moisture and weight, and some include binders, which serve to bind the moisture and fat in the meat to improve texture. Turkey ham is sometimes flavored to resemble the flavor of ham. Turkey ham typically has a 5 percent fat content, and some turkey hams are produced as fat-free. Turkey hams are typically produced in two sizes, whole and half-sized. Some U.S. producers and brands of turkey ham include Butterball, Cargill, Jennie-O, Louis Rich, Norbest and Oscar Mayer. History: Turkey ham was developed by Jennie-O and was first introduced to American consumers by the company in 1975. Turkey ham was a successful venture for Jennie-O, as the processed meat brought in revenues that were ten times higher compared to those the company realized from unprocessed turkey thighs. Labeling: Around January 1980, the American Meat Institute (AMI) attempted to ban the use of the term "turkey ham" for products that contain no ham and are entirely composed of turkey, which the AMI described as "flagrant consumer deception". Use of the term "turkey ham" for such products was also opposed by some ham producers in the United States. Around this time, the U.S. government began requiring turkey ham producers to include the words "cured turkey thigh meat" on turkey ham packaging. In 2010, it was written in the Handbook of Poultry Science and Technology, Secondary Processing that the term "cured turkey thigh meat" always followed the words "turkey ham" on American turkey ham packaging.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Asexual reproduction in starfish** Asexual reproduction in starfish: Asexual reproduction in starfish takes place by fission or through autotomy of arms. In fission, the central disc breaks into two pieces and each portion then regenerates the missing parts. In autotomy, an arm is shed with part of the central disc attached, which continues to live independently as a "comet", eventually growing a new set of arms. Although almost all sea stars can regenerate their limbs, only a select few sea star species are able to reproduce in these ways. Fission: Fissiparity in the starfish family Asteriidae is confined to the genera Coscinasterias, Stephanasterias and Sclerasterias. Another family in which asexual reproduction by fission has independently arisen is the Asterinidae. The life span is at least four years.A dense population of Stephanasterias albula was studied at North Lubec, Maine. All the individuals were fairly small, with arm lengths not exceeding 18 mm (0.71 in), but no juveniles were found, suggesting that there had been no recent larval recruitment and that this species may be obligately fissiparous. Fission seemed to take place only in the spring and summer and for any individual, occurred once a year or once every two years.Another species, Coscinasterias tenuispina, has a variable number of arms but is often found with 7 arms divided into dis-similar sized groups of 3 and 4. It is unclear why fission starts in any particular part of the disc rather than any other, but the origin seemed to bear some relation to the position of the madreporites and the longest arm. This species typically reproduces sexually in the winter and by fission at other times of year. The undivided individual has 1 to 5 madreporites and at least one is found in each offspring. New arms usually appear in groups of 4 and are normally accompanied by the appearance of additional madreporites. The presence of multiple madreporites seems to be a prerequisite of fission. In Brazil, only male individuals have been found and fission takes place all the year round, though primarily in the winter. Fission seems to be correlated with certain stress factors such as particularly low tides, when many starfish may be exposed to the air.Nepanthia belcheri has a variable number of arms and divides by fission in a similar manner. It is a hermaphrodite, some individuals having gonads that function as testes and others gonads that function as ovaries. After fission, the gonads regress and individuals that previously had mature female gonads become masculinized, developing male-type gonads. Many larger individuals provide evidence from the varying lengths of their arms that they have divided by fission on several occasions.In Sclerasterias, fissiparity seems to be confined to very young individuals. In these, there is a transitory hexamerous symmetry in what is a normally a pentamerously symmetrical genus. The immature individuals with 6 arms appear so different in appearance from mature individuals with 5 arms that they were at one time considered to be two genera, Hydrasterias and Sclerasterias. Juveniles with arms measuring between 8 mm and 15 mm (occasionally 20 mm) are usually involved in fission and undergo multiple divisions. A sample of 36 young Sclerasterias euplecta of this size was examined. 9 had only 5 arms and did not show evidence of fissiparity while the remainder had 6 arms, usually 3 longer than the other 3, following prior fission. 
In another sample of juvenile Scierasterias heteropau, the arms were similarly arranged in groups of three and there were 4 madreporites, 2 on the original and 2 on the regenerated section. Active fissiparity seems to be correlated with 6 arms and 4 madreporites. At some stage in their development as yet unexplained, only 5 arms and one madreporite appear, and the ability to divide in this way is lost. Autotomy as a means of asexual reproduction: History Writing in 1872, Lutken suggested that in certain members of the Ophiuroidea, "a radiary division occurs in which cast off arms formed new rays and a disk". Six years later Ernst Haeckel observed that members of the genus Ophidiaster (Linckia) were prone to cast off arms and that new discs, arms, madreporites and mouths formed on the severed surface of these.In 1904, Kellogg observed numerous severed arms on reefs at Apia, Samoa, noting that many were sprouting new arms and suggested that Linckia diplax and Linckia pacifica had the ability to generate new individuals in this way. He thought the arms might be shed by autotomy. In the same year, Monks showed experimentally that the "comets" developing from the severed arms of Linckia columbiae could indeed grow into new individuals. Autotomy as a means of asexual reproduction: Autotomy of arms Linckia multifora and Linckia guildingi are two species of starfish found on Hawaii which were found to exhibit autotomy, shedding one or more arms frequently. The arms are known as "comets" and can move about independently and each one can grow into a new individual. Though severed from the nervous system and the water vascular system they still exhibit normal behaviour patterns.In a study undertaken in Hawaii, it was found that the detachment of an arm was not a sudden event. Most fractures took place about 2.5 cm (1 in) from the disk and started with a small crack appearing on the lower surface of the arm. This spread laterally and upwards towards the dorsal surface. Then the tube feet on the arm and those on the body pulled the two parts of the animal in opposite directions until they parted. The process could take about an hour to complete. The damaged tissue healed in about 10 days and the animal grew a new arm over the course of several months. Breaks took place in various positions on the arm, though Crozier noted a particular breaking zone in Coscinasterias tenuispina. The immediate cause of the autotomy is not always apparent. Of 50 specimens of Linckia multifora brought to the laboratory, 18 had shed one or more arms within 24 hours. The mortality rate of newly severed arms was high, many succumbing to bacterial infection while the wounds were fresh. Once the wound had healed, in about 10 days, survival was more likely.When arms were severed into several lengths in the laboratory, it was found that those over 1 cm (0.4 in) in length were capable of regenerating. These included the tips of the arms and the central sections with wounds at each end. It takes about 10 months to regenerate a new disk with arms 1 cm (0.4 in) in length. The first development in the regeneration cycle is the formation of a crescent-shaped ridge at the damaged end. Grooves begin to form and a mouth develops at the point from which they radiate. The arms start to form and tube feet begin to appear. As the arms grow the disc begins to develop and eventually a madreporite appears. This process lasts for some time, and about 10 months after separation, the comet has a half disc and 4 arms about 1 cm (0.4 in) long.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pseudoconvex function** Pseudoconvex function: In convex analysis and the calculus of variations, both branches of mathematics, a pseudoconvex function is a function that behaves like a convex function with respect to finding its local minima, but need not actually be convex. Informally, a differentiable function is pseudoconvex if it is increasing in any direction where it has a positive directional derivative. The property must hold in all of the function domain, and not only for nearby points. Formal definition: Consider a differentiable function f : X ⊆ Rⁿ → R, defined on a (nonempty) convex open set X of the finite-dimensional Euclidean space Rⁿ. This function is said to be pseudoconvex if the following property holds: for all x, y ∈ X, ∇f(x) ⋅ (y − x) ≥ 0 implies f(y) ≥ f(x). Equivalently: f(y) < f(x) implies ∇f(x) ⋅ (y − x) < 0. Here ∇f is the gradient of f, defined by ∇f = (∂f/∂x₁, …, ∂f/∂xₙ). Note that the definition may also be stated in terms of the directional derivative of f, in the direction given by the vector v = y − x. This is because, as f is differentiable, this directional derivative is given by ∇f(x) ⋅ v. Properties: Relation to other types of "convexity" Every convex function is pseudoconvex, but the converse is not true. For example, the function f(x) = x + x³ is pseudoconvex but not convex. Similarly, any pseudoconvex function is quasiconvex; but the converse is not true, since the function f(x) = x³ is quasiconvex but not pseudoconvex. This can be summarized schematically as: convex ⟹ pseudoconvex ⟹ quasiconvex, and neither implication can be reversed. To see that f(x) = x³ is not pseudoconvex, consider its derivative at x = 0: f′(0) = 0. Then, if f(x) = x³ were pseudoconvex, we should have f(y) ≥ f(0) = 0 for every y, since f′(0) ⋅ (y − 0) = 0 ≥ 0. In particular it should be true for y = −1. But it is not, as f(−1) = (−1)³ = −1 < f(0) = 0. Sufficient optimality condition For any differentiable function, we have Fermat's theorem as a necessary condition of optimality, which states that: if f has a local minimum at x* in an open domain, then x* must be a stationary point of f (that is: ∇f(x*) = 0). Properties: Pseudoconvexity is of great interest in the area of optimization, because the converse is also true for any pseudoconvex function. That is: if x* is a stationary point of a pseudoconvex function f, then f has a global minimum at x*. Note also that the result guarantees a global minimum (not only local). Properties: This last result is also true for a convex function, but it is not true for a quasiconvex function. Consider, for example, the quasiconvex function f(x) = x³ discussed above. This function is not pseudoconvex, but it is quasiconvex. Also, the point x = 0 is a critical point of f, as f′(0) = 0. However, f does not have a global minimum at x = 0 (not even a local minimum). Properties: Finally, note that a pseudoconvex function may not have any critical point. Take for example the pseudoconvex function f(x) = x³ + x, whose derivative is always positive: f′(x) = 3x² + 1 > 0 for all x ∈ R. Examples: The entry illustrates, with figures drawn for specific parameter values, a function of one variable that is pseudoconvex but not convex, a generalization of it to two variables, and a modified version that is neither convex nor pseudoconvex but is quasiconvex; the last is not convex because of its concavity and is not pseudoconvex because it is not differentiable at x = 0. Generalization to nondifferentiable functions: The notion of pseudoconvexity can be generalized to nondifferentiable functions as follows. Given any function f : X → R, we can define the upper Dini derivative of f by f⁺(x, u) = lim sup (h → 0⁺) [f(x + hu) − f(x)] / h, where u is any unit vector.
The function is said to be pseudoconvex if it is increasing in any direction where the upper Dini derivative is positive. More precisely, this is characterized in terms of the subdifferential ∂f as follows: where [x,y] denotes the line segment adjoining x and y. Related notions: A pseudoconcave function is a function whose negative is pseudoconvex. A pseudolinear function is a function that is both pseudoconvex and pseudoconcave. For example, linear–fractional programs have pseudolinear objective functions and linear–inequality constraints. These properties allow fractional-linear problems to be solved by a variant of the simplex algorithm (of George B. Dantzig).Given a vector-valued function η , there is a more general notion of η -pseudoconvexity and η -pseudolinearity; wherein classical pseudoconvexity and pseudolinearity pertain to the case when η(x,y)=y−x
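To make the differentiable definition above concrete, here is a small numerical sketch, not part of the original entry, that tests the implication f′(x) ⋅ (y − x) ≥ 0 ⟹ f(y) ≥ f(x) on a finite grid for the two one-dimensional examples discussed earlier.

```python
# Numerical sketch of the pseudoconvexity test
#   f'(x) * (y - x) >= 0  =>  f(y) >= f(x)
# checked over every ordered pair on a finite grid (a heuristic check, not a proof).
import numpy as np

def is_pseudoconvex_on_grid(f, df, xs):
    """Return (True, None) if no counterexample pair is found, else (False, (x, y))."""
    for x in xs:
        for y in xs:
            if df(x) * (y - x) >= 0 and f(y) < f(x) - 1e-12:
                return False, (x, y)
    return True, None

xs = np.linspace(-2.0, 2.0, 81)

# f(x) = x**3 + x: derivative 3x^2 + 1 > 0 everywhere, so the function is pseudoconvex.
print(is_pseudoconvex_on_grid(lambda x: x**3 + x, lambda x: 3 * x**2 + 1, xs))

# f(x) = x**3: at x = 0 the derivative vanishes, so the premise holds for every y,
# but f(-1) = -1 < f(0) = 0, so the implication fails and a counterexample is reported.
print(is_pseudoconvex_on_grid(lambda x: x**3, lambda x: 3 * x**2, xs))
```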
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Railway platform height** Railway platform height: Railway platform height is the built height – above top of rail (ATR) – of passenger platforms at stations. A connected term is train floor height, which refers to the ATR height of the floor of rail vehicles. Worldwide, there are many, frequently incompatible, standards for platform heights and train floor heights. Where raised platforms are in use, train widths must also be compatible, in order to avoid both large gaps between platforms and trains and mechanical interference liable to cause equipment damage. Railway platform height: Differences in platform height (and platform gap) can pose a risk for passenger safety. Differences between platform height and train floor height may also make boarding much more difficult, or impossible, for wheelchair-using passengers and people with other mobility impairments, increasing station dwell time as platform or staff are required to deploy ramps to assist boarding. Platform ramps, steps, and platform gap fillers together with hazard warnings such as "mind the gap" are used to reduce risk and facilitate access. Platform height affects the loading gauge (the maximum size of train cars), and must conform to the structure gauge physical clearance specifications for the system. Tracks which are shared between freight and passenger service must have platforms which do not obstruct either type of railroad car. Railway platform height: To reduce construction costs, the platforms at stations on many railway systems are of low height, making it necessary for passenger cars to be equipped with external steps or internal stairs allowing passengers access to and from car floor levels. When railways were first introduced in the 19th century, low platforms were widely used from the 1880s, especially in rural areas, except in the United Kingdom. Over the years, raised platforms have become far more widespread, and are almost universal for high-speed express routes and universal in cities on commuter and rapid transit lines. Raised platforms on narrow gauge railways can prevent track gauge conversion to standard gauge or broad gauge. Height categories: Buses, trams, trolleys, and railway passenger cars are divided into several typical categories. Height categories: Ultra Low Floor tram – 180 mm (7 in) Low floor tram – 300 to 350 mm (12 to 14 in) High floor tram – more than 600 mm (24 in) Low floor train – 550 mm (22 in) Train (in UK or narrow gauge) – 800 to 1,200 mm (31.5 to 47.2 in) Standard North American passenger cars – 1,300 mm (51 in) Train (standard gauge (except UK) or broad gauge) – 1,300 to 1,370 mm (51 to 54 in)These are floor heights. The platforms can be much lower, overcome by onboard staircases. Africa: Algeria Typical Algerian platforms are 550 mm (21.7 in) above rail. Kenya The 1,435 mm (4 ft 8+1⁄2 in) SGR platforms are two standard heights of 300 mm (11.8 in) and 1,250 mm (49.2 in) above rail heads. The 1,000 mm (3 ft 3+3⁄8 in) meter gauge platforms are 1100mm. Asia: China China Railway platforms are classified into the following categories of "low" 380 mm (15.0 in), "medium" 550 mm (21.7 in), "high" 760 mm (29.9 in) and "ultra high" 1,250 mm (49.2 in) (latter 2 for most new and rebuilt platforms). Areas adjacent to broad gauge countries/regions, such as Xinjiang and Inner-Mongolia, are still equipped with low platforms. 
Under the concession period since late 2016, platforms on the southeastern corridor from Shenzhen to Ruili to be 1,250 mm (49.2 in) ATR, whereas the northern-, central-, and western-Chinese platforms to be 380 mm (15.0 in) ATR, are recommended. Asia: Most CRH platforms are 1,250 millimetres (49.2 in) above top of rail, with the remainders being 760 millimetres (29.9 in). The proposed 1,524 mm (5 ft) (Russian gauge) Rail North China platforms will be 200 mm (7.9 in) above rails. Hong Kong Hong Kong's railway network consists of the local MTR network (including the former KCR), Hong Kong Tramways, and the Hong Kong section of the XRL high-speed line. Asia: MTR network Platforms on the MTR are 1,250 mm (49.213 in) above the rail for the Tung Chung line and Airport Express, collectively known as the Airport Railway lines.The height of platforms on the Disneyland Resort line and the urban lines are 1,100 mm (43.307 in). The urban lines include the Tsuen Wan line, Kwun Tong line, Tseung Kwan O line, Island line, and South Island line. Asia: Former KCR network All platforms on the East Rail line and Tuen Ma line are 1,066.8 mm (42 in) above rail heads.The light rail system uses a platform height of 910 mm (36 in) above rail level. High-speed rail line Trains at Hong Kong West Kowloon railway station travel along the XRL on China's high-speed rail system and so must be compliant with the platform height standard of 1,250 mm (49.213 in) above the rail. India There are two standard heights of platforms in India: 200 mm (7.9 in) and 760 mm (29.9 in). Indonesia There are three standard heights of the platforms, 180 mm (7.1 in) (low), 430 mm (16.9 in) (medium), and 1,000 mm (39.4 in) (high) above rail heads. Most railway stations in Indonesia use low platforms. Iran Iran's platforms are 380 mm (15.0 in), 550 mm (21.7 in) and 760 mm (29.9 in). Like in China, areas adjacent to broad gauge countries/regions such as the eastern regions such as around Mashhad and Zahedan, still equipped low platforms. Israel Israel Railways platforms fall in the range between 760 mm (29.9 in) to 1,060 mm (41.7 in) above top of rail. Asia: Japan The Japanese National Railways (JNR) for many years used a triple-standard for its conventional (Cape gauge) lines: 760 mm (29.9 in) for long-distance trains (originally step-fitted passenger cars pulled by steam engines); 1,100 mm (43.3 in) for commuter trains (step-less electric multiple units at a time when long-distance trains were not); and 920 mm (36.2 in) shared platforms that could serve both with relatively little discomfort (roughly level with the step on passenger carriages but not too low to board commuter trains).However, increasing electrification and the phasing-out of locomotive traction in favor of multiple units has made the distinction a matter of historical, rather than practical relevance. Recently, at Japan Railways Group stations in urban centers such as Tokyo and Osaka, whose lines were the earliest to be electrified, 1,100 mm (43.3 in) is the norm and lower-level platforms are generally raised to this height during station improvements or refurbishment. Elsewhere, such as Hokkaido and the Tohoku/Hokuriku region of Honshu, 920 mm (36.2 in) – and even 760 mm (29.9 in) platforms are still commonplace. As this represents a potential obstacle when boarding modern commuter trains, workarounds such as a step built into the floor of area-specific trainsets are often employed. 
Nevertheless, with accessibility becoming a greater concern as Japan's population ages, raising the level of the platform itself (in tandem with other improvements such as elevators and escalators) is seen as the most practical solution. Asia: In at least one case, with the E721 series EMU used on JR East lines in the Tohoku region, the floor of the train itself is lowered to be nearly level to existing 920 mm (36.2 in) platforms. This makes level boarding feasible at many stations (and boarding less of a hassle at stations with the lowest 760 mm (29.9 in) platforms). However, this (along with a different standard of electrification) also makes through service southward to Tokyo impossible, and prevents them from running on certain through lines, such as the Senseki-Tohoku Line, since the Senseki Line portion uses the higher 1,100 mm (43.3 in) platforms (and DC electrification). Asia: In contrast to the above standards, the standard gauge Shinkansen (Bullet Train) has, since its original inception, used only 1,250 mm (49.2 in) platforms. However, exceptions from this include the "Mini-Shinkansen" Yamagata Shinkansen and Akita Shinkansen lines, which use 1,100 mm (43.3 in) platforms to maintain compatibility with conventional JR trainsets. Most standard gauge non-JR commuter railways, such as Kintetsu Nara Line and Keisei Line, use 1,250 mm (49.2 in) platforms. North Korea North Korea's platforms are standardized at 1,250 mm (49.2 in) only. In there, 1,250 mm (49.2 in) is the norm, lower-level platforms are already raised to this height. Asia: South Korea Korail adopted 550 mm (21.7 in) high platforms to operate KTX. Typically, older platforms are lower than 500 mm. For metro trains, higher platforms which height after 1,135 mm (44.7 in) are used. Nuriro trains are using mechanical steps to allow both type of platforms. Korail has a long-term plan to change platform standards to higher platforms; both KTX-Eum and EMU-320 are designed to use higher platforms. Asia: Philippines There are various platform heights for railway lines in the Philippines. For heavy rail and commuter rail systems such as the LRT Line 2 and the PNR Metro Commuter Line, most stations are generally set at 1,100 mm (43.3 in). For the LRT Line 1 and MRT Line 3 which use light rail vehicles, the platform heights are at 620 mm (24.4 in) and 920 mm (36.2 in), respectively. Future train lines such as the Metro Manila Subway and the North–South Commuter Railway will use the same heavy rail standard at 1,100 mm (43.3 in), while the PNR South Long Haul's platform height will be the Chinese standard of 1,250 mm (49.2 in). All cargo loading platforms are 1,250 mm (49.2 in). Asia: Previously, the Philippine National Railways had lower platforms prior to the 2009 reconstruction of its network. Some stations such as Santa Mesa have its 200 mm (7.9 in) curb height platforms still intact as of 2020, while others such as Naga and EDSA have 760 mm (29.9 in) platforms built during the early 2000s. Taiwan Taiwan high-speed rail platforms are 1,250 mm (49.2 in) above rail. Asia: In Taiwan, Taiwan Railways Administration's platforms were 760 mm (29.9 in) tall and passengers must take two stair steps to enter the train. In 2001, however, the platforms were raised to 960 mm (37.8 in), cutting the steps needed to one. Between 2016 and 2020, platforms were again raised to 1,150 mm (45.3 in), and the unnecessary gap on trains were filled in. Asia: Thailand Old railway platforms are usually less than 500 mm (20 in) in height. 
New platforms along double tracking projects, red line projects, and metro stations are built at 1,100 mm (43.3 in) height. Bang Bamru railway station is built with both high and low platforms. Pakistan In Pakistan, most platforms are 200 mm (7.9 in) above rail. Turkmenistan In Turkmenistan, most platforms are 200 mm (7.9 in) above rail. Uzbekistan In Uzbekistan, most platforms are 200 mm (7.9 in) above rail. Eurasia: Kazakhstan In Kazakhstan, only Astana Nurly Jol station and Russian Railway's Petropavlovsk station have 550 mm (21.7 in) platforms. Almost everywhere else, the platforms are 200 mm (7.9 in) above top of rail. Eurasia: Russia As of late 2015, there are three standard heights of platforms, which include: 200 mm (7.9 in) for long-distance trains (originally locomotive-hauled step-fitted passenger carriages); 1,100 mm (43.3 in) for direct-current only commuter trains (step-less direct current commuter electric multiple units at a time when long-distance trains were not); and 550 mm (21.7 in) for shared platforms that could serve both with relatively little discomfort (roughly level with the steps on passenger carriages but not too low to board commuter trains).In some urban areas, such as Moscow and St Petersburg, served only by local traffic, use 1,100 mm (43.3 in) platforms for direct-current electric multiple units. Elsewhere, 550 mm (21.7 in) and even 200 mm (7.9 in) platforms are almost commonplace. In some cases, such as VR Sm4 of Finland, the floor of the train itself lowered to be nearly level to 550 mm (21.7 in) platforms. This makes level boarding feasible at some stations (and boarding less of a hassle at stations with the lowest 200 mm (7.9 in) platforms). Eurasia: The proposed 1,676 mm (5 ft 6 in) Indian gauge Indo-Siberian railways platforms will be 200 mm (7.9 in) above top of rail. Turkey In Turkey, the standard platform height for commuter railways is 1,050 mm (41.3 in) and for mainline & high-speed railways it's 550 mm (21.7 in). But most of the platforms throughout the network are old and thus out of standard. Europe: European Union The European Union Commission issued a TSI (Technical Specifications for Interoperability) on 30 May 2002 (2002/735/EC) that sets out standard platform heights for passenger steps on high-speed rail. These standard heights are 550 and 760 mm (21.7 and 29.9 in) . There are special cases: 840 mm (33.1 in) for the Netherlands, 915 mm (36.0 in) for Great Britain, and 915 mm (36.0 in) for Ireland. Europe: Broad-gauge railways The proposed 1,520 mm (4 ft 11+27⁄32 in) (Russian gauge) railways (e.g. Arctic Railway and Kosice-Vienna broad gauge line) and the proposed 7 ft 1⁄4 in (2,140 mm) (Brunel gauge) railways will be 200 mm (7.9 in) for Sweden and Norway, 200 mm (7.9 in) and 550 mm (21.7 in) for Poland and Slovakia, and 380 mm (15.0 in) for Germany and Austria. Europe: Channel Tunnel Platforms for Eurotunnel Shuttle are 1,100 mm (43.3 in) above rails. Europe: Rail Baltica The 1,435 mm (4 ft 8+1⁄2 in) European standard gauge Rail Baltica II platforms will be 1,250 mm (49.2 in) above rails. Previously, this line would be 550 mm (21.7 in) above rails, but cut off the Lithuanian sections and eliminate the freight transport provision make change to high-floor level-boarding trains on the European standard gauge tracks, much like the US's Brightline West and the UK's High Speed 2. Europe: Belgium Belgium has been using mixed type of platform heights (due to the age of the network, and the different companies running it before 1923). 
As of 2017 the most common platform heights for small stop places and stations are low platform heights of 280 mm (11.0 in).There is a plan to comply with the European TSI by raising all low platform heights to one of the European Standard Heights. Most stations will by then be equipped with 550 mm platforms, and direct current EMUs dedicated platforms will be upgraded in their final version to 760 mm. Some stations, or stopping points, already having 760 mm platform heights will keep the platforms at these heights. Europe: Finland In Finland, the current standard platform height is 550 mm (21.7 in) in Helsinki/Turku urban areas. Platforms that in the reminder of the network are built to the older standard of ranging 127 mm (5.0 in) to 265 mm (10.4 in) above top of rail.The sole exception on the national railway network is the Nikkilä halt which has a platform height of 400 mm (15.8 in).The majority of the passenger rolling stocks in Finland and the other Russian gauge compatible network have bottom steps lower than 550 mm (21.7 in), thus the platforms with 550 mm (21.7 in) height can create negative vertical gaps, unlike the rest of Europe. There are current proposed figures: Minimum height clearance of the overhead bridges must be 8.1 m (26 ft 7 in) above platform level to provide tracks raising/lowering to changing platform heights between 127 mm (5.0 in) and 550 mm (21.7 in) without major structural change, and also provide container double-stacking under 25kV AC overhead lines. Europe: Platform heights of ranging 127 mm (5.0 in) to 265 mm (10.4 in) for long-distance trains. Platform height of 550 mm (21.7 in) for commuter trains. Platform height of 350 mm (13.8 in) for shared platforms. Europe: Germany Germany's EBO standard specifies an allowable range between 380 mm (15.0 in) and 960 mm (37.8 in) . This does not include light rail systems that follow the BOStrab standard, with newer metro lines to use low-floor trams which have a usual floor height of 300 to 350 mm (11.8 to 13.8 in) so that platforms are constructed as low as 300 mm in accordance with BOStrab that requires the platform height not to be higher than the floor height.The traditional platforms had a very diverse height as the nationwide railway network is a union of earlier railway operators. Prior to followed by the European TSI standard the EBO standard requires that new platform construction be at a regular height of 760 mm (29.9 in) . The TSI standard of 550 mm (21.7 in) height, historically common in the East, is widely used on regional lines. Only the S-Bahn suburban rail systems had a higher platform height and these are standardized on 960 mm (37.8 in). Europe: Ireland While older platforms on the Dublin and Kingstown Railway were at lower levels, all platforms are now 915mm above rail and all new platforms are being built at that level. Amongst other work, there is an ongoing program of platform renewal. Both of Ireland's railway companies (Irish Rail in the Republic of Ireland and Northern Ireland Railways in Northern Ireland) have had some derogations from EU standards as their mainline rail systems, while connected to each other, are not connected to any other system. Europe: The electric DART fleet has carriage floors at 1,067 mm (42.0 in) above top of rail creating a step of 152 mm (6.0 in) , while the diesel fleet is typically one step (150 to 200 mm or 5.9 to 7.9 in) higher than the platform. On Dublin's Luas tram system, platforms are approximately 280 mm (11 in) above rail. 
Tram floors are at the same height, but have internal steps over the bogies. Luxembourg The 760 mm (29.9 in) platforms for the Namur-Luxembourg line (with 3kV DC electrification). The remainder of the network, the platforms are 380 mm (15.0 in) above rails. Netherlands European Commission decision 2002/735/EC which concerns trans-European interoperability for high-speed rail specifies that rolling stock be built for operational suitability platform height of 840 mm (33.1 in) . Dutch infrastructure maintainer ProRail has committed to upgrading all stations to 760 mm (29.9 in) platform height. Poland Typical platforms in Poland are 760 mm (29.9 in) high. In some rural or urban/suburban areas (e.g. around Warsaw) platforms used by local traffic are lower or higher (550 to 1,060 mm or 21.7 to 41.7 in), respectively. All newly built platforms are 550 or 760 mm (21.7 or 29.9 in) high. Spain While older platforms in Spain are lower than the rest of Europe, many platforms are now 680 mm (27 in) above rail. Following track gauge conversion from Iberian gauge to standard gauge, platforms to be raised to 1,250 mm (49.2 in) for new regional trainsets. Europe: Sweden Sweden has generally 380 to 580 mm (15.0 to 22.8 in) platforms for mainline trains. Stockholm Commuter Rail has almost always its own platforms at 730 mm (28.7 in) height which allows stepless trains of type X60. The Arlanda Express service has 1,150 mm (45.3 in) platform height with floor at platform level. They have their own platforms and trains, which are incompatible with mainline platforms and trains, even if the Arlanda Express goes on a mainline. The stations Sundbyberg and Knivsta have one platform each used by both commuter trains and regional mainline trains, which can cause uncomfortable steps, but is accepted. Sundbyberg has 730 mm and Knivsta has around 500 mm. Stockholm Central station has after the commuter trains moved to the "City" station, two high 730 mm platforms, now used for mainline trains. The Stockholm Metro and Saltsjöbanan have 1,125 mm (44.3 in), while tramways in general have a very low platform, often also used by buses which must allow boarding from places without platform. Europe: United Kingdom The standard height for platforms in the United Kingdom is 915 mm (36.02 in) with a margin of ± 25 mm (0.98 in). On the Heathrow Express the platform height is specified at 1,100 mm (43.3 in) .High Speed 2 is being built with a platform height of 1,115 mm (43.9 in), which does not conform to the European Union technical standards for interoperability for high-speed rail (EU Directive 96/48/EC). This is to provide true step free access to trains at the new HS2 stations, which is not possible using European Standards or UK standard heights. HS2 trains will operate outside of the HS2 line using existing infrastructure, which will not be step free. High Speed 1 has a platform height of 760 mm (29.9 in) on its international platforms. The Great Western Main Line, North London Line, Gospel Oak to Barking Line and Great Eastern Main Line platforms will be mixture of 760 mm (29.9 in) (for intercity trains) and 1,100 mm (43.3 in) (for London commuter trains). Europe: France The standard height for all platforms in France is 550 mm (21.7 in), following the european guidelines. However, this rule is not respected for parts of the RER and Transilien network. 
North America: Canada Intercity and commuter rail In Canada, Via Rail intercity trains have level boarding with platforms 48 inches (1,219 mm) above the top of rail at stub platforms at Montreal Central Station, Quebec City Gare du Palais and a single platform at Ottawa station. The remainder of stations in the Via Rail network have low platforms 5 inches (127 mm) to 8 inches (203 mm) above the rail.GO Transit regional trains have a floor height of 610 millimetres (24 in) above the top of rail, and GO Transit plans to raise platforms to provide level boarding at that height. Currently, platforms are 127 millimetres (5 in) above the top of rail, with a raised "mini-platform" (550 millimetres (22 in) above rails) which provides level boarding from one door of the train.Exo commuter trains have level boarding with platforms 48 inches (1,219 mm), 50 inches (1,270 mm), or 51 inches (1,295 mm) above the top of rail at Montreal Central (stub platforms and REM platforms), Côte-de-Liesse, Repentigny, Terrebonne, and Mascouche stations. The remainder of stations in the Exo network have low platforms 5 inches (127 mm) or 8 inches (203 mm) above the top of rail.All UP Express stations have level boarding with platforms 48 inches (1,219 mm) above the top of rail.West Coast Express has accessible boarding platforms at all stations. However, unlike the SkyTrain, there is a small height difference and door-level for wheelchair access are provided at all stations. North America: Metro and light rail All rapid transit and light rail systems, except for the Toronto streetcar system, provide level boarding between trains and platforms. The platform heights vary per line, as per the table below. North America: On the Toronto streetcar system, most stops are in mixed traffic accessed from the road surface, without raised platforms. Where raised platforms do exist, they are at sidewalk curb height and not at the height of the vehicle floor. As a result, people using wheeled mobility aids need to use the wheelchair ramp even at stops where a raised platform exists. North America: United States New and substantially renovated stations in the United States must comply with the Americans with Disabilities Act, which requires level boarding. Most inter-city and commuter rail systems use either 48-inch (1,219 mm) high platforms that allow level boarding, or 8-inch (203 mm) low platforms. Metro and light rail systems feature a variety of different platform heights. North America: Intercity and commuter rail with high platforms Most commuter rail systems in the northeastern United States have standardized on 48-inch (1,219 mm) high platforms, and is in general the floor height of single-deck trains. This height was introduced in the 1960s on the Long Island Rail Road with the M1 railcars.: 212  MBTA Commuter Rail, CTrail's Hartford Line and Shore Line East, Long Island Rail Road, Metro-North Railroad, NJ Transit, and SEPTA Regional Rail all use this height for new and renovated stations, though low platforms remain at some older stations. North America: Outside the Northeast U.S., Metra Electric District, South Shore Line, RTD, WES Commuter Rail, and SMART use 48-inch platforms. 
MARC has high-level platforms at most Penn Line stations; although low platforms are used on the Camden Line and Brunswick Line due to freight clearances (and in the latter case, the need to operate with the low-floor-only Superliner), Baltimore-Camden and Monocacy (stations outside of freight routes) as well as Greenbelt (a station with passing tracks) still feature high platforms. North America: Amtrak intercity services feature high-level platforms on the Northeast Corridor, Keystone Corridor, Empire Corridor, and New Haven–Springfield Line, although some stations on these lines have not been retrofitted with high platforms. High-level platforms are also present at a small number of stations on other lines, including Worcester, Roanoke, Raleigh, and several Downeaster stations. Brightline service in Florida also uses high level platforms. North America: At some stations, a desired high-level platform is impractical due to wide freight trains or other practicalities. (Gauntlet tracks, which permit wide freights to pass full-length high-level platforms, have practical issues of their own.) At these locations, mini-high platforms are often used for accessibility. Mini-high platforms have a short length of high platform, long enough for one or two doors, with an accessible ramp to the longer low platform. The platform edge is usually hinged so that it can be flipped out of the way of passing freights. North America: Intercity and commuter rail with low platforms Most other commuter rail systems in the U.S. and Amtrak stations have 8-inch (203 mm) low-level platforms to accommodate freight trains, with mini-high platforms or portable lifts to reach the 22-inch (559 mm)-high floors of low-level bilevel railcars. Single-deck cars, which generally serve the prevalent high platforms in the Northeast, feature trapdoors that expose stairs so that passengers can access the low platforms. North America: Double-deck commuter railcars are designed to be compatible with single-deck cars by having a third, intermediate deck above the bogies at both ends, with a matching floor height of 48 inches (1,219 mm). (Mixed consists of double decks and single decks can sometimes be seen in the FrontRunner system in Utah.) The Bombardier BiLevel Coach is used on many commuter rail networks in North America, with Coaster having 22-inch (559 mm) platforms to match their floor height. Once electrified, the new Caltrain trains will be equipped for both 22-and-50.5-inch (559 and 1,283 mm) platform heights in anticipation of sharing facilities with California High-Speed Rail trains. A small number of systems do use low-floor single deck trains, including TEXRail and others that use Stadler FLIRT and GTW rolling stock. North America: All of Amtrak's bilevel cars, which are all Superliners, are entirely low-floor and have step-free passthroughs on the upper deck, with the exception of "transition" sleeper cars where one end features stairs to maintain compatibility with single-deck cars (including Amtrak's own baggage cars). North America: Metro and light rail Platform heights of metro systems vary by system and even by line. For example, on the MBTA subway system in the Greater Boston area, Blue Line platforms are 41.5 inches (1,054 mm) above top of rail (ATR), while Orange Line platforms are at 45 inches (1,143 mm), and Red Line platforms are at 49 inches (1,245 mm). 
Bay Area Rapid Transit stations have platform heights of 39 inches (991 mm).Most light rail systems have platforms around 12–14 inches (300–360 mm) ATR, allowing level boarding on low-floor light rail vehicles. Most new systems are built to this standard, and some older systems like VTA light rail have been converted. Several systems including MetroLink use higher platforms with level boarding. Several older light rail systems have high-floor vehicles but low platforms, with mini-high platforms or lifts for accessibility. Some, like the MBTA Green Line, are being converted to low-floor rolling stock, while others, like Baltimore Light Rail have permanent mini-high platforms. Muni Metro has 34-inch (864 mm) high platforms in the subway section as well as some surface stops, and mini-high platforms at other surface stops; the vehicles have movable stairs inside to serve both high and low platforms. Oceania: Australia The majority of railway systems in Australia use high level platforms with a platform height a small distance below the train floor level. Exception to this include Queensland who have narrow gauge trains and lower platforms, and South Australia who have trains fitted with low level steps to enable the use of low level platforms.In New South Wales, by 2000, the platform step (the difference between the platform height and the train floor height) had been allowed to grow to a maximum of about 300 mm (11.8 in), which was uncomfortably large. For Sydney's 2000 Olympics, new and altered platforms were designed to match the Tangara trains, which are 3,000 mm (9 ft 10+1⁄8 in) wide, leaving a platform gap of about 80 mm (3+1⁄8 in) and a step height close to zero. This has become the standard for all subsequent platforms and trains in NSW. Oceania: In Victoria, the standard platform height for metropolitan and regional stations is 1080mm above the top of rail.The standard gauge lines in South Australia, Western Australia and Northern Territory, most platforms are 200 mm (7.9 in) above rails. Oceania: Metro and light rail The tramway network in Melbourne have some low level platforms and low floor vehicles, but most trams have steps and are boarded from the road. The Adelaide Tram line has low platforms at almost all stops and operates almost entirely with low-floor trams which also have retractable ramps for street boarding where required by persons unable to step up. The Gold Coast and Sydney light rail networks have low floor trams and platforms at all stops. South America: Argentina Platforms for long-distance trains are 200 mm (7.9 in) above rail, and platforms for Buenos Aires commuter trains are 1,100 mm (43.3 in).
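As a simple illustration of the platform-step arithmetic discussed in this entry, the following sketch uses the GO Transit figures quoted earlier (car floor 610 mm above top of rail; current 127 mm platforms, 550 mm accessible mini-platforms, and a planned 610 mm level-boarding standard) to compute the vertical step a boarding passenger faces. The labels are descriptive only.

```python
# Vertical "platform step": car floor height minus platform height, both above top of rail.
# Figures are the GO Transit values quoted in this entry; a step of 0 means level boarding.
GO_FLOOR_MM = 610

PLATFORMS_MM = {
    "current standard platform": 127,
    "accessible mini-platform": 550,
    "planned level-boarding platform": 610,
}

for name, platform in PLATFORMS_MM.items():
    step = GO_FLOOR_MM - platform
    label = "level boarding" if step == 0 else f"{step} mm step up"
    print(f"{name}: {label}")
```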
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**C minor** C minor: C minor is a minor scale based on C, consisting of the pitches C, D, E♭, F, G, A♭, and B♭. Its key signature consists of three flats. Its relative major is E♭ major and its parallel major is C major. The C natural minor scale is: C, D, E♭, F, G, A♭, B♭, C. Changes needed for the melodic and harmonic versions of the scale are written in with accidentals as necessary. The C harmonic minor scale is: C, D, E♭, F, G, A♭, B, C. The C melodic minor scale, ascending, is: C, D, E♭, F, G, A, B, C; the descending form uses the notes of the natural minor scale.
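As an illustrative cross-check of the scale spellings above, the following sketch derives the three forms of the C minor scale from their semitone interval patterns. The interval patterns are the standard ones for minor scales (they are an assumption here, not stated in this entry), and the flat-based note spelling is hard-coded.

```python
# Derive C natural, harmonic, and melodic (ascending) minor from semitone patterns.
# Note names use the flat spelling appropriate to C minor ("Eb" stands for E-flat, etc.).
NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

PATTERNS = {                          # semitone steps between successive scale degrees
    "natural minor":             [2, 1, 2, 2, 1, 2, 2],
    "harmonic minor":            [2, 1, 2, 2, 1, 3, 1],
    "melodic minor (ascending)": [2, 1, 2, 2, 2, 2, 1],
}

for name, steps in PATTERNS.items():
    pitch, scale = 0, ["C"]           # start on C (pitch class 0)
    for step in steps:
        pitch = (pitch + step) % 12
        scale.append(NOTE_NAMES[pitch])
    print(f"C {name}: {' '.join(scale)}")
```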
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Circle of equal altitude** Circle of equal altitude: The circle of equal altitude, also called circle of position (CoP), is defined as the locus of points on Earth on which an observer sees a celestial object such as the sun or a star, at a given time, with the same observed altitude. It was discovered by the American sea-captain Thomas Hubbard Sumner in 1837, published in 1843, and is the basis of an important method in celestial navigation. Discovery: Sumner discovered the line on a voyage from South Carolina to Greenock in Scotland in 1837. On December 17, as he was nearing the coast of Wales, he was uncertain of his position after several days of cloudy weather and no sights. A momentary opening in the clouds allowed him to determine the altitude of the sun. This, together with the chronometer time and the latitude, enabled him to calculate the longitude. But he was not confident of his latitude, which depended on dead reckoning (DR). So he calculated longitude using his DR value and two more values of latitude 10' and 20' to the north. He found that the three positions were on a straight line which happened to pass through Smalls Lighthouse. He realised that he must be located somewhere on that line and that if he set course E.N.E. along the line he should eventually sight the Smalls Light, which, in fact, he did, in less than an hour. Having found the line empirically, he then worked out the theory, and published this in a book in 1843. The method was quickly recognized as an important development in celestial navigation, and was made available to every ship in the United States Navy. Parameters: The center of the CoP is the geographical position (GP) of the observed body, the substellar point for a star, the subsolar point for the sun. The radius is the great circle distance equal to the zenith distance of the body. Center = geographical position (GP) of the body: (Bgp, Lgp) = (Dec, −GHA). If Lgp is defined as west longitude (+W/−E) then it will be +GHA, since HA (GHA or LHA) is always measured westward (+W/−E). Radius = zenith distance: zd [nm] = 60 ⋅ (90 − Ho) (aka co-altitude of Ho). As the circles used for navigation generally have a radius of thousands of miles, a segment a few tens of miles long closely approximates a straight line, as described in Sumner's original use of the method. Equation: The equation links the following variables: the position of the observer (B, L); the coordinates of the observed star, i.e. its geographical position (GHA, Dec); and the true altitude of the body (Ho). Equation: sin(Ho) = sin(B) ⋅ sin(Dec) + cos(B) ⋅ cos(Dec) ⋅ cos(LHA), where B is the latitude (+N/−S) and L the longitude (+E/−W). LHA = GHA + L is the local hour angle (+W/−E), and Dec and GHA are the declination and Greenwich hour angle of the star observed. Ho is the true or observed altitude, that is, the altitude measured with a sextant corrected for dip, refraction and parallax. Special cases of COPs: Parallel of latitude by Polaris altitude. Parallel of latitude by altitude of the sun at noon, or meridian altitude. Meridian of longitude, given the time and latitude. Circle of illumination or terminator (star = Sun, Ho = 0 for places at Sunrise/Sunset).
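To show the relations above in use, here is a brief sketch that computes the altitude Ho an observer at latitude B and longitude L would see for a body of declination Dec and Greenwich hour angle GHA, together with the circle's radius as a zenith distance in nautical miles. The input values at the bottom are hypothetical, chosen only to exercise the formulas.

```python
# Circle of equal altitude: computed altitude and circle radius from the relations above.
from math import asin, sin, cos, radians, degrees

def computed_altitude(B, L, dec, gha):
    """Altitude (degrees) from latitude B, longitude L (+E/-W), declination dec,
    and Greenwich hour angle gha (all in degrees): sin Ho = sinB sinDec + cosB cosDec cosLHA."""
    lha = gha + L                                   # local hour angle
    s = (sin(radians(B)) * sin(radians(dec))
         + cos(radians(B)) * cos(radians(dec)) * cos(radians(lha)))
    return degrees(asin(s))

def circle_radius_nm(Ho):
    """Radius of the circle of position: the zenith distance, at 60 nm per degree."""
    return 60.0 * (90.0 - Ho)

Ho = computed_altitude(B=45.0, L=-5.0, dec=20.0, gha=30.0)   # hypothetical observation
print(round(Ho, 2), "deg altitude ->", round(circle_radius_nm(Ho), 1), "nm radius")
```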
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Analyte-specific reagent** Analyte-specific reagent: Analyte-specific reagents (ASRs) are a class of biological molecules which can be used to identify and measure the amount of an individual chemical substance in biological specimens. Regulatory definition: The U.S. Food and Drug Administration (FDA) defines analyte specific reagents (ASRs) in 21 CFR 864.4020 as “antibodies, both polyclonal and monoclonal, specific receptor proteins, ligands, nucleic acid sequences, and similar reagents which, through specific binding or chemical reaction with substances in a specimen, are intended for use in a diagnostic application for identification and quantification of an individual chemical substance or ligand in biological specimens.” In simple terms, an analyte specific reagent is the active ingredient of an in-house test.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HAT1** HAT1: Histone acetyltransferase 1, also known as HAT1, is an enzyme that, in humans, is encoded by the HAT1 gene. Function: The protein encoded by this gene is a type B histone acetyltransferase (HAT) that is involved in the rapid acetylation of newly synthesized cytoplasmic histones, which are, in turn, imported into the nucleus for de novo deposition onto nascent DNA chains. Histone acetylation, in particular, of histone H4, plays an important role in replication-dependent chromatin assembly. To be specific, this HAT can acetylate soluble but not nucleosomal histone H4 at lysines 5 and 12, and, to a lesser degree, histone H2A at lysine 5.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Eight-wheel drive** Eight-wheel drive: Eight-wheel drive, often notated as 8WD or 8×8, is a drivetrain configuration that allows all eight wheels of an eight-wheeled vehicle to be drive wheels (that is, to receive power from the engine) simultaneously. Unlike four-wheel drive drivetrains, the configuration is largely confined to heavy-duty off-road and military vehicles, such as armored vehicles, tractor units or all-terrain vehicles such as the Argo Avenger. Operation: When such a vehicle only has eight wheels by definition all are driven. When it has twelve – with two pairs of ganged "dual" wheels on each rear axle – all are also driven but the 8×8 designation remains. Very occasionally, on the Sterling T26 for example, the two front axles can be fitted with ganged "dual" wheels. For most military applications where traction/mobility are considered more important than payload capability, single wheels on each axle (often referred to as super singles) are the norm. On some vehicles, usually recovery trucks or heavy tractor units, the rear two axles will have wider single tires than the front two axles.Heavy hauler and ballast tractor 8×8s have had a long history as prime movers both in the military (as tank transports and artillery tractors), and commercially in logging and heavy equipment hauling both on- and off-road. Operation: Most eight-wheel drive trucks have two forward axles and two at the rear, with only the front pair steering. Occasionally a single front axle and three rear (tridem) are seen, an example being the Oshkosh M1070 tank transporter. In such configurations, the front and rear axle usually steer. Other set ups include that of the ZIL-135. Operation: Many wheeled armored vehicles. have an 8x8 driveline, and on these the axles (which usually have independent suspension) are more evenly spaced. Latest generation 8x8 wheeled armored vehicles have steering on the rearmost (fourth) axle to improve mobility in urban and confined situations.In the case of both truck and armored vehicle applications, drive may be limited to the rear two axles for on-road use, this reducing driveline stress and tire wear, and increasing fuel efficiency.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Magnesium citrate** Magnesium citrate: Magnesium citrate is a magnesium preparation in salt form with citric acid in a 1:1 ratio (1 magnesium atom per citrate molecule). It contains 11.23% magnesium by weight. There is an exothermic heat generation when water is added, which is "most disagreeable when taken orally."The name "magnesium citrate" is ambiguous and sometimes may refer to other salts such as trimagnesium dicitrate which has a magnesium:citrate ratio of 3:2, or monomagnesium dicitrate with a ratio of 1:2, or a mix of two or three of the salts of magnesium and citric acid. Magnesium citrate: Magnesium citrate (sensu lato) is used medicinally as a saline laxative and to completely empty the bowel prior to a major surgery or colonoscopy. It is available without a prescription, both as a generic and under various brand names. It is also used in the pill form as a magnesium dietary supplement. As a food additive, magnesium citrate is used to regulate acidity and is known as E number E345. Mechanism of action: Magnesium citrate works by attracting water through the tissues by a process known as osmosis. Once in the intestine, it can attract enough water into the intestine to induce defecation. The additional water stimulates bowel motility. This means it can also be used to treat rectal and colon problems. Magnesium citrate functions best on an empty stomach, and should always be followed with a full (eight ounce or 250 ml) glass of water or juice to help counteract water loss and aid in absorption. Magnesium citrate solutions generally produce bowel movement in one-half to three hours. Use and dosage: The maximum upper tolerance limit (UTL) for magnesium in supplement form for adults is 350 mg of elemental magnesium per day, according to the National Institutes of Health (NIH). In addition, according to the NIH, total dietary requirements for magnesium from all sources (in other words, food and supplements) is 320–420 mg of elemental magnesium per day, though there is no UT for dietary magnesium. Use and dosage: Laxative Magnesium citrate is used as a laxative agent.As a laxative syrup with a concentration of 1.745 g of magnesium citrate per fluid ounce, a typical dose for adults and children twelve years or older is between 7 and 10 US fluid ounces (210 and 300 ml; 7.3 and 10.4 imp fl oz), followed immediately with a full 8 US fluid ounces (240 ml; 8.3 imp fl oz) glass of water. Consuming an adult dose of 10 oz of laxative syrup (@ 1.745 g/oz) implies a consumption of 17.45 g of magnesium citrate in a single 10 US fl oz (300 ml; 10 imp fl oz) dose resulting in a consumption of approximately 2.0 g of elemental magnesium per single dose. This laxative dose contains five times the recommended nutritional dose for children. Magnesium citrate is not recommended for use in children and infants two years of age or less. Use and dosage: Magnesium deficiency treatment Although less common, as a magnesium supplement the citrate form is sometimes used because it is believed to be more bioavailable than other common pill forms, such as magnesium oxide. But, according to one study, magnesium gluconate was found to be marginally more bioavailable than even magnesium citrate.Potassium-magnesium citrate, as a supplement in pill form, is useful for the prevention of kidney stones. Side effects: Magnesium citrate is generally not a harmful substance, but care should be taken by consulting a healthcare professional if any adverse health problems are suspected or experienced. 
Extreme magnesium overdose can result in serious complications such as slow heart beat, low blood pressure, nausea, drowsiness, etc. If severe enough, an overdose can even result in coma or death. However, a moderate overdose will be excreted through the kidneys, unless one has serious kidney problems. Rectal bleeding or failure to have a bowel movement after use could be signs of a serious condition.
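To make the dosage arithmetic in the laxative section explicit, the short check below reproduces the 10-fluid-ounce calculation (1.745 g of magnesium citrate per fluid ounce, 11.23% magnesium by weight); the numbers come from the text above, and the script itself is only an illustration.

```python
# Reproduce the laxative-dose arithmetic quoted above.
GRAMS_CITRATE_PER_FL_OZ = 1.745   # concentration of the laxative syrup
MAGNESIUM_MASS_FRACTION = 0.1123  # magnesium citrate is 11.23% Mg by weight

dose_fl_oz = 10
grams_citrate = dose_fl_oz * GRAMS_CITRATE_PER_FL_OZ          # 17.45 g of magnesium citrate
grams_elemental_mg = grams_citrate * MAGNESIUM_MASS_FRACTION  # ~1.96 g of elemental magnesium

print(f"{grams_citrate:.2f} g magnesium citrate -> {grams_elemental_mg:.2f} g elemental Mg")
# Roughly 2.0 g, well above the 350 mg/day supplemental upper limit cited above.
```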
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nanofluid** Nanofluid: A nanofluid is a fluid containing nanometer-sized particles, called nanoparticles. These fluids are engineered colloidal suspensions of nanoparticles in a base fluid. The nanoparticles used in nanofluids are typically made of metals, oxides, carbides, or carbon nanotubes. Common base fluids include water, ethylene glycol and oil. Nanofluid: Nanofluids have novel properties that make them potentially useful in many applications in heat transfer, including microelectronics, fuel cells, pharmaceutical processes, hybrid-powered engines, engine cooling/vehicle thermal management, domestic refrigerators, chillers, heat exchangers, grinding, machining, and boiler flue gas temperature reduction. They exhibit enhanced thermal conductivity and convective heat transfer coefficients compared to the base fluid. Knowledge of the rheological behaviour of nanofluids is found to be critical in deciding their suitability for convective heat transfer applications. Nanofluid: Nanofluids also have special acoustical properties and in ultrasonic fields display additional shear-wave reconversion of an incident compressional wave; the effect becomes more pronounced as concentration increases. In analyses such as computational fluid dynamics (CFD), nanofluids can be assumed to be single-phase fluids; however, almost all new academic papers use a two-phase assumption. Classical single-phase fluid theory can be applied, with the physical properties of the nanofluid taken as functions of the properties of both constituents and their concentrations. An alternative approach simulates nanofluids using a two-component model. The spreading of a nanofluid droplet is enhanced by the solid-like ordering structure of nanoparticles assembled near the contact line by diffusion, which gives rise to a structural disjoining pressure in the vicinity of the contact line. However, such enhancement is not observed for small droplets with diameters on the nanometer scale, because the wetting time scale is much smaller than the diffusion time scale. Synthesis: Nanofluids are produced by several techniques: Direct Evaporation (1 step) Gas condensation/dispersion (2 step) Chemical vapour condensation (1 step) Chemical precipitation (1 step) Bio-based (2 step). Several liquids, including water, ethylene glycol, and oils, have been used as base fluids. Although stabilization can be a challenge, ongoing research indicates that it is possible. Nano-materials used so far in nanofluid synthesis include metallic particles, oxide particles, carbon nanotubes, graphene nano-flakes and ceramic particles. A bio-based, environmentally friendly approach for the covalent functionalization of multi-walled carbon nanotubes (MWCNTs) using clove buds was developed. No toxic or hazardous acids, of the kind typically used in common carbon nanomaterial functionalization procedures, are employed in this synthesis. The MWCNTs are functionalized in one pot using a free radical grafting reaction. The clove-functionalized MWCNTs are then dispersed in distilled water (DI water), producing a highly stable MWCNT aqueous suspension (MWCNTs Nanofluid). Smart cooling nanofluids: Realizing the modest thermal conductivity enhancement in conventional nanofluids, a team of researchers at the Indira Gandhi Centre for Atomic Research, Kalpakkam, developed a new class of magnetically polarizable nanofluids in which thermal conductivity enhancements of up to 300% over the base fluid have been demonstrated.
Fatty-acid-capped magnetite nanoparticles of different sizes (3-10 nm) have been synthesized for this purpose. It has been shown that both the thermal and rheological properties of such magnetic nanofluids are tunable by varying the magnetic field strength and orientation with respect to the direction of heat flow. Such response stimuli fluids are reversibly switchable and have applications in miniature devices such as micro- and nano-electromechanical systems. Smart cooling nanofluids: In 2013, Azizian et al. considered the effect of an external magnetic field on the convective heat transfer coefficient of water-based magnetite nanofluid experimentally under laminar flow regime. Up to 300% enhancement obtained at Re=745 and magnetic field gradient of 32.5 mT/mm. The effect of the magnetic field on the pressure drop was not as significant. Response stimuli nanofluids for sensing applications: Researchers have invented a nanofluid-based ultrasensitive optical sensor that changes its colour on exposure to extremely low concentrations of toxic cations. The sensor is useful in detecting minute traces of cations in industrial and environmental samples. Existing techniques for monitoring cations levels in industrial and environmental samples are expensive, complex and time-consuming. The sensor is designed with a magnetic nanofluid that consists of nano-droplets with magnetic grains suspended in water. At a fixed magnetic field, a light source illuminates the nanofluid where the colour of the nanofluid changes depending on the cation concentration. This color change occurs within a second after exposure to cations, much faster than other existing cation sensing methods. Response stimuli nanofluids for sensing applications: Such response stimulus nanofluids are also used to detect and image defects in ferromagnetic components. The photonic eye, as it has been called, is based on a magnetically polarizable nano-emulsion that changes colour when it comes into contact with a defective region in a sample. The device might be used to monitor structures such as rail tracks and pipelines. Magnetically responsive photonic crystals nanofluids: Magnetic nanoparticle clusters or magnetic nanobeads with the size 80–150 nanometers form ordered structures along the direction of the external magnetic field with a regular interparticle spacing on the order of hundreds of nanometers resulting in strong diffraction of visible light in suspension. Nanolubricants: Another word used to describe nanoparticle based suspensions is Nanolubricants. They are mainly prepared using oils used for engine and machine lubrication. So far several materials including metals, oxides and allotropes of carbon have been used to formulate nanolubricants. The addition of nanomaterials mainly enhances the thermal conductivity and anti-wear property of base oils. Although MoS2, graphene, Cu based fluids have been studied extensively, the fundamental understanding of underlying mechanisms is still needed. Nanolubricants: Molybdenum disulfide (MoS2) and graphene work as third body lubricants, essentially becoming tiny microscopic ball bearings, which reduce the friction between two contacting surfaces. This mechanism is beneficial if a sufficient supply of these particles are present at the contact interface. The beneficial effects are diminished as the rubbing mechanism pushes out the third body lubricants. Changing the lubricant, like-wise, will null the effects of the nanolubricants drained with the oil. 
Nanolubricants: Other nanolubricant approaches, such as magnesium silicate hydroxides (MSH), rely on nanoparticle coatings by synthesizing nanomaterials with adhesive and lubricating functionalities. Research into nanolubricant coatings has been conducted in both the academic and industrial spaces. Nanoborate additives as well as mechanical model descriptions of diamond-like carbon (DLC) coating formations have been developed by Ali Erdemir at Argonne National Labs. Companies such as TriboTEX provide consumer formulations of synthesized MSH nanomaterial coatings for vehicle engines and industrial applications. Nanofluids in petroleum refining process: Many studies claim that nanoparticles can be used to enhance crude oil recovery. It is evident that the development of nanofluids for the oil and gas industry has great practical value. Applications: Nanofluids are primarily used for their enhanced thermal properties as coolants in heat transfer equipment such as heat exchangers, electronic cooling systems (such as flat plates) and radiators. Heat transfer over a flat plate has been analyzed by many researchers. However, they are also useful for their controlled optical properties. Graphene-based nanofluids have been found to enhance polymerase chain reaction efficiency. Nanofluids in solar collectors are another application where nanofluids are employed for their tunable optical properties. Nanofluids have also been explored to enhance thermal desalination technologies, by altering thermal conductivity and absorbing sunlight, but surface fouling of the nanofluids poses a major risk to those approaches. Researchers have proposed nanofluids for electronics cooling. Nanofluids can also be used in machining. Thermophysical properties of nanofluids: Thermal conductivity, viscosity, density, specific heat, and surface tension are considered some main thermophysical properties of nanofluids. Various parameters, such as nanoparticle type, size, and shape, volume concentration, fluid temperature, and nanofluid preparation method, affect the thermophysical properties of nanofluids. Viscosity of nanofluids Density of nanofluids Thermal conductivity of nanofluids Nanoparticle migration: The early studies indicating anomalous increases in nanofluid thermal properties over those of the base fluid, particularly the heat transfer coefficient, have been largely discredited. One of the main conclusions taken from a study involving over thirty labs throughout the world was that "no anomalous enhancement of thermal conductivity was observed in the limited set of nanofluids tested in this exercise". The COST-funded research programme Nanouptake (COST Action CA15119) was founded with the intention "to develop and foster the use of nanofluids as advanced heat transfer/thermal storage materials to increase the efficiency of heat exchange and storage systems". One of the final outcomes, involving an experimental study in five different labs, concluded that "there are no anomalous or unexplainable effects". Despite these apparently conclusive experimental investigations, theoretical papers continue to follow the claim of anomalous enhancement, particularly via Brownian and thermophoretic mechanisms, as suggested by Buongiorno. Brownian diffusion is due to the random drifting of suspended nanoparticles in the base fluid which originates from collisions between the nanoparticles and liquid molecules. Thermophoresis induces nanoparticle migration from warmer to colder regions, again due to collisions with liquid molecules.
The mismatch between experimental and theoretical results is explained in Myers et al. In particular, it is shown that Brownian motion and thermophoresis effects are too small to have any significant effect: their role is often amplified in theoretical studies due to the use of incorrect parameter values. Experimental validation of these assertions is provided in Alkasmoul et al. Brownian diffusion as a cause for enhanced heat transfer is dismissed in the discussion of the use of nanofluids in solar collectors.
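As an illustration of treating a nanofluid as a single-phase fluid whose properties depend on both constituents and the particle concentration, the sketch below uses the classical Maxwell effective-medium model. This is a standard textbook choice assumed here for the example, not a model taken from the studies cited above, and the property values for water and alumina are typical literature figures rather than values from the text.

```python
def maxwell_effective_conductivity(k_fluid, k_particle, phi):
    """Classical Maxwell model for the effective thermal conductivity of a dilute suspension.
    k_fluid, k_particle in W/(m K); phi is the particle volume fraction (0-1)."""
    num = k_particle + 2 * k_fluid + 2 * phi * (k_particle - k_fluid)
    den = k_particle + 2 * k_fluid - phi * (k_particle - k_fluid)
    return k_fluid * num / den

# Example: roughly 3 vol% alumina nanoparticles in water (assumed typical values).
k_water, k_alumina, phi = 0.613, 40.0, 0.03
k_eff = maxwell_effective_conductivity(k_water, k_alumina, phi)
print(f"k_eff = {k_eff:.3f} W/(m K), enhancement = {100 * (k_eff / k_water - 1):.1f}%")
# Prints an enhancement of roughly 9%, i.e. a modest, non-anomalous increase.
```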
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cephamycin** Cephamycin: Cephamycins are a group of β-lactam antibiotics. They are very similar to cephalosporins, and the cephamycins are sometimes classified as cephalosporins. Cephamycin: Like cephalosporins, cephamycins are based upon the cephem nucleus. Unlike most cephalosporins, cephamycins are very effective antibiotics against anaerobic microbes. Cephamycins were originally produced by Streptomyces, but synthetic ones have been produced as well. Cephamycins possess a methoxy group at the 7-alpha position. In addition, cephamycins have been shown to be stable against extended-spectrum beta-lactamase (ESBL) producing organisms, although their use in clinical practice is lacking for this indication.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hemosiderin hyperpigmentation** Hemosiderin hyperpigmentation: Hemosiderin hyperpigmentation is pigmentation due to deposits of hemosiderin, and occurs in purpura, haemochromatosis, hemorrhagic diseases, and stasis dermatitis.: 853
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fluoronickelate** Fluoronickelate: The fluoronickelates are a class of chemical compounds containing an anion with nickel at its core, surrounded by fluoride ions which act as ligands. This makes it a fluoroanion. The nickel atom can be in a range of oxidation states from +2 and +3 to +4. The hexafluoronickelate(IV) ion, NiF62−, contains nickel in the maximal +4 state, and is in octahedral coordination by the fluoride ligands. It forms a commercially available salt, potassium hexafluoronickelate(IV), K2NiF6. Solid double salts can also contain tetrafluoronickelate, NiF42−, e.g. K2NiF4.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Trans-aconitate 3-methyltransferase** Trans-aconitate 3-methyltransferase: In enzymology, a trans-aconitate 3-methyltransferase (EC 2.1.1.145) is an enzyme that catalyzes the chemical reaction S-adenosyl-L-methionine + trans-aconitate ⇌ S-adenosyl-L-homocysteine + (E)-2-(methoxycarbonylmethyl)butenedioate. Thus, the two substrates of this enzyme are S-adenosyl methionine and trans-aconitate, whereas its two products are S-adenosylhomocysteine and (E)-2-(methoxycarbonylmethyl)butenedioate. This enzyme belongs to the family of transferases, specifically those transferring one-carbon groups (methyltransferases). The systematic name of this enzyme class is S-adenosyl-L-methionine:(E)-prop-1-ene-1,2,3-tricarboxylate 3'-O-methyltransferase.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Persian Speech Corpus** Persian Speech Corpus: The Persian Speech Corpus is a Modern Persian speech corpus for speech synthesis. The corpus contains phonetic and orthographic transcriptions of about 2.5 hours of Persian speech aligned with recorded speech on the phoneme level, including annotations of word boundaries. Previous spoken corpora of Persian include FARSDAT, which consists of read aloud speech from newspaper texts from 100 Persian speakers and the Telephone FARsi Spoken language DATabase (TFARSDAT) which comprises seven hours of read and spontaneous speech produced by 60 native speakers of Persian from ten regions of Iran.The Persian Speech Corpus was built using the same methodologies laid out in the doctoral project on Modern Standard Arabic of Nawar Halabi at the University of Southampton. The work was funded by MicroLinkPC, who own an exclusive license to commercialise the corpus, though the corpus is available for non-commercial use through the corpus' website. It is distributed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Persian Speech Corpus: The corpus was built for speech synthesis purposes, but has been used for building HMM based voices in Persian. It can also be used to automatically align other speech corpora with their phonetic transcript and could be used as part of a larger corpus for training speech recognition systems. Contents: The corpus is downloadable from its website, and contains the following: 396 .wav files containing spoken utterances 396 .lab files containing text utterances 396 .TextGrid files containing the phoneme labels with time stamps of the boundaries where these occur in the .wav files. phonetic-transcript.txt which has the form "[wav_filename]" "[Phoneme Sequence]" in every line orthographic-transcript.txt which has the form "[wav_filename]" "[Orthographic Transcript]" in every line
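A small sketch of reading the transcript files described above, assuming each line literally follows the quoted "[wav_filename]" "[transcript]" layout; the exact quoting and whitespace details are assumptions, since the article only gives the general form of each line.

```python
import re
from pathlib import Path

# Each line is expected to look like: "utt_001.wav" "some transcript text"
LINE_PATTERN = re.compile(r'^"([^"]+)"\s+"(.+)"\s*$')

def read_transcripts(path):
    """Return a dict mapping wav filename -> transcript for one transcript file."""
    entries = {}
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        match = LINE_PATTERN.match(line.strip())
        if match:
            wav_name, transcript = match.groups()
            entries[wav_name] = transcript
    return entries

# Usage with the file names listed above (paths assumed to be in the working directory):
# ortho = read_transcripts("orthographic-transcript.txt")
# phones = read_transcripts("phonetic-transcript.txt")
```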
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MP-2001** MP-2001: MP-2001, also known as 2,3,4-trimethoxyestra-1,3,5(10)-trien-17β-ol or 2,4-dimethoxyestradiol 3-methyl ether, is a steroid and derivative of estradiol that was described in 1966 and is devoid of estrogenic activity but produces potent analgesic effects in animals. It was never marketed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Diacetylene** Diacetylene: Diacetylene (also known as butadiyne) is the organic compound with the formula C4H2. It is the simplest compound containing two triple bonds. It is first in the series of polyynes, which are of theoretical but not of practical interest. Occurrence: Diacetylene has been identified in the atmosphere of Titan and in the protoplanetary nebula CRL 618 by its characteristic vibrational spectrum. It is proposed to arise by a reaction between acetylene and the ethynyl radical (C2H), which is produced when acetylene undergoes photolysis. This radical can in turn attack the triple bond in acetylene and react efficiently even at low temperatures. Diacetylene has also been detected on the Moon. Preparation: This compound may be made by the dehydrohalogenation of 1,4-dichloro-2-butyne by potassium hydroxide (in alcoholic medium) at ~70°C: ClCH2C≡CCH2Cl + 2 KOH → HC≡C−C≡CH + 2 KCl + 2 H2O. The bis(trimethylsilyl)-protected derivative may be prepared by the Hay coupling (oxidative coupling) of (trimethylsilyl)acetylene: 2 Me3SiC≡CH → Me3SiC≡C−C≡CSiMe3
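As a quick sanity check on the dehydrohalogenation equation above, the snippet below counts atoms on each side of the reaction; the element tallies are written out by hand for the example rather than parsed from the formulas.

```python
from collections import Counter

# ClCH2C≡CCH2Cl + 2 KOH -> HC≡C−C≡CH + 2 KCl + 2 H2O, written as element counts.
reactants = Counter({"C": 4, "H": 4, "Cl": 2}) + Counter({"K": 2, "O": 2, "H": 2})
products = Counter({"C": 4, "H": 2}) + Counter({"K": 2, "Cl": 2}) + Counter({"H": 4, "O": 2})

assert reactants == products, "equation is not balanced"
print("Balanced:", dict(reactants))
```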
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Benstonite** Benstonite: Benstonite is a mineral with formula Ba6Ca6Mg(CO3)13. Discovered in 1954, the mineral was described in 1961 and named after Orlando J. Benston (1901–1966). Description and occurrence: Benstonite is translucent and white, pale yellow, or pale yellow-brown in color. The mineral occurs as cleavable masses; cleavage fragments are nearly perfectly rhombohedral in shape. Cleavage faces are up to 1 cm (0.39 in) across and slightly curved. On large specimens, the faces exhibit a mosaic structure similar to that in some specimens of dolomite and siderite. Benstonite fluoresces red or yellow under x-rays and longwave and shortwave ultraviolet. The mineral also exhibits strong red phosphorescence.Benstonite is known to occur in Canada, China, India, Italy, Namibia, Russia, Sweden, and the United States. It occurs in association with alstonite, barite, barytocalcite, calcite, daqingshanite, fluorite, huntite, monazite, phlogopite, pyrite, sphalerite, strontianite, and quartz. Synthesis: The mineral was first synthesized in 1973 during a study of the Ba-Mg-Ca-CO3 system in aqueous solution. At room temperature, a solution containing proportional quantities of magnesium chloride, barium chloride, and calcium chloride was prepared, to which sodium carbonate was added. The solution immediately precipitated, and after sitting for two weeks, the precipitate was identified as nearly pure benstonite. History: Orlando J. Benston of Malvern, Arkansas, visited a barite mine near the Magnet Cove igneous complex on New Year's Eve, 1954. He collected samples of a mineral that he guessed might be alstonite or barytocalcite on the basis of qualitative tests. Friedrich Lippmann identified it as a new mineral and described it in the journal Naturwissenschaften in 1961. He named it Benstonite in honor of Benston.Type specimens are held at Victor Goldschmidt University in Germany and the National Museum of Natural History in the United States.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Silver cyanate** Silver cyanate: Silver cyanate is the cyanate salt of silver. It can be made by the reaction of potassium cyanate with silver nitrate in aqueous solution, from which it precipitates as a solid. Silver cyanate: AgNO3 + KNCO → Ag(NCO) + K+ + NO−3Alternatively, the reaction AgNO3 + CO(NH2)2 → AgNCO + NH4NO3analogous to the reaction used for the industrial production of sodium cyanate, may be used.Silver cyanate is a beige to gray powder. It crystallises in the monoclinic crystal system in space group P21/m with parameters a = 547.3 pm, b = 637.2 pm, c = 341.6 pm, and β = 91°. Each unit cell contains two cyanate ions and two silver ions. The silver ions are each equidistant from two nitrogen atoms forming a straight N–Ag–N group. The nitrogen atoms are each coordinated to two silver atoms, so that there are zigzag chains of alternating silver and nitrogen atoms going in the direction of the monoclinic "b" axis, with the cyanate ions perpendicular to that axis.Silver cyanate reacts with nitric acid to form silver nitrate, carbon dioxide, and ammonium nitrate. Silver cyanate: AgNCO + 2 HNO3 + H2O → AgNO3 + CO2 + NH4NO3
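For the monoclinic cell parameters given above (a = 547.3 pm, b = 637.2 pm, c = 341.6 pm, β = 91°), the unit-cell volume follows from the standard monoclinic relation V = a·b·c·sin β; the short calculation below is only an illustration of that general formula applied to the quoted values.

```python
import math

# Monoclinic unit cell of silver cyanate (parameters from the text above, in picometres).
a, b, c = 547.3, 637.2, 341.6
beta_deg = 91.0

volume_pm3 = a * b * c * math.sin(math.radians(beta_deg))
print(f"Unit-cell volume ≈ {volume_pm3:.3e} pm^3 (~{volume_pm3 * 1e-9:.3f} nm^3)")
# The cell contains two AgNCO formula units, as stated above.
```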
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Freestyle swimming** Freestyle swimming: Freestyle is a category of swimming competition, defined by the rules of the Swimming Federation (SF), in which competitors are subject to a lot of limited restrictions on their swimming stroke. Freestyle races are the most common of all swimming competitions, with distances beginning with 50 meters (55 yards) and reaching 1,500 meters (1,600 yards), also known as the mile. The term 'freestyle stroke' is sometimes used as a synonym for 'front crawl', as front crawl is the fastest surface swimming stroke. It is now the most common stroke used in freestyle competitions.The first Olympics held open water swimming events, but after a few Olympics, closed water swimming was introduced. The front crawl or freestyle was the first event that was introduced. Technique: Freestyle swimming implies the use of legs and arms for competitive swimming, except in the case of the individual medley or medley relay events. The front crawl is most commonly chosen by swimmers, as this provides the greatest speed. During a race, the competitor circles the arms forward in alternation, kicking the feet up and down (flutter kick). Individual freestyle events can also be swum using one of the officially regulated strokes (breaststroke, butterfly, or backstroke). For the freestyle part of medley swimming competitions, however, one cannot use breaststroke, butterfly, or backstroke. Front crawl is based on the Trudgen that was improved by Richmond Cavill from Sydney, Australia. Cavill developed the stroke by observing a young boy from the Solomon Islands, Alick Wickham. Cavill and his brothers spread the Australian crawl to England, New Zealand and America, creating the freestyle used worldwide today. During the Olympic Games, front crawl is swum almost exclusively during freestyle. Some of the few rules state that swimmers must touch the end of the pool during each length and cannot push off the bottom, hang on the wall, or pull on the lane lines during the course of the race. However, other than this any form or variation of strokes is considered legal with the race. As with all competitive events, false starts can lead to disqualification of the swimmer. New developments in the sport: Times have consistently dropped over the years due to better training techniques and to new developments in the sport. New developments in the sport: In the first four Olympics, swimming competitions were not held in pools, but in open water (1896 – the Mediterranean, 1900 – the Seine river, 1904 – an artificial lake, 1906 – the Mediterranean). The 1904 Olympics freestyle race was the only one ever measured at 100 yards, instead of the usual 100 meters. A 100-meter pool was built for the 1908 Olympics and sat in the center of the main stadium's track and field oval. The 1912 Olympics, held in the Stockholm harbor, marked the beginning of electronic timing. New developments in the sport: Male swimmers wore full body suits up until the 1940s, which caused more drag in the water than their modern swimwear counterparts. Also, over the years, some design considerations have reduced swimming resistance, making the pool faster, namely: proper pool depth, elimination of currents, increased lane width, energy-absorbing racing lane lines and gutters, and the use of other innovative hydraulic, acoustic, and illumination designs. New developments in the sport: The 1924 Olympics was the first to use the standard 50 meter pool with marked lanes. 
In freestyle events, swimmers originally dove from the pool walls, but diving blocks were eventually incorporated at the 1936 Olympics. The flip turn was developed in the 1950s, resulting in faster times. Lane design created in the early 1970s has also cut down turbulence in water, aiding in the more dynamic pool used today. Rules and regulation: Freestyle means "any style" for individual swims and any style but breaststroke, butterfly, or backstroke for both the individual medley, and medley relay competitions. The wall has to be touched at every turn and upon completion. Some part of the swimmer must be above water at any time, except for the first 15 meters after the start and every turn. This rule was introduced (see History of swimming) to prevent swimmers from using the faster underwater swimming, such as the fish kick, to their advantage, or even swimming entire laps underwater. The exact FINA rules are: Freestyle means that in an event so designated the swimmer may swim any style, except that in individual medley or medley relay events, freestyle means any style other than backstroke, breaststroke, or butterfly Some part of the swimmer must touch the wall upon completion of each length and at the finish Some part of the swimmer must break the surface of the water throughout the race, except it shall be permissible for the swimmer to be completely submerged during the turn and for a distance of not more than 15 meters after the start and each turn. By that point the head must have broken the surface. Competitions: There are nine competitions used in freestyle swimming, both using either a long time (50 meter) or a short time (25 meter) pool. The United States also employs short time yards (25 yard pool). In the United States, it is common for swimmers to compete in a 25-yard pool during the Fall, Winter, and Spring, and then switch over to a 50-meter pool format during the Summer. Competitions: 50 m freestyle (50 yards for short time yards) 100 m freestyle (100 yards for short time yards) 200 m freestyle (200 yards for short time yards) 400 m freestyle (500 yards for short time yards) 800 m freestyle (1000 yards for short time yards) 1500 m freestyle (1650 yards for short time yards) 4×50 m freestyle relay (4 x 50 yards for short time yards) 4 × 100 m freestyle relay (4 x 100 yards for short time yards) 4 × 200 m freestyle relay (4 x 200 yards for short time yards)Young swimmers (typically 8 years old and younger) have the option to swim a 25 yard/meter freestyle event. Competitions: Freestyle is also part of the medley over the following distances: 100 m individual medley (short 25 m pool only) 200 m individual medley (200 yard individual medley in short time yards) 400 m individual medley (400 yards individual medley in short time yards) 4 × 100 m medley relay (4 x 100 yard medley relay in short time yards) 4 × 200 m medley relay (4 x 200 yard medley relay in short time yards)In the long-distance races of the 800 and 1,500 meters (870 and 1,640 yards), some meets hosted by FINA (including the Olympics) only have the 800 meters (870 yards) distance for women and the 1,500 meters (1,600 yards) distance for men. However, FINA does keep records in the 1,500 meters (1,600 yards) distance for women and the 800 meters (870 yards) distance for men, and the FINA World Championships, as well as many other meets, have both distances for both sexes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ciliopathy** Ciliopathy: A ciliopathy is any genetic disorder that affects the cellular cilia or the cilia anchoring structures, the basal bodies, or ciliary function. Primary cilia are important in guiding the process of development, so abnormal ciliary function while an embryo is developing can lead to a set of malformations that can occur regardless of the particular genetic problem. The similarity of the clinical features of these developmental disorders means that they form a recognizable cluster of syndromes, loosely attributed to abnormal ciliary function and hence called ciliopathies. Regardless of the actual genetic cause, it is clustering of a set of characteristic physiological features which define whether a syndrome is a ciliopathy. Ciliopathy: Although ciliopathies are usually considered to involve proteins that localize to motile and/or immotile (primary) cilia or centrosomes, it is possible for ciliopathies to be associated with unexpected proteins such as XPNPEP3, which localizes to mitochondria but is believed to affect ciliary function through proteolytic cleavage of ciliary proteins.Significant advances in understanding the importance of cilia were made in the mid-1990s. However, the physiological role that this organelle plays in most tissues remains elusive. Additional studies of how ciliary dysfunction can lead to such severe disease and developmental pathologies is still a subject of current research. Signs and symptoms: A wide variety of symptoms are potential clinical features of ciliopathy. The signs most exclusive to a ciliopathy, in descending order of exclusivity, are:: 138  Dandy–Walker malformation (cerebellar vermis hypoplasia, usually with hydrocephalus) Agenesis of the corpus callosum Situs inversus Posterior encephalocele Polycystic kidneys Postaxial polydactyly Liver disease Retinitis pigmentosa Intellectual disabilityA case with polycystic ovary syndrome, multiple subcutaneous cysts, renal function impairment, Caroli disease and liver cirrhosis due to ciliopathy has been described.Phenotypes sometimes associated with ciliopathies can include: Anencephaly Breathing abnormalities Cerebellar vermis hypoplasia Diabetes Exencephaly Eye movement abnormalities Hydrocephalus Hypoplasia of the corpus callosum Hypotonia Infertility Cognitive impairment/defects Obesity Other polydactyly Respiratory dysfunction Renal cystic disease Retinal degeneration Sensorineural deafness Spina bifida Pathophysiology: "In effect, the motile cilium is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines." Cilia "function as mechano- or chemosensors and as a cellular global positioning system to detect changes in the surrounding environment." For example, ciliary signaling plays a role in the initiation of cellular replacement after cell damage.In addition to this sensory role mediating specific signaling cues, cilia play "a secretory role in which a soluble protein is released to have an effect downstream of the fluid flow" in epithelial cells, and can of course mediate fluid flow directly in the case of motile cilia. Primary cilia in the retina play a role in transferring nourishment to the non-vascularized rod and cone cells from the pigment epithelial vascularized cells several micrometres behind the surface of the retina. 
Pathophysiology: Signal transduction pathways involved include the Hedgehog signaling pathway and the Wnt signaling pathway.Dysfunctional cilia can lead to: Chemosensation abnormalities, typically via ciliated epithelial cellular dysfunction. Defective thermosensation or mechanosensation, often via ciliated epithelial cellular dysfunction. Cellular motility dysfunction Issues with displacement of extracellular fluid Paracrine signal transduction abnormalitiesIn organisms of normal health, cilia are critical for: development homeostasis reproduction Genetics: "Just as different genes can contribute to similar diseases, so the same genes and families of genes can play a part in a range of different diseases." For example, in just two of the diseases caused by malfunctioning cilia, Meckel–Gruber syndrome and Bardet–Biedl syndrome, patients who carry mutations in genes associated with both diseases "have unique symptoms that are not seen in either condition alone." The genes linked to the two different conditions "interact with each other during development." Systems biologists are endeavoring to define functional modules containing multiple genes and then look at disorders whose phenotypes fit into such modules.A particular phenotype can overlap "considerably with several conditions (ciliopathies) in which primary cilia are also implicated in pathogenicity. One emerging aspect is the wide spectrum of ciliopathy gene mutations found within different diseases." List of ciliopathies: "The phenotypic parameters that define a ciliopathy may be used to both recognize the cellular basis of a number of genetic disorders and to facilitate the diagnosis and treatment of some diseases of unknown" cause. Known ciliopathies Likely ciliopathies Possible ciliopathies History: Although non-motile or primary cilia were first described in 1898, they were largely ignored by biologists. However, microscopists continued to document their presence in the cells of most vertebrate organisms. The primary cilium was long considered—with few exceptions—to be a largely useless evolutionary vestige, a vestigial organelle. Recent research has revealed that cilia are essential to many of the body's organs. These primary cilia play important roles in chemosensation, mechanosensation, and thermosensation. Cilia may thus be "viewed as sensory cellular antennae that coordinate a large number of cellular signaling pathways, sometimes coupling the signaling to ciliary motility or alternatively to cell division and differentiation."Recent advances in mammalian genetic research have made possible the understanding of a molecular basis for a number of dysfunctional mechanisms in both motile and primary cilia structures of the cell. A number of critical developmental signaling pathways essential to cellular development have been discovered. These are principally but not exclusively found in the non-motile or primary cilia. A number of common observable characteristics of mammalian genetic disorders and diseases are caused by ciliary dysgenesis and dysfunction. Once identified, these characteristics thus describe a set of hallmarks of a ciliopathy.Cilia have recently been implicated in a wide variety of human genetic diseases by "the discovery that numerous proteins involved in mammalian disease localize to the basal bodies and cilia." 
For example, in just a single area of human disease physiology, cystic renal disease, cilia-related genes and proteins have been identified to have causal effect in polycystic kidney disease, nephronophthisis, Senior–Løken syndrome type 5, orofaciodigital syndrome type 1 and Bardet–Biedl syndrome.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lead(II) oxide** Lead(II) oxide: Lead(II) oxide, also called lead monoxide, is the inorganic compound with the molecular formula PbO. PbO occurs in two polymorphs: litharge having a tetragonal crystal structure, and massicot having an orthorhombic crystal structure. Modern applications for PbO are mostly in lead-based industrial glass and industrial ceramics, including computer components. It is an amphoteric oxide. Types: Lead oxide exists in two types: red tetragonal (α-PbO), obtained at lower temperatures than the β-PbO; and yellow orthorhombic (β-PbO), which is obtained at temperatures higher than 486 °C (907 °F). Synthesis: PbO may be prepared by heating lead metal in air at approximately 600 °C (1,100 °F). At this temperature it is also the end product of decomposition of other oxides of lead in air: PbO2 → Pb12O19 (at 293 °C) → Pb12O17 (at 351 °C) → Pb3O4 (at 375 °C) → PbO (at 605 °C). Thermal decomposition of lead(II) nitrate or lead(II) carbonate also results in the formation of PbO: 2 Pb(NO3)2 → 2 PbO + 4 NO2 + O2 PbCO3 → PbO + CO2. PbO is produced on a large scale as an intermediate product in refining raw lead ores into metallic lead. The usual lead ore is galena (lead(II) sulfide). At a temperature of around 1,000 °C (1,800 °F) the sulfide is converted to the oxide: 2 PbS + 3 O2 → 2 PbO + 2 SO2 From lead: There are two principal methods to make lead monoxide, both of which resemble combustion of the lead at high temperature: Barton pot method. Synthesis: The refined molten lead droplets are oxidized in a vessel under a forced air flow which carries them out to the separation system (e.g. cyclonic separators) for further processing.: 245  Oxides produced by this method are mostly a mixture of α-PbO and β-PbO. The overall reaction is: 2 Pb + O2 → 2 PbO (at 450 °C, 842 °F). Ball mill method: The lead balls are oxidized in a cooled rotating drum. The oxidation is achieved by collisions of the balls. Just like in the Barton pot method, the supply of air and separators may also be used.: 245 Structure: As determined by X-ray crystallography, both polymorphs, tetragonal and orthorhombic, feature a pyramidal four-coordinate lead center. In the tetragonal form the four lead–oxygen bonds have the same length, but in the orthorhombic two are shorter and two longer. The pyramidal nature indicates the presence of a stereochemically active lone pair of electrons. When PbO occurs in the tetragonal lattice structure it is called litharge; and when the PbO has the orthorhombic lattice structure it is called massicot. The PbO can be changed from massicot to litharge or vice versa by controlled heating and cooling. The tetragonal form is usually red or orange in color, while the orthorhombic is usually yellow or orange, but the color is not a very reliable indicator of the structure. The tetragonal and orthorhombic forms of PbO occur naturally as rare minerals. Reactions: Metallic lead is obtained by reducing PbO with carbon monoxide at around 1,200 °C (2,200 °F): PbO + CO → Pb + CO2. The red and yellow forms of this material are related by a small change in enthalpy: PbO(red) → PbO(yellow) ΔH = 1.6 kJ/mol. PbO is amphoteric, which means that it reacts with both acids and with bases. With acids, it forms salts of Pb2+ via the intermediacy of oxo clusters such as [Pb6O(OH)6]4+. With strong bases, PbO dissolves to form plumbite (also called plumbate(II)) salts: PbO + H2O + OH− → [Pb(OH)3]− Applications: The kind of lead in lead glass is normally PbO, and PbO is used extensively in making glass. 
Depending on the glass, the benefit of using PbO in glass can be one or more of increasing the refractive index of the glass, decreasing the viscosity of the glass, increasing the electrical resistivity of the glass, and increasing the ability of the glass to absorb X-rays. Adding PbO to industrial ceramics (as well as glass) makes the materials more magnetically and electrically inert (by raising their Curie temperature) and it is often used for this purpose. Historically PbO was also used extensively in ceramic glazes for household ceramics, and it is still used, but not extensively any more. Other less dominant applications include the vulcanization of rubber and the production of certain pigments and paints. PbO is used in cathode ray tube glass to block X-ray emission, but mainly in the neck and funnel because it can cause discoloration when used in the faceplate. Strontium oxide and Barium oxide are preferred for the faceplate.The consumption of lead, and hence the processing of PbO, correlates with the number of automobiles, because it remains the key component of automotive lead–acid batteries. Applications: Niche or declining uses A mixture of PbO with glycerine sets to a hard, waterproof cement that has been used to join the flat glass sides and bottoms of aquariums, and was also once used to seal glass panels in window frames. It is a component of lead paints. PbO was one of the raw materials for century eggs, a type of Chinese preserved egg. but it has been gradually replaced due to health problems. It was an unscrupulous practice in some small factories but it became rampant in China and forced many honest manufacturers to label their boxes "lead-free" after the scandal went mainstream in 2013. In powdered tetragonal litharge form, it can be mixed with linseed oil and then boiled to create a weather-resistant sizing used in gilding. The litharge would give the sizing a dark red color that made the gold leaf appear warm and lustrous, while the linseed oil would impart adhesion and a flat durable binding surface. PbO is used in certain condensation reactions in organic synthesis.PbO is the input photoconductor in a video camera tube called the Plumbicon. Health issues: Lead oxide may be fatal if swallowed or inhaled. It causes irritation to skin, eyes, and respiratory tract. It affects gum tissue, the central nervous system, the kidneys, the blood, and the reproductive system. It can bioaccumulate in plants and in mammals.
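To illustrate the thermal-decomposition route given in the synthesis section, the sketch below computes the mass of PbO obtainable from a given mass of lead(II) nitrate via 2 Pb(NO3)2 → 2 PbO + 4 NO2 + O2; the molar masses are standard values, and the 100 g input is an arbitrary example, not a figure from the article.

```python
# Stoichiometric yield of PbO from lead(II) nitrate: 2 Pb(NO3)2 -> 2 PbO + 4 NO2 + O2
M_PB_NITRATE = 331.21  # g/mol, Pb(NO3)2
M_PBO = 223.20         # g/mol, PbO

def pbo_yield(mass_nitrate_g: float) -> float:
    """Mass of PbO (g) from full decomposition of the given mass of Pb(NO3)2 (1:1 molar ratio)."""
    return mass_nitrate_g / M_PB_NITRATE * M_PBO

print(f"100 g Pb(NO3)2 -> {pbo_yield(100.0):.1f} g PbO")  # about 67 g
```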
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Synechia (eye)** Synechia (eye): Ocular synechia is an eye condition where the iris adheres to either the cornea (i.e. anterior synechia) or lens (i.e. posterior synechia). Synechiae can be caused by ocular trauma, iritis or iridocyclitis and may lead to certain types of glaucoma. It is sometimes visible on careful examination but usually more easily through an ophthalmoscope or slit-lamp. Synechia (eye): Anterior synechia causes closed angle glaucoma, which means that the iris closes the drainage way of aqueous humour which in turn raises the intraocular pressure. Posterior synechia also cause glaucoma, but with a different mechanism. In posterior synechia, the iris adheres to the lens, blocking the flow of aqueous humor from the posterior chamber to the anterior chamber. This blocked drainage raises the intraocular pressure. Management: Mydriatic or cycloplegic agents, such as topical homatropine, which is similar in action to atropine, are useful in breaking and preventing the formation of posterior synechia by keeping the iris dilated and away from the crystalline lens. Dilation of the pupil in an eye with synechia can cause the pupil to take an irregular, non-circular shape (dyscoria) as shown in the photograph. If the pupil can be fully dilated during the treatment of iritis, the prognosis for recovery from synechia is good. This is a treatable status. Management: To subdue inflammation, topical corticosteroids can be used. A prostaglandin analogue, such as travoprost, may be used if the intra-ocular pressure is elevated.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**The Ghost Pirates** The Ghost Pirates: The Ghost Pirates is a horror novel by English writer William Hope Hodgson, first published in 1909. In it, Hodgson never describes in detail the ghosts – if this is indeed what they are, since their true nature is left ambiguous – he merely reports on their gradual commandeering of the ship. Story: The novel is presented as the transcribed testimony of Jessop, who we ultimately discover is the only survivor of the final voyage of the Mortzestus, having been rescued from drowning by the crew of the passing Sangier. It begins with Jessop's recounting how he came to be aboard the ill-fated Mortzestus and the rumors surrounding the vessel. Jessop then begins to recount the unusual events that rapidly increase in both frequency and severity. In the telling of his tale, Jessop offers only sparse interpretation of the events, spending most of the time relating the story in an almost journalistic fashion, presenting a relatively unvarnished description of the events and conversations as they occurred. He describes his confusion and uncertainty about what he believes he has seen, at times fearing for his own sanity. He eventually hears other members of the crew speak of strange events, most of which the rest of the crew pass off as either bad luck or the result of the witness being either tired or "dotty". Jessop only offers brief personal interpretation; he states that while he cannot discount the idea that the beings plaguing the ship may be ghosts, he presents his theory that they may be beings from another dimension that, while sharing the same physical space as theirs, are normally completely separated to the extent that neither dimension is aware of the existence of the other. He offers only vague, superficial suggestions as to the cause of his theorized dimensional breach. Style: The seafaring jargon, coupled with the phonetically rendered dialects of some of the crew, make the text at times somewhat opaque, while at the same time lending it an air of authenticity and believability. Through the use of compactly written prose and simple, almost offhand foreshadowing, Hodgson gradually increases the suspense and sense of dread. Added to this is the fact that the beings invading the ship are neither described in any detail nor explained as to their origin or motive. The combination of these literary devices allows Hodgson to amplify the feeling of impending doom until the moment of the novel's unavoidable climax, when the "sea-devils", as Lovecraft describes them, pull the Mortzestus beneath the waves. Reception: The economic style of writing has led horror writer Robert Weinberg to describe The Ghost Pirates as "one of the finest examples of the tightly written novel ever published."H.P. Lovecraft commented "The Ghost Pirates . . . is a powerful account of a doomed and haunted ship on its last voyage, and of the terrible sea-devils (of quasi-human aspect, and perhaps the spirits of bygone buccaneers) that besiege it and finally drag it down to an unknown fate. With its command of maritime knowledge, and its clever selection of hints and incidents suggestive of latent horrors in nature, this book at times reaches enviable peaks of power."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**W Sagittarii** W Sagittarii: W Sagittarii (W Sgr, Gamma-1 Sagittarii (γ¹ Sgr)) is a multiple star system in the constellation Sagittarius, and a Cepheid variable star. W Sagittarii is an optical line-of-sight companion nearly a degree from the much brighter γ2 Sgr (Al Nasl), which marks the nozzle or spout of the teapot asterism. System: W Sgr is listed as component A of a multiple star system catalogued as ADS 11029 and WDS J18050-2935. Components B and C are at 33" and 46" respectively and both are 13th magnitude. They are purely optical companions, not physically associated with W Sgr. Component A, W Sgr, is itself a triple star system, with the components referred to as W Sgr Aa1, Aa2, and Ab. These have also been referred to as components Aa, Ab, and B respectively. The outer companion Ab has been resolved at a separation of 0.14" and is over 5 magnitudes fainter than the primary supergiant. The inner components can only be identified spectroscopically by their radial velocity variations. The primary is a 6 M☉ yellow supergiant, while the secondary is an early F main sequence star with a mass less than 1.4 M☉. Variability: The supergiant component W Sgr Aa1 is a variable star which pulsates regularly between magnitudes 4.3 and 5.1 every 7.59 days. During the pulsations, the temperature and spectral type also vary. It is classified as a Classical Cepheid (δ Cephei) variable.
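As a small worked example of what the 4.3–5.1 magnitude range quoted above means in terms of brightness, the snippet below converts the magnitude difference to a flux ratio using the standard relation F1/F2 = 10^(0.4·Δm); the relation is general astronomy, not something specific to this star.

```python
# Convert the Cepheid's magnitude range (4.3 at maximum, 5.1 at minimum) into a flux ratio.
m_bright, m_faint = 4.3, 5.1
flux_ratio = 10 ** (0.4 * (m_faint - m_bright))
print(f"W Sgr is about {flux_ratio:.2f}x brighter at maximum than at minimum, every 7.59 days")
# Roughly 2.1 times brighter at maximum light.
```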
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Extracorporeal** Extracorporeal: An extracorporeal procedure is a medical procedure performed outside the body. Extracorporeal devices are artificial organs that remain outside the body while treating a patient. Extracorporeal devices are useful in hemodialysis and cardiac surgery. Circulatory procedures: A circulatory extracorporeal procedure is one in which blood is taken from a patient's circulation to have a process applied to it before it is returned to the circulation. All of the apparatus carrying the blood outside the body is termed the extracorporeal circuit. Examples include: Apheresis Autotransfusion Hemodialysis Hemofiltration Plasmapheresis Extracorporeal carbon dioxide removal Extracorporeal cardiopulmonary resuscitation Extracorporeal membrane oxygenation (ECMO) Cardiopulmonary bypass during open heart surgery. Other procedures: Extracorporeal shockwave lithotripsy (ESWL), which is unrelated to other extracorporeal therapies, in that the device used to break up the kidney stones is held completely outside the body, whilst the lithotripsy itself occurs inside the body. Extracorporeal radiotherapy, where a large bone with a tumour is removed and given a dose far exceeding what would otherwise be safe to give to a patient.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Handgun holster** Handgun holster: A handgun holster is a device used to hold or restrict the undesired movement of a handgun, most commonly in a location where it can be easily withdrawn for immediate use. Holsters are often attached to a belt or waistband, but they may be attached to other locations of the body (e.g., the ankle holster). Holsters vary in the degree to which they secure or protect the firearm. Some holsters for law enforcement officers have a strap over the top of the holster to make the handgun less likely to fall out of the holster or harder for another person to grab the gun. Some holsters have a flap over the top to protect the gun from the elements. Function: Holsters are generally designed to offer protection to the handgun, secure its retention, and provide ready access to it. The need for ready access is often at odds with the need for security and protection, so users must consider their needs. Choosing the right balance of security and availability can be very important, especially in the case of a defensive weapon holster, where failure to access the weapon quickly or damage or loss of the weapon due to insufficient retention or protection could leave the user inadequately defended. Function: One of the most important functions of a holster is trigger coverage. Many choose to carry a firearm with a round in the chamber so that it is immediately available to use. Although some gun users believe this to be dangerous, practically all modern handguns are designed to be carried this way, with safety features that are designed to prevent the weapon from discharging unless the trigger is pulled. The use of a holster that blocks access to the trigger effectively mitigates this risk. Holsters specifically designed for the model of firearm tend to perform best in this respect. Likewise, those constructed of more rigid materials better prevent manipulation of the trigger when holstered.Holsters are generally designed to be used with one hand, allowing the handgun to be removed and/or replaced with the same hand. To be able to return the handgun to its holster one-handed, the holster must be made from stiff material that holds its shape so that the holster won't collapse when the object is no longer inside to give it support. Function: Holsters are generally attached to a person's belt or waistband or clipped to another article of clothing. Some holsters, such as ankle holsters, have integrated support. Other holsters may fit inside a pocket, to add stability and protection to the handgun, keeping it more reliably secure and accessible than if it were in the pocket alone. Function: Holsters are generally worn in a location where they can be readily accessible. Common locations are: at the waist (outside (OWB) or inside (IWB) the waistband), behind the back (small of back (SOB), at the ankle, at the chest (in an elastic belly band or shoulder holster), or on the upper thigh. Holsters are sometimes contained in an external bag, such as a purse or fanny pack. Materials: Since holsters are typically made from fairly stiff yet tough materials, there are a limited number of common choices. The traditional material, particularly for handgun holsters, is leather. It has an attractive appearance and can be dyed in many colors and/or embossed with elaborate designs for cosmetic reasons. Ballistic nylon is another common fabric for holsters, as it is stiff, wear resistant, and thick enough to provide protection from gun shots and bullets. 
Molded plastics, such as Kydex, are also popular, due to their low cost and robustness. Common types and styles: Holster designs for firearms cover a wide range of shapes, materials, and retention/release mechanisms, from simple leather pouches hanging from a belt to highly protective holsters with flaps that cover the entire handgun, to highly adjustable competition holsters that hold the handgun at a precise position and release instantly when activated. The wide range of types indicates the highly varied circumstances in which holsters are used, and the varying preferences of the users. Common types and styles: Categories by use Holsters can be divided into four broad categories by use: duty holsters, worn by uniformed law enforcement and peace officers and security personnel; tactical holsters, worn by military, security, and law enforcement personnel in certain situations; concealment holsters, worn by plainclothes peace officers and private persons; and sporting holsters, worn for shooting sports and hunting. Common types and styles: Duty holsters are designed to be carried openly, so concealment is not an issue, but retention and appearance are. Duty holsters can be made of leather, nylon, or plastic; they are designed to be attached to a duty belt, and worn on the dominant side. Duty holsters are generally only found for service and compact size handguns as opposed to small subcompact handguns as these are generally only used for concealed carry backup guns.The primary characteristic that often distinguishes duty holsters from all other holster designs is retention. Modern law enforcement duty holsters are available with varying levels of retention security (i.e. Level I, Level II, Level II+, Level III, etc). Some security features are passive (such as retention screws, decoy straps, or hood guards), while others are active and require deliberate manipulation by the officer during the draw (such as traditional thumbreak snaps). While a higher level of retention will make it more difficult for a suspect to take a holstered handgun from an officer, it may also reduce the speed and ease with which an officer may draw their handgun (especially if the security features are active and not passive). Therefore, when selecting a duty holster, an officer may be forced to find a compromise of speed and retention that they are comfortable with. Common types and styles: Tactical/military holsters are usually made of nylon or plastic. They may be made in a camouflage pattern to match the wearer's uniform. They are often of a drop-leg design and offer a retention device. Some military holsters still use the old flap design (also referred to as a "suicide" or "widow maker" holster, which is cumbersome and slow on the draw, but provides greater protection for the holstered firearm against the elements). Common types and styles: There is some overlap between duty holsters, tactical holsters, and military holsters. Weapon retention is generally not as important a consideration in military use as it is in law enforcement due to the differences in their work environments.Concealment holsters are designed to be easily concealed, as well as lightweight and unobtrusive. They are generally designed for subcompact and compact handguns since they are easier to conceal. Concealment holsters are designed to be worn under clothing, such as on the belt under a coat, under pants in an ankle holster, or in a trouser pocket. 
Since the holster is held close to the body, comfort is important, and concealment holsters often have broad surfaces in contact with the user's body, to distribute the pressure across a wider area and prevent abrasion of the skin. Protecting the handgun from the user's perspiration is often an important consideration in such carry locations. Often the outside of the holster is broader, to help break up the outline of the handgun and prevent printing, where the outline of the gun can be seen through clothing. For pocket holsters, the external flat side is often the side with a nap, or rougher surface, to hold the holster in place when drawing the pistol. Common types and styles: Sporting holsters cover a wide spectrum of styles: maximum access for fast draw shooting, highly adjustable holsters used in IPSC and pin shooting, old-fashioned holsters used in Cowboy Action Shooting, high retention, maximum protection holsters used for handgun hunting, and simple holsters used to hold a handgun while out plinking. Like any sporting equipment, sporting holsters evolve to maximize the benefits given the rules of the game, where applicable, so the competitive sports have the most specialized holsters. Common types and styles: Holsters for hunting can be unique if they are designed to carry large handguns or to make allowances for telescopic sights. Large handguns are often carried in holsters that are slung across the shoulder, and removed from the body before the handgun is drawn. Slow access is acceptable in this case because the handgun is not expected to be used for defensive purposes. Common types and styles: Categories by method of wear Popular holster types are: Outside the waistband (OWB) or belt holsters, are most commonly used by police and military, and by citizens who choose to open carry. Belt holsters can be worn high and close to the body, slightly behind the hip bone ("4:00 position"), and can be concealed under a long, untucked shirt or jacket. Common types and styles: Inside the waistband (IWB) holsters, which clip or mount to a belt and allow one to securely holster the weapon inside the pants. Some IWB holsters give the wearer the option of tucking a shirt over the firearm and holster. A variant design is an "appendix inside the waistband holster" (AIWB), intended to allow wear inside the front of the pants (as opposed to the side or rear, which is more typical). Common types and styles: Appendix rigs, a variant design of the AIWB holsters with an attached magazine carrier. Some are modular in design, such as the Dara Modular Appendix Rig. Below waistband (BWB), made popular by Urban Carry Holsters manufacturer, is a style of holster that attaches directly below the waistline and is more deeply concealed than a traditional IWB holster. Common types and styles: Shoulder holsters consist of two straps connected in a manner similar to a backpack, with the actual holster mounted to a strap on the right or the left side. Shoulder holsters are designed to position the handgun in one of three ways: a vertical position with the barrel pointed generally toward the ground, a vertical position with the barrel pointed generally upward, or a horizontal position with the barrel pointed generally behind the wearer. Shoulder holsters are typically comfortable for the wearer, as they distribute the weight across the shoulders instead of directly on the belt. Normally, the leather straps cross over on shoulders and back. 
The spare magazines hang on the opposite side of the body from the holster. It also allows one to carry a gun under a jacket or sports coat. The holster may be oriented either vertically, for longer handguns such as large- or full-frame revolvers, or horizontally, for other firearms. The gun can also be placed over the chest or under the armpit, and the position depends on the gun design. Advantages of this holster are: it is comfortable to wear, even when worn for a long time; it is easily concealable even with a jacket; and a good design usually distributes the weight of the gun evenly. Common types and styles: Sling holsters are similar to shoulder holsters, but instead consist of a band worn over one shoulder and another around the chest. This style of holster (designated M3 for the early 1-strap model and M7 for the two-strap model in the U.S. military) was used for pilots, tank operators, and other vehicle drivers in World War II as they were easier to use in the seated position. They became popular with other soldiers who disliked the heavy leather flap on the standard issue M1911A1 hip holster. They are still produced by the U.S. military. Common types and styles: The belly band holster is a wide elastic belt with a built-in holster, usually worn under an untucked shirt to facilitate access. There are various types, worn at the belt line or higher, with the gun placement anywhere from in front to under the armpit. In order to remain in place, a belly band must be extremely tight; this is generally uncomfortable – it is comparable to wearing a girdle. Common types and styles: Pocket holsters are used for very small weapons, such as pocket pistols. Common types and styles: Small of back holsters place the weapon directly over the center of the back, allowing even large handguns to be carried with little printing. While both comfortable and stylish, should the wearer fall onto the weapon (such as in a close quarters fight) serious spinal injury may occur. For this reason, in recent times many police departments in the United States have disallowed any equipment from being worn in this position. Common types and styles: Groin holsters place the handgun mostly below the waistline in front of the body. There are few body movement or clothing restrictions with this holster type. Common types and styles: Thigh holsters (also known as tactical or drop leg holsters) are a popular law enforcement item that stores the sidearm on the leg where the hand naturally hangs, making for a fast draw. Early U.S. cavalry units used these in the early 1900s with a leather thong strapping the holster to the leg. Modern ones often use a drop leg PALS grid with a modular holster attached, often with buckles for quick release. Law enforcement and military personnel wear these when a bulky vest or a full belt (as in the case of K9 officers) makes belt carry impractical or when they want an alternative to another holster. Western style holsters of this type, known as buscadero holsters, were worn by many actors in Western films and TV shows set in the 1800s, even though they weren't invented until the 1920s. Common types and styles: Ankle (aka "boot") holsters offer excellent concealment and are used by law enforcement officials who wish to carry a secondary weapon to back up their primary firearm. However, many officers find that even a small handgun bounces around too much while running or during other physical activities. Chest holsters can be attached to MOLLE-compatible vests and chest carriers. 
Like shoulder holsters, chest holsters are often easier to draw from than belt holsters when the operator is seated inside a vehicle. Common types and styles: Strut holsters are used exclusively for concealed carry. They are worn above the trouser belt line as a cross draw holster located directly under one's arm (9 o'clock position) or toward the front of the body (10 to 11 o'clock position). The design contains a strut which is shaped to nest behind one's trouser belt and attach to the holster at the other end. The strut transfers the weight of the firearm to the belt and retains the weapon in place for secure removal. A flexible band is also attached to the holster and worn above the waist to keep the weapon snug against the body. Concealment is achieved by wearing the unit inside of a shirt which may be tucked in or worn outside. Common types and styles: Pancake holsters are typically made of two pieces of material with the handgun sandwiched between them, containing at least two belt slots. They should be carried slightly off the hip, toward the rear. The pancake style of carry allows pulling the gun tight against the body for better concealment. Common types and styles: Cross-draw belt holsters are designed to be worn outside the waistline on the weak side of the body (opposite the dominant hand). Although cross-draw carry is often considered slower due to the necessary movement across the body, drawing the gun from a seated position can be more comfortable and even quicker than with other carry methods. Cross-draw belt holsters may be an ideal option for carrying a backup gun on the waistline and are also an appropriate choice for women due to the comfort of carry and natural adaptability to the female body. Other, specialized types of holsters are designed to be mounted inside briefcases, day planners, purses and filofaxes, or even articles of clothing, including tank tops and bras. Common types and styles: Attachment options The safest way to carry a handgun is in a holster that keeps the gun stable in place yet offers comfort and easy access when needed. As there are many different types of holster and ways of carrying concealed, one can choose the one that suits one's expectations and needs. For all these preferences – concealment, safety, stability and easy accessibility – belt holsters are the most popular choice among customers. However, even within this group one can choose among different ways of attaching the holster. Some of the most common belt holster attachment options are: Belt loops – consist of two or more metal pieces which attach the holster securely to the belt. Even though they take longer to put on and take off, they give the holster better stability and a snug fit. Two or more belt loops also allow the holster to be worn at various angles. Common types and styles: Belt tunnel – a single wider loop that is easily threaded onto the belt. One disadvantage is reduced stability. Belt snaps – much easier to put on and take off the belt, yet they keep the holstered gun stable. Belt clip – the holster is securely clipped onto the waistband without taking the belt off, which allows the holster to be attached quickly and easily. The belt clip can be made of either steel or polymer. Paddle – a comfortable way to carry a holstered gun, very easily attached at the waistband of the trousers – even without wearing a belt. The disadvantage is looser fixation and the safety risks that come with it. 
Lowered belt loop – favored by professionals for its easy and fast accessibility. It enables one to adjust the height and angle of the holstered handgun. Since it is intended for open carry, it is suitable only where open carry is permitted. Traditionally a holster is attached to a waist belt, thigh belt or shoulder harness that will offer draw-resistance. Other options like belly-bands, fanny packs or specially designed rigs like the Enigma offer alternative concealed carry options that do not require a standard belt. Makers: Custom leather workers typically specialize in one or two areas of leather work, and holster makers usually remain within that specialty. Pistols of all sizes, whether compact, mid-size or large handguns, are sheathed in leather in a process that molds the material to the firearm and hardens it into a stout, strong and long-lasting holster. These can be made as inside-the-waistband, strong-side, cross-draw, shoulder, chest, pocket and inside-the-shirt holsters. These holsters are made for competition shooters, recreational users, security and law enforcement. Makers: A newer generation of manufacturing has come to the forefront of holster making, using materials and techniques such as Kydex, 3D printing, and injection molding. These newer techniques provide longer-lasting products that are more easily adapted to different handgun combinations including lights, lasers, suppressors, sights and optics that are commonly installed on more modern handguns. Leather holsters are still very popular in many circles of competition, concealed carry, and outdoor activities, but plastic holsters are outpacing leather holsters year over year due to their increased number of mounting options as well as the aforementioned benefits of modularity.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Instrument error** Instrument error: Instrument error refers to the error of a measuring instrument, or the difference between the actual value and the value indicated by the instrument. There can be errors of various types, and the overall error is the sum of the individual errors. Types of errors include systematic errors, random errors, absolute errors, and other errors. Systematic errors: The size of the systematic error is sometimes referred to as the accuracy. For example, the instrument may always indicate a value 5% higher than the actual value; or perhaps the relationship between the indicated and actual values may be more complicated than that. A systematic error may arise because the instrument has been incorrectly calibrated, or perhaps because a defect has arisen in the instrument since it was calibrated. Instruments should be calibrated against a standard instrument that is known to be accurate, and ideally the calibration should be repeated at intervals. The most rigorous standards are those maintained by a standards organization such as NIST in the United States, or the ISO in Europe. Systematic errors: If the users know the amount of the systematic error, they may decide to adjust for it manually rather than having the instrument expensively adjusted to eliminate the error: e.g. in the above example they might manually reduce all the values read by about 4.8% (since 1/1.05 ≈ 0.952). Random errors: The range in amount of possible random errors is sometimes referred to as the precision. Random errors may arise because of the design of the instrument. In particular they may be subdivided between errors in the amount shown on the display, and how accurately the display can actually be read. Amount shown on the display: Sometimes the effect of random error can be reduced by repeating the measurement a few times and taking the average result. Random errors: How accurately the display can be read: If the instrument has a needle which points to a scale graduated in steps of 0.1 units, then depending on the design of the instrument it is usually possible to estimate tenths between the successive marks on the scale, so it should be possible to read off the result to an accuracy of about 0.01 units. Other errors: The act of taking the measurement may alter the quantity being measured. For example, an ammeter has its own built-in resistance, so if it is connected in series to an electrical circuit, it will slightly reduce the current flowing through the circuit.
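To make the adjustment described above concrete, here is a minimal sketch in Python (the readings and the 5% bias are illustrative values, not taken from any particular instrument): dividing by 1.05 undoes a reading that is 5% high, which is the same as reducing it by about 4.8%, and averaging repeated readings suppresses random error.

```python
# Illustrative sketch: correcting a known systematic error and averaging
# out random error. The numbers are hypothetical, not from a real instrument.

def correct_systematic(reading, bias_fraction=0.05):
    """Remove a known multiplicative bias (instrument reads 5% high)."""
    return reading / (1.0 + bias_fraction)   # ~= reading * 0.952, i.e. about -4.8%

def average_readings(readings):
    """Averaging repeated measurements reduces the effect of random error."""
    return sum(readings) / len(readings)

if __name__ == "__main__":
    raw = [10.52, 10.47, 10.55, 10.49]        # repeated indicated values
    averaged = average_readings(raw)           # suppresses random scatter
    corrected = correct_systematic(averaged)   # removes the 5% systematic error
    print(f"averaged: {averaged:.3f}, corrected: {corrected:.3f}")
```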
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Novodur** Novodur: Lustran and Novodur are trade names for various types of styrenic resins (ABS, ASA, SMA) owned by INEOS Styrolution, which is part of INEOS. These resins are used mainly for housings and covers requiring good toughness, strength, stiffness, chemical resistance and a good to very good surface finish. In addition to the general-purpose injection molding grades, the range comprises a large number of high heat resistant grades as well as special-purpose products for extrusion and chemical electroplating. Glass fiber reinforced and flame retardant grades are also available. Novodur: In the Czech Republic (and formerly in Czechoslovakia), "Novodur" was a trade name (registered by the company Fatra a.s. for domestic use 2.6.1971) for PVC-U (non-softened polyvinyl chloride). The term is still used in the country, mostly as an informal term for plumbing pipes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bandwidth smearing** Bandwidth smearing: Bandwidth smearing is a chromatic aberration of the reconstructed image of a celestial body observed by an astronomical interferometer that occurs because of the finite frequency bandwidth. In Fourier terms, the different frequencies within the bandwidth probe different spatial frequencies, which results in a reconstructed map containing elongated radial features. It is overcome by going to higher spectral resolutions or, in radioastronomy, by using different centres of phase for image reconstruction.
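As a hedged, first-order illustration (a standard radio-interferometry rule of thumb, not stated in the article itself), the radial extent of the smearing scales with the fractional bandwidth and with the angular distance of the source from the phase centre:

```latex
% Approximate radial smearing of a source at angular distance \theta_0 from the
% phase centre, for bandwidth \Delta\nu around centre frequency \nu_0.
% Assumption: standard first-order estimate, not quoted from the article above.
\Delta\theta_{\mathrm{radial}} \approx \frac{\Delta\nu}{\nu_0}\,\theta_0
```

Splitting the band into narrower channels (higher spectral resolution) reduces the ratio Δν/ν0 and hence the smearing, consistent with the remedies mentioned above.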
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Studia Logica** Studia Logica: Studia Logica (full name: Studia Logica, An International Journal for Symbolic Logic) is a scientific journal publishing papers employing formal tools from Mathematics and Logic. The scope of papers published in Studia Logica covers all scientific disciplines; the key criterion for published papers is not their topic but their method: they are required to contain significant and original results concerning formal systems and their properties. The journal offers papers on topics in general logic and on applications of logic to methodology of science, linguistics, philosophy, and other branches of knowledge. The journal is published by the Institute of Philosophy and Sociology of the Polish Academy of Sciences and Springer. History: The name Studia Logica appeared for the first time in 1934, but only one volume (edited by Jan Łukasiewicz) was published at that time. It has been published continuously since December 1953, with changing frequency, by the Polish Academy of Sciences. Articles used to appear in Polish, Russian, German, English or French, with their summaries or full translations in at least two of the languages. Kazimierz Ajdukiewicz was chief editor until his death in 1963. The position was later held by Jerzy Słupecki (1963-1970) and Klemens Szaniawski (1970-1974). Under the editorship of Ryszard Wójcicki (1975-1980), who later headed the journal as chairman of the editorial board, Studia Logica moved to publish in English only, and partnered with a Dutch international distributor. Jacek Malinowski has run Studia Logica as Editor-in-Chief since 2006. Conferences: In 2003, to celebrate 50 years of Studia Logica, two conferences were organized: in Warsaw/Mądralin (Poland) and in Roskilde (Denmark). They started a series of scientific conferences in collaboration with Studia Logica under the name "Trends in Logic". More than 20 Trends in Logic conferences have been organized, in different countries in Europe, Asia and South America. A full list of Trends in Logic conferences can be found at http://studialogica.org/past.events.html Bookseries Studia Logica Library: The Studia Logica Library was founded by Ryszard Wójcicki. The first book in the series, The Is-Ought Problem by Gerhard Schurz, was published in 1997. Originally, these volumes were published by Kluwer Academic Publishers, and starting in September 2005 (with Trends in Logic volume 24), they began publishing with Springer. Currently the Studia Logica Library consists of three subseries: Trends in Logic, run by Heinrich Wansing; Outstanding Contributions, run by Sven Ove Hansson; and Logic in Asia, run by Fenrong Liu and Hiroakira Ono.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Prismatic compound of antiprisms with rotational freedom** Prismatic compound of antiprisms with rotational freedom: Each member of this infinite family of uniform polyhedron compounds is a symmetric arrangement of antiprisms sharing a common axis of rotational symmetry. It arises from superimposing two copies of the corresponding prismatic compound of antiprisms (without rotational freedom), and rotating each copy by an equal and opposite angle. Prismatic compound of antiprisms with rotational freedom: This infinite family can be enumerated as follows: For each positive integer n>0 and for each rational number p/q>3/2 (expressed with p and q coprime), there occurs the compound of 2n p/q-gonal antiprisms (with rotational freedom), with symmetry group: D_(np)d if nq is odd, D_(np)h if nq is even. Where p/q=2 the component is a tetrahedron, sometimes not considered a true antiprism.
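The enumeration rule above is easy to apply mechanically; the following short Python sketch (the function name is ours, purely illustrative) checks the conditions n > 0 and p/q > 3/2 with p, q coprime, and returns the symmetry group given by the stated parity rule.

```python
from math import gcd

def compound_symmetry_group(n: int, p: int, q: int) -> str:
    """Apply the enumeration rule quoted above for the compound of 2n
    p/q-gonal antiprisms (with rotational freedom): the symmetry group is
    D_(np)d if n*q is odd, and D_(np)h if n*q is even."""
    if n <= 0 or gcd(p, q) != 1 or 2 * p <= 3 * q:
        raise ValueError("need n > 0 and p/q > 3/2 with p, q coprime")
    suffix = "d" if (n * q) % 2 == 1 else "h"
    return f"D_({n * p}){suffix}"

# Example: the compound of 2 pentagrammic (5/2) antiprisms (n = 1);
# here n*q = 2 is even, so the stated rule gives the "h" group.
print(compound_symmetry_group(1, 5, 2))  # -> "D_(5)h"
```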
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Delsarte–Goethals code** Delsarte–Goethals code: The Delsarte–Goethals code is a type of error-correcting code. History: The concept was introduced by mathematicians Ph. Delsarte and J.-M. Goethals in their published paper. A new proof of the properties of the Delsarte–Goethals code was published in 1970. Function: The Delsarte–Goethals code DG(m,r) for even m ≥ 4 and 0 ≤ r ≤ m/2 − 1 is a binary, non-linear code of length 2^m, size 2^(r(m−1)+2m) and minimum distance 2^(m−1) − 2^(m/2−1+r). The code sits between the Kerdock code and the second-order Reed–Muller codes. More precisely, we have K(m) ⊆ DG(m,r) ⊆ RM(2,m). When r = 0, we have DG(m,r) = K(m) and when r = m/2 − 1 we have DG(m,r) = RM(2,m). Function: For r = m/2 − 1 the Delsarte–Goethals code has strength 7 and is therefore an orthogonal array OA(2^(3m−1), 2^m, Z_2, 7).
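A small Python sketch (the helper name is ours) that evaluates the parameters quoted above for DG(m,r); for m = 4 and r = 0 it reproduces the Kerdock code K(4) parameters of length 16, 256 codewords and minimum distance 6.

```python
def dg_parameters(m: int, r: int):
    """Parameters of the Delsarte–Goethals code DG(m, r) as quoted above:
    length 2^m, size 2^(r(m-1) + 2m), minimum distance 2^(m-1) - 2^(m/2 - 1 + r).
    Requires even m >= 4 and 0 <= r <= m/2 - 1."""
    if m < 4 or m % 2 != 0 or not (0 <= r <= m // 2 - 1):
        raise ValueError("need even m >= 4 and 0 <= r <= m/2 - 1")
    length = 2 ** m
    size = 2 ** (r * (m - 1) + 2 * m)
    min_distance = 2 ** (m - 1) - 2 ** (m // 2 - 1 + r)
    return length, size, min_distance

# DG(4, 0) coincides with the Kerdock code K(4): length 16, 256 codewords, distance 6.
print(dg_parameters(4, 0))   # -> (16, 256, 6)
```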
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Device Keys** Device Keys: Device Keys play a role in the cryptographic key management procedure in the Advanced Access Content System (AACS) specification. This specification defines a method for protecting audiovisual entertainment content, including high-definition content. Introduction: The AACS’s cryptographic key management procedure uses Device Keys to decrypt one or more elements of a Media Key Block (MKB), in order to extract a secret Media Key (Km). An MKB is located on the physical medium (the disc) together with the encrypted content of the disc. The MKB enables system renewability. The MKB is generated by AACS LA, and allows all compliant devices, each using their set of secret Device Keys, to calculate the same Media Key (Km). If a set of Device Keys is compromised in a way that threatens the integrity of the system, an updated MKB can be released that causes a device with the compromised set of Device Keys to be unable to calculate the correct Km. In this way, the compromised Device Keys are “revoked” by the new MKB. How it works: Each compliant device is given a set of secret Device Keys when manufactured. The actual number of keys may differ between media types. These Device Keys, referred to as Kdi (i=0,1,…,n-1), are provided by AACS LA. The set of Device Keys may either be unique per device, or used commonly by multiple devices. A device shall treat its Device Keys as highly confidential. How it works: The MKB is encrypted in a subset difference tree approach. In order to decrypt it, a device must know the right Processing Key (P), which is available via the subset-difference tree process. Essentially, the set of Device Keys is arranged in a tree such that any given Device Key can be used to find lower-level Processing Keys. Processing Keys at higher positions in the tree than the given set of Device Keys are not reachable. A given set of Device Keys gives access to a given set of Processing Keys, that is, to a given set of decodable MKBs. How it works: In this way, to revoke a given set of Device Keys, the MKB need only be encrypted with a Processing Key which is not reachable from that set. Storing: Each device is given its Device Keys and a 31-bit number d called the device number. Storing: For each Device Key, there is an associated number called the path number, the “u” bit mask, and the “v” bit mask. The path number denotes the position in the tree associated with the Device Key. This path number defines a path from the root to that node in the tree. The “u” and “v” masks are used in the subset difference tree process. They are always a single sequence of 1-bits followed by a single sequence of 0-bits. The bit masks indicate “don’t care” bits in the path number; if a bit is 0 in the mask, the corresponding bit in the path number is “don’t care”. The deeper the position of a node in the tree, the shorter the sequence of 0-bits in the mask associated with that node. Storing: The device number, path number, and masks denote nodes within a binary tree. Sources: Introduction and Common Cryptographic Elements Rev 0.91; AACS Technical Overview, 7/2004
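The “don’t care” mask logic can be illustrated with a short Python sketch; this is a simplified illustration of how such a mask selects a subtree of path numbers, not the AACS algorithm itself, and the 8-bit values are hypothetical.

```python
# Simplified sketch (not the AACS specification itself) of applying a
# "don't care" bit mask to a path number: where a mask bit is 0 the
# corresponding path bit is ignored, so one stored entry can cover a
# whole subtree of nodes.

def node_matches(path_number: int, key_path: int, mask: int) -> bool:
    """True if `path_number` lies in the subtree described by `key_path`
    under `mask` (0-bits in the mask are "don't care" positions)."""
    return (path_number & mask) == (key_path & mask)

# Hypothetical 8-bit example: mask 0b11100000 keeps only the top three bits,
# so every node whose path starts with 0b101 matches.
print(node_matches(0b10110111, 0b10100000, 0b11100000))  # True
print(node_matches(0b11010111, 0b10100000, 0b11100000))  # False
```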
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Technical art history** Technical art history: Technical art history is an interdisciplinary field of study at the cross-section of science and humanities in which an increasingly wide range of analytical tools is employed to shed light on the creative process from idea to artwork. Researchers from varying fields – among which art history, conservation, and conservation science – collaborate in an interdisciplinary manner to gain “a thorough understanding of the physical object in terms of original intention, choice of materials and techniques as well as the context in and for which the work was created, its meaning and contemporary perception.”The scientific analysis of art was initially simply referred to as “technical studies”, a term that was used in early publications by the Straus Center for Conservation and Technical Studies at the Harvard Art Museums in the 1930s. These technical studies entered the discipline art history in the first half of the twentieth century. Since then, the field has evolved rapidly from an auxiliary science into an independent scholarly field and there have been regular attempts to define its scope and aim in published texts. As the field and its name are still rather young, the definitions and objectives that are presented may vary from scholar to scholar. It is clear that with the emancipation of the field, it has exceeded the collaboration of just art historians, conservators and conservation scientists. A broad definition is therefore required to include methodologies from various fields such as anthropology, philology, history of science, and material culture.Two main pathways are followed to explore the physical reality of a work of art: an experimental approach, and the research of documentary sources. The experimental approach includes the direct analysis of works of art and artisanal materials by technical means. Documentary sources include books of secrets and other contemporary writings that deal with artists’ techniques and materials. These sources are vital to the interpretation of the experimental data. It is the combination of these two pathways that calls for the broad range of methodologies and interdisciplinarity of research in the field of technical art history. History and development: In the early twentieth century the first laboratories focusing on applying scientific techniques on artworks were established around the world. In 1888, the Rathgen-Forschungslabor was founded in Berlin as the first museum laboratory in the world, and in 1928, Edward Forbes established the first conservation research centre in the United States at the Fogg Art Museum at Harvard University (now the Straus Centre for Conservation and Technical Studies). The establishment of these and other similar institutes was imperative to the development of a new approach of studying materials and techniques, and to the shift of conservation from a craft to a science-based practice. History and development: In the decades preceding the inclusion of scientific techniques in the realm of art, the production process of the artwork was of lesser importance to the understanding of an object. Instead, artworks were considered strictly as expressions of human genius and intuition. Consequently, material and technique were considered merely necessary accessories to this process, not influencing the artist's creative decisions. 
This point of view is part of a broader hierarchical dichotomy within art history between the mind and the hand, or the intellectual and material side of works of art. The research that was conducted in these institutes, and the development of new analytical techniques such as X-radiography and infrared reflectography did not only support more scientifically oriented conservation practices, but also allowed art historians to (re-)gain an understanding of the artist's way of working.Coinciding with the development of new scientific techniques was the emergence of the so-called ‘new’ art histories (such as feminist art history) in the mid-1970s. These new discourses in art history focused on the relevance of art to social constructs, ignoring traditional approaches and terms like “connoisseurship”, “quality”, “style”, and “genius”. Interestingly, technical art history is not a part of the ‘new’ art histories, even though the establishment of the field was in full progress during this time. Whereas the new art histories in fact move further away from the actual object, by placing it in its social context, technical art history moves further towards the objects, and uses new methods to investigate more traditional targets such as style, provenance, and authenticity. Instead, technical art history could be seen as a part of the material turn, or new materialism, within art history. History and development: The 1996 Grove Dictionary of Art (nowadays Grove Art Online) was lacking the term “technical art history” even though technical examinations had been a part of the artworld for several decades already. The absence of the term could be explained by the fact that it was only first coined at a conference in 1992 by David Bomford, and first published in text in 1996. Even though technical analyses, and examination of artistic techniques had existed for decades, it is in this last quarter of the twentieth century that the truly collaborative, and most importantly interpretive, interdisciplinary study of technical art history established itself. Technical art history as it is known in the 21st century goes beyond what scientific techniques can shed light on, by relying on methodologies from other fields to interpret the experimental scientific data. Whereas the latest developments in sciences will extend the reach of art historians, art history will challenge the sciences for the development and improvement of diagnostic tools or theories. Methodologies and aim: Technical art historical research has two main pathways: an experimental scientific approach towards materials and techniques, and research into documentary sources on techniques and materials. These two pathways were first set out in 1972 by Joyce Plesters, one of the early pioneers of technical art history, and they still remain as the principal methods by which research attempt to approach the physical reality of artworks. Methodologies and aim: Experimental approach The rapid development of scientific analytical applications has provided unique insight into the material composition of works of art and their subsequent deterioration process. Often used techniques for the analysis of artworks include multispectral imaging, X-radiography, scanning macro-XRF, neutron activation autoradiography, dendrochronology and gas chromatography-mass spectrometry (GC-MS). 
An overview of the continuously growing list of commonly used techniques is presented in the Handbook of Scientific Techniques for the Examination of Works of Art.The data gained from these analytical techniques is crucial for understanding the present condition of an artwork, including its material history and the changes it has undergone. Methodologies and aim: Documentary sources To accurately interpret and understand the data from scientific techniques, a thorough understanding of the artist's working process is required. Multidisciplinary research into documentary sources on artist's techniques and materials brings researchers closer to the original voice of the artist as it is found in diaries, treatises, correspondence, and other (near˗)contemporary writings. Methodologies from various disciplines such as philology and history of science are incorporated to provide insight into the context in which the artists worked with certain materials, and the mentality towards techniques that were used to manipulate these materials. Methodologies and aim: A commonly used method to research documentary sources is the reconstruction of historic artisanal recipes. These reconstructions shed light on artist's workshop practices and the process of making in the workshop. Methodologies and aim: Aim As the field is so interdisciplinary, identifying one single aim is a futile endeavour that would undoubtedly neglect the many different research possibilities that technical art history offers. Through collaborative interpretation, the multidisciplinary data can shed light on a broad range of subjects such as the material history of artworks, the artist's working process, and their use of specific materials. Knowledge of the physical and material aspect of artworks can assist in the authentication of artworks, and as such the field has also been described as a modern connoisseurship. The Rembrandt Research Project is an example of a well-known large research initiative that employed methodologies from technical art history to analyse and authenticate works by Rembrandt. Methodologies and aim: Although a large part of the research within technical art history is focused on these material aspects, many new types of research have developed within the field that try to achieve a broader view of the process of making that includes historic techniques, workshop practices, the context in which artists worked, and the transmission of tacit knowledge through manuscripts. Two examples of research initiatives that investigate the process of converting (tacit) craft knowledge into (scientific) written knowledge are the Making and Knowing project (Columbia University), and the ARTECHNE project (Utrecht University and University of Amsterdam). Instead of focusing on an artwork as the primary object, these projects focus on written documents, to improve understanding on how techniques in the arts were transmitted among artists and craftsmen. New insight into the material histories of artworks, from its moment of creation until its current condition in the 21st century, will encourage to look at works of art with fresh eyes and might lead to new insights in art history. Subsequently, every new observation on a work of art – in material or methodological terms – will spark new experimental and documentary research for confirmation. 
Finally, being re-engaged with artists “and all their processes and ambitions for making art” brings us back closer to the hand of the artist, allowing us to critically reflect on our interpretation of their works. Educational programmes: Technical art history was slow to penetrate art history departments at universities, as it had to compete with the emerging ‘new’ art histories in the 1970s and 1980s. Nowadays, several universities around the world offer training and research programmes based on technical art history. New York University Stockholm University University of Amsterdam University of Delaware University of Glasgow Yale University West Virginia University
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fit for Life** Fit for Life: Fit for Life is a diet and lifestyle book series stemming from the principles of orthopathy. It is promoted mainly by the American writers Harvey and Marilyn Diamond. The Fit for Life book series describes a fad diet which specifies eating only fruit in the morning, eating predominantly "live" and "high-water-content" food, and, if animal protein is eaten, avoiding combining it with complex carbohydrates. Fit for Life: While the diet has been praised for encouraging the consumption of raw fruits and vegetables, several other aspects of the diet have been disputed by dietitians and nutritionists, and the American Dietetic Association and the American Academy of Family Physicians list it as a fad diet. Description: The diet is based on Diamond's exploration of Herbert M. Shelton theories of food combining. Both authors claimed to be able to bring about weight loss without the need to count calories or undertake anything more than a reasonable exercise program. In the first version of the program, Diamond claimed that if one eats the foods in the wrong combination they "cause fermentation" in the stomach. This in turn gives rise to the destruction of valuable enzymes and nutrients. Diamond categorized foods into two groups: "dead foods" that "clog" the body, and "living foods" that "cleanse" it. According to Fit for Life principles, dead foods are those that have highly refined or highly processed origins; while living foods are raw fruits and vegetables. The basic points of Fit for Life are as follows: Fruits are best eaten fresh and raw. Where possible they should be eaten alone. Description: Carbohydrates and proteins should never be combined in the same meal. Water dilutes stomach digestive juices and should never be drunk at meals. Description: Dairy products are considered of limited value and because of their allergic capacity, should seldom, if ever, be eaten.In the 2000s, the Fit for Life system added the Personalized Fit for Life Weight Management Program, which employs proprietary protocols called Biochemical "Analyzation", Metabolic Typing and Genetic Predispositions. The Diamonds claim that these protocols allow the personalization of the diet, which thus customized is effective only for one individual, and can be used for that person's entire life. This version of the diet also puts less emphasis on "live" and "dead" foods, and instead talks of "enzyme deficient foods". The Diamonds posit that enzymes that digest proteins interfere with enzymes that digest carbohydrates, justifying some of the rules above. They also began to sell nutritional supplements, advertised as enzyme supplements, many of which are strongly recommended in the newest version of Fit for Life. Publications and marketing: The diet came to public attention in the mid-1980s with the publication of Fit for Life, a New York Times best seller which sold millions of copies, over 12 million according to Harvey Diamond. Harvey Diamond has also appeared on dozens of television talk shows promoting his theories. In Fit for Life II (1989) the Diamonds warned against eating artificial food additives such as hydrogenated vegetable oil, which at the time was being promoted by the food industry as a healthy alternative to saturated fat. Tony Robbins promoted the Fit for Life principles and veganism to increase energy levels in his book Unlimited Power. 
Publications and marketing: Book series Fit for Life (1985) - by Harvey and Marilyn Diamond ISBN 0-446-30015-2 Living Health (1987) - by Harvey and Marilyn Diamond ISBN 0-446-51281-8 Fit for Life II (1989) - by Harvey and Marilyn Diamond ISBN 0-446-35875-4 Fit for Life: A New Beginning (2001) - by Harvey Diamond ISBN 1-57566-718-5 Fit for Life Not Fat For Life (2003) - by Harvey Diamond ISBN 978-0-7573-0113-1 Living Without Pain (2007) - by Harvey Diamond ISBN 0-9769961-0-3 Additional books by Marilyn Diamond A New Way of Eating from the Fit for Life Kitchen (1987) The American Vegetarian Cookbook from the Fit for Life Kitchen (1990) The Fit for Life Cookbook (1991) Fitonics for Life (1996) with Donald Burton Schnell Recipes for Life (1998) with Lisa Neurith Young For Life (2013) with Donald Burton Schnell Controversy: Scientific reception Health experts and science writers have dismissed the book as quackery. Controversy: Credentials The rigor of study underlying Harvey Diamond's credentials have been disputed, which has drawn questions about his competence to write about nutrition, because his doctoral degree came from the American College of Life Science, a non-accredited correspondence school founded in 1982 by T.C. Fry, who did not graduate high school or undergo a formal accreditation process himself. Fit for Life's personalized diet program has been criticized for providing a "Clinical Manual" that is heavily infused with alternative medicine claims about how the body works, some of which may be scientifically inaccurate or not accepted by conventional medicine. Controversy: Clinical trials Despite the fact that the Fit for Life web site mentioned "clinical trials", many of the proposed principles and benefits of the Fit for Life diet are not supported by citations to any scholarly research, and some of the claims have actually been directly refuted by scientific research. For example, a dissociated diet as that advertised by Fit for Life is as effective for weight loss as a calorie-restricted diet.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**After the Zap** After the Zap: After the Zap is a novel by Michael Armstrong published by Popular Library in 1987. Plot summary: After the Zap is a novel set in Alaska, where mutations in humans occur after an experimental electromagnetic device detonates. Reception: J. Michael Caparula reviewed After the Zap in Space Gamer/Fantasy Gamer No. 81. Caparula commented that "This is a fun, fast-paced book that I think SG/FG readers will like a lot." Reviews: Review by Faren Miller (1987) in Locus, #315 April 1987 Review by Joe Sanders (1987) in Fantasy Review, July–August 1987 Review by Edward Bryant (1987) in Rod Serling's The Twilight Zone Magazine, October 1987 Review by Ken Lake (1989) in Paperback Inferno, #80 Kliatt
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Divided visual field paradigm** Divided visual field paradigm: The Divided Visual Field Paradigm is an experimental technique that involves measuring task performance when visual stimuli are presented in the left or right visual hemifield. If a visual stimulus appears in the left visual field (LVF), the visual information is initially projected to the right cerebral hemisphere (RH), and conversely, if a visual stimulus appears in the right visual field (RVF), the visual information is initially received by the left cerebral hemisphere (LH). In this way, if a cerebral hemisphere has functional advantages with some aspect of a particular task, an experimenter might observe improvements in task performance when the visual information is presented in the contralateral visual field. Background: The divided visual field paradigm capitalizes on the lateralization of the visual system. Each cerebral hemisphere only receives information from one half of the visual field—specifically, from the contralateral hemifield. For example, retinal projections from ganglion cells in the left eye that receive information from the left visual field cross to the right hemisphere at the optic chiasm, while information from the right visual field received by the left eye will not cross at the optic chiasm, and will remain on the left hemisphere. Stimuli presented in the right visual field (RVF) will ultimately be processed first by the left hemisphere's (LH) occipital cortex, while stimuli presented in the left visual field (LVF) will be processed first by the right hemisphere's (RH) occipital cortex. Background: Because lateralized visual information is initially segregated between the two cerebral hemispheres, any differences in task performance (e.g., improved response time) between LVF/RVF conditions might be interpreted as differences in the RH or LH's ability to perform the task. Methodology: To enable the lateralized presentation of visual stimuli, participants must first be fixated at a centralized location, and must be unable to anticipate whether an upcoming stimulus will be presented to the right or left of fixation. Because the center of the visual field, the fovea, may project bilaterally to both RH and LH, lateralized stimuli should appear sufficiently far from fixation. Researchers recommend that the inside edge of any visual stimulus should be between 2.5° and 3° from central fixation. Lateralized stimuli must also be presented very briefly, to eliminate the participant's ability to make an eye movement toward the lateralized stimulus (which would result in the stimulus no longer being lateralized, and instead projected to both cerebral hemispheres). Since saccadic latencies to a lateralized stimulus can be as fast as 150ms following stimulus onset, the lateralized stimulus should only be presented for a duration of 180ms at most. A free software tool called the "Lateralizer" has been developed for piloting and conducting customizable experiments using the divided visual field paradigm. Limitations: A significant difference between RVF/LH and LVF/RH task performance using the divided visual field paradigm does provide evidence of a functional asymmetry between the two cerebral hemispheres. However, as described by Ivry and Robertson (1998), there are limitations to the types of inferences that can be made from this technique: These [divided visual field] methods have their limitations. 
A critical assumption has been that differences in performance with lateralized stimuli nearly always reflect functional differences between the two hemispheres. This is an extremely strong assumption. Researchers have tended to ignore or downplay the fact that asymmetries in brain function cannot be directly observed with these methods. It would require a leap of faith to assume that there is a straightforward mapping between lateralizing a stimulus and producing disproportionate activation throughout the contralateral hemisphere. Normal subjects have an intact corpus callosum, which provides for the rapid transfer of information from one hemisphere to the other. Limitations: Visual information can be transferred from one cerebral hemisphere to the other in as little as 3ms, so any task differences greater than 3ms may represent asymmetries in neural dynamics that are more complex than a single hemisphere's simple dominance for a particular task. Moreover, the divided visual field technique represents a relatively coarse and indirect method for localizing brain regions associated with cognitive function. Other neuroimaging techniques, including fMRI, PET, and EEG, will provide more spatial resolution, and more direct measures of neural activity. However, these methods are significantly more costly than the divided visual field paradigm.
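As a practical aside on the methodology described above, the placement constraint (inside edge at least 2.5° to 3° from fixation) is usually converted into screen pixels with the standard visual-angle formula; the sketch below assumes a hypothetical monitor and viewing distance, which are illustrative only.

```python
import math

# Sketch of the standard visual-angle-to-pixels conversion used when placing
# lateralized stimuli. The monitor geometry below is hypothetical.

def degrees_to_pixels(angle_deg: float, viewing_distance_cm: float,
                      screen_width_cm: float, screen_width_px: int) -> float:
    """Horizontal offset in pixels corresponding to `angle_deg` of visual angle."""
    offset_cm = viewing_distance_cm * math.tan(math.radians(angle_deg))
    return offset_cm * (screen_width_px / screen_width_cm)

# Example: 57 cm viewing distance, 53 cm / 1920 px wide display.
# The inside edge of a stimulus should sit at least ~2.5 deg from fixation,
# and the stimulus should remain on screen for at most ~180 ms.
min_offset_px = degrees_to_pixels(2.5, 57.0, 53.0, 1920)
print(f"place stimulus edge >= {min_offset_px:.0f} px from fixation; duration <= 180 ms")
```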
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dysmenorrhea** Dysmenorrhea: Dysmenorrhea, also known as period pain, painful periods or menstrual cramps, is pain during menstruation. Its usual onset occurs around the time that menstruation begins. Symptoms typically last less than three days. The pain is usually in the pelvis or lower abdomen. Other symptoms may include back pain, diarrhea or nausea. Dysmenorrhea can occur without an underlying problem. Underlying issues that can cause dysmenorrhea include uterine fibroids, adenomyosis, and most commonly, endometriosis. It is more common among those with heavy periods, irregular periods, those whose periods started before twelve years of age and those who have a low body weight. A pelvic exam and ultrasound in individuals who are sexually active may be useful for diagnosis. Conditions that should be ruled out include ectopic pregnancy, pelvic inflammatory disease, interstitial cystitis and chronic pelvic pain. Dysmenorrhea occurs less often in those who exercise regularly and those who have children early in life. Treatment may include the use of a heating pad. Medications that may help include NSAIDs such as ibuprofen, hormonal birth control and the IUD with progestogen. Taking vitamin B1 or magnesium may help. Evidence for yoga, acupuncture and massage is insufficient. Surgery may be useful if certain underlying problems are present. Estimates of the percentage of female adolescents and women of reproductive age affected are between 50% and 90%. It is the most common menstrual disorder. Typically, it starts within a year of the first menstrual period. When there is no underlying cause, the pain often improves with age or after having a child. Signs and symptoms: The main symptom of dysmenorrhea is pain concentrated in the lower abdomen or pelvis. It is also commonly felt in the right or left side of the abdomen. It may radiate to the thighs and lower back. Symptoms often co-occurring with menstrual pain include nausea and vomiting, diarrhea, headache, dizziness, disorientation, fainting and fatigue. Symptoms of dysmenorrhea often begin immediately after ovulation and can last until the end of menstruation. This is because dysmenorrhea is often associated with changes in hormonal levels in the body that occur with ovulation. In particular, prostaglandins induce abdominal contractions that can cause pain and gastrointestinal symptoms. The use of certain types of birth control pills can prevent the symptoms of dysmenorrhea because they stop ovulation from occurring. Signs and symptoms: Dysmenorrhea is associated with increased pain sensitivity and heavy menstrual bleeding. For many women, primary dysmenorrhea gradually subsides in the late second decade of life. Pregnancy has also been demonstrated to lessen the severity of dysmenorrhea, when menstruation resumes. However, dysmenorrhea can continue until menopause. 5–15% of women with dysmenorrhea experience symptoms severe enough to interfere with daily activities. Causes: There are two types of dysmenorrhea, primary and secondary, based on the absence or presence of an underlying cause. Primary dysmenorrhea occurs without an associated underlying condition, while secondary dysmenorrhea has a specific underlying cause, typically a condition that affects the uterus or other reproductive organs. Painful menstrual cramps can result from an excess of prostaglandins released from the uterus. Prostaglandins cause the uterine muscles to tighten and relax, causing the menstrual cramps. This type of dysmenorrhea is called primary dysmenorrhea. 
Primary dysmenorrhea usually begins in the teens soon after the first period. Secondary dysmenorrhea is the type of dysmenorrhea caused by another condition such as endometriosis, uterine fibroids, uterine adenomyosis, and polycystic ovary syndrome. Rarely, birth defects, intrauterine devices, certain cancers, and pelvic infections cause secondary dysmenorrhea. If the pain occurs between menstrual periods, lasts longer than the first few days of the period, or is not adequately relieved by the use of nonsteroidal anti-inflammatory drugs (NSAIDs) or hormonal contraceptives, this could indicate another condition causing secondary dysmenorrhea. Membranous dysmenorrhea is a type of secondary dysmenorrhea in which the entire lining of the uterus is shed all at once rather than over the course of several days as is typical. Signs and symptoms include spotting, bleeding, abdominal pain, and menstrual cramps. The resulting uterine tissue is called a decidual cast and must be passed through the cervix and vagina. It typically takes the shape of the uterus itself. Membranous dysmenorrhea is extremely rare and there are very few reported cases. The underlying cause is unknown, though some evidence suggests it may be associated with ectopic pregnancy or the use of hormonal contraception. Causes: When laparoscopy is used for diagnosis, the most common cause of dysmenorrhea is endometriosis, in approximately 70% of adolescents. Other causes of secondary dysmenorrhea include leiomyoma, adenomyosis, ovarian cysts, pelvic congestion, and cavitated and accessory uterine mass. Risk factors: Genetic factors, stress and depression are risk factors for dysmenorrhea. Risk factors for primary dysmenorrhea include: early age at menarche, long or heavy menstrual periods, smoking, and a family history of dysmenorrhea. Dysmenorrhea is a highly polygenic and heritable condition. There is strong evidence of familial predisposition and genetic factors increasing susceptibility to dysmenorrhea. There have been multiple polymorphisms and genetic variants in both metabolic genes and genes responsible for immunity which have been associated with the disorder. Three distinct possible phenotypes have been identified for dysmenorrhea which include "multiple severe symptoms", "mild localized pain", and "severe localized pain". While there are likely differences in genotypes underlying each phenotype, the specific correlating genotypes have not yet been identified. These phenotypes are prevalent at different levels in different population demographics, suggesting different allelic frequencies across populations (in terms of race, ethnicity, and nationality). Polymorphisms in the ESR1 gene have been commonly associated with severe dysmenorrhea. Variant genotypes in metabolic genes such as CYP2D6 and GSTM1 have similarly been correlated with an increased risk of severe menstrual pain, but not with moderate or occasional phenotypes. The occurrence and frequency of secondary dysmenorrhea (SD) has been associated with different alleles and genotypes of those with underlying pathologies, which can affect the pelvic region or other areas of the body. Individuals with disorders may have genetic mutations related to their diagnoses which produce dysmenorrhea as a symptom of their primary diagnosis. It has been found that those with fibromyalgia who have the ESR1 gene variation XbaI and possess the XbaI AA genotype are more susceptible to experiencing mild to severe menstrual pain resulting from their primary pathology. 
Commonly, genetic mutations which are a hallmark of or associated with specific disorders can produce dysmenorrhea as a symptom which accompanies the primary disorder. Risk factors: In contrast with secondary dysmenorrhea, primary dysmenorrhea (PD) has no underlying pathology. Genetic mutations and variations have therefore been thought to underlie this disorder and contribute to the pathogenesis of PD. There are multiple single-nucleotide polymorphisms (SNPs) associated with PD. Two of the best studied include an SNP in the promoter of MIF and an SNP in the tumor necrosis factor (TNF-α) gene. When a cytosine 173 base pairs upstream of the macrophage migration inhibitory factor (MIF) promoter is replaced by a guanine, there is an associated increase in the likelihood of the individual experiencing PD. While a CC/GG genotype led to an increase in likelihood of the individual experiencing severe menstrual pain, a CC/GC genotype led to a more significant likelihood of the disorder impacting the individual overall, increasing the likelihood of any of the three phenotypes. A second associated SNP was located 308 base pairs upstream from the start codon of the TNF-α gene, in which guanine was substituted for adenine. A GG genotype at this locus is associated with the disorder and has been proposed as a possible genetic marker to predict PD. An association has also been found between mutations in the MEFV gene and dysmenorrhea, and these mutations are considered to be causative. The phenotypes associated with these mutations in the MEFV gene have been better studied; individuals who are heterozygous for these mutations are more likely to be affected by PD which presents as a severe pain phenotype. Genes related to immunity have been identified as playing a significant role in PD as well. IL1A was found to be the gene most associated with primary dysmenorrhea in terms of its phenotypic impact. This gene encodes a protein essential for the regulation of immunity and inflammation. While the mechanism of how it influences PD has yet to be discovered, it is assumed that possible mutations in IL1A or genes which interact with it impact the regulation of inflammation during menstruation. These mutations may therefore affect pain responses during menstruation which lead to the differing phenotypes associated with dysmenorrhea. Two additional well-studied SNPs which are suspected to contribute to PD were found in ZMIZ1 (the mutant allele called rs76518691) and NGF (the mutant allele called rs7523831). Both ZMIZ1 and NGF are associated with autoimmune responses and diseases, as well as pain response. The implication of these genes impacting dysmenorrhea is significant as it suggests that mutations which affect the immune system (specifically the inflammatory response) and pain response may also be a cause of primary dysmenorrhea. Mechanism: The underlying mechanism of primary dysmenorrhea is the contraction of the muscles of the uterus, which induces a local ischemia. During an individual's menstrual cycle, the endometrium thickens in preparation for potential pregnancy. After ovulation, if the ovum is not fertilized and there is no pregnancy, the built-up uterine tissue is not needed and is thus shed. Mechanism: Prostaglandins and leukotrienes are released during menstruation, due to the build-up of omega-6 fatty acids. The release of prostaglandins and other inflammatory mediators in the uterus causes the uterus to contract and can result in systemic symptoms such as nausea, vomiting, bloating and headaches or migraines. 
Prostaglandins are thought to be a major factor in primary dysmenorrhea. When the uterine muscles contract, they constrict the blood supply to the tissue of the endometrium, which, in turn, breaks down and dies. These uterine contractions continue as they squeeze the old, dead endometrial tissue through the cervix and out of the body through the vagina. These contractions, and the resulting temporary oxygen deprivation to nearby tissues, are thought to be responsible for the pain or cramps experienced during menstruation. Mechanism: Compared with non-dysmenorrheic individuals, those with primary dysmenorrhea have increased activity of the uterine muscle, with increased contractility and increased frequency of contractions. Diagnosis: The diagnosis of dysmenorrhea is usually made simply on a medical history of menstrual pain that interferes with daily activities. However, there is no universally accepted standard technique for quantifying the severity of menstrual pain. There are various quantification models, called menstrual symptometrics, that can be used to estimate the severity of menstrual pain as well as correlate it with pain in other parts of the body, menstrual bleeding and degree of interference with daily activities. Diagnosis: Further work-up Once a diagnosis of dysmenorrhea is made, further work-up is required to search for any secondary underlying cause, in order to be able to treat it specifically and to avoid the aggravation of a possibly serious underlying condition. Further work-up includes a specific medical history of symptoms and menstrual cycles and a pelvic examination. Based on the results, additional exams and tests may be warranted, such as: Gynecologic ultrasonography Laparoscopy Management: Treatments that target the mechanism of pain include non-steroidal anti-inflammatory drugs (NSAIDs) and hormonal contraceptives. NSAIDs inhibit prostaglandin production. With long-term treatment, hormonal birth control reduces the amount of uterine fluid/tissue expelled from the uterus, resulting in shorter, less painful menstruation. These drugs are typically more effective than treatments that do not target the source of the pain (e.g. acetaminophen). Regular physical activity may limit the severity of uterine cramps. Management: NSAIDs Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen and naproxen are effective in relieving the pain of primary dysmenorrhea. They can have side effects of nausea, dyspepsia, peptic ulcer, and diarrhea. Management: Hormonal birth control Use of hormonal birth control may improve symptoms of primary dysmenorrhea. A 2009 systematic review (updated in 2023) found evidence that the low or medium doses of estrogen contained in the birth control pill reduce pain associated with dysmenorrhea. In addition, no differences between different birth control pill preparations were found. The review did not determine if the estrogen in birth control pills was more effective than NSAIDs. Norplant and Depo-Provera are also effective, since these methods often induce amenorrhea. The intrauterine system (Mirena IUD) may be useful in reducing symptoms. Management: Other A review indicated the effectiveness of transdermal nitroglycerin. Reviews indicated magnesium supplementation seemed to be effective. A review indicated the usefulness of calcium channel blockers.
Management: Heat is effective compared to NSAIDs and is a preferred option by many patients, as it is easy to access and has no known side effects.Tamoxifen has been used effectively to reduce uterine contractility and pain in dysmenorrhea patients.There is some evidence that exercises performed 3 times a week for about 45 to 60 minutes, without particular intensity, reduces menstrual pain. Management: Alternative medicine There is insufficient evidence to recommend the use of many herbal or dietary supplements for treating dysmenorrhea, including melatonin, vitamin E, fennel, dill, chamomile, cinnamon, damask rose, rhubarb, guava, and uzara. Further research is recommended to follow up on weak evidence of benefit for: fenugreek, ginger, valerian, zataria, zinc sulphate, fish oil, and vitamin B1. A 2016 review found that evidence of safety is insufficient for most dietary supplements. There is some evidence for the use of fenugreek.One review found thiamine and vitamin E to be likely effective. It found the effects of fish oil and vitamin B12 to be unknown. Reviews found tentative evidence that ginger powder may be effective for primary dysmenorrhea. Reviews have found promising evidence for Chinese herbal medicine for primary dysmenorrhea, but that the evidence was limited by its poor methodological quality.A 2016 Cochrane review of acupuncture for dysmenorrhea concluded that it is unknown if acupuncture or acupressure is effective. There were also concerns of bias in study design and in publication, insufficient reporting (few looked at adverse effects), and that they were inconsistent. There are conflicting reports in the literature, including one review which found that acupressure, topical heat, and behavioral interventions are likely effective. It found the effect of acupuncture and magnets to be unknown.A 2007 systematic review found some scientific evidence that behavioral interventions may be effective, but that the results should be viewed with caution due to poor quality of the data.Spinal manipulation does not appear to be helpful. Although claims have been made for chiropractic care, under the theory that treating subluxations in the spine may decrease symptoms, a 2006 systematic review found that overall no evidence suggests that spinal manipulation is effective for treatment of primary and secondary dysmenorrhea.Valerian, Humulus lupulus and Passiflora incarnata may be safe and effective in the treatment of dysmenorrhea. Management: TENS A 2011 review stated that high-frequency transcutaneous electrical nerve stimulation may reduce pain compared with sham TENS, but seems to be less effective than ibuprofen. Surgery One treatment of last resort is presacral neurectomy. Epidemiology: Dysmenorrhea is one of the most common gynecological conditions, regardless of age or race. It is one of the most frequently identified causes of pelvic pain in those who menstruate. Dysmenorrhea is estimated to affect between 50% and 90% of female adolescents and women of reproductive age. Another report states that estimates can vary between 16% and 91% of surveyed individuals, with severe pain observed in 2% to 29% of menstruating individuals. Reports of dysmenorrhea are greatest among individuals in their late teens and 20s, with reports usually declining with age. The prevalence in adolescent females has been reported to be 67.2% by one study and 90% by another. 
It has been stated that there is no significant difference in prevalence or incidence between races, although one study of Hispanic adolescent females indicated an elevated prevalence and impact in this group. Another study indicated that dysmenorrhea was present in 36.4% of participants, and was significantly associated with lower age and lower parity. Childbearing is said to relieve dysmenorrhea, but this does not always occur. One study indicated that in nulliparous individuals with primary dysmenorrhea, the severity of menstrual pain decreased significantly after age 40. A survey in Norway showed that 14 percent of females between the ages of 20 and 35 experience symptoms so severe that they stay home from school or work. Among adolescent girls, dysmenorrhea is the leading cause of recurrent short-term school absence.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Accessible image** Accessible image: Accessibility is the design of products, devices, services, vehicles, or environments so as to be usable by people with disabilities. The concept of accessible design and practice of accessible development ensures both "direct access" (i.e. unassisted) and "indirect access", meaning compatibility with a person's assistive technology (for example, computer screen readers). Accessibility can be viewed as the "ability to access" and benefit from some system or entity. The concept focuses on enabling access for people with disabilities, or enabling access through the use of assistive technology; however, research and development in accessibility brings benefits to everyone. An accessible society should therefore help to eliminate the digital divide and the knowledge divide. Accessible image: Accessibility is not to be confused with usability, which is the extent to which a product (such as a device, service, or environment) can be used by specified users to achieve specified goals with effectiveness, efficiency, convenience, or satisfaction in a specified context of use. Accessibility is also strongly related to universal design, the process of creating products that are usable by the widest possible range of people, operating within the widest possible range of situations. Universal design typically provides a single general solution that can accommodate people with disabilities as well as the rest of the population. By contrast, accessible design is focused on ensuring that there are no barriers to accessibility for all people, including those with disabilities. Accessible image: A 2023 paper by researchers from the University of Oxford and University College London concluded that "active involvement of physically disabled individuals in the design and development of Metaverse platforms is crucial for promoting inclusivity". Legislation: The disability rights movement advocates equal access to social, political, and economic life, which includes not only physical access but access to the same tools, services, organizations and facilities as non-disabled people (e.g., museums). Article 9 of the United Nations Convention on the Rights of Persons with Disabilities commits signatories to provide for full accessibility in their countries. Legislation: While the term is often used to describe facilities or amenities that assist people with impaired mobility, such as wheelchair ramps, it can extend to include other types of disability. Accessible facilities therefore extend to areas such as Braille signage, elevators, audio signals at pedestrian crossings, walkway contours, website accessibility and accessible publishing. In the United States, government mandates and standards including Section 508, WCAG, and the DDA enforce practices to standardize accessibility testing and engineering in product development. Legislation: Accessibility modifications may be required to enable persons with disabilities to gain access to education, employment, transportation, housing, recreation, or even simply to exercise their right to vote. Legislation: National legislation Various countries have legislation requiring physical accessibility, including (in order of enactment): In the US, under the Americans with Disabilities Act of 1990, new public and private business construction generally must be accessible. Existing private businesses are required to increase the accessibility of their facilities when making any other renovations, in proportion to the cost of the other renovations.
The United States Access Board is "A Federal Agency Committed to Accessible Design for People with Disabilities". The Job Accommodation Network discusses accommodations for people with disabilities in the workplace. Many states in the US have their own disability laws. Legislation: In Australia, the Disability Discrimination Act 1992 has numerous provisions for accessibility. In South Africa the Promotion of Equality and Prevention of Unfair Discrimination Act 2000 has numerous provisions for accessibility. In the UK, the Equality Act 2010 has numerous provisions for accessibility. In Sri Lanka, the Supreme Court, on 27 April 2011, gave a landmark order to boost the inherent right of disabled persons to have unhindered access to public buildings and facilities. In Norway, the Discrimination and Accessibility Act (Diskriminerings- og tilgjengelighetsloven) defines lack of accessibility as discrimination and obliges public authorities to implement universal design in their areas. The Act refers to issue-specific legislation regarding accessibility in e.g. ICT, the built environment, transport and education. In Brazil, the law on the inclusion of people with disabilities has numerous provisions for accessibility. Legislation: In Canada, relevant federal legislation includes the Canadian Human Rights Act, the Employment Equity Act, the Canadian Labour Code, and the Accessible Canada Act (Bill C-81), which received Royal Assent on June 21, 2019. Legislation may also be enacted on a state, provincial or local level. In Ontario, Canada, the Ontarians with Disabilities Act of 2001 is meant to "improve the identification, removal and prevention of barriers faced by persons with disabilities". The European Union (EU), which has signed the United Nations' Convention on the Rights of Persons with Disabilities, has also adopted a European Disability Strategy for 2010–20. The Strategy includes the following goals, among others: devising policies for inclusive, high-quality education; ensuring the European Platform Against Poverty includes a special focus on people with disabilities (the forum brings together experts who share best practices and experience); working towards the recognition of disability cards throughout the EU to ensure equal treatment when working, living or travelling in the bloc; developing accessibility standards for voting premises and campaign material; taking the rights of people with disabilities into account in external development programmes and for EU candidate countries. A European Accessibility Act was proposed in late 2012. This Act would establish standards within member countries for accessible products, services, and public buildings. The harmonization of accessibility standards within the EU "would facilitate the social integration of persons with disabilities and the elderly and their mobility across member states, thereby also fostering the free movement principle". Assistive technology and adaptive technology: Assistive technology is the creation of a new device that assists a person in completing a task that would otherwise be impossible. Some examples include new computer software programs like screen readers, and inventions such as assistive listening devices, including hearing aids, and traffic lights with a standard color code that enables colorblind individuals to understand the correct signal.
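To make the idea of software cooperating with assistive technology such as screen readers more concrete, the following is a minimal sketch of how a desktop application can expose semantic information through Java's built-in Accessibility API (javax.accessibility, used here via Swing). The window, labels, and component names are hypothetical and purely illustrative, not taken from any product mentioned in this article.

```java
import javax.swing.*;

// Minimal illustration of exposing UI semantics to assistive technology
// (e.g. screen readers) via Java's built-in Accessibility API.
public class AccessibleFormDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Contact form");

            JLabel nameLabel = new JLabel("Full name:");
            JTextField nameField = new JTextField(20);
            // Associates the visible label with the field so screen readers
            // announce "Full name" when the field gains focus.
            nameLabel.setLabelFor(nameField);

            JButton submit = new JButton("Submit");
            // Name and description that assistive technology can read aloud.
            submit.getAccessibleContext().setAccessibleName("Submit contact form");
            submit.getAccessibleContext()
                  .setAccessibleDescription("Sends the entered contact details");

            JPanel panel = new JPanel();
            panel.add(nameLabel);
            panel.add(nameField);
            panel.add(submit);

            frame.getContentPane().add(panel);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.pack();
            frame.setVisible(true);
        });
    }
}
```

Web content pursues the same goal through semantic markup and the WCAG techniques discussed later in this article.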
Assistive technology and adaptive technology: Adaptive technology is the modification, or adaptation, of existing devices, methods, or the creation of new uses for existing devices, to enable a person to complete a task. Examples include the use of remote controls, and the autocomplete (word completion) feature in computer word processing programs, which both help individuals with mobility impairments to complete tasks. Adaptations to wheelchair tires are another example; widening the tires enables wheelchair users to move over soft surfaces, such as deep snow on ski hills, and sandy beaches. Assistive technology and adaptive technology: Assistive technology and adaptive technology have a key role in developing the means for people with disabilities to live more independently, and to more fully participate in mainstream society. In order to have access to assistive or adaptive technology, however, educating the public and even legislating requirements to incorporate this technology have been necessary. The UN CRPD, and courts in the United States, Japan, UK, and elsewhere, have decided that when it is needed to assure secret ballot, authorities should provide voters with assistive technology. The European Court of Human Rights, on the contrary, in case Toplak v. Slovenia ruled that due to high costs, the abandonment of the assistive equipment in elections did not violate human rights. Employment: Accessibility of employment covers a wide range of issues, from skills training, to occupational therapy, finding employment, and retaining employment. Employment: Employment rates for workers with disabilities are lower than for the general workforce. Workers in Western countries fare relatively well, having access to more services and training as well as legal protections against employment discrimination. Despite this, in the United States the 2012 unemployment rate for workers with disabilities was 12.9%, while it was 7.3% for workers without disabilities. More than half of workers with disabilities (52%) earned less than $25,000 in the previous year, compared with just 38% of workers with no disabilities. This translates into an earnings gap where individuals with disabilities earn about 25 percent less of what workers without disabilities earn. Among occupations with 100,000 or more people, dishwashers had the highest disability rate (14.3%), followed by refuse and recyclable material collectors (12.7%), personal care aides (11.9%), and janitors and building cleaners (11.8%). The rates for refuse and recyclable material collectors, personal care aides, and janitors and building cleaners were not statistically different from one another.Surveys of non-Western countries are limited, but the available statistics also indicate fewer jobs being filled by workers with disabilities. In India, a large 1999 survey found that "of the 'top 100 multinational companies' in the country [...] the employment rate of persons with disabilities in the private sector was a mere 0.28%, 0.05% in multinational companies and only 0.58% in the top 100 IT companies in the country". India, like much of the world, has large sections of the economy that are without strong regulation or social protections, such as the informal economy. Other factors have been cited as contributing to the high unemployment rate, such as public service regulations. 
Although employment for workers with disabilities is higher in the public sector due to hiring programs targeting persons with disabilities, regulations currently restrict types of work available to persons with disabilities: "Disability-specific employment reservations are limited to the public sector and a large number of the reserved positions continue to be vacant despite nearly two decades of enactment of the PWD Act".Expenses related to adaptive or assistive technology required to participate in the workforce may be tax deductible expenses for individuals with a medical practitioner's prescription in some jurisdictions. Employment: Disability management Disability management (DM) is a specialized area of human resources that supports efforts of employers to better integrate and retain workers with disabilities. Some workplaces have policies in place to provide "reasonable accommodation" for employees with disabilities, but many do not. In some jurisdictions, employers may have legal requirements to end discrimination against persons with disabilities. Employment: It has been noted by researchers that where accommodations are in place for employees with disabilities, these frequently apply to individuals with "pre-determined or apparent disabilities as determined by national social protection or Equality Authorities", which include persons with pre-existing conditions who receive an official disability designation. One of the biggest challenges for employers is in developing policies and practises to manage employees who develop disabilities during the course of employment. Even where these exist, they tend to focus on workplace injuries, overlooking job retention challenges faced by employees who acquire a non-occupation injury or illness. Protecting employability is a factor that can help close the unemployment gap for persons with disabilities. Transportation: Providing mobility to people with disabilities includes changes for public facilities like gently sloping paths of travel for people with wheelchairs and difficulty walking up stairs, or audio announcements for the blind; dedicated services like paratransit; and adaptations to personal vehicles. Transportation: Adapted automobiles for persons with disabilities Automobile accessibility also refers to ease of use by disabled people. Automobiles, whether a car or a van, can be adapted for a range of physical disabilities. Foot pedals can be raised, or replaced with hand-controlled devices. Wheelchair hoists, lifts or ramps may be customized according to the needs of the driver. Ergonomic adaptations, such as a lumbar support cushion, may also be needed.Generally, the more limiting the disability, the more expensive the adaptation needed for the vehicle. Financial assistance is available through some organizations, such as Motability in the United Kingdom, which requires a contribution by the prospective vehicle owner. Motability makes vehicles available for purchase or lease.When an employee with a disability requires an adapted car for work use, the employee does not have to pay for a "reasonable adjustment" in the United Kingdom; if the employer is unable to pay the cost, assistance is offered by government programs. Transportation: Low floor A significant development in transportation, and public transport in particular, to achieve accessibility, is the move to "low-floor" vehicles. 
In a low-floor vehicle, access to part or all of the passenger cabin is unobstructed from one or more entrances by the presence of steps, enabling easier access for the infirm or people with push chairs. A further aspect may be that the entrance and corridors are wide enough to accommodate a wheelchair. Low-floor vehicles have been developed for buses, trolleybuses, and trams. Transportation: A low floor in the vehicular sense is normally combined in a conceptual meaning with normal pedestrian access from a standard kerb (curb) height. However, the accessibility of a low-floor vehicle can also be utilised from slightly raising portions of kerb at bus stops, or through use of level boarding bus rapid transit stations or tram stops. The combination of access from a kerb was the technological development of the 1990s, as step-free interior layouts for buses had existed in some cases for decades, with entrance steps being introduced as chassis designs and overall height regulations changed. Transportation: Low-floor buses may also be designed with special height adjustment controls that permit a stationary bus to temporarily lower itself to ground level, permitting wheelchair access. This is referred to as a kneeling bus. Transportation: At rapid transit systems, vehicles generally have floors in the same height as the platforms but the stations are often underground or elevated, so accessibility there is not a question of providing low-floor vehicles, but providing a step-free access from street level to the platforms (generally by elevators, which may be restricted to disabled passengers only, so that the step-free access is not obstructed by non-disabled people taking advantage). Transportation: Accessibility planning for transportation in the United Kingdom In the United Kingdom, local transport authorities are responsible for checking that all people who live within their area can access essential opportunities and services, and where gaps in provision are identified the local authorities are responsible for organizing changes to make new connections. These requirements are defined in the UK Community Planning Acts legislation and more detailed guidance has been issued by the Department for Transport for each local authority. This includes the requirement to produce an Accessibility Plan under Community Planning legislation and to incorporate this within their Local Transport Plan. An Accessibility Plan sets out how each local authority plans to improve access to employment, learning, health care, food shops and other services of local importance, particularly for disadvantaged groups and areas. Accessibility targets are defined in the accessibility plans, these are often the distance or time to access services by different modes of transport including walking, cycling and public transport. Transportation: Accessibility Planning was introduced as a result of the report "Making the Connections: Final Report on Transport and Social Exclusion". This report was the result of research carried out by the Social Exclusion Unit. The United Kingdom also has a "code of practice" for making train and stations accessible: "Accessible Train and Station Design for Disabled People: A Code of Practice". This code of practice was first published in 2002 with the objective of compliance to Section 71B of the Railways Act 1993, and revised after a public consultation period in 2008. 
Transportation: Some transport companies have since improved the accessibility of their services, such as incorporating low-floor buses into their stock as standard. In August 2021, South Western Railway announced the streamlining of their accessibility services, allowing passengers requiring assistance to inform the company with as little as 10 minutes' notice at all 189 stations on its network, replacing an older scheme wherein assisted journeys had to be booked six hours to a day in advance. The system will utilise clear signage at stations and QR codes, allowing customers to send details of the assistance they require and their planned journey to staff remotely.Making public services fully accessible to the public has led to some technological innovations. Public announcement systems using audio induction loop technology can broadcast announcements directly into the hearing aid of anyone with a hearing impairment, making them useful in such public places as auditoriums and train stations. Transportation: Accessibility in urban design Accessibility modifications to conventional urban environments has become common in recent decades. The use of a curb cut, or kassel kerb, to enable wheelchair or walker movement between sidewalk and street level is found in most major cities of wealthy countries. The creation of priority parking spaces and of disabled parking permits has made them a standard feature of urban environments. Features that assist people with visual impairments include braille signs and tactile paving to allow a user with a cane to easily identify stairways, train platforms, and similar areas that could pose a physical danger to anyone who has a visual impairment. Transportation: Urban design features that may appear to be simple conveniences for persons without disabilities are often essential to anyone who has a disability. The loss of these features presents a significant barrier. For example, sometimes a lack of prompt snow-clearing on sidewalks of major Canadian city streets means that wheelchair and walker users cannot reach pedestrian crossing buttons on crosswalk posts, due to snow bank accumulation around the posts, making the crossing buttons inaccessible. Public services must take into account the need to maintain accessibility features in the urban environment. Housing: Most existing and new housing, even in the wealthiest nations, lack basic accessibility features unless the designated, immediate occupant of a home currently has a disability. However, there are some initiatives to change typical residential practices so that new homes incorporate basic access features such as zero-step entries and door widths adequate for wheelchairs to pass through. Occupational Therapists are a professional group skilled in the assessment and making of recommendations to improve access to homes. They are involved in both the adaptation of existing housing to improve accessibility, and in the design of future housing.The broad concept of Universal design is relevant to housing, as it is to all aspects of the built environment. Furthermore, a Visitability movement begun by grass roots disability advocates in the 1980s focuses specifically on changing construction practices in new housing. This movement, a network of interested people working in their locales, works on educating, passing laws, and spurring voluntary home access initiatives with the intention that basic access become a routine part of new home construction. 
Housing: Accessibility and 'ageing in place' Accessibility in the design of housing and household devices has become more prominent in recent decades due to a rapidly ageing population in developed countries. Ageing seniors may wish to continue living independently, but the ageing process naturally increases the disabilities that a senior citizen will experience. A growing trend is the desire for many senior citizens to 'age in place', living as independently as possible for as long as possible. Accessibility modifications that allow ageing in place are becoming more common. Housing may even be designed to incorporate accessibility modifications that can be made throughout the life cycle of the residents. Housing: The English Housing Survey for 2018/19 found only 9% of homes in England have key features, such as a toilet at entrance level and sufficiently wide doorways, to deem them accessible. This was an improvement from 5% in 2005. More than 400,000 wheelchair users in England were living in homes which are neither adapted nor accessible. Voting: Under the Convention on the Rights of Persons with Disabilities, states parties are bound to assure accessible elections, voting, and voting procedures. In 2018, the United Nations Committee on the Rights of Persons with Disabilities issued an opinion that all polling stations should be fully accessible. At the European Court of Human Rights, there are currently two ongoing cases about the accessibility of polling places and voting procedures. They were brought against Slovenia by two voters and the Slovenian Disability Rights Association. As of January 2020, the case, called Toplak and Mrak v. Slovenia, was ongoing. The aim of the court procedure is to make accessible all polling places in Europe. Disability, information technology (IT) and telecommunications: Advances in information technology and telecommunications have represented a leap forward for accessibility. Access to the technology is restricted to those who can afford it, but it has become more widespread in Western countries in recent years. For those who use it, it provides the ability to access information and services by minimizing the barriers of distance and cost as well as the accessibility and usability of the interface. In many countries this has led to initiatives, laws and/or regulations that aim toward providing universal access to the internet and to phone systems at reasonable cost to citizens.A major advantage of advanced technology is its flexibility. Some technologies can be used at home, in the workplace, and in school, expanding the ability of the user to participate in various spheres of daily life. Augmentative and alternative communication technology is one such area of IT progress. It includes inventions such as speech-generating devices, teletypewriter devices, adaptive pointing devices to replace computer mouse devices, and many others. Mobile telecommunications devices and computer applications are also equipped with accessibility features. They can be adapted to create accessibility to a range of tasks, and may be suitable for different kinds of disability. 
Disability, information technology (IT) and telecommunications: The following impairments are some of the disabilities that affect communications and technology access, as well as many other life activities: communication disorders; hearing impairments; visual impairments; mobility impairments; a learning disability or impairment in mental functioning. Each kind of disability requires a different kind of accommodation, which may require analysis by a medical specialist, an educational specialist, or a job analysis. Disability, information technology (IT) and telecommunications: Examples of common assistive technologies Mobility impairments One of the first areas where information technology improved the quality of life for disabled individuals is the voice operated wheelchair. Quadriplegics have the most profound disability, and the voice operated wheelchair technology was first developed in 1977 to provide increased mobility. The original version replaced the joystick system with a module that recognized 8 commands. Many other technology accommodation improvements have evolved from this initial development. Missing arms or fingers may make the use of a keyboard and mouse difficult or impossible. Technological improvements such as speech recognition devices and software can improve access. Disability, information technology (IT) and telecommunications: Communication (including speech) impairments A communication disorder interferes with the ability to produce clearly understandable speech. There can be many different causes, such as nerve degeneration, muscle degeneration, stroke, and vocal cord injury. The modern method to deal with speaking disabilities has been to provide a text interface for a speech synthesizer for complete vocal disability. This can be a great improvement for people who have been limited to the use of a throat vibrator to produce speech since the 1960s. Disability, information technology (IT) and telecommunications: Hearing impairment An individual satisfies the definition of hearing disabled when hearing loss is about 30 dB for a single frequency, but this is not always perceptible as a disability. For example, loss of sensitivity in one ear interferes with sound localization (directional hearing), which can interfere with communication in a crowd. This is often recognized when certain words are confused during normal conversation. This can interfere with voice-only interfaces, like automated customer service telephone systems, because it is sometimes difficult to increase the volume and repeat the message. Disability, information technology (IT) and telecommunications: Mild to moderate hearing loss may be accommodated with a hearing aid that amplifies ambient sounds. Portable devices with speech recognition that can produce text can reduce problems associated with understanding conversation. This kind of hearing loss is relatively common, and it often grows worse with age. Disability, information technology (IT) and telecommunications: The modern method to deal with profound hearing disability is the Internet, using email or word processing applications. The telecommunications device for the deaf (TDD) became available in the form of the teletype (TTY) during the 1960s. These devices consist of a keyboard, display and modem that connects two or more of these devices using a dedicated wire or plain old telephone service.
Disability, information technology (IT) and telecommunications: Modern computer animation allows for sign language avatars to be integrated into public areas. This technology could potentially make train station announcements, news broadcasts, etc. accessible when a human interpreter is not available. Visual impairments A wide range of technology products is available to deal with visual impairment. This includes screen magnification for monitors, screen-reading technology for computers and small screen devices, mouse-over speech synthesis browsing, braille displays, braille printers, braille cameras, voice-operated phones, and tablets. One emerging product that will make ordinary computer displays available for the blind is the refreshable tactile display, which is very different from a conventional braille display. This provides a raised surface corresponding to the bright and dim spots on a conventional display. An example is the Touch Sight Camera for the Blind. Disability, information technology (IT) and telecommunications: Speech Synthesis Markup Language (V1.0, released 7 September 2004) and Speech Recognition Grammar Specification (V1.0, released 16 March 2004) are relatively recent technologies intended to standardize communication interfaces using Augmented BNF Form and XML Form. These technologies assist people with visual and physical impairments by providing interactive access to web content without the need to visually observe the content. While these technologies provide access for visually impaired individuals, the primary beneficiary has been automated systems that replace live human customer service representatives who handle telephone calls. Disability, information technology (IT) and telecommunications: Web accessibility International standards and guidelines There have been a few major movements to coordinate a set of guidelines for accessibility for the web. The first and most well known is the Web Accessibility Initiative (WAI), which is part of the World Wide Web Consortium (W3C). This organization developed the Web Content Accessibility Guidelines (WCAG) 1.0 and 2.0, which explain how to make Web content accessible to everyone, including people with disabilities. Web "content" generally refers to the information in a Web page or Web application, including text, images, forms, and sounds. (More specific definitions are available in the WCAG documents.) The WCAG is separated into three levels of compliance, A, AA and AAA. Each level requires a stricter set of conformance guidelines, such as different versions of HTML (Transitional vs Strict) and other techniques that need to be incorporated into coding before accomplishing validation. Online tools allow users to submit their website and automatically run it through the WCAG guidelines and produce a report, stating whether or not they conform to each level of compliance. Adobe Dreamweaver also offers plugins which allow web developers to test these guidelines on their work from within the program. Disability, information technology (IT) and telecommunications: The ISO/IEC JTC1 SC36 WG7 24751 Individualized Adaptability and Accessibility in e-learning, education and training series is freely available and consists of three parts: Individualized Adaptability and Accessibility in e-learning, education and training; Standards inventory; and Guidance on user needs mapping. Disability, information technology (IT) and telecommunications: Another source of web accessibility guidance comes from the US government.
In response to Section 508 of the US Rehabilitation Act, the Access Board developed standards with which U.S. federal agencies must comply in order to make their sites accessible. The U.S. General Services Administration has developed a website where one can take online training courses for free to learn about these rules. Disability, information technology (IT) and telecommunications: Web accessibility features Examples of accessibility features include: WAI-AA compliance with the WAI's WCAG; semantic Web markup; (X)HTML validation from the W3C for the page's content; CSS validation from the W3C for the page's layout; compliance with all guidelines from Section 508 of the US Rehabilitation Act; a high contrast version of the site for individuals with low vision, and a low contrast (yellow or blue) version of the site for individuals with dyslexia; alternative media for any multimedia used on the site (video, flash, audio, etc.); simple and consistent navigation; and device independence. While WCAG provides much technical information for use by web designers, coders and editors, BS 8878:2010 Web accessibility – Code of Practice has been introduced, initially in the UK, to help site owners and product managers to understand the importance of accessibility. It includes advice on the business case behind accessibility, and how organisations might usefully update their policies and production processes to embed accessibility in their business-as-usual. On 28 May 2019, BS 8878 was superseded by ISO 30071-1, the international standard that built on BS 8878 and expanded it for international use. Disability, information technology (IT) and telecommunications: Another useful idea is for websites to include a web accessibility statement on the site. Initially introduced in PAS 78, the best practice for web accessibility statements has been updated in BS 8878 to emphasise the inclusion of: information on how disabled and elderly people could get a better experience of using the website by using assistive technologies or accessibility settings of browsers and operating systems (linking to "BBC My Web My Way" can be useful here); information on what accessibility features the site's creators have included, and whether there are any user needs which the site does not currently support (for example, descriptive video to allow blind people to access the information in videos more easily); and contact details that disabled people can use to let the site creators know if they have any problems in using the site. While validations against WCAG and other accessibility badges can also be included, they should be put lower down the statement, as most disabled people still do not understand these technical terms. Education and accessibility for students: Equal access to education for students with disabilities is supported in some countries by legislation. It is still challenging for some students with disabilities to fully participate in mainstream education settings, but many adaptive technologies and assistive programs are making improvements. In India, the Medical Council of India has now issued directives to all medical institutions to make them accessible to persons with disabilities. This happened due to a petition by Dr Satendra Singh, founder of Infinite Ability. Students with a physical or mental impairment or learning disability may require note-taking assistance, which may be provided by a business offering such services, as with tutoring services.
Talking books in the form of talking textbooks are available in Canadian secondary and post-secondary schools. Also, students may require adaptive technology to access computers and the Internet. These may be tax-exempt expenses in some jurisdictions with a medical prescription. Education and accessibility for students: Accessibility of Assessments It is important to ensure that the accessibility in education includes assessments. Accessibility in testing or assessments entails the extent to which a test and its constituent item set eliminates barriers and permits the test-taker to demonstrate their knowledge of the tested content.With the passage of the No Child Left Behind Act of 2001 in the United States, student accountability in essential content areas such as reading, mathematics, and science has become a major area of focus in educational reform. As a result, test developers have needed to create tests to ensure all students, including those with special needs (e.g., students identified with disabilities), are given the opportunity to demonstrate the extent to which they have mastered the content measured on state assessments. Currently, states are permitted to develop two different types of tests in addition to the standard grade-level assessments to target students with special needs. First, the alternate assessment may be used to report proficiency for up to 1% of students in a state. Second, new regulations permit the use of alternate assessments based on modified academic achievement standards to report proficiency for up to 2% of students in a state. Education and accessibility for students: To ensure that these new tests generate results that allow valid inferences to be made about student performance, they must be accessible to as many people as possible. The Test Accessibility and Modification Inventory (TAMI) and its companion evaluation tool, the Accessibility Rating Matrix (ARM), were designed to facilitate the evaluation of tests and test items with a focus on enhancing their accessibility. Both instruments incorporate the principles of accessibility theory and were guided by research on universal design, assessment accessibility, cognitive load theory, and research on item writing and test development. The TAMI is a non-commercial instrument that has been made available to all state assessment directors and testing companies. Assessment researchers have used the ARM to conduct accessibility reviews of state assessment items for several state departments of education.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Neonatal conjunctivitis** Neonatal conjunctivitis: Neonatal conjunctivitis is a form of conjunctivitis (inflammation of the outer eye) which affects newborn babies following birth. It is typically due to neonatal bacterial infection, although it can also be non-infectious (e.g. chemical exposure). Infectious neonatal conjunctivitis is typically contracted during vaginal delivery from exposure to bacteria from the birth canal, most commonly Neisseria gonorrhoeae or Chlamydia trachomatis.Antibiotic ointment is typically applied to the newborn's eyes within 1 hour of birth as prevention for gonococcal ophthalmia. This practice is recommended for all newborns and most hospitals in the United States are required by state law to apply eye drops or ointment soon after birth to prevent the disease.If left untreated, neonatal conjunctivitis can cause blindness. Signs and symptoms: Neonatal conjunctivitis by definition presents during the first month of life. Signs and symptoms include: Pain and tenderness in the eyeball Conjunctival discharge: purulent, mucoid or mucopurulent (depending on the cause) Conjunctival hyperaemia and chemosis, usually also with swelling of the eyelids Corneal involvement (rare) may occur in herpes simplex ophthalmia neonatorum Time of onset Chemical causes: Right after delivery Neisseria gonorrhoeae: Delivery of the baby until 5 days after birth (early onset) Chlamydia trachomatis: 5 days after birth to 2 weeks (late onset – C. trachomatis has a longer incubation period) Complications Untreated cases may develop corneal ulceration, which may perforate, resulting in corneal opacification and staphyloma formation. Cause: Non-infectious Chemical irritants such as silver nitrate can cause chemical conjunctivitis, usually lasting 2–4 days. Thus, prophylaxis with a 1% silver nitrate solution is no longer in common use. In most countries, neomycin and chloramphenicol eye drops are used, instead. However, newborns can develop neonatal conjunctivitis due to reactions with chemicals in these common eye drops. A blocked tear duct may also be another noninfectious cause of neonatal conjunctivitis. Cause: Infectious The two most common infectious causes of neonatal conjunctivitis are N. gonorrheae and Chlamydia, typically acquired from the birth canal during delivery. However, other different bacteria and viruses can be the cause, including herpes simplex virus (HSV 2), Staphylococcus aureus, Streptococcus pyogenes, and Streptococcus pneumoniae.Ophthalmia neonatorum due to gonococci (N. gonorrhoeae) typically manifests in the first 5 days after birth and is associated with marked bilateral purulent discharge and local inflammation. In contrast, conjunctivitis secondary to infection with C. trachomatis produces conjunctivitis 3 days to 2 weeks after delivery. The discharge is usually more watery (mucopurulent) and less inflamed. Babies infected with chlamydia may develop pneumonitis (chest infection) at a later stage (range 2–19 weeks after delivery). Infants with chlamydia pneumonitis should be treated with oral erythromycin for 10–14 days.Diagnosis is performed after taking swab from the infected conjunctivae. Prevention: Antibiotic ointment is typically applied to the newborn's eyes within 1 hour of birth as prevention against gonococcal ophthalmia. This may be erythromycin, tetracycline, or rarely silver nitrate or Argyrol (mild silver protein). Treatment: Prophylaxis needs antenatal, natal, and postnatal care. 
Antenatal measures include thorough care of the mother and treatment of genital infections when suspected. Natal measures are of utmost importance, as most infection occurs during childbirth. Deliveries should be conducted under hygienic conditions, taking all aseptic measures. The newborn baby's closed lids should be thoroughly cleansed and dried. If the cause is determined to be due to a blocked tear duct, gentle palpation between the eye and the nasal cavity may be used to clear the tear duct. If the tear duct is not cleared by the time the newborn is 1 year old, surgery may be required. Postnatal measures include: use of 1% tetracycline ointment, 0.5% erythromycin ointment, or 1% silver nitrate solution (Credé's method) in the eyes of babies immediately after birth; a single injection of ceftriaxone IM or IV should be given to infants born to mothers with untreated gonococcal infection. Curative treatment: As a rule, conjunctival cytology samples and culture sensitivity swabs should be taken before starting treatment. Chemical ophthalmia neonatorum is a self-limiting condition and does not require any treatment. Treatment: Gonococcal ophthalmia neonatorum needs prompt treatment to prevent complications. Topical therapy should include: saline lavage hourly until the discharge is eliminated; bacitracin eye ointment four times per day (because of resistant strains, topical penicillin therapy is not reliable, but in cases with proven penicillin susceptibility, penicillin drops 5000 to 10000 units per ml should be instilled every minute for half an hour, every five minutes for the next half an hour, and then half-hourly until the infection is controlled). If the cornea is involved, then atropine sulfate ointment should be applied. Treatment: The advice of both the pediatrician and ophthalmologist should be sought for proper management. Systemic therapy: Newborns with gonococcal ophthalmia neonatorum should be treated for 7 days with ceftriaxone, cefotaxime, ciprofloxacin, or crystalline benzyl penicillin. Other bacterial ophthalmia neonatorum should be treated with broad-spectrum antibiotic drops and ointment for 2 weeks. Neonatal inclusion conjunctivitis caused by C. trachomatis should be treated with oral erythromycin. Topical therapy is not effective and also does not treat the infection of the nasopharynx. Herpes simplex conjunctivitis should be treated with intravenous acyclovir for a minimum of 14 days to prevent systemic infection. Epidemiology: The incidence of neonatal conjunctivitis varies widely depending on the geographical location. The incidence in England was 257 (95% confidence interval: 245 to 269) per 100,000 in 2011.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Servant (design pattern)** Servant (design pattern): In software engineering, the servant pattern defines an object used to offer some functionality to a group of classes without defining that functionality in each of them. A Servant is a class whose instance (or even just the class itself) provides methods that take care of a desired service, while the objects for which (or with which) the servant does something are taken as parameters. Description and simple example: Servant is used for providing some behavior to a group of classes. Instead of defining that behavior in each class - or when we cannot factor out this behavior in the common parent class - it is defined once in the Servant. Description and simple example: For example: we have a few classes representing geometric objects (rectangle, ellipse, and triangle). We can draw these objects on some canvas. When we need to provide a "move" method for these objects, we could implement this method in each class, or we can define an interface they implement and then offer the "move" functionality in a servant. An interface is defined to ensure that serviced classes have the methods that the servant needs to provide the desired behavior. Continuing our example, we define an interface "Movable" specifying that every class implementing it needs to implement the methods "getPosition" and "setPosition". The first method gets the position of an object on a canvas and the second one sets the position of an object and draws it on a canvas. Then we define a servant class "MoveServant", which has two methods, "moveTo(Movable movedObject, Position where)" and "moveBy(Movable movedObject, int dx, int dy)". The servant class can now be used to move every object which implements Movable. Thus the "moving" code appears in only one class, which respects the "Separation of Concerns" rule. Two ways of implementation: There are two ways to implement this design pattern. The user knows the servant (in which case they do not need to know the serviced classes) and sends messages with their requests to the servant instances, passing the serviced objects as parameters. Two ways of implementation: The serviced classes (the geometric objects from our example) do not know about the servant, but they implement the "IServiced" interface. The user class just calls the method of the servant and passes the serviced objects as parameters. This situation is shown in figure 1. Alternatively, the serviced instances know the servant and the user sends them messages with their requests (in which case the user does not have to know the servant). The serviced instances then send messages to the instances of the servant, asking for service. Two ways of implementation: Figure 2 shows the opposite situation, where the user does not know about the servant class and calls the serviced classes directly. The serviced classes then ask the servant themselves to achieve the desired functionality. How to implement Servant: Analyze what behavior the servant should take care of. State what methods the servant will define and what these methods will need from the serviced parameter; in other words, what the serviced instance must provide so that the servant's methods can achieve their goals. Analyze what abilities the serviced classes must have so they can be properly serviced. We define an interface which will enforce the implementation of the declared methods. Define an interface specifying the requested behavior of serviced objects; if an instance wants to be served by the servant, it must implement this interface. Define (or acquire somehow) the specified servant (its class).
Implement the defined interface in the serviced classes. Example: This simple Java example shows the situation described above (a minimal sketch is given at the end of this article, after the Resources entry). The example is only illustrative and does not offer any actual drawing of geometric objects, nor a specification of what they look like. Similar design pattern: Command: The Command and Servant design patterns are very similar, and implementations of them are often virtually the same. The difference between them is the approach to the problem. For the Servant pattern, we have some objects to which we want to offer some functionality. We create a class whose instances offer that functionality, and which defines an interface that serviced objects must implement. Serviced instances are then passed as parameters to the servant. Similar design pattern: Command: For the Command pattern, we have some objects that we want to modify with some functionality. So, we define an interface that commands implementing the desired functionality must implement. Instances of those commands are then passed to the original objects as parameters of their methods. Even though the Command and Servant design patterns are similar, the two do not always coincide. There are a number of situations where use of the Command pattern does not relate to the Servant pattern. In these situations we usually need to pass to the called method just a reference to another method, which it will need to accomplish its goal. Since we cannot pass references to methods in many languages, we have to pass an object implementing an interface which declares the signature of the passed method. Resources: Pecinovský, Rudolf; Jarmila Pavlíčková; Luboš Pavlíček (June 2006). Let's Modify the Objects First Approach into Design Patterns First (PDF). Eleventh Annual Conference on Innovation and Technology in Computer Science Education, University of Bologna.
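The Java example referenced above is not reproduced in the text, so the following is a minimal sketch of the first variant (the user knows the servant and passes serviced objects to it). The Movable and MoveServant names follow the article's description; the Position class and the console output standing in for real canvas drawing are simplifications assumed for illustration.

```java
// A minimal, illustrative sketch of the Servant pattern described above.
public class ServantDemo {

    // Simple value object for a position on the canvas (assumed for illustration).
    static final class Position {
        final int x;
        final int y;
        Position(int x, int y) { this.x = x; this.y = y; }
        @Override public String toString() { return "(" + x + ", " + y + ")"; }
    }

    // Interface the serviced classes must implement so the servant can work with them.
    interface Movable {
        Position getPosition();
        void setPosition(Position position); // sets the position and (re)draws the object
    }

    // The servant: the "moving" behaviour is defined once, for every Movable.
    static class MoveServant {
        void moveTo(Movable movedObject, Position where) {
            movedObject.setPosition(where);
        }
        void moveBy(Movable movedObject, int dx, int dy) {
            Position p = movedObject.getPosition();
            movedObject.setPosition(new Position(p.x + dx, p.y + dy));
        }
    }

    // One of the serviced geometric classes; it knows nothing about MoveServant.
    static class Rectangle implements Movable {
        private Position position = new Position(0, 0);
        public Position getPosition() { return position; }
        public void setPosition(Position position) {
            this.position = position;
            System.out.println("Rectangle drawn at " + position);
        }
    }

    public static void main(String[] args) {
        MoveServant servant = new MoveServant();
        Rectangle rectangle = new Rectangle();
        servant.moveTo(rectangle, new Position(10, 20)); // the user knows the servant (figure 1)
        servant.moveBy(rectangle, 5, -5);
    }
}
```

In the second variant described above, the geometric classes would instead hold a reference to the MoveServant and forward the user's requests to it themselves.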
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Coursework** Coursework: Coursework (also course work, especially British English) is work performed by students or trainees for the purpose of learning. Coursework may be specified and assigned by teachers, or by learning guides in self-taught courses. Coursework can encompass a wide range of activities, including practice, experimentation, research, and writing (e.g., dissertations, book reports, and essays). In the case of students at universities, high schools and middle schools, coursework is often graded and the scores are combined with those of separately assessed exams to determine overall course scores. In contrast to exams, students may be allotted several days or weeks to complete coursework, and are often allowed to use text books, notes, and the Internet for research.In universities, students are usually required to perform coursework to broaden knowledge, enhance research skills, and demonstrate that they can discuss, reason and construct practical outcomes from learned theoretical knowledge. Sometimes coursework is performed by a group so that students can learn both how to work in groups and from each other. Plagiarism and other problems: Plagiarism and copying can be problematic in graded coursework. Easily accessible websites have given students opportunities to copy ideas and even complete essays, and remain undetected despite measures to detect this. While coursework may give learners the chance to improve their grades, it also provides an opportunity to "cheat the system". Also, there is often controversy regarding the type and amount of help students can receive while completing coursework. In most learning institutions, plagiarism or unreasonable coursework help may lead to coursework disqualification, student expulsion, or both. Plagiarism and other problems: UK GCSE coursework Coursework was removed from UK GCSE courses and replaced by "Controlled Assessment", much of which must be completed under exam conditions, without teacher assistance and with access to resources tightly controlled in order to reduce the possibility of cheating. However, this too has been largely removed and replaced by mainly exam-based assessment as part of a general GCSE reform.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**XDCAM** XDCAM: XDCAM is a series of products for digital recording using random-access optical disc and solid-state memory media, introduced by Sony in 2003. Four different product lines – XDCAM SD, XDCAM HD, XDCAM EX and XDCAM HD422 – differ in the type of encoder used, frame size, container type and recording media. XDCAM: None of the later products have made the earlier product lines obsolete. Sony maintains that the different formats within the XDCAM family have been designed to meet different applications and budget constraints. The XDCAM range includes cameras and decks which act as drop-in replacements for traditional VTRs, allowing XDCAM discs to be used within a traditional videotape-based workflow. These decks can also serve as random-access computer hard drives for easy import of the video data files into non-linear editing systems (NLE) via FireWire (IEEE 1394) and Ethernet. XDCAM: In September 2008, JVC announced its alliance with Sony to support the XDCAM EX format. In August 2009, Convergent Design began shipping the nanoFlash Portable Recorder, which uses the Sony XDCAM HD422 codec. In November 2012, VITEC began shipping the FS-T2001 Portable Recorder, which uses the Sony XDCAM HD422 and XDCAM HD codecs. Compression methods: The XDCAM format uses multiple video compression methods and media container formats. Video is recorded with DV, MPEG-2 Part 2 or MPEG-4 compression schemes. DV is used for standard-definition video, MPEG-2 is used for both standard- and high-definition video, while MPEG-4 is used for proxy video. Audio is recorded in uncompressed PCM form for all formats except proxy video, which uses A-law compression. Compression methods: Equipment that uses Professional Disc, as well as XDCAM 4:2:2 on SxS cards, as its recording media employs the MXF container to store digital audio/video streams. Tapeless camcorders that record onto solid-state memory cards use the MP4 container for high-definition audio/video, and the DV-AVI container for DV video. JVC camcorders that use the XDCAM EX recording format are also capable of recording into the QuickTime container besides the MP4 container. Recording formats: DVCAM uses standard DV encoding, which runs at 25 Mbit/s and is compatible with most editing systems. Some camcorders that allow DVCAM recording can record progressive-scan video. Recording formats: MPEG IMX allows recording in standard definition, using MPEG-2 encoding at a data rate of 30, 40 or 50 megabits per second. Unlike most other MPEG-2 implementations, IMX uses intraframe compression, with each frame having exactly the same size in bytes to simplify recording onto video tape. Sony claims that at 50 Mbit/s it offers visual quality comparable to Digital Betacam. MPEG IMX is not supported in the XDCAM EX product line. Recording formats: MPEG HD is used in all product lines except for XDCAM SD. This format supports multiple frame sizes, frame rates, scanning types and quality modes. Depending on the product line or the particular model, not all modes of this format may be available. MPEG HD422 doubles the horizontal chroma resolution compared to previous generations of high-definition XDCAM formats. To accommodate the improved chroma detail, the video bitrate has been increased to 50 Mbit/s. This format is used only in XDCAM HD422 products. MPEG SHD422: XDCAM-SHD422 stands for "Super HD" and was introduced later to preserve more detail. It maintains the 4:2:2 planar chroma sampling as well as the same resolution as MPEG HD422, but increases the bitrate to 85 Mbit/s.
This format has never become widely used, and only a very limited set of devices support it. Proxy AV is used to record low-resolution proxy videos. This format employs MPEG-4 video encoding at 1.5 Mbit/s (CIF resolution) with 64 kbit/s (8 kHz A-law, ISDN quality) for each audio channel. (The original table of XDCAM formats is omitted here; its footnotes state that 720p at 19 Mbit/s is offered by JVC and is equivalent to HDV 720p, and that XDCAM-SHD422 has very limited support.) Recording media: Professional Disc (XDCAM and XDCAM HD) The Professional Disc was chosen by Sony as its medium for professional non-linear video acquisition for a number of reasons, outlined in its white paper Why Sony Adopted Professional Disc. The disc is similar to a Blu-ray Disc and holds either 23 GB of data (PFD23, single-layer, rewritable), 50 GB (PFD50, dual-layer, rewritable), 100 GB (PFD100TLA, triple-layer, rewritable) or 128 GB (PFD128QLW, quad-layer, write-once). Recording media: Essentially, the Professional Disc format was deemed a suitable, cost-effective and easy step forward. The discs are reliable and robust, and suitable for field work (something which had previously been a problem with many disc-based systems). Additionally, the cost of the media is comparable to existing professional formats. SxS: In 2008, Sony introduced a new recording medium to the XDCAM range – SxS Pro (pronounced "S-by-S"), a solid-state memory card implemented as an ExpressCard module. The first camera to use this medium was the Sony PMW-EX1 professional video camera. In December 2009, Sony introduced the more affordable SxS-1. This card is designed to have the same performance as the SxS Pro card; however, its life expectancy is shorter, at an estimated 5 years when used every day to the card's full capacity. In early 2013, Sony introduced SxS Pro+ cards, which offer 1.2 Gbit/s read and write speeds to support 4K 60p acquisition on the PMW-F55. Memory Stick: Memory Stick cards can be used in Sony XDCAM EX camcorders via the MEAD-MS01 adapter. Secure Digital: Secure Digital memory cards can be used in Sony XDCAM EX camcorders via the MEAD-SD01 adapter. JVC camcorders that record in the XDCAM EX format use Secure Digital memory cards natively. XQD: XQD memory cards can be used in Sony XDCAM EX camcorders via the QDA-EX1 ExpressCard adapter.
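As a rough illustration of how the bitrates quoted above relate to media capacity, the sketch below estimates recording time per Professional Disc from a nominal video bitrate and disc size. The figures used (23 GB disc, 50 Mbit/s and 25 Mbit/s video) are taken from this article; the calculation is an assumption-laden approximation that ignores audio, proxy streams and filesystem overhead, so real-world recording times will be somewhat lower.

```java
// Rough recording-time estimate from nominal disc capacity and video bitrate.
// Audio, proxy video and container overhead are deliberately ignored.
public class XdcamCapacityEstimate {
    // capacityGB in decimal gigabytes, videoMbitPerSec in megabits per second.
    static double minutes(double capacityGB, double videoMbitPerSec) {
        double capacityMbit = capacityGB * 8 * 1000;   // 1 GB ≈ 8000 Mbit (decimal units)
        return capacityMbit / videoMbitPerSec / 60.0;  // seconds of video, converted to minutes
    }

    public static void main(String[] args) {
        // Single-layer 23 GB Professional Disc with 50 Mbit/s video (MPEG HD422 / IMX 50):
        System.out.printf("23 GB @ 50 Mbit/s ~ %.0f min%n", minutes(23, 50));  // ~61 min
        // Same disc with 25 Mbit/s DV (DVCAM) video:
        System.out.printf("23 GB @ 25 Mbit/s ~ %.0f min%n", minutes(23, 25));  // ~123 min
    }
}
```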
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ultra-Black** Ultra-Black: Ultra-black is one of the darkest shades of the color black. An ultra-black substance is defined as reflecting less than 0.5% of the light that hits its surface. This color is part of the natural coloration of some species of birds-of-paradise, butterflies, and fishes, and ultra-black components are used in telescopes, cameras, and solar panels to improve the efficiency of light capture. Discovery in Fishes: The first recorded instance of ultra-black coloration being discovered in a species of fish occurred in 2020, when a group of researchers were examining fishes caught in trawls during research cruises in Monterey Bay, California, and the Gulf of Mexico. A total of 16 out of the 18 species caught in these trawls were found to have skin that reflected less than 0.5% of the light that hit it and that could thus be termed ultra-black. The specimen with the darkest skin, an anglerfish belonging to the genus Oneirodes, also tied with some species of birds-of-paradise for the darkest pigment of any animal, reflecting only 0.044% of the light that hit it. This ultra-black skin may serve several purposes depending on the biology, preferred food sources, and predators of each species. While most of these species likely use this coloration as camouflage to hide from predators, some of them, including fish that attract prey using bioluminescent lures like Astronesthes micropogon and Oneirodes sp., could potentially use ultra-black skin to catch prey unawares and prevent themselves from being seen in the light from their own lures. In some cases, ultra-black skin might also serve to block light that the fish does not want to emit, with ultra-black skin over the gut potentially blocking light emitted by bioluminescent prey while it is being digested. Discovery in Fishes: Mechanism The ultra-black coloration of these deep-sea fishes is due to a pigment called melanin, the same pigment that gives human skin its coloration; the skin of these fishes is so much darker than human skin because of both the amount of melanin present and the arrangement of that melanin. The skin of the deep-sea fishes contains one layer filled with small organelles called melanosomes that contain melanin, and in these fishes the melanosomes are both larger and more abundant than in other animals, with very few gaps, resulting in a solid layer of pigment. The high concentration of the pigment is augmented by the melanosomes being aligned in a way that scatters incoming light sideways into other melanosomes rather than reflecting it directly back, which in turn increases the amount of pigment the light hits before it is reflected out of the fishes' skin, ultimately reducing the amount of light that leaves the skin. This has a significant effect on reducing the fishes' visibility, and it is estimated that reducing the amount of light that a deep-sea fish reflects from 2% to 0.05% reduces the distance at which the fish can be seen by 84%. What distinguishes these fishes from other animals with ultra-black coloration, such as butterflies and birds-of-paradise, is that those animals have structures that capture light and direct it into the melanin in their skin, while the fishes lack such structures and rely solely on the pigment in their melanosomes to absorb incoming light.
Reproducing the mechanism these fishes use to absorb light has industrial applications: the ultra-black products humans currently make rely on carbon nanotubes, which are very delicate, to trap light, and replacing these nanotubes with a pigment-based system would improve the durability of current products and open the door to new applications.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded