**CP-39,332** CP-39,332: CP-39,332 is a drug which acts as a serotonin-norepinephrine reuptake inhibitor. Tametraline (1R,4S-), CP-24,442 (1S,4R-), CP-22,185 (cis-), and CP-22,186 (trans-) are stereoisomers of the compound and show varying effects on monoamine reuptake. None of them were ever marketed.
**Diphenhydramine** Diphenhydramine: Diphenhydramine (DPH) is an antihistamine and sedative mainly used to treat allergies, insomnia, and symptoms of the common cold. It is also less commonly used for tremor in parkinsonism, and nausea. It is taken by mouth, injected into a vein, injected into a muscle, or applied to the skin. Maximal effect is typically around two hours after a dose, and effects can last for up to seven hours.Common side effects include sleepiness, poor coordination and an upset stomach. Its use is not recommended in young children or the elderly. There is no clear risk of harm when used during pregnancy; however, use during breastfeeding is not recommended. It is a first-generation H1-antihistamine and it works by blocking certain effects of histamine, which produces its antihistamine and sedative effects. Diphenhydramine is also a potent anticholinergic, which means it also works as a deliriant at much higher than recommended doses as a result. Its sedative and deliriant effects have led to some cases of recreational use.Diphenhydramine was first made by George Rieveschl and came into commercial use in 1946. It is available as a generic medication. It is sold under the brand name Benadryl, among others. In 2020, it was the 192nd most commonly prescribed medication in the United States, with more than 2 million prescriptions. Medical uses: Diphenhydramine is a first-generation antihistamine used to treat a number of conditions including allergic symptoms and itchiness, the common cold, insomnia, motion sickness, and extrapyramidal symptoms. Diphenhydramine also has local anesthetic properties, and has been used as such in people allergic to common local anesthetics such as lidocaine. Medical uses: Allergies Diphenhydramine is effective in treatment of allergies. As of 2007, it was the most commonly used antihistamine for acute allergic reactions in the emergency department.By injection it is often used in addition to epinephrine for anaphylaxis, although as of 2007 its use for this purpose had not been properly studied. Its use is only recommended once acute symptoms have improved. Medical uses: Topical formulations of diphenhydramine are available, including creams, lotions, gels, and sprays. These are used to relieve itching and have the advantage of causing fewer systemic effects (e.g., drowsiness) than oral forms. Movement disorders Diphenhydramine is used to treat akathisia and Parkinson's disease–like extrapyramidal symptoms caused by antipsychotics. It is also used to treat acute dystonia including torticollis and oculogyric crisis caused by first generation antipsychotics. Medical uses: Sleep Because of its sedative properties, diphenhydramine is widely used in nonprescription sleep aids for insomnia. The drug is an ingredient in several products sold as sleep aids, either alone or in combination with other ingredients such as acetaminophen (paracetamol) in Tylenol PM and ibuprofen in Advil PM. Diphenhydramine can cause minor psychological dependence. Diphenhydramine has also been used as an anxiolytic.Diphenhydramine has also been used off prescription by parents in an attempt to make their children sleep and to sedate them on long-distance flights. This has been met with criticism, both by doctors and by members of the airline industry, because sedating passengers may put them at risk if they cannot react efficiently to emergencies, and because the drug's side effects, especially the chance of a paradoxical reaction, may make some users hyperactive. 
Addressing such use, the Seattle Children's hospital argued, in a 2009 article, "Using a medication for your convenience is never an indication for medication in a child."The American Academy of Sleep Medicine's 2017 clinical practice guidelines recommended against the use of diphenhydramine in the treatment of insomnia, because of poor effectiveness and low quality of evidence. A major systematic review and network meta-analysis of medications for the treatment of insomnia published in 2022 found little evidence to inform the use of diphenhydramine for insomnia. Medical uses: Nausea Diphenhydramine also has antiemetic properties, which make it useful in treating the nausea that occurs in vertigo and motion sickness. However, when taken above recommended doses, it can cause nausea (especially above 200 mg). Medical uses: Special populations Diphenhydramine is not recommended for people older than 60 and children younger than six, unless a physician is consulted. These people should be treated with second-generation antihistamines, such as loratadine, desloratadine, fexofenadine, cetirizine, levocetirizine, and azelastine. Because of its strong anticholinergic effects, diphenhydramine is on the Beers list of drugs to avoid in the elderly.Diphenhydramine is excreted in breast milk. It is expected that low doses of diphenhydramine taken occasionally will cause no adverse effects in breastfed infants. Large doses and long-term use may affect the baby or reduce breast milk supply, especially when combined with sympathomimetic drugs, such as pseudoephedrine, or before the establishment of lactation. A single bedtime dose after the last feeding of the day may minimize harmful effects of the medication on the baby and on the milk supply. Still, non-sedating antihistamines are preferred.Paradoxical reactions to diphenhydramine have been documented, particularly in children, and it may cause excitation instead of sedation.Topical diphenhydramine is sometimes used especially for people in hospice. This use is without indication and topical diphenhydramine should not be used as treatment for nausea because research has not shown that this therapy is more effective than others.There were no documented cases of clinically apparent acute liver injury caused by normal doses of diphenhydramine. Adverse effects: The most prominent side effect is sedation. A typical dose creates driving impairment equivalent to a blood-alcohol level of 0.10, which is higher than the 0.08 limit of most drunk-driving laws.Diphenhydramine is a potent anticholinergic agent and potential deliriant in higher doses. This activity is responsible for the side effects of dry mouth and throat, increased heart rate, pupil dilation, urinary retention, constipation, and, at high doses, hallucinations or delirium. Other side effects include motor impairment (ataxia), flushed skin, blurred vision at nearpoint owing to lack of accommodation (cycloplegia), abnormal sensitivity to bright light (photophobia), sedation, difficulty concentrating, short-term memory loss, visual disturbances, irregular breathing, dizziness, irritability, itchy skin, confusion, increased body temperature (in general, in the hands and/or feet), temporary erectile dysfunction, and excitability, and although it can be used to treat nausea, higher doses may cause vomiting. 
Diphenhydramine in overdose may occasionally result in QT prolongation.Some individuals experience an allergic reaction to diphenhydramine in the form of hives.Conditions such as restlessness or akathisia can worsen from increased levels of diphenhydramine, especially with recreational dosages. Normal doses of diphenhydramine, like other first generation antihistamines, can also make symptoms of restless legs syndrome worse. As diphenhydramine is extensively metabolized by the liver, caution should be exercised when giving the drug to individuals with hepatic impairment. Adverse effects: Anticholinergic use later in life is associated with an increased risk for cognitive decline and dementia among older people. Contraindications: Diphenhydramine is contraindicated in premature infants and neonates, as well as people who are breastfeeding. It is a pregnancy Category B drug. Diphenhydramine has additive effects with alcohol and other CNS depressants. Monoamine oxidase inhibitors prolong and intensify the anticholinergic effect of antihistamines. Overdose: Diphenhydramine is one of the most commonly misused over-the-counter drugs in the United States. In cases of extreme overdose, if not treated in time, acute diphenhydramine poisoning may have serious and potentially fatal consequences. Overdose symptoms may include: Acute poisoning can be fatal, leading to cardiovascular collapse and death in 2–18 hours, and in general is treated using a symptomatic and supportive approach. Diagnosis of toxicity is based on history and clinical presentation, and in general precise plasma levels do not appear to provide useful relevant clinical information. Several levels of evidence strongly indicate diphenhydramine (similar to chlorpheniramine) can block the delayed rectifier potassium channel and, as a consequence, prolong the QT interval, leading to cardiac arrhythmias such as torsades de pointes. Overdose: No specific antidote for diphenhydramine toxicity is known, but the anticholinergic syndrome has been treated with physostigmine for severe delirium or tachycardia. Benzodiazepines may be administered to decrease the likelihood of psychosis, agitation, and seizures in people who are prone to these symptoms. Interactions: Alcohol may increase the drowsiness caused by diphenhydramine. Pharmacology: Pharmacodynamics Diphenhydramine, while traditionally known as an antagonist, acts primarily as an inverse agonist of the histamine H1 receptor. It is a member of the ethanolamine class of antihistaminergic agents. By reversing the effects of histamine on the capillaries, it can reduce the intensity of allergic symptoms. It also crosses the blood–brain barrier and inversely agonizes the H1 receptors centrally. Its effects on central H1 receptors cause drowsiness. Pharmacology: Diphenhydramine is a potent antimuscarinic (a competitive antagonist of muscarinic acetylcholine receptors) and, as such, at high doses can cause anticholinergic syndrome. The utility of diphenhydramine as an antiparkinson agent is the result of its blocking properties on the muscarinic acetylcholine receptors in the brain. Pharmacology: Diphenhydramine also acts as an intracellular sodium channel blocker, which is responsible for its actions as a local anesthetic. Diphenhydramine has also been shown to inhibit the reuptake of serotonin. It has been shown to be a potentiator of analgesia induced by morphine, but not by endogenous opioids, in rats. 
The drug has also been found to act as an inhibitor of histamine N-methyltransferase (HNMT). Pharmacology: Pharmacokinetics Oral bioavailability of diphenhydramine is in the range of 40% to 60%, and peak plasma concentration occurs about 2 to 3 hours after administration.The primary route of metabolism is two successive demethylations of the tertiary amine. The resulting primary amine is further oxidized to the carboxylic acid. Diphenhydramine is metabolized by the cytochrome P450 enzymes CYP2D6, CYP1A2, CYP2C9, and CYP2C19.The elimination half-life of diphenhydramine has not been fully elucidated, but appears to range between 2.4 and 9.3 hours in healthy adults. A 1985 review of antihistamine pharmacokinetics found that the elimination half-life of diphenhydramine ranged between 3.4 and 9.3 hours across five studies, with a median elimination half-life of 4.3 hours. A subsequent 1990 study found that the elimination half-life of diphenhydramine was 5.4 hours in children, 9.2 hours in young adults, and 13.5 hours in the elderly. A 1998 study found a half-life of 4.1 ± 0.3 hours in young men, 7.4 ± 3.0 hours in elderly men, 4.4 ± 0.3 hours in young women, and 4.9 ± 0.6 hours in elderly women. In a 2018 study in children and adolescents, the half-life of diphenhydramine was 8 to 9 hours. Chemistry: Diphenhydramine is a diphenylmethane derivative. Analogues of diphenhydramine include orphenadrine, an anticholinergic, nefopam, an analgesic, and tofenacin, an antidepressant. Chemistry: Detection in body fluids Diphenhydramine can be quantified in blood, plasma, or serum. Gas chromatography with mass spectrometry (GC-MS) can be used with electron ionization on full scan mode as a screening test. GC-MS or GC-NDP can be used for quantification. Rapid urine drug screens using immunoassays based on the principle of competitive binding may show false-positive methadone results for people having ingested diphenhydramine. Quantification can be used to monitor therapy, confirm a diagnosis of poisoning in people who are hospitalized, provide evidence in an impaired driving arrest, or assist in a death investigation. History: Diphenhydramine was discovered in 1943 by George Rieveschl, a former professor at the University of Cincinnati. In 1946, it became the first prescription antihistamine approved by the U.S. FDA.In the 1960s, diphenhydramine was found to weakly inhibit reuptake of the neurotransmitter serotonin. This discovery led to a search for viable antidepressants with similar structures and fewer side effects, culminating in the invention of fluoxetine (Prozac), a selective serotonin reuptake inhibitor (SSRI). A similar search had previously led to the synthesis of the first SSRI, zimelidine, from brompheniramine, also an antihistamine. Society and culture: Diphenhydramine is deemed to have limited abuse potential in the United States owing to its potentially serious side-effect profile and limited euphoric effects, and is not a controlled substance. Since 2002, the U.S. FDA has required special labeling warning against use of multiple products that contain diphenhydramine. In some jurisdictions, diphenhydramine is often present in postmortem specimens collected during investigation of sudden infant deaths; the drug may play a role in these events.Diphenhydramine is among prohibited and controlled substances in the Republic of Zambia, and travelers are advised not to bring the drug into the country. 
Several Americans have been detained by the Zambian Drug Enforcement Commission for possession of Benadryl and other over-the-counter medications containing diphenhydramine. Society and culture: Recreational use Although diphenhydramine is widely used and generally considered to be safe for occasional usage, multiple cases of abuse and addiction have been documented. Because the drug is cheap and sold over the counter in most countries, adolescents without access to more sought-after illicit drugs, are particularly at risk. People with mental health problems—especially those with schizophrenia—are also prone to abuse the drug, which is self-administered in large doses to treat extrapyramidal symptoms caused by the use of antipsychotics.Recreational users report calming effects, mild euphoria, and hallucinations as the desired effects of the drug. Research has shown that antimuscarinic agents, including diphenhydramine, "may have antidepressant and mood-elevating properties". A study conducted on adult males with a history of sedative abuse found that subjects who were administered a high dose (400 mg) of diphenhydramine reported a desire to take the drug again, despite also reporting negative effects, such as difficulty concentrating, confusion, tremors, and blurred vision.In 2020, an Internet challenge emerged on social media platform TikTok involving deliberately overdosing on diphenhydramine; dubbed the Benadryl challenge, the challenge encourages participants to consume dangerous amounts of Benadryl for the purpose of filming the resultant psychoactive effects, and has been implicated in several hospitalisations and at least two deaths. Society and culture: Names Diphenhydramine is sold under the brand name Benadryl by McNeil Consumer Healthcare in the US, Canada, and South Africa. Trade names in other countries include Dimedrol, Daedalon, and Nytol. It is also available as a generic medication. Procter & Gamble markets an over-the-counter formulation of diphenhydramine as a sleep aid under the brand ZzzQuil.
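The elimination half-lives quoted under Pharmacokinetics can be turned into intuition with the standard first-order decay relation, fraction remaining = 0.5^(t / t½). The sketch below uses two of the half-life values from the 1990 study cited above; the 24-hour horizon is only an illustrative choice, not a figure from the article.

```python
# First-order elimination: fraction of a dose remaining after t hours.
# The half-lives below are two of the values quoted in the Pharmacokinetics
# section (the 1990 study); the 24-hour horizon is an arbitrary illustration.
def fraction_remaining(hours: float, half_life_hours: float) -> float:
    return 0.5 ** (hours / half_life_hours)

for group, t_half in [("young adults", 9.2), ("elderly", 13.5)]:
    left = fraction_remaining(24, t_half)
    print(f"{group} (t1/2 = {t_half} h): ~{left:.0%} of a dose left after 24 h")
# roughly 16% remains for a 9.2 h half-life versus 29% for a 13.5 h half-life
```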
**Microcystinase** Microcystinase: Microcystinase is a protease that selectively degrades microcystin, an extremely potent cyanotoxin that causes marine pollution and poisoning of the human and animal food chain. The enzyme is naturally produced by a number of bacteria isolated in Japan and New Zealand. As of 2012, the chemical structure of this enzyme had not been scientifically determined. The enzyme degrades the cyclic peptide toxin microcystin into a linear peptide, which is 160 times less toxic. Other bacteria then further degrade the linear peptide.
**Trifluoromethylphenylpiperazine** Trifluoromethylphenylpiperazine: 3-Trifluoromethylphenylpiperazine (TFMPP) is a recreational drug of the phenylpiperazine chemical class and is a substituted piperazine. Usually in combination with benzylpiperazine (BZP) and other analogues, it is sold as an alternative to the illicit drug MDMA ("Ecstasy"). Pharmacology: TFMPP has affinity for the 5-HT1A (Ki = 288 nM), 5-HT1B (Ki = 132 nM), 5-HT1D (Ki = 282 nM), 5-HT2A (Ki = 269 nM), and 5-HT2C (Ki = 62 nM) receptors, and functions as a full agonist at all sites except the 5-HT2A receptor, where it acts as a weak partial agonist or antagonist. Unlike the related piperazine compound meta-chlorophenylpiperazine (mCPP), TFMPP has insignificant affinity for the 5-HT3 receptor (IC50 = 2,373 nM). TFMPP also binds to the SERT (EC50 = 121 nM) and evokes the release of serotonin. It has no effects on dopamine or norepinephrine reuptake or efflux. Use and effects: TFMPP is rarely used by itself. In fact, TFMPP reduces locomotor activity and produces aversive effects in animals rather than self-administration, which may explain the decision of the DEA not to permanently make TFMPP a controlled substance. More commonly, TFMPP is co-administered with BZP, which acts as a norepinephrine and dopamine releasing agent. Due to the serotonin agonist effects and increase in serotonin, norepinephrine, and dopamine levels produced by the BZP/TFMPP combination, this mixture of drugs produces effects which crudely mimic those of MDMA. Side effects: The combination of BZP and TFMPP has been associated with a range of side effects, including insomnia, anxiety, nausea and vomiting, headaches and muscle aches which may resemble migraine, seizures, impotence, and rarely psychosis, as well as a prolonged and unpleasant hangover effect. These side effects tend to be significantly worsened when the BZP/TFMPP mix is consumed alongside alcohol, especially the headache, nausea, and hangover. Side effects: However, it is difficult to say how many of these side effects are produced by TFMPP itself, as it has rarely been marketed without BZP also being present, and all of the side effects mentioned are also produced by BZP (which has been sold as a single drug). Studies into other related piperazine drugs such as mCPP suggest that certain side effects such as anxiety, headache and nausea are common to all drugs of this class, and pills containing TFMPP are reported by users to produce comparatively more severe hangover effects than those containing only BZP. The drug can also cause the body to tremble for a long period of time. Legal status: Canada Since 2012, TFMPP has been listed as a Schedule III controlled substance in Canada, making possession of TFMPP a federal offence. It has also been added to Part J of the Food and Drug Regulations thereby prohibiting the production, export or import of the substance. China As of October 2015 TFMPP is a controlled substance in China. Finland Scheduled in government decree on psychoactive substances banned from the consumer market. Denmark As of December 3, 2005, TFMPP is illegal in Denmark. Japan Since 2003, TFMPP and BZP became illegal in Japan. Netherlands TFMPP is unscheduled in the Netherlands. Legal status: New Zealand Based on the recommendation of the EACD, the New Zealand government has passed legislation which placed BZP, along with the other piperazine derivatives TFMPP, mCPP, pFPP, MeOPP and MBZP, into Class C of the New Zealand Misuse of Drugs Act 1975. 
A ban was intended to come into effect in New Zealand on December 18, 2007, but the law change did not go through until the following year, and the sale of BZP and the other listed piperazines became illegal in New Zealand as of 1 April 2008. An amnesty for possession and usage of these drugs remained until October 2008, at which point they became completely illegal. Legal status: Sweden As of March 1, 2006, TFMPP is scheduled as a "dangerous substance" in Sweden. Switzerland As of December 1, 2010, TFMPP is a controlled substance in Switzerland. United Kingdom As of December 2009, TFMPP has been made a Class C drug in the United Kingdom along with BZP. United States TFMPP is not currently scheduled at the federal level in the United States, but it was briefly emergency scheduled in Schedule I. The scheduling expired in April 2004 and was not renewed. However, some states such as Florida have banned the drug in their criminal statutes making its possession a felony. Florida TFMPP is a Schedule I controlled substance in the state of Florida making it illegal to buy, sell, or possess in Florida. Texas TFMPP is controlled in Texas under Penalty Group 2, as a hallucinogenic substance. It is illegal to possess TFMPP in any quantity in Texas.
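The receptor affinities quoted under Pharmacology are easier to compare on a logarithmic scale. The sketch below converts the nanomolar Ki values into pKi (pKi = -log10 of Ki in mol/L), a standard transformation; the dictionary simply restates the figures given above, and higher pKi means tighter binding.

```python
import math

# Convert the nanomolar Ki values quoted under Pharmacology to pKi.
ki_nM = {
    "5-HT1A": 288, "5-HT1B": 132, "5-HT1D": 282,
    "5-HT2A": 269, "5-HT2C": 62,
}

for receptor, ki in sorted(ki_nM.items(), key=lambda kv: kv[1]):
    pki = -math.log10(ki * 1e-9)   # Ki in nM -> mol/L -> pKi
    print(f"{receptor}: Ki = {ki} nM -> pKi = {pki:.2f}")
# 5-HT2C (62 nM, pKi ~7.2) is the highest-affinity site of those listed.
```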
**Blasticidin-S deaminase** Blasticidin-S deaminase: In enzymology, a blasticidin-S deaminase (EC 3.5.4.23) is an enzyme that catalyzes the chemical reaction blasticidin S + H2O ⇌ deaminohydroxyblasticidin S + NH3. Thus, the two substrates of this enzyme are blasticidin S and H2O, whereas its two products are deaminohydroxyblasticidin S and NH3. This enzyme belongs to the family of hydrolases, those acting on carbon-nitrogen bonds other than peptide bonds, specifically in cyclic amidines. The systematic name of this enzyme class is blasticidin-S aminohydrolase. Structural studies: As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1WN5 and 1WN6.
**Aluminium-conductor steel-reinforced cable** Aluminium-conductor steel-reinforced cable: Aluminium conductor steel-reinforced cable (ACSR) is a type of high-capacity, high-strength stranded conductor typically used in overhead power lines. The outer strands are high-purity aluminium, chosen for its good conductivity, low weight, low cost, resistance to corrosion and decent mechanical stress resistance. The centre strand is steel for additional strength to help support the weight of the conductor. Steel is of higher strength than aluminium, which allows for increased mechanical tension to be applied on the conductor. Steel also has lower elastic and inelastic deformation (permanent elongation) due to mechanical loading (e.g. wind and ice) as well as a lower coefficient of thermal expansion under current loading. These properties allow ACSR to sag significantly less than all-aluminium conductors. As per the International Electrotechnical Commission (IEC) and The CSA Group (formerly the Canadian Standards Association or CSA) naming convention, ACSR is designated A1/S1A. Design: The aluminium alloy and temper used for the outer strands in the United States and Canada is normally 1350-H19 and elsewhere is 1370-H19, each with 99.5+% aluminium content. The temper of the aluminium is indicated by the suffix of the alloy designation, which in the case of H19 is extra hard. To extend the service life of the steel strands used for the conductor core, they are normally galvanized or coated with another material to prevent corrosion. The diameters of both the aluminium and steel strands vary for different ACSR conductors. ACSR cable still depends on the tensile strength of the aluminium; it is only reinforced by the steel. Because of this, its continuous operating temperature is limited to 75 °C (167 °F), the temperature at which aluminium begins to anneal and soften over time. For situations where higher operating temperatures are required, aluminium-conductor steel-supported (ACSS) cable may be used. Steel core The standard steel core used for ACSR is galvanized steel, but steel coated with a zinc-5% or 10% aluminium mischmetal alloy (sometimes called by the trade names Bezinal or Galfan) and aluminium-clad steel (sometimes called by the trade name Alumoweld) are also available. Higher strength steel may also be used. Design: In the United States the most commonly used steel is designated GA2 for galvanized steel (G) with class A zinc coating thickness (A) and regular strength (2). Class C zinc coatings are thicker than class A and provide increased corrosion protection at the expense of reduced tensile strength. A regular strength galvanized steel core with Class C coating thickness would be designated GC2. Higher strength grades of steel are designated high-strength (3), extra-high-strength (4), and ultra-high-strength (5). An ultra-high-strength galvanized steel core with class A coating thickness would be designated GA5. The use of higher strength steel cores increases the tensile strength of the conductor, allowing for higher tensions and resulting in lower sag. Design: Zinc-5% aluminium mischmetal coatings are designated with an "M". These coatings provide increased corrosion protection and heat resistance compared to zinc alone. Regular-strength steel with a Class A mischmetal coating weight would be designated MA2. Aluminium-clad steel is designated as "AW".
Aluminium-clad steel offers increased corrosion protection and conductivity at the expense of reduced tensile strength. Aluminium-clad steel is commonly specified for coastal applications. Design: IEC and CSA use a different naming convention. The most commonly used steel is S1A for S1 regular strength steel with a class A coating. S1 steel has slightly lower tensile strength than the regular strength steel used in the United States. Per the Canadian CSA standards the S2A strength grade is classified as High Strength steel. The equivalent material per the ASTM standards is the GA2 strength grade, called Regular Strength steel. The CSA S3A strength grade is classified as Extra High Strength steel. The equivalent material per the ASTM standards is the GA3 strength grade, called High Strength. The present-day CSA standards for overhead electrical conductor do not yet officially recognize the ASTM equivalent GA4 or GA5 grades. The present-day CSA standards also do not yet officially recognize the ASTM "M" family of zinc alloy coating material. Canadian utilities are using conductors built with the higher strength steels with the "M" zinc alloy coating. Design: Lay The lay of a conductor is determined using the four extended fingers of a hand: the lay is "right" or "left" depending on whether the direction of the outer strands matches the finger direction of the right hand or the left hand, respectively. Overhead aluminium (AAC, AAAC, ACAR) and ACSR conductors in the USA are always manufactured with the outer conductor layer having a right-hand lay. Going toward the center, each layer alternates lay direction. Some conductor types (e.g. copper overhead conductor, OPGW, steel EHS) are different and have a left-hand lay on the outer conductor. Some South American countries specify a left-hand lay for the outer conductor layer on their ACSR, so those are wound differently than those used in the USA. Sizing: ACSR conductors are available in numerous specific sizes, with single or multiple center steel wires and generally larger quantities of aluminium strands. Although rarely used, there are some conductors that have more steel strands than aluminium strands. An ACSR conductor can in part be denoted by its stranding; for example, an ACSR conductor with 72 aluminium strands and a core of 7 steel strands will be called a 72/7 ACSR conductor. Cables generally range from #6 AWG ("6/1" – six outer aluminium conductors and one steel reinforcing conductor) to 2167 kcmil ("72/7" – seventy-two outer aluminium conductors and seven steel reinforcing conductors). Sizing: Naming convention To help avoid confusion due to the numerous combinations of stranding of the steel and aluminium strands, code words are used to specify a specific conductor version. In North America bird names are used for the code words, while animal names are used elsewhere. For instance, in North America, Grosbeak is a 322.3 mm2 (636 kcmil) ACSR conductor with 26/7 Aluminium/Steel stranding, whereas Egret is the same total aluminium size (322.3 mm2, 636 kcmil conductor) but with 30/19 Aluminium/Steel stranding. Although the number of aluminium strands is different between Grosbeak and Egret, differing sizes of the aluminium strands are used to offset the change in the number of strands such that the total amount of aluminium remains the same. Differences in the number of steel strands result in varying weights of the steel portion and also result in different overall conductor diameters.
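The sizing figures above mix kcmil and mm2. As a small sketch, the conversion is 1 kcmil ≈ 0.5067 mm2, and the aluminium area alone gives a rough DC-resistance estimate; the resistivity value below is an illustrative assumption, and the real tabulated resistance also depends on stranding and the steel core.

```python
import math

# A circular mil (cmil) is the area of a circle one mil (0.001 inch) across,
# so 1 kcmil = (pi/4) * (0.0254 mm)^2 * 1000 ~= 0.5067 mm^2. This checks the
# Grosbeak/Egret figure quoted above (636 kcmil ~= 322 mm^2).
MM2_PER_KCMIL = (math.pi / 4) * 0.0254**2 * 1000   # ~0.5067 mm^2 per kcmil
RHO_AL = 2.83e-8                                   # ohm-m, hard-drawn aluminium (assumed)

area_mm2 = 636 * MM2_PER_KCMIL
r_dc_per_km = RHO_AL / (area_mm2 * 1e-6) * 1000    # ohm per km, aluminium area only

print(f"636 kcmil = {area_mm2:.1f} mm^2")                   # ~322.2 mm^2
print(f"approx DC resistance = {r_dc_per_km:.3f} ohm/km")   # ~0.088 ohm/km
```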
Where various versions with the same amount of aluminium exist, most utilities standardize on a specific conductor version to avoid issues related to different sizes of hardware (such as splices). Due to the numerous different sizes available, utilities often skip over some of the sizes to reduce their inventory. The various stranding versions result in different electrical and mechanical characteristics. Sizing: Ampacity ratings Manufacturers of ACSR typically provide ampacity tables for a defined set of assumptions. Individual utilities normally apply different ratings due to using varying assumptions (which may result in higher or lower ampacity ratings than those the manufacturers provide). Significant variables include wind speed and direction relative to the conductor, sun intensity, emissivity, ambient temperature, and maximum conductor temperature. Conducting properties: In three-phase electrical power distribution, conductors must be designed to have low electrical impedance in order to ensure that the power lost in distribution is minimal. Impedance is a combination of two quantities: resistance and reactance. The resistances of ACSR conductors are tabulated for different conductor designs by the manufacturer at DC and AC frequency, assuming specific operating temperatures. Resistance changes with frequency largely because of the skin effect, the proximity effect, and hysteresis loss. Depending on the geometry of the conductor, as differentiated by the conductor name, these phenomena affect the overall resistance of the conductor at AC versus DC to varying degrees. Conducting properties: Often not tabulated with ACSR conductors is the electrical reactance of the conductor, which is due largely to the spacing to the other current-carrying conductors and the conductor radius. The reactance of the conductor contributes significantly to the overall current that needs to travel through the line, and thus contributes to resistive losses in the line. For more information on transmission line inductance and capacitance, see electric power transmission and overhead power line. Conducting properties: Skin effect The skin effect decreases the cross-sectional area in which the current travels through the conductor as AC frequency increases. For alternating current, most (63%) of the electric current flows between the surface and the skin depth, δ, which depends on the frequency of the current and the electrical (conductivity) and magnetic properties of the conductor. This decreased area causes the resistance to rise due to the inverse relationship between resistance and conductor cross-sectional area. The skin effect benefits the design, as it causes the current to be concentrated towards the low-resistivity aluminium on the outside of the conductor. To illustrate the impact of the skin effect, the American Society for Testing and Materials (ASTM) standard includes the conductivity of the steel core when calculating the DC and AC resistance of the conductor, but the IEC and CSA Group standards do not.
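As a rough illustration of why the skin effect matters at power frequencies, the sketch below evaluates the standard skin-depth formula δ = sqrt(ρ / (π f μ)); the resistivity and permeability values are illustrative assumptions for hard-drawn aluminium, not figures from the article.

```python
import math

# Skin depth for a good conductor: delta = sqrt(rho / (pi * f * mu_r * mu0)).
# Assumed values: rho ~ 2.8e-8 ohm-m for aluminium, mu_r ~ 1 (non-magnetic).
MU_0 = 4 * math.pi * 1e-7  # H/m

def skin_depth(resistivity_ohm_m: float, frequency_hz: float, mu_r: float = 1.0) -> float:
    """Return the skin depth in metres."""
    return math.sqrt(resistivity_ohm_m / (math.pi * frequency_hz * mu_r * MU_0))

for f in (50.0, 60.0):
    d = skin_depth(2.8e-8, f)
    print(f"{f:.0f} Hz: skin depth ~ {d * 1000:.1f} mm")
# ~11.9 mm at 50 Hz and ~10.9 mm at 60 Hz -- comparable to the radius of large
# ACSR conductors, which is why current concentrates in the outer aluminium layers.
```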
Conducting properties: Proximity effect In a conductor (ACSR and other types) carrying AC current, if currents are flowing through one or more other nearby conductors, the distribution of current within each conductor will be constrained to smaller regions. The resulting current crowding is termed the proximity effect. This crowding gives an increase in the effective AC resistance of the circuit, with the effect at 60 Hertz being greater than at 50 Hertz. Geometry, conductivity, and frequency are factors in determining the amount of proximity effect. Conducting properties: The proximity effect is the result of a changing magnetic field which influences the distribution of an electric current flowing within an electrical conductor due to electromagnetic induction. When an alternating current (AC) flows through an isolated conductor, it creates an associated alternating magnetic field around it. The alternating magnetic field induces eddy currents in adjacent conductors, altering the overall distribution of current flowing through them. Conducting properties: The result is that the current is concentrated in the areas of the conductor furthest away from nearby conductors carrying current in the same direction. Conducting properties: Hysteresis loss Hysteresis in an ACSR conductor is due to the atomic dipoles in the steel core changing direction due to induction from the 60 or 50 Hertz AC current in the conductor. Hysteresis losses in ACSR are undesirable and can be minimized by using an even number of aluminium layers in the conductor. Due to the cancelling effect of the magnetic field from the opposing lay (right-hand and left-hand) conductors for two aluminium layers, there is significantly less hysteresis loss in the steel core than there would be for one or three aluminium layers, where the magnetic field does not cancel out. Conducting properties: The hysteresis effect is negligible on ACSR conductors with even numbers of aluminium layers and so it is not considered in these cases. For ACSR conductors with an odd number of aluminium layers, however, a magnetization factor is used to accurately calculate the AC resistance. The correction method for single-layer ACSR is different than that used for three-layer conductors. Due to applying the magnetization factor, a conductor with an odd number of layers has an AC resistance slightly higher than an equivalent conductor with an even number of layers. Conducting properties: Due to higher hysteresis losses in the steel and associated heating of the core, an odd-layer design will have a lower ampacity rating (up to a 10% de-rate) than an equivalent even-layer design. All standard ACSR conductors smaller than Partridge (135.2 mm2 {266.8 kcmil} 26/7 Aluminium/Steel) have only one layer due to their small diameters, so the hysteresis losses cannot be avoided. Non-standard designs: ACSR is widely used due to its efficient and economical design. Variations of standard (sometimes called traditional or conventional) ACSR are used in some cases due to the special properties they offer, which provide sufficient advantage to justify their added expense. Special conductors may be more economic, offer increased reliability, or provide a unique solution to an otherwise difficult, or impossible, design problem. Non-standard designs: The main types of special conductors include "trapezoidal wire conductor" (TW), a conductor having aluminium strands with a trapezoidal shape rather than round, and "self-damping" (SD), sometimes called "self-damping conductor" (SDC). A similar, higher temperature conductor made from annealed aluminium, called "aluminium conductor steel supported" (ACSS), is also available. Trapezoidal wire Trapezoidal-shaped wire (TW) can be used in lieu of round wire in order to "fill in the gaps" and have a 10–15% smaller overall diameter for the same cross-sectional area or a 20–25% larger cross-sectional area for the same overall diameter.
Non-standard designs: Ontario Hydro (Hydro One) introduced trapezoidal-shaped wire ACSR conductor designs in the 1980s to replace existing round-wire ACSR designs (they called them compact conductors; these conductor types are now called ACSR/TW). Ontario Hydro's trapezoidal-shaped wire (TW) designs used the same steel core but increased the aluminium content of the conductor to match the overall diameter of the former round-wire designs (they could then use the same hardware fittings for both the round and the TW conductors). Hydro One's designs for their trapezoidal ACSR/TW conductors only use even numbers of aluminium layers (either two layers or four layers). They do not use designs which have an odd number of layers (three layers), because that design incurs higher hysteresis losses in the steel core. Non-standard designs: Also in the 1980s, Bonneville Power Administration (BPA) introduced TW designs where the size of the steel core was increased to maintain the same Aluminium/Steel ratio. Non-standard designs: Self-damping Self-damping (ACSR/SD) is a nearly obsolete conductor technology and is rarely used for new installations. It is a concentric-lay stranded, self-damping conductor designed to control wind-induced (Aeolian-type) vibration in overhead transmission lines by internal damping. Self-damping conductors consist of a central core of one or more round steel wires surrounded by two layers of trapezoidal-shaped aluminium wires. One or more layers of round aluminium wires may be added as required. Non-standard designs: SD conductor differs from conventional ACSR in that the aluminium wires in the first two layers are trapezoidal shaped and sized so that each aluminium layer forms a stranded tube which does not collapse onto the layer beneath when under tension, but maintains a small annular gap between layers. The trapezoidal wire layers are separated from each other and from the steel core by the two smaller annular gaps that permit movement between the layers. The round aluminium wire layers are in tight contact with each other and the underlying trapezoidal wire layer. Non-standard designs: Under vibration, the steel core and the aluminium layers vibrate with different frequencies and impact damping results. This impact damping is sufficient to keep any Aeolian vibration to a low level. The use of trapezoidal strands also results in reduced conductor diameter for a given AC resistance per mile. The major advantages of ACSR/SD are: High self-damping allows the use of higher unloaded tension levels, resulting in reduced maximum sag and thus reduced structure height and/or fewer structures per km [or per mile]. Reduced diameter for a given AC resistance yields reduced structure transverse wind and ice loading. The major disadvantages of ACSR/SD are: There most likely will be increased installation and clipping costs due to special hardware requirements and specialized stringing methods. The conductor design always requires the use of a steel core, even in light loading areas.
This in turn yields reduced composite thermal elongation and increased self-damping. Non-standard designs: The major advantages of ACSS are: Since the aluminium strands are "dead-soft" to begin with, the conductor may be operated at temperatures in excess of 200 °C (392 °F) without loss of strength. Non-standard designs: Since the tension in the aluminium strands is normally low, the conductor's self-damping of Aeolian vibration is high and it may be installed at high unloaded tension levels without the need for separate Stockbridge-type dampers.The major disadvantages of ACSS are: In areas experiencing heavy ice load, the reduced strength of this conductor relative to standard ACSR may make it less desirable. Non-standard designs: The softness of the annealed aluminium strands and the possible need for pre-stressing prior to clipping and sagging may raise installation costs. Non-standard designs: Twisted pair Twisted pair (TP) conductor (sometimes called by the trade-names T-2 or VR) has the two sub-conductors twisted (usually with a left-hand lay) about one another generally with a lay length of approximately three meters (nine feet).The conductor cross-section of the TP is a rotating "figure-8". The sub-conductors can be any type of standard ACSR conductor but the conductors need to match one another to provide mechanical balance. Non-standard designs: The major advantages of TP conductor are: The use of the TP conductor reduces the propensity of ice/wind galloping starting on the line. In an ice storm when ice deposits start to accumulate along the conductor the twisted conductor profile prevents a uniform airfoil shape from forming. With a standard round conductor the airfoil shape results in uplift of the conductor and initiation of the galloping motion. The TP conductor profile and this absence of the uniform airfoil shape inhibits the initiation of the galloping motion. The reduction in motion during icing events helps prevent the phase conductors from contacting each other causing a fault and an associated outage of the electrical circuit. With the reduction in large amplitude motions, closer phase spacing or longer span lengths can be used. This in turn can result in a lower cost of construction. TP conductor is generally installed only in areas that normally are exposed to wind speed and freezing temperature conditions associated with ice buildup. Non-standard designs: The non-round shape of this conductor reduces the amplitude of Aeolian vibration and the accompanying fatigue inducing strains near splices and conductor attachment clamps. TP conductors can gently rotate to dissipate energy. As a result, TP conductor can be installed to higher tension levels and reduced sags.The major disadvantages of TP conductor are: The non-round cross-section yields wind and ice loadings which are about 11% higher than standard conductor of the same AC resistance per mile. Non-standard designs: The installation of, and hardware for this conductor, can be somewhat more expensive than standard conductor. Splicing: Many electrical circuits are longer than the length of conductor which can be contained on one reel. As a result, splicing is often necessary to join conductors to provide the desired length. It is important that the splice not be the weak link. A splice (joint) must have high physical strength along with a high electrical current rating. 
Within the limitations of the equipment used to install the conductor from the reels, a sufficient length of conductor that the reel can accommodate is generally purchased to avoid more splices than are absolutely necessary. Splicing: Splices are designed to run cooler than the conductor. The temperature of the splice is kept lower by having a larger cross-sectional area and thus less electrical resistance than the conductor. Heat generated at the splice is also dissipated faster due to the larger diameter of the splice. Failures of splices are of concern, as a failure of just one splice can cause an outage that affects a large amount of electrical load. Most splices are compression-type splices (crimps). These splices are inexpensive and have good strength and conductivity characteristics. Some splices, called automatics, use a jaw-type design that is faster to install (it does not require the heavy compression equipment) and are often used during storm restoration, when speed of installation is more important than the long-term performance of the splice. Causes of splice failures are numerous. Some of the main failure modes are related to installation issues, such as: insufficient cleaning (wire brushing) of the conductor to eliminate the aluminium oxide layer (which has a high resistance {is a poor electrical conductor}), improper application of conducting grease, improper compression force, or improper compression locations or number of compressions. Splice failures can also be due to Aeolian vibration damage, as the small vibrations of the conductor over time cause damage (breakage) of the aluminium strands near the ends of the splice. Splicing: Special splices (two-piece splices) are required on SD-type conductors, as the gap between the trapezoidal aluminium layer and the steel core prevents the compression force between the splice and the steel core from being adequate. A two-piece design has a splice for the steel core and a longer, larger-diameter splice for the aluminium portion. The outer splice must be threaded on first and slid along the conductor; the steel splice is compressed first, and then the outer splice is slid back over the smaller splice and compressed. This complicated process can easily result in a poor splice. Splices can also fail partially, where they have higher resistance than expected, usually after some time in the field. These can be detected using a thermal camera, thermal probes, and direct resistance measurements, even when the line is energized. Such splices usually require replacement, either on a deenergized line, by installing a temporary bypass to replace the splice, or by adding a larger splice over the existing splice without disconnecting the line. Conductor coatings: When ACSR is new, the aluminium has a shiny surface which has a low emissivity for heat radiation and a low absorption of sunlight. As the conductor ages, the color becomes dull gray due to the oxidation of the aluminium strands. In high-pollution environments, the color may turn almost black after many years of exposure to the elements and chemicals. For aged conductor, the emissivity for heat radiation and the absorption of sunlight increase. Conductor coatings are available that have a high emissivity for heat radiation and a low absorption of sunlight. These coatings would be applied to new conductor during manufacture. These types of coatings have the ability to potentially increase the current rating of the ACSR conductor.
For the same amount of amperage, the temperature of the same conductor will be lower due to the better heat dissipation of the higher emissivity coating.
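As a rough illustration of why surface emissivity changes the conductor temperature, the sketch below compares radiated heat loss per metre using the Stefan-Boltzmann relation. The diameter, temperatures, and emissivity values are illustrative assumptions rather than figures from the article, and convective cooling is not modelled.

```python
import math

# Radiated heat loss per metre: P = eps * sigma * (pi * D) * (T^4 - T_amb^4).
# Assumed values: a 25 mm diameter conductor at 75 C in 25 C surroundings.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
D = 0.025                 # conductor diameter, m
T_COND = 75 + 273.15      # conductor surface temperature, K
T_AMB = 25 + 273.15       # ambient temperature, K

def radiated_watts_per_metre(emissivity: float) -> float:
    area_per_metre = math.pi * D   # m^2 of surface per metre of length
    return emissivity * SIGMA * area_per_metre * (T_COND**4 - T_AMB**4)

for eps in (0.2, 0.9):   # roughly: shiny new aluminium vs a high-emissivity coating
    print(f"emissivity {eps}: ~{radiated_watts_per_metre(eps):.1f} W radiated per metre")
# The high-emissivity surface sheds several times more heat by radiation, which
# is why such coatings can allow a higher current rating at the same temperature.
```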
**Gaussian q-distribution** Gaussian q-distribution: In mathematical physics and probability and statistics, the Gaussian q-distribution is a family of probability distributions that includes, as limiting cases, the uniform distribution and the normal (Gaussian) distribution. It was introduced by Diaz and Teruel. It is a q-analog of the Gaussian or normal distribution. The distribution is symmetric about zero and is bounded, except for the limiting case of the normal distribution. The limiting uniform distribution is on the range -1 to +1. Definition: Let q be a real number in the interval [0, 1). The probability density function of the Gaussian q-distribution is given by

$$s_q(x) = \begin{cases} 0 & \text{if } x < -\nu \\ \dfrac{1}{c(q)}\, E_{q^2}^{-q^2 x^2/[2]} & \text{if } -\nu \le x \le \nu \\ 0 & \text{if } x > \nu \end{cases}$$

where

$$\nu = \nu(q) = \frac{1}{\sqrt{1-q}}, \qquad c(q) = 2(1-q)^{1/2} \sum_{m=0}^{\infty} \frac{(-1)^m\, q^{m(m+1)}}{\left(1-q^{2m+1}\right)\left(1-q^{2}\right)_{q^{2}}^{m}}.$$

The q-analogue $[t]_q$ of the real number $t$ is given by

$$[t]_q = \frac{q^t - 1}{q - 1}.$$

The q-analogue of the exponential function is the q-exponential, $E_q^x$, which is given by

$$E_q^x = \sum_{j=0}^{\infty} q^{j(j-1)/2}\, \frac{x^j}{[j]!}$$

where the q-analogue of the factorial is the q-factorial, $[n]_q!$, which is in turn given by

$$[n]_q! = [n]_q\,[n-1]_q \cdots [2]_q$$

for an integer n > 2, and $[1]_q! = [0]_q! = 1$. The cumulative distribution function of the Gaussian q-distribution is given by

$$G_q(x) = \begin{cases} 0 & \text{if } x < -\nu \\ \dfrac{1}{c(q)} \displaystyle\int_{-\nu}^{x} E_{q^2}^{-q^2 t^2/[2]}\, d_q t & \text{if } -\nu \le x \le \nu \\ 1 & \text{if } x > \nu \end{cases}$$

where the integration symbol denotes the Jackson integral. The function $G_q$ can also be written explicitly as a q-series in $x$ on $[-\nu, \nu]$ (with value 0 below $-\nu$ and 1 above $\nu$), using the notation

$$(a+b)_q^n = \prod_{i=0}^{n-1} \left(a + q^i b\right).$$

Moments: The moments of the Gaussian q-distribution are given by

$$\frac{1}{c(q)} \int_{-\nu}^{\nu} E_{q^2}^{-q^2 x^2/[2]}\, x^{2n}\, d_q x = [2n-1]!!, \qquad \frac{1}{c(q)} \int_{-\nu}^{\nu} E_{q^2}^{-q^2 x^2/[2]}\, x^{2n+1}\, d_q x = 0,$$

where the symbol $[2n-1]!!$ is the q-analogue of the double factorial, given by

$$[2n-1][2n-3]\cdots[1] = [2n-1]!!.$$
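The q-analogues defined above are easy to check numerically. The sketch below implements $[t]_q$, the q-factorial, and the q-exponential exactly as defined and verifies that they approach the ordinary factorial and exponential as q approaches 1; the particular test values are arbitrary choices.

```python
import math

def q_bracket(t: float, q: float) -> float:
    # [t]_q = (q^t - 1) / (q - 1)
    return (q**t - 1) / (q - 1)

def q_factorial(n: int, q: float) -> float:
    # [n]_q! = [n]_q [n-1]_q ... [2]_q, with [1]_q! = [0]_q! = 1
    out = 1.0
    for k in range(2, n + 1):
        out *= q_bracket(k, q)
    return out

def q_exp(x: float, q: float, terms: int = 60) -> float:
    # E_q^x = sum_j q^(j(j-1)/2) x^j / [j]_q!
    return sum(q**(j * (j - 1) / 2) * x**j / q_factorial(j, q) for j in range(terms))

q = 0.999
print(q_factorial(5, q), math.factorial(5))   # ~119.4, close to 5! = 120
print(q_exp(1.0, q), math.e)                  # close to e = 2.71828...
```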
**Integrated Science Investigation of the Sun** Integrated Science Investigation of the Sun: Integrated Science Investigation of the Sun, or IS☉IS, is an instrument aboard the Parker Solar Probe, a space probe designed to study the Sun. IS☉IS is focused on measuring energetic particles from the Sun, including electrons, protons, and ions. The parent spacecraft was launched in early August 2018, and with multiple flybys of Venus will study the heliosphere of the Sun from a distance of less than 4 million kilometers, or less than 9 solar radii. IS☉IS consists of two detectors, EPI-Lo and EPI-Hi, corresponding to detection of relatively lower and higher energy particles. EPI-Lo is designed to detect ions from about 20 keV per nucleon up to 15 MeV (megaelectronvolts) total energy, and electrons from about 25 keV up to 1000 keV. EPI-Hi is designed to measure charged particles from about 1 to 200 MeV per nucleon and electrons from about 0.5 to 6 MeV, according to a paper about the device. The short name includes a symbol for the Sun, a circle with a dot in it: ☉. NASA suggests pronouncing the name as "ee-sis" in English. Operations: By September 2018, IS☉IS had been turned on and first light data had been returned. EPI-Hi: EPI-Hi includes one High Energy Telescope (HET), which has 16 stacked detectors, and two Low Energy Telescopes: LET1 is double-ended with 9 stacked detectors, and LET2 is single-ended with 7 stacked detectors. The detectors are solid-state devices. EPI-Lo: EPI-Lo includes 8 wedge detectors, fed by 80 separate entrances. Together these entrances cover a field of view of almost a full hemisphere. EPI-Lo can record differential energy spectra for electrons, hydrogen, helium-3, helium-4, carbon, oxygen, neon, magnesium, silicon, and iron.
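The EPI-Lo limits are quoted per nucleon for ions, so the corresponding total kinetic energy scales with the mass number of the species. The sketch below illustrates this for the elements listed above; the mass numbers are those of common isotopes and are assumptions not stated in the article.

```python
# Same per-nucleon energy, different total energy for different ions.
# Mass numbers of common isotopes (assumed, not from the article).
MASS_NUMBER = {"H": 1, "He-3": 3, "He-4": 4, "C": 12, "O": 16,
               "Ne": 20, "Mg": 24, "Si": 28, "Fe": 56}

per_nucleon_keV = 20  # EPI-Lo's approximate lower threshold, as quoted above
for species, A in MASS_NUMBER.items():
    print(f"{species:5s}: {per_nucleon_keV} keV/nucleon -> {per_nucleon_keV * A} keV total")
# e.g. 20 keV/nucleon is 20 keV total for a proton but 1120 keV for iron-56.
```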
**Streptopyrrole** Streptopyrrole: Streptopyrrole is an antibiotic with the molecular formula C14H12ClNO4, which is produced by the bacterium Streptomyces armeniacus.
**.na** .na: .na is the Internet country code top-level domain (ccTLD) for Namibia, corresponding to the two-letter code from the ISO 3166 standard. The registry accredits both Namibian and foreign registrars. Registrars access the Registry and register domains using either a web-based GUI or the industry-standard EPP protocol. The domain was established on 8 May 1991. The ccTLD manager is NA-NiC (Namibian Network Information Centre). The Namibian Parliament passed a Communications Act in 2009 containing various provisions regarding the ccTLD; however, as of the end of 2017, they had not yet entered into force. Registrations are available both at the second level and at the third level beneath various names that include some apparently redundant choices (e.g., both .co.na and .com.na for commercial entities). Domain registration costs: Domain registration prices to the end-user are now set by registrars in competition with each other. Wholesale prices (the cost to the registrars) depend on the level at which a registration is made (i.e., whether it is a second-level or a third-level registration) and also on whether the registrant is domestic or foreign. The second level is considered 'premium', so the cheapest domains would be a registration by a local organisation at the third level (such as the NamNumbers telephone directory at TELECOM.COM.NA), whilst the highest prices are paid by non-Namibian entities registering at the second level (such as BRITISHCOUNCIL.NA). NA-NiC is a member of the Council of Country Code Administrators and uses their dispute resolution process. Secure DNS: .na is an early adopter of the Domain Name System Security Extensions, with the .na root zone having been signed with DNSSEC since 1 September 2009.
**Postage stamp booklet** Postage stamp booklet: A postage stamp booklet (also called stamp book) is a booklet made up of one or more small panes of postage stamps in a cardboard cover. Booklets are often made from sheets especially printed for this purpose, with a narrow selvage at one side of the booklet pane for binding. From the cutting, the panes are usually imperforate on the edges of the booklet. Smaller and easier to handle than a whole sheet of stamps, in many countries booklets have become a favored way to purchase stamps. History: Booklets of telegraph stamps are known to have been issued by the California State Telegraph Company in 1870, and by Western Union in 1871, and on 14 October 1884 an A.W. Cooke of Boston received Patent 306,674 from the United States Patent Office for the idea of putting postage stamps into booklets. Luxembourg was the first country to issue booklets, in 1895, followed by Sweden in 1898, the United States in 1900 and Great Britain in 1904. The idea became popular and quickly spread around the world. Production: Originally booklets were produced manually, by separating sheets into smaller panes and binding those. These are not distinguishable from the sheet stamps. Later, the popularity of booklets meant that it was worthwhile to produce booklet panes directly; printing onto large sheets, then cutting into booklet panes each with a small number of stamps, and perforating between the stamps of each pane. Such sheets, in fact, were created to produce the earliest United States booklets, printed from special plates that yielded sheets of 180 or 360 stamps for cutting into panes of six stamps each. (Normal sheets containing 400 stamps were deemed unusable for booklets because they could not be cut into six-stamp panes without leaving waste.) Booklet stamps so produced usually have 1, 2, or 3 straight edges (although some booklet panes have been printed 3 stamps across, and the middle stamps will have perforations all around). The first two U. S. booklet issues (1900 and 1903) offered only stamps denominated at the normal letter rate (2¢), but in 1907 booklets were introduced containing 1¢ stamps suitable for post cards. Production: Some countries, such as Sweden, routinely issue a single stamp design in coils, booklets, and sheets. The complete stamp collection will contain examples of each of these. Some collectors specialize in collecting the booklets themselves, or whole panes from a booklet; these often sell at a premium over the equivalent number of stamps. The oldest types of booklets were not much noticed at the time, nearly all used for postage, and intact booklets are quite rare today.
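The pane arithmetic behind the early U.S. booklet plates mentioned under Production can be checked in a few lines; the snippet below only restates the 180-, 360-, and 400-stamp sheet sizes quoted above.

```python
# Why 400-stamp sheets were "deemed unusable" for six-stamp booklet panes:
for sheet in (180, 360, 400):
    panes, leftover = divmod(sheet, 6)
    print(f"{sheet}-stamp sheet -> {panes} six-stamp panes, {leftover} stamps left over")
# 180 and 360 divide evenly into six-stamp panes; 400 leaves 4 stamps as waste.
```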
**Soil morphology** Soil morphology: Soil morphology is the branch of soil science dedicated to the technical description of soil, particularly physical properties including texture, color, structure, and consistence. Morphological evaluations of soil are typically performed in the field on a soil profile containing multiple horizons.Along with soil formation and soil classification, soil morphology is considered part of pedology, one of the central disciplines of soil science. Background: Since the origin of agriculture, humans have understood that soils contain different properties which affect their ability to grow crops. However, soil science did not become its own scientific discipline until the 19th century, and even then early soil scientists were broadly grouped as either "agro-chemists" or "agro-geologists" due to the enduring strong ties of soil to agriculture. These agro-geologists examined soils in natural settings and were the first to scientifically study soil morphology.A team of Russian early soil scientists led by V.V. Dokuchaev observed soil profiles with similar horizons in areas with similar climate and vegetation, despite being hundreds of kilometers apart. Dokuchaev's work, along with later contributions from K.D. Glinka, C.F. Marbut, and Hans Jenny, established soils as independent, natural bodies with unique properties caused by their equally unique combinations of climate, biological activity, relief, parent material, and time. Soil properties had previously been inferred from geological or environmental conditions alone, but with this new understanding, soil morphological properties were now used to evaluate the integrated influence of these factors.Soil morphology became the basis for understanding observations, experiments, behavior, and practical uses of different soils. To standardize morphological descriptions, official guidelines and handbooks for describing soil were first published in the 1930s by Charles Kellogg and the United States Department of Agriculture-Soil Conservation Service for the United States and by G.R. Clarke for the United Kingdom. Many other countries and national soil survey organizations have since developed their own guidelines. Properties and Procedure: Observations of soil morphology are typically performed in the field on soil profiles exposed by excavating a pit or extracting a core with a push tube (handheld or hydraulic) or auger. A soil profile is one face of a pedon, or an imaginary three-dimensional unit of soil that would display the full range of properties characteristic of a particular soil. Pedons generally occupy between 1 and 10 m2 of surface land area and are the fundamental unit of field-based soil study.Many soil scientists in the United States document soil morphological descriptions using the standard Pedon Description field sheet published by the USDA-NRCS. In addition to location, landscape, vegetation, topographic, and other site information, soil morphology descriptions generally include the following properties: Horizonation Soil profiles contain multiple layers, known as horizons, that are generally parallel to the soil surface. These horizons are distinguishable from adjacent layers by their changes in morphological properties as the soil naturally forms. 
The same soil horizons may be named and labeled differently in various soil classification systems around the world, though most systems contain the following: Numerical prefix: indicates a lithologic discontinuity or change in parent material Capital letter: represents the master horizon, such as O, A, E, B, C, R, and others. Multiple capital letters may be used to describe transition horizons, which are layers with properties of multiple master horizons (such as AB or A/B horizons). Properties and Procedure: Lowercase letter: horizon suffix or subordinate distinction, which add details of soil formation. Multiple suffixes may be used in combination, and some master horizons (including O, B, and L) must be described with a suffix. Properties and Procedure: Numerical suffix: indicates subdivisions within a larger horizon. If there are layers distinct enough to be separate horizons, but similar enough to receive the same master and suffix letters, sequential numbers are added to the end of the designation to distinguish the horizons (such as A, Bt1, Bt2, Bt3, C).In addition to the horizon name, the distinctness and topography of each horizon's lower boundary are described. Boundary distinctness is determined by how accurately the border between horizons can be identified and may be very abrupt, abrupt, clear, gradual, or diffuse. Boundary topography refers to the horizontal variation of the border, which is often not parallel to the soil surface and may even be discontinuous. Topography categories include smooth, wavy, irregular, and broken. Properties and Procedure: Color Soil color is quantitatively described using the Munsell color system, which was developed in the early 20th century by Albert Munsell. Munsell was a painter and the system covers the entire range of colors, though the specially adapted Munsell soil color books commonly used in field description only include the most relevant colors for soil.The Munsell color system includes the following three components: Hue: indicates the dominant spectral (i.e., rainbow) color, which in soil is generally yellow and/or red. Each page of the Munsell soil color book displays a different hue. Examples include 10YR, 5YR, and 2.5Y. Properties and Procedure: Value: indicates lightness or darkness. Value increases from the bottom of each page to the top, with lower numbers representing darker color. Color with a value of 0 would be black. Properties and Procedure: Chroma: indicates intensity or brightness. Chroma increases from left to right on each page, with higher numbers representing more vivid or saturated color. Color with a chroma of 0 would be neutral gray.Colors in soil can be quite diverse and result from organic matter content, mineralogy, and the presence and oxidation states of iron and manganese oxides. Organic-rich soils tend to be dark brown or even black due to organic matter accumulating on the mineral particles. Well-drained and highly weathered soils may be bright red or brown from oxidized iron, while reduced iron can impart gray or blue colors and indicate poor drainage. When soil is saturated for prolonged periods, oxygen availability is limited and iron may become a biological electron acceptor. Reduced iron is more soluble than oxidized iron and is easily leached from particle coatings, which exposes bare, light-colored silicate minerals and results in iron depletions. 
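To make the hue/value/chroma components described above concrete, here is a minimal illustrative sketch in Python (the class and the parsing format are illustrative assumptions, not an official soil-survey tool); it splits the conventional "hue value/chroma" field notation, such as "10YR 3/2", into its three parts:

```python
import re
from dataclasses import dataclass

@dataclass
class MunsellColor:
    hue: str       # dominant spectral color, e.g. "10YR"
    value: float   # lightness: lower is darker, 0 would be black
    chroma: float  # intensity: 0 would be neutral gray, higher is more vivid

def parse_munsell(notation: str) -> MunsellColor:
    """Parse the conventional 'hue value/chroma' notation, e.g. '10YR 3/2'."""
    match = re.fullmatch(r"\s*([\d.]+\s*[A-Z]+)\s+([\d.]+)\s*/\s*([\d.]+)\s*", notation)
    if match is None:
        raise ValueError(f"not a Munsell notation: {notation!r}")
    hue, value, chroma = match.groups()
    return MunsellColor(hue.replace(" ", ""), float(value), float(chroma))

# A dark, low-chroma color of the kind associated with organic-rich topsoil.
print(parse_munsell("10YR 3/2"))
```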
When iron reduction and/or depletion makes gray the dominant matrix color, the soil is said to be gleyed. Soil color is also moisture dependent, specifically the color value. It is important to note the moisture status as "moist" when adding water does not change the soil color, or as "dry" when the soil is air dry. The standard moisture status for describing soil in the field varies regionally; humid areas generally use the moist state while arid ones use the dry state. In detailed descriptions, both the moist and dry colors should be recorded. Properties and Procedure: Soil texture Soil texture is the analysis and classification of the particle size distribution in soil. The relative amounts of sand, silt, and clay particles determine a soil's texture, which affects the appearance, feel and chemical properties of the soil. Properties and Procedure: Field methods To estimate texture by hand in the field, soil scientists take a handful of sifted soil and moisten it with water until it holds together. The soil is then rolled into a ball roughly 1–2 inches in diameter and squeezed between the thumb and side of the index finger. A ribbon is pressed out and made as long as possible until it breaks under its own weight. Longer ribbons indicate a higher clay percentage. The relative smoothness or grittiness indicates the sand percentage, and with practice, this technique can provide accurate textural class determinations. Properties and Procedure: Lab methods An experienced soil scientist can determine soil texture in the field with decent accuracy, as described above. However, not all soils lend themselves to accurate field determinations of soil texture due to the presence of other particles that interfere with measuring the concentration of sand, silt and clay. The mineral texture can be obscured by high soil organic matter, iron oxides, amorphous or short-range-order aluminosilicates, and carbonates. Properties and Procedure: In order to precisely determine the amount of clay, sand and silt in a soil, it must be taken to a laboratory for analysis. A procedure known as particle size analysis (PSA) is performed, beginning with the pretreatment of the soil in order to remove all other particles, such as organic matter, that may interfere with the classification. Pretreatment must leave the soil as strictly sand, silt and clay particles. Pretreatment may consist of processes such as the sieving of the soil to remove larger particles, thus allowing the soil to be dispersed properly. Hydrometer tests may then be used to calculate the amounts of sand, silt and clay present. This consists of mixing the pretreated soil with water and then allowing the mixture to settle, making note of the hydrometer reading. Sand particles are the largest, and thus will settle the quickest, followed by the silt particles, and lastly the clay particles. The fractions are then dried and weighed. The three fractions should add up to 100% in order for the test to be considered successful. Laser diffraction analysis can also be used as an alternative to the sieving and hydrometer methods. From here, the soil can be classified using a soil texture triangle, which labels the type of soil based on the percentages of each particle in the sample. Properties and Procedure: Structure Soil particles naturally aggregate together into larger units or shapes referred to as "peds".
Peds have planes of weakness between them and are generally identified by probing exposed soil profiles with a knife to pry out and gently break apart volumes of soil. Morphological descriptions of soil structure contain assessments of shape, size, and grade. Structure shapes include granular, platy, blocky, prismatic, columnar, and others, including the "structureless" shapes of massive and single-grained. Size is classified as one of six categories ranging from "very fine" to "extremely coarse," with different size limits for the various shapes and measurements taken on the smallest ped dimension. Grade indicates the distinctness of peds, or how easily distinguishable they are from each other, and is described with the classes "weak", "moderate", and "strong." Structure is often best evaluated while the soil is relatively dry, as peds may swell with moisture, press together and reduce the definition between each ped. Porosity: Porosity of topsoil is a measure of the pore space in soil, which typically decreases as grain size increases. This is due to soil aggregate formation in finer textured surface soils when subject to soil biological processes. Aggregation involves particulate adhesion and higher resistance to compaction. Porosity of a soil is a function of the soil's bulk density, which is based on the composition of the soil. Sandy soils typically have higher bulk densities and lower porosity than silty or clayey soils. This is because finer grained particles have a larger amount of pore space than coarser grained particles. The ideal bulk densities that allow root growth, and the values that restrict it, differ among the three main texture classifications. The porosity of a soil is an important factor that determines the amount of water a soil can hold, how much air it can hold, and subsequently how well plant roots can grow within the soil. Soil porosity is complex. Traditional models regard porosity as continuous. This fails to account for anomalous features and produces only approximate results. Furthermore, it cannot help model the influence of environmental factors which affect pore geometry. A number of more complex models have been proposed, including fractals, bubble theory, cracking theory, Boolean grain process, packed sphere, and numerous other models. Micromorphology: Soil micromorphology refers to the description, measurement, and interpretation of soil features that are too small to be observed by the unassisted eye. While micromorphological descriptions may begin in the field with the use of a 10x hand lens, much more can be described using thin sections made of the soil with the aid of a petrographic polarizing light microscope. The soil can be impregnated with an epoxy resin, but more commonly with a polyester resin (Crystic 17449), and sliced and ground to 0.03 millimeter thickness and examined by passing light through the thin soil plasma. Micromorphology: Micromorphology in archaeology Soil micromorphology has been a recognized technique in soil science for some 50 years, and experience from pedogenic and paleosol studies first permitted its use in the investigation of archaeologically buried soils. More recently, the science has expanded to encompass the characterization of all archaeological soils and sediments and has been successful in providing unique cultural and paleoenvironmental information from a whole range of archaeological sites.
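As an illustration of the texture-triangle step described in the soil texture section above, here is a minimal sketch in Python that assigns a few of the USDA textural classes from sand, silt, and clay percentages; the thresholds below cover only a handful of classes and are a simplification, not the full twelve-class USDA triangle:

```python
def usda_texture_class(sand: float, silt: float, clay: float) -> str:
    """Very simplified texture-triangle lookup (a few USDA classes only)."""
    if abs(sand + silt + clay - 100.0) > 1.0:
        raise ValueError("sand + silt + clay should total about 100%")
    if sand >= 85 and silt + 1.5 * clay < 15:
        return "sand"
    if silt >= 80 and clay < 12:
        return "silt"
    if clay >= 40 and sand <= 45 and silt < 40:
        return "clay"
    if 7 <= clay < 27 and 28 <= silt < 50 and sand <= 52:
        return "loam"
    return "other class (consult the full texture triangle)"

print(usda_texture_class(40, 40, 20))  # -> loam
```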
Soil formation: Form Soils are formed from their respective parent material, which may or may not match the composition of the bedrock that they lie on top of. Through biological and chemical processes as well as natural processes such as wind and water erosion, parent material can be broken down. The chemical and physical properties of this parent material are reflected in the qualities of the resulting soil. Climate, topography, and biological organisms all have an impact on the formation of soils in various geographic locations. Soil formation: Topography A steep landform sees an increased amount of runoff when compared to a flat landform. Increased runoff can inhibit soil formation as the upper layers continue to be stripped off because they are not developed enough to support root growth. Root growth can help prevent erosion as the roots act to keep the soil in place. This phenomenon leads to soils on slopes being thinner and less developed than soils found on plains or plateaus. Soil formation: Climate Varying levels of precipitation and wind have impacts on the formation of soils. Increased precipitation can lead to increased levels of runoff as previously described, but regular amounts of precipitation can encourage plant root growth, which works to stop runoff. The growth of vegetation in a certain area can also work to increase the depth and nutrient quality of a topsoil, as decomposition of organic matter works to strengthen organic soil horizons. Soil formation: Biological processes Varying levels of microbial activity can have a range of impacts on soil formation. Most often, biological processes work to disrupt existing soil formation, which leads to chemical translocation. The movement of these chemicals can make nutrients available, which can increase plant root growth.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Integrin alpha L** Integrin alpha L: Integrin, alpha L (antigen CD11A (p180), lymphocyte function-associated antigen 1; alpha polypeptide), also known as ITGAL, is a protein that in humans is encoded by the ITGAL gene. CD11a functions in the immune system. It is involved in cellular adhesion and costimulatory signaling. It is the target of the drug efalizumab. Function: ITGAL gene encodes the integrin alpha L chain. Integrins are heterodimeric integral membrane proteins composed of an alpha chain and a beta chain. This I-domain containing alpha integrin combines with the beta 2 chain (ITGB2) to form the integrin lymphocyte function-associated antigen-1 (LFA-1), which is expressed in all leukocytes. LFA-1 plays a central role in leukocyte intercellular adhesion through interactions with its ligands, ICAMs 1-3 (intercellular adhesion molecules 1 through 3), and also functions in lymphocyte costimulatory signaling.CD11a is one of the two components, along with CD18, which form lymphocyte function-associated antigen-1. Function: Efalizumab acts as an immunosuppressant by binding to CD11a but was withdrawn in 2009 because it was associated with severe side effects. Interactions: CD11a has been shown to interact with ICAM-1.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nokia 2.1** Nokia 2.1: Nokia 2.1 is a Nokia-branded entry-level smartphone released by HMD Global in August 2018, running the Android operating system. Design: The phone has an aluminium frame with a plastic back. It runs on a Qualcomm Snapdragon 425 System-on-chip with 1 GB of RAM. It has Dual Sim support. Reception: The Nokia 2.1 received mixed reviews. Andrew Williams of TrustedReviews praised the phone’s "low price, large screen and stereo speakers" while criticising "poor storage and performance".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Main group peroxides** Main group peroxides: Main group peroxides are peroxide derivatives of the main group elements. Many compounds of the main group elements form peroxides, and a few are of commercial significance. Examples: With thousands of tons produced annually, the peroxydisulfates, S₂O₈²⁻, are preeminent members of this class. These salts serve as initiators for the polymerization of acrylates and styrene. At one time, peroxyborates were used in detergents. These salts have been largely replaced by peroxycarbonates. Many peroxides are not commercially valuable but are of academic interest. One example is bis(trimethylsilyl) peroxide (Me3SiOOSiMe3). Phosphorus oxides form a number of peroxides, e.g. "P2O6".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Calculus Made Easy** Calculus Made Easy: Calculus Made Easy is a book on infinitesimal calculus originally published in 1910 by Silvanus P. Thompson, considered a classic and elegant introduction to the subject. The original text continues to be available as of 2008 from Macmillan and Co., but a 1998 update by Martin Gardner is available from St. Martin's Press which provides an introduction; three preliminary chapters explaining functions, limits, and derivatives; an appendix of recreational calculus problems; and notes for modern readers. Gardner changes "fifth form boys" to the more American sounding (and gender neutral) "high school students," updates many now obsolescent mathematical notations or terms, and uses American decimal dollars and cents in currency examples. Calculus Made Easy: Calculus Made Easy ignores the use of limits with its epsilon-delta definition, replacing it with a method of approximating (to arbitrary precision) directly to the correct answer in the infinitesimal spirit of Leibniz, now formally justified in modern nonstandard analysis and smooth infinitesimal analysis. The original text is now in the public domain under US copyright law (although Macmillan's copyright under UK law is reproduced in the 2008 edition from St. Martin's Press). It can be freely accessed on Project Gutenberg.
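As a one-line illustration of the infinitesimal style the book is known for (a sketch of the approach, not a quotation from the text): to differentiate x², let x grow by a small amount dx and discard the square of the small quantity:

```latex
(x + dx)^2 = x^2 + 2x\,dx + (dx)^2 \;\approx\; x^2 + 2x\,dx
\quad\Longrightarrow\quad
\frac{d(x^2)}{dx} = 2x .
```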
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Inductive type** Inductive type: In type theory, a system has inductive types if it has facilities for creating a new type from constants and functions that create terms of that type. The feature serves a role similar to data structures in a programming language and allows a type theory to add concepts like numbers, relations, and trees. As the name suggests, inductive types can be self-referential, but usually only in a way that permits structural recursion. Inductive type: The standard example is encoding the natural numbers using Peano's encoding. It can be defined in Coq as Inductive nat : Set := O : nat | S : nat -> nat. Here, a natural number is created either from the constant "0" or by applying the function "S" to another natural number. "S" is the successor function which represents adding 1 to a number. Thus, "0" is zero, "S 0" is one, "S (S 0)" is two, "S (S (S 0))" is three, and so on. Inductive type: Since their introduction, inductive types have been extended to encode more and more structures, while still being predicative and supporting structural recursion. Elimination: Inductive types usually come with a function to prove properties about them. Thus, "nat" may come with (in Coq syntax) nat_ind : forall P : nat -> Prop, P 0 -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n. In words: for any proposition "P" over natural numbers, given a proof of "P 0" and a proof of "P n -> P (n+1)", we get back a proof of "forall n, P n". This is the familiar induction principle for natural numbers. Implementations: W- and M-types W-types are well-founded types in intuitionistic type theory (ITT). They generalize natural numbers, lists, binary trees, and other "tree-shaped" data types. Let U be a universe of types. Given a type A : U and a dependent family B : A → U, one can form a W-type W(a : A) B(a). The type A may be thought of as "labels" for the (potentially infinitely many) constructors of the inductive type being defined, whereas B indicates the (potentially infinite) arity of each constructor. W-types (resp. M-types) may also be understood as well-founded (resp. non-well-founded) trees with nodes labeled by elements a : A and where the node labeled by a has B(a)-many subtrees. Each W-type is isomorphic to the initial algebra of a so-called polynomial functor. Implementations: Let 0, 1, 2, etc. be the finite types with the corresponding number of inhabitants, writing 1₁ : 1 for the sole inhabitant of 1 and 1₂, 2₂ : 2 for the two inhabitants of 2. One may define the natural numbers as the W-type W(x : 2) f(x), where f : 2 → U is defined by f(1₂) = 0 (representing the constructor for zero, which takes no arguments) and f(2₂) = 1 (representing the successor function, which takes one argument). One may define lists over a type A : U as List(A) := W(x : 1 + A) f(x), where f(inl(1₁)) = 0, f(inr(a)) = 1, and 1₁ is the sole inhabitant of 1. The value of inl(1₁) corresponds to the constructor for the empty list, whereas the value of inr(a) corresponds to the constructor that appends a to the beginning of another list. Implementations: The constructor for elements of a generic W-type W(x : A) B(x) takes a label a : A together with a function B(a) → W(x : A) B(x) supplying the subtrees, and returns an element of W(x : A) B(x); this rule can also be written in the style of a natural deduction proof. The elimination rule for W-types works similarly to structural induction on trees. If, whenever a property (under the propositions-as-types interpretation) C : W(x : A) B(x) → U holds for all subtrees of a given tree it also holds for that tree, then it holds for all trees. Implementations: In extensional type theories, W-types (resp. M-types) can be defined up to isomorphism as initial algebras (resp. final coalgebras) for polynomial functors. In this case, the property of initiality (resp. finality) corresponds directly to the appropriate induction principle.
In intensional type theories with the univalence axiom, this correspondence holds up to homotopy (propositional equality). M-types are dual to W-types; they represent coinductive (potentially infinite) data such as streams. M-types can be derived from W-types. Implementations: Mutually inductive definitions This technique allows the definition of multiple types that depend on each other, for example the two parity predicates on natural numbers, which can be defined in Coq as two mutually inductive types (even and odd), each referring to the other. Induction-recursion Induction-recursion started as a study into the limits of ITT. Once found, the limits were turned into rules that allowed defining new inductive types. These types could depend upon a function and the function on the type, as long as both were defined simultaneously. Implementations: Universe types can be defined using induction-recursion. Implementations: Induction-induction Induction-induction allows the definition of a type and a family of types at the same time. So, a type A and a family of types B : A → Type are defined together, with the constructors of each allowed to refer to the other. Higher inductive types This is a current research area in Homotopy Type Theory (HoTT). HoTT differs from ITT by its identity type (equality). Higher inductive types not only define a new type with constants and functions that create elements of the type, but also new instances of the identity type that relate them. Implementations: A simple example is the circle type, which is defined with two constructors, a basepoint, base : circle, and a loop, loop : base = base. The existence of a new constructor for the identity type makes circle a higher inductive type.
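As a rough, runnable analogue of the Peano-style inductive definition and its elimination principle described above, here is a minimal sketch in Python rather than Coq (the class names and the fold function are illustrative only, not part of any standard library):

```python
from dataclasses import dataclass
from typing import Callable, TypeVar, Union

T = TypeVar("T")

@dataclass(frozen=True)
class Zero:
    """The constant constructor, playing the role of 0."""

@dataclass(frozen=True)
class Succ:
    """The successor constructor, playing the role of S."""
    pred: "Nat"

Nat = Union[Zero, Succ]

def nat_fold(n: Nat, base: T, step: Callable[[T], T]) -> T:
    """Structural recursion over Nat: the computational counterpart of the
    induction principle (a value for 0 plus a way to go from n to n+1
    yields a value for every natural number)."""
    if isinstance(n, Zero):
        return base
    return step(nat_fold(n.pred, base, step))

three = Succ(Succ(Succ(Zero())))            # S (S (S 0))
print(nat_fold(three, 0, lambda x: x + 1))  # recovers the ordinary int 3
```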
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**McCullagh's parametrization of the Cauchy distributions** McCullagh's parametrization of the Cauchy distributions: In probability theory, the "standard" Cauchy distribution is the probability distribution whose probability density function (pdf) is f(x) = 1 / (π(1 + x²)) for x real. This has median 0, and first and third quartiles respectively −1 and +1. Generally, a Cauchy distribution is any probability distribution belonging to the same location-scale family as this one. Thus, if X has a standard Cauchy distribution and μ is any real number and σ > 0, then Y = μ + σX has a Cauchy distribution whose median is μ and whose first and third quartiles are respectively μ − σ and μ + σ. McCullagh's parametrization of the Cauchy distributions: McCullagh's parametrization, introduced by Peter McCullagh, professor of statistics at the University of Chicago, uses the two parameters of the non-standardised distribution to form a single complex-valued parameter, specifically, the complex number θ = μ + iσ, where i is the imaginary unit. It also extends the usual range of the scale parameter to include σ < 0. McCullagh's parametrization of the Cauchy distributions: Although the parameter is notionally expressed using a complex number, the density is still a density over the real line. In particular the density can be written using the real-valued parameters μ and σ, which can each take positive or negative values, as f(x) = 1 / (π|σ|(1 + (x − μ)²/σ²)), where the distribution is regarded as degenerate if σ = 0. An alternative form for the density can be written using the complex parameter θ = μ + iσ as f(x) = |ℑθ| / (π|x − θ|²), where ℑθ = σ. To the question "Why introduce complex numbers when only real-valued random variables are involved?", McCullagh wrote: To this question I can give no better answer than to present the curious result that Y* = (aY + b)/(cY + d) ∼ C((aθ + b)/(cθ + d)) for all real numbers a, b, c and d. ...the induced transformation on the parameter space has the same fractional linear form as the transformation on the sample space only if the parameter space is taken to be the complex plane. McCullagh's parametrization of the Cauchy distributions: In other words, if the random variable Y has a Cauchy distribution with complex parameter θ, then the random variable Y* defined above has a Cauchy distribution with parameter (aθ + b)/(cθ + d). McCullagh also wrote, "The distribution of the first exit point from the upper half-plane of a Brownian particle starting at θ is the Cauchy density on the real line with parameter θ." In addition, McCullagh shows that the complex-valued parameterisation allows a simple relationship to be made between the Cauchy and the "circular Cauchy distribution". Using the complex parameter also makes it easy to prove the invariance of f-divergences (e.g., Kullback-Leibler divergence, chi-squared divergence, etc.) with respect to real linear fractional transformations (the group action of SL(2,R)), and to show that all f-divergences between univariate Cauchy densities are symmetric.
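As a quick numerical illustration of the complex-parameter density and the fractional linear transformation described above (a sketch with made-up function names, not McCullagh's own notation or code):

```python
import math

def cauchy_pdf(x: float, theta: complex) -> float:
    """Real-line Cauchy density with complex parameter theta = mu + i*sigma:
    f(x) = |Im(theta)| / (pi * |x - theta|**2)."""
    return abs(theta.imag) / (math.pi * abs(x - theta) ** 2)

def transform_parameter(theta: complex, a: float, b: float, c: float, d: float) -> complex:
    """Fractional linear (Mobius) map of the parameter, matching the
    transformation Y* = (aY + b)/(cY + d) of the sample."""
    return (a * theta + b) / (c * theta + d)

theta = 1.0 + 2.0j                              # mu = 1, sigma = 2
print(cauchy_pdf(0.0, theta))                   # density of C(theta) at x = 0
print(transform_parameter(theta, 0, -1, 1, 0))  # parameter of Y* = -1/Y
```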
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nebulizer** Nebulizer: In medicine, a nebulizer (American English) or nebuliser (British English) is a drug delivery device used to administer medication in the form of a mist inhaled into the lungs. Nebulizers are commonly used for the treatment of asthma, cystic fibrosis, COPD and other respiratory diseases or disorders. They use oxygen, compressed air or ultrasonic power to break up solutions and suspensions into small aerosol droplets that are inhaled from the mouthpiece of the device. An aerosol is a mixture of gas and solid or liquid particles. Medical uses: Guidelines Various asthma guidelines, such as the Global Initiative for Asthma Guidelines [GINA], the British Guidelines on the management of Asthma, The Canadian Pediatric Asthma Consensus Guidelines, and United States Guidelines for Diagnosis and Treatment of Asthma each recommend metered dose inhalers in place of nebulizer-delivered therapies. The European Respiratory Society acknowledge that although nebulizers are used in hospitals and at home they suggest much of this use may not be evidence-based. Medical uses: Effectiveness Recent evidence shows that nebulizers are no more effective than metered-dose inhalers (MDIs) with spacers. An MDI with a spacer may offer advantages to children who have acute asthma. Those findings refer specifically to the treatment of asthma and not to the efficacy of nebulisers generally, as for COPD for example. For COPD, especially when assessing exacerbations or lung attacks, there is no evidence to indicate that MDI (with a spacer) delivered medicine is more effective than administration of the same medicine with a nebulizer.The European Respiratory Society highlighted a risk relating to droplet size reproducibility caused by selling nebulizer devices separately from nebulized solution. They found this practice could vary droplet size 10-fold or more by changing from an inefficient nebulizer system to a highly efficient one. Medical uses: Two advantages attributed to nebulizers, compared to MDIs with spacers (inhalers), are their ability to deliver larger dosages at a faster rate, especially in acute asthma; however, recent data suggests actual lung deposition rates are the same. In addition, another trial found that a MDI (with spacer) had a lower required dose for clinical result compared to a nebulizer (see Clark, et al. other references).Beyond use in chronic lung disease, nebulizers may also be used to treat acute issues like the inhalation of toxic substances. One such example is the treatment of inhalation of toxic hydrofluoric acid (HF) vapors. Calcium gluconate is a first-line treatment for HF exposure to the skin. By using a nebulizer, calcium gluconate is delivered to the lungs as an aerosol to counteract the toxicity of inhaled HF vapors. Aerosol deposition: The lung deposition characteristics and efficacy of an aerosol depend largely on the particle or droplet size. Generally, the smaller the particle the greater its chance of peripheral penetration and retention. However, for very fine particles below 0.5 μm in diameter there is a chance of avoiding deposition altogether and being exhaled. In 1966 the Task Group on Lung Dynamics, concerned mainly with the hazards of inhalation of environmental toxins, proposed a model for deposition of particles in the lung. 
This suggested that particles of more than 10 μm in diameter are most likely to deposit in the mouth and throat, that for those of 5–10 μm diameter a transition from mouth to airway deposition occurs, and that particles smaller than 5 μm in diameter deposit more frequently in the lower airways and are appropriate for pharmaceutical aerosols. Aerosol deposition: Nebulizing processes have been modeled using computational fluid dynamics. Types: Pneumatic Jet nebulizer The most commonly used nebulizers are jet nebulizers, which are also called "atomizers". Jet nebulizers are connected by tubing to a supply of compressed gas, usually compressed air or oxygen, which flows at high velocity through a liquid medicine to turn it into an aerosol that is inhaled by the patient. Currently there seems to be a tendency among physicians to prefer prescription of a pressurized Metered Dose Inhaler (pMDI) for their patients, instead of a jet nebulizer, which generates a lot more noise (often 60 dB during use) and is less portable due to its greater weight. However, jet nebulizers are commonly used in hospitals for patients who have difficulty using inhalers, such as in serious cases of respiratory disease or severe asthma attacks. The main advantage of the jet nebulizer is related to its low operational cost. If the patient needs to inhale medicine on a daily basis, the use of a pMDI can be rather expensive. Today several manufacturers have also managed to lower the weight of the jet nebulizer to just over half a kilogram (just under one and a half pounds), and therefore started to label it as a portable device. Compared to all the competing inhalers and nebulizers, the noise and heavy weight are still the biggest drawbacks of the jet nebulizer.
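Referring back to the particle-size deposition ranges from the 1966 Task Group model summarised at the start of this section, here is a minimal illustrative sketch in Python (the function and its wording are mine, and real deposition also depends on factors such as breathing pattern):

```python
def deposition_region(diameter_um: float) -> str:
    """Map an aerosol droplet diameter in micrometres to the deposition
    behaviour described by the Task Group model (illustrative only)."""
    if diameter_um < 0.5:
        return "may avoid deposition and be exhaled"
    if diameter_um < 5:
        return "lower airways (appropriate for pharmaceutical aerosols)"
    if diameter_um <= 10:
        return "transition from mouth/throat to airway deposition"
    return "mouth and throat"

for d in (0.3, 2.0, 7.0, 15.0):
    print(f"{d} um -> {deposition_region(d)}")
```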
As they create aerosols from ultrasonic vibration instead of using a heavy air compressor, they weigh only around 170 grams (6.0 oz). Another advantage is that the ultrasonic vibration is almost silent. Examples of this more modern type of nebulizer are the Omron NE-U17 and the Beurer Nebulizer IH30. Types: Vibrating mesh technology A significant new innovation was made in the nebulizer market around 2005, with the creation of ultrasonic Vibrating Mesh Technology (VMT). With this technology a mesh/membrane with 1000–7000 laser-drilled holes vibrates at the top of the liquid reservoir, and thereby presses out a mist of very fine droplets through the holes. This technology is more efficient than having a vibrating piezoelectric element at the bottom of the liquid reservoir, and thereby shorter treatment times are also achieved. The old problems found with the ultrasonic wave nebulizer, namely too much liquid waste and undesired heating of the medical liquid, have also been solved by the new vibrating mesh nebulizers. Available VMT nebulizers include the Pari eFlow, Respironics i-Neb, Beurer Nebulizer IH50, and Aerogen Aeroneb. As the price of the ultrasonic VMT nebulizers is higher than that of models using previous technologies, most manufacturers continue to also sell the classic jet nebulizers.
This inhaler, known as "Siegle's steam spray inhaler", used the Venturi principle to atomize liquid medication, and this was the very beginning of nebulizer therapy. The importance of droplet size was not yet understood, so the efficacy of this first device was unfortunately mediocre for many of the medical compounds. The Siegle steam spray inhaler consisted of a spirit burner, which boiled water in the reservoir into steam that could then flow across the top and into a tube suspended in the pharmaceutical solution. The passage of steam drew the medicine into the vapor, and the patient inhaled this vapor through a mouthpiece made of glass. The first pneumatic nebulizer fed from an electrically driven gas (air) compressor was invented in the 1930s and called a Pneumostat. With this device, a medical liquid (typically epinephrine chloride, used as a bronchial muscle relaxant to reverse constriction) was aerosolized by the compressed air for inhalation. As an alternative to the expensive electrical nebulizer, many people in the 1930s continued to use the much simpler and cheaper hand-driven nebulizer, known as the Parke-Davis Glaseptic. In 1956, a technology competing against the nebulizer was launched by Riker Laboratories (3M), in the form of pressurized metered-dose inhalers, with Medihaler-iso (isoprenaline) and Medihaler-epi (epinephrine) as the first two products. In these devices, the drug is cold-filled and delivered in exact doses through special metering valves, driven by a gas propellant technology (i.e. Freon or a less environmentally damaging HFA). In 1964, a new type of electronic nebulizer was introduced: the "ultrasonic wave nebulizer". Today the nebulizing technology is not only used for medical purposes. Ultrasonic wave nebulizers are also used in humidifiers, to spray out water aerosols to moisten dry air in buildings. Some of the first models of electronic cigarettes featured an ultrasonic wave nebulizer (having a piezoelectric element vibrating and creating high-frequency ultrasound waves, to cause vibration and atomization of liquid nicotine) in combination with a vaporizer (built as a spray nozzle with an electric heating element). The most common type of electronic cigarette currently sold, however, omits the ultrasonic wave nebulizer, as it was not found to be efficient enough for this kind of device. Instead, electronic cigarettes now use an electric vaporizer, either in direct contact with the absorbent material in the "impregnated atomizer," or in combination with the nebulization technology related to a "spraying jet atomizer" (in the form of liquid droplets being sprayed out by a high-speed air stream that passes through small venturi injection channels, drilled in a material saturated with nicotine liquid).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DnaC** DnaC: dnaC is a loading factor that complexes with the C-terminus of helicase dnaB and inhibits it from unwinding the dsDNA at a replication fork. A dnaB-dnaC complex associates near the dnaA-bound origin on each of the ssDNA strands. One dnaB-dnaC complex is oriented in the opposite direction to the other dnaB-dnaC complex due to the antiparallel nature of DNA. Because they are oriented in opposite directions, one dnaB-dnaC complex will complex with dnaA via the N-terminus of dnaB, whereas the other dnaB-dnaC complex will complex with dnaA via dnaC. After the assembly of dnaG onto the N-terminus of dnaB, dnaC is released and dnaB will be allowed to begin unwinding dsDNA to make room for DNA polymerase III to begin synthesizing the daughter strands. This interaction of dnaC with dnaB requires the hydrolysis of ATP.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bagel and cream cheese** Bagel and cream cheese: A bagel and cream cheese (also known as bagel with cream cheese) is a common food pairing in American cuisine, the cuisine of New York City, and American Jewish cuisine. It consists, in its basic form, of a sliced bagel spread with cream cheese. The bagel with cream cheese is traditionally and most commonly served open-faced, sliced horizontally and spread with cream cheese and other toppings. Beginning in the 1980s, as bagels greatly expanded in popularity beyond Jewish communities, the bagel served closed as a sandwich became increasingly popular for its portability. The basic bagel with cream cheese serves as the base for other items such as the "lox and schmear", a staple of delicatessens in the New York City area and across the U.S. While non-Jewish ingredients such as eggs and breakfast meats, cold cuts and sliced cheese take well to bagel sandwiches, several traditional Jewish toppings for bagels do not work well between bagel halves, including the popular whitefish salad, pickled herring or chopped liver, for the simple mechanical reason that soft toppings easily squirt out the sides when the bagel is bitten, as even a fresh bagel is firmer than most breads. American cuisine: A bagel with cream cheese is common in American cuisine, particularly in New York City. It is often eaten for breakfast; with smoked salmon added, it is sometimes served for brunch. In New York City circa 1900, a popular combination consisted of a bagel topped with lox, cream cheese, capers, tomato, and red onion. The combination of a bagel with cream cheese has been promoted to American consumers in the past by American food manufacturers and publishers. In the early 1950s, Kraft Foods launched an "aggressive advertising campaign" that depicted Philadelphia-brand cream cheese with bagels. In 1977, Better Homes and Family Circle magazines published a bagel and cream cheese recipe booklet that was distributed in the magazines and also placed in supermarket dairy cases. American cuisine: American Jewish cuisine In American Jewish cuisine, a bagel and cream cheese is sometimes called a "whole schmear" or "whole schmeer". A "slab" is a bagel with a slab of cream cheese on top. A "lox and a schmear" refers to a bagel with cream cheese and lox or "Nova" smoked salmon, the latter being the particular style of Atlantic salmon used by Jewish delis on the East Coast and often also referred to as lox, especially outside the old and shrinking Jewish lineage of delis. Tomato, red onion, capers and chopped hard-boiled egg are often added. These terms are used at some delicatessens in New York City, particularly at Jewish delicatessens and older, more traditional delicatessens. The lox and schmear likely originated in New York City and Philadelphia, both sites of significant Polish immigration, around the turn of the 20th century, when street vendors in the cities sold salt-cured belly lox from pushcarts. A high amount of salt in the fish necessitated the addition of bread and cheese to offset the lox's saltiness. It was reported by U.S. newspapers in the early 1940s that bagels and lox were sold by delicatessens in New York City as a "Sunday morning treat", and by the early 1950s the bagel and cream cheese combination was very popular in the United States, having permeated American culture. Mass production: Both bagels and cream cheese are mass-produced foods in the United States.
Additionally, in January 2003, Kraft Foods began purveying a mass-produced convenience food product named Philadelphia To Go Bagel & Cream Cheese, which consisted of a combined package of two bagels and cream cheese. In popular culture: Bagels and cream cheese were provided to theater patrons by the cast of Bagels and Yox, a 1951 American-Yiddish Broadway revue, during the intermission period of the show. The revue ran at the Holiday Theatre in New York City from September 1951 to February 1952. A 1951 review of Bagels and Yox published in Time magazine helped to popularize bagels to American consumers throughout the country."Bagel and Lox" is a humorous song about the virtues of the bagel, lox, and cream cheese sandwich. It was written by Sid Tepper and Roy C. Bennett. It has been recorded by several different artists, including Eddie "Rochester" Anderson and, more recently, Rob Schneider, Joan Jaffe, and Oleg Frish. The lyrics to the chorus are:
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Handlebody** Handlebody: In the mathematical field of geometric topology, a handlebody is a decomposition of a manifold into standard pieces. Handlebodies play an important role in Morse theory, cobordism theory and the surgery theory of high-dimensional manifolds. Handles are particularly used to study 3-manifolds. Handlebodies play a similar role in the study of manifolds as simplicial complexes and CW complexes play in homotopy theory, allowing one to analyze a space in terms of individual pieces and their interactions. n-dimensional handlebodies: If (W, ∂W) is an n-dimensional manifold with boundary, and S^(r−1) × D^(n−r) ⊂ ∂W (where S^k represents a k-sphere and D^k a k-ball) is an embedding, the n-dimensional manifold with boundary (W′, ∂W′) = (W ∪ (D^r × D^(n−r)), (∂W − S^(r−1) × D^(n−r)) ∪ (D^r × S^(n−r−1))) is said to be obtained from (W, ∂W) by attaching an r-handle. n-dimensional handlebodies: The boundary ∂W′ is obtained from ∂W by surgery. As trivial examples, note that attaching a 0-handle is just taking a disjoint union with a ball, and that attaching an n-handle to (W, ∂W) is gluing in a ball along any sphere component of ∂W. Morse theory was used by Thom and Milnor to prove that every manifold (with or without boundary) is a handlebody, meaning that it has an expression as a union of handles. The expression is non-unique: the manipulation of handlebody decompositions is an essential ingredient of the proof of the Smale h-cobordism theorem, and its generalization to the s-cobordism theorem. A manifold is called a "k-handlebody" if it is the union of r-handles, for r at most k. This is not the same as the dimension of the manifold. For instance, a 4-dimensional 2-handlebody is a union of 0-handles, 1-handles and 2-handles. Any manifold is an n-handlebody, that is, any manifold is the union of handles. It isn't too hard to see that a manifold is an (n−1)-handlebody if and only if it has non-empty boundary. n-dimensional handlebodies: Any handlebody decomposition of a manifold defines a CW complex decomposition of the manifold, since attaching an r-handle is the same, up to homotopy equivalence, as attaching an r-cell. However, a handlebody decomposition gives more information than just the homotopy type of the manifold. For instance, a handlebody decomposition completely describes the manifold up to homeomorphism. In dimension four, handlebody decompositions even describe the smooth structure, as long as the attaching maps are smooth. This is false in higher dimensions; any exotic sphere is the union of a 0-handle and an n-handle. 3-dimensional handlebodies: A handlebody can be defined as an orientable 3-manifold-with-boundary containing pairwise disjoint, properly embedded 2-discs such that the manifold resulting from cutting along the discs is a 3-ball. It is instructive to imagine how to reverse this process to get a handlebody. (Sometimes the orientability hypothesis is dropped from this last definition, and one gets a more general kind of handlebody with a non-orientable handle.) The genus of a handlebody is the genus of its boundary surface. Up to homeomorphism, there is exactly one handlebody of any non-negative integer genus. 3-dimensional handlebodies: The importance of handlebodies in 3-manifold theory comes from their connection with Heegaard splittings. The importance of handlebodies in geometric group theory comes from the fact that their fundamental group is free. A 3-dimensional handlebody is sometimes, particularly in older literature, referred to as a cube with handles.
Examples: Let G be a connected finite graph embedded in Euclidean space of dimension n. Let V be a closed regular neighborhood of G in the Euclidean space. Then V is an n-dimensional handlebody. The graph G is called a spine of V. Any genus zero handlebody is homeomorphic to the three-ball B3. A genus one handlebody is homeomorphic to B2 × S1 (where S1 is the circle) and is called a solid torus. All other handlebodies may be obtained by taking the boundary-connected sum of a collection of solid tori.
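As a small worked example of handle counting (an illustration added here, not part of the original text): a genus-g handlebody in the three-dimensional sense above can be built from one 0-handle and g 1-handles, and since each r-handle contributes (−1)^r to the Euler characteristic,

```latex
\chi(\text{genus-}g\text{ handlebody}) \;=\; 1 - g .
```

For the solid torus (g = 1) this gives χ = 0, consistent with χ(B² × S¹) = χ(B²) · χ(S¹) = 1 · 0 = 0.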
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GPR82** GPR82: Probable G-protein coupled receptor 82 is a protein that in humans is encoded by the GPR82 gene.G protein-coupled receptors (GPCRs, or GPRs) contain 7 transmembrane domains and transduce extracellular signals through heterotrimeric G proteins.[supplied by OMIM]
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Energy in Germany** Energy in Germany: Germany predominantly sources its energy from fossil fuels, followed by wind, nuclear, solar, biomass (wood and biofuels) and hydro. Energy in Germany: The German economy is large and developed, ranking fourth in the world by nominal GDP. Germany is seventh in global primary energy consumption as of 2020. As of 2021, German primary energy consumption amounted to 12,193 petajoules, with more than 75% coming from fossil sources, 6.2% from nuclear energy and 16.1% from renewables. In 2021 Germany's electricity production reached 553.9 TWh, down from 631.4 TWh in 2013. Key to Germany's energy policies and politics is the "Energiewende", meaning "energy turnaround" or "energy transformation". The policy includes phasing out nuclear power by 2022, and progressive replacement of fossil fuels by renewables. The nuclear electricity production lost in Germany's phase-out was primarily replaced with coal electricity production and electricity importing. One study found that the nuclear phase-out caused $12 billion in social costs per year, primarily due to increases in mortality from exposure to pollution from fossil fuels. Germany used to be highly dependent on Russian energy: it got more than half of its natural gas, a third of its heating oil, and half of its coal imports from Russia. Due to this reliance, Germany blocked, delayed or watered down EU proposals to cut Russian energy imports amid the 2022 Russian invasion of Ukraine. However, the Russian invasion resulted in a radical shift in Germany's energy policy, with the goal of being almost completely independent of Russian energy imports by mid-2024. Energy plan: The plan for 2030 aims for 80% of electricity from renewables. Coal is to be phased out by 2030. Energy consumption: In 2019, Germany was the sixth-largest consumer of energy in the world. The country also had the largest national electricity market in Europe. Germany is the fifth-largest consumer of oil in the world, with oil accounting for 34.3% of all energy use in 2018, with another 23.7% coming from natural gas. Energy imports: In 2021, Germany imported 63.7% of its energy. About 98% of oil consumed in Germany is imported. In 2021, Russia supplied 34.1% of crude oil imports, the US 12.5%, Kazakhstan 9.8% and Norway 9.6%. Germany is also the world's largest importer of natural gas, which covered more than a quarter of primary energy consumption in Germany in 2021. Around 95% of Germany's natural gas is imported, of which around half is re-exported. 55% of gas imports come from Russia, 30% from Norway and 13% from the Netherlands. As of 2022, Germany does not have LNG terminals, so all gas imports use pipelines. After the 2022 Russian invasion of Ukraine, Germany announced that it wanted to build an LNG terminal at the North Sea port of Brunsbüttel to improve energy security. Because of its rich coal deposits, Germany has a long tradition of using coal. It was the fourth-largest consumer of coal in the world as of 2016. Domestic hard coal mining was completely phased out in 2018, as it could not compete with cheaper sources elsewhere and had survived only through subsidies. As of 2022, only lignite is still mined in Germany. After ending domestic production in 2018, Germany imported all 31.8 million tonnes of the hard coal it consumed in 2020. The biggest suppliers were Russia (45.4%), the United States (18.3%) and Australia (12.3%).
Sources of power: Fossil fuels Coal power Coal is the second-largest source of electricity in Germany. As of 2020, around 24% of the electricity in the country is generated from coal. This was down from 2013, when coal made up about 45% of Germany's electricity production (19% from hard coal and 26% from lignite). Nonetheless, in the first half of 2021, coal was the largest source of electricity in the country. Germany is also a major producer of coal. Lignite is extracted in the extreme western and eastern parts of the country, mainly in Nordrhein-Westfalen, Sachsen and Brandenburg. Considerable amounts are burned in coal plants near the mining areas to produce electricity, and since transporting lignite over long distances is not economically feasible, the plants are located near the extraction sites. Bituminous coal is mined in Nordrhein-Westfalen and Saarland. Most power plants burning bituminous coal operate on imported material; therefore, the plants are located not only near the mining sites but throughout the country. German coal-fired power plants are being designed and modified so they can be increasingly flexible to support the fluctuations resulting from increased renewable energy. Existing power plants in Germany are designed to operate flexibly. Load following is achieved by German natural gas combined cycle plants and coal-fired power plants. New coal-fired power plants have a minimum load capability of approximately 40%, with further potential to reduce this to 20–25%. The reason is that the output of the coal boiler is controlled via direct fuel combustion and not, as is the case with a gas combined-cycle power plant, via a heat recovery steam generator with an upstream gas turbine. Germany had been opening new coal power plants until recently, following a 2007 plan to build 26 new coal plants. This has been controversial in light of Germany's commitment to curbing carbon emissions. By 2015, the growing share of renewable energy in the national electricity market (26% in 2014, up from 4% in 1990) and the government's mandated CO2 emission reduction targets (40% below 1990 levels by 2020; 80% below 1990 levels by 2050) had increasingly curtailed previous plans for new, expanded coal power capacity. On 26 January 2019, a group of federal and state leaders as well as industry representatives, environmentalists, and scientists made an agreement to close all 84 coal plants in the country by 2038. Sources of power: The move is projected to cost €40 billion in compensation alone to closed businesses. Coal was used to generate almost 40% of the country's electricity in 2018 and is expected to be replaced by renewable energy and natural gas. 24 coal plants are planned to be closed by 2022, with all but 8 closed by 2030. The final date is expected to be assessed every 3 years. In 2019 the import of coal rose 1.4% compared with 2018. The phasing out of coal was brought forward in 2023 by 8 years to 2030; there is no agreement yet on phasing out lignite. Renewable energy Renewable energy includes wind, solar, biomass and geothermal energy sources. The share of electricity produced from renewable energy in Germany has increased from 6.3 per cent of the national total in 2000 to 46.2 per cent in 2022. Germany's renewable power market grew from 0.8 million residential customers in 2006 to 4.9 million in 2012, or 12.5% of all private households in the country. Sources of power: At the end of 2011, the cumulative installed total of renewable power was 65.7 GW.
Although Germany does not have a very sunny climate, solar photovoltaic power made up 4% of annual electricity consumption. On 25 May 2012, a Saturday, solar power reached a new record, injecting 22 GW of power into the German power grid. This met 50% of the nation's mid-day electricity demand on that day.In 2016, renewable energy based electricity generation reached 29.5%, but coal remained a factor at 40.1% of total generation. Wind was the leading renewable source at 12.3%, followed by biomass at 7.9% and solar PV at 5.9%.In 2020, renewable energy reached a share of 50.9% on the German public grid. Wind power made up 27% of total generation, and solar made up 10.5%. Biomass made up 9.7%, and hydro power made up 3.8%. The largest single non-renewable source was brown coal, with 16.8% of generation, followed by nuclear with 12.5%, then hard coal at 7.3%. Gas mainly provides peaking services, allowing for a generation share of 11.6%. Sources of power: Solar power In 2022 Germany had 66.5 GW of solar power capacity, which generated 62 terawatt hours of power from 2.65 million individual installations. Wind power In March 2023 there were around 28,500 turbines in operation in Germany with a combined capacity of 58.5 GW.Offshore wind in Germany is expected to reach 115 GW by 2030. Bioenergy In October 2016 the German Biomass Research Center (Deutsches Biomasseforschungszentrum) (DBFZ) launched an online biomass atlas for researchers, investors and the interested public. Sources of power: Nuclear power Nuclear power has been a topical political issue in recent decades, with continuing debates about when the technology should be phased out. A coalition government of Gerhard Schröder took the decision in 2002 to phaseout all nuclear power by 2022. The topic received renewed attention at the start of 2007 due to the political impact of the Russia-Belarus energy dispute and in 2011 after the Fukushima I nuclear accidents in Japan. Within days of the March 2011 Fukushima Daiichi nuclear disaster, large anti-nuclear protests occurred in Germany. Protests continued and, on 29 May 2011, Merkel's government announced that it would close all of its nuclear power plants by 2022. Eight of the seventeen operating reactors in Germany were permanently shut down following Fukushima in 2011. The last operational German reactors closed down in April 2023. Energy efficiency: The energy efficiency bottom-up index for the whole economy (ODEX) in Germany decreased by 18% between 1991 and 2006, which is equivalent to an energy efficiency improvement by 1.2% per annum on average based on the ODEX, which calculates technical efficiency improvements. Since the beginning of the new century, however, the efficiency improvement measured by the ODEX has slowed down. While a continuous decrease by 1.5%/y could be observed between 1991 and 2001, the decrease in the period from 2001 to 2006 only amounted to 0.5%, which is below the EU-27 level.By 2030 the German Federal Ministry of the Economy projects an increase in electricity consumption to 658 TWh. The expected increase is due to an expected uptick in electric mobility, more heating through electric heat-pumps, and production of batteries and hydrogen. Government energy policy: Germany was the fourth-largest producer of nuclear power in the world, but in 2000, the government and the German nuclear power industry agreed to phase out all nuclear power plants by 2021, as a result of an initiative with a vote result of 513 Yes, 79 No and 8 Empty. 
The seven oldest reactors were permanently closed after the Fukushima accident. However, being an integral part of the EU's internal electricity market, Germany will continue to consume foreign nuclear electricity even after 2022. Government energy policy: In September 2010, Merkel's government reached a late-night deal which would see the country's 17 nuclear plants run, on average, 12 years longer than planned, with some remaining in production until well into the 2030s. Then, following Fukushima Daiichi nuclear disaster, the government changed its mind again, deciding to proceed with the plan to close all nuclear plants in the country by 2022.After becoming Chancellor of Germany, Angela Merkel expressed concern for overreliance on Russian energy, but the policy of energy imports did not changed significantly afterwards.Government policy emphasises conservation and the development of renewable sources, such as solar, wind, biomass, water, and geothermal power. As a result of energy saving measures, energy efficiency (the amount of energy required to produce a unit of gross domestic product) has been improving since the beginning of the 1970s. Government energy policy: Sustainable energy In September 2010, the German government announced a new aggressive energy policy with the following targets: Reducing CO2 emissions 40% below 1990 levels by 2020 and 80% below 1990 levels by 2050 Increasing the relative share of renewable energy in gross energy consumption to 18% by 2020, 30% by 2030 and 60% by 2050 Increasing the relative share of renewable energy in gross electrical consumption to 35% by 2020 and 80% by 2050 Increasing the national energy efficiency by cutting electrical consumption 50% below 2008 levels by 2050Forbes ranked German Aloys Wobben ($3B), founder of Enercon, as the richest person in the energy business (wind power) in Germany in 2013. Government energy policy: Taxes Fossil fuel taxes Carbon tax The German ecological tax reform was adopted in 1999. After that, the law was amended in 2000 and in 2003. The law grew taxes on fuel and fossil fuels and laid the foundation for the tax for energy. In December 2019, the German Government agreed on a carbon tax of 25 Euros per tonne of CO2 on oil and gas companies. The law came into effect in January 2021. The tax will increase to 55 Euros per tonne by 2025. From 2026 onwards, the price will be decided at auction.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lemon technique** Lemon technique: The Lemon technique is a method used by meteorologists using weather radar to determine the relative strength of thunderstorm cells in a vertically sheared environment. It is named for Leslie R. Lemon, the co-creator of the current conceptual model of a supercell. The Lemon technique is largely a continuation of work by Keith A. Browning, who first identified and named the supercell.The method focuses on updrafts and uses weather radar to measure quantities such as height (echo tops), reflectivity (such as morphology and gradient), and location to show features and trends described by Lemon. These features include: Updraft tilt - The tilted updraft (vertical orientation) of the main updraft is an indication of the strength of the updraft, with nearly vertical tilts indicating stronger updrafts. Lemon technique: Echo overhang - In intense thunderstorms, an area of very strong reflectivity atop the weak echo region and on the low-level inflow inside side of the storm. Weak echo region (WER) - An area of markedly lower reflectivity, resulting from an increase in updraft strength. Lemon technique: Bounded weak echo region (BWER) - Another area of markedly lower reflectivity, now bounded by an area of high reflectivity. This is observed as a "hole" in reflectivity, and is caused by an updraft powerful enough to prevent ice and liquid from reaching the ground. This powerful updraft is often an indication of, or is facilitated by, a mesocyclone. A mesocyclone is not strictly necessary for BWER development. Storm rotation can be reliably detected by the Doppler velocities of a weather radar.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hartogs number** Hartogs number: In mathematics, specifically in axiomatic set theory, a Hartogs number is an ordinal number associated with a set. In particular, if X is any set, then the Hartogs number of X is the least ordinal α such that there is no injection from α into X. If X can be well-ordered then the cardinal number of α is a minimal cardinal greater than that of X. If X cannot be well-ordered then there cannot be an injection from X to α. However, the cardinal number of α is still a minimal cardinal not less than or equal to the cardinality of X. (If we restrict to cardinal numbers of well-orderable sets then that of α is the smallest that is not less than or equal to that of X.) The map taking X to α is sometimes called Hartogs's function. This mapping is used to construct the aleph numbers, which are all the cardinal numbers of infinite well-orderable sets. Hartogs number: The existence of the Hartogs number was proved by Friedrich Hartogs in 1915, using Zermelo–Fraenkel set theory alone (that is, without using the axiom of choice). Hartogs's theorem: Hartogs's theorem states that for any set X, there exists an ordinal α such that |α| ≰ |X|; that is, such that there is no injection from α to X. As ordinals are well-ordered, this immediately implies the existence of a Hartogs number for any set X. Furthermore, the proof is constructive and yields the Hartogs number of X. Proof See Goldrei 1996. Let α = {β ∈ Ord ∣ ∃i: β ↪ X} be the class of all ordinal numbers β for which an injective function exists from β into X. First, we verify that α is a set. X × X is a set, as can be seen from the axiom of power set. The power set of X × X is a set, by the axiom of power set. The class W of all reflexive well-orderings of subsets of X is a definable subclass of the preceding set, so it is a set by the axiom schema of separation. Hartogs's theorem: The class of all order types of well-orderings in W is a set by the axiom schema of replacement, as (Domain(w), w) ≅ (β, ≤) can be described by a simple formula. But this last set is exactly α. Now, because a transitive set of ordinals is again an ordinal, α is an ordinal. Furthermore, there is no injection from α into X, because if there were, then we would get the contradiction that α ∈ α. And finally, α is the least such ordinal with no injection into X. This is true because, since α is an ordinal, for any β < α, β ∈ α so there is an injection from β into X. Historic remark: In 1915, Hartogs could use neither von Neumann ordinals nor the replacement axiom, and so his result is one of Zermelo set theory and looks rather different from the modern exposition above. Instead, he considered the set of isomorphism classes of well-ordered subsets of X and the relation in which the class of A precedes that of B if A is isomorphic with a proper initial segment of B. Hartogs showed this to be a well-ordering greater than any well-ordered subset of X. (This must have been historically the first genuine construction of an uncountable well-ordering.) However, the main purpose of his contribution was to show that trichotomy for cardinal numbers implies the (then 11-year-old) well-ordering theorem (and, hence, the axiom of choice).
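The construction in the proof can be restated compactly. The following LaTeX fragment is only a summary of the definition and the facts established above; the symbol ℵ(X) for the Hartogs number is a common convention assumed here, not notation taken from this article.

```latex
% Summary of the construction above (assumed notation \aleph(X) for the Hartogs number)
\[
  \aleph(X) \;:=\; \alpha \;=\; \{\, \beta \in \mathrm{Ord} \mid \exists\, i : \beta \hookrightarrow X \,\},
\]
\[
  \alpha \in \mathrm{Ord}, \qquad
  \nexists\, i : \alpha \hookrightarrow X, \qquad
  \forall \beta < \alpha\ \ \exists\, i_\beta : \beta \hookrightarrow X .
\]
```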
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Triacetin** Triacetin: Triacetin, is the organic compound with the formula C3H5(OCOCH3)3. It is classified as a triglyceride, i.e., the triester of glycerol. It is a colorless, viscous, and odorless liquid with a high boiling point and a low melting point. It has a mild, sweet taste in concentrations lower than 500 ppm, but may appear bitter at higher concentrations. It is one of the glycerine acetate compounds. Uses: Triacetin is a common food additive, for instance as a solvent in flavourings, and for its humectant function, with E number E1518 and Australian approval code A1518. It is used as an excipient in pharmaceutical products, where it is used as a humectant, a plasticizer, and as a solvent. Uses: Potential uses The plasticizing capabilities of triacetin have been utilized in the synthesis of a biodegradable phospholipid gel system for the dissemination of the cancer drug paclitaxel (PTX). In the study, triacetin was combined with PTX, ethanol, a phospholipid and a medium chain triglyceride to form a gel-drug complex. This complex was then injected directly into the cancer cells of glioma-bearing mice. The gel slowly degraded and facilitated sustained release of PTX into the targeted glioma cells. Uses: Triacetin can also be used as a fuel additive as an antiknock agent which can reduce engine knocking in gasoline, and to improve cold and viscosity properties of biodiesel.It has been considered as a possible source of food energy in artificial food regeneration systems on long space missions. It is believed to be safe to get over half of one's dietary energy from triacetin. Synthesis: Triacetin was first prepared in 1854 by the French chemist Marcellin Berthelot. Triacetin was prepared in the 19th century from glycerol and acetic acid.Its synthesis from acetic anhydride and glycerol is simple and inexpensive. 3 (CH3CO)2O + 1 C3H5(OH)3 → 1 C3H5(OCOCH3)3 + 3 CH3CO2HThis synthesis has been conducted with catalytic sodium hydroxide and microwave irradiation to give a 99% yield of triacetin. It has also been conducted with a cobalt(II) Salen complex catalyst supported by silicon dioxide and heated to 50 °C for 55 minutes to give a 99% yield of triacetin. Safety: The US Food and Drug Administration has approved it as Generally Recognized as Safe food additive and included it in the database according to the opinion from the Select Committee On GRAS Substances (SCOGS). Triacetin is included in the SCOGS database since 1975.Triacetin was not toxic to animals in studies of exposure through repeated inhalation over a relatively short period.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Odynophagia** Odynophagia: Odynophagia is pain when swallowing. The pain may be felt in the mouth or throat and can occur with or without difficulty swallowing. The pain may be described as an ache, burning sensation, or occasionally a stabbing pain that radiates to the back. Odynophagia often results in inadvertent weight loss. The term is from odyno- 'pain' and phagō 'to eat'. Causes: Odynophagia may have environmental or behavioral causes, such as: Very hot or cold food and drinks (termed cryodynophagia when associated with cold drinks, classically in the setting of cryoglobulinaemia). Taking certain medications Using drugs, tobacco, or alcohol Trauma or injury to the mouth, throat, or tongueIt can also be caused by certain medical conditions, such as: Ulcers Abscesses Upper respiratory tract infections Inflammation or infection of the mouth, tongue, or throat (esophagitis, pharyngitis, tonsillitis, epiglottitis) Oral or throat cancer
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**NGC 4125** NGC 4125: NGC 4125 is an elliptical galaxy in the constellation Draco. In 2016, the KAIT telescope discovered the supernova SN 2016coj in this galaxy. After detection it became brighter over the course of several days, with the spectrum indicating a Type Ia supernova.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Height finder** Height finder: A height finder is a ground-based aircraft altitude measuring device. Early height finders were optical range finder devices combined with simple mechanical computers, while later systems migrated to radar devices. The unique vertical oscillating motion of height finder radars led to them also being known as nodding radar. Devices combining both optics and radar were deployed by the U.S. Military. Optical: In World War II, a height finder was an optical rangefinder used to determine the altitude of an aircraft (actually the slant range from the emplacement which was combined with the angle of sight, in a mechanical computer, to produce altitude), used to direct anti-aircraft guns. Examples of American and Japanese versions exist. In the Soviet Union it was usually combined with optical rangefinders. Radar: A height finder radar is a type of 2-dimensional radar that measures altitude of a target. Radar: The operator slews the antenna toward a desired bearing, identifies a target echo at a desired range on the range height indicator display, then bisects the target with a cursor that is scaled to indicate the approximate altitude of the target. Such systems often complement 2-dimensional radars which find distance and direction (search radar); thus using two 2-dimensional systems to obtain a 3-dimensional aerial picture. Height finding radars of the 1960s and 70s were distinguished by their antenna being tall, but narrow. As beam shape is a function of antenna shape, the height finder beam was flat and wide horizontally (i.e., not very good at determining bearing to the target), but very thin vertically, allowing accurate measurement of elevation angle, thus altitude. Radar: Modern 3D radar sets find both azimuth and elevation, making separate height finder radars largely obsolete.
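Optical height finders, as described above, derive altitude from the measured slant range and the angle of sight. A minimal sketch of that calculation is given below; the flat-earth approximation, the function name, and the example numbers are illustrative assumptions, not values from any particular instrument.

```python
import math

def target_altitude(slant_range_m: float, elevation_deg: float,
                    site_altitude_m: float = 0.0) -> float:
    """Approximate target altitude from slant range and angle of sight.

    Flat-earth approximation: altitude = site altitude + slant range * sin(elevation).
    Real height finders also correct for earth curvature and atmospheric refraction.
    """
    return site_altitude_m + slant_range_m * math.sin(math.radians(elevation_deg))

# Illustrative example: a target at 20 km slant range, 15 degrees above the horizon
print(round(target_altitude(20_000, 15.0)))  # roughly 5176 m
```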
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mugen Puchipuchi** Mugen Puchipuchi: Mugen Puchipuchi (∞(むげん)プチプチ, Infinite Bubble Wrap) is a Japanese bubble wrap keychain toy by Bandai. The term "puchipuchi" serves as a generic trademark for bubble wrap, but is also onomatopoeia for the sound of bubbles being popped.The square-shaped toy has eight buttons that make a popping sound when pressed, and is designed to mimic the sensation of popping bubble wrap for an infinite number of times. It is made of a double-layer silicone rubber structure to create a similar feeling to bubble wrap. It also plays a sound effect for every 100 pops; these sound effects include a “door chime”, “barking dog”, “fart”, and “sexy voice”. Bandai worked with the company behind Puchipuchi bubble wrap to create a design that is most realistic to real bubble wrap.Bandai has also created other keychain toys based on Mugen Puchipuchi, such as Puchi Moe, Mugen Edamame, and Mugen Periperi. The original Mugen Puchipuchi has also been marketed in Europe and North America as "Mugen Pop-Pop". Puchi Moe: Puchi Moe is an anime-themed version of the original Mugen Puchipuchi. The random sound effects have been replaced by one of four anime characters' voices. The different types, each based on an anime character archetype, are a childhood friend, French maid, tsundere, and younger sister.Puchi Moe was created for the lucrative otaku market. All four character voices are done by voice actress Rie Kugimiya. Mugen Edamame: Mugen Edamame (∞(むげん)エダマメ, Infinite Soybeans) has beans inside a pod that appears similar to edamame. Squeezing the pod causes a bean to pop out, showing one of twelve faces, which are pre-set and randomly packaged. Unlike Mugen Puchipuchi, it does not play sounds when pushed. Mugen Periperi: Mugen Periperi (∞(むげん)ペリペリ, Infinity Ripping) mimics the tear strip of a cardboard box that is ripped to open the box. Mugen Periperi was made available on 22 November 2008. Ouchi de Mugen Puchi Puchi Wii: On 24 June 2008, Bandai released a video game version for the Nintendo Wii via WiiWare. The game's title, Ouchi de Mugen Puchi Puchi Wii (おうちで∞プチプチWii), roughly translates to "In-Your-Home Infinite Bubble Wrap Wii".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zirconium tetrafluoride** Zirconium tetrafluoride: Zirconium(IV) fluoride describes members of a family of inorganic compounds with the formula ZrF4(H2O)x. All are colorless, diamagnetic solids. Anhydrous zirconium(IV) fluoride is a component of ZBLAN fluoride glass. Structure: Three crystalline phases of ZrF4 have been reported, α (monoclinic), β (tetragonal, Pearson symbol tP40, space group P42/m, No 84) and γ (unknown structure). The β and γ phases are unstable and irreversibly transform into the α phase at 400 °C. Zirconium(IV) fluoride forms several hydrates. The trihydrate has the structure (μ-F)2[ZrF3(H2O)3]2. Preparation and reactions: Zirconium fluoride can be produced by several methods. Zirconium dioxide reacts with hydrogen fluoride and with hydrofluoric acid to afford, respectively, the anhydrous compound and the monohydrate: ZrO2 + 4 HF → ZrF4 + 2 H2O. Zirconium metal also reacts with HF at high temperatures: Zr + 4 HF → ZrF4 + 2 H2. Zirconium dioxide reacts at 200 °C with solid ammonium bifluoride to give the heptafluorozirconate salt, which can be converted to the tetrafluoride at 500 °C: 2 ZrO2 + 7 (NH4)HF2 → 2 (NH4)3ZrF7 + 4 H2O + NH3; (NH4)3ZrF7 → ZrF4 + 3 HF + 3 NH3. Addition of hydrofluoric acid to solutions of zirconium nitrate precipitates the solid monohydrate. Hydrates of zirconium tetrafluoride can be dehydrated by heating under a stream of hydrogen fluoride. Preparation and reactions: Zirconium fluoride can be purified by distillation or sublimation. Zirconium fluoride forms double salts with other fluorides. The most prominent is potassium hexafluorozirconate, formed by fusion of potassium fluoride and zirconium tetrafluoride: ZrF4 + 2 KF → K2ZrF6. Applications: The major and perhaps only commercial application of zirconium fluoride is as a precursor to ZBLAN glasses. A mixture of sodium fluoride, zirconium fluoride, and uranium tetrafluoride (53-41-6 mol%) was used as a coolant in the Aircraft Reactor Experiment. A mixture of lithium fluoride, beryllium fluoride, zirconium fluoride, and uranium-233 tetrafluoride was used in the Molten-Salt Reactor Experiment. (Uranium-233 is used in thorium fuel cycle reactors.)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sony Vaio C series** Sony Vaio C series: The Sony Vaio C Series is a discontinued series of notebook computers from Sony introduced in September 2006 as the consumer alternative follow-up to the then current SZ series. History: Like the SZ, the first C series featured a 1280x800 (16:10 widescreen) 13.3" LCD screen, plus Core 2 Duo CPUs; later 15.5" models were released. As a consumer laptop, a variety of colours were offered, while compared with the SZ, the C series was heavier, and lacked the switchable graphics option, instead offering either lower-power Intel GMA 950 or faster Nvidia GeForce 7400 graphics. A crocodile-skin option was offered in Japan. The C series was superseded by the SR series. Models: The 13" 2006 C series weighed 5.1 pounds/2.3kg.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bidirectional scattering distribution function** Bidirectional scattering distribution function: The definition of the BSDF (bidirectional scattering distribution function) is not well standardized. The term was probably introduced in 1980 by Bartell, Dereniak, and Wolfe. Most often it is used to name the general mathematical function which describes the way in which the light is scattered by a surface. However, in practice, this phenomenon is usually split into the reflected and transmitted components, which are then treated separately as BRDF (bidirectional reflectance distribution function) and BTDF (bidirectional transmittance distribution function). Bidirectional scattering distribution function: BSDF is a superset and the generalization of the BRDF and BTDF. The concept behind all BxDF functions could be described as a black box with the inputs being any two angles, one for incoming (incident) ray and the second one for the outgoing (reflected or transmitted) ray at a given point of the surface. The output of this black box is the value defining the ratio between the incoming and the outgoing light energy for the given couple of angles. The content of the black box may be a mathematical formula which more or less accurately tries to model and approximate the actual surface behavior or an algorithm which produces the output based on discrete samples of measured data. This implies that the function is 4(+1)-dimensional (4 values for 2 3D angles + 1 optional for wavelength of the light), which means that it cannot be simply represented by 2D and not even by a 3D graph. Each 2D or 3D graph, sometimes seen in the literature, shows only a slice of the function. Bidirectional scattering distribution function: Some tend to use the term BSDF simply as a category name covering the whole family of BxDF functions. Bidirectional scattering distribution function: The term BSDF is sometimes used in a slightly different context, for the function describing the amount of the scatter (not scattered light), simply as a function of the incident light angle. An example to illustrate this context: for perfectly lambertian surface the BSDF (angle)=const. This approach is used for instance to verify the output quality by the manufacturers of the glossy surfaces. Bidirectional scattering distribution function: Another recent usage of the term BSDF can be seen in some 3D packages, when vendors use it as a 'smart' category to encompass the simple well known cg algorithms like Phong, Blinn–Phong etc. Bidirectional scattering distribution function: Acquisition of the BSDF over the human face in 2000 by Debevec et al. was one of the last key breakthroughs on the way to fully virtual cinematography with its ultra-photorealistic digital look-alikes. The team was the first in the world to isolate the subsurface scattering component (a specialized case of BTDF) using the simplest light stage, consisting on moveable light source, moveable high-res digital camera, 2 polarizers in a few positions and really simple algorithms on a modest computer. The team utilized the existing scientific knowledge that light that is reflected and scattered from the air-to-oil layer retains its polarization while light that travels within the skin loses its polarization. The subsurface scattering component can be simulated as a steady high-scatter glow of light from within the models, without which the skin does not look realistic. 
ESC Entertainment, a company set up by Warner Brothers Pictures specially to do the visual effects / virtual cinematography system for The Matrix Reloaded and The Matrix Revolutions isolated the parameters for an approximate analytical BRDF which consisted of Lambertian diffusion component and a modified specular Phong component with a Fresnel type of effect. Overview of the BxDF functions: BRDF (Bidirectional reflectance distribution function) is a simplified BSSRDF, assuming that light enters and leaves at the same point (see the image on the right). BTDF (Bidirectional transmittance distribution function) is similar to BRDF but for the opposite side of the surface. (see the top image). BDF (Bidirectional distribution function) is collectively defined by BRDF and BTDF. BSSRDF (Bidirectional scattering-surface reflectance distribution function or Bidirectional surface scattering RDF) describes the relation between outgoing radiance and the incident flux, including the phenomena like subsurface scattering (SSS). The BSSRDF describes how light is transported between any two rays that hit a surface. BSSTDF (Bidirectional scattering-surface transmittance distribution function) is like BTDF but with subsurface scattering. BSSDF (Bidirectional scattering-surface distribution function) is collectively defined by BSSTDF and BSSRDF. Also known as BSDF (Bidirectional scattering distribution function).
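To make the "black box" description concrete: a BRDF is simply a function of an incoming and an outgoing direction that returns the ratio of reflected radiance to incident irradiance. The sketch below evaluates a generic Lambertian-plus-Phong lobe of the kind alluded to in the ESC Entertainment example; it is a toy illustration under assumed parameter names (kd, ks, shininess) and normalization conventions, not the studio's actual model, and it omits the Fresnel term.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def brdf_lambert_phong(n, wi, wo, kd=0.7, ks=0.3, shininess=32.0):
    """Toy BRDF: Lambertian diffuse term plus a normalized Phong specular lobe.

    n, wi, wo are unit vectors: surface normal, direction toward the light,
    and direction toward the viewer.  Returns the BRDF value for that pair of
    directions.  kd, ks and shininess are assumed, illustrative parameters.
    """
    n, wi, wo = map(normalize, (n, wi, wo))
    cos_i = max(float(np.dot(n, wi)), 0.0)
    if cos_i == 0.0:
        return 0.0                                   # light below the surface: nothing reflected
    diffuse = kd / np.pi                             # energy-normalized Lambertian term
    r = 2.0 * cos_i * n - wi                         # mirror reflection of the light direction
    spec = ks * (shininess + 2.0) / (2.0 * np.pi) * max(float(np.dot(r, wo)), 0.0) ** shininess
    return diffuse + spec

# Example: light 45 degrees off the normal, viewer looking straight down the normal
n = np.array([0.0, 0.0, 1.0])
wi = normalize(np.array([1.0, 0.0, 1.0]))
print(brdf_lambert_phong(n, wi, n))
```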
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Loose coupling** Loose coupling: In computing and systems design, a loosely coupled system is one in which components are weakly associated (have breakable relationships) with each other, so that changes in one component have the least possible effect on the existence or performance of another, and in which each component has, or makes use of, little or no knowledge of the definitions of other, separate components. Subareas include the coupling of classes, interfaces, data, and services. Loose coupling is the opposite of tight coupling. Advantages and disadvantages: Components in a loosely coupled system can be replaced with alternative implementations that provide the same services. Components in a loosely coupled system are less constrained to the same platform, language, operating system, or build environment. If systems are decoupled in time, it is difficult to also provide transactional integrity; additional coordination protocols are required. Data replication across different systems provides loose coupling (in availability), but creates issues in maintaining consistency (data synchronization). In integration: Loose coupling in broader distributed system design is achieved by the use of transactions, queues provided by message-oriented middleware, and interoperability standards. Four types of autonomy, which promote loose coupling, are: reference autonomy, time autonomy, format autonomy, and platform autonomy. Loose coupling is an architectural principle and design goal in service-oriented architectures; eleven forms of loose coupling and their tight-coupling counterparts have been identified: physical connections via mediator, asynchronous communication style, simple common types only in data model, weak type system, data-centric and self-contained messages, distributed control of process logic, dynamic binding (of service consumers and providers), platform independence, business-level compensation rather than system-level transactions, deployment at different times, implicit upgrades in versioning. Enterprise Service Bus (ESB) middleware was invented to achieve loose coupling in multiple dimensions; however, overengineered and mispositioned ESBs can also have the contrary effect and create undesired tight coupling and a central architectural hotspot. In integration: Event-driven architecture also aims at promoting loose coupling. Methods for decreasing coupling Loose coupling of interfaces can be enhanced by publishing data in a standard format (such as XML or JSON). Loose coupling between program components can be enhanced by using standard data types in parameters. Passing customized data types or objects requires both components to have knowledge of the custom data definition. In integration: Loose coupling of services can be enhanced by reducing the information passed into a service to the key data. For example, a service that sends a letter is most reusable when just the customer identifier is passed and the customer address is obtained within the service. This decouples services because services do not need to be called in a specific order (e.g. GetCustomerAddress, SendLetter). In programming: Coupling refers to the degree of direct knowledge that one component has of another. Loose coupling in computing is interpreted as encapsulation vs. non-encapsulation. In programming: An example of tight coupling occurs when a dependent class contains a pointer directly to a concrete class which provides the required behavior.
The dependency cannot be substituted, or its "signature" changed, without requiring a change to the dependent class. Loose coupling occurs when the dependent class contains a pointer only to an interface, which can then be implemented by one or many concrete classes. This is known as dependency inversion. The dependent class's dependency is to a "contract" specified by the interface; a defined list of methods and/or properties that implementing classes must provide. Any class that implements the interface can thus satisfy the dependency of a dependent class without having to change the class. This allows for extensibility in software design; a new class implementing an interface can be written to replace a current dependency in some or all situations, without requiring a change to the dependent class; the new and old classes can be interchanged freely. Strong coupling does not allow this. In programming: This is a UML diagram illustrating an example of loose coupling between a dependent class and a set of concrete classes, which provide the required behavior: For comparison, this diagram illustrates the alternative design with strong coupling between the dependent class and a provider: Other forms Computer programming languages having notions of either functions as the core module (see Functional programming) or functions as objects provide excellent examples of loosely coupled programming. Functional languages have patterns of Continuations, Closure, or generators. See Clojure and Lisp as examples of function programming languages. Object-oriented languages like Smalltalk and Ruby have code blocks, whereas Eiffel has agents. The basic idea is to objectify (encapsulate as an object) a function independent of any other enclosing concept (e.g. decoupling an object function from any direct knowledge of the enclosing object). See First-class function for further insight into functions as objects, which qualifies as one form of first-class function. In programming: So, for example, in an object-oriented language, when a function of an object is referenced as an object (freeing it from having any knowledge of its enclosing host object) the new function object can be passed, stored, and called at a later time. Recipient objects (to whom these functional objects are given) can safely execute (call) the contained function at their own convenience without any direct knowledge of the enclosing host object. In this way, a program can execute chains or groups of functional objects, while safely decoupled from having any direct reference to the enclosing host object. In programming: Phone numbers are an excellent analog and can easily illustrate the degree of this decoupling. In programming: For example: Some entity provides another with a phone number to call to get a particular job done. When the number is called, the calling entity is effectively saying, "Please do this job for me." The decoupling or loose coupling is immediately apparent. The entity receiving the number to call may have no knowledge of where the number came from (e.g. a reference to the supplier of the number). On the other side, the caller is decoupled from specific knowledge of who they are calling, where they are, and knowing how the receiver of the call operates internally. In programming: Carrying the example a step further, the caller might say to the receiver of the call, "Please do this job for me. Call me back at this number when you are finished." The 'number' being offered to the receiver is referred to as a "Call-back". 
Again, the loose coupling or decoupled nature of this functional object is apparent. The receiver of the call-back is unaware of what or who is being called. It only knows that it can make the call and decides for itself when to call. In reality, the call-back may not even be to the one who provided the call-back in the first place. This level of indirection is what makes function objects an excellent technology for achieving loosely coupled programs. In programming: Communication between loosely coupled components may be based on a flora of mechanisms, like the mentioned asynchronous communication style or the synchronous message passing style Measuring data element coupling The degree of the loose coupling can be measured by noting the number of changes in data elements that could occur in the sending or receiving systems and determining if the computers would still continue communicating correctly. These changes include items such as: Adding new data elements to messages Changing the order of data elements Changing the names of data elements Changing the structures of data elements Omitting data elements
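The interface ("contract") and call-back ideas described above can be illustrated with a short sketch. This is a generic example under assumed names (Notifier, EmailNotifier, OrderService, and so on); it is not code from the article or from any particular framework.

```python
from typing import Callable, Protocol

class Notifier(Protocol):
    """The 'contract': any class that provides send() satisfies the dependency."""
    def send(self, message: str) -> None: ...

class EmailNotifier:                      # hypothetical concrete implementation
    def send(self, message: str) -> None:
        print(f"email: {message}")

class SmsNotifier:                        # another interchangeable implementation
    def send(self, message: str) -> None:
        print(f"sms: {message}")

class OrderService:
    """Depends only on the Notifier interface, never on a concrete class (dependency inversion)."""
    def __init__(self, notifier: Notifier) -> None:
        self._notifier = notifier

    def place_order(self, order_id: str, on_done: Callable[[str], None]) -> None:
        self._notifier.send(f"order {order_id} placed")
        on_done(order_id)  # the 'call-back': the caller decides what happens next, and when

# Concrete implementations can be swapped freely without changing OrderService:
OrderService(EmailNotifier()).place_order("42", lambda oid: print(f"done: {oid}"))
OrderService(SmsNotifier()).place_order("43", lambda oid: print(f"done: {oid}"))
```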
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pre-consumer recycling** Pre-consumer recycling: Pre-consumer recycling is the reclamation of waste materials that were created during the process of manufacturing or delivering goods prior to their delivery to a consumer. Pre-consumer recycled materials can be broken down and remade into similar or different materials, or can be sold "as is" to third-party buyers who then use those materials for consumer products. One of the largest contributing industries to pre-consumer recycling is the textile industry, which recycles fibers, fabrics, trims and unsold "new" garments to third-party buyers. Pre-consumer recycling: There are generally two types of recycling: post-consumer and pre-consumer. Post-consumer recycling is the most heavily practiced form of recycling, where the materials being recycled have already passed through to the consumer. Pre-consumer recycling: According to the Council for Textile Recycling, each year 750,000 tons of textile waste is recycled (pre- and post-consumer) into new raw materials for the automotive, furniture, mattress, coarse yarn, home furnishings, paper and other industries. Although this amount accounts for 75% of textile waste in the United States, there is little research on textile excess produced in countries that play a larger role in global textile production, such as China, Vietnam, Thailand, India and Bangladesh.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Journal of Neurophysiology** Journal of Neurophysiology: The Journal of Neurophysiology is a monthly peer-reviewed scientific journal established in 1938. It is published by the American Physiological Society with Jan "Nino" Ramirez as its editor-in-chief. Ramirez is the Director for the Center for Integrative Brain Research at the University of Washington. Journal of Neurophysiology: The Journal of Neurophysiology publishes original articles on the function of the nervous system. All levels of function are included, from membrane biophysics to cell biology to systems neuroscience and the experimental analysis of behavior. Experimental approaches include molecular neurobiology, cell culture and slice preparations, membrane physiology, developmental neurobiology, functional neuroanatomy, neurochemistry, neuropharmacology, systems electrophysiology, imaging and mapping techniques, and behavioral analysis. Experimental preparations may be invertebrate or vertebrate species, including humans. Theoretical studies are acceptable if they are tied closely to the interpretation of experimental data and elucidate principles of broad interest. Journal of Neurophysiology: The journal published some of the first functional neuroimaging studies.The Journal's Deputy Editor is Reza Shadmehr. The current Associate Editors for the Journal of Neurophysiology are Robert M. Brownstone, Ansgar Buschges, Carmen C. Canavier, Christos Constantinidis, Leslie M. Kay, Zoe Kourtzi, M. Bruce MacIver, Hugo Merchant, Monica A. Perez, Albrecht Stroh, and Ana C. Takakura. Types of manuscripts published: The Journal of Neurophysiology publishes research reports of any length, review articles, Rapid Reports, Innovative Methodology reports, Case Studies in Neuroscience, and NeuroForums (brief commentaries on recent articles authored by graduate and postdoctoral students). Review article topics must be approved by the editor-in-chief prior to submission of the article. Rapid Reports are short papers presenting important new findings that could potentially have a major impact on the field. Rapid Reports submissions receive expedited peer review, and if accepted are highlighted on the journal's website. NeuroForum submissions must meet strict guidelines, and it is recommended that articles that are examined in NeuroForum submissions are pre-approved by the editor-in-chief. Case Studies in Neuroscience provides a forum for human or animal subjects studies that cannot be replicated experimentally (e.g., they report the neurological effects of a rare disease), but provide unique insights into mechanisms of neural function (either at the cellular or systems level). Clinical case studies are not appropriate for this category, and authors are encouraged to consult with the Editor-in-Chief to determine if their manuscript qualifies for submission as Case Studies in Neuroscience.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Seismic noise** Seismic noise: In geophysics, geology, civil engineering, and related disciplines, seismic noise is a generic name for a relatively persistent vibration of the ground, due to a multitude of causes, that is often a non-interpretable or unwanted component of signals recorded by seismometers. Seismic noise: Physically, seismic noise arises primarily due to surface or near surface sources and thus consists mostly of elastic surface waves. Low frequency waves (below 1 Hz) are commonly called microseisms and high frequency waves (above 1 Hz) are called microtremors. Primary sources of seismic waves include human activities (such as transportation or industrial activities), winds and other atmospheric phenomena, rivers, and ocean waves. Seismic noise: Seismic noise is relevant to any discipline that depends on seismology, including geology, oil exploration, hydrology, and earthquake engineering, and structural health monitoring. It is often called the ambient wavefield or ambient vibrations in those disciplines (however, the latter term may also refer to vibrations transmitted through by air, building, or supporting structures.) Seismic noise is often a nuisance for activities that are sensitive to extraneous vibrations, including earthquake monitoring and research, precision milling, telescopes, gravitational wave detectors, and crystal growing. However, seismic noise also has practical uses, including determining the low-strain and time-varying dynamic properties of civil-engineering structures, such as bridges, buildings, and dams; seismic studies of subsurface structure at many scales, often using the methods of seismic interferometry; Environmental monitoring, such as in fluvial seismology; and estimating seismic microzonation maps to characterize local and regional ground response during earthquakes. Causes: Research on the origin of seismic noise indicates that the low frequency part of the spectrum (below 1 Hz) is principally due to natural causes, chiefly ocean waves. In particular the globally observed peak between 0.1 and 0.3 Hz is clearly associated with the interaction of water waves of nearly equal frequencies but probating in opposing directions. At high frequency (above 1 Hz), seismic noise is mainly produced by human activities such as road traffic and industrial work; but there are also natural sources, including rivers. Causes: Above 1 Hz, wind and other atmospheric phenomena can also be a major source of ground vibrations.Anthropogenic noise detected during periods of low seismic activity includes "footquakes" from soccer fans stamping their feet in Cameroon.Non-anthropogenic activity includes pulses at intervals between 26 and 28 seconds (0.036–0.038 Hz) centered on the Bight of Bonny in the Gulf of Guinea that are thought to be caused by reflected storm waves, focused by the African coast, acting on the relatively shallow sea-floor. Physical characteristics: The amplitude of seismic noise vibrations is typically in the order of 0.1 to 10 μm/s. High and low background noise models as a function of frequency have been evaluated globally.Seismic noise includes a small number of body waves (P- and S-waves), but surface waves (Love and Rayleigh waves) predominate since they are preferentially excited by surface source processes. These waves are dispersive, meaning that their phase velocity varies with frequency (generally, it decreases with increasing frequency). 
Since the dispersion curve (phase velocity or slowness as a function of frequency) is related to the variations of the shear-wave velocity with depth, it can be used as a non-invasive tool to determine subsurface seismic structure and an inverse problem. History: Under normal conditions, seismic noise has very low amplitude and cannot be felt by humans, and was also too low to be recorded by most early seismometers at the end of 19th century. However, by the early 20th century, Japanese seismologist Fusakichi Omori could already record ambient vibrations in buildings, where the amplitudes are magnified. He determined building resonance frequencies and studied their evolution as a function of damage. Globally visible 30 s–5 s seismic noise was recognized early in the history of seismology as arising from the oceans, and a comprehensive theory of its generation was published by Longuet-Higgins in 1950. History: Rapid advances beginning around 2005 in seismic interferometry driven by theoretical, methodological, and data advances have resulted in a major renewed interest in the applications of seismic noise. History: Civil engineering After the 1933 Long Beach earthquake in California, a large experiment campaign led by D. S. Carder in 1935 recorded and analyzed ambient vibrations in more than 200 buildings. These data were used in the design codes to estimate resonance frequencies of buildings but the interest of the method went down until the 1950s. Interest on ambient vibrations in structures grew further, especially in California and Japan, thanks to the work of earthquake engineers, including G. Housner, D. Hudson, K. Kanai, T. Tanaka, and others.In engineering, ambient vibrations were however supplanted - at least for some time - by forced vibration techniques that allow to increase the amplitudes and control the shaking source and their system identification methods. Even though M. Trifunac showed in 1972 that ambient and forced vibrations led to the same results, the interest in ambient vibration techniques only rose in the late 1990s. They have now become quite attractive, due to their relatively low cost and convenience, and to the recent improvements in recording equipment and computation methods. The results of their low-strain dynamic probing were shown to be close enough to the dynamic characteristics measured under strong shaking, at least as long as the buildings are not severely damaged. History: Scientific study and applications in geology and geophysics The recording of global seismic noise expanded widely in the 1950s with the enhancement of seismometers to monitor nuclear tests and the development of seismic arrays. The main contributions at that time for the analysis of these recordings came from the Japanese seismologist K. Aki in 1957. He proposed several methods used today for local seismic evaluation, such as Spatial Autocorrelation (SPAC), Frequency-wavenumber (FK), and correlation. However, the practical implementation of these methods was not possible at that time because of the low precision of clocks in seismic stations. History: Improvements in instrumentation and algorithms led to renewed interest on those methods during the 1990s. Y. Nakamura rediscovered in 1989 the horizontal to vertical spectral ratio (H/V) method to derive the resonance frequency of sites. 
Assuming that shear waves dominate the microtremor, Nakamura observed that the H/V spectral ratio of ambient vibrations was roughly equal to the S-wave transfer function between the ground surface and the bedrock at a site. (However, this assumption has been questioned by the SESAME project.) In the late 1990s, array methods applied to seismic noise data started to yield ground properties in terms of shear-wave velocity profiles. The European research project SESAME (2004–2006) worked to standardize the use of seismic noise to estimate the amplification of earthquakes by local ground characteristics. Current uses of seismic noise: Characterization of subsurface properties The analysis of the ambient vibrations and the random seismic wavefield motivates a variety of processing methods used to characterize the subsurface, including via power spectra, H/V peak analysis, dispersion curves and autocorrelation functions. Current uses of seismic noise: Single-station methods: Computation of power spectra, e.g. Passive seismic. For example, monitoring the power spectral density characteristics of ocean background microseism and Earth's very long period hum at globally and regionally distributed stations provides proxy estimates of ocean wave energy, particularly in near-shore environments, including the ocean wave attenuation properties of annually varying polar sea ice. HVSR (H/V spectral ratio): The H/V technique is especially related to ambient vibration recordings. Bonnefoy-Claudet et al. showed that peaks in the horizontal to vertical spectral ratios can be linked to the Rayleigh ellipticity peak, the Airy phase of the Love waves and/or the SH resonance frequencies, depending on the proportion of these different types of waves in the ambient noise. By chance, however, all these give approximately the same value for a given ground, so that the H/V peak is a reliable method to estimate the resonance frequency of a site. For a single sediment layer over bedrock, this value f0 is related to the S-wave velocity Vs and the depth of the sediments H by f0 = Vs / (4H). It can therefore be used to map the bedrock depth knowing the S-wave velocity. This frequency peak helps constrain the possible models obtained using other seismic methods but is not enough to derive a complete ground model. Moreover, it has been shown that the amplitude of the H/V peak is not related to the magnitude of the amplification. Array methods: Using an array of seismic sensors recording the ambient vibrations simultaneously allows a greater understanding of the wavefield and the derivation of improved images of the subsurface. In some cases, multiple arrays of different sizes may be deployed and the results merged. Current uses of seismic noise: The information of the vertical components is only linked to the Rayleigh waves, and is therefore easier to interpret, but methods using all three ground-motion components have also been developed, providing information about the Rayleigh and Love wavefields. 
Seismic Interferometry methods, in particular, use correlation-based methods to estimate the seismic impulse (Green's Function) response of the Earth from background noise and have become a major area of application and research with the growth in continuously recorded high quality noise data in a wide variety of settings, ranging from the near surface to the continent scale FK, HRFK using the beamforming technique SPAC (spatial auto-correlation) method Correlations methods Refraction microtremor (ReMi) Characterization of the vibration properties of civil engineering structures Like earthquakes, ambient vibrations force into vibrations the civil engineering structures like bridges, buildings or dams. This vibration source is supposed by the greatest part of the used methods to be a white noise, i.e. with a flat noise spectrum so that the recorded system response is actually characteristic of the system itself. The vibrations are perceptible by humans only in rare cases (bridges, high buildings). Ambient vibrations of buildings are also caused by wind and internal sources (machines, pedestrians...) but these sources are generally not used to characterize structures. Current uses of seismic noise: The branch that studies the modal properties of systems under ambient vibrations is called Operational modal analysis (OMA) or Output-only modal analysis and provides many useful methods for civil engineering. Current uses of seismic noise: The observed vibration properties of structures integrate all the complexity of these structures including the load-bearing system, heavy and stiff non-structural elements (infill masonry panels...), light non-structural elements (windows...) and the interaction with the soil (the building foundation may not be perfectly fixed on the ground and differential motions may happen). This is emphasized because it is difficult to produce models able to be compared with these measurements. Current uses of seismic noise: Single-station methods: The power spectrum computation of ambient vibration recordings in a structure (e.g. at the top floor of a building for larger amplitudes) gives an estimation of its resonance frequencies and eventually its damping ratio. Current uses of seismic noise: Transfer function method: Assuming ground ambient vibrations is the excitation source of a structure, for instance a building, the Transfer Function between the bottom and the top allows to remove the effects of a non-white input. This may particularly be useful for low signal-to-noise ratio signals (small building/high level of ground vibrations). However this method generally is not able to remove the effect of soil-structure interaction.Arrays: They consist in the simultaneous recording in several points of a structure. The objective is to obtain the modal parameters of structures: resonance frequencies, damping ratios and modal shapes for the whole structure. Notice than without knowing the input loading, the participation factors of these modes cannot a priori be retrieved. Using a common reference sensor, results for different arrays can be merged. Current uses of seismic noise: Methods based on correlationsSeveral methods use the power spectral density matrices of simultaneous recordings, i.e. the cross-correlation matrices of these recordings in the Fourier domain. They allow to extract the operational modal parameters (Peak Picking method) that can be the results of modes coupling or the system modal parameters (Frequency Domain Decomposition method). 
System identification methodsNumerous system identification methods exist in the literature to extract the system properties and can be applied to ambient vibrations in structures. Current uses of seismic noise: Social sciences The COVID-19 pandemic produced a unique situation in which human transportation, industrial, and other activities were significantly curtailed across the world, particularly in densely populated areas. An analysis of the attendant strong reductions in seismic noise at high frequencies demonstrated that these exceptional actions resulted in the longest and most prominent global anthropogenic seismic noise reduction ever observed. Seismic noise has additionally been investigated as a proxy for economic development. Inversion/model updating/multi-model approach: Direct measurements of noise properties cannot directly give information on the physical parameters (S-wave velocity, structural stiffness...) of the ground structures or civil engineering structures that are typically of interest. Therefore, models are needed to compute these observations (dispersion curve, modal shapes...) in a suitable forward problem that can then be compared with the experimental data. Given the forward problem, the process of estimating the physical model can then be cast as an Inverse problem. Material needed: The acquisition chain is mainly made of a seismic sensor and a digitizer. The number of seismic stations depends on the method, from single point (spectrum, HVSR) to arrays (3 sensors and more). Three components (3C) sensors are used except in particular applications. The sensor sensitivity and corner frequency depend also on the application. For ground measurements, velocimeters are necessary since the amplitudes are generally lower than the accelerometers sensitivity, especially at low frequency. Their corner frequency depends on the frequency range of interest but corner frequencies lower than 0.2 Hz are generally used. Geophones (generally 4.5 Hz corner frequency or greater) are generally not suited. For measurements in civil engineering structures, the amplitude is generally higher as well as the frequencies of interest, allowing the use of accelerometers or velocimeters with a higher corner frequency. However, since recording points on the ground may also be of interest in such experiments, sensitive instruments may be needed. Material needed: Except for single station measurements, a common time stamping is necessary for all the stations. This can be achieved by GPS clock, common start signal using a remote control or the use of a single digitizer allowing the recording of several sensors. The relative location of the recording points is needed more or less precisely for the different techniques, requiring either manual distance measurements or differential GPS location. Advantages and limitations: The advantages of ambient vibration techniques compared to active techniques commonly used in exploration geophysics or earthquake recordings used in Seismic tomography. Advantages and limitations: Relatively cheap, non-invasive and non-destructive method Applicable to urban environment Provide valuable information with little data (e.g. 
HVSR) Dispersion curve of Rayleigh wave relatively easy to retrieve Provide reliable estimates of Vs30Limitations of these methods are linked to the noise wavefield but especially to common assumptions made in seismic: Penetration depth depends on the array size but also on the noise quality, resolution and aliasing limits depend on the array geometry Complexity of the wavefield (Rayleigh, Love waves, interpretation of higher modes...) Plane wave assumption for most of the array methods (problem of sources within the array) 1D assumption of the underground structure, even though 2D was also undertaken Inverse problem difficult to solve as for many geophysical methods
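As a small worked example of the single-station relation quoted above, f0 = Vs / (4H) for one sediment layer over bedrock, the following sketch computes the resonance frequency and inverts the same relation for bedrock depth. The function names and numbers are illustrative assumptions only.

```python
def resonance_frequency(vs_m_per_s: float, depth_m: float) -> float:
    """Quarter-wavelength resonance of a single soft layer over bedrock: f0 = Vs / (4 H)."""
    return vs_m_per_s / (4.0 * depth_m)

def bedrock_depth(vs_m_per_s: float, f0_hz: float) -> float:
    """Invert the same relation to estimate bedrock depth from an observed H/V peak."""
    return vs_m_per_s / (4.0 * f0_hz)

# Illustrative values: 200 m/s shear-wave velocity, 25 m of sediment
f0 = resonance_frequency(200.0, 25.0)     # 2.0 Hz
print(f0, bedrock_depth(200.0, f0))       # 2.0 Hz, 25.0 m
```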
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cementogenesis** Cementogenesis: Cementogenesis is the formation of cementum, one of the three mineralized substances of a tooth. Cementum covers the roots of teeth and serves to anchor gingival and periodontal fibers of the periodontal ligament by the fibers to the alveolar bone (some types of cementum may also form on the surface of the enamel of the crown at the cementoenamel junction (CEJ)). Process: For cementogenesis to begin, Hertwig epithelial root sheath (HERS) must fragment. HERS is a collar of epithelial cells derived from the apical prolongation of the enamel organ. Once the root sheath disintegrates, the newly formed surface of root dentin comes into contact with the undifferentiated cells of the dental sac (dental follicle). This then stimulates the activation of cementoblasts to begin cementogenesis. The external shape of each root is fully determined by the position of the surrounding Hertwig epithelial root sheath. Process: It is believed that either 1) HERS becomes interrupted; 2) infiltrating dental sac cells receive a reciprocal inductive signal from the dentin; or 3) HERS cells transform into cementoblasts.The cementoblasts then disperse to cover the root dentin area and undergo cementogenesis, laying down cementoid. During the later steps within the stage of apposition, many of the cementoblasts become entrapped by the cementum they produce, becoming cementocytes. When the cementoid reaches the full thickness needed, the cementoid surrounding the cementocytes becomes mineralized, or matured, and is then considered cementum. Because of the apposition of cementum over the dentin, the dentinocemental junction (DCJ) is formed.After the apposition of cementum in layers, the cementoblasts that do not become entrapped in cementum line up along the cemental surface along the length of the outer covering of the periodontal ligament. These cementoblasts can form subsequent layers of cementum if the tooth is injured. Cementum grows slowly, by surface apposition, throughout life.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Benproperine** Benproperine: Benproperine (INN) is a cough suppressant. It has been marketed in multiple countries in Central America and Europe, as the phosphate or pamoate salts in either tablet, dragée, or syrup form. Trade names include Blascorid in Italy and Sweden, Pectipront and Tussafug in Germany, and Pirexyl in Scandinavia. The recommended dosage for adults is 25 to 50 mg two to four times daily, and for children 25 mg once or twice daily. Adverse effects include dry mouth, dizziness, fatigue, and heartburn. Synthesis: The base catalyzed ether formation between 2-Benzylphenol [28994-41-4] (1) and 1,2-dichloropropane (2) gives 1-benzyl-2-(2-chloropropoxy)benzene [85909-36-0] (3). Displacement of the remaining halogen with piperidine completes the synthesis of benproperine (4).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Superficial venous palmar arch** Superficial venous palmar arch: The superficial palmar venous arch consists of a pair of venae comitantes accompanying the superficial palmar arch. It receives the common palmar digital veins (the veins corresponding to the branches of the superficial arterial arch). It drains into the superficial ulnar and superficial radial veins, and the median antebrachial vein.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**L-theory** L-theory: In mathematics, algebraic L-theory is the K-theory of quadratic forms; the term was coined by C. T. C. Wall, with L being used as the letter after K. Algebraic L-theory, also known as "Hermitian K-theory", is important in surgery theory. Definition: One can define L-groups for any ring with involution R: the quadratic L-groups L_*(R) (Wall) and the symmetric L-groups L^*(R) (Mishchenko, Ranicki). Definition: Even dimension The even-dimensional L-groups L_{2k}(R) are defined as the Witt groups of ε-quadratic forms over the ring R with ε = (−1)^k. More precisely, L_{2k}(R) is the abelian group of equivalence classes [ψ] of non-degenerate ε-quadratic forms ψ ∈ Q_ε(F) over R, where the underlying R-modules F are finitely generated free. The equivalence relation is given by stabilization with respect to hyperbolic ε-quadratic forms: [ψ] = [ψ′] ⟺ ∃ n, n′ ∈ N₀ : ψ ⊕ H_{(−1)^k}(R)^n ≅ ψ′ ⊕ H_{(−1)^k}(R)^{n′}. The addition in L_{2k}(R) is defined by [ψ₁] + [ψ₂] := [ψ₁ ⊕ ψ₂]. Definition: The zero element is represented by H_{(−1)^k}(R)^n for any n ∈ N₀. The inverse of [ψ] is [−ψ]. Odd dimension Defining odd-dimensional L-groups is more complicated; further details and the definition of the odd-dimensional L-groups can be found in the references mentioned below. Examples and applications: The L-groups of a group π are the L-groups L_*(Z[π]) of the group ring Z[π]. In the applications to topology π is the fundamental group π₁(X) of a space X. The quadratic L-groups L_*(Z[π]) play a central role in the surgery classification of the homotopy types of n-dimensional manifolds of dimension n > 4, and in the formulation of the Novikov conjecture. Examples and applications: The distinction between symmetric L-groups and quadratic L-groups, indicated by upper and lower indices, reflects the usage in group homology and cohomology. The group cohomology H^* of the cyclic group Z2 deals with the fixed points of a Z2-action, while the group homology H_* deals with the orbits of a Z2-action; compare X^G (fixed points) and X_G = X/G (orbits, quotient) for upper/lower index notation. Examples and applications: The quadratic L-groups L_n(R) and the symmetric L-groups L^n(R) are related by a symmetrization map L_n(R) → L^n(R) which is an isomorphism modulo 2-torsion, and which corresponds to the polarization identities. The quadratic and the symmetric L-groups are 4-fold periodic (the comment of Ranicki, page 12, on the non-periodicity of the symmetric L-groups refers to another type of L-groups, defined using "short complexes"). Examples and applications: In view of the applications to the classification of manifolds there are extensive calculations of the quadratic L-groups L_*(Z[π]). For finite π algebraic methods are used, and mostly geometric methods (e.g. controlled topology) are used for infinite π. More generally, one can define L-groups for any additive category with a chain duality, as in Ranicki (section 1). Examples and applications: Integers The simply connected L-groups are also the L-groups of the integers, as L(e) := L(Z[e]) = L(Z) for both L = L_* and L = L^*. For quadratic L-groups, these are the surgery obstructions to simply connected surgery. The quadratic L-groups of the integers are, for n ≡ 0, 1, 2, 3 (mod 4): Z (signature), 0, Z/2 (Arf invariant), 0. In doubly even dimension (4k), the quadratic L-groups detect the signature; in singly even dimension (4k+2), the L-groups detect the Arf invariant (topologically the Kervaire invariant). The symmetric L-groups of the integers are, for n ≡ 0, 1, 2, 3 (mod 4): Z (signature), Z/2 (de Rham invariant), 0, 0. 
In doubly even dimension (4k), the symmetric L-groups, as with the quadratic L-groups, detect the signature; in dimension (4k+1), the L-groups detect the de Rham invariant.
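Collecting the values just listed and the 4-fold periodicity noted above, the L-groups of the integers can be summarised in standard notation (a restatement of the statements above, not an additional computation):

```latex
\[
L_n(\mathbb{Z}) \cong
\begin{cases}
\mathbb{Z}   & n \equiv 0 \pmod 4 \quad (\text{signature}/8)\\
0            & n \equiv 1 \pmod 4\\
\mathbb{Z}/2 & n \equiv 2 \pmod 4 \quad (\text{Arf invariant})\\
0            & n \equiv 3 \pmod 4
\end{cases}
\qquad
L^n(\mathbb{Z}) \cong
\begin{cases}
\mathbb{Z}   & n \equiv 0 \pmod 4 \quad (\text{signature})\\
\mathbb{Z}/2 & n \equiv 1 \pmod 4 \quad (\text{de Rham invariant})\\
0            & n \equiv 2, 3 \pmod 4
\end{cases}
\]
```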
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alias (SQL)** Alias (SQL): An alias is a feature of SQL that is supported by most, if not all, relational database management systems (RDBMSs). Aliases provide users with the ability to reduce the amount of code required for a query, and to make queries simpler to understand. In addition, aliasing is required when doing self joins (i.e. joining a table with itself.) In SQL, you can alias tables and columns. A table alias is called a correlation name, according to the SQL standard. A programmer can use an alias to temporarily assign another name to a table or column for the duration of the current SELECT query. Assigning an alias does not actually rename the column or table. This is often useful when either tables or their columns have very long or complex names. An alias name could be anything, but usually it is kept short. For example, it might be common to use a table alias such as "pi" for a table named "price_information". Alias (SQL): The general syntax of an alias is SELECT * FROM table_name [AS] alias_name. Note that the AS keyword is completely optional and is usually kept for readability purposes. Here is some sample data that the queries below will be referencing: Using a table alias: We can also write the same query like this (Note that the AS clause is omitted this time): A column alias is similar: In the returned result sets, the data shown above would be returned, with the only exception being "DepartmentID" would show up as "Id", and "DepartmentName" would show up as "Name". Alias (SQL): Also, if only one table is being selected and the query is not using table joins, it is permissible to omit the table name or table alias from the column name in the SELECT statement. Example as follows: Some systems, such as Postgres and Presto, support specifying column aliases together with table aliases. E.g. Alias (SQL): would produce the same result set as before. In this syntax it is permissible to omit aliases for some column names. In the example, an alias was provided for DepartmentId, but omitted for DepartmentName. Columns with unspecified aliases will be left unaliased. This syntax is often used with expressions that do not produce useful table and column names, such as VALUES and UNNEST. As an example, one may conveniently test the above SQL statements without creating an actual Departments table by using expressions such as
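The aliasing behaviour described above can be sketched with a small, self-contained example run through Python's sqlite3 module; the Departments table, its rows, and the aliases d, Id and Name below are hypothetical stand-ins for the sample data referenced above.

```python
import sqlite3

# In-memory database with a hypothetical Departments table standing in for the sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Departments (DepartmentID INTEGER, DepartmentName TEXT)")
conn.executemany(
    "INSERT INTO Departments VALUES (?, ?)",
    [(31, "Sales"), (33, "Engineering"), (34, "Clerical"), (35, "Marketing")],
)

# Table alias: the AS keyword is optional, so "FROM Departments d" behaves the same way.
rows = conn.execute("SELECT d.DepartmentName FROM Departments AS d").fetchall()
print(rows)

# Column aliases: DepartmentID is returned as "Id" and DepartmentName as "Name".
cursor = conn.execute(
    "SELECT DepartmentID AS Id, DepartmentName AS Name FROM Departments"
)
print([column[0] for column in cursor.description])  # ['Id', 'Name']
print(cursor.fetchall())

conn.close()
```

As described above, the returned data is unchanged; only the names under which the table and its columns are addressed differ, and the underlying objects are not renamed.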
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dichlorodiethyl sulfone** Dichlorodiethyl sulfone: Dichlorodiethyl sulfone (or mustard sulfone) is an oxidation product of mustard gas. It has the formula (ClCH2CH2)2SO2. Although it is irritating to the eyes, it is much less so than mustard gas (dichlorodiethyl sulfide). Structure: The all-trans arrangement is predicted by Hartree-Fock computational methods to be the most stable conformer. Reactions: When refluxed with aqueous sodium hydroxide, oxygen replaces the chlorine and a 1,4-oxathiane ring is formed, p-oxathiane-4,4-dioxide. When treated with sodium carbonate, a weaker base, bis-(hydroxyethyl)sulfone is the major product formed. In comparison, the dehydrochlorination of the sulfoxide is much slower.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Network termination 1** Network termination 1: Network Termination 1 (NT1) or Network Termination type 1 refers to equipment in an Integrated Services Digital Network (ISDN) that physically and electrically terminates the network at the customer's premises. The NT1 network termination provides signal conversion and timing functions which correspond to layer 1 of the OSI model. In a Basic Rate Interface, the NT1 connects to line termination (LT) equipment in the provider's telephone exchange via the local loop two wire U interface and to customer equipment via the four wire S interface or T interface. The S and T interfaces are electrically equivalent, and the customer equipment port of a NT1 is often labelled as S/T interface. There are many types of NT1 available. Network termination 1: In the United States, the NT1 is considered customer-premises equipment (CPE) and is as such generally provided by the customer or integrated into the customer's equipment. In this case, the U interface is the termination point of the ISDN network. In Europe, the NT1 is generally provided by the provider, and the S/T is the termination point of the ISDN network.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pseudomonas virus 42** Pseudomonas virus 42: Pseudomonas virus 42, formerly Pseudomonas phage 42, is a bacteriophage known to infect Pseudomonas bacteria.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**EDUindex** EDUindex: The EDUindex is a Correlation coefficient representing the relevancy of Curriculum to post-educational objectives, particularly employability. An EDUindex Gap Analysis provides missing, relevant curriculum relative to employment opportunity within a representative area. Representative areas may include geographic regions, states, cities, school districts or specific schools. Analysis is regularly conducted using zip code sets. In 1918, John Franklin Bobbitt said that curriculum, as an idea, has its roots in the Latin word for horse race-course, explaining the curriculum as the course of deeds and experiences through which children become the adults they should be, for success in adult society. EDUindex, Inc. developed the EDUindex to identify and promote relevance in education. EDUindex: The EDUindex is a correlation of curricular subjects taught in a particular school to skills as suggested by a pre-defined or custom selected target marketplace. Published class offerings represent the skills taught. The Classification of Secondary School Courses (CSSC) provides a general inventory of courses taught nationwide in the secondary school level (grades 9 through 12). Further detail is provided by High School Transcript Studies provided by the National Center for Education Statistics. Public, Charter, and Private School listings are accessed per geographical area to create a comprehensive data set of all schools and businesses within the analytical focus. Curriculum per School, District, etc. is published individually and is publicly available. EDUindex: Standard databases like the North American Industry Classification System (NAICS) provide defined business focus. Business focus can be further refined into specific occupations and skill sets using Standard Occupational Classification System (SOC). Together these datasets provide information representing the skills offered and the occupational opportunities available within the designated target area. EDUindex: The EDUindex, as a value, is expressed as a number from 0 to 1.0 with 1.0 representing a perfect match of curricular offering to target need. The value is determined using the Pearson product-moment correlation coefficient (sometimes referred to as the PMCC, and typically denoted by r) as a measure of the correlation (linear dependence) between two variables X and Y, giving a value between +1 and −1 inclusive. It is widely used in the sciences as a measure of the strength of linear dependence between two variables. It was developed by Karl Pearson from a similar but slightly different idea introduced by Francis Galton in the 1880s. The general correlation coefficient is sometimes called "Pearson's r." The EDUindex calculates Pearson’s r for educational relevance by comparing the content of course offerings with the need for related skill sets within the same banded geographic area. Correlative results are weighted based on data volume for Scalar, comparative and presentation purposes.
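The correlation step described above can be illustrated with a short sketch. The skill categories and counts below are hypothetical, and the volume weighting mentioned in the text is not modelled; the sketch only computes Pearson's r between published course offerings and market demand for matching skills in one target area.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical data for one target area: for each skill category, the number of
# published course offerings and the number of related openings drawn from
# NAICS/SOC-coded listings.
course_offerings = [12, 4, 9, 0, 7, 3]
skill_demand     = [15, 2, 11, 5, 6, 1]

r = pearson_r(course_offerings, skill_demand)
print(f"Pearson's r = {r:.3f}")  # values closer to 1.0 indicate a closer curriculum/market match
```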
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quantum reference frame** Quantum reference frame: A quantum reference frame is a reference frame which is treated quantum theoretically. It, like any reference frame, is an abstract coordinate system which defines physical quantities, such as time, position, momentum, spin, and so on. Because it is treated within the formalism of quantum theory, it has some interesting properties which do not exist in a normal classical reference frame. Reference frame in classical mechanics and inertial frame: Consider a simple physics problem: a car is moving such that it covers a distance of 1 mile in every 2 minutes, what is its velocity in metres per second? With some conversion and calculation, one can come up with the answer "13.41m/s"; on the other hand, one can instead answer "0, relative to itself". The first answer is correct because it recognises a reference frame is implied in the problem. The second one, albeit pedantic, is also correct because it exploits the fact that there is not a particular reference frame specified by the problem. This simple problem illustrates the importance of a reference frame: a reference frame is quintessential in a clear description of a system, whether it is included implicitly or explicitly. Reference frame in classical mechanics and inertial frame: When speaking of a car moving towards east, one is referring to a particular point on the surface of the Earth; moreover, as the Earth is rotating, the car is actually moving towards a changing direction, with respect to the Sun. In fact, this is the best one can do: describing a system in relation to some reference frame. Describing a system with respect to an absolute space does not make much sense because an absolute space, if it exists, is unobservable. Hence, it is impossible to describe the path of the car in the above example with respect to some absolute space. This notion of absolute space troubled a lot of physicists over the centuries, including Newton. Indeed, Newton was fully aware of this stated that all inertial frames are observationally equivalent to each other. Simply put, relative motions of a system of bodies do not depend on the inertial motion of the whole system.An inertial reference frame (or inertial frame in short) is a frame in which all the physical laws hold. For instance, in a rotating reference frame, Newton's laws have to be modified because there is an extra Coriolis force (such frame is an example of non-inertial frame). Here, "rotating" means "rotating with respect to some inertial frame". Therefore, although it is true that a reference frame can always be chosen to be any physical system for convenience, any system has to be eventually described by an inertial frame, directly or indirectly. Finally, one may ask how an inertial frame can be found, and the answer lies in the Newton's laws, at least in Newtonian mechanics: the first law guarantees the existence of an inertial frame while the second and third law are used to examine whether a given reference frame is an inertial one or not. Reference frame in classical mechanics and inertial frame: It may appear an inertial frame can now be easily found given the Newton's laws as empirical tests are accessible. Quite the contrary; an absolutely inertial frame is not and will most likely never be known. Instead, inertial frame is approximated. As long as the error of the approximation is undetectable by measurements, the approximately inertial frame (or simply "effective frame") is reasonably close to an absolutely inertial frame. 
With the effective frame and assuming the physical laws are valid in such a frame, descriptions of systems will end up as good as if an absolutely inertial frame were used. As a digression, the effective frame astronomers use is a system called the "International Celestial Reference Frame" (ICRF), defined by 212 radio sources and with an accuracy of about 10^−5 radians. However, it is likely that a better one will be needed when a more accurate approximation is required. Reference frame in classical mechanics and inertial frame: Reconsidering the problem at the very beginning, one can certainly find a flaw of ambiguity in it, but it is generally understood that a standard reference frame is implicitly used in the problem. In fact, when a reference frame is classical, whether or not it is included in the physical description of a system is irrelevant. One will get the same prediction by treating the reference frame internally or externally. Reference frame in classical mechanics and inertial frame: To illustrate the point further, a simple system with a ball bouncing off a wall is used. In this system, the wall can be treated either as an external potential or as a dynamical system interacting with the ball. The former involves putting the external potential in the equations of motion of the ball, while the latter treats the position of the wall as a dynamical degree of freedom. Both treatments provide the same prediction, and neither is particularly preferred over the other. However, as will be discussed below, such freedom of choice ceases to exist when the system is quantum mechanical. Quantum reference frame: A reference frame can be treated within the formalism of quantum theory, and in this case it is referred to as a quantum reference frame. Despite the different name and treatment, a quantum reference frame still shares many of its notions with a reference frame in classical mechanics. It is associated with some physical system, and it is relational. Quantum reference frame: For example, if a spin-1/2 particle is said to be in the state |↑z⟩, a reference frame is implied, and it can be understood to be some reference frame with respect to an apparatus in a lab. It is obvious that the description of the particle does not place it in an absolute space, and doing so would make no sense at all because, as mentioned above, absolute space is empirically unobservable. On the other hand, if a magnetic field along the y-axis is said to be given, the behaviour of the particle in such a field can then be described. In this sense, y and z are just relative directions. They do not and need not have absolute meaning. Quantum reference frame: One can observe that a z direction used in a laboratory in Berlin is generally totally different from a z direction used in a laboratory in Melbourne. Two laboratories trying to establish a single shared reference frame will face important issues involving alignment. The study of this sort of communication and coordination is a major topic in quantum information theory. Quantum reference frame: Just as in this spin-1/2 particle example, quantum reference frames are almost always treated implicitly in the definition of quantum states, and the process of including the reference frame in a quantum state is called quantisation/internalisation of the reference frame, while the process of excluding the reference frame from a quantum state is called dequantisation/externalisation of the reference frame.
Unlike the classical case, in which treating a reference frame internally or externally is purely an aesthetic choice, internalising and externalising a reference frame does make a difference in quantum theory. One final remark may be made on the existence of a quantum reference frame. After all, a reference frame, by definition, has a well-defined position and momentum, while quantum theory, namely the uncertainty principle, states that one cannot describe any quantum system with well-defined position and momentum simultaneously, so it seems there is some contradiction between the two. It turns out that an effective frame, in this case a classical one, is used as a reference frame, just as in Newtonian mechanics a nearly inertial frame is used, and physical laws are assumed to be valid in this effective frame. In other words, whether motion in the chosen reference frame is inertial or not is irrelevant. Quantum reference frame: The following treatment of a hydrogen atom, motivated by Aharonov and Kaufherr, can shed light on the matter. Supposing a hydrogen atom is given in a well-defined state of motion, how can one describe the position of the electron? The answer is not to describe the electron's position relative to the same coordinates in which the atom is in motion, because doing so would violate the uncertainty principle, but to describe its position relative to the nucleus. As a result, more can be said about the general case from this: in general, it is permissible, even in quantum theory, to have a system with well-defined position in one reference frame and well-defined motion in some other reference frame. Further considerations of quantum reference frame: An example of treatment of reference frames in quantum theory. Consider a hydrogen atom. The Coulomb potential depends on the distance between the proton and electron only: V(r) = −Ze^2/r. With this symmetry, the problem is reduced to that of a particle in a central potential: −(1/2m) ∇^2 ψ(r⃗) − (Ze^2/r) ψ(r⃗) = E ψ(r⃗). Using separation of variables, the solutions of the equation can be written as a product of radial and angular parts: Φ(r,θ,ϕ) = R_{nl}(r) Y_{lm}(θ,ϕ), where l, m, and n are the orbital angular momentum, magnetic, and energy quantum numbers, respectively. Further considerations of quantum reference frame: Now consider the Schrödinger equation for the proton and the electron: ∂Ψ(x1,y1,z1,x2,y2,z2,t)/∂t = −iH Ψ(x1,y1,z1,x2,y2,z2,t). A change of variables to relational and centre-of-mass coordinates yields ∂Ψ(x,y,z,X,Y,Z)/∂t = −i [−(1/2M) ∇_{c.o.m.}^2 − (1/2μ) ∇_{rel}^2 + V(x,y,z)] Ψ, where M is the total mass and μ is the reduced mass. A final change to spherical coordinates followed by a separation of variables will yield the equation for Φ(r,θ,ϕ) from above. Further considerations of quantum reference frame: However, if the change of variables done earlier is now to be reversed, the centre of mass needs to be put back into the equation for Φ(r,θ,ϕ): r = √((x1−x2)^2 + (y1−y2)^2 + (z1−z2)^2), θ = tan^−1(√((x1−x2)^2 + (y1−y2)^2) / (z1−z2)), ϕ = tan^−1((y1−y2)/(x1−x2)), X = (m1x1 + m2x2)/(m1 + m2), Y = (m1y1 + m2y2)/(m1 + m2), Z = (m1z1 + m2z2)/(m1 + m2). The importance of this result is that it shows the wavefunction for the compound system is entangled, contrary to what one would normally think from a classical standpoint. More importantly, it shows the energy of the hydrogen atom is not only associated with the electron but also with the proton, and the total state is not decomposable into a state for the electron and one for the proton separately.
Further considerations of quantum reference frame: Superselection rules Superselection rules, in short, are postulated rules forbidding the preparation of quantum states that exhibit coherence between eigenstates of certain observables. They were originally introduced to impose additional restrictions on quantum theory beyond those of selection rules. As an example, superselection rules for electric charges disallow the preparation of a coherent superposition of different charge eigenstates. Further considerations of quantum reference frame: As it turns out, the lack of a reference frame is mathematically equivalent to superselection rules. This is a powerful statement because superselection rules have long been thought to be axiomatic in nature, and now their fundamental standing and even their necessity are questioned. Nevertheless, it has been shown that it is, in principle, always possible (though not always easy) to lift all superselection rules on a quantum system. Further considerations of quantum reference frame: Degradation of a quantum reference frame During a measurement, whenever the relation between the system and the reference frame used is probed, there is inevitably a disturbance to both of them, which is known as measurement back-action. As this process is repeated, it decreases the accuracy of the measurement outcomes, and such reduction of the usability of a reference frame is referred to as the degradation of a quantum reference frame. A way to gauge the degradation of a reference frame is to quantify the longevity, namely, the number of measurements that can be made against the reference frame until a certain error tolerance is exceeded. Further considerations of quantum reference frame: For example, for a spin-j system, the maximum number of measurements that can be made before the error tolerance ε is exceeded is given by n_max ≃ ε j^2. So the longevity and the size of the reference frame are in a quadratic relation in this particular case. In this spin-j system, the degradation is due to the loss of purity of the reference frame state. On the other hand, degradation can also be caused by misalignment of the background reference. It has been shown that, in such a case, the longevity has a linear relation with the size of the reference frame.
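As a rough worked illustration of the scaling just stated (the numbers are illustrative only, not taken from the literature):

```latex
\[
n_{\max} \simeq \epsilon\, j^{2}:\qquad
\epsilon = 0.01,\; j = 100 \;\Rightarrow\; n_{\max} \approx 100,
\qquad
j = 200 \;\Rightarrow\; n_{\max} \approx 400 ,
\]
```

so doubling the size of the spin-j reference frame roughly quadruples its longevity, whereas under misalignment of the background reference the longevity grows only linearly with the size.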
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**LJ-001** LJ-001: LJ-001 is a broad-spectrum antiviral drug developed as a potential treatment for enveloped viruses. It acts as an inhibitor which blocks viral entry into host cells at a step after virus binding but before virus–cell fusion, and also irreversibly inactivates the virions themselves by generating reactive singlet oxygen molecules which damage the viral membrane. In cell culture tests in vitro, LJ-001 was able to block and disable a wide range of different viruses, including influenza A, filoviruses, poxviruses, arenaviruses, bunyaviruses, paramyxoviruses, flaviviruses, and HIV. Unfortunately LJ-001 itself was unsuitable for further development, as it has poor physiological stability and requires light for its antiviral mechanism to operate. However the discovery of this novel mechanism for blocking virus entry and disabling the virion particles has led to LJ-001 being used as a lead compound to develop a novel family of more effective antiviral drugs with improved properties.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Constraint satisfaction problem** Constraint satisfaction problem: Constraint satisfaction problems (CSPs) are mathematical questions defined as a set of objects whose state must satisfy a number of constraints or limitations. CSPs represent the entities in a problem as a homogeneous collection of finite constraints over variables, which is solved by constraint satisfaction methods. CSPs are the subject of research in both artificial intelligence and operations research, since the regularity in their formulation provides a common basis to analyze and solve problems of many seemingly unrelated families. CSPs often exhibit high complexity, requiring a combination of heuristics and combinatorial search methods to be solved in a reasonable time. Constraint programming (CP) is the field of research that specifically focuses on tackling these kinds of problems. Additionally, Boolean satisfiability problem (SAT), the satisfiability modulo theories (SMT), mixed integer programming (MIP) and answer set programming (ASP) are all fields of research focusing on the resolution of particular forms of the constraint satisfaction problem. Constraint satisfaction problem: Examples of problems that can be modeled as a constraint satisfaction problem include: Type inference Eight queens puzzle Map coloring problem Maximum cut problem Sudoku, Crosswords, Futoshiki, Kakuro (Cross Sums), Numbrix, Hidato and many other logic puzzlesThese are often provided with tutorials of CP, ASP, Boolean SAT and SMT solvers. In the general case, constraint problems can be much harder, and may not be expressible in some of these simpler systems. "Real life" examples include automated planning, lexical disambiguation, musicology, product configuration and resource allocation.The existence of a solution to a CSP can be viewed as a decision problem. This can be decided by finding a solution, or failing to find a solution after exhaustive search (stochastic algorithms typically never reach an exhaustive conclusion, while directed searches often do, on sufficiently small problems). In some cases the CSP might be known to have solutions beforehand, through some other mathematical inference process. Formal definition: Formally, a constraint satisfaction problem is defined as a triple ⟨X,D,C⟩ , where X={X1,…,Xn} is a set of variables, D={D1,…,Dn} is a set of their respective domains of values, and C={C1,…,Cm} is a set of constraints.Each variable Xi can take on the values in the nonempty domain Di Every constraint Cj∈C is in turn a pair ⟨tj,Rj⟩ , where tj⊂X is a subset of k variables and Rj is a k -ary relation on the corresponding subset of domains Dj . An evaluation of the variables is a function from a subset of variables to a particular set of values in the corresponding subset of domains. An evaluation v satisfies a constraint ⟨tj,Rj⟩ if the values assigned to the variables tj satisfy the relation Rj An evaluation is consistent if it does not violate any of the constraints. An evaluation is complete if it includes all variables. An evaluation is a solution if it is consistent and complete; such an evaluation is said to solve the constraint satisfaction problem. Solution: Constraint satisfaction problems on finite domains are typically solved using a form of search. The most used techniques are variants of backtracking, constraint propagation, and local search. 
These techniques are also often combined, as in the VLNS method, and current research involves other technologies such as linear programming.Backtracking is a recursive algorithm. It maintains a partial assignment of the variables. Initially, all variables are unassigned. At each step, a variable is chosen, and all possible values are assigned to it in turn. For each value, the consistency of the partial assignment with the constraints is checked; in case of consistency, a recursive call is performed. When all values have been tried, the algorithm backtracks. In this basic backtracking algorithm, consistency is defined as the satisfaction of all constraints whose variables are all assigned. Several variants of backtracking exist. Backmarking improves the efficiency of checking consistency. Backjumping allows saving part of the search by backtracking "more than one variable" in some cases. Constraint learning infers and saves new constraints that can be later used to avoid part of the search. Look-ahead is also often used in backtracking to attempt to foresee the effects of choosing a variable or a value, thus sometimes determining in advance when a subproblem is satisfiable or unsatisfiable. Solution: Constraint propagation techniques are methods used to modify a constraint satisfaction problem. More precisely, they are methods that enforce a form of local consistency, which are conditions related to the consistency of a group of variables and/or constraints. Constraint propagation has various uses. First, it turns a problem into one that is equivalent but is usually simpler to solve. Second, it may prove satisfiability or unsatisfiability of problems. This is not guaranteed to happen in general; however, it always happens for some forms of constraint propagation and/or for certain kinds of problems. The most known and used forms of local consistency are arc consistency, hyper-arc consistency, and path consistency. The most popular constraint propagation method is the AC-3 algorithm, which enforces arc consistency. Solution: Local search methods are incomplete satisfiability algorithms. They may find a solution of a problem, but they may fail even if the problem is satisfiable. They work by iteratively improving a complete assignment over the variables. At each step, a small number of variables are changed in value, with the overall aim of increasing the number of constraints satisfied by this assignment. The min-conflicts algorithm is a local search algorithm specific for CSPs and is based on that principle. In practice, local search appears to work well when these changes are also affected by random choices. An integration of search with local search has been developed, leading to hybrid algorithms. Theoretical aspects: Decision problems CSPs are also studied in computational complexity theory and finite model theory. An important question is whether for each set of relations, the set of all CSPs that can be represented using only relations chosen from that set is either in P or NP-complete. If such a dichotomy theorem is true, then CSPs provide one of the largest known subsets of NP which avoids NP-intermediate problems, whose existence was demonstrated by Ladner's theorem under the assumption that P ≠ NP. Schaefer's dichotomy theorem handles the case when all the available relations are Boolean operators, that is, for domain size 2. 
Schaefer's dichotomy theorem was recently generalized to a larger class of relations.Most classes of CSPs that are known to be tractable are those where the hypergraph of constraints has bounded treewidth (and there are no restrictions on the set of constraint relations), or where the constraints have arbitrary form but there exist essentially non-unary polymorphisms of the set of constraint relations. Theoretical aspects: Every CSP can also be considered as a conjunctive query containment problem. Theoretical aspects: Function problems A similar situation exists between the functional classes FP and #P. By a generalization of Ladner's theorem, there are also problems in neither FP nor #P-complete as long as FP ≠ #P. As in the decision case, a problem in the #CSP is defined by a set of relations. Each problem takes a Boolean formula as input and the task is to compute the number of satisfying assignments. This can be further generalized by using larger domain sizes and attaching a weight to each satisfying assignment and computing the sum of these weights. It is known that any complex weighted #CSP problem is either in FP or #P-hard. Variants: The classic model of Constraint Satisfaction Problem defines a model of static, inflexible constraints. This rigid model is a shortcoming that makes it difficult to represent problems easily. Several modifications of the basic CSP definition have been proposed to adapt the model to a wide variety of problems. Variants: Dynamic CSPs Dynamic CSPs (DCSPs) are useful when the original formulation of a problem is altered in some way, typically because the set of constraints to consider evolves because of the environment. DCSPs are viewed as a sequence of static CSPs, each one a transformation of the previous one in which variables and constraints can be added (restriction) or removed (relaxation). Information found in the initial formulations of the problem can be used to refine the next ones. The solving method can be classified according to the way in which information is transferred: Oracles: the solution found to previous CSPs in the sequence are used as heuristics to guide the resolution of the current CSP from scratch. Variants: Local repair: each CSP is calculated starting from the partial solution of the previous one and repairing the inconsistent constraints with local search. Constraint recording: new constraints are defined in each stage of the search to represent the learning of inconsistent group of decisions. Those constraints are carried over to the new CSP problems. Variants: Flexible CSPs Classic CSPs treat constraints as hard, meaning that they are imperative (each solution must satisfy all of them) and inflexible (in the sense that they must be completely satisfied or else they are completely violated). Flexible CSPs relax those assumptions, partially relaxing the constraints and allowing the solution to not comply with all of them. This is similar to preferences in preference-based planning. Some types of flexible CSPs include: MAX-CSP, where a number of constraints are allowed to be violated, and the quality of a solution is measured by the number of satisfied constraints. Variants: Weighted CSP, a MAX-CSP in which each violation of a constraint is weighted according to a predefined preference. Thus satisfying constraint with more weight is preferred. Fuzzy CSP model constraints as fuzzy relations in which the satisfaction of a constraint is a continuous function of its variables' values, going from fully satisfied to fully violated. 
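Under the definitions just given, a small sketch shows how MAX-CSP counts satisfied constraints while a weighted CSP accumulates penalties for violated ones; the constraints, weights and assignment below are hypothetical.

```python
# Scoring a complete assignment under MAX-CSP and weighted CSP (toy constraints).
constraints = [
    # (description, predicate over an assignment dict, weight of violating it)
    ("x != y",    lambda a: a["x"] != a["y"],    1.0),
    ("x + y < 5", lambda a: a["x"] + a["y"] < 5, 3.0),
    ("y is even", lambda a: a["y"] % 2 == 0,     0.5),
]

assignment = {"x": 2, "y": 2}

satisfied = [c for c in constraints if c[1](assignment)]
max_csp_score = len(satisfied)  # MAX-CSP: quality = number of satisfied constraints
weighted_penalty = sum(w for _, pred, w in constraints if not pred(assignment))

print(f"satisfied {max_csp_score}/{len(constraints)} constraints")
print(f"weighted violation penalty = {weighted_penalty}")
```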
Decentralized CSPs In DCSPs each constraint variable is thought of as having a separate geographic location. Strong constraints are placed on information exchange between variables, requiring the use of fully distributed algorithms to solve the constraint satisfaction problem.
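A minimal sketch of the basic backtracking search described earlier in this article, on a toy map-colouring instance: one variable is assigned at a time, only constraints whose variables are all assigned are checked, and the assignment is undone when no value works. This is an illustrative toy, not a production solver.

```python
# Minimal backtracking search for a binary CSP (toy map-colouring instance).
variables = ["A", "B", "C"]
domains = {"A": ["red", "green", "blue"],
           "B": ["red", "green", "blue"],
           "C": ["red", "green", "blue"]}
# Each constraint is (scope, relation); here: adjacent regions must differ in colour.
constraints = [(("A", "B"), lambda a, b: a != b),
               (("B", "C"), lambda b, c: b != c)]

def consistent(assignment):
    """True if no constraint whose variables are all assigned is violated."""
    for scope, relation in constraints:
        if all(v in assignment for v in scope):
            if not relation(*(assignment[v] for v in scope)):
                return False
    return True

def backtrack(assignment):
    if len(assignment) == len(variables):
        return assignment                 # complete and consistent: a solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(assignment)
            if result is not None:
                return result
        del assignment[var]               # undo the assignment and try the next value
    return None                           # all values tried: backtrack in the caller

print(backtrack({}))  # e.g. {'A': 'red', 'B': 'green', 'C': 'red'}
```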
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Α-Carotene** Α-Carotene: α-Carotene (alpha-carotene) is a form of carotene with a β-ionone ring at one end and an α-ionone ring at the opposite end. It is the second most common form of carotene. Human physiology: In American and Chinese adults, the mean concentration of serum α-carotene was 4.71 μg/dL, including 4.22 μg/dL among men and 5.31 μg/dL among women. Dietary sources: The following vegetables are rich in alpha-carotene: Yellow-orange vegetables: Carrots (the main source for U.S. adults), Sweet potatoes, Pumpkin, Winter squash; Dark-green vegetables: Broccoli, Green beans, Green peas, Spinach, Turnip greens, Collards, Leaf lettuce, Avocado.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pleomorphic T-cell lymphoma** Pleomorphic T-cell lymphoma: Pleomorphic T-cell lymphoma (also known as "Non-mycosis fungoides CD30− pleomorphic small/medium sized cutaneous T-cell lymphoma") is a cutaneous condition characterized by a 5-year survival rate of 62%.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scuba diving fatalities** Scuba diving fatalities: Scuba diving fatalities are deaths occurring while scuba diving or as a consequence of scuba diving. The risks of dying during recreational, scientific or commercial diving are small, and on scuba, deaths are usually associated with poor gas management, poor buoyancy control, equipment misuse, entrapment, rough water conditions and pre-existing health problems. Some fatalities are inevitable and caused by unforeseeable situations escalating out of control, though the majority of diving fatalities can be attributed to human error on the part of the victim.Equipment failure is rare in open circuit scuba, and while the cause of death is commonly recorded as drowning, this is mainly the consequence of an uncontrollable series of events taking place in water. Arterial gas embolism is also frequently cited as a cause of death, and it, too, is the consequence of other factors leading to an uncontrolled and badly managed ascent, possibly aggravated by medical conditions. About a quarter of diving fatalities are associated with cardiac events, mostly in older divers. There is a fairly large body of data on diving fatalities, but in many cases, the data is poor due to the standard of investigation and reporting. This hinders research that could improve diver safety.Scuba diving fatalities have a major financial impact by way of lost income, lost business, insurance premium increases and high litigation costs. Statistics: Diving fatality data published in Diving Medicine for Scuba Divers (2015) 90% died with their weight belt on. 86% were alone when they died (either diving solo or separated from their buddy). 50% did not inflate their buoyancy compensator. 25% first got into difficulty on the surface 50% died on the surface. 10% were under training when they died. 10% had been advised that they were medically unfit to dive. 5% were cave diving. Statistics: 1% of divers attempting a rescue died as a result.Fatality rates of 16.4 deaths per 100,000 persons per year among DAN America members and 14.4 deaths per 100,000 persons per year the British Sub-Aqua Club (BSAC) members were similar and did not change during 2000–2006. This is comparable with jogging (13 deaths per 100,000 persons per year) and motor vehicle accidents (16 deaths per 100,000 persons per year), and within the range where reduction is desirable by Health and Safety Executive (HSE) criteria,Activity-based statistics would be a more accurate measurement of risk. Noted above are statistics showing diving fatalities comparable to motor vehicle accidents of 16.4 per 100,000 divers and 16 per 100,000 drivers. DAN 2014/12/17 data shows there are 3.174 million divers in America. Their data shows that 2.351 million dive 1 to 7 times per year. 823,000 dive 8 or more times per year. It is reasonable to say that the average would be in the neighbourhood of 5 dives per year.Data for 17 million student-diver certifications during 63 million student dives over a 20-year period from 1989-2008 show a mean per capita death rate of 1.7 deaths per 100,000 student divers per year. This was lower than for insured DAN members during 2000–2006 at 16.4 deaths per 100,000 DAN members per year, but fatality rate per dive is a better measure of exposure risk, A mean annual fatality rate of 0.48 deaths per 100,000 student dives per year and 0.54 deaths per 100,000 BSAC dives per year and 1.03 deaths per 100,000 non-BSAC dives per year during 2007. 
The total size of the diving population is important for determining overall fatality rates, and the population estimates from the 1990s of several million U.S. divers need to be updated.During 2006 to 2015 there were an estimated 306 million recreational dives made by US residents and 563 recreational diving deaths from this population. The fatality rate was 1.8 per million recreational dives, and 47 deaths for every 1000 emergency department presentations for scuba injuries.The most frequent known root cause for diving fatalities is running out of, or low on, breathing gas, but the reasons for this are not specified, probably due to lack of data. Other factors cited include buoyancy control, entanglement or entrapment, rough water, equipment misuse or problems and emergency ascent. The most common injuries and causes of death were drowning or asphyxia due to inhalation of water, air embolism and cardiac events. Risk of cardiac arrest is greater for older divers, and greater for men than women, although the risks are equal by age 65.Several plausible opinions have been put forward but have not yet been empirically validated. Suggested contributing factors included inexperience, infrequent diving, inadequate supervision, insufficient predive briefings, buddy separation and dive conditions beyond the diver's training, experience or physical capacity. Statistics: Annual fatalities DAN was notified of 561 recreational scuba deaths during 2010 to 2013. 334 were actively investigated by DAN DAN was notified of 146 recreational scuba deaths during 2014. 68 were actively investigated by DAN DAN was notified of 127 recreational scuba deaths during 2015. 67 were actively investigated by DAN DAN was notified of 169 recreational scuba deaths during 2016. 94 were actively investigated by DAN Cause of death: According to death certificates, over 80% of the deaths were ultimately attributed to drowning, but other factors usually combined to incapacitate the diver in a sequence of events culminating in drowning, which is more a consequence of the medium in which the accidents occurred than the actual accident. Often the drowning obscures the real cause of death. Scuba divers should not drown unless there are other contributory factors as they carry a supply of breathing gas and equipment designed to provide the gas on demand. Drowning occurs as a consequence of preceding problems, such as cardiac disease, pulmonary barotrauma, unmanageable stress, unconsciousness from any cause, water aspiration, trauma, equipment difficulties, environmental hazards, inappropriate response to an emergency or failure to manage the gas supply.The data gathered in relation to the actual causes of death is changing. Although drowning and arterial gas embolisms are cited in the top three causes of diver deaths, stating these as solitary causes does not recognise any pre-existing health issues. 
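The per-dive figures quoted above follow from simple arithmetic, sketched below; the conversion from a per-capita rate to a per-dive rate assumes the rough average of about five dives per diver per year mentioned earlier, and is only an illustration of why per-dive rates are the better exposure measure.

```python
# Reproducing the per-dive fatality rate quoted above (2006-2015 US estimates).
dives = 306_000_000        # estimated recreational dives by US residents
deaths = 563               # recreational diving deaths in the same population

rate_per_million_dives = deaths / dives * 1_000_000
print(f"{rate_per_million_dives:.1f} deaths per million dives")  # ~1.8

# Converting a per-capita rate to a per-dive rate depends on dives per diver per
# year; ~5 dives/year is the rough average cited earlier in this section.
per_capita_rate = 16.4             # deaths per 100,000 DAN members per year
dives_per_diver_per_year = 5
per_dive_rate = per_capita_rate / dives_per_diver_per_year
print(f"~{per_dive_rate:.1f} deaths per 100,000 dives at ~{dives_per_diver_per_year} dives/year")
```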
Researchers may know the actual causes of death, but the sequence of events that led to the cause of death is often not clear, especially when local officials or pathologists make assumptions.In many diving destinations, resources are not available for comprehensive investigations or complete autopsies, The 2010 DAN Diving Fatalities workshop noted that listing drowning as a cause of death is ineffective in determining what actually occurred in an incident, and that lack of information is the primary reason for personal injury lawsuits filed in the industry.A DAN study published in 2008 investigated 947 recreational open-circuit scuba diving deaths from 1992–2003, and where sufficient information was available, classified the incidents in terms of a sequence of trigger, disabling agent, disabling injury and cause of death. Insufficient gas was the most frequent trigger, at 41%, followed by entrapment at 20%, and equipment problems at 15%. The most common identifiable disabling agents were emergency ascents, at 55%, followed by insufficient gas at 27% and buoyancy complications at 13%. The most frequent disabling injuries were asphyxia at 33%, arterial gas embolism at 29% and cardiac incidents at 26%. Cause of death was reported as drowning in 70% of the cases, arterial gas embolism in 14% and cardiac arrest in 13%. The investigator concluded that disabling injuries were more relevant than cause of death, as drowning often occurred as a consequence of a disabling injury. A further analysis linked risk of type of disabling injury with trigger events. Asphyxia followed entrapment (40%), insufficient gas (32%), buoyancy problems (17%), equipment problems (15%), rough water (11%). Arterial gas embolism was associated with emergency ascent (96%), insufficient gas (63%), equipment trouble (17%), entrapment (9%). Cardiac incidents were associated with cardiovascular disease and age over 40 years. Their conclusion was that the most effective way to reduce diving deaths would be by minimising the frequency of adverse events. Manner of death: If the manner of death is deemed to be accidental (or due to misadventure, where this is applicable), which is usually the case, the incident leading to death is seldom analysed sufficiently to be useful in determining the probable sequence of events, particularly the triggering event, and therefore is not usually useful for improving diver safety.The chain of events leading to diving fatalities is varied in detail, but there are common elements: a triggering event, which leads to a disabling or harmful event and causes a disabling injury, which may itself be fatal or lead to drowning. One or more of the four events may not be unidentifiable.Death usually followed a sequence or combination of events, most of which may have been survivable in isolation. In the more than 940 fatality statistics studied by DAN over ten years, only one-third of the triggers could be identified. The most common of these were: Insufficient gas (41%) Entrapment (20%) Equipment problems (15%)Disabling agents were also identified in one-third of the cases. The most common identified were: Emergency ascent (55%) Insufficient gas (27%) Buoyancy trouble (13%) Disabling injuries Disabling injuries were identified in nearly two-thirds of the cases. The criteria for identify the disabling injury by forensic judgement are specified. Manner of death: Asphyxia (33%), with or without aspiration of water, and no evidence of a previous disabling injury. 
Triggering events associated with asphyxia included:(40%) entrapment due to entanglement in kelp, wreckage, mooring lines, fishing lines or nets, and entrapment in confined spaces or under ice (32%) insufficient gas, when it was the first identifiable problem, but generally the reason for lack of gas was not determined. (15%) problems with equipment included regulator free-flow, unexpectedly high gas consumption, and diver error in the use of the scuba apparatus, buoyancy compensator, weighting system or dry suit. (11%) rough water conditions included high sea states, strong currents, and surf conditions at beaches, rocky shores and piers. Disabling agents associated with asphyxia cases included:(62%) insufficient gas, triggered by entrapment, equipment problems, or high gas consumption due to heavy exercise in rough conditions. (17%) buoyancy problems, triggered by over- or under-weighting, lack of inflation gas for the buoyancy compensator, or over-inflation of the buoyancy compensator or dry suit. (13%) emergency ascent, triggered by entrapment or lack of breathing gas, was associated with both asphyxia and lung overpressure injury. Other contributing factors were not as clearly connected: Panic was reported in about a fifth of the cases, and may have caused aspiration or accelerated gas consumption. Casualties were diving alone or were separated from their buddies in about 40% of cases with asphyxia, but this was also associated with other disabling injuries. Arterial gas embolism (29%), with gas detected in cerebral arteries, evidence of lung rupture, and history of an emergency ascent. Triggers associated with AGE included:(63%) insufficient gas, (17%) equipment problems, (9%) entanglement or entrapment AGE deaths were often associated with panic. Disabling agents associated with AGE cases included:(96%) emergency ascent. Loss of consciousness was typical, followed by drowning for divers who remained in the water after surfacing. Cardiac incidents (26%), where chest discomfort was indicated by the diver, distress displayed with no obvious cause, a history of cardiac disease and autopsy evidence. There were few overt triggers or disabling agents identified, but reports suggested that about 60% of the decedents displayed symptoms of dyspnea, fatigue, chest pain or other distress, and 10% displayed these symptoms before the dive. Problems were noticed before entering the water in 24% of these cases, at the bottom in 46% of cases, and after starting the ascent in 20% of cases Loss of consciousness could occur at any time. Autopsy reports usually showed evidence of significant cardiovascular disease but seldom myocardial damage, which suggests that fatal dysrhythmias or drowning may have occurred before heart muscle injury could develop. Disabling cardiac incidents were associated with cardiovascular disease and age greater than 40 years, but no significant association with body mass index. Manner of death: Trauma (5%), where a traumatic incident was witnessed or determined by autopsy. The cause of injury is usually obvious, and included incidents of being struck by a watercraft, tumbled over a rocky shoreline by surf, electric shock, and interactions with marine animals. Some could possibly have been avoided by the diver. Traumatic injuries were most commonly associated with rough water conditions and being a frequent diver. Manner of death: Decompression sickness (3.5%), based on symptoms, signs and autopsy findings. 
Triggers for DCS included:insufficient gas, followed by emergency ascent with omitted decompression. multiple repetitive dives with short surface intervals. Manner of death: gas lost in a regulator free-flow uncontrolled ascent due to dry suit inflator malfunction dragged deep by a speared fish DCS was associated with deep diving, diving alone, and emergency ascent with omitted decompression Unexplained loss of consciousness (2.5%), where the diver was discovered unconscious without obvious cause.Triggers may have included deep dives, diabetes and nitrox dives, including a seizure witnessed at a depth where the oxygen partial pressure would have been approximately 1 bar, normally considered safe. Manner of death: Loss of consciousness was associated with diabetes, frequent diving, and learner divers. Inappropriate gas (2%), Breathing gas supply contaminated by toxic levels of carbon monoxide, or selection of gas with excessive or insufficient oxygen content for the depth.CNS oxygen toxicity, in some cases associated with medications. Manner of death: Carbon monoxide poisoning from contaminated cylinder gas Hypoxia, from incorrect gas choice and from oxygen content depleted by corrosion in the cylinder Association and causality The traditional procedure for developing diving safety recommendations is based on the assumption that associations of circumstances with fatalities are causative. This is reasonable in cases where the cause both precedes the effect and is logically clearly connected, such as where entanglement precedes asphyxia, but in many cases indirect associations are not clearly causative and require further verification. This may not be possible when there is insufficient data. Confident causal inference requires consistent associations that do not conflict with logical medical and engineering reasoning.Analysis of case information for diving fatalities has identified a wide variety of triggers and disabling agents, but has also shown that most fatalities are associated with a small group of these triggers and disabling agents, which suggests that a large reduction in fatalities could be achieved by concentrating on remedying these key factors. Many of these could be improved by training and practice, some by a change of attitude, but some diving fatalities appear to be unavoidable as the risk is inherent in the activity and depends on factors that are not under the control of the diver.The most frequent trigger appears to be insufficient breathing gas. This can obviously be avoided by paying more attention to gas management and having a reliable emergency gas supply available. The next most frequent trigger, entanglement, can largely be avoided by keeping clear of obvious entanglement hazards, and can be mitigated by extrication skills, tools and an adequate gas supply while busy. A competent buddy is clearly of great value in cases where the diver cannot see or reach the snag point. The third ranking trigger was equipment failure, but the variety of failures possible is large, and diving equipment in good condition is generally very reliable. No particular item appears to be obviously less reliable. Good maintenance, testing of function before use, carrying redundant critical equipment and skill at correcting the more critical malfunctions are fairly obvious remedies.The most frequent disabling agent in response to a trigger appears to be emergency ascent. 
Clearly, avoiding the trigger would eliminate the disabling agent, and this should be the top priority, but the ability to cope effectively with an emergency that does occur would break the sequence of uncontrolled and harmful events, and probably avoid a fatality. A fully independent alternate air source or a fully competent and reliable buddy are the obvious solutions, as more than half of the victims were on their own preceding death.Inappropriate buoyancy was the most frequently identified adverse event, with negative buoyancy more common than positive buoyancy. On some occasions the buoyancy problem was sudden and control was lost quickly, but on many occasions there was a longer term effect of non-catastrophic but chronic over-weighting which led to overexertion and rapid gas consumption, leaving the diver less capable of coping with the stress of the next problem to occur. Buoyancy issues could be a more important contributing factor than is immediately apparent. Contributory factors: The "DAN Annual Diving Report 2016 edition" lists their Ten Most Wanted Improvements in Scuba as:: 5  Correct weighting Greater buoyancy control More attention to gas planning Better ascent rate control Increased use of checklists Fewer equalizing injuries Improved cardiovascular health in divers Diving more often (or more pre-trip refresher training) Greater attention to diving within limits Fewer equipment issues / improved maintenance Diving techniques, competence, and experience More than half of diving fatalities may be a consequence of violations of accepted good practice. Divers who died for reasons other than a medical cause were found to be about 7 times more likely to have one or more violations of recommended practice associated with the fatality.The DAN fatalities workshop of 2011 found that there is a real problem that divers do not follow the procedures they have been trained in, and dive significantly beyond their training, experience, and fitness levels, and that this the basic cause of most accidents. In litigation involving diving accidents, the legal panel reported that 85% to 90% of the cases were attributable to diver error. This is consistent with several scientific studies. Medical issues are a significant part of the problem, and certified divers are responsible for assessing their own fitness and ability to do any particular dive. Experience was also cited as a significant factor, with occasional divers at higher risk than regular divers, and the majority of fatalities had only entry level or slightly higher qualification ("Advanced open-water diver" certification is included in this grouping).A large percentage (40 to 60%) of deaths in the Edmonds summary were associated with panic, a psychological reaction to stress which is characterized by irrational and unhelpful behaviour, which reduces the chances of survival. Panic typically occurs when a susceptible diver is in a threatening and unfamiliar situation, such as running out of breathing gas, or loss of ability to control depth, and is commonly complicated by inappropriate response to the triggering situation, which generally makes the situation worse. Evidence of panic is derived from behavioural reports from eyewitnesses. Contributory factors: Inadequate gas supply The ANZ survey found in 56% of fatalities and the DAN survey in 41%, that the diver was either running low or was out of gas. When equipment was tested following death, few victims had an ample gas supply remaining. 
The surveys indicated that most problems started when the diver became aware of a low on air situation. 8% of the divers died while trying to snorkel on the surface, apparently trying to conserve air. Concern about a shortage of air may affect the diver's ability to cope with a second problem which may develop during the dive, or may cause the diver to surface early and possibly alone in a stressed state of mind, where he is then unable to cope with surface conditions. Contributory factors: Buoyancy problems In the ANZ survey, 52% of the fatalities had buoyancy problems. Most of these were due to inadequate buoyancy, but 8% had excessive buoyancy. In the DAN survey buoyancy problems were the most common trigger event leading to death. Buoyancy changes associated with wetsuits were found to be a significant factor. Based on a formula for approximate weight requirement based on wetsuit style and thickness, 40% of the divers who died were found to be grossly over-weighted at the surface. This would have been aggravated by suit compression at depth.A correctly weighted diver should be neutrally buoyant at or near the surface with cylinders nearly empty. In this state, descent and ascent are equally easy. This requires the diver to be slightly negative at the start of the dive, due to the weight of the gas in the full cylinders, but this and the buoyancy loss due to suit compression should be is easily compensated by partial inflation of the buoyancy compensator. The practice of over-weighting is dangerous at it may overwhelm the capacity of the buoyancy compensator and makes the buoyancy changes with depth more extreme and difficult to correct. A failure of the buoyancy compensator would be exacerbated. This dangerous practice is unfortunately promoted by some instructors as it expedites shallow water training and allows divers to learn to descend without fully learning the appropriate skills. Greater skill is required to dive safely with more weight than is necessary, but no amount of skill can compensate for insufficient weighting during decompression stops. On dives where decompression is planned, competent divers will often carry a bit more weight than strictly necessary to ensure that in a situation where they have lost or used up all their gas and are relying on a supply from a team member, they do not have to struggle to stay down at the correct stop depth. Some divers may be unaware of the need to adjust weight to suit any change in equipment that may affect buoyancy. Some dive shops do not provide facilities for the diver to adjust weight to suit the combined equipment when renting a full set of gear to someone who has not used that combination before, and just add a few weights to ensure the diver can get down at the start of the dive. Contributory factors: In a survey on buddy diver fatality it was found that regardless of who was first to be low on air, the over-weighted diver was six times more likely to die.In spite of being heavily reliant on their buoyancy compensators, many divers also misused them. Examples of this include accidental inflation or over-inflation causing rapid uncontrolled ascents, confusion between the inflation and dump valves, and inadequate or slow inflation due to being deep or low on air. The drag caused by a buoyancy compensator inflated to offset the weight belt can contribute to exhaustion in divers attempting to swim to safety on the surface. 
The American Academy of Underwater Sciences reported in 1989 that half the cases of decompression sickness were related to loss of buoyancy control. When twin-bladder buoyancy compensators are used, confusion as to how much gas is in each bladder can lead to a delay in appropriate response, by which time control of the ascent may have already been lost. Contributory factors: Failure to ditch weights: 90% of the fatalities did not ditch their weights. Those on the surface had to swim towards safety carrying several kilograms of unnecessary weight, which made staying at the surface more difficult than it needed to be. In some fatalities the weights had been released but became entangled. In other cases, the belt could not be released because it was worn under other equipment, or the release buckle was inaccessible because a weight had slid over it, or it had rotated to the back of the body. Other fatalities have occurred where release mechanisms have failed. Contributory factors: Buddy system failures: In spite of the general acceptance, teaching and recommendation of the buddy system by most, if not all, diver certification organisations, only 14% of divers who died still had their buddy with them at the time. In a Hawaiian study 19% of the fatalities died with their buddy present. In the ANZ study 33% of the fatalities either dived alone or voluntarily separated from their buddies before the incident, 25% separated after a problem developed and 20% were separated by the problem. In the DAN study, 57% of those who started diving with a buddy were separated at the time of death. The buddy is primarily there to assist when things go wrong to the extent that the diver cannot cope alone, and the absence of a buddy is not in itself a threat to life. Buddy separation cannot be a cause of death; it is simply a failure of an engineering redundancy, leaving the diver without backup in case of specific emergencies, and the appropriate response is to abort the dive, as for any other failure of a singly redundant safety critical item. However, unplanned buddy separation may imply that the missing buddy has already run into trouble beyond their capacity to resolve. A common cause of separation was one diver running low on air and leaving their buddy to continue the dive alone. In some cases more than two divers dived together, without adequate team planning, leading to confusion as to who was responsible for whom. Groups of divers following a dive leader without formal buddy pairing before the dive would be split into pairs to surface by the dive leader as they reached low air status. This would frequently pair the least experienced and competent divers for the ascent, including those over-breathing due to anxiety. In other cases, the survivor was leading the victim and not immediately aware of the problem. It is common for the more experienced diver to lead, and also common for the follower not to remain in a position where he can easily be monitored, so the follower may only get intermittent attention and may be inconveniently situated when something goes wrong. By the time the lead diver notices the absence of the buddy it may be too late to assist. Each buddy is responsible for ensuring that the other knows where they are at all times. Contributory factors: Buddy rescue: In a minority of cases the buddy was present at the time of death. In 1% of cases the buddy died attempting rescue.
In at least one case the survivor had to forcibly retrieve their primary demand valve from a buddy who was apparently unwilling or unable to share it after the secondary demand valve was rejected during an assisted ascent. Contributory factors: Buddy breathing: 4% of fatalities were associated with failed buddy breathing. In a study of failed buddy breathing conducted by NUADC, more than half were attempted at depths greater than 20 metres. In 29% the victim's mask was displaced, and a lung over-pressure injury occurred in 12.5% of cases. One in 8 victims refused to return the demand valve; however, donating a regulator rarely results in the donor becoming the victim. The use of a secondary (octopus regulator) second stage or a completely separate emergency air supply (bailout cylinder) would appear to be a safer alternative. Contributory factors: Physiological factors: A survey of DAN America members during 2000 to 2006 indicated a low incidence of cardiac-related fatalities in divers less than 40 years old. The rates increased until about 50 years old and stabilised for older divers at a relative risk of approximately 13 times greater than for younger divers. Relative risk for older divers was also found to be greater for asphyxia (3.9 times) and arterial gas embolism (2.5 times). Relative risk between males and females reduced from about 6 to 1 at 25 years to even at 65 years. DAN Europe figures follow a similar trend. Contributory factors: In about 25% of fatalities the victim had a pre-existing condition which would widely be considered a contraindication to diving. Some disorders have no demonstrable pathology and are easily overlooked in an investigation, which results in incomplete understanding of the incident. Drowning can obscure some pathologies which may then not show up at autopsy. Fatigue was a factor in a significant number of cases (28% according to Edmonds). Fatigue is caused by excessive exertion, is aggravated by physical unfitness, and reduces the reserves available for survival. Factors cited as causes of fatigue include excessive drag due to over-weighting, drag due to over-inflation of the BCD, and long surface swims in adverse sea conditions, and it was not restricted to unfit divers. Fatigue was also associated with salt-water aspiration syndrome, cardiac problems and asthma. Salt water aspiration was a factor in 37% of cases in the Edmonds summary. This refers to inhalation of a small amount of sea water by the conscious diver, often in the form of spray. Salt water aspiration may be caused by a regulator leak, rough conditions on the surface, or residual water in the regulator after regulator recovery or buddy breathing. Salt water aspiration may cause respiratory distress, fatigue or panic and other complications. Autopsy evidence of pulmonary barotrauma was found in 13% of the cases summarised by Edmonds et al. This was sometimes a complicating factor, but at other times the direct cause of death. Factors associated with pulmonary barotrauma include panic, rapid buoyant ascent, asthma and regulator failure. In half of these cases a cause for the barotrauma was identified, but a roughly equal number remain unexplained. In cases where the Edmonds summary found cardiac failure was implicated there was either gross cardiac pathology or a clinical indication of cardiac disease in the autopsy findings. 26% of deaths in the DAN studies were due to cardiac failure. 60% of these victims complained of chest pain, dyspnoea or feeling unwell before or during the dive.
Cardiac causes are implicated in about 45% of scuba deaths in divers over 40 years old, and they tend to be relatively experienced divers, frequently with a history of cardiac disease or high blood pressure. The associated triggers include exercise, drugs, hypoxia from salt water aspiration, cardio-pulmonary reflexes, respiratory abnormalities, restrictive dive suits and harness, and cold exposure. In at least 9% of fatalities in the ANZ survey cited by Edmonds et al. the diver was asthmatic, and in at least 8% of the cases asthma contributed to the death. In other surveys this correlation is not so clear. Surveys have shown that between 0.5% and 1% of recreational divers are asthmatics. Edmonds considers that the statistics imply that asthma is a significant risk factor and that asthmatics should not be permitted to dive. This opinion was prevalent for a long time, but recent studies by DAN suggest that asthma may be managed successfully in some cases. Factors contributing to death in this group include panic, fatigue and salt water aspiration, and the cause of death was usually drowning or pulmonary barotrauma. The diving environment can provoke or aggravate asthma in several ways, such as salt water aspiration, breathing cold dry air, strenuous exertion, hyperventilation, and high work of breathing. In 10% of the cases summarised by Edmonds et al., vomiting initiated or contributed to the accident. It was often caused by sea sickness or salt water aspiration or ingestion, but ear problems and alcohol were also cited as causes. Nitrogen narcosis was cited as a contributory or triggering factor in 9% of cases reviewed by Edmonds et al., but was never the sole cause of death. Respiratory disease, drugs and decompression sickness are also listed among the physiological factors. Equipment: Edmonds et al. (2014) suggest that a significant percentage of deaths are associated with equipment failure (35%) or misuse (35%), while the diving fatalities workshop of 2012 found that equipment failure per se was uncommon. This is not necessarily contradictory, as they include incompetent operation under equipment failure and specify overlap between malfunction and misuse. In 14% of deaths there was a regulator fault reported, and in 1% the regulator was misused. Subsequent testing of the regulators showed that most of the problems were caused by leaks resulting in inhalation of salt water, but in some cases there was excessive breathing resistance following a mechanical dysfunction. In a few cases the regulator failed catastrophically, or the hose burst. The difficulty of breathing from the regulator was often aggravated by other factors such as panic, exhaustion or badly adjusted buoyancy. In 8% of cases the buoyancy compensator malfunctioned. This was usually due to a problem with the inflator mechanism, but in some cases the BCD could not stay inflated. In 6% of the fatalities, the buoyancy compensator was not used competently, usually by overinflation which caused an uncontrolled ascent, or deflating when more buoyancy was required at the surface. Overweighting can also be classified as misuse of equipment. Contributory factors: Edmonds et al. found that 13% of victims lost one or both fins. This was sometimes due to defective or ill-fitting fins, but in most cases the cause was not apparent. In 12% of deaths there were problems associated with the cylinder, usually from user error, such as use of an underfilled or undersized cylinder, the cylinder becoming unsecured from the harness, and failure to open the cylinder valve.
In less than 5% of fatalities, there were problems due to malfunction or misuse of the weight belt (excluding overweighting, which is not a failure of the equipment), harness, mask, exposure suit or submersible pressure gauge, or due to entanglement in lines deployed by the diver. Contributory factors: Environment: Edmonds et al. indicate that 25% of fatal incidents started at the surface, and 50% of the divers died at the surface. In many cases, the divers surfaced because they ran out of breathing air. Difficult water conditions were implicated in 36% of fatalities in the Edmonds et al. summary. These included current stronger than the diver could manage, rough water, surf, surge from wave movement, and impaired visibility caused by these conditions. These conditions were frequently encountered when the diver was obliged to surface in an unsuitable place due to earlier problems, and were often exacerbated by overweighting and/or the high drag of an excessively inflated buoyancy compensator, leading to exhaustion or panic which resulted in drowning. Excessive depth was considered a factor in 12% of fatalities summarized by Edmonds et al. The fatal dive was often the deepest ever for the victim. Greater depth can expose a diver to factors such as increased air consumption, impaired judgment caused by nitrogen narcosis, colder water, reduced thermal insulation of a compressed wetsuit, reduced visibility and lighting, slower response of buoyancy compensator inflation, increased work of breathing, greater heat loss when using helium mixtures, higher risk of decompression sickness and a necessarily prolonged ascent time. Other environmental factors cited as contributory to fatalities include caves, marine animal injury (including shark and other animal bites, and marine stings), difficulties entering and exiting the water, cold, entanglements, entrapment, and night diving. Accident investigation: Diving fatality investigations are intended to find the cause of death by identifying factors that caused the fatal incident. Causes of diving accidents are the triggering events that, when combined with an inadequate response, lead to an adverse consequence which may be classified as a notifiable incident or an accident when injury or death follows. These causes can be categorised as human factors, equipment problems and environmental factors. Equipment problems and environmental factors are also often influenced by human error. Three main areas of investigation are common: Medical investigation looks into the diver's health and medical factors that may have led to the cause of death. Accident investigation: Equipment is investigated to look for issues that may have contributed to a cause of death. Accident investigation: Procedural investigation considers whether the diver followed appropriate procedures, adequately prepared themselves and their equipment before diving, and whether they went diving in conditions beyond their training and experience level. Lack of solid information about the underlying causes of diving accidents and fatalities creates uncertainty, and this is the principal factor leading to litigation, higher insurance premiums, massive litigation costs and ultimately the continued loss of life. Accident investigation: There is usually some form of investigation following a diving fatality. There may be several investigators representing different parties. Police are likely to look for evidence of homicide. The maritime safety authority will investigate in cases where a death occurs while diving from a vessel.
When the fatality involves a person at work, the occupational health and safety authority may investigate, and investigators from the deceased's insurance company and from the insurance companies of the dive operator and certification agency are likely to be involved. In most cases, the investigation takes place some time after the event. In cases where death has already occurred, the police may meet the boat, or travel to a shore site. An investigation by someone representing a sector of the diving industry may not take place until weeks or even months after the incident. It depends on how soon the event is reported, how long the paperwork takes, how soon the insurance carrier appoints an investigator and the availability of a suitable investigator. No matter how quickly an investigation is launched, in most cases the body will have been recovered and resuscitation attempted, equipment will have been removed and possibly damaged or lost, and the people at the site will have returned to their homes. The equipment may have been mishandled by authorities who are unfamiliar with the gear and have stored it improperly, compromising the evidence. People who would be likely to be considered witnesses include: Any instructional staff involved if it was a training dive. Accident investigation: Any crew-members of the boat if the dive was off a boat. Other divers who were diving at the site at the time of the incident. Any rescue and recovery personnel who may have been involved. Any members of a professional dive team if one of their members was involved. Accident investigation: Equipment testing: Equipment testing is an important part of dive accident and fatality analysis. As stakeholders in the community have different and occasionally conflicting needs when it comes to such testing, tests should be done as soon as possible to avoid degradation of evidence, and the testing should be done by impartial investigators, with all relevant equipment treated as evidence and legally acceptable procedures for controlling custody of the evidence. Currently the procedures for equipment testing after diving accidents are poorly standardized. Important procedural items include when testing should be conducted, who is responsible for the testing, what equipment should be tested and what tests should be done. This requires appropriate training of first responders and law enforcement agencies, availability of testing equipment, development of suitable test protocols, and funding to conduct the testing. Procedures for testing rebreathers differ from those for testing open circuit equipment. Life-support equipment is an integral part of diving, and dive equipment is generally robust and reliable, but bad maintenance, design flaws, improper use, or other factors may cause or contribute to an incident. When equipment issues are not contributory to an incident, they should be excluded so that the causative factors may be correctly determined. Accident investigation: Forensic autopsy: If diving fatalities are thoroughly investigated it may be possible to determine a trigger, or root cause, for the accident. Data collection and analysis allows identification of the most common triggers and contributing factors associated with fatal diving incidents. Forensic autopsies go beyond the detailed description of the internal organs and include a thorough external examination looking for injuries, injury patterns, trace evidence and clues to how the body and the environment may have interacted.
Diving deaths are relatively uncommon, and may be unfamiliar to the pathologist. Accident investigation: The forensic pathologist also needs to understand the limitations of autopsy findings in diving-related deaths and realize that there are common postmortem artifacts that can be misinterpreted, resulting in erroneous conclusions. — James Carruso, Regional Armed Forces Medical Examiner, Navy Recruiting Command, 2011. Legal issues: Scuba diving fatalities have a major financial impact by way of lost income, lost business, insurance premium increases and high litigation costs. The lack of reliable and reasonably complete information about the underlying causes of diving fatalities creates uncertainty. Inaccurate findings following autopsies where the examiner had no experience in diving fatalities and had not followed the relevant protocols are common, and in the majority of cases the primary causative factors are never identified, leading to opportunistic litigation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**McWord** McWord: A McWord is a word containing the prefix Mc-, derived from the first syllable of the name of the McDonald's restaurant chain. Words of this nature are either official marketing terms of the chain (such as McNugget), or are neologisms designed to evoke pejorative associations with the restaurant chain or fast food in general, often for qualities of cheapness, inauthenticity, or the speed and ease of manufacture. They are also used in non-consumerism contexts as a pejorative for heavily commercialized or globalized things and concepts. Examples: Official McDonald's products and branding concepts Mayor McCheese McCafé McDonaldland McInternet – A free Wi-Fi service in some U.S. McDonald's restaurants. In Venezuela and Brazil, it is an Internet cafe service offered in several McDonald's restaurants. McState – The McDonald's job and career search service. McWorld – The term was used in a mid-1990s McDonald's advertising campaign depicting a world ruled by children. It is also used in a critical way to emphasize the deprecation of local culture in favor of a global culture prescribed by large corporations. McNuggets McChicken McDouble McRib McFlurry McArabia McMuffin McWords not officially related to McDonald's McChurch – A megachurch. McDonaldization – the process by which a society takes on the characteristics of a fast-food restaurant. McDojo – A martial arts school (dojo) seen as sacrificing pedagogic principles in favor of offering rapid advancement through the various ranks, often requiring a fee to be paid to achieve a higher rank (often denoted by a colored belt, hence the use of another pejorative name, "belt factory", analogous to a degree mill). McJob – A low-paying job in which one serves as an interchangeable cog in a corporate machine; originally appearing in an article in The Washington Post in 1986 and later popularised by Douglas Coupland's novel Generation X: Tales for an Accelerated Culture. McLibel case and McLibel (film) McMansion – Quickly-built mansions; a group of large houses built in the same style in the same area. McMindfulness – A term coined by Ron Purser debunking the "mindfulness revolution". McOndo – A Latin American literary movement. The name is a spoof on the fictional village of Macondo. McPaper (or McNews) – A newspaper that is considered manufactured and "for the masses" because of its simplistic prose style and flashy use of colors. Typically used in reference to USA Today. McRefugee – people who stay overnight in a 24-hour McDonald's.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Affinity chromatography** Affinity chromatography: Affinity chromatography is a method of separating a biomolecule from a mixture, based on a highly specific macromolecular binding interaction between the biomolecule and another substance. The specific type of binding interaction depends on the biomolecule of interest; antigen and antibody, enzyme and substrate, receptor and ligand, or protein and nucleic acid binding interactions are frequently exploited for isolation of various biomolecules. Affinity chromatography is useful for its high selectivity and resolution of separation, compared to other chromatographic methods. Principle: Affinity chromatography has the advantage of specific binding interactions between the analyte of interest (normally dissolved in the mobile phase), and a binding partner or ligand (immobilized on the stationary phase). In a typical affinity chromatography experiment, the ligand is attached to a solid, insoluble matrix—usually a polymer such as agarose or polyacrylamide—chemically modified to introduce reactive functional groups with which the ligand can react, forming stable covalent bonds. The stationary phase is first loaded into a column to which the mobile phase is introduced. Molecules that bind to the ligand will remain associated with the stationary phase. A wash buffer is then applied to remove non-target biomolecules by disrupting their weaker interactions with the stationary phase, while the biomolecules of interest will remain bound. Target biomolecules may then be removed by applying a so-called elution buffer, which disrupts interactions between the bound target biomolecules and the ligand. The target molecule is thus recovered in the eluting solution. Affinity chromatography does not require the molecular weight, charge, hydrophobicity, or other physical properties of the analyte of interest to be known, although knowledge of its binding properties is useful in the design of a separation protocol. Types of binding interactions commonly exploited in affinity chromatography procedures include the antigen–antibody, enzyme–substrate, receptor–ligand and protein–nucleic acid interactions mentioned above. Batch and column setups: Binding to the solid phase may be achieved by column chromatography whereby the solid medium is packed onto a column, the initial mixture run through the column to allow settling, a wash buffer run through the column and the elution buffer subsequently applied to the column and collected. These steps are usually done at ambient pressure. Alternatively, binding may be achieved using a batch treatment, for example, by adding the initial mixture to the solid phase in a vessel, mixing, separating the solid phase, removing the liquid phase, washing, re-centrifuging, adding the elution buffer, re-centrifuging and removing the eluate. Batch and column setups: Sometimes a hybrid method is employed such that the binding is done by the batch method, but the solid phase with the target molecule bound is packed onto a column and washing and elution are done on the column. Batch and column setups: The ligands used in affinity chromatography are obtained from both organic and inorganic sources. Examples of biological sources are serum proteins, lectins and antibodies. Inorganic sources are boronic acid, metal chelates and triazine dyes. A third method, expanded bed adsorption, which combines the advantages of the two methods mentioned above, has also been developed. The solid phase particles are placed in a column where liquid phase is pumped in from the bottom and exits at the top.
The gravity of the particles ensures that the solid phase does not exit the column with the liquid phase. Batch and column setups: Affinity columns can be eluted by changing salt concentrations, pH, pI, charge and ionic strength directly or through a gradient to resolve the particles of interest. Batch and column setups: More recently, setups employing more than one column in series have been developed. The advantage compared to single column setups is that the resin material can be fully loaded since non-binding product is directly passed on to a consecutive column with fresh column material. These chromatographic processes are known as periodic counter-current chromatography (PCC). The resin costs per amount of produced product can thus be drastically reduced. Since one column can always be eluted and regenerated while the other column is loaded, two columns are already sufficient to make full use of the advantages. Additional columns can give additional flexibility for elution and regeneration times, at the cost of additional equipment and resin costs. Specific uses: Affinity chromatography can be used in a number of applications, including nucleic acid purification, protein purification from cell free extracts, and purification from blood. By using affinity chromatography, one can separate proteins that bind to a certain fragment from proteins that do not bind that specific fragment. Because this technique of purification relies on the biological properties of the protein needed, it is a useful technique and proteins can be purified many-fold in one step. Various affinity media: Many different affinity media exist for a variety of possible uses. Briefly, they include (generalized) activated/functionalized media, which work as a functional spacer and support matrix and eliminate the handling of toxic reagents. Amino acid media are used with a variety of serum proteins, proteins, peptides, and enzymes, as well as rRNA and dsDNA. Avidin–biotin media are used in the purification process of biotin/avidin and their derivatives. Specific uses: Carbohydrate bonding is most often used with glycoproteins or any other carbohydrate-containing substance; carbohydrate is used with lectins, glycoproteins, or any other carbohydrate metabolite protein. Dye ligand media are nonspecific but mimic biological substrates and proteins. Glutathione is useful for separation of GST-tagged recombinant proteins. Heparin is a generalized affinity ligand, and it is most useful for separation of plasma coagulation proteins, along with nucleic acid enzymes and lipases. Hydrophobic interaction media are most commonly used to target free carboxyl groups and proteins. Specific uses: Immunoaffinity media (detailed below) utilize the high specificity of antigens and antibodies to separate; immobilized metal affinity chromatography is detailed further below and uses interactions between metal ions and proteins (usually specially tagged) to separate; nucleotide/coenzyme media work to separate dehydrogenases, kinases, and transaminases. Nucleic acid media function to trap mRNA, DNA, rRNA, and other nucleic acids/oligonucleotides. The Protein A/G method is used to purify immunoglobulins. Speciality media are designed for a specific class or type of protein/coenzyme; this type of media will only work to separate a specific protein or coenzyme. Specific uses: Immunoaffinity: Another use for the procedure is the affinity purification of antibodies from blood serum.
If the serum is known to contain antibodies against a specific antigen (for example if the serum comes from an organism immunized against the antigen concerned) then it can be used for the affinity purification of that antigen. This is also known as Immunoaffinity Chromatography. For example, if an organism is immunised against a GST-fusion protein it will produce antibodies against the fusion-protein, and possibly antibodies against the GST tag as well. The protein can then be covalently coupled to a solid support such as agarose and used as an affinity ligand in purifications of antibody from immune serum. Specific uses: For thoroughness, the GST protein and the GST-fusion protein can each be coupled separately. The serum is initially allowed to bind to the GST affinity matrix. This will remove antibodies against the GST part of the fusion protein. The serum is then separated from the solid support and allowed to bind to the GST-fusion protein matrix. This allows any antibodies that recognize the antigen to be captured on the solid support. Elution of the antibodies of interest is most often achieved using a low pH buffer such as glycine pH 2.8. The eluate is collected into a neutral tris or phosphate buffer, to neutralize the low pH elution buffer and halt any degradation of the antibody's activity. This is a nice example as affinity purification is used to purify the initial GST-fusion protein, to remove the undesirable anti-GST antibodies from the serum and to purify the target antibody. Specific uses: Monoclonal antibodies can also be selected to bind proteins with great specificity, where protein is released under fairly gentle conditions. This can become of use for further research in the future.A simplified strategy is often employed to purify antibodies generated against peptide antigens. When the peptide antigens are produced synthetically, a terminal cysteine residue is added at either the N- or C-terminus of the peptide. This cysteine residue contains a sulfhydryl functional group which allows the peptide to be easily conjugated to a carrier protein (e.g. Keyhole limpet hemocyanin (KLH)). The same cysteine-containing peptide is also immobilized onto an agarose resin through the cysteine residue and is then used to purify the antibody. Specific uses: Most monoclonal antibodies have been purified using affinity chromatography based on immunoglobulin-specific Protein A or Protein G, derived from bacteria.Immunoaffinity chromatography with monoclonal antibodies immobilized on monolithic column has been successfully used to capture extracellular vesicles (e.g., exosomes and exomeres) from human blood plasma by targeting tetraspanins and integrins found on the surface of the EVs.Immunoaffinity chromatography is also the basis for immunochromatographic test (ICT) strips, which provide a rapid means of diagnosis in patient care. Using ICT, a technician can make a determination at a patient's bedside, without the need for a laboratory. ICT detection is highly specific to the microbe causing an infection. Specific uses: Immobilized metal ion affinity chromatography Immobilized metal ion affinity chromatography (IMAC) is based on the specific coordinate covalent bond of amino acids, particularly histidine, to metals. 
This technique works by allowing proteins with an affinity for metal ions to be retained in a column containing immobilized metal ions, such as cobalt, nickel, or copper for the purification of histidine-containing proteins or peptides, or iron, zinc or gallium for the purification of phosphorylated proteins or peptides. Many naturally occurring proteins do not have an affinity for metal ions; therefore recombinant DNA technology can be used to introduce such a protein tag into the relevant gene. Methods used to elute the protein of interest include changing the pH, or adding a competitive molecule, such as imidazole. Specific uses: Recombinant proteins: Possibly the most common use of affinity chromatography is for the purification of recombinant proteins. Proteins with a known affinity are protein tagged in order to aid their purification. The protein may have been genetically modified so as to allow it to be selected for affinity binding; this is known as a fusion protein. Protein tags include hexahistidine (His), glutathione-S-transferase (GST) and maltose binding protein (MBP). Histidine tags have an affinity for nickel, cobalt, zinc, copper and iron ions which have been immobilized by forming coordinate covalent bonds with a chelator incorporated in the stationary phase. For elution, an excess amount of a compound able to act as a metal ion ligand, such as imidazole, is used. GST has an affinity for glutathione which is commercially available immobilized as glutathione agarose. During elution, excess glutathione is used to displace the tagged protein. Specific uses: Lectins: Lectin affinity chromatography is a form of affinity chromatography where lectins are used to separate components within the sample. Lectins, such as concanavalin A, are proteins which can bind specific alpha-D-mannose and alpha-D-glucose carbohydrate molecules. Commonly used lectin media in lectin affinity chromatography are Con A-Sepharose and WGA-agarose. Another example of a lectin is wheat germ agglutinin, which binds D-N-acetyl-glucosamine. The most common application is to separate glycoproteins from non-glycosylated proteins, or one glycoform from another glycoform. Although there are various ways to perform lectin affinity chromatography, the goal is to extract a sugar ligand of the desired protein. Specific uses: Specialty: Another use for affinity chromatography is the purification of specific proteins using a gel matrix that is unique to a specific protein. For example, the purification of E. coli β-galactosidase is accomplished by affinity chromatography using p-aminobenzyl-1-thio-β-D-galactopyranosyl agarose as the affinity matrix. p-Aminobenzyl-1-thio-β-D-galactopyranosyl agarose is used as the affinity matrix because it contains a galactopyranosyl group, which serves as a good substrate analog for E. coli β-galactosidase. This property allows the enzyme to bind to the stationary phase of the affinity matrix, and β-galactosidase is eluted by adding increasing concentrations of salt to the column. Specific uses: Alkaline phosphatase: Alkaline phosphatase from E. coli can be purified using a DEAE-cellulose matrix. Alkaline phosphatase has a slight negative charge, allowing it to weakly bind to the positively charged amine groups in the matrix. The enzyme can then be eluted out by adding buffer with higher salt concentrations. Boronate affinity chromatography: Boronate affinity chromatography consists of using boronic acid or boronates to elute and quantify amounts of glycoproteins.
Clinical adaptations have applied this type of chromatography for use in the long-term assessment of diabetic patients through analysis of their glycated hemoglobin. Serum albumin purification: Affinity purification of albumin and macroglobulin is helpful in removing excess albumin and α2-macroglobulin contamination when performing mass spectrometry. In affinity purification of serum albumin, the stationary phase used for collecting or attracting serum proteins can be Cibacron Blue-Sepharose. The serum proteins can then be eluted from the adsorbent with a buffer containing thiocyanate (SCN−). Weak affinity chromatography: Weak affinity chromatography (WAC) is an affinity chromatography technique for affinity screening in drug development. WAC is an affinity-based liquid chromatographic technique that separates chemical compounds based on their different weak affinities to an immobilized target. The higher affinity a compound has towards the target, the longer it remains in the separation unit, and this will be expressed as a longer retention time. A measure and ranking of affinity can be obtained by processing the retention times of the analyzed compounds. Affinity chromatography is part of a larger suite of techniques used in chemoproteomics-based drug target identification. Weak affinity chromatography: The WAC technology has been demonstrated against a number of different protein targets – proteases, kinases, chaperones and protein–protein interaction (PPI) targets. WAC has been shown to be more effective than established methods for fragment based screening. History: Affinity chromatography was conceived and first developed by Pedro Cuatrecasas and Meir Wilchek.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Polar hypersurface** Polar hypersurface: In algebraic geometry, given a projective algebraic hypersurface $C$ described by the homogeneous equation $f(x_0, x_1, x_2, \ldots) = 0$ and a point $a = (a_0 : a_1 : a_2 : \cdots)$, its polar hypersurface $P_a(C)$ is the hypersurface $a_0 f_0 + a_1 f_1 + a_2 f_2 + \cdots = 0$, where $f_i$ are the partial derivatives of $f$. The intersection of $C$ and $P_a(C)$ is the set of points $p$ such that the tangent at $p$ to $C$ meets $a$.
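A worked example may help (added for illustration; it is not part of the source text). Take the plane conic $C: f = x_0^2 + x_1^2 - x_2^2 = 0$ and the point $a = (1:0:0)$. The partial derivatives are $f_0 = 2x_0$, $f_1 = 2x_1$, $f_2 = -2x_2$, so the polar is $P_a(C): a_0 f_0 + a_1 f_1 + a_2 f_2 = 2x_0 = 0$, i.e. the line $x_0 = 0$. It meets $C$ in the two points $(0:1:1)$ and $(0:1:-1)$, and the tangent to $C$ at $(0:1:1)$, namely $x_1 = x_2$, does indeed pass through $a$.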
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sodium bicarbonate** Sodium bicarbonate: Sodium bicarbonate (IUPAC name: sodium hydrogencarbonate), commonly known as baking soda or bicarbonate of soda, is a chemical compound with the formula NaHCO3. It is a salt composed of a sodium cation (Na+) and a bicarbonate anion (HCO3−). Sodium bicarbonate is a white solid that is crystalline, but often appears as a fine powder. It has a slightly salty, alkaline taste resembling that of washing soda (sodium carbonate). The natural mineral form is nahcolite. It is a component of the mineral natron and is found dissolved in many mineral springs. Nomenclature: Because it has long been known and widely used, the salt has many different names such as baking soda, bread soda, cooking soda, and bicarbonate of soda and can often be found near baking powder in stores. The term baking soda is more common in the United States, while bicarbonate of soda is more common in Australia, United Kingdom and Ireland. Abbreviated colloquial forms such as sodium bicarb, bicarb soda, bicarbonate, and bicarb are common.The word saleratus, from Latin sal æratus (meaning "aerated salt"), was widely used in the 19th century for both sodium bicarbonate and potassium bicarbonate.Its E number food additive code is E500.The prefix bi in bicarbonate comes from an outdated naming system predating molecular knowledge. It is based on the observation that there is twice as much carbonate (CO3−2) per sodium in sodium bicarbonate (NaHCO3) as there is in sodium carbonate (Na2CO3). The modern chemical formulas of these compounds now express their precise chemical compositions which were unknown when the name bi-carbonate of potash was coined (see also: bicarbonate). Uses: Cooking Leavening In cooking, baking soda is primarily used in baking as a leavening agent. When it reacts with acid, carbon dioxide is released, which causes expansion of the batter and forms the characteristic texture and grain in cakes, quick breads, soda bread, and other baked and fried foods. The acid–base reaction can be generically represented as follows: NaHCO3 + H+ → Na+ + CO2 + H2OAcidic materials that induce this reaction include hydrogen phosphates, cream of tartar, lemon juice, yogurt, buttermilk, cocoa, and vinegar. Baking soda may be used together with sourdough, which is acidic, making a lighter product with a less acidic taste.Heat can also by itself cause sodium bicarbonate to act as a raising agent in baking because of thermal decomposition, releasing carbon dioxide at temperatures above 80 °C (180 °F), as follows: 2 NaHCO3 → Na2CO3 + H2O + CO2When used this way on its own, without the presence of an acidic component (whether in the batter or by the use of a baking powder containing acid), only half the available CO2 is released (one CO2 molecule is formed for every two equivalents of NaHCO3). Additionally, in the absence of acid, thermal decomposition of sodium bicarbonate also produces sodium carbonate, which is strongly alkaline and gives the baked product a bitter, "soapy" taste and a yellow color. Since the reaction occurs slowly at room temperature, mixtures (cake batter, etc.) can be allowed to stand without rising until they are heated in the oven. Uses: Baking powder Baking powder, also sold for cooking, contains around 30% of bicarbonate, and various acidic ingredients which are activated by the addition of water, without the need for additional acids in the cooking medium. 
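The stoichiometry behind the "only half the available CO2" remark above can be made concrete with a short calculation. The following sketch is illustrative only; the 5 g input and the function names are assumptions for the example, not from the source:

```python
# Illustrative CO2-yield comparison for the two leavening routes described above.
# Molar masses are standard values; the 5 g input is an arbitrary example amount.
M_NAHCO3 = 84.01  # g/mol, sodium bicarbonate
M_CO2 = 44.01     # g/mol, carbon dioxide

def co2_from_acid(grams_nahco3: float) -> float:
    """Acid-base route: NaHCO3 + H+ -> Na+ + CO2 + H2O (1 mol CO2 per mol NaHCO3)."""
    return grams_nahco3 / M_NAHCO3 * M_CO2

def co2_from_heat(grams_nahco3: float) -> float:
    """Thermal route: 2 NaHCO3 -> Na2CO3 + H2O + CO2 (only 0.5 mol CO2 per mol NaHCO3)."""
    return grams_nahco3 / M_NAHCO3 * 0.5 * M_CO2

grams = 5.0
print(f"{co2_from_acid(grams):.2f} g CO2 with acid, {co2_from_heat(grams):.2f} g CO2 by heat alone")
```

The factor of two between the two routes is exactly why recipes relying on heat alone leave residual sodium carbonate and a soapier taste, as noted above.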
Many forms of baking powder contain sodium bicarbonate combined with calcium acid phosphate, sodium aluminium phosphate, or cream of tartar. Baking soda is alkaline; the acid used in baking powder avoids a metallic taste when the chemical change during baking creates sodium carbonate. Uses: Pyrotechnics Sodium bicarbonate is one of the main components of the common "black snake" firework. The effect is caused by the thermal decomposition, which produces carbon dioxide gas to produce a long snake-like ash as a combustion product of the other main component, sucrose. Sodium bicarbonate is also used to delay combustion reactions by releasing CO2 and H2O when heated, both of which are flame retardants. Uses: Mild disinfectant It has weak disinfectant properties, and it may be an effective fungicide against some organisms. Because baking soda will absorb musty smells, it has become a reliable method for used book sellers when making books less malodorous. Uses: Fire extinguisher Sodium bicarbonate can be used to extinguish small grease or electrical fires by being thrown over the fire, as heating of sodium bicarbonate releases carbon dioxide. However, it should not be applied to fires in deep fryers; the sudden release of gas may cause the grease to splatter. Sodium bicarbonate is used in BC dry chemical fire extinguishers as an alternative to the more corrosive monoammonium phosphate in ABC extinguishers. The alkaline nature of sodium bicarbonate makes it the only dry chemical agent, besides Purple-K, that was used in large-scale fire suppression systems installed in commercial kitchens. Because it can act as an alkali, the agent has a mild saponification effect on hot grease, which forms a smothering, soapy foam. Uses: Neutralization of acids Sodium bicarbonate reacts spontaneously with acids, releasing CO2 gas as a reaction product. It is commonly used to neutralize unwanted acid solutions or acid spills in chemical laboratories. It is not appropriate to use sodium bicarbonate to neutralize base even though it is amphoteric, reacting with both acids and bases. Sports supplement Sodium bicarbonate is taken as a sports supplement to improve muscular endurance. Studies conducted mostly in males have shown that sodium bicarbonate is most effective in enhancing performance in short-term, high-intensity activities. Agriculture Sodium bicarbonate when applied on leaves, can prevent the growth of fungi; however, it does not kill the fungus. Excessive amount of sodium bicarbonate can cause discolouration of fruits (two percent solution) and chlorosis (one percent solution). Uses: Medical uses and health Sodium bicarbonate mixed with water can be used as an antacid to treat acid indigestion and heartburn. Its reaction with stomach acid produces salt, water, and carbon dioxide: NaHCO3 + HCl → NaCl + H2O + CO2(g)A mixture of sodium bicarbonate and polyethylene glycol such as PegLyte, dissolved in water and taken orally, is an effective gastrointestinal lavage preparation and laxative prior to gastrointestinal surgery, gastroscopy, etc.Intravenous sodium bicarbonate in an aqueous solution is sometimes used for cases of acidosis, or when insufficient sodium or bicarbonate ions are in the blood. In cases of respiratory acidosis, the infused bicarbonate ion drives the carbonic acid/bicarbonate buffer of plasma to the left, and thus raises the pH. For this reason, sodium bicarbonate is used in medically supervised cardiopulmonary resuscitation. 
Infusion of bicarbonate is indicated only when the blood pH is markedly low (< 7.1–7.0).Sodium bicarbonate has been shown to reduce contrast-induced nephropathy, the most common cause of acute renal failure.HCO3− is used for treatment of hyperkalemia, as it will drive K+ back into cells during periods of acidosis. Since sodium bicarbonate can cause alkalosis, it is sometimes used to treat aspirin overdoses. Aspirin requires an acidic environment for proper absorption, and a basic environment will diminish aspirin absorption in cases of overdose. Sodium bicarbonate has also been used in the treatment of tricyclic antidepressant overdose. It can also be applied topically as a paste, with three parts baking soda to one part water, to relieve some kinds of insect bites and stings (as well as accompanying swelling).Some alternative practitioners, such as Tullio Simoncini, have promoted baking soda as a cancer cure, which the American Cancer Society has warned against due to both its unproven effectiveness and potential danger in use. Edzard Ernst has called the promotion of sodium bicarbonate as a cancer cure "one of the more sickening alternative cancer scams I have seen for a long time".Sodium bicarbonate can be added to local anesthetics, to speed up the onset of their effects and make their injection less painful. It is also a component of Moffett's solution, used in nasal surgery.It has been proposed that acidic diets weaken bones. One systematic meta-analysis of the research shows no such effect. Another also finds that there is no evidence that alkaline diets improve bone health, but suggests that there "may be some value" to alkaline diets for other reasons.Antacid (such as baking soda) solutions have been prepared and used by protesters to alleviate the effects of exposure to tear gas during protests.Similarly to its use in baking, sodium bicarbonate is used together with a mild acid such as tartaric acid as the excipient in effervescent tablets: when such a tablet is dropped in a glass of water, the carbonate leaves the reaction medium as carbon dioxide gas (HCO3− + H+ → H2O + CO2↑ or, more precisely, HCO3− + H3O+ → 2 H2O + CO2↑). This makes the tablet disintegrate, leaving the medication suspended and/or dissolved in the water together with the resulting salt (in this example, sodium tartrate). Uses: Personal hygiene Sodium bicarbonate is also used as an ingredient in some mouthwashes. It has anticaries and abrasive properties. It works as a mechanical cleanser on the teeth and gums, neutralizes the production of acid in the mouth, and also acts as an antiseptic to help prevent infections. Sodium bicarbonate in combination with other ingredients can be used to make a dry or wet deodorant. Sodium bicarbonate may be used as a buffering agent, combined with table salt, when creating a solution for nasal irrigation.It is used in eye hygiene to treat blepharitis. This is done by addition of a teaspoon of sodium bicarbonate to cool water that was recently boiled, followed by gentle scrubbing of the eyelash base with a cotton swab dipped in the solution. Uses: Veterinary uses Sodium bicarbonate is used as a cattle feed supplement, in particular as a buffering agent for the rumen. Uses: Cleaning agent Sodium bicarbonate is used in a process for removing paint and corrosion called sodablasting. 
As a blasting medium, sodium bicarbonate is used to remove surface contamination from softer and less resilient substrates such as aluminium, copper or timber which could be damaged by silica sand abrasive media. A manufacturer recommends a paste made from baking soda with minimal water as a gentle scouring powder. Such a paste can be useful in removing surface rust, as the rust forms a water-soluble compound when in a concentrated alkaline solution. Cold water should be used, as hot-water solutions can corrode steel. Sodium bicarbonate attacks the thin protective oxide layer that forms on aluminium, making it unsuitable for cleaning this metal. A solution in warm water will remove the tarnish from silver when the silver is in contact with a piece of aluminium foil. Baking soda is commonly added to washing machines as a replacement for water softener and to remove odors from clothes. It is also almost as effective in removing heavy tea and coffee stains from cups as sodium hydroxide, when diluted with warm water. Uses: During the Manhattan Project to develop the nuclear bomb in the early 1940s, the chemical toxicity of uranium was an issue. Uranium oxides were found to stick very well to cotton cloth, and did not wash out with soap or laundry detergent. However, the uranium would wash out with a 2% solution of sodium bicarbonate. Clothing can become contaminated with toxic dust of depleted uranium (DU), which is very dense, hence used for counterweights in a civilian context, and in armour-piercing projectiles. DU is not removed by normal laundering; washing with about 6 ounces (170 g) of baking soda in 2 gallons (7.5 L) of water will help to wash it out. Uses: Odor control: It is often claimed that baking soda is an effective odor remover, and it is often recommended that an open box be kept in the refrigerator to absorb odor. This idea was promoted by the leading U.S. brand of baking soda, Arm & Hammer, in an advertising campaign starting in 1972. Though this campaign is considered a classic of marketing, leading within a year to more than half of American refrigerators containing a box of baking soda, there is little evidence that it is in fact effective in this application. Uses: Hydrogen gas production: Sodium bicarbonate dissolved in water can serve as an electrolyte in small-scale hydrogen gas production, although it is not usually used for this purpose. Hydrogen gas is produced via electrolysis of water, a process in which an electric current is applied through a volume of water, which causes the hydrogen atoms to separate from the oxygen atoms. This demonstration is usually done in high school chemistry classes to show electrolysis. Chemistry: Sodium bicarbonate is an amphoteric compound. Aqueous solutions are mildly alkaline due to the formation of carbonic acid and hydroxide ion: HCO3− + H2O → H2CO3 + OH−. Sodium bicarbonate can often be used as a safer alternative to sodium hydroxide, and as such can be used as a wash to remove any acidic impurities from a "crude" liquid, producing a purer sample.
Reaction of sodium bicarbonate and an acid produces a salt and carbonic acid, which readily decomposes to carbon dioxide and water: NaHCO3 + HCl → NaCl + H2CO3; H2CO3 → H2O + CO2(g). Sodium bicarbonate reacts with acetic acid (found in vinegar), producing sodium acetate, water, and carbon dioxide: NaHCO3 + CH3COOH → CH3COONa + H2O + CO2(g). Sodium bicarbonate reacts with bases such as sodium hydroxide to form carbonates: NaHCO3 + NaOH → Na2CO3 + H2O. Thermal decomposition: At temperatures from 80–100 °C (176–212 °F), sodium bicarbonate gradually decomposes into sodium carbonate, water, and carbon dioxide. The conversion is faster at 200 °C (392 °F): 2 NaHCO3 → Na2CO3 + H2O + CO2. Most bicarbonates undergo this dehydration reaction. Further heating converts the carbonate into the oxide (above 850 °C/1,560 °F): Na2CO3 → Na2O + CO2. These conversions are relevant to the use of NaHCO3 as a fire-suppression agent ("BC powder") in some dry-powder fire extinguishers. Stability and shelf life: If kept cool (room temperature) and dry (an airtight container is recommended to keep out moist air), sodium bicarbonate can be kept without a significant amount of decomposition for at least two or three years. History: The word natron has been in use in many languages throughout modern times (in the forms of anatron, natrum and natron) and originated (like Spanish, French and English natron as well as 'sodium') via Arabic naṭrūn (or anatrūn; cf. the Lower Egyptian “Natrontal” Wadi El Natrun, where a mixture of sodium carbonate and sodium hydrogen carbonate was used for the dehydration of mummies) from Greek nítron (νίτρον) (Herodotus; Attic lítron (λίτρον)), which can be traced back to ancient Egyptian ntr. The Greek nítron (soda, saltpeter) was also used in Latin (sal) nitrum and in German Salniter (the source of Nitrogen, Nitrat etc.). In 1791, French chemist Nicolas Leblanc produced sodium carbonate, also known as soda ash. The pharmacist Valentin Rose the Younger is credited with the discovery of sodium bicarbonate in 1801 in Berlin. In 1846, two American bakers, John Dwight and Austin Church, established the first factory in the United States to produce baking soda from sodium carbonate and carbon dioxide. Saleratus, potassium or sodium bicarbonate, is mentioned in the novel Captains Courageous by Rudyard Kipling as being used extensively in the 1800s in commercial fishing to prevent freshly caught fish from spoiling. In 1919, US Senator Lee Overman declared that bicarbonate of soda could cure the Spanish flu. In the midst of the debate on 26 January 1919, he interrupted the discussion to announce the discovery of a cure. "I want to say, for the benefit of those who are making this investigation," he reported, "that I was told by a judge of a superior court in the mountain country of North Carolina they have discovered a remedy for this disease." The purported cure implied a critique of modern science and an appreciation for the simple wisdom of simple people. "They say that common baking soda will cure the disease," he continued, "that they have cured it with it, that they have no deaths up there at all; they use common baking soda, which cures the disease." Production: Sodium bicarbonate is produced industrially from sodium carbonate: Na2CO3 + CO2 + H2O → 2 NaHCO3. It is produced on the scale of about 100,000 tonnes/year (as of 2001) with a worldwide production capacity of 2.4 million tonnes per year (as of 2002).
Commercial quantities of baking soda are also produced by a similar method: soda ash, mined in the form of the ore trona, is dissolved in water and treated with carbon dioxide. Sodium bicarbonate precipitates as a solid from this solution. Regarding the Solvay process, sodium bicarbonate is an intermediate in the reaction of sodium chloride, ammonia, and carbon dioxide. The product, however, shows low purity (75%). Production: NaCl + CO2 + NH3 + H2O → NaHCO3 + NH4Cl. Although of no practical value, NaHCO3 may be obtained by the reaction of carbon dioxide with an aqueous solution of sodium hydroxide: CO2 + NaOH → NaHCO3. Mining: Naturally occurring deposits of nahcolite (NaHCO3) are found in the Eocene-age (55.8–33.9 Mya) Green River Formation, Piceance Basin in Colorado. Nahcolite was deposited as beds during periods of high evaporation in the basin. It is commercially mined using common underground mining techniques such as bore, drum, and longwall mining in a fashion very similar to coal mining. It is also produced by solution mining, pumping heated water through nahcolite beds and crystallizing the dissolved nahcolite through a cooling crystallization process. In popular culture: Sodium bicarbonate, as "bicarbonate of soda", was a frequent source of punch lines for Groucho Marx in Marx Brothers movies. In Duck Soup, Marx plays the leader of a nation at war. In one scene, he receives a message from the battlefield that his general is reporting a gas attack, and Groucho tells his aide: "Tell him to take a teaspoonful of bicarbonate of soda and a half a glass of water." In A Night at the Opera, Groucho's character addresses the opening night crowd at an opera by saying of the lead tenor: "Signor Lassparri comes from a very famous family. His mother was a well-known bass singer. His father was the first man to stuff spaghetti with bicarbonate of soda, thus causing and curing indigestion at the same time." In the Joseph L. Mankiewicz classic All About Eve, the Max Fabian character (Gregory Ratoff) has an extended scene with Margo Channing (Bette Davis) in which, suffering from heartburn, he requests and then drinks bicarbonate of soda, eliciting a prominent burp. Channing promises to always keep a box of bicarb with Max's name on it.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Moyal bracket** Moyal bracket: In physics, the Moyal bracket is the suitably normalized antisymmetrization of the phase-space star product. The Moyal bracket was developed in about 1940 by José Enrique Moyal, but Moyal only succeeded in publishing his work in 1949 after a lengthy dispute with Paul Dirac. In the meantime this idea was independently introduced in 1946 by Hip Groenewold. Overview: The Moyal bracket is a way of describing the commutator of observables in the phase space formulation of quantum mechanics when these observables are described as functions on phase space. It relies on schemes for identifying functions on phase space with quantum observables, the most famous of these schemes being the Wigner–Weyl transform. It underlies Moyal’s dynamical equation, an equivalent formulation of Heisenberg’s quantum equation of motion, thereby providing the quantum generalization of Hamilton’s equations. Overview: Mathematically, it is a deformation of the phase-space Poisson bracket (essentially an extension of it), the deformation parameter being the reduced Planck constant ħ. Thus, its group contraction ħ→0 yields the Poisson bracket Lie algebra. Overview: Up to formal equivalence, the Moyal bracket is the unique one-parameter Lie-algebraic deformation of the Poisson bracket. Its algebraic isomorphism to the algebra of commutators bypasses the negative result of the Groenewold–van Hove theorem, which precludes such an isomorphism for the Poisson bracket, a question implicitly raised by Dirac in his 1926 doctoral thesis, the "method of classical analogy" for quantization. For instance, in a two-dimensional flat phase space, and for the Weyl-map correspondence, the Moyal bracket reads $\{\{f,g\}\} \stackrel{\mathrm{def}}{=} \frac{1}{i\hbar}(f \star g - g \star f) = \{f,g\} + O(\hbar^2)$, where ★ is the star-product operator in phase space (cf. Moyal product), while f and g are differentiable phase-space functions, and {f, g} is their Poisson bracket. More specifically, in operational calculus language, this equals $\{\{f,g\}\} = \frac{2}{\hbar}\, f(x,p)\, \sin\!\left(\frac{\hbar}{2}\left(\overleftarrow{\partial}_x \overrightarrow{\partial}_p - \overleftarrow{\partial}_p \overrightarrow{\partial}_x\right)\right) g(x,p)$. The left and right arrows over the partial derivatives denote the left and right partial derivatives. Sometimes the Moyal bracket is referred to as the sine bracket. Overview: A popular (Fourier) integral representation for it, introduced by George Baker, involves the integral kernel $\sin\!\left(\frac{2}{\hbar}(x'p'' - x''p')\right)$. Overview: Each correspondence map from phase space to Hilbert space induces a characteristic "Moyal" bracket (such as the one illustrated here for the Weyl map). All such Moyal brackets are formally equivalent among themselves, in accordance with a systematic theory. The Moyal bracket specifies the eponymous infinite-dimensional Lie algebra—it is antisymmetric in its arguments f and g, and satisfies the Jacobi identity. Overview: The corresponding abstract Lie algebra is realized by $T_f \equiv f\star$, so that $[T_f, T_g] = T_{i\hbar\{\{f,g\}\}}$. On a 2-torus phase space, $T^2$, with periodic coordinates x and p, each in $[0, 2\pi]$, and integer mode indices $m_i$, for basis functions $\exp(i(m_1 x + m_2 p))$, this Lie algebra reads $[T_{m_1,m_2}, T_{n_1,n_2}] = 2i \sin\!\left(\tfrac{\hbar}{2}(n_1 m_2 - n_2 m_1)\right) T_{m_1+n_1,\, m_2+n_2}$, which reduces to SU(N) for integer N ≡ 4π/ħ. SU(N) then emerges as a deformation of SU(∞), with deformation parameter 1/N. Generalization of the Moyal bracket for quantum systems with second-class constraints involves an operation on equivalence classes of functions in phase space, which can be considered as a quantum deformation of the Dirac bracket. Sine bracket and cosine bracket: Next to the sine bracket discussed, Groenewold further introduced the cosine bracket, elaborated by Baker, $\{\{\{f,g\}\}\} \stackrel{\mathrm{def}}{=} \tfrac{1}{2}(f \star g + g \star f) = fg + O(\hbar^2)$.
Here, again, ★ is the star-product operator in phase space, f and g are differentiable phase-space functions, and f g is the ordinary product. Sine bracket and cosine bracket: The sine and cosine brackets are, respectively, the results of antisymmetrizing and symmetrizing the star product. Thus, as the sine bracket is the Wigner map of the commutator, the cosine bracket is the Wigner image of the anticommutator in standard quantum mechanics. Similarly, as the Moyal bracket equals the Poisson bracket up to higher orders of ħ, the cosine bracket equals the ordinary product up to higher orders of ħ. In the classical limit, the Moyal bracket helps reduction to the Liouville equation (formulated in terms of the Poisson bracket), as the cosine bracket leads to the classical Hamilton–Jacobi equation.The sine and cosine bracket also stand in relation to equations of a purely algebraic description of quantum mechanics.
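As a concrete lowest-order check of the relation $\{\{f,g\}\} = \{f,g\} + O(\hbar^2)$ above, here is a minimal symbolic sketch for a single pair of canonical variables (x, p) under the Weyl correspondence. It is only an illustration: the helper names `star`, `moyal_bracket` and `poisson` are hypothetical, and the star product is truncated at a finite order in ħ rather than summed in closed form.

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True)

def dxdp(expr, nx, npp):
    """Mixed partial derivative d_x^nx d_p^npp of expr."""
    if nx:
        expr = sp.diff(expr, x, nx)
    if npp:
        expr = sp.diff(expr, p, npp)
    return expr

def star(f, g, order=4):
    """Moyal star product f * g, truncated after (order - 1) powers of hbar.

    n-th term of the bidifferential expansion:
    (i*hbar/2)^n / n! * sum_k C(n, k) (-1)^k (d_x^{n-k} d_p^k f)(d_p^{n-k} d_x^k g).
    """
    total = sp.Integer(0)
    for n in range(order):
        term = sp.Integer(0)
        for k in range(n + 1):
            term += (sp.binomial(n, k) * (-1)**k
                     * dxdp(f, n - k, k)      # d_x^{n-k} d_p^k f
                     * dxdp(g, k, n - k))     # d_x^k d_p^{n-k} g
        total += (sp.I * hbar / 2)**n / sp.factorial(n) * term
    return sp.expand(total)

def moyal_bracket(f, g, order=4):
    """{{f, g}} = (f*g - g*f)/(i*hbar), with the same truncation in hbar."""
    return sp.expand((star(f, g, order) - star(g, f, order)) / (sp.I * hbar))

def poisson(f, g):
    """Classical Poisson bracket {f, g} = f_x g_p - f_p g_x."""
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

f = x**3
g = p**3

# Prints -3*hbar**2/2: the deviation from the Poisson bracket 9*x**2*p**2
# is purely an O(hbar**2) correction, as stated in the article.
print(sp.expand(moyal_bracket(f, g) - poisson(f, g)))
```

For these monomials the even-order terms of the star product are symmetric in f and g and cancel in the antisymmetrization, so the first surviving correction to the Poisson bracket enters at order ħ², consistent with the formula quoted above.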
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Neoichnology** Neoichnology: Neoichnology (Greek néos "new", íchnos "footprint", logos "science") is the science of footprints and traces of extant animals. Thus, it is a counterpart to paleoichnology, which investigates tracks and traces of fossil animals. Neoichnological methods are used in order to study the locomotion and the resulting tracks of both invertebrates and vertebrates. Often these methods are applied in the field of palaeobiology to gain a deeper understanding of fossilized footprints. Neoichnological methods: Working with living animals Typically, when working with living animals, a race track is prepared and covered with a substrate that allows footprints to form, e.g. sand of varying moisture content, clay or mud. After preparation, the animal is lured or shooed over the race track. This results in the production of numerous footprints that constitute a complete track. In some cases the animal is filmed during track production in order to subsequently study the impact of the animal's velocity or its behavior on the produced track. This is an important advantage of working with living animals: changes in speed or direction, resting, slippage or moments of fright become visible in the produced tracks. After track production and prior to reuse, the track can be photographed, drawn or molded. Changes in the experimental setup are possible throughout the experiment, e.g. regulation of the moisture content of the substrate. Alternatively, tracks of free-living animals can be studied in nature (e.g. near lakes) without any special experimental setup. However, without the standardized environment of the lab, matching the tracks with the behavior of the animal during track production is undoubtedly harder. Neoichnological methods: Working with foot models or severed limbs Another field of methods is the experimental work done with foot models or severed limbs. With these methods, the natural behavior of the animal is excluded from the analysis. In a typical experimental setup, the prepared foot is pressed into the substrate of interest, which again allows for the production of a footprint. Unlike in the methods previously mentioned, the experimenter now has the opportunity to manually regulate the pressure, direction and speed of foot touchdown. Because of that, the effects of those manipulations can be studied more directly. The layering of differently colored substrates furthermore makes it possible to study the consequences of touchdown in lower substrate layers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Concussions in rugby union** Concussions in rugby union: Concussion is the most common injury received in English professional rugby union. Concussion can occur when an individual experiences an impact to the head, and it commonly occurs in high-contact sporting activities, including American football, boxing, MMA and the rugby codes. It can also occur in recreational activities like horse riding, jumping, cycling, and skiing, because concussion does not require a blow near the brain: it can also be caused by a rapid change of movement in which the brain does not move in time with the skull and presses against it. Because rugby is such a fast-moving contact sport, concussion and other head injuries are unsurprisingly common, and developments in equipment and training methods can help players understand what can happen on the field and how to help prevent it. History of concussions: A concussion, which is a subset of traumatic brain injury (TBI), occurs when a force is applied to the head, neck or face, or when the head moves rapidly, causing a functional injury to the brain. The severity of any injury depends on the location and strength of the impact. It is a short-lived impairment of neurological function, the brain's ability to process information, which usually resolves within seven to ten days. Not all concussions involve loss of consciousness, which occurs in less than 10% of cases. Second-impact syndrome occurs when a player sustains a second concussion after returning to the field the same day, or returning to play before complete recovery from a previous concussion. It results from brain swelling due to vascular congestion and increased intracranial pressure, and it can be fatal to a player as it is a very difficult medical injury to control. The brain is surrounded by cerebrospinal fluid, which protects it from light trauma. More severe impacts, or the forces associated with rapid acceleration, may not be absorbed by this cushion. Concussion may be caused by impact forces, in which the head strikes or is struck by something, or impulsive forces, in which the head moves without itself being subject to blunt trauma (for example, when the chest hits something and the head snaps forward). Chronic traumatic encephalopathy, or "CTE", is an example of the cumulative damage that can occur as the result of multiple concussions or less severe blows to the head. The condition was previously referred to as "dementia pugilistica", or "punch drunk" syndrome, as it was first noted in boxers. The disease can lead to cognitive and physical handicaps such as parkinsonism, speech and memory problems, slowed mental processing, tremor, depression, and inappropriate behavior. It shares features with Alzheimer's disease. History of concussions: In a 2013 interview, recently retired Scotland international Rory Lamont was critical of the then-current protocols for handling concussions, notably the Pitchside Suspected Concussion Assessment (PSCA) employed at that time: "The problem with the PSCA is a concussed player can pass the assessment. I know from first hand experience it can be quite ineffective in deciding if a player is concussed. It is argued that allowing the five-minute assessment is better than zero minutes but it is not as clear cut as one might hope. Concussion symptoms regularly take 10 minutes or longer to actually present.
Consequently the five-minute PSCA may be giving concussed players a license to return to the field." Connection with rugby union: Rugby union has been played since the early 19th century. Being a high-contact sport, it has the highest reported rates of concussion and, outside England, also has the highest number of catastrophic injuries of any team sport. Research has found that concussion is reported at a higher rate during match play and at a lower rate during training, but still at a higher rate than players in most other sports experience. The game is both physically and mentally demanding, varying between high-intensity sprinting, tackling and rucking and lower-intensity jogging and walking. The forwards need a great deal of physical strength to win the ball from the other team or create gaps for their team to run through, whereas the backs are the players who make the play happen, making runs with the ball under the protection of the forwards, who stop attacks. The backs still get tackled like any other player on the field, so they need physical strength as much as a forward does. The concussion bin was replaced by the head bin in 2012, with the player's assessment taking 10 minutes. About a quarter of rugby players are injured in each season. In the US, college rugby has much higher injury rates than college football. Rugby union has similar injury types to American football, but injuries to the arms are more common. Causes and likelihood of concussion: Concussion was the most commonly reported Premiership Rugby match injury in 2015-16 (for the 5th consecutive season), constituting approximately 25% of all match injuries, and the RFU medical officer said that the tackle is where the overwhelming majority of concussions occur. A study found that players who played more than 25 matches in the 2015/2016 season were more likely than not to sustain a concussion. Signs of concussion: The effects that concussion can have on an individual's mental state vary, depending on the circumstances and the severity of the impact. Common signs of concussion include a blank look, being slow to get up off the ground, being unsteady on their feet, grabbing their head, confusion about where they are or what they are doing, and, obviously, unconsciousness. These are the things that a spectator, coach or medical assistant will notice in a player. Sometimes concussion can go unrecognised, so from a player's point of view the symptoms can include continual headaches, dizziness, visual problems, and feelings of fatigue and drowsiness. These can all occur post-game, so a player needs to have knowledge of what these signs could mean. Saliva testing: Although not commonly used at present, novel experimental methods to rapidly diagnose concussion in the field have been developed by research laboratories in the US and UK, based on the detection of RNA biomarkers in saliva. Treatment of the injury: Once a player is taken off the field of play due to possible concussion or unconsciousness, or shows the symptoms post game, getting medical advice as soon as possible is recommended. At the hospital or medical practice, the player will be kept under observation; if they are experiencing a headache, mild painkillers will be given. The medical professional will request that no food or drink be consumed until advised.
They will then assess whether the player needs an X-ray, to check for any possible damage to the cervical vertebrae, or a computerised axial tomography (CT) scan, to check for any brain or cranium damage. With a mild head injury, the player is sent home to take care, do activities more slowly than usual, and continue painkillers as needed. If symptoms of concussion do not disappear within the usual seven to ten days, medical advice should be sought again, as the injury could be worse. In post-concussion syndrome, symptoms do not resolve for weeks, months, or years after a concussion, and may occasionally be permanent. About 10% to 20% of people have post-concussion syndrome for more than a month. Controlling concussions: In order to minimise the risk of concussion and repetitive head trauma, the method of the 6 R's is used. First, Recognise and Remove a player suspected of concussion, to stop the injury from getting worse. Second, Refer: whether concussion is confirmed or only suspected, the player must see a medical doctor as soon as possible. 90.8% of players knew they should not continue playing when concussed, yet 75% of players would continue an important game even if concussed. Of those concussed, 39.1% have tried to influence the medical assessment, with 78.2% stating it is possible or quite easy to do so. If the player is diagnosed with concussion, they must then Rest until all signs of concussion are gone. The player must then Recover by returning to general life activities before progressing back to playing. Returning to play must follow the Graduated Return to Play (GRTP) protocol, requiring clearance from a medical professional and no remaining symptoms of concussion. Despite good knowledge of concussion complications and management, players engage in unsafe behaviour, with little difference between genders and competition grades. Information regarding symptoms and management should be available to all players, coaches, and parents. On-going education is needed to assist coaches in identifying concussion signs and symptoms. Provision of medical care should be mandatory at every level of competition. Effect of concussions on brain functioning in later life: A 2017 study found that past participation in rugby or a history of concussion were associated with small to moderate neurocognitive deficits after retirement from competitive sport.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Oxoeicosanoid receptor 1** Oxoeicosanoid receptor 1: Oxoeicosanoid receptor 1 (OXER1) also known as G-protein coupled receptor 170 (GPR170) is a protein that in humans is encoded by the OXER1 gene located on human chromosome 2p21; it is the principal receptor for the 5-Hydroxyicosatetraenoic acid family of carboxy fatty acid metabolites derived from arachidonic acid. The receptor has also been termed hGPCR48, HGPCR48, and R527 but OXER1 is now its preferred designation. OXER1 is a G protein-coupled receptor (GPCR) that is structurally related to the hydroxy-carboxylic acid (HCA) family of G protein-coupled receptors whose three members are HCA1 (GPR81), HCA2 (Niacin receptor 1), and HCA3 (Niacin receptor 2); OXER1 has 30.3%, 30.7%, and 30.7% amino acid sequence identity with these GPCRs, respectively. It is also related (30.4% amino acid sequence identity) to the recently defined receptor, GPR31, for the hydroxy-carboxy fatty acid 12-HETE. Species and tissue distribution: Orthologs of OXER1 are found in various mammalian species including opossums and several species of fish; however, mice and rats lack a clear ortholog of OXER1. This represents an important hindrance to studies on the function of OXER1 since these two mammalian species are the most common and easiest models for investigating the in vivo functions of receptors in mammals and by extrapolation humans. Since mouse cells make and respond to members of the 5-HETE family of agonists, it is most likely that mice do have a receptor that substitutes for OXER1 by mediating their responses to this agonist family. Recently, a G protein-coupled receptor of the hydroxy carboxylic acid subfamily, niacin receptor 1, has been proposed to mediate the responses of mouse tissues to 5-oxo-ETE. OXER1 is highly expressed by human white blood cells, particularly eosinophils and to a lesser extent neutrophils, basophils, and monocytes; by bronchoalveolar macrophages isolated from human bronchoalveolar lavage washings; and by the human H295R adrenocortical cell line. Various types of human cancer cell lines express OXER1; these include those of the prostate, breast, lung, ovaries, colon, and pancreas. OXER1 is also expressed by the human spleen, lung, liver, and kidney tissues. The exact cell types bearing OXER1 in these tissues have not been defined. Species and tissue distribution: A recent study has found that cats express the OXER1 receptor for 5-oxo-ETE, that feline leukocytes, including eosinophils, have been found to synthesize and be very highly responsive to 5-oxo-ETE, and that 5-oxo-ETE is present in the bronchoalveolar lavage fluid from cats with experimentally induced asthma; these findings suggest that the 5-oxo-ETE/OXER1 axis may play an important role in feline asthma, a common condition in this species, and that felines could serve as a useful animal model to investigate the pathophysiological role of 5-oxo-ETE in asthma and other conditions. Ligands: The OXER1 G protein-coupled receptor resembles the hydroxy carboxylic acid subfamily of G protein-coupled receptors, which besides GPR109A, niacin receptor 1, and niacin receptor 2 may include the recently defined receptor for 12-HETE, GPR31, not only in its amino acid sequence but also in the hydroxy-carboxylic acid nature of its cognate ligands. Naturally occurring ligands for OXER1 are long chain polyunsaturated fatty acids containing either a hydroxyl (i.e. -OH) or oxo (i.e. =O, keto) residue removed by 5 carbons from each acid's carboxy residue.
Ligands: Agonists OXER1 is known or presumed to bind and thereby be activated by the following endogenous arachidonic acid metabolites: 5-oxo-ETE > 5-oxo-15-hydroxy-ETE > 5-hydroperoxyicosatetraenoic acid (5-HpETE) > 5-HETE > 5,20-diHETE. OXER1 is also activated by metabolites of other polyunsaturated fatty acids that therefore may be categorized as members of the 5-oxo-ETE family of agonists; these agonists include 5(S)-oxo-6E,8Z,11Z-eicosatrienoic acid (a 5-LO metabolite of mead acid); 5(S)-hydroxy-6E,8Z-octadecadienoic acid and 5(S)-oxo-6E,8Z-octadecadienoic acid (5-LO metabolites of sebaleic acid, i.e. 5Z,8Z-octadecadienoic acid); and 5(S)-hydroxy-6E,8Z,11Z,14Z,17Z-eicosapentaenoic and 5-oxo-6E,8Z,11Z,14Z,17Z-eicosapentaenoic acids (5-LO metabolites of the n-3 polyunsaturated fatty acid, eicosapentaenoic acid). Ligands: Antagonists 5-Oxo-12(S)-hydroxy-HETE and its 8-trans isomer, 5-oxo-12(S)-hydroxy-6E,8E,11Z,14Z-eicosatetraenoic acid, and a series of synthetic mimetics of the 5-oxo-ETE structure (compounds 346, S-264, S-230, Gue154, and still to be named but considerably more potent drugs than these) block the activity of 5-oxo-ETE but not other stimuli in leukocytes and are presumed to be OXER1 antagonists. Mechanisms of activating cells: OXE-R couples to the G protein complex Gαi-Gβγ; when bound to a 5-oxo-ETE family member, OXE-R triggers this G protein complex to dissociate into its Gαi and Gβγ components. Gβγ appears to be the component most responsible for activating many of the signal pathways that lead to cellular functional responses. Intracellular cell-activation pathways stimulated by OXER1 include those involving rises in cytosolic calcium ion levels, along with others that lead to the activation of MAPK/ERK, p38 mitogen-activated protein kinases, cytosolic phospholipase A2, PI3K/Akt, and protein kinase C beta (i.e. PRKCB1), delta (i.e. PRKCD), epsilon (i.e. PRKCE), and zeta (i.e. PRKCZ). Function: OXER1 is activated by 5-oxo-ETE, 5-HETE, and other members of the 5-Hydroxyicosatetraenoic acid family of arachidonic acid metabolites and thereby mediates this family's stimulatory effects on cell types that are involved in mediating immunity-based inflammatory reactions (such as neutrophils, monocytes, and macrophages) as well as allergic reactions (such as eosinophils and basophils). It also mediates the in vitro proliferation and other pro-malignant responses of cultured prostate, breast, ovary, and kidney cancer cells to the 5-HETE family of agonists. These studies suggest that OXER1 may be involved in orchestrating inflammatory and allergic responses in humans and contribute to the growth and spread of human prostate, breast, ovary, and kidney cancers. OXER1 is responsible for the steroid production response to 5-oxo-ETE by human steroidogenic cells in vitro and therefore could be involved in steroid production in humans. Function: To date, however, all studies have been pre-clinical; they use model systems that can suggest but not prove the contribution of OXER1 to human physiology and diseases. The most well-studied and promising area for OXER1 function is in allergic reactions. The recent development of OXER1 antagonists will help address this issue.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**N-Methyl-2-pyrrolidone** N-Methyl-2-pyrrolidone: N-Methyl-2-pyrrolidone (NMP) is an organic compound consisting of a 5-membered lactam. It is a colorless liquid, although impure samples can appear yellow. It is miscible with water and with most common organic solvents. It also belongs to the class of dipolar aprotic solvents such as dimethylformamide and dimethyl sulfoxide. It is used in the petrochemical, polymer and battery industries as a solvent, exploiting its nonvolatility and ability to dissolve diverse materials (including polyvinylidene difluoride, PVDF). Preparation: NMP is produced industrially by a typical ester-to-amide conversion, by treating butyrolactone with methylamine. Alternative routes include the partial hydrogenation of N-methylsuccinimide and the reaction of acrylonitrile with methylamine followed by hydrolysis. About 200,000 to 250,000 tons are produced annually. Applications: NMP is used to recover certain hydrocarbons generated in the processing of petrochemicals, such as the recovery of 1,3-butadiene and acetylene. It is used to absorb hydrogen sulfide from sour gas and hydrodesulfurization facilities. Its good solvency properties have led to NMP's use to dissolve a wide range of polymers. Specifically, it is used as a solvent for surface treatment of textiles, resins, and metal coated plastics or as a paint stripper. It is also used as a solvent in the commercial preparation of polyphenylene sulfide. In the pharmaceutical industry, N-methyl-2-pyrrolidone is used in the formulation for drugs by both oral and transdermal delivery routes. It is also used heavily in lithium ion battery fabrication, as a solvent for electrode preparation, because NMP has a unique ability to dissolve polyvinylidene fluoride binder. Due to NMP's toxicity and high boiling point, there is much effort to replace it in battery manufacturing with other solvent(s), like water. Health hazards: N-Methyl-2-pyrrolidone is an agent that causes the production of physical defects in the developing embryo. It also is a reproductive toxin, a chemical that is toxic to the reproductive system, including defects in the progeny and injury to male or female reproductive function. Reproductive toxicity includes developmental effects. The substance can be absorbed into the body by inhalation, through the skin and by ingestion. When people are exposed to it, rapid, irregular respiration, shortness of breath, decreased pain reflex, and slight bloody nasal secretion are possible. Inhalation can result in headaches and exposure on skin can result in redness and pain. When ingested it will cause a burning sensation in the throat and chest. It also can cause an acute solvent syndrome. Biological aspects: In rats, NMP is absorbed rapidly after inhalation, oral, and dermal administration, distributed throughout the organism, and eliminated mainly by hydroxylation to polar compounds, which are excreted via urine. About 80% of the administered dose is excreted as NMP and NMP metabolites within 24 hours. A probably dose dependent yellow coloration of the urine in rodents is observed. The major metabolite is 5-hydroxy-N-methyl-2-pyrrolidone. Studies in humans show comparable results. Dermal penetration through human skin has been shown to be very rapid. NMP is rapidly biotransformed by hydroxylation to 5-hydroxy-N-methyl-2-pyrrolidone, which is further oxidized to N-methylsuccinimide; this intermediate is further hydroxylated to 2-hydroxy-N-methylsuccinimide. These metabolites are all colourless. 
The excreted amounts of NMP metabolites in the urine after inhalation or oral intake represented about 100% and 65% of the administered doses, respectively. NMP has a low potential for skin irritation and a moderate potential for eye irritation in rabbits. Repeated daily doses of 450 mg/kg body weight administered to the skin caused painful and severe haemorrhage and eschar formation in rabbits. These adverse effects have not been seen in workers occupationally exposed to pure NMP, but they have been observed after dermal exposure to NMP used in cleaning processes. No sensitization potential has been observed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Azepine** Azepine: Azepines are unsaturated heterocycles of seven atoms, with a nitrogen replacing a carbon at one position.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dini derivative** Dini derivative: In mathematics and, specifically, real analysis, the Dini derivatives (or Dini derivates) are a class of generalizations of the derivative. They were introduced by Ulisse Dini, who studied continuous but nondifferentiable functions. The upper Dini derivative, which is also called an upper right-hand derivative, of a continuous function $f\colon \mathbb{R} \to \mathbb{R}$, is denoted by $f'_+$ and defined by $f'_+(t) = \limsup_{h \to 0^+} \frac{f(t+h) - f(t)}{h}$, where lim sup is the supremum limit and the limit is a one-sided limit. The lower Dini derivative, $f'_-$, is defined by $f'_-(t) = \liminf_{h \to 0^+} \frac{f(t) - f(t-h)}{h}$, where lim inf is the infimum limit. If f is defined on a vector space, then the upper Dini derivative at t in the direction d is defined by $\limsup_{h \to 0^+} \frac{f(t+hd) - f(t)}{h}$. If f is locally Lipschitz, then $f'_+$ is finite. If f is differentiable at t, then the Dini derivative at t is the usual derivative at t. Remarks: The functions are defined in terms of the infimum and supremum in order to make the Dini derivatives as "bullet proof" as possible, so that the Dini derivatives are well-defined for almost all functions, even for functions that are not conventionally differentiable. The upshot of Dini's analysis is that a function is differentiable at the point t on the real line (ℝ), only if all the Dini derivatives exist, and have the same value. Sometimes the notation $D^+ f(t)$ is used instead of $f'_+(t)$ and $D_- f(t)$ is used instead of $f'_-(t)$. Remarks: Also, $D^+ f(t) = \limsup_{h \to 0^+} \frac{f(t+h) - f(t)}{h}$ and $D_- f(t) = \liminf_{h \to 0^+} \frac{f(t) - f(t-h)}{h}$. So when using the D notation of the Dini derivatives, the plus or minus sign indicates the left- or right-hand limit, and the placement of the sign indicates the infimum or supremum limit. There are two further Dini derivatives, defined to be $D_+ f(t) = \liminf_{h \to 0^+} \frac{f(t+h) - f(t)}{h}$ and $D^- f(t) = \limsup_{h \to 0^+} \frac{f(t) - f(t-h)}{h}$, which are the same as the first pair, but with the supremum and the infimum reversed. For only moderately ill-behaved functions, the two extra Dini derivatives aren't needed. For particularly badly behaved functions, if all four Dini derivatives have the same value ($D^+ f(t) = D_+ f(t) = D^- f(t) = D_- f(t)$) then the function f is differentiable in the usual sense at the point t. Remarks: On the extended reals, each of the Dini derivatives always exists; however, they may take on the values +∞ or −∞ at times (i.e., the Dini derivatives always exist in the extended sense).
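As a concrete illustration of the definitions above (a worked example, not part of the original text), consider f(t) = |t| at t = 0. For h > 0 one has f(0+h) − f(0) = h and f(0) − f(0−h) = −h, so all four difference quotients are constant and the Dini derivatives follow directly:

```latex
% Worked example: the four Dini derivatives of f(t) = |t| at t = 0.
\[
D^{+}f(0) = \limsup_{h \to 0^{+}} \frac{f(h) - f(0)}{h} = 1,
\qquad
D_{+}f(0) = \liminf_{h \to 0^{+}} \frac{f(h) - f(0)}{h} = 1,
\]
\[
D^{-}f(0) = \limsup_{h \to 0^{+}} \frac{f(0) - f(-h)}{h} = -1,
\qquad
D_{-}f(0) = \liminf_{h \to 0^{+}} \frac{f(0) - f(-h)}{h} = -1.
\]
```

All four values exist and are finite, consistent with |t| being Lipschitz, but since the right-hand pair equals 1 while the left-hand pair equals −1, the function is not differentiable at 0.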
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Conventional sex** Conventional sex: Conventional sex, colloquially known as vanilla sex, is sexual behavior that is within the range of normality for a culture or subculture, and typically involves sex which does not include elements of BDSM, kink, fetishism, or happens within a marriage or relationship. Description: What is regarded as conventional sex depends on cultural and subcultural norms. Among heterosexual couples in the Western world, for example, conventional sex often refers to sexual intercourse in the missionary position. It can also describe penetrative sex which does not have any element of BDSM, kink or fetish.The British Medical Journal regards conventional sex between homosexual couples as "sex that does not extend beyond affection, mutual masturbation, and oral and anal sex." In addition to mutual masturbation, penetrative sexual activity among same-sex pairings is contrasted by non-insertive acts such as intercrural sex, frot and tribadism, although tribadism has been cited as a common but rarely discussed sexual practice among lesbians. Vanilla sexuality: The term "vanilla" in "vanilla sex" derives from the use of vanilla extract as the basic flavoring for ice cream, and by extension, meaning plain or conventional. In relationships where only one partner enjoys less conventional forms of sexual expression, the partner who does not enjoy such activities as much as the other is often referred to as the vanilla partner. As such, it is easy for them to be erroneously branded unadventurous in sexual matters. Through exploration with their partner, it may be possible for a more vanilla-minded person to discover new facets of their sexuality. As with any sexually active person, they may find their preferences on the commonly termed "vanilla-kink spectrum" are sufficient for their full satisfaction.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cacodyl cyanide** Cacodyl cyanide: Cacodyl cyanide is a highly toxic organoarsenic compound discovered by Robert Bunsen in the 1840s. It is very volatile and flammable, and it shares the chemical properties of both arsenic and cyanide. Synthesis: Cacodyl cyanide can be prepared by reaction of cacodyl oxide with hydrogen cyanide or mercuric cyanide. Properties: Cacodyl cyanide is a white solid that is only slightly soluble in water, but very soluble in alcohol and ether. Cacodyl cyanide is highly toxic, producing symptoms of both cyanide and arsenic poisoning. Bunsen described it in the following terms: This substance is extraordinarily poisonous, and for this reason its preparation and purification can only be carried on in the open air; indeed, under these circumstances, it is necessary for the operator to breathe through a long open tube so as to insure the inspiration of air free from impregnation with any trace of the vapor of this very volatile compound. If only a few grains of this substance be allowed to evaporate in a room at the ordinary temperature, the effect upon any one inspiring the air is that of sudden giddiness and insensibility, amounting to complete unconsciousness. Properties: It is also explosive, and Bunsen himself was severely injured in the course of his experiments with cacodyl cyanide. The Russian military tested cacodyl cyanide on cats as a potential chemical weapon for filling shells in the late 1850s, but while it was found to be a potent lachrymatory agent, all the cats survived and it was ultimately considered unsuitable for military use. Any experiment or contact with cacodyl cyanide requires extreme care and caution as it is highly dangerous.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Germ cell nest** Germ cell nest: The germ cell nest (germ-line cyst) forms in the ovaries during their development. The nest consists of multiple interconnected oogonia formed by incomplete cell division. The interconnected oogonia are surrounded by somatic cells called granulosa cells. Later on in development, the germ cell nests break down through invasion of granulosa cells. The result is individual oogonia surrounded by a single layer of granulosa cells. There is also a comparative germ cell nest structure in the developing spermatogonia, with interconnected intracellular cytoplasmic bridges. Formation of germ cell nests: Prior to meiosis, primordial germ cells (PGCs) migrate to the gonads and mitotically divide along the genital ridge in clusters or nests of cells referred to as germline cysts or germ cell nests. The understanding of germ cell nest formation is limited. However, invertebrate models, especially Drosophila, have provided insight into the mechanisms surrounding formation. In females, it is suggested that cysts form from dividing progenitor cells. During this cyst formation, four rounds of division with incomplete cytokinesis occur, resulting in cystocytes that are joined by intercellular bridges, also known as ring canals. Rodent PGCs migrate to the gonads and mitotically divide at embryonic day (E) 10.5. It is at this stage that they switch from complete to incomplete cytokinesis during the mitotic cycle from E10.5-E14.5. Germ cell nests emerge following consecutive divisions of progenitor cells resulting from cleavage furrows arresting and forming intercellular bridges. The intercellular bridges are crucial in maintaining effective communication. They ensure meiosis begins immediately after the mitotic cyst formation cycle is complete. In females, mitosis will end at E14.5 and meiosis will commence. However, it is possible that germ cells may travel to the gonads and cluster together forming nests after their arrival, or form through cellular aggregation. Function: Most of our understanding of germ cell nests comes from Drosophila (fruit flies). In the Drosophila model, germ cell nests arise from incomplete cell division (cytokinesis), forming bridges between the daughter cells called ring canals. In ovarian cysts, generally all but one cell differentiate into nurse cells and transport materials through these ring canals to accelerate the growth of the remaining cell, which becomes the oocyte (egg cell). In males, sperm cells almost all develop in these clusters of germ cells, and they are thought to benefit from the interconnection between them because the genetic materials are shared between them through the ring canals, which reduces the production of non-functional sperm and the selection for certain genotypes over others (meiotic drive). There is also a high level of synchronisation between the clustered germ cells in males. In females, germ cell nests enable large eggs to be produced through the support of cystocytes that have differentiated into nurse cells. Supporting the oocyte with nurse cells within the germ cell nest also means that the oocyte nucleus can stay inactive, which reduces its susceptibility to mutations and parasites (this largely applies to insect models). However, in females there does not seem to be much synchrony despite the presence of ring canals.
Transport through ring canals is highly regulated and directional in the ovarian germ-line cysts. Similar to the Drosophila model, germ-line cysts in mammals such as mice and humans facilitate the transport of substances through the microtubules between nuclei within the syncytia. Organelles including the smooth ER, ribosomes, smooth vesicles, mitochondria and microtubules can be found within the ring canal in mouse, rabbit and human foetal ovaries. This allows organelle redistribution during oocyte differentiation, leading to about 20% of the foetal germ cells differentiating into primary oocytes with enriched cytoplasmic content. The germ cells that donate their cytoplasm undergo apoptosis. Besides this function, it has been proposed that germ-line cysts may also facilitate the onset of meiosis, facilitate organelle biogenesis through enriching mitochondria, inhibit mitosis to restrict the number of germ cells entering meiosis, and restrict the motility of germ cells. Breakdown: In the mouse, germ cell nest breakdown occurs just after birth, and in humans, this breakdown occurs during the second trimester of gestation. Germ cell nest breakdown involves the degeneration of many germ cell nuclei and the invasion of pre-granulosa cells into the nests. In the germ cell nest, one germ cell matures into an oocyte whereas others act as 'nurse cells', transferring their contents including cytoplasmic organelles like mitochondria into the predestined oocyte. These nurse cells subsequently undergo apoptosis. Cytoplasmic bridges between the remaining nuclei are cleaved through protease action of the surrounding somatic cells. Once the granulosa cells have fully surrounded the remaining nuclei, a basement membrane is laid down and completely encompasses each newly formed primordial follicle. The reason for selective loss of germ cells during nest breakdown has been suggested to be due to genetic defects or failure of the germ cell to produce the necessary cytoplasmic organelles, therefore acting as a quality control mechanism. Female vs. male gametogenesis: In males, this process of spermatogenesis is slightly different to that of female oogenesis but does have a comparative 'germ-line nest/cyst'. Male germ-line stem cells divide asymmetrically to give one stem cell and a spermatogonial cell (unspecialised male germ cell) that undergoes mitotic proliferation to form primary spermatocytes (diploid - 46 chromosomes in the human). Each spermatocyte undergoes two rounds of meiosis, producing in the first round two haploid secondary spermatocytes, and in the second round four haploid (23 chromosomes in the human) spermatids. These spermatids then undergo differentiation into mature sperm. These developing male germ cells undergo incomplete cytokinesis during mitosis and meiosis. Cytokinesis is normally when the cytoplasm of one parent cell divides to split into two daughter cells. Large clones of differentiating (specialising) daughter cells that have descended from one maturing spermatogonium (undifferentiated, immature male germ cell) remain connected by stable intracellular cytoplasmic bridges that interconnect the cells. This forms a syncytium – this is a mass of cytoplasm containing many nuclei enclosed within one plasma membrane. These persist until the end of sperm differentiation when individual sperm are released into the seminiferous tubule lumen.
The seminiferous tubules are the functional unit of the testis, and contain germ cells at various stages of maturation, and many other constituents.These intra-cellular bridges promote germ cell communication and sharing of cytoplasmic constituents, and allow for synchronisation of mitotic divisions and entry into meiosis. They are required for fertility in male insects and mammals. In mammals, germ cells form syncytia of hundreds of germ cells interconnected by intercellular bridges. As they share a common cytoplasm with their neighbours, cells can be supplied with all the products of a complete diploid genome. Developing sperm carrying a Y chromosome can be supplied with essential proteins encoded by genes on the X chromosome.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tate shiho gatame** Tate shiho gatame: Tate-Shiho-Gatame (縦四方固) is one of the seven mat holds, Osaekomi-waza, of Kodokan Judo. In grappling terms, it is categorized as a mounted position. Technique description: Graphic from http://judoinfo.com/techdrw.htm Exemplar Videos: Demonstrated from http://www.judoinfo.com/video6.htm The position is known as the full mount in Brazilian Jiu-Jitsu and other grappling arts. You sit astride your opponent with your knees up high under their armpits to avoid being bucked off, or alternatively lie on top of your opponent, grapevining their legs with your own, whilst your arms act as stabilisers and your chest smothers their airways. When the opponent weakens from exhaustion or asphyxiation, one should then consider the following options. Technique description: The high armpit position allows transition to armbars; the other allows transition to various choke holds. Included systems: Systems: Kodokan Judo, Judo. Lists: The Canon of Judo, Judo technique, Brazilian Jiu-Jitsu, Theory and Technique. Escapes: Upa Upa is described as a technique unto itself in the book Brazilian Jiu-Jitsu, Theory and Technique, and demonstrated in the video Gracie_Jiu-Jitsu_Basics_Vol.1. It is also part of the movement described as the cross lock (juji-jime) defense method in the book The Canon of Judo. Elbow escape "The elbow escape from the mounted position" is described in the book Brazilian Jiu-Jitsu, Theory and Technique, and demonstrated in the video Gracie_Jiu-Jitsu_Basics_Vol.1. Others Arm Pull and Roll Over Tate Shiho Gatame Escape Similar techniques, variants, and aliases: Kuzure-Tate-Shiho-Gatame The Canon of Judo lists a variation as a separate technique, where tori secures one of uke's arms instead of uke's neck, as demonstrated in the above animation, while holding onto the belt. Others English aliases: Horizontal four quarter hold. Variants: Double Arm Tate-Shiho-Gatame, Head Lock Tate-Shiho-Gatame, Reverse Head Lock Tate-Shiho-Gatame, Arm Hold Tate-Shiho-Gatame, Thigh on Shoulder/Arm Hold Tate-Shiho-Gatame
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CompEx** CompEx: CompEx (meaning Competency in Ex atmospheres) is a global certification scheme for electrical and mechanical craftspersons and designers working in potentially explosive atmospheres. The scheme is operated by JTLimited, UK and is accredited by UKAS to ISO/IEC 17024. The scheme was created by EEMUA (Engineering Equipment and Materials Users' Association) to satisfy the general competency requirements of BS EN 60079 (IEC 60079), parts 10, 14 and 17. The requirements are currently explicitly detailed in IEC 60079 Part 14 Annex A, detailing knowledge/skills and competency requirements for responsible persons, operatives and designers. CompEx: The scheme is broken down into twelve units covering different actions and hazardous area concepts. In 2017, CompEx units 01-04 were introduced for the NEC standards, NEC 500 and also NEC 505, along with Ex "f" Foundation courses. These are provided by Global EX Solutions, via Eaton.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Concrete grinder** Concrete grinder: A concrete grinder can come in many configurations, the most common being a hand-held angle grinder, but it may be a specialized tool for countertops or worktops. Angle grinders are small and mobile, and allow one to work on harder-to-reach areas and perform more precise work. There are also purpose-built floor grinders that are used for grinding and polishing marble, granite and concrete. Machines that grind concrete floors are usually made to handle much more stress and will have more power to drive the unit, as concrete has a much higher sliding friction than marble or granite, which is also worked wet and therefore with less friction. In fact some types of marble will spark when ground dry, causing deep damage to the marble surface. Floor grinders are most suitable for polishing a concrete floor slab as they can cover large surfaces more quickly and carry more weight, making the actual grinding process more efficient. Attachments: All concrete grinders use some sort of abrasive to grind or polish, such as diamond tools or silicon carbide. The diamond tools most commonly used for grinding are diamond grinding cup wheels; other machines may use diamond segments mounted on various plates or slide-on diamond grinding shoes, and for polishing the attachments are usually circular resin diamond polishing pads. Diamond attachments are the most common type of abrasive used under concrete grinders and come in many grits ranging from 6 grit to the high thousands, although 1800 grit is considered by the insurance industry to be the highest shine to apply to a floor surface. Wet or dry usage: Concrete can be ground wet or dry, although dust extraction equipment needs to be used when grinding dry. Wet or dry usage: To grind concrete dry, a grinding shroud can be sourced for most angle grinder sizes, and floor grinders usually have them inbuilt. This provides the necessary vacuum attachment where one can connect a vacuum or HEPA filter-equipped vacuum to capture the fine dust produced when grinding dry. Of course concrete can also be ground wet, in which case no vacuum is used. An issue with dry grinding is that it can be time-consuming: it is a slower method of keeping the diamond tools cutting, and the fine dust particles quickly block up the HEPA filters in the vacuum. Continuously stopping to clean or replace filters can be time-consuming, and this is where a dust separator can be beneficial. It is connected between the concrete grinder and the vacuum cleaner and works by capturing the larger particles of concrete in its drum, so only the fine particles reach the vacuum cleaner. Wet or dry usage: The benefit of grinding concrete wet is that it requires fewer attachments than grinding dry. The water makes the dust particles heavy by turning them into a slurry or paste and prevents them from being dispersed into the air. This significantly reduces health risks from breathing in concrete dust, but it does use a lot of water and makes a bit of a mess. Dust precautions: When grinding concrete it is important to ensure steps are taken to mitigate exposure to concrete dust. According to the Cancer Council, approximately 230 people develop lung cancer each year due to past exposure to silica dust at work. Fine concrete dust contains silica, which is very harmful to the lungs and can lead to silicosis, so all effort should be made to avoid breathing concrete dust.
In construction, mining and other industrial type jobs that expose workers to dust and small particles, one should wear a respirator mask, commonly known as an N95 mask, FFP2 mask, P2 mask or KN95 mask, to protect from inhaling concrete dust. This is because such a respirator can block 94-95% of non-oil-based particulates larger than 0.3 microns. Concrete dust particles can be as small as 0.5 microns, which is still larger than this 0.3 micron threshold, which means that an N95 respirator provides effective protection against concrete dust when fitted properly. For green building methods, many regulators and rating schemes such as LEED have seen the benefit of using concrete grinders that are designed to finish concrete to a very stable wear surface that can safely be used for many years as a floor or tabletop surface. These machines usually run on higher voltages, such as 240 volts or more, as they require more motor power than 120-volt supplies can provide. Some machines are powered by LP gas, as used on forklifts, so that they can be run in well-ventilated areas without a power cord, but these machines usually have fewer features than a fully electric unit.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ensemble (musical theatre)** Ensemble (musical theatre): In musical theatre, the ensemble or chorus are the on-stage performers other than the featured players. Ensemble members typically do not play named characters and have few or no spoken lines or solo parts; rather, they sing and dance in unison. An ensemble member may play multiple roles through the course of a show. Origin: The modern musical chorus descends from the chorus line, associated with early 20th century theatrical revues such as Ziegfeld Follies. The chorus line was typically composed of women (dubbed chorus girls or chorines) performing synchronized dances in a line. Composition: In the 2018–2019 season, ensemble sizes for Broadway productions ranged from 9 (for Hadestown) to 55 (for The Lion King). Ensemble sizes on Broadway have generally decreased over time, possibly due to cost-cutting. Many modern musicals feature no ensemble at all, such as Girl from the North Country and Six. Composition: Within the ensemble there exist certain specialized roles. The dance captain is an ensemble member who leads routine dance rehearsals once the show has opened and teaches choreography to new ensemble members. An ensemble member may also understudy a principal role, meaning that they play that role when the regular actor is unable to. A swing is a performer who is prepared to step in for a number of ensemble roles (known as tracks). Recognition: Major theatre awards such as the Tony Awards and Olivier Awards do not recognize ensemble members. In 2018, the Actors' Equity Association, the main union representing theatre performers in the United States, announced a campaign urging the Tony Awards to create two new awards for best ensemble (defined as the entire cast of a production), and best chorus. Similar awards exist in some regional theatre award ceremonies, such as the Jeff Awards in Chicago.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Redintegration** Redintegration: Redintegration refers to the restoration of the whole of something from a part of it. The everyday phenomenon is that a small part of a memory can remind a person of the entire memory, for example, "recalling an entire song when a few notes are played." In cognitive psychology the word is used in reference to phenomena in the field of memory, where it is defined as "the use of long-term knowledge to facilitate recall." The process is hypothesised to work as "pattern completion", where previous knowledge is used to facilitate the completion of the partially degraded memory trace. Proust: The great literary example of redintegration is Marcel Proust's novel Remembrance of Things Past. The conceit is that the entire seven-volume novel consists of the memories triggered by the taste of a madeleine soaked in lime tea. "I had recognized the taste of the crumb of madeleine soaked in her concoction of lime-flowers which my aunt used to give to me. Immediately the old grey house upon the street, where her room was, rose up like the scenery of a theatre to attach itself to the little pavilion, opening on to the garden, which had been built out behind it for my parents", ... for seven volumes. (See List of longest novels.) Associationists: Redintegration was one of the memory phenomena that the Associationist school of philosophical psychologists sought to explain and used as evidence supporting their theories. Contemporary Memory Research: In the study of item recall in working memory, memories that have partially decayed can be recalled in their entirety. It is hypothesized that this is accomplished by a redintegration process, which allows the entire memory to be reconstructed from the temporary memory trace by using the subject's previous knowledge. The process seems to work because of the redundancy of language. The effects of long-term knowledge on memory's trace reconstruction have been shown for both visual and auditory presentation and recall. The mechanism of redintegration is still not fully understood and is being actively researched. Models of Redintegration: Multinomial Processing Tree Schweickert (1993) attempted to model memory redintegration using a multinomial processing tree. In a multinomial processing tree, the cognitive processes and their outcomes are represented with branches and nodes, respectively. The outcome of the cognitive effort is dependent on which terminal node is reached. In Schweickert's model of recall, the trace of memory can be either intact or partially degraded. If the trace is intact, memory can be restored promptly and accurately. The node for correct recall is reached, and the recall process is terminated. If the memory has partially degraded, the item must be reconstructed through trace redintegration. If the process of redintegration was successful, the memory is recalled correctly. Models of Redintegration: Thus, the probability of correct recall ($P_C$) is $P_C = I + (1 - I)\,R$, where I is the probability of the trace being intact, and R is the probability of correct redintegration. If the trace is neither intact nor successfully redintegrated, the person fails to accurately recall the memory. Models of Redintegration: Trace Redintegration Schweickert proposed that the redintegration of the memory trace happens through two independent processes. In the lexical process, an attempt is made to convert the memory trace into a word. In the phonemic process, an attempt is made to convert the memory trace into a string of phonemes.
Consequently, the probability of correct redintegration (R) becomes a function of L (lexical process) and/or P (phonemic process). These processes are autonomous, and their effect on R depends on whether they take place sequentially or non-sequentially. Schweickert's explanation of trace redintegration is analogous to the processes hypothesized to be responsible for repairs of errors in speech. Though Schweickert indicates that the process of trace redintegration may be facilitated by the context of the situation in which recall takes place (e.g. syntax, semantics), his model does not provide details on the potential influences of such factors. Models of Redintegration: Extensions Schweickert's model was extended by Gathercole and colleagues (1999), who added a concept of a degraded trace. Their model of the multinomial processing tree included an additional node, which represents a decayed memory. Such a degraded trace can no longer undergo redintegration, and the outcome of recall is incorrect. Thus, the probability of correct recall ($P_C$) changes to $P_C = I + (1 - I)(1 - T)\,R$, where I is the probability of the trace being intact, R is the probability of correct redintegration, and T is the probability of the trace being entirely lost. Models of Redintegration: Criticism The main criticism of Schweickert's model concerns its discrete nature. The model treats memory in a binomial manner, where the trace can be either intact, leading to correct recall, or partially decayed, with subsequent successful or unsuccessful redintegration. It does not explain the factors underlying the intactness, and cannot account for the differences in the number of incorrect attempts at recall of different items. Moreover, the model does not incorporate the concept of the degree of memory degradation, implying that the level of the trace's decay does not affect the probability of redintegration. This issue was approached by Roodenrys and Miller (2008), whose alternative account of redintegration uses a constrained Rasch model to portray trace degradation as a continuous process. Influencing Factors: Lexicality In immediate recall, trace reconstruction is more accurate for words than for non-words. This has been labelled the lexicality effect. The effect is hypothesized to occur due to the differences in the presence and availability of phonological representations. In contrast to non-words, words possess stable mental representations of the accompanying sounds. Such a representation can be retrieved from previous knowledge, facilitating the redintegration of the item from the memory trace. The lexicality effect is commonly used to support the importance of long-term memory in the redintegration processes. Influencing Factors: Item Similarity The redintegration of memory traces may be affected by both semantic and phonological similarity of the items which are to be recalled. The semantic similarity effect refers to the higher accuracy of redintegration for lists containing semantically homogeneous items than for those with semantically heterogeneous items. This has been attributed to the differences in the accessibility of different memories in the long-term store. When words are presented in semantically homogeneous lists, other items may guide the trace reconstruction, providing a cue for item search. This increases the availability of certain memories and facilitates the redintegration process. An example would be a redintegration attempt for a word from a list of animal names.
The semantic consistency of words evokes the memories associated with this topic, making the animal names more accessible in memory. Influencing Factors: Conversely, redintegration has been shown to be hindered for items sharing phonological features. This has been attributed to "trace competition", where errors in redintegration are caused by mistaking the items on the lists. This effect could arise, for example, for the words auction (/ˈɔːkʃ(ə)n/) and audience (/ˈɔːdiəns/). The effect of phonological similarity on redintegration may differ depending on the position of the phonemes shared within the items. Influencing Factors: Word Frequency The word frequency effect refers to the higher accuracy of redintegration processes for words that are encountered more frequently in the language. This effect has been attributed to differences in the availability of items stored in long-term memory. Frequently encountered words are hypothesized to be more accessible for subsequent recall, which facilitates the redintegration of the partially degraded trace. Influencing Factors: Phonotactic Frequency The phonotactic frequency effect refers to the pattern in memory redintegration in which trace reconstruction is more accurate for items containing phoneme combinations that are frequently represented in the language. Though this effect is similar to the word frequency effect, it can also explain patterns in the redintegration of non-word items. Others Other factors which have been shown to facilitate redintegration include item imageability, familiarity with the language, and word concreteness.
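As a compact illustration of the recall probabilities implied by the processing trees described above, here is a minimal sketch. The function names are hypothetical and the parameter values are purely illustrative; it assumes the natural reading of the trees in which the lost-trace branch of the Gathercole et al. extension applies only to non-intact traces.

```python
def p_correct_schweickert(intact: float, redintegration: float) -> float:
    """Schweickert (1993) two-branch tree: an intact trace (probability I) is
    recalled correctly; otherwise the partially degraded trace is successfully
    redintegrated with probability R."""
    I, R = intact, redintegration
    return I + (1 - I) * R

def p_correct_gathercole(intact: float, redintegration: float, lost: float) -> float:
    """Gathercole et al. (1999) extension: a non-intact trace is entirely
    degraded with probability T, in which case it can no longer be redintegrated."""
    I, R, T = intact, redintegration, lost
    return I + (1 - I) * (1 - T) * R

# Illustrative values only: a high-frequency word (strong long-term support
# for redintegration) versus a non-word (little lexical support).
print(p_correct_schweickert(0.5, 0.8))       # 0.9
print(p_correct_schweickert(0.5, 0.2))       # 0.6
print(p_correct_gathercole(0.5, 0.8, 0.25))  # 0.8
```

The comparison between the second and first calls mirrors the lexicality and word frequency effects discussed above: holding the trace parameters fixed, a higher redintegration probability R (stronger long-term knowledge) directly raises the probability of correct recall.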
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tilorone** Tilorone: Tilorone (trade names Amixin, Lavomax and others) is the first recognized synthetic, small molecular weight compound that is an orally active interferon inducer. It is used as an antiviral drug in some countries which do not require double-blind placebo-controlled studies, including Russia. It is effective against Ebola virus in mice. Pharmacology: Tilorone activates the production of interferon.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Androtomy** Androtomy: Dissection (from Latin dissecare "to cut to pieces"; also called anatomization) is the dismembering of the body of a deceased animal or plant to study its anatomical structure. Autopsy is used in pathology and forensic medicine to determine the cause of death in humans. Less extensive dissection of plants and smaller animals preserved in a formaldehyde solution is typically carried out or demonstrated in biology and natural science classes in middle school and high school, while extensive dissections of cadavers of adults and children, both fresh and preserved, are carried out by medical students in medical schools as a part of the teaching in subjects such as anatomy, pathology and forensic medicine. Consequently, dissection is typically conducted in a morgue or in an anatomy lab. Androtomy: Dissection has been used for centuries to explore anatomy. Objections to the use of cadavers have led to the use of alternatives including virtual dissection of computer models. In the field of surgery, the term "dissection" or "dissecting" refers more specifically to the practice of separating an anatomical structure (an organ, nerve or blood vessel) from its surrounding connective tissue in order to minimize unwanted damage during a surgical procedure. Overview: Plant and animal bodies are dissected to analyze the structure and function of their components. Dissection is practised by students in courses of biology, botany, zoology, and veterinary science, and sometimes in arts studies. In medical schools, students dissect human cadavers to learn anatomy. Zoötomy is sometimes used to describe "dissection of an animal". Overview: Human dissection A key principle in the dissection of human cadavers (sometimes called androtomy) is the prevention of transmission of human disease to the dissector. Prevention of transmission includes the wearing of protective gear, ensuring the environment is clean, dissection technique and pre-dissection tests of specimens for the presence of HIV and hepatitis viruses. Specimens are dissected in morgues or anatomy labs. When provided, they are evaluated for use as a "fresh" or "prepared" specimen. A "fresh" specimen may be dissected within some days, retaining the characteristics of a living specimen, for the purposes of training. A "prepared" specimen may be preserved in solutions such as formalin and pre-dissected by an experienced anatomist, sometimes with the help of a diener. This preparation is sometimes called prosection. Overview: Most dissection involves the careful isolation and removal of individual organs, called the Virchow technique. An alternative, more cumbersome technique involves the removal of the entire organ block, called the Letulle technique. This technique allows a body to be sent to a funeral director without waiting for the sometimes time-consuming dissection of individual organs. The Rokitansky method involves an in situ dissection of the organ block, and the technique of Ghon involves dissection of three separate blocks of organs - the thorax and cervical areas, gastrointestinal and abdominal organs, and urogenital organs. Dissection of individual organs involves accessing the area in which the organ is situated, and systematically removing the anatomical connections of that organ to its surroundings. For example, when removing the heart, connections such as the superior vena cava and inferior vena cava are separated. If pathological connections exist, such as a fibrous pericardium, then this may be deliberately dissected along with the organ.
Overview: Autopsy and necropsy Dissection is used to help to determine the cause of death in autopsy (called necropsy in other animals) and is an intrinsic part of forensic medicine. History: Classical antiquity Human dissections were carried out by the Greek physicians Herophilus of Chalcedon and Erasistratus of Chios in the early part of the third century BC. During this period, the first exploration into full human anatomy was performed rather than a base knowledge gained from 'problem-solution' delving. While there was a deep taboo in Greek culture concerning human dissection, there was at the time a strong push by the Ptolemaic government to build Alexandria into a hub of scientific study. For a time, Roman law forbade dissection and autopsy of the human body, so anatomists relied on the cadavers of animals or made observations of human anatomy from injuries of the living. Galen, for example, dissected the Barbary macaque and other primates, assuming their anatomy was basically the same as that of humans, and supplemented these observations with knowledge of human anatomy which he acquired while tending to wounded gladiators. Celsus wrote in On Medicine I Proem 23, "Herophilus and Erasistratus proceeded in by far the best way: they cut open living men - criminals they obtained out of prison from the kings - and they observed, while their subjects still breathed, parts that nature had previously hidden, their position, color, shape, size, arrangement, hardness, softness, smoothness, points of contact, and finally the processes and recesses of each and whether any part is inserted into another or receives the part of another into itself." Galen was another such writer who was familiar with the studies of Herophilus and Erasistratus. History: India The ancient societies that were rooted in India left behind artwork on how to kill animals during a hunt. The images, showing how to kill most effectively depending on the game being hunted, reveal an intimate knowledge of both external and internal anatomy as well as the relative importance of organs. The knowledge was mostly gained through hunters preparing the recently captured prey. Once the roaming lifestyle was no longer necessary, it was replaced in part by the civilization that formed in the Indus Valley. Unfortunately, there is little that remains from this time to indicate whether or not dissection occurred, as the civilization was lost with the migration of the Aryan people. Early in the history of India (2nd to 3rd century), the Arthashastra described the four ways that death can occur and their symptoms: drowning, hanging, strangling, or asphyxiation. According to that source, an autopsy should be performed in any case of untimely demise. The practice of dissection flourished during the 7th and 8th centuries, a period in which medical education was standardized. This created a need to better understand human anatomy, so as to have educated surgeons. Dissection was limited by the religious taboo on cutting the human body, which changed the approach taken to accomplish the goal. The process involved the loosening of the tissues in streams of water before the outer layers were sloughed off with soft implements to reach the musculature. To perfect the technique of slicing, the prospective students used gourds and squash.
These techniques of dissection gave rise to an advanced understanding of anatomy and enabled practitioners to complete procedures used today, such as rhinoplasty. During medieval times the anatomical teachings from India spread throughout the known world; however, the practice of dissection was stunted by Islam. The practice of dissection at a university level was not seen again until 1827, when it was performed by the student Pandit Madhusudan Gupta. Through the 1800s, university teachers had to continually push against the social taboos of dissection, until around 1850 when the universities decided that it was more cost effective to train Indian doctors than bring them in from Britain. Indian medical schools were, however, training female doctors well before those in England. The current state of dissection in India is deteriorating. The number of hours spent in dissection labs during medical school has decreased substantially over the last twenty years. The future of anatomy education will probably be an elegant mix of traditional methods and integrative computer learning. The use of dissection in early stages of medical training has been shown to be more effective for the retention of the intended information than its simulated counterparts. However, there is use for the computer-generated experience as review in the later stages. The combination of these methods is intended to strengthen the students' understanding and confidence of anatomy, a subject that is infamously difficult to master. There is a growing need for anatomists (most anatomy labs are taught by graduates hoping to complete degrees in anatomy) to continue the long tradition of anatomy education. History: Islamic world From the beginning of the Islamic faith in 610 A.D., Shari'ah law has applied to a greater or lesser extent within Muslim countries, supported by Islamic scholars such as Al-Ghazali. Islamic physicians such as Ibn Zuhr (Avenzoar) (1091–1161) in Al-Andalus, Saladin's physician Ibn Jumay during the 12th century, Abd el-Latif in Egypt c. 1200, and Ibn al-Nafis in Syria and Egypt in the 13th century may have practiced dissection, but it remains ambiguous whether or not human dissection was practiced. Ibn al-Nafis, a physician and Muslim jurist, suggested that the "precepts of Islamic law have discouraged us from the practice of dissection, along with whatever compassion is in our temperament", indicating that while there was no law against it, it was nevertheless uncommon. Islam dictates that the body be buried as soon as possible, barring religious holidays, and that there be no other means of disposal such as cremation. Prior to the 10th century, dissection was not performed on human cadavers. The book Al-Tasrif, written by Al-Zahrawi in 1000 A.D., details surgical procedures that differed from the previous standards. The book was an educational text of medicine and surgery which included detailed illustrations. It was later translated and took the place of Avicenna's The Canon of Medicine as the primary teaching tool in Europe from the 12th century to the 17th century. There were some who were willing to dissect humans for the sake of learning up to the 12th century, after which it was forbidden. This attitude remained constant until 1952, when the Islamic School of Jurisprudence in Egypt ruled that "necessity permits the forbidden". This decision allowed for the investigation of questionable deaths by autopsy.
In 1982, a fatwa ruled that if it serves justice, autopsy is worth the disadvantages. Though Islam now approves of autopsy, the Islamic public still disapproves. Autopsy is prevalent in most Muslim countries for medical and judicial purposes. In Egypt it holds an important place within the judicial structure, and is taught at all the country's medical universities. In Saudi Arabia, whose law is completely dictated by Shari'ah, autopsy is viewed poorly by the population but can be compelled in criminal cases; human dissection is sometimes found at the university level. Autopsy is performed for judicial purposes in Qatar and Tunisia. Human dissection is present in the modern day Islamic world, but is rarely published on due to the religious and social stigma. History: Tibet Tibetan medicine developed a rather sophisticated knowledge of anatomy, acquired from long-standing experience with human dissection. Tibetans had adopted the practice of sky burial because of the country's hard ground, frozen for most of the year, and the lack of wood for cremation. A sky burial begins with a ritual dissection of the deceased, and is followed by the feeding of the parts to vultures on the hill tops. Over time, Tibetan anatomical knowledge found its way into Ayurveda and to a lesser extent into Chinese medicine. History: Christian Europe Throughout the history of Christian Europe, the dissection of human cadavers for medical education has experienced various cycles of legalization and proscription in different countries. Dissection was rare during the Middle Ages, but it was practised, with evidence from at least as early as the 13th century. The practice of autopsy in Medieval Western Europe is "very poorly known" as few surgical texts or conserved human dissections have survived. History: A modern Jesuit scholar has claimed that Christian theology contributed significantly to the revival of human dissection and autopsy by providing a new socio-religious and cultural context in which the human cadaver was no longer seen as sacrosanct. An edict of the 1163 Council of Tours and an early 14th-century decree of Pope Boniface VIII have mistakenly been identified as prohibiting dissection and autopsy; misunderstanding of or extrapolation from these edicts may have contributed to reluctance to perform such procedures. The Middle Ages witnessed the revival of an interest in medical studies, including human dissection and autopsy. History: Frederick II (1194–1250), the Holy Roman emperor, ruled that anyone studying to be a physician or a surgeon must attend a human dissection, which would be held at least once every five years. Some European countries began legalizing the dissection of executed criminals for educational purposes in the late 13th and early 14th centuries. Mondino de Luzzi carried out the first recorded public dissection around 1315. At this time, autopsies were carried out by a team consisting of a Lector, who lectured, the Sector, who did the dissection, and the Ostensor, who pointed to features of interest. The Italian Galeazzo di Santa Sofia made the first public dissection north of the Alps in Vienna in 1404. History: Vesalius in the 16th century carried out numerous dissections in his extensive anatomical investigations. He was attacked frequently for his disagreement with Galen's opinions on human anatomy.
Vesalius was the first to lecture and dissect the cadaver simultaneously. The Catholic Church is known to have ordered an autopsy on conjoined twins Joana and Melchiora Ballestero in Hispaniola in 1533 to determine whether they shared a soul. They found that there were two distinct hearts, and hence two souls, based on the view of the ancient Greek philosopher Empedocles, who believed the soul resided in the heart. History: Human dissection was also practised by Renaissance artists. Though most chose to focus on the external surfaces of the body, some like Michelangelo Buonarroti, Antonio del Pollaiuolo, Baccio Bandinelli, and Leonardo da Vinci sought a deeper understanding. However, there were no provisions for artists to obtain cadavers, so they had to resort to unauthorised means, as indeed anatomists sometimes did, such as grave robbing, body snatching, and murder. Anatomization was sometimes ordered as a form of punishment, as, for example, in 1806 to James Halligan and Dominic Daley after their public hanging in Northampton, Massachusetts. In modern Europe, dissection is routinely practised in biological research and education, in medical schools, and to determine the cause of death in autopsy. It is generally considered a necessary part of learning and is thus accepted culturally. It sometimes attracts controversy, as when Odense Zoo decided to dissect lion cadavers in public before a "self-selected audience". History: Britain In Britain, dissection remained entirely prohibited from the end of the Roman conquest and through the Middle Ages to the 16th century, when a series of royal edicts gave specific groups of physicians and surgeons some limited rights to dissect cadavers. The permission was quite limited: by the mid-18th century, the Royal College of Physicians and Company of Barber-Surgeons were the only two groups permitted to carry out dissections, and had an annual quota of ten cadavers between them. As a result of pressure from anatomists, especially in the rapidly growing medical schools, the Murder Act 1752 allowed the bodies of executed murderers to be dissected for anatomical research and education. By the 19th century this supply of cadavers proved insufficient, as the public medical schools were growing, and the private medical schools lacked legal access to cadavers. A thriving black market arose in cadavers and body parts, leading to the creation of the profession of body snatching, and the infamous Burke and Hare murders in 1828, when 16 people were murdered for their cadavers, to be sold to anatomists. The resulting public outcry led to the passage of the Anatomy Act 1832, which increased the legal supply of cadavers for dissection. By the 21st century, the availability of interactive computer programs and changing public sentiment led to renewed debate on the use of cadavers in medical education. The Peninsula College of Medicine and Dentistry in the UK, founded in 2000, became the first modern medical school to carry out its anatomy education without dissection. History: United States In the United States, dissection of frogs became common in college biology classes from the 1920s, and was gradually introduced at earlier stages of education. By 1988, some 75 to 80 percent of American high school biology students were participating in a frog dissection, with a trend towards introduction in elementary schools. The frogs are most commonly from the genus Rana.
Other popular animals for high-school dissection at the time of that survey were, among vertebrates, fetal pigs, perch, and cats; and among invertebrates, earthworms, grasshoppers, crayfish, and starfish. About six million animals are dissected each year in United States high schools (2016), not counting medical training and research. Most of these are purchased already dead from slaughterhouses and farms. Dissection in U.S. high schools became prominent in 1987, when a California student, Jenifer Graham, sued to require her school to let her complete an alternative project. The court ruled that mandatory dissections were permissible, but that Graham could ask to dissect a frog that had died of natural causes rather than one that was killed for the purposes of dissection; the practical impossibility of procuring a frog that had died of natural causes in effect let Graham opt out of the required dissection. The suit gave publicity to anti-dissection advocates. Graham appeared in a 1987 Apple Computer commercial for the virtual-dissection software Operation Frog. The state of California passed a Student's Rights Bill in 1988 requiring that objecting students be allowed to complete alternative projects. Opting out of dissection increased through the 1990s. In the United States, 17 states along with Washington, D.C. have enacted dissection-choice laws or policies that allow students in primary and secondary education to opt out of dissection. Other states including Arizona, Hawaii, Minnesota, Texas, and Utah have more general policies on opting out on moral, religious, or ethical grounds. To overcome these concerns, J. W. Mitchell High School in New Port Richey, Florida, in 2019 became the first US high school to use synthetic frogs for dissection in its science classes, instead of preserved real frogs. As for the dissection of cadavers in undergraduate and medical school, traditional dissection is supported by professors and students, though some opposition has limited the availability of dissection. Upper-level students who have experienced this method along with their professors agree that "Studying human anatomy with colorful charts is one thing. Using a scalpel and an actual, recently-living person is an entirely different matter." Acquisition of cadavers: The way in which cadaveric specimens are obtained differs greatly according to country. In the UK, donation of a cadaver is wholly voluntary. Involuntary donation plays a role in about 20 percent of specimens in the US and almost all specimens donated in some countries such as South Africa and Zimbabwe. Countries that practice involuntary donation may make available the bodies of dead criminals or unclaimed or unidentified bodies for the purposes of dissection. Such practices may lead to a greater proportion of the poor, homeless and social outcasts being involuntarily donated. Cadavers donated in one jurisdiction may also be used for the purposes of dissection in another, whether across states in the US, or imported from other countries, such as with Libya. As an example of how a cadaver is donated voluntarily, a funeral home in conjunction with a voluntary donation program identifies a body whose donor was part of the program. After the subject is broached with relatives in a diplomatic fashion, the body is transported to a registered facility. The body is tested for the presence of HIV and hepatitis viruses. It is then evaluated for use as a "fresh" or "prepared" specimen.
Disposal of specimens: Cadaveric specimens for dissection are, in general, disposed of by cremation. The deceased may then be interred at a local cemetery. If the family wishes, the ashes of the deceased are then returned to the family. Many institutes have local policies to engage, support and celebrate the donors. This may include the setting up of local monuments at the cemetery. Use in education: Human cadavers are often used in medicine to teach anatomy or surgical instruction. Cadavers are selected according to their anatomy and availability. They may be used as part of dissection courses involving a "fresh" specimen so as to be as realistic as possible—for example, when training surgeons. Cadavers may also be pre-dissected by trained instructors. This form of dissection involves the preparation and preservation of specimens for a longer time period and is generally used for the teaching of anatomy. Alternatives: Some alternatives to dissection may present educational advantages over the use of animal cadavers, while eliminating perceived ethical issues. These alternatives include computer programs, lectures, three-dimensional models, films, and other forms of technology. Concern for animal welfare is often at the root of objections to animal dissection. Studies show that some students reluctantly participate in animal dissection out of fear of real or perceived punishment or ostracism from their teachers and peers, and many do not speak up about their ethical objections. One alternative to the use of cadavers is computer technology. At Stanford Medical School, software combines X-ray, ultrasound and MRI imaging for display on a screen as large as a body on a table. In a variant of this, a "virtual anatomy" approach being developed at New York University, students wear three-dimensional glasses and can use a pointing device to "[swoop] through the virtual body, its sections as brightly colored as living tissue." This method is claimed to be "as dynamic as Imax [cinema]". Advantages and disadvantages: Proponents of animal-free teaching methodologies argue that alternatives to animal dissection can benefit educators by increasing teaching efficiency and lowering instruction costs while affording teachers an enhanced potential for the customization and repeatability of teaching exercises. Those in favor of dissection alternatives point to studies which have shown that computer-based teaching methods "saved academic and nonacademic staff time … were considered to be less expensive and an effective and enjoyable mode of student learning [and] … contributed to a significant reduction in animal use" because there is no set-up or clean-up time, no obligatory safety lessons, and no monitoring of misbehavior with animal cadavers, scissors, and scalpels. With software and other non-animal methods, there is also no expensive disposal of equipment or hazardous material removal. Some programs also allow educators to customize lessons and include built-in test and quiz modules that can track student performance.
Furthermore, animals (whether dead or alive) can be used only once, while non-animal resources can be used for many years—an added benefit that could result in significant cost savings for teachers, school districts, and state educational systems. Several peer-reviewed comparative studies examining information retention and performance of students who dissected animals and those who used an alternative instruction method have concluded that the educational outcomes of students who are taught basic and advanced biomedical concepts and skills using non-animal methods are equivalent or superior to those of their peers who use animal-based laboratories such as animal dissection. Some reports state that students' confidence, satisfaction, and ability to retrieve and communicate information was much higher for those who participated in alternative activities compared to dissection. Three separate studies at universities across the United States found that students who modeled body systems out of clay were significantly better at identifying the constituent parts of human anatomy than their classmates who performed animal dissection. Another study found that students preferred using clay modeling over animal dissection and performed just as well as their cohorts who dissected animals. In 2008, the National Association of Biology Teachers (NABT) affirmed its support for classroom animal dissection stating that they "Encourage the presence of live animals in the classroom with appropriate consideration to the age and maturity level of the students … NABT urges teachers to be aware that alternatives to dissection have their limitations. NABT supports the use of these materials as adjuncts to the educational process but not as exclusive replacements for the use of actual organisms." The National Science Teachers Association (NSTA) "supports including live animals as part of instruction in the K-12 science classroom because observing and working with animals firsthand can spark students' interest in science as well as a general respect for life while reinforcing key concepts" of biological sciences. NSTA also supports offering dissection alternatives to students who object to the practice. The NORINA database lists over 3,000 products which may be used as alternatives or supplements to animal use in education and training. These include alternatives to dissection in schools. InterNICHE has a similar database and a loans system.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Prospects Course Exchange** Prospects Course Exchange: Prospects Course Exchange is a system that manages XCRI-CAP feeds, enabling course data from higher education providers to be visible through Prospects' postgraduate course search. It is run and operated by Graduate Prospects. Prospects Course Check is a free course validation checker also provided by the service.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Reticulum (anatomy)** Reticulum (anatomy): The reticulum is the second chamber in the four-chamber alimentary canal of a ruminant animal. Anatomically it is the smaller portion of the reticulorumen along with the rumen. Together these two compartments make up 84% of the volume of the total stomach. The reticulum is colloquially referred to as the honeycomb, bonnet, or king's-hood. When cleaned and used for food, it is called "tripe". Reticulum (anatomy): Heavy or dense feed and foreign objects will settle here. It is the site of hardware disease in cattle and, because of the proximity to the heart, this disease can be life-threatening. Anatomy: The internal mucosa has a honeycomb shape. When looking at the reticulum with ultrasonography, it appears as a crescent-shaped structure with a smooth contour. The reticulum is adjacent to the diaphragm, lungs, abomasum, rumen and liver. The heights of the reticular crests and depth of the structures vary across ruminant animal species. Grazing ruminants have higher crests than browsers. However, general reticulum size is fairly constant across ruminants of differing body size and feeding type. Anatomy: In a mature cow, the reticulum can hold around 5 gallons of liquid. The rumen and reticulum are very close in structure and function and can be considered as one organ. They are separated only by a muscular fold of tissue. In immature ruminants a reticular groove is formed by the muscular fold of the reticulum. This allows milk to pass by the reticulorumen straight into the abomasum. Role in digestion: The fluid contents of the reticulum play a role in particle separation. This is true both in domestic and wild ruminants. The separation takes place through biphasic contractions. In the first contraction, large particles are sent back into the rumen while the reticulo-omasal orifice allows the passage of finer particles. In the second contraction the reticulum contracts completely so the empty reticulum can refill with contents from the rumen. These contents are then sorted in the next biphasic contraction. The contractions occur in regular intervals. High-density particles may settle into the honeycomb structures and can be found after death. It is during the contractions of the reticulum that sharp objects can penetrate the wall and make their way to the heart. Some ruminants, such as goats, also have monophasic contractions in addition to the biphasic contractions.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Super Monkey Ball Jr.** Super Monkey Ball Jr.: Super Monkey Ball Jr. is a platform game, part of the Super Monkey Ball series, developed by Realism for the Game Boy Advance. It is one of the few games on the system to make use of its 3D graphics capabilities. It is generally seen as a port of the first game in the Super Monkey Ball series, as it reuses many levels from it, but has a few differences. Gameplay: Main game As with previous entries, the objective of Super Monkey Ball Jr. is to control a monkey to reach the goal before time runs out. New to this game, the player has the option to speed up or slow down the tilt by pressing the A or B Buttons respectively. Gameplay: Party games Monkey Duel: Playable with only two players, both players race to the finish as fast as they can while picking up any nearby bananas. The winning player gets 5 bananas added to their overall total. The mode can be played up to five rounds and the player who has the most bananas at the end of the specified number of rounds wins. There is also an option to give one player or the other a head start, up to 5 seconds. Monkey Bowling, Monkey Fight, and Monkey Golf, all three minigames that were present in Super Monkey Ball, return in this game and function exactly like their original appearances. With the exception of Monkey Fight, players can play Monkey Bowling and Monkey Golf on one Game Boy Advance system by alternating turns. Characters: Like many other games of the Super Monkey Ball series, the player has the option to choose AiAi, MeeMee, Baby or GonGon. In multiplayer, two to four people can pick the same character. Reception: Super Monkey Ball Jr. received "generally favorable" reviews, according to review aggregator website Metacritic, but the game was criticized for a lack of analog movement. IGN gave the game a 9/10 and GameSpot an 8/10; its Metacritic score of 82/100 indicates "generally favorable reviews".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TyeA protein domain** TyeA protein domain: In molecular biology, the protein domain TyeA is short for Translocation of Yops into eukaryotic cells A. It controls the release of Yersinia outer proteins (Yops) which help Yersinia evade the immune system. More specifically, it interacts with the bacterial protein YopN via hydrophobic residues located on the helices. Function: This protein domain is involved in the control of Yop release. This helps it to evade the host's immune system. Yersinia spp. do this by injecting the effector Yersinia outer proteins (Yops) into the target cell. Also involved in Yop secretion are YopN and LcrG. TyeA is also required for translocation of YopE and YopH. TyeA interacts with YopN and with YopD, a component of the translocation apparatus. This shows the complex which recognizes eukaryotic cells and controls Yop secretion is also actively involved in translocation. Localisation: Like YopN, TyeA is localized at the bacterial surface. Structure: The structure of TyeA is composed of two pairs of parallel alpha-helices. Mechanism: Association of TyeA with the C terminus of YopN is accompanied by conformational changes in both polypeptides that create order out of disorder: the resulting structure then serves as an impediment to type III secretion of YopN.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**National Dope Testing Laboratory** National Dope Testing Laboratory: The National Dope Testing Laboratory (NDTL) is a premier analytical testing & research organization established as an autonomous body under the Ministry of Youth Affairs and Sports, Government of India. It is the only laboratory in the country responsible for human sports dope testing. It is headed by a Chief Executive Officer (CEO). Dr. Puran Lal Sahu is the Scientific Director of NDTL. It is accredited by the National Accreditation Board for Testing & Calibration Laboratories, NABL (ISO/IEC 17025:2017), for human dope testing of urine & blood samples from human sports. NDTL is one of the 29 WADA accredited laboratories in the world. It is one of the most modern, state-of-the-art laboratories in the country, equipped with the latest analytical instrumentation. Introduction: The NDTL in India was established in 2008 with the aim of obtaining permanent accreditation from the International Olympic Committee (IOC) and the World Anti-Doping Agency to carry out testing for banned drugs in human sports. The lab has successfully completed sample testing for numerous major international as well as national events since its inception. The lab was earlier located in the Jawaharlal Nehru Stadium and was shifted to a new site with better facilities on May 14, 2009. The area of the new NDTL lab is 2700 square meters as against the earlier area of only 900 square meters. Accreditation: NDTL is accredited by various national and international agencies to carry out human dope testing. Some of the organizations that have accredited NDTL are as follows: the World Anti-Doping Agency; the National Accreditation Board for Testing and Calibration Laboratories, for chemical testing and biological testing; and CSCQ. History: The dope testing lab in India was established in 1990 (as the Dope Control Centre under the Sports Authority of India). The lab was modernized in 2002 with the aim of obtaining permanent accreditation from the International Olympic Committee and WADA. The lab obtained ISO/IEC 17025 accreditation in 2003, an eligibility criterion for applying for WADA accreditation. Since the Sports Authority of India is responsible for the training of elite athletes, in view of the conflict of interest a decision was taken to have an independent body responsible for managing the anti-doping program in the country. The Union Cabinet took a decision to sign the Copenhagen Declaration on anti-doping and to set up the National Anti Doping Agency in December 2004. The first meeting of the General Body/Governing Body of NDTL was held under the chairmanship of the then Honourable Minister of Youth Affairs & Sports, M.S. Gill, on 5 January 2009. The recruitment rules, the tariff for dope testing, and the action plan for the Commonwealth Games 2010 were duly approved by the Governing Body. Research Projects: NDTL has state-of-the-art facilities for research and is engaged in conducting research on various projects. Research papers are presented at various national and international conferences and published in indexed journals. The first ever Ph.D. thesis, "Detectability of Indian glucocorticosteroid preparations in sports persons: Effect on the endogenous steroid profile", was submitted in April 2009 by Madhusudhana I. Reddy and the degree has been awarded. Research Projects: Projects Accomplished: Detectability of Corticosteroid in various Indian preparations: Effect on Endogenous steroid profile. Characteristics of IEF Patterns and SDS-PAGE Result of Indian EPO Biosimilars.
Establishing Reference Range for Endogenous Steroids in Indian Sportspersons and to study the effect of ethnicity and steroid abuse on delta values of endogenous steroids. Analytical strategies in the development and utilization of mass spectrometric method for analysis of Stimulants & Narcotics. Effect of Ethnicity and Anabolic Steroid Abuse on Delta Value of Endogenous Steroids. Rapid Screening in Doping Analysis: Separation and Detection of Doping Agents by Liquid and Gas Chromatographic Mass Spectrometric Analysis. Current Projects: Detection of Synthetic glucocortico steroids, stimulants and anabolic steroids in Indian herbal drugs and supplements. Discrimination of biological and synthetic origin of anabolic steroids in human urine: Correlation between GCMSD & Isotope Ratio Mass Spectrometry. An Analytical approach for the Screening of Performance Enhancing Substances from various Dietary Supplements & to study their excretion profile using Chromatographic-Mass Spectrometric Technique. Development of analytical tools for the Detection and Identification of performance enhancing Peptides in Biological Specimen. An analytical approach for the Detection of Corticosteroids in Human and Horse Biological Specimen using Chromatographic and Mass Spectrometric Technique. To study the effect of various preparations of Testosterone on Steroid Profiling and Delta Value of 13C/12C of Testosterone Metabolite in volunteers with Normal/Abnormal Testosterone/Epitestosterone (T/E) Ratio. Indian Herbal Drugs: Identification of stimulants, narcotics and other substances with potential of ergogenic aids in sports. Characterization of physiochemical properties and analysis of liposomes in human biological samples using hyphenated analytical technique. Detection of Stanozolol conjugated metabolites by liquid chromatography tandem-mass spectrometry. Prednisone excretion study and identification of its marker metabolites. Rapid determination of urinary phthalates using liquid chromatography tandem mass spectrometry. Identification of various banned small peptides in human urine using liquid chromatography tandem mass spectrometry. Instruments and Technologies: The National Dope Testing Laboratory is equipped with state-of-the-art technologies and the most modern equipment. The use of Gas Chromatography coupled with Mass Spectrometry (GC-MS) is the most common and the oldest technology being used worldwide for dope testing. Nowadays, the use of liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) has become quite widespread. This technique has helped detect the difficult drugs falling into various categories of banned substances and is becoming increasingly important in the fight against doping. Apart from GC-MS and LC-MS/MS, the use of Gas Chromatography coupled with tandem Mass Spectrometry (GC-MS/MS) and Isotope-ratio mass spectrometry (IRMS) is also very prevalent in sports dope testing. Instruments and Technologies: Both GC-MS/MS and LC-MS/MS are used primarily to analyze urine samples. The analysis of the blood matrix requires a completely different type of equipment which is commonly used in hospital laboratories.
Events Organized by NDTL: Since its inception, NDTL has organized many events as follows: One Day Interactive Session with Horse Racing officials : October 14, 2015 One Day Seminar on "Latest trends in Anti Doping Science" : October 15, 2015 3rd WADA Q/A Meeting : January 28–29, 2016 First International Conference on "Implementation of latest Guidelines in human and horse doping: Interaction between Testing authorities and Doping Control Laboratories" : November 4–5, 2016
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Calendering (textiles)** Calendering (textiles): Calendering of textiles is a finishing process used to smooth, coat, or thin a material. With textiles, fabric is passed between calender rollers at high temperatures and pressures. Calendering is used on fabrics such as moire to produce its watered effect and also on cambric and some types of sateens. Calendering (textiles): In preparation for calendering, the fabric is folded lengthwise with the front side, or face, inside, and stitched together along the edges. The fabric can be folded together at full width; however, this is done less often as it is more difficult. The fabric is then run through rollers at high temperatures and pressure that polish the surface and make the fabric smoother and more lustrous. Fabrics that go through the calendering process feel thin, glossy and papery. The wash durability of a calendered finish on thermoplastic fibers like polyester is higher than on cellulose fibers such as cotton, though each depends on the amount and type of finishing additives used and the machinery and process conditions employed. The durability of blended fabrics reflects the above, as well as the proportion of the synthetic fiber component. Variations: Various finishes can be achieved through the calendering process by varying different aspects of the process. The main types are beetling, watered, embossing, and Schreiner. Beetled Beetling is a finish given to cotton and linen cloth, and makes it look like satin. In the beetling process the fabric goes over wooden rollers and is beaten with wooden hammers. Watered The watered finish, also known as moire, is produced by using ribbed rollers. These rollers compress the cloth and the ribs produce the characteristic watermark effect by differentially moving and compressing threads. In the process some threads are left round while others are somewhat flattened. Embossed The embossing process uses rollers with engraved patterns, which become stamped onto the fabric, giving the fabric a raised and sunken look. This works best with soft fabrics. Variations: Schreiner Similar to the watered process, the Schreiner process uses ribbed rollers, though very fine ones, with as many as six hundred ribs per inch. Pressed flat under extremely high pressure, the threads receive little lines, which cause the fabric to reflect light better than a flat surface. The high luster of cloth finished with the Schreiner method can be made more lasting by heating the rollers. History: Historically calendering was done by hand with a huge pressing stone. For example, in China huge rocks were brought from the north of the Yangtze River. The pressing stone was cut into a bowl shape, and the surface of the curved bottom made perfectly smooth. After a piece of cloth was placed underneath the stone, the worker would stand on the stone and rock it with his feet to press the cloth.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fold of left vena cava** Fold of left vena cava: The fold of the left vena cava, ligament of the left vena cava, or vestigial fold of Marshall, is a triangular fold of the serous pericardium that lies between the left pulmonary artery and subjacent pulmonary vein. Fold of left vena cava: It is formed by the folding of the serous layer over the remnant of the lower part of the left superior vena cava (duct of Cuvier), which becomes obliterated during fetal life, and remains as a fibrous band stretching from the highest left intercostal vein to the left atrium, where it is continuous with a small cardiac vein, the vein of the left atrium (oblique vein of Marshall), which opens into the coronary sinus.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Object copying** Object copying: In object-oriented programming, object copying is creating a copy of an existing object, a unit of data in object-oriented programming. The resulting object is called an object copy or simply copy of the original object. Copying is basic but has subtleties and can have significant overhead. There are several ways to copy an object, most commonly by a copy constructor or cloning. Copying is done mostly so the copy can be modified or moved, or the current value preserved. If either of these is unneeded, a reference to the original data is sufficient and more efficient, as no copying occurs. Object copying: Objects in general store composite data. While in simple cases copying can be done by allocating a new, uninitialized object and copying all fields (attributes) from the original object, in more complex cases this does not result in the desired behavior. Methods of copying: The design goal of most objects is to give the appearance of being made out of one monolithic block even though most are not. As objects are made up of several different parts, copying becomes nontrivial. Several strategies exist to treat this problem. Methods of copying: Consider an object A, which contains fields xi (more concretely, consider if A is a string and xi is an array of its characters). There are different strategies for making a copy of A, referred to as shallow copy and deep copy. Many languages allow generic copying by either strategy, defining either one copy operation or separate shallow copy and deep copy operations. Note that an even shallower approach is to use a reference to the existing object A, in which case there is no new object, only a new reference. Methods of copying: The terminology of shallow copy and deep copy dates to Smalltalk-80. The same distinction holds for comparing objects for equality: most basically there is a difference between identity (same object) and equality (same value), corresponding to shallow equality and (1 level) deep equality of two object references, but then further whether equality means comparing only the fields of the object in question or dereferencing some or all fields and comparing their values in turn (e.g., are two linked lists equal if they have the same nodes, or if they have the same values?). Methods of copying: Shallow copy One method of copying an object is the shallow copy. In that case a new object B is created, and the field values of A are copied over to B. This is also known as a field-by-field copy, field-for-field copy, or field copy. If the field value is a reference to an object (e.g., a memory address) it copies the reference, hence referring to the same object as A does, and if the field value is a primitive type it copies the value of the primitive type. In languages without primitive types (where everything is an object), all fields of the copy B are references to the same objects as the fields of the original A. The referenced objects are thus shared, so if one of these objects is modified (from A or B), the change is visible in the other. Shallow copies are simple and typically cheap, as they can usually be implemented by simply copying the bits exactly. Methods of copying: Deep copy An alternative is a deep copy, meaning that fields are dereferenced: rather than references to objects being copied, new copy objects are created for any referenced objects, and references to these are placed in B.
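To make the distinction concrete, the following is a minimal Java sketch (the Person and Address classes are hypothetical, invented purely for illustration). The shallow copy shares the referenced Address, so a later change to it is visible through both objects, while the deep copy duplicates it:

```java
// Hypothetical classes illustrating shallow vs. deep copying.
class Address {
    String city;
    Address(String city) { this.city = city; }
    Address(Address other) { this.city = other.city; } // copy constructor
}

class Person {
    String name;      // String is immutable, so sharing it is harmless
    Address address;  // mutable object, so sharing it matters

    Person(String name, Address address) {
        this.name = name;
        this.address = address;
    }

    // Shallow copy: the new Person shares the same Address object.
    static Person shallowCopy(Person original) {
        return new Person(original.name, original.address);
    }

    // Deep copy: the referenced Address is duplicated as well.
    static Person deepCopy(Person original) {
        return new Person(original.name, new Address(original.address));
    }
}

public class CopyDemo {
    public static void main(String[] args) {
        Person a = new Person("Ada", new Address("London"));
        Person shallow = Person.shallowCopy(a);
        Person deep = Person.deepCopy(a);

        a.address.city = "Cambridge";              // mutate the shared Address
        System.out.println(shallow.address.city);  // "Cambridge": change is visible
        System.out.println(deep.address.city);     // "London": deep copy unaffected
    }
}
```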
Methods of copying: Combination In more complex cases, some fields in a copy should have shared values with the original object (as in a shallow copy), corresponding to an "association" relationship; and some fields should have copies (as in a deep copy), corresponding to an "aggregation" relationship. In these cases a custom implementation of copying is generally required; this issue and solution dates to Smalltalk-80. Alternatively, fields can be marked as requiring a shallow copy or deep copy, and copy operations automatically generated (likewise for comparison operations). This is not implemented in most object-oriented languages, however, though there is partial support in Eiffel. Implementation: Nearly all object-oriented programming languages provide some way to copy objects. As most languages do not provide built-in copying behavior for most objects, a programmer must define how an object should be copied, just as they must define whether two objects are identical or even comparable in the first place. Many languages provide some default behavior. How copying is solved varies from language to language, and what concept of an object it has. Implementation: Lazy copy A lazy copy is an implementation of a deep copy. When initially copying an object, a (fast) shallow copy is used. A counter is also used to track how many objects share the data. When the program wants to modify an object, it can determine if the data is shared (by examining the counter) and can do a deep copy if needed. Implementation: A lazy copy looks to the outside just like a deep copy, but takes advantage of the speed of a shallow copy whenever possible. The downside is a rather high but constant base cost because of the counter. Also, in certain situations, circular references can cause problems. Lazy copy is related to copy-on-write. In Java The following presents examples for one of the most widely used object-oriented languages, Java, which should cover nearly every way that an object-oriented language can treat this problem. Implementation: Unlike in C++, objects in Java are always accessed indirectly through references. Objects are never created implicitly but instead are always passed or assigned by a reference variable. (Method arguments in Java are always passed by value; however, it is the value of the reference variable that is being passed.) The Java Virtual Machine manages garbage collection so that objects are cleaned up after they are no longer reachable. There is no automatic way to copy any given object in Java. Implementation: Copying is usually performed by a clone() method of a class. This method usually, in turn, calls the clone() method of its parent class to obtain a copy, and then does any custom copying procedures. Eventually this gets to the clone() method of Object (the uppermost class), which creates a new instance of the same class as the object and copies all the fields to the new instance (a "shallow copy"). If this method is used, the class must implement the Cloneable marker interface, or else it will throw a CloneNotSupportedException. After obtaining a copy from the parent class, a class' own clone() method may then provide custom cloning capability, like deep copying (i.e., duplicating some of the structures referred to by the object) or giving the new instance a new unique ID. Implementation: The return type of clone() is Object, but implementers of a clone method could write the type of the object being cloned instead due to Java's support for covariant return types.
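As a sketch of the mechanism just described (the Roster class is hypothetical), a class implements the Cloneable marker interface, obtains the field-by-field copy from Object.clone() via super.clone(), deep-copies its mutable field, and uses a covariant return type to declare the concrete class instead of Object:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical class showing a custom clone() built on top of Object.clone().
public class Roster implements Cloneable {     // Cloneable marker interface
    private List<String> members = new ArrayList<>();

    public void add(String name) { members.add(name); }

    @Override
    public Roster clone() {                    // covariant return type
        try {
            // super.clone() performs the shallow, field-by-field copy.
            Roster copy = (Roster) super.clone();
            // Deep-copy the mutable list so the clone does not share it.
            copy.members = new ArrayList<>(this.members);
            return copy;
        } catch (CloneNotSupportedException e) {
            // Cannot happen: this class implements Cloneable.
            throw new AssertionError(e);
        }
    }
}
```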
One advantage of using clone() is that since it is an overridable method, we can call clone() on any object, and it will use the clone() method of its class, without the calling code needing to know what that class is (which would be needed with a copy constructor). Implementation: A disadvantage is that one often cannot access the clone() method on an abstract type. Most interfaces and abstract classes in Java do not specify a public clone() method. Thus, often the only way to use the clone() method is if the class of an object is known, which is contrary to the abstraction principle of using the most generic type possible. For example, if one has a List reference in Java, one cannot invoke clone() on that reference because List specifies no public clone() method. Implementations of List like ArrayList and LinkedList all generally have clone() methods, but it is inconvenient and bad abstraction to carry around the class type of an object. Implementation: Another way to copy objects in Java is to serialize them through the Serializable interface. This is typically used for persistence and wire protocol purposes, but it does create copies of objects and, unlike clone, a deep copy that gracefully handles cycled graphs of objects is readily available with minimal effort from a programmer. Implementation: Both of these methods suffer from a notable problem: the constructor is not used for objects copied with clone or serialization. This can lead to bugs with improperly initialized data, prevents the use of final member fields, and makes maintenance challenging. Some utilities attempt to overcome these issues by using reflection to deep copy objects, such as the deep-cloning library. Implementation: In Eiffel Runtime objects in Eiffel are accessible either indirectly through references or as expanded objects which fields are embedded within the objects that use them. That is, fields of an object are stored either externally or internally. The Eiffel class ANY contains features for shallow and deep copying and cloning of objects. All Eiffel classes inherit from ANY, so these features are available within all classes, and are applicable both to reference and expanded objects. The copy feature effects a shallow, field-by-field copy from one object to another. In this case no new object is created. If y were copied to x, then the same objects referenced by y before the application of copy, will also be referenced by x after the copy feature completes. To effect the creation of a new object which is a shallow duplicate of y, the feature twin is used. In this case, one new object is created with its fields identical to those of the source. The feature twin relies on the feature copy, which can be redefined in descendants of ANY, if needed. The result of twin is of the anchored type like Current. Implementation: Deep copying and creating deep twins can be done using the features deep_copy and deep_twin, again inherited from class ANY. These features have the potential to create many new objects, because they duplicate all the objects in an entire object structure. Because new duplicate objects are created instead of simply copying references to existing objects, deep operations will become a source of performance issues more readily than shallow operations. Implementation: In other languages In C#, rather than using the interface ICloneable, a generic extension method can be used to create a deep copy using reflection. 
This has two advantages: First, it provides the flexibility to copy every object without having to specify each property and variable to be copied manually. Second, because the type is generic, the compiler ensures that the destination object and the source object have the same type. Implementation: In Objective-C, the methods copy and mutableCopy are inherited by all objects and intended for performing copies; the latter is for creating a mutable type of the original object. These methods in turn call the copyWithZone and mutableCopyWithZone methods, respectively, to perform the copying. An object must implement the corresponding copyWithZone method to be copyable. In OCaml, the library function Oo.copy performs shallow copying of an object. In Python, the library's copy module provides shallow copy and deep copy of objects through the copy() and deepcopy() functions, respectively. Programmers may define special methods __copy__() and __deepcopy__() in an object to provide custom copying implementation. Implementation: In Ruby, all objects inherit two methods for performing shallow copies, clone and dup. The two methods differ in that clone copies an object's tainted state, frozen state, and any singleton methods it may have, whereas dup copies only its tainted state. Deep copies may be achieved by dumping and loading an object's byte stream or YAML serialization.[1] Alternatively, you can use the deep_dive gem to do a controlled deep copy of your object graphs. [2] In Perl, nested structures are stored by the use of references, thus a developer can either loop over the entire structure and re-reference the data or use the dclone() function from the module Storable. Implementation: In VBA, an assignment of variables of type Object is a shallow copy, an assignment for all other types (numeric types, String, user defined types, arrays) is a deep copy. So the keyword Set for an assignment signals a shallow copy and the (optional) keyword Let signals a deep copy. There is no built-in method for deep copies of Objects in VBA.
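Returning to the serialization route described above for Java, one minimal sketch of such a utility (it assumes every object reachable from the argument implements Serializable) is:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public final class DeepCopyUtil {
    private DeepCopyUtil() {}

    // Deep-copies a Serializable object graph by writing it to an in-memory
    // byte stream and reading it back; Java serialization's reference
    // tracking means cycles in the graph are handled correctly.
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T deepCopy(T original)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(original);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buffer.toByteArray()))) {
            return (T) in.readObject();
        }
    }
}
```

As the text notes, this bypasses constructors, so any transient or non-serializable state must be handled separately.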
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ventricular extrasystoles with syncopal episodes-perodactyly-Robin sequence syndrome** Ventricular extrasystoles with syncopal episodes-perodactyly-Robin sequence syndrome: Ventricular extrasystoles with syncopal episodes-perodactyly-Robin sequence syndrome is a rare autosomal dominant genetic disorder characterized by cardiofaciodigital anomalies occurring alongside Pierre Robin sequence. Additional features include abnormal sense of smell, camptodactyly, recurrent joint dislocations, and short stature. Around 6 to 12 cases have been described in medical literature.This condition has also been called heart-hand syndrome type 5. Cases: This condition was first discovered in 1992 by Stoll et al, when they described 6 members belonging to a 3-generation French family. They had ventricular extrasystoles that presented itself with syncopal episodes associated with multifocal tachycardia, aplastic/hypoplastic distal phalanges of the toes (a phenomenon Stoll et al. described as perodactyly), Pierre Robin Sequence, a condition which causes symptoms such as glossoptosis, and down-slanting palpebral fissures (which Stoll et al. described as antimongoloid slanted). One instance of male-to-male transmission was seen in the family.In 2008, Mercer et al. described 5 cases from 2 families. The first case was from a 7-year-old English girl who was brought to a doctor visit after she had a syncopal episode (also known as fainting) while swimming. Physical examination showed that she had similar symptoms to those shown by the French family reported by Stoll et al. The second to fifth cases were from 4 members of a 2-generation English family (a woman, her brother, and her two sons), alongside the typical symptoms of the syndrome, they also had other dysmorphic features such as a straight, pointy nose and prominent interphalangeal joints. Out of the 4 patients, 3 had hypodontia, 2 had multiple ventricular extrasystoles that weren't associated with syncopal episodes, 2 had microcephaly, the same 2 patients had a low anterior hairline, and the same 2 patients had mild learning difficulties. These last 2 cases (with microcephaly, low anterior hairline and learning difficulties) were from the brothers, they attended a special needs school. While the mother didn't have any learning difficulties and had average intelligence, she did report having had difficulties with school during her academic years. A follow-up on the family was reported by Pengelly et al in the year 2016: one of the 2 brothers went on to have a daughter who reportedly had additional multiple congenital anomalies/dysmorphic features which neither of the brothers had been reported of having, including agenesis of the first metacarpal, a mild form of developmental delay, speech delay, long philtrum, short nasal bridge, thin upper lip, epicanthic folds, radial agenesis of the right arm, thumb hypoplasia, and various benign septal defects (of the heart) which were deemed to be harmless. 
Genetic testing revealed that all 5 family members who had once been reported as having Stoll syndrome had a mutation in their TRIO gene, which indicated that they had a separate disorder known as autosomal dominant intellectual disability-44 with microcephaly. Autosomal dominant intellectual disability-44 with microcephaly: This is a condition with only around 25 cases described in the medical literature (including the previously mentioned family). It is characterized by mild intellectual disability and developmental delay, microcephaly, digital anomalies, and facial dysmorphisms. It is associated with heterozygous mutations in the TRIO gene.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MyISAM** MyISAM: MyISAM was the default storage engine for the MySQL relational database management system versions prior to 5.5, released in December 2009. It is based on the older ISAM code, but it has many useful extensions. Filesystem: Each MyISAM table is stored on disk in three files (if it is not partitioned). The files have names that begin with the table name and have an extension to indicate the file type. MySQL uses a .frm file to store the definition of the table, but this file is not a part of the MyISAM engine; instead it is a part of the server. The data file has a .MYD (MYData) extension. The index file has a .MYI (MYIndex) extension. The index file, if lost, can always be recreated by recreating indexes. Filesystem: The file format depends on the ROW_FORMAT table option. The following formats are available: FIXED: Fixed is a format where all data (including variable-length types) have a fixed length. This format is faster to read, and makes repairing corrupted tables easier. If a table contains big variable-length columns (BLOB or TEXT) it cannot use the FIXED format. DYNAMIC: Variable-length columns do not have a fixed size. This format is a bit slower to read but saves some space on disk. Filesystem: COMPRESSED: Compressed tables can be created with a dedicated tool while MySQL is not running, and they are read-only. While this usually makes them a non-viable option, the compression ratio is generally appreciably higher than with alternatives. MyISAM files do not depend on the system and, since MyISAM is not transactional, their content does not depend on current server workload. Therefore it is possible to copy them between different servers. Features: MyISAM is optimized for environments with heavy read operations and few writes, or none at all. A typical area in which one could prefer MyISAM is data warehousing, because it involves queries on very big tables, and the update of such tables is done when the database is not in use (usually at night). Features: The reason MyISAM allows for fast reads is the structure of its indexes: each entry points to a record in the data file, and the pointer is the offset from the beginning of the file. This way records can be quickly read, especially when the format is FIXED, since the rows are then of constant length. Inserts are easy too because new rows are appended to the end of the data file. However, delete and update operations are more problematic: deletes must leave an empty space, or the rows' offsets would change; the same goes for updates when the length of a row becomes shorter; if an update makes a row longer, the row is fragmented. To defragment rows and reclaim empty space, the OPTIMIZE TABLE command must be executed. Because of this simple mechanism, MyISAM index statistics are usually quite accurate. Features: However, the simplicity of MyISAM has several drawbacks. The major deficiency of MyISAM is the absence of transactions support. Also, foreign keys are not supported. In normal use cases, InnoDB seems to be faster than MyISAM. Versions of MySQL 5.5 and greater have switched to the InnoDB engine to ensure referential integrity constraints, and higher concurrency. MyISAM supports FULLTEXT indexing and OpenGIS data types. Forks MariaDB has a storage engine called Aria, which is described as a "crash-safe alternative to MyISAM". However, the MariaDB developers still work on MyISAM code. The major improvement is the "Segmented Key Cache". If it is enabled, the MyISAM index cache is divided into segments.
This improves the concurrency because threads rarely need to lock the entire cache. In MariaDB, MyISAM also supports virtual columns. Drizzle does not include MyISAM.
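To make the storage options above concrete, here is a minimal sketch; the connection parameters, table name and columns are placeholders, and mysql-connector-python is assumed as the client library, so this is an illustration rather than part of the MyISAM documentation. It creates a MyISAM table with an explicit ROW_FORMAT and then runs the OPTIMIZE TABLE command mentioned above to defragment the data file after deletes and updates.

```python
# Illustrative sketch: MyISAM table with an explicit ROW_FORMAT, then OPTIMIZE TABLE.
# Connection parameters and the schema are placeholders, not values from the article.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="warehouse")
cur = conn.cursor()

# FIXED row format: every column is stored at a fixed length, so index entries
# point at constant-size records (no BLOB/TEXT columns are allowed in this format).
cur.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        id     INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        sensor CHAR(16) NOT NULL,
        value  DOUBLE   NOT NULL
    ) ENGINE=MyISAM ROW_FORMAT=FIXED
""")

# After many deletes and shortening updates the .MYD file accumulates holes;
# OPTIMIZE TABLE rewrites it and refreshes the index statistics.
cur.execute("OPTIMIZE TABLE readings")
print(cur.fetchall())

conn.close()
```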
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Outfall** Outfall: An outfall is the discharge point of a waste stream into a body of water; alternatively it may be the outlet of a river, drain or a sewer where it discharges into the sea, a lake or ocean. United States of America: In the United States, industrial facilities that discharge storm water which was exposed to industrial activities at the site are required to have a multi-sector general permit. Issuing permits for storm water is delegated to the individual states that are authorized by the Environmental Protection Agency (EPA). Facilities that apply for a permit must specify the number of outfalls at the site. According to the EPA's Multi-Sector General Permit For Stormwater Discharges Associated With Industrial Activity, outfalls are locations where the stormwater exits the facility, including pipes, ditches, swales, and other structures that transport stormwater. If there is more than one outfall present, measure at the primary outfall (i.e., the outfall with the largest volume of stormwater discharge associated with industrial activity).Outfalls from sewage plants can be up to 20 feet (6.1 m) in diameter and release 4,000 US gallons per second (55,000 m3/h) of treated human waste, only miles from the shore. United States of America: A wastewater treatment system discharges treated effluent to a water body from an outfall. An ocean outfall may be conveyed several miles offshore, to discharge by nozzles at the end of a spreader or T-shaped structure. Outfalls may also be constructed as an outfall tunnel or subsea tunnel and discharge effluent to the ocean via one or more marine risers with nozzles.
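As a quick arithmetic check of the flow figure quoted above, the conversion between US gallons per second and cubic metres per hour can be reproduced in a few lines (Python is used here purely for illustration):

```python
# Convert the quoted 4,000 US gallons per second into cubic metres per hour.
US_GALLON_M3 = 0.003785411784   # one US gallon in cubic metres (exact definition)

flow_m3_per_hour = 4000 * US_GALLON_M3 * 3600
print(round(flow_m3_per_hour))  # ~54,500, i.e. roughly the 55,000 m3/h given above
```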
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zero-curtain effect** Zero-curtain effect: The zero-curtain effect occurs in cold (particularly periglacial) environments where the phase transition of water to ice is slowed by the release of latent heat. The effect is notably found in arctic and alpine permafrost sediments, and occurs after the air temperature falls below 0 °C (the freezing point of water), when a rapid drop in soil temperature would otherwise be expected. Because of this effect, the temperature of moist, cold ground does not fall at a uniform rate. Heat lost through conduction is offset by the latent heat released as water freezes. This heat of fusion is continually released until all the subsurface water has frozen, at which point temperatures can continue to fall. Therefore, for as long as water is available to the system (for example, through cryosuction/capillary action), the sediment remains at a nearly constant temperature close to 0 °C.
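To give a feel for the magnitudes involved, the following back-of-the-envelope sketch estimates how long a freezing front can hold the ground at 0 °C; the water content and heat-loss rate are illustrative assumptions, not values from the article.

```python
# Rough estimate of zero-curtain duration: the time needed for an assumed amount
# of pore water to freeze while the ground loses heat at an assumed constant rate.
LATENT_HEAT_FUSION = 334e3  # J released per kg of water that freezes

def zero_curtain_duration_s(pore_water_kg_per_m2, heat_loss_w_per_m2):
    """Seconds the freezing front can stay at 0 degC before all pore water is frozen."""
    return pore_water_kg_per_m2 * LATENT_HEAT_FUSION / heat_loss_w_per_m2

# Example: 50 kg of pore water per square metre, losing 10 W/m^2 upward
print(zero_curtain_duration_s(50, 10) / 86400)  # about 19 days at a constant 0 degC
```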
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Late-acting self-incompatibility** Late-acting self-incompatibility: Late-acting self-incompatibility (LSI) is the occurrence of self-incompatibility (SI) in flowering plants where pollen tubes from self-pollen successfully reach the ovary, but ovules fail to develop. The mechanisms that might cause late-acting self-incompatibility have yet to be elucidated. One hypothesis is that LSI is caused by early-acting inbreeding depression, where the expression of genetic load causes self-fertilized embryos to abort. Advantages and disadvantages of LSI: The proposed advantage of LSI compared to normal SI mechanisms is that LSI would allow the maternal parent to evaluate the paternal genetic material and allow ovule development depending on the vigor of the developing embryos or the amount of resources available. On the other hand, plants with LSI may face a disadvantage from seed discounting, which results in a reduction in fecundity. When self-pollen tubes reach an ovule, that ovule is no longer available for fertilization by outcrossed pollen, meaning LSI still uses up ovules that could have been outcrossed, while other SI mechanisms do not. Evidence supporting LSI: Since LSI reactions are said to occur in the ovary and ovules, it is more difficult for researchers to determine where LSI reactions may occur and to assess possible LSI mechanisms. Conventional SI reactions are much easier to observe, because they occur in the style or on the stigma. However, research has provided some evidence for the existence of late-acting self-incompatibility. Species noted to possibly have LSI form phylogenetic groupings in a similar fashion to how conventional SI is shared in other phylogenetic groups, suggesting that LSI may be derived from a common ancestor. A study by Lipow and Wyatt reported that species with LSI produce offspring that can be split into different groupings of compatibility and incompatibility based on Mendelian inheritance, something that can also be demonstrated in plants with typical SI mechanisms. It has also been reported that in some plants lacking conventional SI mechanisms, self ovules fail to develop at all, which would be unexpected if the mechanism were due to lethal alleles. Evidence against LSI and alternative explanations: Studies have reported evidence against LSI and have proposed alternative explanations. For example, some species that are expected to have LSI display abortion at various stages of seed development, indicating that the abortion was due to selective embryo abortion caused by early-acting inbreeding depression. Another explanation is that apparent LSI is actually gametophytic self-incompatibility in which self-pollen tubes are slowed to the point where they do not achieve fertilization prior to ovule abortion.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tying (commerce)** Tying (commerce): Tying (informally, product tying) is the practice of selling one product or service as a mandatory addition to the purchase of a different product or service. In legal terms, a tying sale makes the sale of one good (the tying good) to the de facto customer (or de jure customer) conditional on the purchase of a second distinctive good (the tied good). Tying is often illegal when the products are not naturally related. It is related to but distinct from freebie marketing, a common (and legal) method of giving away (or selling at a substantial discount) one item to ensure a continual flow of sales of another related item. Tying (commerce): Some kinds of tying, especially by contract, have historically been regarded as anti-competitive practices. The basic idea is that consumers are harmed by being forced to buy an undesired good (the tied good) in order to purchase a good they actually want (the tying good), and so would prefer that the goods be sold separately. The company doing this bundling may have a significantly large market share so that it may impose the tie on consumers, despite the forces of market competition. The tie may also harm other companies in the market for the tied good, or who sell only single components. Tying (commerce): One effect of tying can be that low quality products achieve a higher market share than would otherwise be the case. Tying (commerce): Tying may also be a form of price discrimination: people who use more razor blades, for example, pay more than those who just need a one-time shave. Though this may improve overall welfare, by giving more consumers access to the market, such price discrimination can also transfer consumer surpluses to the producer. Tying may also be used with or in place of patents or copyrights to help protect entry into a market, discouraging innovation. Tying (commerce): Tying is often used when the supplier makes one product that is critical to many customers. By threatening to withhold that key product unless others are also purchased, the supplier can increase sales of less necessary products. In the United States, most states have laws against tying, which are enforced by state governments. In addition, the U.S. Department of Justice enforces federal laws against tying through its Antitrust Division. Types: Horizontal tying is the practice of requiring consumers to pay for an unrelated product or service together with the desired one. A hypothetical example would be for Bic to sell its pens only with Bic lighters. (However, a company may offer a limited free item with another purchase as a promotion.) Vertical tying is the practice of requiring customers to purchase related products or services together, from the same company. For example, a company might mandate that its automobiles could only be serviced by its own dealers. In an effort to curb this, many jurisdictions require that warranties not be voided by outside servicing; for example, see the Magnuson-Moss Warranty Act in the United States. In United States law: Certain tying arrangements are illegal in the United States under both the Sherman Antitrust Act, and Section 3 of the Clayton Act. A tying arrangement is defined as "an agreement by a party to sell one product but only on the condition that the buyer also purchases a different (or tied) product, or at least agrees he will not purchase the product from any other supplier." Tying may be the action of several companies as well as the work of just one firm. 
Success on a tying claim typically requires proof of four elements: (1) two separate products or services are involved; (2) the purchase of the tying product is conditioned on the additional purchase of the tied product; (3) the seller has sufficient market power in the market for the tying product; (4) a not insubstantial amount of interstate commerce in the tied product market is affected. For at least three decades, the Supreme Court defined the required "economic power" to include just about any departure from perfect competition, going so far as to hold that possession of a copyright or even the existence of a tie itself gave rise to a presumption of economic power. The Supreme Court has since held that a plaintiff must establish the sort of market power necessary for other antitrust violations in order to prove sufficient "economic power" necessary to establish a per se tie. More recently, the Court has eliminated any presumption of market power based solely on the fact that the tying product is patented or copyrighted. In recent years, changing business practices surrounding new technologies have put the legality of tying arrangements to the test. Although the Supreme Court still considers some tying arrangements as per se illegal, the Court actually uses a rule-of-reason analysis, requiring an analysis of foreclosure effects and an affirmative defense of efficiency justifications. In United States law: Apple products The tying of Apple products is an example of commercial tying that has caused recent controversy. When Apple initially released the iPhone on June 29, 2007, it was sold exclusively with AT&T (formerly Cingular) contracts in the United States. To enforce this exclusivity, Apple employed a type of software lock that ensured the phone would not work on any network besides AT&T's. Related to the concept of bricking, any user who tried to unlock or otherwise tamper with the locking software ran the risk of rendering their iPhone permanently inoperable. In United States law: This caused complaints among many consumers, as they were forced to pay an additional early termination fee of $175 if they wanted to unlock the device safely for use on a different carrier. Other companies such as Google complained that tying encourages a more closed-access-based wireless service. Many questioned the legality of the arrangement, and in October 2007 a class-action lawsuit was filed against Apple, claiming that its exclusive agreement with AT&T violates California antitrust law. The suit was filed by the Law Office of Damian R. Fernandez on behalf of California resident Timothy P. Smith, and ultimately sought to have an injunction issued against Apple to prevent it from selling iPhones with any kind of software lock. In July 2010, federal regulators clarified the issue when they determined it was lawful to unlock (or in other terms, "jail break") the iPhone, declaring that there was no basis for copyright law to assist Apple in protecting its restrictive business model. Jail breaking is removing operating system or hardware restrictions imposed on an iPhone (or other device). If done successfully, this allows one to run any application on the phone they choose, including applications not authorized by Apple. Apple told regulators that modifying the iPhone operating system leads to the creation of an infringing derivative work that is protected by copyright law. This means that the license on the operating system forbids software modification. 
However, regulators agreed that modifying an iPhone's firmware/operating system to enable it to run an application that Apple has not approved fits comfortably within the four corners of fair use. In United States law: Microsoft products Another prominent case involving a tying claim was United States v. Microsoft. By some accounts, Microsoft ties together Microsoft Windows, Internet Explorer, Windows Media Player, Outlook Express and Microsoft Office. The United States claimed that the bundling of Internet Explorer (IE) with sales of Windows 98, making IE difficult to remove from Windows 98 (e.g., not putting it on the "Remove Programs" list), and designing Windows 98 to work "unpleasantly" with Netscape Navigator constituted an illegal tying of Windows 98 and IE. Microsoft's counterargument was that a web browser and a mail reader are simply part of an operating system, included with other personal computer operating systems, and the integration of the products was technologically justified. Just as the definition of a car has changed to include things that used to be separate products, such as speedometers and radios, Microsoft claimed the definition of an operating system has changed to include their formerly separate products. The United States Court of Appeals for the District of Columbia Circuit rejected Microsoft's claim that Internet Explorer was simply one facet of its operating system, but the court held that the tie between Windows and Internet Explorer should be analyzed deferentially under the Rule of Reason. The U.S. government claim settled before reaching final resolution. In United States law: As to the tying of Office, parallel cases against Microsoft brought by State Attorneys General included a claim for harm in the market for office productivity applications. The Attorneys General abandoned this claim when filing an amended complaint. The claim was revived by Novell, which alleged that manufacturers of computers ("OEMs") were charged less for their Windows bulk purchases if they agreed to bundle Office with every PC sold than if they gave computer purchasers the choice whether or not to buy Office along with their machines — making their computer prices less competitive in the market. The Novell litigation has since settled. Microsoft has also tied its software to the third-party Android mobile operating system, by requiring manufacturers that license patents it claims cover the OS and smartphones to ship Microsoft Office Mobile and Skype applications on the devices. Anti-tying provision of the Bank Holding Company Act: In 1970, Congress enacted section 106 of the Bank Holding Company Act Amendments of 1970 (BHCA), the anti-tying provision, which is codified at 12 U.S.C. § 1972. The statute was designed to prevent banks, whether large or small, state or federal, from imposing anticompetitive conditions on their customers. Tying is an antitrust violation, but the Sherman and Clayton Acts did not adequately protect borrowers from being required to accept conditions to loans issued by banks, and section 106 was specifically designed to apply to and remedy such bank misconduct. Anti-tying provision of the Bank Holding Company Act: Banks are allowed to take measures to protect their loans and to safeguard the value of their investments, such as requiring security or guaranties from borrowers. 
The statute exempts so-called “traditional banking practices” from its per se illegality, and thus its purpose is not so much to limit banks' lending practices, as it is to ensure that the practices used are fair and competitive. A majority of claims brought under the BHCA are denied. Banks still have quite a bit of leeway in fashioning loan agreements, but when a bank clearly steps over the bounds of propriety, the plaintiff is compensated with treble damages. Anti-tying provision of the Bank Holding Company Act: At least four regulatory agencies including the Federal Reserve Board oversee the activities of banks, their holding companies, and other related depository institutions. While each type of depository institution has a “primary regulator”, the nation's “dual banking” system allows concurrent jurisdiction among the different regulatory agencies. With respect to the anti-tying provision, the Fed takes the preeminent role in relation to the other financial institution regulatory agencies, which reflects that it was considered the least biased (in favor of banks) of the regulatory agencies when section 106 was enacted. In European Law: Tying is the "practice of a supplier of one product, the tying product, requiring a buyer also to buy a second product, the tied product". The tying of a product can take various forms: contractual tying, where a contract binds the buyer to purchase both products together; refusal to supply until the buyer agrees to purchase both products; withdrawal or withholding of a guarantee, where the dominant seller will not provide the benefit of the guarantee until the buyer agrees to purchase that party's product; technical tying, where the products of the dominant party are physically integrated, making it impossible to buy one without the other; and bundling, where two products are sold in the same package for one price. These practices are prohibited under Article 101(1)(e) and Article 102(2)(d) and may amount to an infringement of the statute if other conditions are satisfied. However, it is noteworthy that the Court is willing to find an infringement beyond those listed in Article 102(2)(d); see Tetra Pak v Commission. Enforcement under European Law: The Guidance on Article 102 Enforcement Priorities sets out the circumstances in which it will be appropriate to take action against tying practices. First, it must be established whether the accused undertaking has a dominant position in the tying or tied product market. The next step is to determine whether the dominant undertaking tied two distinct products. This is important, as two identical products cannot be considered tied under the Article 102(2)(d) formulation, which states that products will be considered tied if they have no connection ‘by their nature or commercial usage’. This raises problems in the legal definition of what will amount to tying in scenarios such as selling cars with tires or selling a car with a radio. Hence, the Commission provides guidance on this issue by citing the judgement in Microsoft, stating that "two products are distinct if, in the absence of tying or bundling, a substantial number of customers would purchase or would have purchased the tying product without also buying the tied product from the same supplier, thereby allowing stand-alone production for both the tying and the tied product". 
The next issue is whether the customer was coerced to purchase both the tying and the tied products, as Article 102(2)(d) suggests: ‘making the conclusion of contracts subject to acceptance by the other parties of supplementary obligations’. In situations of contractual stipulation, it is clear that the test will be satisfied; for an example of a non-contractual tie, see Microsoft. Furthermore, for the practice to be deemed anti-competitive, the tie must be capable of having a foreclosure effect. Examples of tying practices found to have an anti-competitive foreclosure effect in the case law include IBM, Eurofix-Bauco v Hilti, Telemarketing v CLT, British Sugar and Microsoft. Finally, the defence available to the dominant undertaking is to show that the tying is objectively justified or enhances efficiency, and the Commission is willing to consider claims that tying may result in economic efficiencies in production or distribution that benefit consumers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Catastalsis** Catastalsis: Catastalsis is the rhythmic contraction and relaxation of the muscle of the intestines. It resembles ordinary peristalsis but is not preceded by a wave of inhibition.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Small nucleolar RNA SNORD63** Small nucleolar RNA SNORD63: In molecular biology, snoRNA U63 (also known as SNORD63) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA. Small nucleolar RNA SNORD63: snoRNA U63 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. snoRNA U63 was purified from HeLa cells by immunoprecipitation with an anti-fibrillarin antibody. It is predicted to guide the 2'-O-ribose methylation of 28S ribosomal RNA (rRNA) at residue A4531.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**QED (text editor)** QED (text editor): QED is a line-oriented computer text editor that was developed by Butler Lampson and L. Peter Deutsch for the Berkeley Timesharing System running on the SDS 940. It was implemented by L. Peter Deutsch and Dana Angluin between 1965 and 1966.QED (for "quick editor") addressed teleprinter usage, but systems "for CRT displays [were] not considered, since many of their design considerations [were] quite different." Later implementations: Ken Thompson later wrote a version for CTSS; this version was notable for introducing regular expressions. Thompson rewrote QED in BCPL for Multics. The Multics version was ported to the GE-600 system used at Bell Labs in the late 1960s under GECOS and later GCOS after Honeywell took over GE's computer business. The GECOS-GCOS port used I/O routines written by A. W. Winklehoff. Dennis Ritchie, Ken Thompson and Brian Kernighan wrote the QED manuals used at Bell Labs. Later implementations: Given that the authors were the primary developers of the Unix operating system, it is natural that QED had a strong influence on the classic UNIX text editors ed, sed and their descendants such as ex and sam, and more distantly AWK and Perl. A version of QED named FRED (Friendly Editor) was written at the University of Waterloo for Honeywell systems by Peter Fraser. A University of Toronto team consisting of Tom Duff, Rob Pike, Hugh Redelmeier, and David Tilbrook implemented a version of QED that runs on UNIX; David Tilbrook later included QED as part of his QEF tool set. Later implementations: QED was also used as a character-oriented editor on the Norwegian-made Norsk Data systems, first Nord TSS, then Sintran III. It was implemented for the Nord-1 computer in 1971 by Bo Lewendal who after working with Deutsch and Lampson at Project Genie and at the Berkeley Computer Corporation, had taken a job with Norsk Data (and who developed the Nord TSS later in 1971).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Metaphysics of presence** Metaphysics of presence: The concept of the metaphysics of presence is an important consideration in deconstruction. Deconstructive interpretation holds that the entire history of Western philosophy with its language and traditions has emphasized the desire for immediate access to meaning, and thus built a metaphysics or ontotheology based on privileging Presence over Absence. Overview: In Being and Time (1927; transl. 1962), Martin Heidegger argues that the concept of time prevalent in all Western thought has largely remained unchanged since the definition offered by Aristotle in the Physics. Heidegger says, "Aristotle's essay on time is the first detailed Interpretation of this phenomenon [time] which has come down to us. Every subsequent account of time, including Henri Bergson's, has been essentially determined by it." Aristotle defined time as "the number of movement in respect of before and after". By defining time in this way Aristotle privileges what is present-at-hand, namely the "presence" of time. Heidegger argues in response that "entities are grasped in their Being as 'presence'; this means that they are understood with regard to a definite mode of time – the 'Present'". Central to Heidegger's own philosophical project is the attempt to gain a more authentic understanding of time. Heidegger considers time to be the unity of three ecstases: the past, the present, and the future. Overview: Deconstructive thinkers, like Jacques Derrida, describe their task as the questioning or deconstruction of this metaphysical tendency in Western philosophy. Derrida writes, "Without a doubt, Aristotle thinks of time on the basis of ousia as parousia, on the basis of the now, the point, etc. And yet an entire reading could be organized that would repeat in Aristotle's text both this limitation and its opposite." This argument is largely based on the earlier work of Heidegger, who in Being and Time claimed that the theoretical attitude of pure presence is parasitical upon a more originary involvement with the world in concepts such as the ready-to-hand and being-with. Overview: The presence to which Heidegger refers is both a presence as in a "now" and also a presence as in an eternal present, as one might associate with God or the "eternal" laws of science. This hypostatized (underlying) belief in presence is undermined by novel phenomenological ideas, such that presence itself does not subsist, but comes about primordially through the action of our futural projection, our realization of finitude and the reception or rejection of the traditions of our time.In his short work Intuition of the Instant, Gaston Bachelard attempts to navigate beyond, or parallel to, the Western concept of 'time as duration' – as the imagined trajectorial space of movement. He distinguishes between two foundations of time: time viewed as a duration, and time viewed as an instant. Bachelard then follows this second phenomenon of time and concludes that time as a duration does not exist, but is created as a necessary mediation for increasingly complex beings to persist. The reality of time for existence, though, is in fact a reprisal of the instant, the gestation of all existence every instant, the eternal death that gives life.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Supersingular elliptic curve** Supersingular elliptic curve: In algebraic geometry, supersingular elliptic curves form a certain class of elliptic curves over a field of characteristic p > 0 with unusually large endomorphism rings. Elliptic curves over such fields which are not supersingular are called ordinary, and these two classes of elliptic curves behave fundamentally differently in many respects. Hasse (1936) discovered supersingular elliptic curves during his work on the Riemann hypothesis for elliptic curves by observing that elliptic curves in positive characteristic could have endomorphism rings of unusually large rank 4, and Deuring (1941) developed their basic theory. Supersingular elliptic curve: The term "supersingular" has nothing to do with singular points of curves, and all supersingular elliptic curves are non-singular. It comes from the phrase "singular values of the j-invariant" used for values of the j-invariant for which a complex elliptic curve has complex multiplication. The complex elliptic curves with complex multiplication are those for which the endomorphism ring has the maximal possible rank 2. In positive characteristic it is possible for the endomorphism ring to be even larger: it can be an order in a quaternion algebra of dimension 4, in which case the elliptic curve is supersingular. The primes p such that every supersingular elliptic curve in characteristic p can be defined over the prime subfield F_p rather than F_{p^m} are called supersingular primes. Definition: There are many different but equivalent ways of defining supersingular elliptic curves that have been used. Some of the ways of defining them are given below. Let K be a field with algebraic closure K¯ and E an elliptic curve over K. Definition: The K¯-valued points E(K¯) have the structure of an abelian group. For every n, we have a multiplication map [n]: E → E. Its kernel is denoted by E[n]. Now assume that the characteristic of K is p > 0. Then one can show that either E[p^r](K¯) = 0 for all r = 1, 2, 3, ..., or E[p^r](K¯) ≅ Z/p^rZ for r = 1, 2, 3, ... In the first case, E is called supersingular. Otherwise it is called ordinary. In other words, an elliptic curve is supersingular if and only if the group of geometric points of order p is trivial. Supersingular elliptic curves have many endomorphisms over the algebraic closure K¯, in the sense that an elliptic curve is supersingular if and only if its endomorphism algebra (over K¯) is an order in a quaternion algebra. Thus, their endomorphism algebra (over K¯) has rank 4, while the endomorphism group of every other elliptic curve has only rank 1 or 2. The endomorphism ring of a supersingular elliptic curve can have rank less than 4, and it may be necessary to take a finite extension of the base field K to make the rank of the endomorphism ring 4. In particular the endomorphism ring of an elliptic curve over a field of prime order is never of rank 4, even if the elliptic curve is supersingular. Definition: Let G be the formal group associated to E. Since K is of positive characteristic, we can define its height ht(G), which is 2 if and only if E is supersingular and is 1 otherwise. Definition: We have a Frobenius morphism F: E → E, which induces a map in cohomology F*: H^1(E, O_E) → H^1(E, O_E). The elliptic curve E is supersingular if and only if F* equals 0. We have a Verschiebung operator V: E → E, which induces a map on the global 1-forms V*: H^0(E, Ω^1_E) → H^0(E, Ω^1_E). The elliptic curve E is supersingular if and only if V* equals 0. An elliptic curve is supersingular if and only if its Hasse invariant is 0. 
Definition: An elliptic curve is supersingular if and only if the group scheme of points of order p is connected. An elliptic curve is supersingular if and only if the dual of the Frobenius map is purely inseparable. An elliptic curve is supersingular if and only if the "multiplication by p" map is purely inseparable and the j-invariant of the curve lies in a quadratic extension of the prime field of K, a finite field of order p^2. Definition: Suppose E is in Legendre form, defined by the equation y^2 = x(x − 1)(x − λ), and p is odd. Then for λ ≠ 0, E is supersingular if and only if the sum ∑_{i=0}^{n} (n choose i)^2 λ^i vanishes, where n = (p − 1)/2. Using this formula, one can show that there are only finitely many supersingular elliptic curves over K (up to isomorphism). Suppose E is given as a cubic curve in the projective plane by a homogeneous cubic polynomial f(x, y, z). Then E is supersingular if and only if the coefficient of (xyz)^(p−1) in f^(p−1) is zero. Definition: If the field K is a finite field of order q, then an elliptic curve over K is supersingular if and only if the trace of the q-power Frobenius endomorphism is congruent to zero modulo p. When q = p is a prime greater than 3, this is equivalent to the trace of Frobenius being equal to zero (by the Hasse bound); this does not hold for p = 2 or 3. Examples: If K is a field of characteristic 2, every curve defined by an equation of the form y^2 + a_3 y = x^3 + a_4 x + a_6 with a_3 nonzero is a supersingular elliptic curve, and conversely every supersingular curve is isomorphic to one of this form (see Washington 2003, p. 122). Over the field with 2 elements any supersingular elliptic curve is isomorphic to exactly one of the supersingular elliptic curves y^2 + y = x^3 + x + 1, y^2 + y = x^3 + 1, and y^2 + y = x^3 + x, with 1, 3, and 5 points respectively. This gives examples of supersingular elliptic curves over a prime field with different numbers of points. Over an algebraically closed field of characteristic 2 there is (up to isomorphism) exactly one supersingular elliptic curve, given by y^2 + y = x^3, with j-invariant 0. Its ring of endomorphisms is the ring of Hurwitz quaternions, generated by the two automorphisms x → ωx and (x, y) → (x + 1, y + x + ω), where ω, satisfying ω^2 + ω + 1 = 0, is a primitive cube root of unity. Its group of automorphisms is the group of units of the Hurwitz quaternions, which has order 24, contains a normal subgroup of order 8 isomorphic to the quaternion group, and is the binary tetrahedral group. If K is a field of characteristic 3, every curve defined by an equation of the form y^2 = x^3 + a_4 x + a_6 with a_4 nonzero is a supersingular elliptic curve, and conversely every supersingular curve is isomorphic to one of this form (see Washington 2003, p. 122). Over the field with 3 elements any supersingular elliptic curve is isomorphic to exactly one of the supersingular elliptic curves y^2 = x^3 − x, y^2 = x^3 − x + 1, y^2 = x^3 − x + 2, and y^2 = x^3 + x. Over an algebraically closed field of characteristic 3 there is (up to isomorphism) exactly one supersingular elliptic curve, given by y^2 = x^3 − x, with j-invariant 0. Its ring of endomorphisms is the ring of quaternions of the form a + bj with a and b Eisenstein integers, generated by the two automorphisms x → x + 1 and (x, y) → (−x, iy), where i is a primitive fourth root of unity. 
Its group of automorphisms is the group of units of these quaternions, which has order 12 and contains a normal subgroup of order 3 with quotient a cyclic group of order 4. For F_p with p > 3, the elliptic curve defined by y^2 = x^3 + 1 with j-invariant 0 is supersingular if and only if p ≡ 2 (mod 3), and the elliptic curve defined by y^2 = x^3 + x with j-invariant 1728 is supersingular if and only if p ≡ 3 (mod 4) (see Washington 2003, 4.35). Examples: The elliptic curve given by y^2 = x(x − 1)(x + 2) is nonsingular over F_p for p ≠ 2, 3. It is supersingular for p = 23 and ordinary for every other prime p ≤ 73 (see Hartshorne 1977, 4.23.6). Examples: The modular curve X_0(11) has j-invariant −2^12·11^−5·31^3, and is isomorphic to the curve y^2 + y = x^3 − x^2 − 10x − 20. The primes p for which it is supersingular are those for which the coefficient of q^p in η(τ)^2 η(11τ)^2 vanishes mod p, and are given by the list: 2, 19, 29, 199, 569, 809, 1289, 1439, 2539, 3319, 3559, 3919, 5519, 9419, 9539, 9929, ... (OEIS: A006962). If an elliptic curve over the rationals has complex multiplication then the set of primes for which it is supersingular has density 1/2. If it does not have complex multiplication then Serre showed that the set of primes for which it is supersingular has density zero. Elkies (1987) showed that any elliptic curve defined over the rationals is supersingular for an infinite number of primes. Classification: For each positive characteristic there are only a finite number of possible j-invariants of supersingular elliptic curves. Classification: Over an algebraically closed field K an elliptic curve is determined by its j-invariant, so there are only a finite number of supersingular elliptic curves. If each such curve is weighted by 1/|Aut(E)| then the total weight of the supersingular curves is (p − 1)/24. Elliptic curves have automorphism groups of order 2 unless their j-invariant is 0 or 1728, so the supersingular elliptic curves are classified as follows. Classification: There are exactly ⌊p/12⌋ supersingular elliptic curves with automorphism groups of order 2. In addition, if p ≡ 3 mod 4 there is a supersingular elliptic curve (with j-invariant 1728) whose automorphism group is cyclic of order 4, unless p = 3 in which case it has order 12, and if p ≡ 2 mod 3 there is a supersingular elliptic curve (with j-invariant 0) whose automorphism group is cyclic of order 6, unless p = 2 in which case it has order 24. Classification: Birch & Kuyk (1975) give a table of all j-invariants of supersingular curves for primes up to 307. For the first few primes the supersingular elliptic curves are given as follows. The number of supersingular values of j other than 0 or 1728 is the integer part of (p − 1)/12.
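The finite-field criteria above are easy to experiment with numerically. The sketch below is illustrative code, not part of the article: for an odd prime p > 3 it counts points with the Legendre symbol to test whether the trace of Frobenius vanishes, and separately evaluates the Legendre-form sum ∑ (n choose i)^2 λ^i with n = (p − 1)/2, so the two criteria can be cross-checked on small primes.

```python
# Two of the supersingularity criteria described above, for odd primes p > 3.
from math import comb

def legendre_symbol(a, p):
    """Legendre symbol (a/p) via Euler's criterion, for an odd prime p."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def trace_of_frobenius(f, p):
    """t = p + 1 - #E(F_p) for the curve y^2 = f(x), counting the point at infinity."""
    points = 1 + sum(1 + legendre_symbol(f(x) % p, p) for x in range(p))
    return p + 1 - points

def supersingular_by_trace(f, p):
    """For p > 3, E is supersingular iff the trace of Frobenius is 0 (mod p)."""
    return trace_of_frobenius(f, p) % p == 0

def supersingular_by_legendre_sum(lam, p):
    """Criterion for the Legendre curve y^2 = x(x-1)(x-lam): the sum must vanish mod p."""
    n = (p - 1) // 2
    return sum(comb(n, i) ** 2 * pow(lam, i, p) for i in range(n + 1)) % p == 0

# y^2 = x^3 + 1 should be supersingular exactly when p is congruent to 2 mod 3
for p in (5, 7, 11, 13, 23):
    print(p, p % 3, supersingular_by_trace(lambda x: x**3 + 1, p))

# Cross-check the two criteria on the Legendre curve with lambda = 2
for p in (5, 7, 11, 13, 23):
    f = lambda x: x * (x - 1) * (x - 2)
    print(p, supersingular_by_trace(f, p), supersingular_by_legendre_sum(2, p))
```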
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Realgar/Indigo naturalis** Realgar/Indigo naturalis: Realgar/Indigo naturalis (RIF), also known as Compound Huangdai (复方黄黛), is a medication used to treat acute promyelocytic leukemia. Effectiveness appears similar to that of arsenic trioxide. It is generally used together with all-trans-retinoic acid (ATRA). It is taken by mouth. Side effects may include abdominal pain and rash. It is made up of a combination of realgar (tetra-arsenic tetra-sulfide), Indigo naturalis, root of Salvia miltiorrhiza, and root of Pseudostellaria heterophylla. It works by breaking down the cancer protein retinoic acid receptor alpha. The main active ingredients according to NCI are tetraarsenic tetrasulfide (realgar), indirubin (from the indigo) and tanshinone IIA (from the Salvia). Realgar-Indigo naturalis was developed in the 1980s and approved for medical use in China in 2009. It is on the World Health Organization's List of Essential Medicines. It is made in China and was originally a herbal remedy. It is not approved in either the United States or Europe as of 2019. A year of treatment costs 60,000 Chinese yuan, as of 2017. Composition: WHO data indicates that the medication is provided in units of 270 mg of the mixture, 30 mg of which is tetraarsenic tetrasulfide (As4S4). The 2004 Chinese patent for this medication indicates that it contains 12-18% realgar (90-95% As4S4), 25-42% Indigo naturalis, 36-46% Salvia miltiorrhiza root (separately water-extracted), and 12-18% Pseudostellaria heterophylla root. Pharmacology: The basic action of this medication is similar to arsenic trioxide in that the arsenic component triggers degradation of the PML-RARα oncoprotein, by shifting the protein out of the nucleoplasm onto the nuclear matrix. Arsenic also encourages ubiquitination of the oncoprotein, together making it a target for the proteasome. Wang et al. (2008) suggest that the addition of indirubin and tanshinone IIA enhances the action of arsenic by synergy; these components cannot move the protein on their own. The pharmacokinetics are similar to those of IV arsenic trioxide. Society and culture: History A combination of indigo with realgar, named Qīnghuáng sǎn, is known in old traditional Chinese medicine (TCM) literature. In the 1960s, Zhōu Ǎixiáng of China Academy of Chinese Medical Sciences Xiyuan Hospital tried the combination on leukemia with some results. In the 1980s, Huáng Shìlín of Shenyang Theatre Military Hospital came up with the current formulation, again justifying it only using TCM theory. It is unclear whether Huáng was aware of Zhang Tingdong's work on arsenic trioxide in the 1970s. Society and culture: In 1995, the medication was approved within the People's Liberation Army health system. It was allowed to start formal clinical trials in 2002, patented in 2004, then approved for the general Chinese market in 2009. It entered the Chinese treatment guide for APL in 2014 as an alternative to injected arsenic. The patent was initially held by Tiankang Pharmaceuticals, an unprofitable spin-off of an electrics company. In 2015, Yifan Biotech acquired all of Tiankang Pharmaceuticals and started heavier promotion of the drug.
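As an illustrative arithmetic check (using only the figures quoted above, not any additional data), the As4S4 fraction implied by the WHO dose unit can be compared with the range implied by the patent percentages:

```python
# Illustrative consistency check of the composition figures quoted above.
unit_mg, as4s4_mg = 270, 30
fraction_from_who = as4s4_mg / unit_mg            # about 0.111

# Patent: 12-18% realgar, of which 90-95% is As4S4
low = 0.12 * 0.90                                 # about 0.108
high = 0.18 * 0.95                                # about 0.171

print(f"WHO unit: {fraction_from_who:.1%} As4S4")
print(f"Patent range: {low:.1%} - {high:.1%} As4S4")
print("consistent" if low <= fraction_from_who <= high else "inconsistent")
```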
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Heme** Heme: Heme (American English), or haem (Commonwealth English, both pronounced /hi:m/ HEEM), is a precursor to hemoglobin, which is necessary to bind oxygen in the bloodstream. Heme is biosynthesized in both the bone marrow and the liver.In biochemical terms, heme is a coordination complex "consisting of an iron ion coordinated to a porphyrin acting as a tetradentate ligand, and to one or two axial ligands." The definition is loose, and many depictions omit the axial ligands. Among the metalloporphyrins deployed by metalloproteins as prosthetic groups, heme is one of the most widely used and defines a family of proteins known as hemoproteins. Hemes are most commonly recognized as components of hemoglobin, the red pigment in blood, but are also found in a number of other biologically important hemoproteins such as myoglobin, cytochromes, catalases, heme peroxidase, and endothelial nitric oxide synthase.The word haem is derived from Greek αἷμα haima meaning "blood". Function: Hemoproteins have diverse biological functions including the transportation of diatomic gases, chemical catalysis, diatomic gas detection, and electron transfer. The heme iron serves as a source or sink of electrons during electron transfer or redox chemistry. In peroxidase reactions, the porphyrin molecule also serves as an electron source, being able to delocalize radical electrons in the conjugated ring. In the transportation or detection of diatomic gases, the gas binds to the heme iron. During the detection of diatomic gases, the binding of the gas ligand to the heme iron induces conformational changes in the surrounding protein. In general, diatomic gases only bind to the reduced heme, as ferrous Fe(II) while most peroxidases cycle between Fe(III) and Fe(IV) and hemeproteins involved in mitochondrial redox, oxidation-reduction, cycle between Fe(II) and Fe(III). Function: It has been speculated that the original evolutionary function of hemoproteins was electron transfer in primitive sulfur-based photosynthesis pathways in ancestral cyanobacteria-like organisms before the appearance of molecular oxygen.Hemoproteins achieve their remarkable functional diversity by modifying the environment of the heme macrocycle within the protein matrix. For example, the ability of hemoglobin to effectively deliver oxygen to tissues is due to specific amino acid residues located near the heme molecule. Hemoglobin reversibly binds to oxygen in the lungs when the pH is high, and the carbon dioxide concentration is low. When the situation is reversed (low pH and high carbon dioxide concentrations), hemoglobin will release oxygen into the tissues. This phenomenon, which states that hemoglobin's oxygen binding affinity is inversely proportional to both acidity and concentration of carbon dioxide, is known as the Bohr effect. The molecular mechanism behind this effect is the steric organization of the globin chain; a histidine residue, located adjacent to the heme group, becomes positively charged under acidic conditions (which are caused by dissolved CO2 in working muscles, etc.), releasing oxygen from the heme group. Types: Major hemes There are several biologically important kinds of heme: The most common type is heme B; other important types include heme A and heme C. Isolated hemes are commonly designated by capital letters while hemes bound to proteins are designated by lower case letters. Cytochrome a refers to the heme A in specific combination with membrane protein forming a portion of cytochrome c oxidase. 
Types: Other hemes The following carbon numbering system of porphyrins is an older numbering used by biochemists and not the 1–24 numbering system recommended by IUPAC, which is shown in the table above. Heme l is the derivative of heme B which is covalently attached to the protein of lactoperoxidase, eosinophil peroxidase, and thyroid peroxidase. The addition of peroxide to the glutamyl-375 and aspartyl-225 residues of lactoperoxidase forms ester bonds between these amino acid residues and the heme 1- and 5-methyl groups, respectively. Similar ester bonds with these two methyl groups are thought to form in eosinophil and thyroid peroxidases. Heme l is one important characteristic of animal peroxidases; plant peroxidases incorporate heme B. Lactoperoxidase and eosinophil peroxidase are protective enzymes responsible for the destruction of invading bacteria and viruses. Thyroid peroxidase is the enzyme catalyzing the biosynthesis of the important thyroid hormones. Because lactoperoxidase destroys invading organisms in the lungs and excrement, it is thought to be an important protective enzyme. Types: Heme m is the derivative of heme B covalently bound at the active site of myeloperoxidase. Heme m contains the two ester bonds at the heme 1- and 5-methyl groups also present in heme l of other mammalian peroxidases, such as lactoperoxidase and eosinophil peroxidase. In addition, a unique sulfonium ion linkage between the sulfur of a methionyl amino-acid residue and the heme 2-vinyl group is formed, giving this enzyme the unique capability of easily oxidizing chloride and bromide ions to hypochlorite and hypobromite. Myeloperoxidase is present in mammalian neutrophils and is responsible for the destruction of invading bacteria and viral agents. It perhaps synthesizes hypobromite by "mistake". Both hypochlorite and hypobromite are very reactive species responsible for the production of halogenated nucleosides, which are mutagenic compounds. Types: Heme D is another derivative of heme B, but in which the propionic acid side chain at the carbon of position 6, which is also hydroxylated, forms a γ-spirolactone. Ring III is also hydroxylated at position 5, in a conformation trans to the new lactone group. Heme D is the site for oxygen reduction to water of many types of bacteria at low oxygen tension. Types: Heme S is related to heme B by having a formyl group at position 2 in place of the 2-vinyl group. Heme S is found in the hemoglobin of a few species of marine worms. The correct structures of heme B and heme S were first elucidated by German chemist Hans Fischer. The names of cytochromes typically (but not always) reflect the kinds of hemes they contain: cytochrome a contains heme A, cytochrome c contains heme C, etc. This convention may have been first introduced with the publication of the structure of heme A. Types: Use of capital letters to designate the type of heme The practice of designating hemes with upper case letters was formalized in a footnote in a paper by Puustinen and Wikstrom which explains under which conditions a capital letter should be used: "we prefer the use of capital letters to describe the heme structure as isolated. Lowercase letters may then be freely used for cytochromes and enzymes, as well as to describe individual protein-bound heme groups (for example, cytochrome bc, and aa3 complexes, cytochrome b5, heme c1 of the bc1 complex, heme a3 of the aa3 complex, etc)." 
In other words, the chemical compound would be designated with a capital letter, but specific instances in structures with lowercase. Thus cytochrome oxidase, which has two A hemes (heme a and heme a3) in its structure, contains two moles of heme A per mole of protein. Cytochrome bc1, with hemes bH, bL, and c1, contains heme B and heme C in a 2:1 ratio. The practice seems to have originated in a paper by Caughey and York in which the product of a new isolation procedure for the heme of cytochrome aa3 was designated heme A to differentiate it from previous preparations: "Our product is not identical in all respects with the heme a obtained in solution by other workers by the reduction of the hemin a as isolated previously (2). For this reason, we shall designate our product heme A until the apparent differences can be rationalized.". In a later paper, Caughey's group uses capital letters for isolated heme B and C as well as A. Synthesis: The enzymatic process that produces heme is properly called porphyrin synthesis, as all the intermediates are tetrapyrroles that are chemically classified as porphyrins. The process is highly conserved across biology. In humans, this pathway serves almost exclusively to form heme. In bacteria, it also produces more complex substances such as cofactor F430 and cobalamin (vitamin B12). The pathway is initiated by the synthesis of δ-aminolevulinic acid (dALA or δALA) from the amino acid glycine and succinyl-CoA from the citric acid cycle (Krebs cycle). The rate-limiting enzyme responsible for this reaction, ALA synthase, is negatively regulated by glucose and heme concentration. Heme or hemin inhibits ALA synthase by decreasing the stability of its mRNA and by decreasing the uptake of the mRNA into the mitochondria. This mechanism is of therapeutic importance: infusion of heme arginate or hematin and glucose can abort attacks of acute intermittent porphyria in patients with an inborn error of metabolism of this process, by reducing transcription of ALA synthase. The organs mainly involved in heme synthesis are the liver (in which the rate of synthesis is highly variable, depending on the systemic heme pool) and the bone marrow (in which the rate of heme synthesis is relatively constant and depends on the production of globin chains), although every cell requires heme to function properly. However, due to its toxic properties, proteins such as hemopexin (Hx) are required to help maintain physiological stores of iron in order for them to be used in synthesis. Heme is seen as an intermediate molecule in the catabolism of hemoglobin in the process of bilirubin metabolism. Defects in various enzymes of heme synthesis can lead to a group of disorders called porphyrias; these include acute intermittent porphyria, congenital erythropoietic porphyria, porphyria cutanea tarda, hereditary coproporphyria, variegate porphyria, and erythropoietic protoporphyria. Synthesis for food: Impossible Foods, producers of plant-based meat substitutes, use an accelerated heme synthesis process involving soybean root leghemoglobin and yeast, adding the resulting heme to items such as meatless (vegan) Impossible burger patties. The DNA for leghemoglobin production was extracted from the soybean root nodules and expressed in yeast cells to overproduce heme for use in the meatless burgers. This process is claimed to create a meaty flavor in the resulting products. Degradation: Degradation begins inside macrophages of the spleen, which remove old and damaged erythrocytes from the circulation. 
Degradation: In the first step, heme is converted to biliverdin by the enzyme heme oxygenase (HO). NADPH is used as the reducing agent, molecular oxygen enters the reaction, carbon monoxide (CO) is produced and the iron is released from the molecule as the ferrous ion (Fe2+). CO acts as a cellular messenger and functions in vasodilation. In addition, heme degradation appears to be an evolutionarily-conserved response to oxidative stress. Briefly, when cells are exposed to free radicals, there is a rapid induction of the expression of the stress-responsive heme oxygenase-1 (HMOX1) isoenzyme that catabolizes heme (see below). The reason why cells must exponentially increase their capability to degrade heme in response to oxidative stress remains unclear, but this appears to be part of a cytoprotective response that avoids the deleterious effects of free heme. When large amounts of free heme accumulate, the heme detoxification/degradation systems get overwhelmed, enabling heme to exert its damaging effects. Degradation: In the second reaction, biliverdin is converted to bilirubin by biliverdin reductase (BVR). Bilirubin is transported into the liver by facilitated diffusion bound to a protein (serum albumin), where it is conjugated with glucuronic acid to become more water-soluble. The reaction is catalyzed by the enzyme UDP-glucuronosyltransferase. Degradation: This form of bilirubin is excreted from the liver in bile. Excretion of bilirubin from liver to biliary canaliculi is an active, energy-dependent and rate-limiting process. The intestinal bacteria deconjugate bilirubin diglucuronide and convert bilirubin to urobilinogens. Some urobilinogen is absorbed by intestinal cells and transported into the kidneys and excreted with urine (urobilin, which is the product of oxidation of urobilinogen, and is responsible for the yellow colour of urine). The remainder travels down the digestive tract and is converted to stercobilinogen. This is oxidized to stercobilin, which is excreted and is responsible for the brown color of feces. In health and disease: Under homeostasis, the reactivity of heme is controlled by its insertion into the “heme pockets” of hemoproteins. Under oxidative stress, however, some hemoproteins, e.g. hemoglobin, can release their heme prosthetic groups. The non-protein-bound (free) heme produced in this manner becomes highly cytotoxic, most probably due to the iron atom contained within its protoporphyrin IX ring, which can act as a Fenton's reagent to catalyze in an unfettered manner the production of free radicals. It catalyzes the oxidation and aggregation of proteins, the formation of cytotoxic lipid peroxides via lipid peroxidation, and damage to DNA through oxidative stress. Due to its lipophilic properties, it impairs lipid bilayers in organelles such as mitochondria and nuclei. These properties of free heme can sensitize a variety of cell types to undergo programmed cell death in response to pro-inflammatory agonists, a deleterious effect that plays an important role in the pathogenesis of certain inflammatory diseases such as malaria and sepsis. In health and disease: Cancer There is an association between high intake of heme iron sourced from meat and increased risk of colon cancer. 
The heme content of red meat is 10 times higher than that of white meat such as chicken. The American Institute for Cancer Research (AICR) and World Cancer Research Fund International (WCRF) concluded in a 2018 report that there is limited but suggestive evidence that foods containing heme iron increase risk of colorectal cancer. A 2019 review found that heme iron intake is associated with increased breast cancer risk. Genes: The following genes are part of the chemical pathway for making heme: ALAD: aminolevulinic acid, δ-, dehydratase (deficiency causes ala-dehydratase deficiency porphyria) ALAS1: aminolevulinate, δ-, synthase 1 ALAS2: aminolevulinate, δ-, synthase 2 (deficiency causes sideroblastic/hypochromic anemia) CPOX: coproporphyrinogen oxidase (deficiency causes hereditary coproporphyria) FECH: ferrochelatase (deficiency causes erythropoietic protoporphyria) HMBS: hydroxymethylbilane synthase (deficiency causes acute intermittent porphyria) PPOX: protoporphyrinogen oxidase (deficiency causes variegate porphyria) UROD: uroporphyrinogen decarboxylase (deficiency causes porphyria cutanea tarda) UROS: uroporphyrinogen III synthase (deficiency causes congenital erythropoietic porphyria)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zero Range Combat** Zero Range Combat: Zero Range Combat (Japanese: ゼロレンジコンバット, Zerorenjikonbatto, also referred to as 零距離戦闘術, Rei kyori sentō-jutsu, which translates to Zero Range Combat) is a Japanese martial art inspired by military combatives. The founder is Yoshitaka Inagawa, who is publicly referred to as a "sentō-sha" (戦闘者, eng. battler or combatant) and as a "master instructor" (マスターインストラクター, masutāinsutorakutā) by his martial arts peers. The term "sentō-sha" differs from "martial artist" or "fighter" in that it denotes a person concerned specifically with military "battle", closer in meaning to "military artsman" (兵法者, Heihōsha). History: ZRC gained prominence in Japan when it was used in High&Low The Red Rain and Re:Born. Curriculum: ZRC training covers bare hands, knives, swords, batons, flashlights and handguns, and the use of rifles is also included in its curriculum. The curriculum also includes techniques intended for evading incoming bullets. Techniques: ZRC was inspired by Inagawa's training in Kobudō, Muay Boran, Sambo, Systema, Eskrima, and Jieitaikakutōjutsu. Use: Inagawa has provided self-defence guidance to members of the JGSDF Central Readiness Regiment.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sleep sex** Sleep sex: Sexsomnia, also known as sleep sex, is a distinct form of parasomnia, or an abnormal activity that occurs while an individual is asleep. Sexsomnia is characterized by an individual engaging in sexual acts while in non-rapid eye movement (NREM) sleep. Sexual behaviors that result from sexsomnia are not to be mistaken for normal nocturnal sexual behaviors, which do not occur during NREM sleep. Sexual behaviors that are viewed as normal during sleep and are accompanied by extensive research and documentation include nocturnal emissions, nocturnal erections, and sleep orgasms. Sleep sex: Sexsomnia can present in an individual with other pre-existing sleep-related disorders. Sleep sex: Sexsomnia is most often diagnosed in males beginning in adolescence. Although they may appear to be fully awake, individuals who have sexsomnia often have no recollection of the sexual behaviors they exhibit while asleep. As a result, the individual that they share the bed with notices and reports the sexual behavior. In some cases, a medical diagnosis of sexsomnia has been used as a criminal defense in court for alleged sexual assault and rape cases. Classification: DSM-5 criteria Under DSM-5 criteria, there are 11 diagnostic groups that comprise sleep-wake disorders. These include insomnia disorders, hypersomnolence disorders, narcolepsy, obstructive sleep apnea hypopnea, central sleep apnea, sleep-related hypoventilation, circadian rhythm sleep-wake disorders, non–rapid eye movement (NREM) sleep arousal disorders, nightmare disorders, rapid eye movement (REM) sleep behavior disorders, restless legs syndrome (RLS), and substance-medication-induced sleep disorders. Sexsomnia is classified under NREM arousal parasomnia. Classification: NREM arousal parasomnia Parasomnia disorders are classified into the following categories: arousal disorders, sleep-wake transition disorders, and parasomnias associated with REM sleep. Symptoms: Symptoms of sexsomnia include, but are not limited to: masturbation, fondling, intercourse with climax, sexual assault or rape, moaning, and talking dirty while asleep. Masturbation during sleep was first reported as a clinical disorder in 1986. The case involved a 34-year-old male who was reported to masturbate each night until climax, even after reporting to have had sexual intercourse with his wife each night before falling asleep. Through the use of video-polysomnography (vPSG), a documented case of sexsomnia was able to provide further information into the nature of this unusual form of parasomnia. A confusing characteristic for those witnessing an individual in an episode of sexsomnia is the appearance of their eyes being open. Though the eyes are described as being "vacant" and "glassy", they give the appearance of the individual being awake and conscious, although the individual is completely unconscious and unaware of their actions. Causes: Symptoms of sexsomnia can be caused by or be associated with: stress factors, sleep deprivation, consumption of alcohol or other drugs, and pre-existing parasomnia behaviors. Sleep deprivation is known to have negative effects on the brain and behavior. Extended periods of sleep deprivation often result in the malfunctioning of neurons, directly affecting an individual's behavior. While muscles are able to regenerate even in the absence of sleep, neurons are incapable of this ability. 
Specific stages of sleep are responsible for the regeneration of neurons, while others are responsible for the generation of new synaptic connections, the formation of new memories, and so on. Sexsomnia can also be triggered by physical contact initiated by a partner or another individual sharing the same bed. Causes: Risk factors Sexsomnia affects individuals of all age groups and backgrounds, but the risk is increased for individuals who experience the following: coexisting sleep disorders, sleep disruption secondary to obstructive sleep apnea, sleep-related epilepsy, and certain medications. Behaviors such as pelvic thrusting, sexual arousal, and orgasm are often attributed to sleep-related epilepsy disorder. In some cases, physical contact with a partner in bed has been seen to trigger sexsomnia behaviors. Certain medications, including the sedative-hypnotic zolpidem (commonly known by the brand name Ambien), which is frequently used to treat insomnia, have been seen to increase the risk of sexsomnia as an adverse effect. Causes: Like sleep-related eating disorders, sexsomnia presents more commonly in adults than in children. However, these adult individuals usually have a history of parasomnia that began in childhood. Effects: It is possible for an individual who has sexsomnia to experience a variety of negative emotions due to the nature of their disorder. Commonly seen secondary effects of sexsomnia include anger, confusion, denial, frustration, guilt, revulsion, and shame. The effects of sexsomnia also extend to those in a relationship with the patient. Whether the significant other is directly involved, in the case of sexual intercourse, or a bystander, in the case of masturbation behavior, they are often the first to recognize the abnormal behavior. These abnormal sexual behaviors may be unwanted by the partner, which could lead to the incident being defined as sexual assault. Mechanism: NREM sleep Non-rapid eye movement sleep, or NREM, consists of three stages. Stage 1 is described as "drowsy sleep" or "somnolence" and is characterized by breathing becoming increasingly regular, the beginning of a decrease in muscle activity, and a decrease in heart rate. Stage 1 typically lasts around 10 minutes and accounts for approximately 5% of an individual's total sleep. Stage 2 is characterized by a further decline in muscle activity accompanied by a fading awareness of surroundings. Brain waves during Stage 2 are in the theta range. Stage 2 accounts for approximately 45-50% of an individual's total sleep. Stage 3 is the final stage of NREM sleep and the stage in which parasomnias most commonly occur. Also known as slow-wave sleep (SWS), Stage 3 is characterized by brain temperature, respiratory rate, heart rate, and blood pressure being at their lowest. Stage 3 represents approximately 15-20% of an individual's total sleep, and brain waves during this stage are in the delta range. When an individual awakens during this stage, they are likely to exhibit grogginess and may require up to thirty minutes to regain normal function and consciousness. Diagnosis: Though a definitive diagnosis of sexsomnia is not possible, a series of factors are considered to determine the presence of the condition. Clinical tests may also be utilized for further study.
Diagnosis: Determining factors Determining factors include, but are not limited to: a family history of somnambulism (sleepwalking), prior episodes of somnambulism, disorientation when awoken, observed confusional or autonomic behavior, amnesia of the episode, trigger factors the individual possesses, a lack of regard for concealing the episode, and the nature of the event compared to the individual's baseline character. Clinical tests Electroencephalography Electroencephalograms, or EEGs, are tests used to record electrical activity and waves produced by the brain. This test can detect abnormalities that are associated with disorders affecting brain activity. Episodes of sexsomnia occur most commonly during slow-wave sleep, or SWS. During this stage of sleep, brain waves tend to slow down and become larger. Through the use of electroencephalography, health professionals are able to determine whether the sexual behaviors are occurring during non-REM sleep or whether the individual is fully conscious. Diagnosis: Polysomnography Polysomnography is a study conducted while the individual being observed is asleep. A polysomnograph (PSG) is a recording of an individual's body functions as they sleep. Specialized electrodes and monitors are connected to the individual and remain in place throughout the study. Video cameras can be used to record physical behaviors that occur while the subject is asleep. Typically, the unwanted sexual behaviors do not appear on film, and the majority of information is taken from the sleep study itself. Diagnosis: A PSG cannot determine a diagnosis every time it is performed, but it can assist in determining which diagnoses should be considered or excluded. While PSG is a useful diagnostic tool, it cannot replace forensic examination. A PSG study may identify sexsomnia, but it cannot determine whether the condition was responsible for an individual's actions or present at the time of an alleged crime. Likewise, the study may not identify sexsomnia, but that does not mean that the patient has never experienced it, so it is essential to collect information from as many sources as possible. This could include interviews with friends, family, and significant others, as well as medical records concerning the individual's previous sleep patterns. Polysomnography is also used in the diagnosis of other sleep disorders such as obstructive sleep apnea, narcolepsy, and restless legs syndrome. Diagnosis: Body functions measured by a PSG include inspiratory and expiratory air flow, blood oxygen saturation, respiratory effort, respiratory rate, eye movements, brain waves, electrical activity in muscles, and body position. Prevention: Since there is no FDA-approved medication on the market specifically designed for the treatment of sexsomnia, health professionals attempt to treat the disorder through a variety of approaches. The first line of prevention involves creating and maintaining a safe environment for all who are affected as a result of the disorder. Precautionary measures include, but are not limited to, the individual in question sleeping in a separate bedroom and the installation of locks and alarms on doors. Treatment: Treatment for sexsomnia involves one or more of the following: prescription medications, CPAP, and lifestyle changes. Medications Clonazepam has been prescribed as a treatment for sexsomnia. This medication is classified as a benzodiazepine and works by acting on the GABA-A receptors present in the central nervous system (CNS).
Benzodiazepines open chloride channels to allow chloride to enter the neuron. The most common uses of this medication are for the treatment of anxiety, seizures, panic disorders, and sleep disorders. Anticonvulsant therapy is used to treat sexual behaviors that occur secondary to sleep-related epilepsy. Treatment: CPAP Continuous positive airway pressure is commonly used as a treatment for sleep apnea. In cases where the individual has both sleep apnea and sexual behaviors consistent with sexsomnia, the use of continuous positive airway pressure has resulted in complete discontinuation of the unwanted behaviors. Lifestyle changes Positive lifestyle changes are encouraged for individuals with sexsomnia. Reducing stress and anxiety triggers may reduce the likelihood of an exacerbation of the disorder. Open discussion and understanding between couples decreases negative emotional feelings and stress and generates a support system. Research: Research findings on sexsomnia first appeared in a 1996 publication by Colin Shapiro and Nik Trajanovic of the University of Toronto. In the most recent study of sexsomnia, 832 individuals were surveyed at a sleep disorder center. Among these individuals, 8% reported sexual behaviors consistent with sexsomnia, with men reporting them three times more frequently than women. Society and culture: Sexsomnia has begun to gain attention through its exposure on television, news platforms, and social media outlets. Media exposure Articles regarding sexsomnia continue to circulate on Glamour.com, the Huffington Post, and Refinery29, among many others. Increased exposure has resulted in a conversation between those who have the disorder and those directly affected by it. Sexsomnia has also been featured in popular television series including House, M.D., Law and Order: Special Victims Unit, and Desperate Housewives. Legal cases Sex offender controversies The number of alleged sex offenders claiming sexsomnia as the cause of their offenses is growing rapidly. The Australasian Sleep Association has urged qualified physicians to contribute expert testimony in such cases to ensure that the individual's claims are valid and not merely an attempt to escape sexual offense charges. Society and culture: Smith v. State Smith v. State of Georgia officially established a separate affirmative defense for the unconscious. According to the defense, "A person who commits an act during unconsciousness or sleep has not committed a voluntary act and is not criminally responsible for the act." For the assault to be considered a crime by the State of Georgia, the accused must have voluntarily committed the act and exhibited intent to carry it out. Society and culture: Swedish man acquitted of rape with sexsomnia defense Mikael Halvarsson was acquitted of rape in Sweden due to the sexsomnia defense. Charges were brought against Halvarsson after reports of sexual assault were filed by his girlfriend at the time. Upon investigation, Halvarsson was found still asleep in the alleged victim's bed when police arrived. During the appeal, a previous girlfriend of Halvarsson testified to similar behavior she had observed in the past, and his mother reported unusual sleep behaviors that had begun at a young age.
Society and culture: Rape trial dropped due to victim's supposed sexsomnia In 2022, a case came to light in England in which a 2017 rape allegation had been dropped in 2020 by the Crown Prosecution Service (CPS) on the basis of expert opinion that the woman involved had sexsomnia, meaning the male defendant may have believed that she was consenting; he was thus formally acquitted. The woman later appealed the decision. A chief crown prosecutor from outside the department that had closed the case reviewed the evidence again and concluded that the case should have gone to court, that the expert opinions on sexsomnia should have been challenged in court, and that the decision to close the case was a mistake. The reviewing chief prosecutor apologised unreservedly to the woman when concluding the review. Despite the review, the case could not be reopened because it had been formally closed and the defendant had been declared not guilty.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mnemonic link system** Mnemonic link system: A mnemonic link system, sometimes also known as a chain method, is a method of remembering lists that is based on creating an association between the elements of that list. For example, when memorizing the list (dog, envelope, thirteen, yarn, window), one could create a story about a "dog stuck in an envelope, mailed to an unlucky thirteen black cat playing with yarn by the window". It is argued that the story would be easier to remember than the list itself. Mnemonic link system: Another method is to link adjacent elements of the list by forming, for each neighboring pair, a mental image that includes both of them. This forms an open doubly linked list which can be traversed at will, backwards or forwards. For example, for the same list one could imagine a dog inside a giant envelope, then a black cat eating an envelope. The same logic is used for the rest of the items. The observation that absurd images are easier to remember is known as the Von Restorff effect, although the usefulness of this effect was disputed by several studies (Hock et al. 1978; Einstein 1987), which found that the established connection between the two words matters more than the image's absurdity. Mnemonic link system: To access a particular element of the list, one recites the list step by step, much as a linked list is traversed, until the desired element is reached (a minimal code sketch of this analogy follows below). Mnemonic link system: There are three limitations to the link system. The first is that no numerical order is imposed when memorizing, so the practitioner cannot immediately determine the numerical position of an item; this can be solved by bundling numerical markers at set points in the chain or by using the peg system instead. The second is that if any item is forgotten, the entire list may be in jeopardy. The third is the potential for confusing repeated segments of the list, a common problem when memorizing binary digits. This limitation can be resolved either through bundling or by using the peg system or the method of loci.
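The linked-list analogy above can be made concrete in code. The following is a minimal, illustrative sketch (not part of the original description) that models the memorized chain as a doubly linked list using C++'s std::list; the item names are simply taken from the example list above, and the point is only that the same pairwise associations support both forward and backward traversal.

```cpp
// Sketch: the mnemonic chain modeled as a doubly linked list.
// Each adjacent pair of items corresponds to one linking mental image.
#include <iostream>
#include <list>
#include <string>

int main() {
    std::list<std::string> chain = {"dog", "envelope", "thirteen", "yarn", "window"};

    // Forward traversal: recall each item from the image tying it to its predecessor.
    for (const auto& item : chain) std::cout << item << ' ';
    std::cout << '\n';

    // Backward traversal: the same pairwise images can be read in reverse order.
    for (auto it = chain.rbegin(); it != chain.rend(); ++it) std::cout << *it << ' ';
    std::cout << '\n';
    return 0;
}
```

As with the mnemonic itself, reaching the nth item requires stepping through the preceding links rather than jumping to it directly, which is the access limitation described above.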
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quadro** Quadro: Quadro was Nvidia's brand for graphics cards intended for use in workstations running professional computer-aided design (CAD), computer-generated imagery (CGI), digital content creation (DCC) applications, scientific calculations and machine learning. Differences between the professional Quadro and mainstream GeForce lines include the use of ECC memory and enhanced floating point precision. These are desirable properties when the cards are used for calculations which, in contrast to graphics rendering, require reliability and precision. Quadro: The Nvidia Quadro product line directly competed with AMD's Radeon Pro (formerly FirePro/FireGL) line of professional workstation cards. Nvidia has moved away from the Quadro branding for new products, starting with the launch of the Ampere architecture-based RTX A6000 on October 5, 2020. To mark the upgrade to the Nvidia Ampere architecture, Nvidia RTX is the product line being produced and developed going forward for use in professional workstations. History: The Quadro line of GPU cards emerged from an effort at market segmentation by Nvidia. In introducing Quadro, Nvidia was able to charge a premium for essentially the same graphics hardware in professional markets, and to direct resources to properly serve the needs of those markets. To differentiate its offerings, Nvidia used driver software and firmware to selectively enable features vital to segments of the workstation market, such as high-performance anti-aliased lines and two-sided lighting, in the Quadro product. The Quadro line also received improved support through a certified driver program. These features were of little value to the gamers Nvidia's products already sold to, but their absence prevented high-end customers from using the less expensive products. History: There are parallels between the market segmentation used to sell the Quadro line of products to workstation (DCC) markets and the Tesla line of products to engineering and HPC markets. History: In a settlement of a patent infringement lawsuit between SGI and Nvidia, SGI acquired rights to speed-binned Nvidia graphics chips, which it shipped under the VPro product label. These designs were completely separate from the SGI Odyssey-based VPro products initially sold on its IRIX workstations, which used a completely different bus. SGI's Nvidia-based VPro line included the VPro V3 (GeForce 256), VPro VR3 (Quadro), VPro V7 (Quadro2 MXR), and VPro VR7 (Quadro2 Pro). Quadro SDI: Separate SDI Capture and SDI Output add-in cards are available only for Quadro 4000 cards and higher. Quadro Plex: Quadro Plex consists of a line of external servers for rendering videos. A Quadro Plex contains multiple Quadro FX video cards. A client computer connects to the Quadro Plex (using a PCI Express ×8 or ×16 interface card with an interconnect cable) to initiate rendering. See Nvidia Tesla cards for more details. Quadro SLI and SYNC: Scalable Link Interface, or SLI, is the next generation of Plex. SLI can improve frame rendering and FSAA. Quadro SLI supports Mosaic for 2 cards and 8 monitors. With a Quadro SYNC card, support for a maximum of 16 monitors (4 per card) is possible. Most cards have an SLI bridge slot for connecting 2, 3 or 4 cards on one main board. Acceleration of scientific calculations is possible with CUDA and OpenCL (a minimal device-enumeration sketch follows below). Nvidia has 4 types of SLI bridges: the Standard Bridge (400 MHz pixel clock and 1 GB/s bandwidth), the LED Bridge (540 MHz pixel clock), the High-Bandwidth Bridge (650 MHz pixel clock), and PCIe lanes reserved only for SLI. See SLI for more.
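As an illustration of the GPU-compute support mentioned above, here is a minimal, hedged sketch (not from the original article, and not specific to Quadro hardware; it assumes an OpenCL SDK and driver are installed) that enumerates the OpenCL platforms and GPU devices visible to the driver and prints the OpenCL version each device reports, mirroring the OpenCL levels listed later in the Software section.

```cpp
// Minimal OpenCL GPU enumeration sketch (illustrative only).
// Build example: g++ cl_devices.cpp -lOpenCL
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(0, nullptr, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        std::printf("No OpenCL platforms found.\n");
        return 1;
    }
    cl_platform_id platforms[8];
    if (num_platforms > 8) num_platforms = 8;
    clGetPlatformIDs(num_platforms, platforms, nullptr);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char pname[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, nullptr);

        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 0, nullptr, &num_devices) != CL_SUCCESS)
            continue;  // this platform exposes no GPU devices
        cl_device_id devices[8];
        if (num_devices > 8) num_devices = 8;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, num_devices, devices, nullptr);

        for (cl_uint d = 0; d < num_devices; ++d) {
            char dname[256] = {0}, dversion[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, nullptr);
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof(dversion), dversion, nullptr);
            std::printf("%s: %s (%s)\n", pname, dname, dversion);
        }
    }
    return 0;
}
```

Tools such as GPU-Z and CUDA-Z, mentioned later in the article, surface essentially the same information through a GUI.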
Quadro VCA: Nvidia supports SLI and supercomputing with its 8-GPU Visual Computing Appliance. Nvidia Iray, Chaos Group V-Ray and Nvidia OptiX accelerate raytracing for Maya, 3DS Max, Cinema4D, Rhinoceros and others. All software with CUDA or OpenCL support, such as ANSYS, NASTRAN, ABAQUS, and OpenFOAM, can benefit from the VCA. The DGX-1 is available with 8 GP100 cards. See Nvidia Tesla cards for more details. Quadro RTX: The Quadro RTX series is based on the Turing microarchitecture and features real-time raytracing. This is accelerated by the use of new RT cores, which are designed to process quadtrees and spherical hierarchies and to speed up collision tests with individual triangles. The Turing microarchitecture debuted with the Quadro RTX series before the mainstream consumer GeForce RTX line. The raytracing performed by the RT cores can be used to produce reflections, refractions and shadows, replacing traditional raster techniques such as cube maps and depth maps. Instead of replacing rasterization entirely, however, the information gathered from ray-tracing can be used to augment the shading with information that is much more physically correct, especially regarding off-camera action. Quadro RTX: Tensor cores further enhance the image produced by raytracing and are used to de-noise a partially rendered image. RTX is also the name of the development platform introduced for the Quadro RTX series. RTX leverages Microsoft's DXR, OptiX and Vulkan for access to raytracing. Turing is manufactured using TSMC's 12 nm FinFET fabrication process. Quadro RTX also uses GDDR6 memory from Samsung Electronics. Video cards: GeForce Many of these cards use the same core as the game- and action-oriented GeForce video cards by Nvidia. Those cards that are nearly identical to the desktop cards can be modified to identify themselves to the operating system as the equivalent Quadro card, allowing optimized drivers intended for the Quadro cards to be installed on the system. While this may not offer all of the performance of the equivalent Quadro card, it can improve performance in certain applications, but it may require installing the MAXtreme driver for comparable speed. Video cards: The performance difference comes from the firmware controlling the card. Given the importance of speed in a game, a system used for gaming can shut down textures, shading, or rendering after only approximating a final output in order to keep the overall frame rate high. The algorithms on a CAD-oriented card tend instead to complete all rendering operations, even if that introduces delays or variations in the timing, prioritising accuracy and rendering quality over speed. A GeForce card focuses more on texture fillrates and high framerates with lighting and sound, whereas Quadro cards prioritize wireframe rendering and object interactions. Software: With Caps Viewer (version 1.38 as of 2018), Windows users can view information about the graphics card and the installed driver and can test some features. GPU-Z also reads graphics card data, and users can submit their data to improve its database.
Software: Quadro/RTX drivers
Curie architecture: for the last drivers, see Nvidia's driver portal (end-of-life).
Tesla architecture (G80+, GT2xx), in legacy mode with Quadro driver 340: OpenGL 3.3, OpenCL 1.1, DirectX 10.0/10.1 (end-of-life).
Fermi (GFxxx): OpenCL 1.1, OpenGL 4.5 and some OpenGL 2016 features with Quadro driver 375; in legacy mode with version 391.74 (end-of-life).
Kepler (GKxxx): OpenCL 1.2, OpenGL 4.6, Vulkan 1.2 with RTX Enterprise/Quadro driver 470 (end-of-life).
Maxwell (GMxxx): OpenCL 3.0, OpenGL 4.6, Vulkan 1.3 with RTX Enterprise/Quadro driver 525+.
Pascal (GPxxx): OpenCL 3.0, OpenGL 4.6, Vulkan 1.3 with RTX Enterprise/Quadro driver 525+.
Volta (GVxxx): OpenCL 3.0, OpenGL 4.6, Vulkan 1.3 with RTX Enterprise/Quadro driver 525+.
Turing (TUxxx): OpenCL 3.0, OpenGL 4.6, Vulkan 1.3 with RTX Enterprise/Quadro driver 525+.
Ampere (GAxxx): OpenCL 3.0, OpenGL 4.6, Vulkan 1.3 with RTX Enterprise/Quadro driver 525+.
Ada Lovelace (ADxxx): OpenCL 3.0, OpenGL 4.6, Vulkan 1.3 with RTX Enterprise/Quadro driver 525+.
CUDA: Tesla architecture and later. The supported CUDA level depends on the GPU and card.
Software: CUDA SDK 6.5: supports compute capability 1.0 - 5.x (Tesla, Fermi, Kepler, Maxwell); last version with support for the Tesla architecture (compute capability 1.x).
CUDA SDK 7.5: supports compute capability 2.0 - 5.x (Fermi, Kepler, Maxwell).
CUDA SDK 8.0: supports compute capability 2.0 - 6.x (Fermi, Kepler, Maxwell, Pascal); last version with support for compute capability 2.x (Fermi).
CUDA SDK 9.0/9.1/9.2: supports compute capability 3.0 - 7.2 (Kepler, Maxwell, Pascal, Volta).
CUDA SDK 10.0/10.1/10.2: supports compute capability 3.0 - 7.5 (Kepler, Maxwell, Pascal, Volta, Turing); last version with support for compute capability 3.x (Kepler).
Software: CUDA SDK 11.0/11.1/11.2/11.3/11.4/11.5/11.6/11.7: supports compute capability 3.5 - 8.9 (Kepler (GK110, GK208, GK210 only), Maxwell, Pascal, Volta, Turing, Ampere, Ada Lovelace).
CUDA SDK 11.8: supports compute capability 3.5 - 8.9 (Kepler (GK110, GK208, GK210 only), Maxwell, Pascal, Volta, Turing, Ampere, Ada Lovelace).
CUDA SDK 12.0: supports compute capability 5.0 - 8.9 (Maxwell, Pascal, Volta, Turing, Ampere, Ada Lovelace).
To test your own card, see the CUDA-Z tool.
Desktop PCI Express: Quadro FX (without CUDA, OpenCL, or Vulkan): Rankine (NV3x): DirectX 9.0a, Shader Model 2.0a, OpenGL 2.1; Curie (NV4x, G7x): DirectX 9.0c, Shader Model 3.0, OpenGL 2.1. Quadro FX (with CUDA and OpenCL, but no Vulkan), architecture Tesla (G80+, GT2xx) with OpenGL 3.3 and OpenCL 1.1: Tesla (G80+): DirectX 10, Shader Model 4.0, only single precision (FP32) available for CUDA and OpenCL; Tesla 2 (GT2xx): DirectX 10.1, Shader Model 4.1, single precision (FP32) available for CUDA and OpenCL (double precision (FP64) available for CUDA and OpenCL only for GT200 with CUDA compute capability 1.3). Quadro, architectures Fermi (GFxxx), Kepler (GKxxx), Maxwell (GMxxx), Pascal (GPxxx), Volta (GVxxx) (except the Quadro 400, which uses Tesla 2): all cards with DisplayPort 1.1+ can support 10 bit per channel for OpenGL (HDR for graphics professionals; Adobe Photoshop and more). Vulkan 1.2 is available with driver 456.38 on Windows and 455.23.04 on Linux for Kepler, Maxwell, Pascal and Volta. All Kepler, Maxwell, Pascal, Volta and later cards support OpenGL 4.6 with driver 418+. All Quadro cards support OpenCL 1.1.
Kepler cards support OpenCL 1.2, and Maxwell and later support OpenCL 3.0. All cards with compute capability 2.0 and higher support double precision (see CUDA). Notes: (1) Nvidia Quadro 342.01 WHQL: support of OpenGL 3.3 and OpenCL 1.1 for legacy Tesla microarchitecture Quadros. (2) Nvidia Quadro 377.83 WHQL: support of OpenGL 4.5 and OpenCL 1.1 for legacy Fermi microarchitecture Quadros. (3) Nvidia Quadro 474.04 WHQL: support of OpenGL 4.6, OpenCL 1.2 and Vulkan 1.2 for legacy Kepler microarchitecture Quadros. (4) Nvidia Quadro 528.24 WHQL: support of OpenGL 4.6, OpenCL 3.0 and Vulkan 1.3 for Maxwell, Pascal and Volta microarchitecture Quadros. (5) OpenCL 1.1 is available for Tesla chips; OpenCL 1.0 is available for some cards with G8x, G9x and GT200 on Mac OS X. Quadro RTX/RTX series (with ray tracing): Turing (TU10x), Ampere (GA10x) and Ada Lovelace (AD10x) microarchitectures. Quadro RTX/RTX Mobile: Turing and Ampere microarchitectures. Desktop AGP: architecture Celsius (NV1x): DirectX 7, OpenGL 1.2 (1.3); architecture Kelvin (NV2x): DirectX 8 (8.1), OpenGL 1.3 (1.5), Pixel Shader 1.1 (1.3); architecture Rankine (NV3x): DirectX 9.0a, OpenGL 1.5 (2.1), Shader Model 2.0a; architecture Curie (NV4x): DirectX 9.0c, OpenGL 2.1, Shader Model 3.0. Desktop PCI: architecture Rankine (NV3x): DirectX 9.0a, OpenGL 1.5 (2.1), Shader Model 2.0a. For business NVS: The Nvidia Quadro NVS graphics processing units (GPUs) provide business graphics solutions for manufacturers of small, medium, and enterprise-level business workstations. The Nvidia Quadro NVS desktop solutions enable multi-display graphics for businesses such as financial traders. For business NVS: architecture Celsius (NV1x): DirectX 7, OpenGL 1.2 (1.3); architecture Kelvin (NV2x): DirectX 8 (8.1), OpenGL 1.3 (1.5), Pixel Shader 1.1 (1.3); architecture Rankine (NV3x): DirectX 9.0a, OpenGL 1.5 (2.1), Shader Model 2.0a; architecture Curie (NV4x): DirectX 9.0c, OpenGL 2.1, Shader Model 3.0; architecture Tesla (G80+): DirectX 10.0, OpenGL 3.3, Shader Model 4.0, CUDA 1.0 or 1.1, OpenCL 1.1; architecture Tesla 2 (GT2xx): DirectX 10.1, OpenGL 3.3, Shader Model 4.1, CUDA 1.2 or 1.3, OpenCL 1.1; architecture Fermi (GFxxx): DirectX 11.0, OpenGL 4.6, Shader Model 5.0, CUDA 2.x, OpenCL 1.1; architecture Kepler (GKxxx): DirectX 11.2, OpenGL 4.6, Shader Model 5.0, CUDA 3.x, OpenCL 1.2, Vulkan 1.2; architecture Maxwell 1 (GM1xx): DirectX 12.0, OpenGL 4.6, Shader Model 5.0, CUDA 5.0, OpenCL 3.0, Vulkan 1.3. Mobile applications: Quadro FX M (without Vulkan): architectures Rankine (NV3x), Curie (NV4x, G7x) and Tesla (G80+, GT2xx). Quadro NVS M: architecture Curie (NV4x, G7x): DirectX 9.0c, OpenGL 2.1, Shader Model 3.0; architecture Tesla (G80+): DirectX 10.0, OpenGL 3.3, Shader Model 4.0, CUDA 1.0 or 1.1, OpenCL 1.1; architecture Tesla 2 (GT2xx): DirectX 10.1, OpenGL 3.3, Shader Model 4.1, CUDA 1.2 or 1.3, OpenCL 1.1; architecture Fermi (GFxxx): DirectX 11.0, OpenGL 4.6, Shader Model 5.0, CUDA 2.x, OpenCL 1.1; architecture Kepler (GKxxx): DirectX 11.2, OpenGL 4.6, Shader Model 5.0, CUDA 3.x, OpenCL 1.2, Vulkan 1.1; architecture Maxwell 1 (GM1xx): DirectX 12.0, OpenGL 4.6, Shader Model 5.0, CUDA 5.0, OpenCL 1.2, Vulkan 1.1. Quadro M: architectures Fermi, Kepler, Maxwell, Pascal. Fermi, Kepler, Maxwell, and Pascal support OpenGL 4.6 with driver versions 381+ on Linux or 390+ on Windows. All can do double precision with compute capability 1.3 and higher. Vulkan 1.2 is supported on Kepler, and Vulkan 1.3 on Maxwell and later. The Quadro 5000M has 2048 MB of VRAM, of which 1792 MB is usable with ECC enabled.
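The CUDA compute capability and ECC details listed above can also be queried programmatically. Below is a minimal sketch using the CUDA runtime API (not taken from the article; it assumes the CUDA toolkit is installed and is not specific to Quadro cards) that prints each device's name, compute capability, memory size and ECC state, which is essentially the information that tools like CUDA-Z report.

```cpp
// Minimal CUDA device query sketch (illustrative only).
// Build example: nvcc device_query.cpp -o device_query
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("CUDA error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) != cudaSuccess) continue;
        std::printf("Device %d: %s, compute capability %d.%d, %zu MiB VRAM, ECC %s\n",
                    i, prop.name, prop.major, prop.minor,
                    prop.totalGlobalMem / (1024 * 1024),
                    prop.ECCEnabled ? "enabled" : "disabled");
    }
    return 0;
}
```

The reported major/minor pair is the compute capability that determines which CUDA SDK versions can target the card, and on double-precision-capable hardware (compute capability 1.3 or 2.0 and higher, as described above) it also indicates FP64 support.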
NVENC and NVDEC support matrix: HW accelerated encode and decode are supported on NVIDIA Quadro products with Fermi, Kepler, Maxwell, Pascal, Turing, and Ampere generation GPUs.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glossary of quantum philosophy** Glossary of quantum philosophy: This is a glossary of the terminology applied in the foundations of quantum mechanics and quantum metaphysics, collectively called quantum philosophy, a subfield of the philosophy of physics. Note that this is a highly debated field, so different researchers may define the terms differently. Physics: Non-classical properties of quantum mechanics: nonseparability (see also: entangled), nonlocality, superposition of states (see also: Schrödinger's cat). Quantum phenomena: decoherence, uncertainty principle (see also: Einstein and the quantum), entanglement (see also: Bell's theorem, EPR paradox and CHSH inequality), quantum teleportation, superselection rule, quantum erasure, delayed choice experiment, quantum Zeno effect, premeasurement, ideal measurement. Suggested physical entities: hidden variables, ensemble. Terms used in the formalism of quantum mechanics: Born's rule, collapse postulate, measurement, relative state, decoherent histories. Metaphysics: objective and subjective, ontic and epistemic, intrinsic and extrinsic, agnostic, philosophical realism, determinism, causality, empiricism, rationalism, scientific realism, psychophysical parallelism. Interpretations of quantum mechanics: List of interpretations: Bohmian mechanics, de Broglie–Bohm theory, consistent histories, Copenhagen interpretation, conventional interpretation (usually refers to the Copenhagen interpretation), ensemble interpretation, Everett interpretation (see relative-state interpretation), hydrodynamic interpretation, Ghirardi–Rimini–Weber theory (GRW theory / GRW effect), many-worlds interpretation, many-minds interpretation, many-measurements interpretation, modal interpretations, objective collapse theory, orthodox interpretation (usually refers to the Copenhagen interpretation), Penrose interpretation, pilot wave, quantum logic, relative-state interpretation, relational quantum mechanics, stochastic interpretation, transactional interpretation. Uncategorized items: quantum Darwinism, completeness, relativistic measurement theory, consciousness and observer role, quantum correlation, quantum indeterminism, stochastic collapse, pointer state, quantum causality, postselection, entropy, quantum cosmology. People: Early researchers (before the 1950s): Max Born, Albert Einstein, Niels Bohr, J. S. Bell, Hugh Everett III, David Bohm. 1950s–2010s: Roland Omnès, W. H. Zurek, Erich Joos, Max Tegmark, Maximilian Schlosshauer, H. D. Zeh, David Deutsch, Robert B. Griffiths, Bernard d'Espagnat, Carl von Weizsäcker. 2000s or later: Bob Coecke, Robert Spekkens.
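For two of the formalism terms listed above, a compact mathematical statement may help; the following LaTeX fragment (added for illustration, not part of the original glossary) states Born's rule and the CHSH inequality referenced in the entanglement entry.

```latex
% Born's rule: probability of obtaining outcome a_i when measuring state |\psi\rangle
\[
  P(a_i) = \bigl| \langle a_i \mid \psi \rangle \bigr|^{2}
\]
% CHSH inequality: for correlation functions E and detector settings a, a', b, b',
% any local hidden-variable theory obeys
\[
  S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,
\]
% while quantum mechanics allows |S| up to 2\sqrt{2} (Tsirelson's bound).
```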
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded