**Ydc2 protein domain** Ydc2 protein domain: In molecular biology, the protein domain Ydc2 (also known as SpCce1) is a Holliday junction resolvase from the fission yeast Schizosaccharomyces pombe that is involved in the maintenance of mitochondrial DNA. Function: Ydc2 domains are enzymes (biological catalysts) capable of resolving Holliday junctions into separate DNA duplexes by cleaving DNA after 5'-CT-3' and 5'-TT-3' sequences. Properties: The junction-resolving enzymes are very diverse, but have the following properties in common: high structural specificity for binding, and metal-dependent, sequence-specific cleavage activity. Essentially, they are highly specific. Limiting factors: Cleavage efficiency is further affected by the strand type (continuous or exchange) and the nucleotide sequence at the cleavage site. Structure: This protein domain forms a ribonuclease H fold consisting of two beta sheets and one alpha helix, arranged as a beta-alpha-beta motif. Each beta sheet has five strands, arranged in a 32145 order, with the second strand antiparallel to the rest.
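The cleavage preference lends itself to a simple illustration. Below is a minimal Python sketch that locates candidate cleavage positions in a DNA strand; the function name and interface are hypothetical, not taken from any published tool.

```python
# Toy scan for Ydc2 cleavage sites: per the text above, the enzyme
# cleaves DNA just after 5'-CT-3' and 5'-TT-3' dinucleotides.

def candidate_cleavage_sites(strand: str) -> list[int]:
    """Return 0-based positions immediately after a CT or TT dinucleotide."""
    strand = strand.upper()
    return [i + 2 for i in range(len(strand) - 1)
            if strand[i:i + 2] in ("CT", "TT")]

print(candidate_cleavage_sites("GACTTAGCTA"))  # [4, 5, 9]
```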
**Zinc nitrate** Zinc nitrate: Zinc nitrate is an inorganic chemical compound with the formula Zn(NO3)2. This colorless, crystalline salt is highly deliquescent. It is typically encountered as the hexahydrate Zn(NO3)2·6H2O. It is soluble in both water and alcohol. Synthesis: Zinc nitrate is usually prepared by dissolving zinc metal, zinc oxide, or related materials in nitric acid:
Zn + 2 HNO3 → Zn(NO3)2 + H2
ZnO + 2 HNO3 → Zn(NO3)2 + H2O
These reactions are accompanied by the hydration of the zinc nitrate. The anhydrous salt arises by the reaction of anhydrous zinc chloride with nitrogen dioxide:
ZnCl2 + 4 NO2 → Zn(NO3)2 + 2 NOCl
Reactions: Treatment of zinc nitrate with acetic anhydride gives zinc acetate. On heating, zinc nitrate undergoes thermal decomposition to form zinc oxide, nitrogen dioxide, and oxygen:
2 Zn(NO3)2 → 2 ZnO + 4 NO2 + O2
Applications: Zinc nitrate has no large-scale application but is used on a laboratory scale for the synthesis of coordination polymers. Its controlled decomposition to zinc oxide has also been used for the generation of various ZnO-based structures, including nanowires. It can be used as a mordant in dyeing. An example reaction gives a precipitate of zinc carbonate:
Zn(NO3)2 + Na2CO3 → ZnCO3 + 2 NaNO3
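As a worked check of the decomposition stoichiometry, the sketch below computes the theoretical ZnO yield from a given mass of anhydrous zinc nitrate. The sample mass is an arbitrary illustrative value, and the atomic masses are rounded standard values.

```python
# Stoichiometry of 2 Zn(NO3)2 -> 2 ZnO + 4 NO2 + O2:
# one mole of ZnO per mole of Zn(NO3)2 decomposed.

M = {"Zn": 65.38, "N": 14.007, "O": 15.999}          # g/mol, rounded

m_zn_nitrate = M["Zn"] + 2 * (M["N"] + 3 * M["O"])   # Zn(NO3)2, ~189.39 g/mol
m_zno = M["Zn"] + M["O"]                             # ZnO, ~81.38 g/mol

sample_g = 10.0                                      # illustrative sample mass
mol = sample_g / m_zn_nitrate
print(f"Theoretical ZnO yield: {mol * m_zno:.2f} g") # ~4.30 g
```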
**IBM Operational Decision Management** IBM Operational Decision Management: IBM Operational Decision Manager (ODM) is IBM's Business Rule Management System (BRMS). IBM ODM also incorporates IBM's implementation of business event processing capabilities (also called complex event processing, or CEP). IBM ODM can be installed both independently and as an application running on WebSphere Application Server. The software is currently at V8.11.0 (as of October 2022). Business rules and events: Rules A business rule is a statement of logic that is used for a business decision to be made. This statement of logic is generally part of a business policy. Rules processing involves a piece of software using this pre-defined rule to make a real-time decision. Example A policy states that a borrower's initial loan must not exceed 3 times their annual salary. The business rule would read: if Loan > (Salary * 3) then disallow. Events A business event is a signal or collection of signals indicating that a change in state has occurred, and consists of a small message. Event processing involves using events to determine if an action needs to occur as a result, and carrying out that action. Example If a customer's withdrawal event on their account causes the balance to drop below zero, then an action is taken to notify that customer. Artifacts of IBM ODM: IBM ODM is an implementation of a Business Rule Management System. It allows the creation, management, testing and governance of business rules and events and stores them in a central repository where they can be accessed by multiple individuals and software products. This central storage of the rules and events means that they can be easily modified without having to rebuild software, and with a reduced testing cycle, and the different software products will pick up the change simultaneously. Artifacts of IBM ODM: Action rules A basic rule expressed in a logical form, stating that if a condition occurs then an action should result. IBM ODM uses Business Action Language (BAL) to define such rules, allowing them to be viewed in a more 'natural' language. Examples If a credit card transaction occurs outside a customer's country, then that customer should be called to confirm the card is not being used fraudulently. If Country of Card Usage is not equal to Customer's home country, then trigger the sending of a message to call that customer. Artifacts of IBM ODM: At a bank, some customers are not allowed to become overdrawn and some are: if a customer tries to withdraw funds, allowing their account to drop below $0, and they are allowed: permit transaction; otherwise: disallow transaction. Decision tables Example A loan company determines the insurance rate of a loan depending on the amount and the credit rating of the customer. Artifacts of IBM ODM: Presented with a customer in group B asking for a loan of $250,000, the rule would indicate the insurance rate should be 0.002% (see the sketch after the Components section below). Rule flows These indicate the order in which rules should be executed. Example An insurance company wants to establish whether a driver should be given a particular insurance policy. The decision depends on: the age of the applicant; whether their history indicates they are a high-risk driver, based on speeding tickets and past accidents; and a profile score for that customer, based on how recently they passed their test and other factors.
Whether a particular rule is run is dependent on answers to previous rules. A rule flow is constructed from a start node through the different rules that must be considered, finishing at the end node. Score card This is a statistical model that applies a numerical score to an object, such as a customer or an account. The same attributes are applied when calculating this score for each item. An example of this is a credit scorecard. Example A score is allocated to a borrower depending on their age, citizenship, and credit grade. Events If a specific change in state occurs, then a message is emitted, causing an event to occur. Artifacts of IBM ODM: Example At a bank, some customers are not allowed to become overdrawn and some are. A customer who has tried to take out a loan is refused by the system because their credit rating is too low. If the customer is refused, emit an event causing a message to be sent to the user informing them that they have been refused and indicating the reason. Artifacts of IBM ODM: In summary Combining business rules and events within the same system brings together two complementary technologies to automate real-time decisions. An event may trigger a rule to be run; conversely, the outcome of a decision made by a rule may emit an event. Components: IBM ODM consists of the following parts: Decision Center This provides a repository and management component for the creation and maintenance of decision logic, guiding the business system's behavior. It is the central hub for the coordination of the decision life cycle, covering both business rules and business events, and allowing editing of each. It is presented in different ways depending on how the user is intended to view the system: the Business Console, for collaborative work with business rules, and the Enterprise Console. Components: Decision Server This consists of the runtime components for business rules and business events. Decision Server Rules This provides a group of tools for construction and running of rules and automated decisions. Various components give access for different types of users, allowing the design, authoring, review, testing and running of business rules. This includes the Rule Designer, an Eclipse-based application for developing applications in Decision Server Rules. Decision Server Events This provides an integrated set of components for running events. Various components give access for different types of users, allowing the design, development, testing, deployment and maintenance of business events. This includes the Event Designer, an Eclipse-based application for developing applications in Decision Server Events. Components: Connection between parts Rules can be defined in the Decision Center and can also be updated there, using a variety of interfaces, including the Enterprise Console and the Business Console. Rules are then stored in a repository which manages the decision artifacts, access control, and versioning. From here the rules are deployed to the Decision Server, which executes these rules and provides monitoring and measuring facilities. Rules can also be deployed directly to the Decision Server using the Rule Designer or Event Designer.
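To make the rule and decision-table examples above concrete, here is a small Python sketch of the loan logic. IBM ODM expresses such logic in Business Action Language and decision tables rather than code, so the function names, the rate bands, and all rate values except the quoted 0.002% for a group-B $250,000 loan are invented for illustration.

```python
# Action rule from the example: "if Loan > (Salary * 3) then disallow".
def loan_decision(loan: float, salary: float) -> str:
    return "disallow" if loan > salary * 3 else "allow"

# Decision table: insurance rate (in percent) by credit group and loan band.
# The banding and every value except (B, >200k) -> 0.002 are hypothetical.
INSURANCE_RATE = {
    ("A", "<=200k"): 0.001,
    ("B", "<=200k"): 0.0015,
    ("A", ">200k"):  0.0015,
    ("B", ">200k"):  0.002,   # the example: group B, $250,000 -> 0.002%
}

def insurance_rate(group: str, amount: float) -> float:
    band = "<=200k" if amount <= 200_000 else ">200k"
    return INSURANCE_RATE[(group, band)]

print(loan_decision(250_000, 60_000))  # disallow (250,000 > 180,000)
print(insurance_rate("B", 250_000))    # 0.002 (percent)
```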
Requirements: Decision Server Rules can run on distributed systems: on WebSphere Application Server, WebSphere Application Server ND, WebSphere Application Server Express, Tomcat, JBoss Application Server, JBoss Enterprise Application Platform, or WebLogic Server, as a shared or scoped Java EE application. Decision Server Rules can run on the z/OS mainframe: standalone (as Rule Execution Server for z/OS), on WebSphere Application Server for z/OS, or on WebSphere Application Server ND for z/OS. Decision Server Events and Decision Center can run on WebSphere Application Server for z/OS and WebSphere Application Server ND for z/OS. Rule Designer runs in Eclipse, or in an Eclipse-based product. Version history: Prior to its release at V7.5, the parts of ODM were available as separate products: ILOG JRules, coming from the acquisition of ILOG, and WebSphere Business Events, coming from the acquisition of Aptsoft.
**N-Acetyltaurine** N-Acetyltaurine: N-Acetyltaurine (NAcT) is an endogenous metabolite. Biochemically, N-acetyltaurine is formed by acetylation of taurine; the main substrate for this reaction is acetate. An increase in endogenous N-acetyltaurine concentrations has been observed after the consumption of alcohol and after extended physical activity (ketoacidosis). History: N-Acetyltaurine was first mentioned in 1990 as a compound in the droplets of the orb spider's viscid spiral. Because of its high hygroscopicity, N-acetyltaurine is an important ingredient that helps keep the spider web flexible. History: As a biomarker for ethanol metabolism, N-acetyltaurine was first mentioned in a mouse study in 2012. Another study, in 2015, focused on the effect of endurance training on the increase in N-acetyltaurine concentrations. The first study focusing on the forensic context of alcohol biomarker analysis in human urine was published in 2016. One year later, in 2017, an evaluation of N-acetyltaurine as an alcohol marker in human blood followed. Significance as an alcohol marker: N-Acetyltaurine is a direct alcohol biomarker which represents the oxidative pathway of ethanol metabolism. Other direct alcohol biomarkers such as fatty acid ethyl esters (FAEE), ethyl glucuronide, ethyl sulfate, and phosphatidylethanol reflect the non-oxidative pathway of alcohol metabolism, based on conjugation reactions (biotransformation). The fact that N-acetyltaurine is an endogenous metabolite reduces its significance as an alcohol biomarker: a distinction between endogenous N-acetyltaurine concentrations and alcohol-induced concentrations is necessary. Significance as an alcohol marker: During a drinking study with a target blood alcohol concentration of 0.8 g/kg, an alcohol-induced urinary concentration of N-acetyltaurine about tenfold higher than the endogenous concentration was observed. In blood, the alcohol-induced increase was only twofold. Based on these observations, it was concluded that N-acetyltaurine is excreted very efficiently by the kidney. Analytics: N-Acetyltaurine can be quantified by a combination of high-performance liquid chromatography and tandem mass spectrometry (MS/MS). Due to the high hydrophilicity of N-acetyltaurine, hydrophilic interaction chromatography (HILIC) is the method of choice for separating the analyte from the matrix components.
**Tilted plane focus** Tilted plane focus: Tilted plane photography is a method of employing focus as a descriptive, narrative or symbolic artistic device. It is distinct from simpler uses of selective focus which highlight or emphasise a single point in an image, create an atmospheric bokeh, or miniaturise an obliquely-viewed landscape. In this method the photographer consciously uses the camera to focus on several points in the image at once while de-focussing others, thus making conceptual connections between these points. Limits to focus in imaging: Focus is relative to spatial depth. Selective focus in photography is usually associated with depth of field. A pinhole camera generates an image of infinite relative focus, from a point just outside the camera opening out to infinity. Lenses focus more selectively: for objects near the lens, the distance between lens and sensor or film must be increased, and it is shortened for more distant objects, up to a point beyond which all is in focus. In telephoto lenses this point may be tens or hundreds of metres from the camera. Wide-angle lenses distinguish differences in depth only up to a short distance, beyond which all is in focus. Depth of field: Depth of field is an effect that permits bringing into focus objects at varying distances from the camera, and at varying depths from each other, within the field of view. A short lens, as explained above, will bring objects into focus that are relatively close to the camera, but it will also keep focus across greater distances between objects. A telephoto lens will be very shallow in its gamut of focus. Depth of field: Reducing the size of the aperture of the lens deepens the focus. At a pinhole size this effect increases, though the closer the objects are to the camera, the shorter the distance between focussed objects. Plane of focus: Because focus depends on the distance between lens and the sensor or film plane, focus in the space in front of the camera is not on a point but rather on a plane parallel to the film plane. Spherical construction of lenses, rather than the ideal parabolic construction which is rarely and expensively achieved, means that this plane is slightly concave—more so in simple single-element lenses and increasingly so with lenses of lower-quality construction and materials. Compound lenses are built to correct this "spherical aberration" or "curvature of field". Tilting the plane of focus: Varying the distance between the lens and sensor or film plane across the field of view permits focussing on objects at varying distances from the camera. One means of achieving this is to tilt the lens and/or the sensor or film plane in relation to each other. This means that individual points in the picture plane will focus at different points of depth, with the effect that the plane of sharp focus tilts. Tilting the plane of focus: This technique is based on the Scheimpflug principle which, traditionally, is combined with a small aperture to increase the gamut of focus beyond that achievable by depth of field alone. Usually no out-of-focus artifacts are desired in an image resulting from Scheimpflug adjustments. Here the converse is true. With the lens at full aperture, the photographer selects points in depth in the scene on which to focus and throws other points out of focus. This increases the contrast between the sharp and blurred areas, and the selective application of focus and blur remains apparent to the viewer.
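The optics behind the tilt can be summarized with the standard thin-lens relation; the notation below (object distance u, image distance v, focal length f) is conventional and not taken from the article.

```latex
% Thin-lens relation: each image distance v picks out one object
% distance u that renders sharply.
\[
  \frac{1}{u} + \frac{1}{v} = \frac{1}{f}
\]
% Tilting the lens or film plane makes v vary across the frame, so the
% sharply rendered u varies too: the plane of sharp focus tilts. The
% Scheimpflug principle states that the film plane, lens plane, and
% plane of sharp focus then intersect in a single line.
```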
Tilted plane focus on smaller formats: A view camera permits full, incrementally calibrated control over this technique, though it is possible to achieve similar effects on a 35mm camera or digital single-lens reflex camera (DSLR) using a special tilt-shift lens, or by manually holding a lens that has been removed from its mount. History: Julia Margaret Cameron was a strong advocate of this use of selective focus. For example, in "Prayer and Praise", produced in 1865, there is a deliberate placement of focus at more than three points: on the face and parts of the body of the foreground child and on the faces of the mother and father, while a second child's face is thrown radically out of focus.
**Gonadotropin-releasing hormone insensitivity** Gonadotropin-releasing hormone insensitivity: Gonadotropin-releasing hormone (GnRH) insensitivity, also known as isolated gonadotropin-releasing hormone deficiency (IGD), is a rare autosomal recessive genetic and endocrine syndrome which is characterized by inactivating mutations of the gonadotropin-releasing hormone receptor (GnRHR) and thus an insensitivity of the receptor to gonadotropin-releasing hormone (GnRH), resulting in a partial or complete loss of the ability of the gonads to synthesize the sex hormones. The condition manifests itself as isolated hypogonadotropic hypogonadism (IHH), presenting with symptoms such as delayed, reduced, or absent puberty, low or complete lack of libido, and infertility, and is the predominant cause of IHH when it does not present alongside anosmia. Signs and symptoms: There is a relatively broad spectrum of clinical signs and symptoms that can occur, ranging from complete absence of sexual development to partial completion of puberty that does not subsequently progress. Of note, the X-linked Kallmann syndrome (KS) form of GnRH insensitivity, relating to mutations in the ANOS1 gene, has the most consistently severe phenotypic presentation (i.e., prepubertal testes size and complete absence of gonadotropin-releasing hormone [GnRH]-induced luteinizing hormone [LH] pulsations during frequent sampling studies) of all of the genes associated with this condition. GnRH insensitivity can present at any age, but the presenting signs and symptoms are a function of the age-related period of reproductive activity. During the neonatal period, boys with the more severe cases of GnRH insensitivity can present with microphallus and/or cryptorchidism, presumably due to in utero and/or neonatal GnRH deficiency; approximately one-half of boys with microphallus have GnRH insensitivity as the underlying diagnosis. In comparison, newborn girls with GnRH insensitivity have no obvious abnormal reproductive tract findings that might provide clues to the diagnosis. However, in both sexes, other congenital nonreproductive features may be present (e.g., midline facial defects, skeletal abnormalities). During childhood, since the hypothalamic GnRH-pituitary-gonadal axis is quiescent, a diagnosis of GnRH insensitivity can generally be heralded only in the presence of nonreproductive phenotypes (e.g., the lack of a sense of smell in some patients [anosmia] or skeletal abnormalities, such as cleft lip/cleft palate, hearing deficits, or syndactyly). At puberty, patients of both sexes can present with a complete form of GnRH insensitivity that is characterized by a failure to initiate sexual maturation (e.g., lack of secondary sexual characteristics, primary amenorrhea in girls, lack of virilization in boys) and failure to establish a pubertal growth spurt. Some patients present with partial forms of GnRH insensitivity and undergo some degree of pubertal development that subsequently ceases. For example, some males with GnRH insensitivity exhibit some testicular growth, while some females can have thelarche and menarche, but hypogonadotropic hypogonadism (HH) is demonstrable soon thereafter.
Extremely rarely, a few patients have completely normal pubertal development and adult gonadal function, only to develop HH in adulthood, after testicular development is complete, with prepubertal levels of testosterone but sometimes with normal testicular size as a clue to the acquired nature of the condition, leading to infertility and sexual dysfunction. These patients are referred to as having the adult-onset or acquired form of GnRH insensitivity. Causes: Congenital causes — genetic mutations: Kallmann syndrome: ANOS1 (formerly KAL1), X-linked recessive KS; SOX10 (SRY-box 10 gene), autosomal dominant KS with variable penetrance; IL17RD, autosomal dominant KS with variable penetrance; SEMA3A, autosomal dominant KS with variable penetrance; FEZF1, autosomal recessive KS. Digenic and oligogenic mutations: a heterozygous FGFR1 mutation and a heterozygous deletion in the NSMF gene in an anosmic pedigree; a compound heterozygous GNRHR mutation and a heterozygous FGFR1 mutation in a normosmic pedigree. GnRH deficiency associated with mental retardation/obesity: congenital malformations often associated with craniofacial anomalies; Laurence-Moon-Biedl syndrome; Prader-Willi syndrome. Acquired causes: benign tumors and cysts (craniopharyngiomas; germinomas, meningiomas, gliomas, astrocytomas); metastatic tumors (breast, lung, prostate); chronic systemic disease (malnutrition, anorexia nervosa, bulimia); hypothyroidism, hyperprolactinemia, diabetes mellitus, Cushing's disease; post-androgen abuse; infiltrative diseases (hemochromatosis, granulomatous diseases, histiocytosis); head trauma; pituitary apoplexy; drugs (marijuana, opioids, anabolic steroids). Pathophysiology: The genetic mechanisms of gonadotropin-releasing hormone (GnRH) insensitivity involve mutations in at least twenty-four genes regulating GnRH neuronal migration, secretion, and activity. So far, the mechanisms underlying gonadotropin deficiency, in both the prepubertal and adult-onset forms, remain unknown in most cases. The lack of endogenous hypothalamic gonadotropin-releasing hormone (GnRH) secretion/action in patients with GnRH insensitivity cannot be proven by direct assay of GnRH in the portal circulation but can be reasonably inferred from two findings: (1) the lack of any endogenous GnRH-induced luteinizing hormone (LH) pulses during frequent blood sampling; and (2) the fact that most patients respond with robust gonadotropin secretion to exogenous GnRH administered in a pulsatile regimen designed to mimic endogenous GnRH secretion (GnRH dose and frequency based upon a previous study of LH secretion in normal men). This responsiveness demonstrates the intact anatomic and functional integrity of the gonadotrophs and the gonads in these patients. Diagnosis: When suspected on the basis of the clinical presentation or physical findings, the diagnosis of GnRH insensitivity should be confirmed biochemically. The diagnosis requires the following findings: demonstration of prepubertal serum concentrations of sex steroid hormones (serum testosterone less than 100 ng/dL [3.5 nmol/L] in males or serum estradiol less than 20 pg/mL [73 pmol/L] in females); inappropriately low or normal serum luteinizing hormone (LH) and follicle-stimulating hormone (FSH) concentrations (usually less than 4 to 5 international units/L) rather than the high concentrations expected with primary gonadal failure; otherwise normal anterior pituitary function; and
normal appearance of the hypothalamus and pituitary region on magnetic resonance imaging (MRI). When seeking this diagnosis, it is useful to request fine (1 mm) cuts through the olfactory bulb region of the MRI to define subtle abnormalities of the olfactory system that may signal which genetic tests to request first. Differential diagnosis — For patients fulfilling the above laboratory criteria, the main (and most difficult) differential diagnosis is constitutional delay of growth and puberty (CDGP). Diagnosis: A definitive diagnosis of GnRH insensitivity in the absence of a family history or prior genetic testing is difficult to make until the patient reaches at least 18 years of age, unless other suggestive features are present (i.e., prior microphallus and/or cryptorchidism, anosmia, renal agenesis, skeletal defects, etc.). CDGP is far more common than GnRH insensitivity, affecting approximately 3 percent of adolescents, while the prevalence of the Kallmann syndrome (KS) form of GnRH insensitivity is 1:48,000, with a clear difference between males (1:30,000) and females (1:125,000). Diagnosis: No single test can reliably distinguish between GnRH insensitivity and CDGP until more widespread genetic testing becomes available, and therefore one has to rely on an array of clinical clues as well as on the natural evolution over time. However, certain features may indicate a higher likelihood of GnRH insensitivity rather than CDGP: a family history of gonadotropin-releasing hormone (GnRH) deficiency or anosmia, and/or the presence of one or several associated congenital nonreproductive abnormalities (e.g., cleft lip/palate, syndactyly), suggests the KS form of GnRH deficiency. Diagnosis: A history of "stalled" puberty rather than total absence of development, a family history of delayed puberty, or early evidence of breast or testicular development are useful indicators that puberty is likely to occur spontaneously (i.e., CDGP). The presence of pubic hair suggests GnRH insensitivity because normal adrenarche still occurs; in comparison, both adrenarche and gonadarche are delayed in CDGP, and therefore pubic hair is usually absent. In females, functional hypogonadotropic hypogonadism (FHH) (or functional hypothalamic amenorrhea) is part of the differential diagnosis for GnRH insensitivity. The presence of predisposing factors like excessive exercise, weight loss, or psychological stress points towards the diagnosis of FHH rather than GnRH insensitivity. Diagnosis: When GnRH deficiency presents after puberty, other causes of secondary hypogonadism (particularly tumors of the hypothalamic-pituitary axis) must be eliminated, as GnRH insensitivity is really a diagnosis of exclusion. These include tumors of the hypothalamic-pituitary region, which can occasionally be suspected from the presence of other neurologic symptoms (headaches, visual disturbances) or the demonstration of other defects or excesses in anterior pituitary hormone secretion on initial biochemical screening. However, enlarging mass lesions in either the pituitary or the central nervous system decrease the secretion of corticotropin (ACTH) or thyroid-stimulating hormone (TSH) less than that of gonadotropins or growth hormone.
Diagnosis: Similarly, hemochromatosis should be eliminated by appropriate testing of serum iron, total iron binding capacity, and ferritin levels. Approach to genetic testing — When the diagnosis of GnRH insensitivity is suspected, referral to a clinical geneticist for further evaluation and possible genetic testing is suggested. As many of the genes causing GnRH insensitivity have pleiotropic physiologic functions, genetic testing can aid assessment of both reproductive and nonreproductive clinical features. In addition, ascertaining the specific inheritance mode can aid genetic screening within the family to predict recurrence risk in siblings, family members, or offspring of GnRH insensitivity patients. Genetic testing in GnRH insensitivity is challenging, however, given the genetic and allelic heterogeneity as well as complex oligogenic inheritance patterns. In the presence of either clear Mendelian inheritance patterns or specific phenotypic cues, targeted genetic testing or multigene panel testing may be performed. If such testing is done, variant interpretation and genetic counseling should be performed in conjunction with a clinical genetics service. Alternatively, several research units have special interests in the genetics of GnRH insensitivity, and clinicians can consider referring these patients to such specialized centers. Genetic testing is now commercially available through several Clinical Laboratory Improvement Amendments (CLIA) laboratories in the United States (GeneDx, Athena Diagnostics, Fulgent Diagnostics). Treatment: The choice of therapy for GnRH insensitivity depends upon the patient's age and desire to achieve one or more of the following goals: induction of puberty and/or maintenance of sexual maturation, and induction or restoration of fertility. Puberty induction and sexual maturation: Girls and women — Exogenous estrogens are used to start secondary sexual development in prepubertal girls and to build and sustain normal bone and muscle mass. Initiation of treatment is based upon the patient's bone age, current height percentiles, psychosexual needs, and predicted adult height. The shorter the predicted adult height, the later puberty should be induced. Inappropriate use of estrogens may result in rapid osseous maturation, with resulting short stature, and in irregular menstrual bleeding. Initiation of puberty can begin with any type or route of exogenous estrogen, oral or transdermal. Initiation of puberty with transdermal 17-beta estradiol, starting with low doses of approximately 0.08 to 0.12 mcg estradiol per kg body weight per day, is successful and commonly prescribed by pediatricians. The dose is then gradually increased over several years. Initial therapy consists of unopposed estrogen alone to maximize breast growth, achieve appropriate skeletal maturation, and induce uterine and endometrial proliferation. A progestin eventually needs to be added to prevent endometrial hyperplasia, but adding it prematurely or administering combinations of estrogens and progestins (e.g., birth control pills) before completion of breast development should be avoided, because doing so is likely to reduce ultimate breast size. Once pubertal induction is completed, estrogen and progestin therapy are continued indefinitely. Doses and principles of therapy are similar to those for women with primary ovarian insufficiency.
Treatment: Boys and men — In boys, puberty can be induced with testosterone, exogenous gonadotropins, or pulsatile gonadotropin-releasing hormone (GnRH) therapy. The latter two options also induce spermatogenesis, which is not necessary for this age group. Testosterone therapy is suggested for pubertal induction in boys. The goals of therapy are to induce virilization, promote optimal skeletal maturation (with bone age monitoring), maximize adult height, promote psychosexual development, and build and sustain normal bone and muscle mass. Oral testosterone preparations should not be used, because of hepatic toxicity. The choices for testosterone replacement include intramuscular injections of long-acting testosterone preparations or topical gels/solutions/patches. Serum testosterone levels should be monitored and the dose adjusted. Treatment: Whichever form of testosterone replacement is chosen, providing psychological support is important, because the patient will have a variety of new and often confusing symptoms, much like an adolescent undergoing puberty but more difficult, since it will likely occur at a later age. Testosterone therapy should be initiated at a low dose and gradually increased to an adult dose over a few years. Once pubertal induction is completed, testosterone therapy is continued indefinitely. Prognosis: The prognosis is generally good, with the outcome for fertility depending on the severity of the sex hormone deficiency and the age at initiation of treatment. Rare cases of complete resolution have been described, but the pathophysiology of the disease in these patients is not understood. Epidemiology: Gonadotropin-releasing hormone (GnRH) insensitivity affects both sexes but has a significant male preponderance. A population-based epidemiological study from Finland estimated the minimal prevalence of the Kallmann syndrome (KS) form of gonadotropin-releasing hormone (GnRH) insensitivity to be 1:48,000, with a clear difference between males (1:30,000) and females (1:125,000). Research: GnRH deficiency has been studied for more than five decades. Classic studies from the 1970s identified that pulsatile release of GnRH from the hypothalamus is a prerequisite for physiologic gonadotrope function. These studies further demonstrated that the absence, decreased frequency, or decreased amplitude of pulsatile GnRH release results in the clinical syndrome of hypogonadotropic hypogonadism (HH). Current research primarily aims to define the physiology of GnRH, as it is critical to understanding the clinical heterogeneity of GnRH insufficiency and its comparison to other conditions resulting in hypogonadotropic hypogonadism (HH). Some overall goals of current research have focused on investigating: the neuroendocrine control of reproduction, and specifically the physiology and pathophysiology of GnRH secretion and action in humans; the efficacy of genetic counseling and patient management; and the psychopathology, sexuality, and personality characteristics of patients with GnRH deficiency under hormone replacement therapy.
**Carboxydothermus hydrogenoformans** Carboxydothermus hydrogenoformans: Carboxydothermus hydrogenoformans is an extremely thermophilic, anaerobic, Gram-positive bacterium that has the interesting property of producing hydrogen as a waste product while feeding on carbon monoxide and water. It also forms endospores. Carboxydothermus hydrogenoformans: It was isolated from a hot spring on the Russian volcanic island of Kunashir by Svetlichny et al. in 1991. Its complete genome was sequenced in 2005 by a team of scientists at the Institute for Genomic Research (TIGR). According to TIGR evolutionary biologist Jonathan Eisen, "C. hydrogenoformans is one of the fastest-growing microbes that can convert water and carbon monoxide to hydrogen." The microbe owes this to the fact that it has at least five different forms of carbon monoxide dehydrogenase.
**Internal validity** Internal validity: Internal validity is the extent to which a piece of evidence supports a claim about cause and effect, within the context of a particular study. It is one of the most important properties of scientific studies and is an important concept in reasoning about evidence more generally. Internal validity is determined by how well a study can rule out alternative explanations for its findings (usually, sources of systematic error or 'bias'). It contrasts with external validity, the extent to which results can justify conclusions about other contexts (that is, the extent to which results can be generalized). Both internal and external validity can be described using qualitative or quantitative forms of causal notation. Details: Inferences are said to possess internal validity if a causal relationship between two variables is properly demonstrated. A valid causal inference may be made when three criteria are satisfied: the "cause" precedes the "effect" in time (temporal precedence); the "cause" and the "effect" tend to occur together (covariation); and there are no plausible alternative explanations for the observed covariation (nonspuriousness). In scientific experimental settings, researchers often change the state of one variable (the independent variable) to see what effect it has on a second variable (the dependent variable). For example, a researcher might manipulate the dosage of a particular drug between different groups of people to see what effect it has on health. In this example, the researcher wants to make a causal inference, namely, that different doses of the drug may be held responsible for observed changes or differences. When the researcher may confidently attribute the observed changes or differences in the dependent variable to the independent variable (that is, when the researcher observes an association between these variables and can rule out other explanations or rival hypotheses), then the causal inference is said to be internally valid. In many cases, however, the size of effects found in the dependent variable may not depend solely on variations in the independent variable, the power of the instruments and statistical procedures used to measure and detect the effects, or the choice of statistical methods (see: statistical conclusion validity). Rather, a number of variables or circumstances uncontrolled for (or uncontrollable) may lead to additional or alternative explanations (a) for the effects found and/or (b) for the magnitude of the effects found. Internal validity, therefore, is more a matter of degree than of either-or, and that is exactly why research designs other than true experiments may also yield results with a high degree of internal validity. Details: In order to allow for inferences with a high degree of internal validity, precautions may be taken during the design of the study. As a rule of thumb, conclusions based on direct manipulation of the independent variable allow for greater internal validity than conclusions based on an association observed without manipulation. When considering only internal validity, highly controlled true experimental designs (i.e., with random selection, random assignment to either the control or experimental groups, reliable instruments, reliable manipulation processes, and safeguards against confounding factors) may be the "gold standard" of scientific research.
However, the very methods used to increase internal validity may also limit the generalizability, or external validity, of the findings. For example, studying the behavior of animals in a zoo may make it easier to draw valid causal inferences within that context, but these inferences may not generalize to the behavior of animals in the wild. In general, a typical experiment in a laboratory, studying a particular process, may leave out many variables that normally strongly affect that process in nature. Example threats: To recall eight of these threats to internal validity, use the mnemonic acronym THIS MESS, which stands for: Testing, History, Instrument change, Statistical regression toward the mean, Maturation, Experimental mortality, Selection, and Selection interaction. Ambiguous temporal precedence When it is not known which variable changed first, it can be difficult to determine which variable is the cause and which is the effect. Confounding A major threat to the validity of causal inferences is confounding: changes in the dependent variable may instead be attributed to variations in a third variable which is related to the manipulated variable. Where spurious relationships cannot be ruled out, rival hypotheses to the original causal inference may be developed. Example threats: Selection bias Selection bias refers to the problem that, at pre-test, differences between groups exist that may interact with the independent variable and thus be 'responsible' for the observed outcome. Researchers and participants bring to the experiment a myriad of characteristics, some learned and others inherent: for example, sex, weight, hair, eye, and skin color, personality, mental capabilities, and physical abilities, but also attitudes like motivation or willingness to participate. Example threats: During the selection step of the research study, if an unequal number of test subjects have similar subject-related variables, there is a threat to internal validity. For example, a researcher creates two test groups, the experimental and the control groups. The subjects in the two groups are not alike with regard to the independent variable but are similar in one or more of the subject-related variables. Example threats: Self-selection also has a negative effect on the interpretive power of the dependent variable. This often occurs in online surveys, where individuals of specific demographics opt into the test at higher rates than other demographics. Example threats: History Events outside of the study/experiment or between repeated measures of the dependent variable may affect participants' responses to experimental procedures. Often, these are large-scale events (natural disaster, political change, etc.) that affect participants' attitudes and behaviors such that it becomes impossible to determine whether any change on the dependent measures is due to the independent variable or the historical event. Example threats: Maturation Subjects change during the course of the experiment or even between measurements. For example, young children might mature, and their ability to concentrate may change as they grow up. Both permanent changes, such as physical growth, and temporary ones, like fatigue, provide "natural" alternative explanations; thus, they may change the way a subject would react to the independent variable. Upon completion of the study, the researcher may then not be able to determine whether the discrepancy is due to time or to the independent variable.
Example threats: Repeated testing (also referred to as testing effects) Repeatedly measuring the participants may lead to bias. Participants may remember the correct answers or may be conditioned to know that they are being tested. Repeatedly taking (the same or similar) intelligence tests usually leads to score gains, but instead of concluding that the underlying skills have changed for good, this threat to internal validity provides a good rival hypothesis. Example threats: Instrument change (instrumentality) The instrument used during the testing process can change the experiment. This also refers to observers becoming more concentrated or primed, or having unconsciously changed the criteria they use to make judgments. This can also be an issue with self-report measures given at different times. In this case, the impact may be mitigated through the use of retrospective pretesting. If any instrumentation changes occur, the internal validity of the main conclusion is affected, as alternative explanations are readily available. Example threats: Regression toward the mean This type of error occurs when subjects are selected on the basis of extreme scores (ones far away from the mean) during a test. For example, when children with the worst reading scores are selected to participate in a reading course, improvements at the end of the course might be due to regression toward the mean and not the course's effectiveness. If the children had been tested again before the course started, they would likely have obtained better scores anyway. Example threats: Likewise, extreme outliers on individual scores are more likely to be captured in one instance of testing but will likely evolve into a more normal distribution with repeated testing. Example threats: Mortality/differential attrition This error occurs if inferences are made on the basis of only those participants who have participated from the start to the end. However, participants may have dropped out of the study before completion, perhaps even because of the study, programme, or experiment itself. For example, the percentage of group members who had quit smoking at post-test was found to be much higher in a group that had received a quit-smoking training program than in the control group; however, in the experimental group only 60% had completed the program. If this attrition is systematically related to any feature of the study, the administration of the independent variable, or the instrumentation, or if dropping out leads to relevant bias between groups, a whole class of alternative explanations is possible that accounts for the observed differences. Example threats: Selection-maturation interaction This occurs when subject-related variables (color of hair, skin color, etc.) and time-related variables (age, physical size, etc.) interact. If a discrepancy between the two groups emerges between tests, the discrepancy may be due to differences in age categories rather than the treatment. Diffusion If treatment effects spread from treatment groups to control groups, a lack of differences between experimental and control groups may be observed. This does not mean, however, that the independent variable has no effect or that there is no relationship between dependent and independent variable. Example threats: Compensatory rivalry/resentful demoralization Behavior in the control groups may alter as a result of the study. For example, control group members may work extra hard to see that the expected superiority of the experimental group is not demonstrated.
Again, this does not mean that the independent variable produced no effect or that there is no relationship between dependent and independent variable. Conversely, changes in the dependent variable may be due only to a demoralized control group, working less hard or with less motivation, not to the independent variable. Example threats: Experimenter bias Experimenter bias occurs when the individuals who are conducting an experiment inadvertently affect the outcome by non-consciously behaving in different ways toward members of control and experimental groups. It is possible to eliminate the possibility of experimenter bias through the use of double-blind study designs, in which the experimenter is not aware of the condition to which a participant belongs. Example threats: Mutual-internal-validity problem Experiments that have high internal validity can produce phenomena and results that have no relevance in real life, resulting in the mutual-internal-validity problem. It arises when researchers use experimental results to develop theories and then use those theories to design theory-testing experiments. This mutual feedback between experiments and theories can lead to theories that explain only phenomena and results in artificial laboratory settings, but not in real life.
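As a concrete illustration of the regression-toward-the-mean threat described above, the following Python sketch simulates selecting the worst-scoring subjects on one test and re-testing them with no intervention; all numbers (population size, means, noise levels) are invented for illustration.

```python
# Simulation: children selected for the worst scores on test 1 "improve"
# on test 2 even though their true skill never changed.
import random

random.seed(42)
N = 10_000
true_skill = [random.gauss(100, 10) for _ in range(N)]
test1 = [s + random.gauss(0, 10) for s in true_skill]   # skill + noise
test2 = [s + random.gauss(0, 10) for s in true_skill]   # independent noise

# Select the bottom 10% on test 1 ("worst readers"); no intervention occurs.
cutoff = sorted(test1)[N // 10]
selected = [i for i in range(N) if test1[i] <= cutoff]

mean = lambda xs: sum(xs) / len(xs)
print(f"test 1 mean (selected): {mean([test1[i] for i in selected]):.1f}")
print(f"test 2 mean (selected): {mean([test2[i] for i in selected]):.1f}")
# Test 2 mean is noticeably higher: apparent 'improvement' from selection alone.
```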
**Norton AntiVirus** Norton AntiVirus: Norton AntiVirus is an anti-virus or anti-malware software product, named for Peter Norton, that has been developed and distributed by Symantec (now Gen Digital) since 1990 as part of its Norton family of computer security products. It uses signatures and heuristics to identify viruses. Other features include e-mail spam filtering and phishing protection. Norton AntiVirus: Symantec distributes the product as a download, a boxed copy, and as OEM software. Norton AntiVirus and Norton Internet Security, a related product, held a 61% US retail market share for security suites as of the first half of 2007. Competitors, in terms of market share in this study, include antivirus products from CA, Trend Micro, and Kaspersky Lab. Norton AntiVirus runs on Microsoft Windows, Linux, and macOS. Windows 7 support was in development for versions 2006 through 2008; version 2009 received a Windows 7-compatible update, and versions 2010, 2011, and 2012 all natively support Windows 7 without needing an update. Version 12 is the only version fully compatible with Mac OS X Lion. Norton AntiVirus: With the 2015 series of products, Symantec made changes in its portfolio and briefly discontinued Norton AntiVirus. This action was later reversed with the introduction of Norton AntiVirus Basic. Origins: In May 1989, Symantec launched Symantec Antivirus for the Macintosh (SAM). SAM 2.0, released March 1990, incorporated technology allowing users to easily update SAM to intercept and eliminate new viruses, including many that didn't exist at the time of the program's release. In August 1990 Symantec acquired Peter Norton Computing from Peter Norton. Norton and his company developed various DOS utilities, including the Norton Utilities, which did not include antivirus features. Symantec continued the development of the acquired technologies, which are marketed under the name "Norton", with the tagline "from Symantec". Norton's crossed-arm pose, a registered U.S. trademark, was traditionally featured on Norton product packaging. However, his pose was later moved to the spine of the packaging, and eventually dropped altogether. With the 1998 version 5.0 update, SAM was renamed Norton AntiVirus (NAV) for Macintosh. Windows/DOS editions: By early 1991, U.S. computers were invaded by hundreds of foreign virus strains, and corporate PC infection was becoming a serious problem. Symantec's Norton Group launched Norton AntiVirus 1.0 (NAV) for PCs and compatible computers. Ads for the product, with a suggested retail price of $129, featured Norton in his crossed-arm pose, wearing a pink shirt and a surgical mask covering his nose and mouth. Windows/DOS editions: Due to a bug in the software, the original Norton AntiVirus 1.0 did not repair infected files or boot sectors properly. This was fixed in version 1.5, released in June 1991, which also added the option of installing multiple scan levels of Norton AntiVirus Intercept (later renamed Norton AntiVirus Auto-Protect, starting with Norton AntiVirus 3.0, released in September 1993). Windows/DOS editions: Norton AntiVirus 2.0 was released in December 1991 and introduced the feature of creating a rescue disk, which includes the partition table, CMOS memory settings, and boot sector of an MS-DOS computer's hard disk. This is very handy in case a virus that its definitions do not detect overwrites this information or moves the boot sector to a different location on the hard disk.
Windows/DOS editions: Norton AntiVirus 3.0, released in September 1993, introduced a distinctive feature. Unlike other antivirus products for MS-DOS and early Windows, which would merely warn the user to turn off the computer but continue running anyway, Auto-Protect and the main program scan for viruses in memory before loading themselves. If they find a virus loaded into memory, they halt the entire computer, so that not even a warm boot (Ctrl+Alt+Delete) is possible, forcing the user to power the machine off and back on with a clean, uninfected system disk. Most often, this can be either the rescue disk created earlier or the original MS-DOS system installation disk, followed by the Norton AntiVirus program installation disks. This is the safest way to deal with any kind of virus in memory. Norton AntiVirus 3.0 is also the first version for Windows 3.1. Windows/DOS editions: Product activation was introduced in Norton AntiVirus 2004, addressing the estimated 3.6 million counterfeit Norton products sold. An alphanumeric code is generated to identify a computer's configuration, which ties in with the product key. Users are allowed to activate their product five times with the same product key. Spyware and adware detection and removal was introduced in the 2005 version, with the tagline "Antispyware Edition". The tagline was dropped in later releases. However, Norton AntiVirus 2009 Classic does not include spyware or adware detection. The Classic edition is marketed alongside Norton AntiVirus 2009, which does include spyware and adware detection. Windows/DOS editions: Existing users of the 2006, 2007, 2008, and 2009 versions can upgrade to the latest 2010 version without buying a new subscription. Upgrading preserves the number of days left on a user's subscription. Windows/DOS editions: Version 2006 (13.0) The redesigned main graphical user interface aggregates information in a central user interface. CNET reported that the Norton Protection Center, while useful, attempts to advertise additional products. To further facilitate detection of zero-day malware, Bloodhound disassembles a variety of programming languages and scans code for malicious instructions using predefined algorithms. Internet Explorer homepage hijacking protection was introduced in this release as well; notably missing, however, is search engine hijacking protection. CNET highlighted Norton AntiVirus 2006's noticeable impact on system performance. Operating system requirements call for Windows 2000 Service Pack 3 or Windows XP. 150 MB of free space and a 300 MHz processor are required under either operating system. 128 MB of RAM is required under Windows 2000, and 256 MB under Windows XP. Windows/DOS editions: Version 2007 (14.0) Norton AntiVirus 2007 was released on September 12, 2006. Symantec revised Norton AntiVirus with the goal of reducing high system resource utilization. Windows Vista compatibility was introduced in this release as well. Despite about 80% of the code being rewritten, CNET reported mixed results in performance testing. Windows 2000 compatibility was dropped from this release. Compatibility with 32-bit versions of Windows Vista was added with a patch from Symantec. Hardware requirements under Vista call for 150 MB free space, an 800 MHz processor, and 512 MB RAM. Requirements under Windows XP similarly call for 150 MB free space, a 300 MHz processor, and 256 MB of RAM.
Windows/DOS editions: Version 2008 (15.0) Norton AntiVirus 2008 was released on August 28, 2007. Emphasizing malware prevention, new features include SONAR, which looks for suspicious application behavior. This release adds real-time exploit protection, preventing attackers from leveraging common browser and application vulnerabilities. When installed on 32-bit versions of Windows XP Service Pack 2, 300 MB of free space, a 300 MHz processor, and 256 MB of RAM are required. When installed on 32-bit and 64-bit versions of Windows Vista, 300 MB of free space, an 800 MHz processor, and 256 MB of RAM are needed. Windows/DOS editions: Version 2009 (16.0) Norton AntiVirus 2009 was released on September 8, 2008. Addressing performance issues, over 300 changes were made, with a "zero-impact" goal. Benchmarking conducted by PassMark Software Pty Ltd highlights its 47-second install time, 32-second scan time, and 5 MB memory utilization. Symantec funded the benchmark test and provided some of the scripts used to benchmark each participating antivirus product. The security status and settings are now displayed in a single main interface. A CPU usage monitor displays the total CPU utilization and Norton's CPU usage in the main interface. Other features include Norton Insight, a whitelisting technology which cuts scanning times by mapping known safe files using information from an online database. To address malware response times, updates are delivered every 5 to 15 minutes. However, such updates are not tested by Symantec and may cause false positives, incorrectly identifying files as malicious. The exploit scanner found in the 2007 and 2008 versions was dropped from this release. Windows/DOS editions: When installed on 32-bit versions of Windows XP Service Pack 2, 150 MB of free space, a 300 MHz processor, and 256 MB of RAM are required. When installed on 32-bit or 64-bit versions of Windows Vista, 150 MB of free space, an 800 MHz processor, and 512 MB of RAM are required. Two variations on Norton AntiVirus 2009 are also marketed by Symantec. The Gaming edition provides finer control over when Norton downloads updates and allows components of the suite to be disabled either manually or automatically when the computer enters full-screen mode. The Classic edition cannot find or remove adware and spyware. Windows/DOS editions: Version 2010 (17.0) Version 17.0 was released on September 9, 2009. Several features have been updated in this release, including SONAR, now dubbed SONAR 2, which uses more information to determine whether an application is truly malicious. Norton Insight can present users with information about the origins, activities, and performance of applications, along with reputation data. A new feature codenamed Autospy helps users understand what Norton did when malware was found. Previous releases removed threats on sight and quietly warned users, which could be confusing when users had been deceived into downloading rogue security software. Much of this information is placed on the back of the main window; a toggle button switches between the sides. Windows/DOS editions: Symantec also added Windows 7 support, as well as Norton Download Insight to prevent drive-by downloads. Version 2011 (18.0) Version 2012 (19.0) Version 2013 (20.0) Version 2014 (21.0) Lack of 2015 version Symantec briefly discontinued the standalone Norton AntiVirus product in 2015, replacing it with Norton Security.
Version 2016 (22.0) Criticism: FBI cooperation The FBI confirmed the active development of Magic Lantern, a keylogger intended to obtain passwords to encrypted e-mail and other documents during criminal investigations. Magic Lantern was first reported in the media by Bob Sullivan of MSNBC on 20 November 2001 and by Ted Bridis of the Associated Press. The FBI intends to deploy Magic Lantern in the form of an e-mail attachment. When the attachment is opened, it installs a trojan horse on the suspect's computer, which is activated when the suspect uses PGP encryption, often used to increase the security of sent email messages. When activated, the trojan will log the PGP password, which allows the FBI to decrypt user communications. Symantec and other major antivirus vendors have whitelisted the Magic Lantern trojan, rendering their antivirus products, including Norton AntiVirus, incapable of detecting it. Concerns around this whitelisting include uncertainties about Magic Lantern's full surveillance potential and whether hackers could subvert it and redeploy it for purposes outside of law enforcement. Graham Cluley, a technology consultant from Sophos, said, "We have no way of knowing if it was written by the FBI, and even if we did, we wouldn't know whether it was being used by the FBI or if it had been commandeered by a third party". Another reaction came from Marc Maiffret, chief technology officer and co-founder of eEye Digital Security, who stated: "Our customers are paying us for a service, to protect them from all forms of malicious code. It is not up to us to do law enforcement's job for them so we do not, and will not, make any exceptions for law enforcement malware or other tools." Proponents of Magic Lantern argue the technology would allow law enforcement to efficiently and quickly decrypt time-sensitive messages protected by encryption schemes. Implementing Magic Lantern does not require physical access to a suspect's computer, unlike Carnivore, a predecessor to Magic Lantern, since physical access to a computer would require a court order. FBI spokesman Paul Bresson, in response to a question about whether Magic Lantern needed a court order to deploy, would only say: "Like all technology projects or tools deployed by the FBI it would be used pursuant to the appropriate legal process." Update disables legitimate software On January 28, 2010, a Symantec antivirus update marked Spotify as a trojan horse, disabling the software across millions of PCs. Criticism: Product support Retail customers have reported slow and indifferent service on bugs. Examples include a faulty error message stating that current subscriptions had expired: users received an error stating "Your virus protection cannot be updated." This error occurred after an update to the software and prevented daily updates. Though the bug was reported in 2004, it was not corrected for the 2005 or 2006 versions. Criticism: Another incident occurred in May 2007, when Norton AntiVirus flagged components of the Pegasus email client as malicious, rendering the program corrupted. Symantec customer service addressed the problem by running through a checklist of troubleshooting steps, which were not always successful. Criticism: Faulty update On July 25, 2006, Symantec released a faulty update for Norton AntiVirus 2006 users. Users reported an onscreen message stating "Norton AntiVirus 2006 does not support the repair feature. Please uninstall and reinstall."
Symantec claimed the faulty update was downloaded to customers between 1:00 PM and 7:00 PM on July 25, 2006. Symantec developed a workaround tool and listed troubleshooting steps. The company released a statement, stating it expected to deliver a repair patch to affected users by Monday, July 31, 2006. Uninstallation Norton AntiVirus has been criticized for refusing to uninstall completely, leaving unnecessary files behind. Another issue is that versions prior to 2009 installed LiveUpdate, which updates Norton-branded software, as a separate component. The user must uninstall both Norton AntiVirus and the LiveUpdate component manually. The LiveUpdate component is purposely left behind to update other Norton-branded products, if present. In response, Symantec developed the Norton Removal Tool (SymNRT) to remove leftover registry keys and values along with files and folders. However, neither route of uninstallation will remove subscription data, preserved to prevent users from installing multiple trial copies. Criticism: SymNRT can only remove these Norton programs: Norton AntiSpam 2004 and 2005 Norton AntiVirus 2003 through 2012 Norton Ghost 2003, 9.0, 10.0, 12.0, 14.0 and 15.0 Norton GoBack 3.1 through 4.2 Norton Internet Security 2003 through 2012 Norton Password Manager Norton Personal Firewall 2003 through 2006 Norton SystemWorks 2003 through 2009 Norton Confidential Online 2007 Norton Add-on Pack 1.0 – 4.0 Norton Save and Restore 1.0 through 2.0 Norton 360/Security Suite/Business Suite 1.0 – 5.0 Norton Safety Minder 1.0 Norton Safe Web 3.2 Once SymNRT has started the removal process, it cannot be stopped. It is recommended to close all running programs prior to running SymNRT. ACT! and WinFax users are advised to back up their databases before running SymNRT. Criticism: Incompatibilities with ZoneAlarm Norton AntiVirus 2007 will not install alongside ZoneAlarm. This incompatibility has caused annoyance for Norton customers who purchased Norton AntiVirus 2007 without prior warning of the incompatibility. Symantec recommends removing ZoneAlarm, then reinstalling it with its Internet Worm Protection feature disabled, which controls what applications can access the Internet and which protocols they can use to do so. Criticism: PIFTS.exe On March 9, 2009, some users of Norton AntiVirus 2006 and 2007 experienced a firewall warning stating a Norton-associated file, "PIFTS.exe", was trying to connect to the Internet. Although this file was revealed to be a harmless diagnostic patch, the program gained attention in the media when Symantec removed posts from their forum concerning PIFTS. With no information available about the purpose of the program, there was speculation that the program was malware or a backdoor. The SANS Internet Storm Center claimed to have spoken to a Symantec employee who confirmed that "the program is theirs, part of the update process and not intended to do harm." Graham Cluley, a consultant from antivirus vendor Sophos, found that PIFTS connected to a Symantec server, forwarding product and computer information. On March 10, Symantec made an official response to the PIFTS program, claiming posts in the support forum were deleted due to forum spam rules; however, the deletion of PIFTS-related posts began before the spam attacks. Symantec stated PIFTS itself was a diagnostic patch. Symantec's Dave Cole stated the purpose of the update was to help determine how many customers would need to be migrated to Windows 7-compatible versions of Norton AntiVirus. 
PIFTS apparently was released without a digital signature to verify its identity, causing firewalls to prompt for permission when it attempted to connect to the Internet. Criticism: Consumer complaints Symantec has been criticized by some consumers for perceived ethical violations, including allegations that support technicians would tell customers that their systems were infected and needed a technician to resolve the problem remotely for an extra fee, then refuse to issue refunds when the customers alleged their systems had not actually been infected. Macintosh edition: Norton AntiVirus 11 for Mac introduced support for the Mac OS X v10.5 Leopard platform, with the capability to detect both Macintosh and Windows malware. Other features include a vulnerability scanner, which blocks attackers from leveraging software exploits. Norton AntiVirus 11 also includes the ability to scan within compressed or archived files, such as Time Capsule volumes. Operating requirements call for Mac OS X Tiger. A PowerPC or an Intel Core processor, 128 MB of RAM, and 100 MB of free hard disk space are also required. Norton AntiVirus Dual Protection for Mac is intended for Macintosh users with Windows running on their systems, using Boot Camp or virtualization software such as VMware Fusion. It provides a license for both Norton AntiVirus 11 and Norton AntiVirus 2009. Comparison with other software: From the 2009 to 2012 editions, Symantec made major changes to their products' speed and performance. Norton products now have only two running processes, using about 24 MB of RAM. As soon as a virus is recognized, information about the virus (a virus signature) is stored in a virus definitions file, which contains the information needed to detect and remove the virus. According to the Symantec-sponsored PassMark Security Benchmark 2012, Norton AntiVirus and Norton Internet Security are the lightest suites available. AV-Comparatives also tested these products, with similar results. Comparison with other software: PCMag recognized the 2011 and 2012 lines as the fastest and strongest in protection. PCWorld's tests of security software put Norton Internet Security 2009 in first place, and Norton Internet Security won PCWorld's test again in 2011. Dennis Technology Labs (in tests sponsored by Symantec) confirmed the performance and effectiveness of the Norton 2011 and 2012 lines. Norton AntiVirus vs. GCSB Amendment Bill: On 14 August 2013, the Prime Minister of New Zealand, John Key, addressed what he identified as "misinformation" surrounding the GCSB Amendment Bill, claiming that the actions of the Government Communications Security Bureau were analogous to Norton AntiVirus.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Handbag** Handbag: A handbag, commonly known as a purse in North American English, is a handled medium-to-large bag used to carry personal items. It has also been called a pocketbook in parts of the U.S. Terminology: The term "purse" originally referred to a small bag for holding coins. In many English-speaking countries, it is still used to refer to a small money bag. Terminology: A "handbag" is a larger accessory that holds objects beyond currency, such as personal items. American English typically uses the terms purse and handbag interchangeably. The term handbag began appearing in the early 1900s. Initially, it was most often used to refer to men's hand-luggage. Women's bags grew larger and more complex during this period, and the term was attached to the accessory. "Pocketbook" is another term for a woman's handbag that was most commonly used on the East Coast of the United States in the mid-twentieth century. Origin: Antiquity During the ancient period bags were utilised to carry various items including flint, tools, supplies, weapons and currency. Early examples of these bags have been uncovered in Egyptian burial sites (c. 2686–2160 BCE) and were made of leather with two straps or handles for carrying or suspending from a stick. The ancient Greeks made use of leather, papyrus and linen purses known as byrsa to store coins, which is the etymological origin of the English word 'purse'. The emergence of money further inspired the creation of drawstring purses, most commonly hung from a belt or kept in clothing folds. A handbag was discovered with the remains of Ötzi, who lived between 3350 and 3105 BC. One of the earliest discoveries of an ornate leather purse comes from Anglo-Saxon Britain: dated to circa 625 CE, it was recovered from the burial mounds of Sutton Hoo in Suffolk, the likely burial site of King Rædwald. Although the leather had deteriorated, its gold ornaments were still intact. Inside the purse were forty gold coins, and it was held in place by a gold belt buckle and golden hinged straps. These features symbolised a display of opulence, making the purse part of a lavish suite of possessions. Origin: Modern Origin Until the late 1700s, both men and women carried bags. Early modern Europeans wore purses for one sole purpose: to carry coins. Purses were made of soft fabric or leather and were worn by men as often as ladies; the Scottish sporran is a survival of this custom. In the 17th century, young girls were taught embroidery as a necessary skill for marriage; this also helped them make very beautiful handbags. By the late 18th century, fashions in Europe were moving towards a slender shape for these accessories, inspired by the silhouettes of Ancient Greece and Rome. Women wanted purses that would not be bulky or untidy in appearance, so reticules were designed. Reticules were made of fine fabrics like silk and velvet, carried with wrist straps. First becoming popular in France, they crossed over into Britain, where they became known as "indispensables." Men, however, did not adopt the trend. They used purses and pockets, which became popular in men's trousers. The modern purse, clutch, pouch, or handbag came about in England during the Industrial Revolution, in part due to the increase in travel by railway. 
In 1841 the Doncaster industrialist and confectionery entrepreneur Samuel Parkinson (of butterscotch fame) ordered a set of traveling cases and trunks and insisted on a traveling case or bag for his wife's particulars after noticing that her purse was too small and made from a material that would not withstand the journey. Origin: He stipulated that he wanted various handbags for his wife, varying in size for different occasions, and asked that they be made from the same leather that was being used for his cases and trunks to distinguish them from the then-familiar carpetbag and other travelers' cloth bags used by members of the popular classes. H. J. Cave (London) obliged and produced the first modern set of luxury handbags, as we would recognize them today, including a clutch and a tote (named as 'ladies traveling case'). These are now on display in the Museum of Bags and Purses in Amsterdam. H. J. Cave did continue to sell and advertise the handbags, but many critics said that women did not need them and that bags of such size and heavy material would 'break the backs of ladies.' H. J. Cave ceased to promote the bags after 1865, concentrating on trunks instead, although they continued to make the odd handbag for royalty, celebrities or to celebrate special occasions, the Queen's 2012 Diamond Jubilee being the most recent. However, H.J. Cave resumed handbag production in 2010. 20th century: When handbags first became popular, they were heavily criticized as unfeminine. In the early 20th century, Sigmund Freud argued that purses were sexually suggestive, as the structure of the purse symbolized female genitalia and sexuality. Before handbags, pockets holding personal items were secured inside a woman's dress, and items were retrieved discreetly and modestly. Because handbags were carried in the open, the accessory exposed a woman's personal items. Freud likened a woman retrieving items from her purse to masturbation. According to Freud's argument, women who carried purses openly displayed their sexuality due to the sexual symbolism of the purse. As handbags grew into the mainstream in the 20th century, they began to transform from purely practical items to symbols of the wearer's wealth and worth in society. The styles, materials, prices, and, most importantly, the brand names of purses and handbags became just as (if not more) valuable than the functionality of the bags themselves. Handbags transitioned from being seen as unfeminine to being seen as specifically feminine and unmasculine. While women's bags served as fashion accessories not meant to hold more than a few personal and beauty items (feminine things), men's bags stayed more in the realm of briefcases: square, hard-edged, plain; containing items pertaining to the "man's world": business-related items, documents, files, stationery and pens. The gendered division between the personal bag and the business bag meets in the middle with the unisex alms purse, originating in the Middle Ages and meant to carry coins to donate to the church or the poor. The charitable symbolism of the alms purse later carried over to women's handbags in general; a woman carrying a bag was seen as upper class and therefore potentially using the bag to hold her donations. During the 1940s, the rationing of textiles for World War II led to the manufacturing of handbags made in materials like raffia or crocheted from yarn. 
Some women crocheted their own small handbags from commercial patterns during this period. Men's bags: The oldest known purse dates back more than 5000 years, and was a pouch worn by a man, Ötzi the Iceman. Men once carried coin purses. In early modern Europe, when women's fashions moved in the direction of using small ornamental purses, which evolved into handbags, men's fashions were moving in another direction. Men's trousers replaced men's breeches during the course of the 18th and 19th centuries, and pockets were incorporated in the loose, heavy material. This enabled men to continue carrying coins, and then paper currency, in small leather wallets. Men's pockets were plentiful in 19th- and 20th-century trousers and coats, to carry possessions such as pipes, matches, and knives, and they were an item frequently mended by their wives. Men's purses were revived by designers in the 1970s in Europe. Since the 1990s, designers have marketed a more diverse range of accessory bags for men. The names man bag, man-purse, murse, and mini bag have been used. The designs common in the U.S. are typically variations on backpacks or messenger bags, and have either a masculine or a more unisex appearance, although they are often more streamlined than a backpack and less bulky than a briefcase. These bags are often called messenger bags or organizer bags. In many other countries, it is common for men to carry small rectangular shoulder bags, often made of leather. The leather satchel is also common. Men's designer bags are produced by well-known companies such as Prada, Louis Vuitton, Coach, and Bottega Veneta in a variety of shapes and sizes. The global men's bag and small leather goods trade is a $4-billion-a-year industry. Sales of men's accessories, including "holdall" bags, are increasing in North America. 
Types: Baguette: a small, narrow, rectangular purse, resembling a French loaf of bread (baguette) Bowling bag: a popular 1990s "retro" style for younger women, modeled after American bags used to carry bowling balls; sturdy design with arched top and sides and a zipper closure with two carrying handles, may or may not have feet, usually no strap, no drawstring, no top flap Bucket bag: a cylindrical bag, shaped like a bucket, medium-size or large, with one or two large handles, often shoulder strap(s), and a drawstring closure Clutch: a small firm handbag with a top flap and without handles, often rectangular in shape (soft versions sometimes are shaped like sections of an orange), often an evening bag but used during the day as well; some will feature a strap that can be worn over the shoulder but many will not Crossbody bag: a bag worn across the body from shoulder to hip; this is as opposed to a smaller hand carried bag such as a clutch as well as opposed to a larger bag such as a tote or bowling bag; a baguette, for example, may be worn crossbody, as can a half-moon or a messenger bag, but a tote cannot be worn this way nor can a hobo (some bucket bags are worn crossbody) Doctor's bag: also known as a Gladstone bag, modeled after a Victorian-era doctor's bag for making house calls, medium to large, has two sturdy handles but no straps and no top flap; resembles a bowling bag but may have a different closure, traditionally always in black leather Half-moon bag: shaped like a half-moon, usually smaller and feminine, worn hanging from the shoulder, may or may not have a handle Hobo bag: a soft-sided medium-sized crescent-shaped bag with a shoulder- or crossbody-length strap with no handle, no feet, and a top zipper closure with no top flap; a modern, casual silhouette Messenger bag: technically a variety of satchel (see below), square or rectangular (wider than tall) with one long strap worn across the body and large flap covering the top opening with no feet; inspired by bags worn by urban messengers to deliver business mail; meant to be carried against the lower back and usually made out of waterproof canvas rather than leather, with a secure front closure Minaudière: a variety of clutch, usually rigid-bodied with a hinge at the bottom, sometimes with a soft fabric lining, with no handles, straps, or feet, often encrusted with jewels and worn as evening wear Reticule: also known as a ridicule or indispensable, is an obscure type of small drawstring handbag or purse, similar to a modern evening bag, used mainly from 1795 to 1820 Saddlebag: a small to medium size bag shaped like an equestrian saddle bag, always with a top flap and curved sides and bottom along with a shoulder strap but no top handle(s), no drawstring, and no feet Satchel: a larger soft-sided case usually of leather, often with a pair of top handles and a shoulder strap, usually has a front flap, similar to a doctor's bag or tote in shape but smaller, worn across the body and resting on the opposite hip; a satchel made of canvas is usually considered a messenger bag Shoulder bag: a bag worn hanging off the shoulder, as opposed to a crossbody bag or a handheld bag; has a shorter strap than a crossbody, but otherwise is not usually distinguished; both shoulder bags and crossbody bags are larger than most clutches or wristlets, but smaller than totes or bucket bags; they may have a top flap, a handle, and feet, or none of these; a hobo bag is a variety of shoulder bag, but because of its distinct shape, it is usually 
referred to as a hobo specifically Top handle bag: a medium-sized bag with one or two top handles, may or may not have a flap, often rectangular with four feet, may also have a strap; many satchels are also top-handle bags, and some of these may be worn as crossbody bags or as shoulder bags if they also have a strap Tote: a medium to large bag with two longer straps and an open top (no flap, no zipper closure), similar to a bucket bag but usually less cylindrical and more square, with no feet; the Hermès Birkin bag is a tote Wristlet: a small rectangular handbag with a short carrying strap resembling a bracelet that can be worn around the wrist. Similar to a clutch in design, but with the added wrist strap Hardware: A distinction can also be made between soft-body handbags or frame handbags, where a metal frame supports the textile or leather of the bag. Frame bags often use a kissing lock closure, with two interlocking metal beads set on the top of the frame. Kissing locks were popular on handbags during the early- to mid-20th century, and remain popular with vintage collectors and in "retro" designs. These locks are still seen on smaller coin purses. Coinage as a verb: The verb "to handbag" and its humorous usage was inspired in the 1980s by UK prime minister Margaret Thatcher having "weaponized" the handbag in the opinion of British biographer and historian David Cannadine. As "her most visible symbol of her power to command" the bag became an emphatic prop that she produced at meetings to show she meant business. She would invariably bring out of the bag a crucial document from which she would quote, her speech notes often being cut to size to fit inside. Because Thatcher was Britain's first female prime minister, former Daily Telegraph editor Charles Moore wrote in his authorised biography of 2013, "her handbag became the sceptre of her rule". The verb's more general meaning of "treating ruthlessly" came to symbolize Thatcher's whole style of government. Victims of her handbaggings, from political leaders to journalists, have testified to what the German chancellor Helmut Kohl perceived as her "ice-cold pursuit of her interests". US secretary of state James Baker recalled her standby ploy: "When negotiations stall, get out the handbag! The solution is always there." Julian Critchley, one of her biggest Tory backbench critics, once said, "Margaret Thatcher and her handbag is the same as Winston Churchill and his cigar." Thatcher's bag was almost as newsworthy an item as she was herself, and on the day she died, one of her handbag-makers saw a sharp rise in sales of her favorite structured design. The original bag, which Thatcher asserted on a signed card was the one "used every day in my time at Downing Street", is archived at Churchill College, Cambridge. Made of dark blue leather "in mock-croc style", it was a gift from friends on her birthday in 1984. Handbag collecting: Handbag collecting has become increasingly popular in the 2000s. In 2014, the auction house Christie's started a handbag department, which now has several staff, headed by an "international head of handbags". In June 2017, Christie's had its first sale devoted exclusively to handbags. According to The Daily Telegraph, the most sought-after and valuable brand is Hermès, followed by others including Céline, Chanel and Louis Vuitton. 
World records In June 2015, a Christie's handbag sale in Hong Kong saw a pink crocodile-skin Hermès Birkin bag, made only in 2014, sell for a then world-record £146,000. In May 2017, Christie's Hong Kong sold a white crocodile-skin Hermès Birkin bag with 10.23 carats of diamonds for a world record HK$2.9 million (£293,000). Museums The Museum of Bags and Purses is in Amsterdam, the Netherlands; the Simone Handbag Museum is in Seoul, South Korea; and the ESSE Purse Museum is in Little Rock, Arkansas. Handbag collecting: Notable collectors Queen Elizabeth II owned over 200 Launer London bags, and kept all of her mother's Launer bags. Other notable collectors include Victoria Beckham, who has over 100 Birkin bags, Katie Holmes, Rita Ora and Kelly Brook. Cara Delevingne, Miranda Kerr, Lauren Conrad, Rosie Huntington-Whiteley, Beyoncé, Mary-Kate Olsen, Ashley Olsen, Lady Gaga, Olivia Palermo, and Rihanna are also collectors. Others include Kim Chiu, KC Concepcion, Kris Aquino, Heart Evangelista, Marian Rivera, Bea Alonzo, Kathryn Bernardo, Lovi Poe, Megan Young, Gretchen Barretto, Camille Prats, Sarah Lahbati, and Jeffree Star.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Facial lymph nodes** Facial lymph nodes: The facial lymph nodes comprise three groups: (a) infraorbital or maxillary, scattered over the infraorbital region from the groove between the nose and cheek to the zygomatic arch; (b) buccinator, one or more placed on the buccinator muscle opposite the angle of the mouth; (c) supramandibular, on the outer surface of the mandible, in front of the masseter and in contact with the external maxillary artery and anterior facial vein. Their afferent vessels drain the eyelids, the conjunctiva, and the skin and mucous membrane of the nose and cheek; their efferents pass to the submandibular lymph nodes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cathedral arch** Cathedral arch: A cathedral arch is an arch used in bridge architecture. It consists of an arched structural system, wherein vertical load bearing occurs only at the crown, or peak of the arch. As applied to bridge design, cathedral arch bridges feature no intermediary spandrel column elements between the foundation abutments and the crown of the arch system, where the roadway superstructure is constrained to the substructure. The largest cathedral arch bridge in the world is the Galena Creek Bridge near Reno, Nevada.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gleaner** Gleaner: A gleaner (noun) is a resourceful individual who makes use of crops and resources left behind by others. Newspapers: Gleaner Company, a newspaper publishing enterprise in Jamaica The Daily Gleaner, a daily newspaper serving Fredericton, New Brunswick and the upper Saint John River Valley Henderson Gleaner, a daily newspaper in Henderson, Kentucky Alamance Gleaner, a newspaper which was based in Alamance County, North Carolina Northeast News Gleaner, a weekly newspaper that served Northeast Philadelphia Other uses: Gleaners, a non-profit that helps feed the homeless in Jackson, Mississippi The Gleaners, a painting by Jean-François Millet Gleaner Manufacturing Company, a manufacturer of combine harvesters Gleaner A85, a combine harvester Gleaner E, a combine harvester HMS Gleaner (1809), a mercantile ketch HMS Gleaner (J83), a survey vessel launched in 1937 and converted into a minesweeper in 1939 HMSML Gleaner (H86), a survey motor launch in commission since 1983 The Gleaners (album) by Larry Grenadier
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MAA FOCUS** MAA FOCUS: MAA FOCUS is the newsmagazine of the Mathematical Association of America. It carries news items and short articles of interest to the organization's members. History and profile: The magazine was first published in March 1981; the first editor was Marcia P. Sward, who held that position until September 1985. Since 2009 the magazine has been published six times a year; previously it was published nine times a year. The magazine is printed on glossy paper with a final trim size of 8-1/4 inches wide by 10-5/8 inches high. Circulation in 2008 was 22,400 copies.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GrabCut** GrabCut: GrabCut is an image segmentation method based on graph cuts. GrabCut: Starting with a user-specified bounding box around the object to be segmented, the algorithm estimates the color distribution of the target object and that of the background using a Gaussian mixture model. This is used to construct a Markov random field over the pixel labels, with an energy function that prefers connected regions having the same label; a graph cut-based optimization is then run to infer their values. As this estimate is likely to be more accurate than the original one taken from the bounding box, this two-step procedure is repeated until convergence. Estimates can be further corrected by the user by pointing out misclassified regions and rerunning the optimization. The method also refines the results to preserve edges. There are several open source implementations available, including OpenCV (as of version 2.1).
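As a concrete illustration, here is a minimal Python sketch driving the OpenCV implementation mentioned above; the input file name, bounding box, and iteration count are placeholder assumptions rather than values from the text:

```python
# Minimal GrabCut sketch using OpenCV's implementation (cv2.grabCut).
# "input.jpg" and the rectangle below are illustrative assumptions.
import numpy as np
import cv2

img = cv2.imread("input.jpg")              # BGR image to segment
mask = np.zeros(img.shape[:2], np.uint8)   # per-pixel labels, refined in place
bgd_model = np.zeros((1, 65), np.float64)  # internal GMM state (background)
fgd_model = np.zeros((1, 65), np.float64)  # internal GMM state (foreground)

rect = (50, 50, 300, 400)  # user-specified bounding box (x, y, w, h)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep pixels labelled definite (GC_FGD) or probable (GC_PR_FGD) foreground.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
cv2.imwrite("segmented.png", img * fg[:, :, np.newaxis])
```

The user-correction loop described above could then be approximated by marking misclassified mask pixels as cv2.GC_FGD or cv2.GC_BGD and rerunning with cv2.GC_INIT_WITH_MASK.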
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Edge cycle cover** Edge cycle cover: In mathematics, an edge cycle cover (sometimes called simply a cycle cover) of a graph G is a family of cycles which are subgraphs of G and together contain all edges of G. If the cycles of the cover have no vertices in common, the cover is called vertex-disjoint, or sometimes simply a disjoint cycle cover. In this case the set of the cycles constitutes a spanning subgraph of G. Edge cycle cover: If the cycles of the cover have no edges in common, the cover is called edge-disjoint or simply a disjoint cycle cover. Properties and applications: Minimum-Weight Cycle Cover For a weighted graph, the Minimum-Weight Cycle Cover Problem (MWCCP) is the problem of finding a cycle cover with minimal sum of the weights of the edges in all cycles of the cover. For bridgeless planar graphs the MWCCP can be solved in polynomial time. Cycle k-cover: A cycle k-cover of a graph G is a family of cycles which cover every edge of G exactly k times. It has been proven that every bridgeless graph has a cycle k-cover for any even integer k ≥ 4. For k = 2, the existence of such a cover is the well-known cycle double cover conjecture, an open problem in graph theory. The cycle double cover conjecture states that in every bridgeless graph there exists a set of cycles that together cover every edge of the graph twice.
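To make the definitions concrete, here is a small Python sketch (the encoding of graphs and cycles is an assumption chosen for illustration) that checks whether a family of cycles, each given as a closed walk of vertices, covers every edge of a graph exactly k times:

```python
# Check whether a family of cycles forms a cycle k-cover of a graph.
# Assumed encoding: an undirected edge is a frozenset {u, v}; a cycle is
# a list of vertices read cyclically.
from collections import Counter

def cycle_edges(cycle):
    # Edges traversed by the closed walk v0 v1 ... v(n-1) v0.
    n = len(cycle)
    return [frozenset((cycle[i], cycle[(i + 1) % n])) for i in range(n)]

def is_cycle_k_cover(edges, cycles, k):
    # Every edge of the graph must be used exactly k times in total,
    # and the cycles must not use edges outside the graph.
    count = Counter(e for c in cycles for e in cycle_edges(c))
    return set(count) <= set(edges) and all(count[e] == k for e in edges)

triangle = {frozenset((0, 1)), frozenset((1, 2)), frozenset((0, 2))}
print(is_cycle_k_cover(triangle, [[0, 1, 2]], 1))             # True: cycle cover
print(is_cycle_k_cover(triangle, [[0, 1, 2], [0, 1, 2]], 2))  # True: cycle 2-cover
```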
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Social television** Social television: Social television is the union of television and social media. Millions of people now share their TV experience with other viewers on social media such as Twitter and Facebook using smartphones and tablets. TV networks and rights holders are increasingly sharing video clips on social platforms to monetise engagement and drive tune-in. Social television: The social TV market covers the technologies that support communication and social interaction around TV, as well as companies that study television-related social behavior and measure social media activities tied to specific TV broadcasts – many of which have attracted significant investment from established media and technology companies. The market is also seeing numerous tie-ups between broadcasters and social networking players such as Twitter and Facebook. The market is expected to be worth $256bn by 2017. Social TV was named one of the 10 most important emerging technologies by the MIT Technology Review in 2010. And in 2011, David Rowan, the editor of Wired magazine, ranked Social TV third of six in his preview of the tech trends expected to gain traction that year. Ynon Kreiz, CEO of the Endemol Group, told the audience at the Digital Life Design (DLD) conference in January 2011: "Everyone says that social television will be big. I think it's not going to be big—it's going to be huge". Much of the investment in the earlier years of social TV went into standalone social TV apps. The industry believed these apps would provide an appealing and complementary consumer experience which could then be monetized with ads. These apps featured TV listings, check-ins, stickers and synchronised second-screen content but struggled to attract users away from Twitter and Facebook. Most of these companies have since gone out of business or been acquired amid a wave of consolidation, and the market has instead focused on the activities of the social media channels themselves – such as Twitter Amplify, Facebook Suggested Videos and Snapchat Discover – and the technologies that support them. Twitter: Twitter and Facebook are both helping users connect around media, which can provoke strong debate and engagement. Both social platforms want to be the 'digital watercooler' and host conversation around TV, because the engagement and data about what media people consume can then be used to generate advertising revenue. As an open platform, conversation on Twitter is closely aligned with real-time events. In May 2013, it launched Twitter Amplify – an advertising product for media and consumer brands. With Amplify, Twitter runs video highlights from major live broadcasts, with advertisers' names and messages playing before the clip. By February 2014, all four major U.S. TV networks had signed up to the Amplify program, bringing a variety of premium TV content onto the social platform in the form of in-tweet real-time video clips. In June 2014, Twitter acquired its U.S. Twitter Amplify partner SnappyTV, a company that was helping broadcasters and rights holders to share video content both organically across social and via Twitter's Amplify program. Twitter continues to rely on Grabyo, which has also struck numerous deals with some of the largest broadcasters and rights holders in Europe and North America to share video content across Facebook and Twitter. 
Facebook: Facebook made significant changes to its platform in 2014, including updates to its algorithm to enhance how it serves video in users' feeds. It also launched video autoplay to get users to watch the videos in their feeds. It rapidly surpassed Twitter, and by the end of 2014 it was enjoying three billion video views a day on its platform and had announced a partnership with the NFL, one of Twitter's most active Twitter Amplify partners. In April 2015, at its F8 Developer Conference, it revealed it was working with Grabyo among other technology partners to bring video onto its platform. Then in July it announced it would be launching Facebook Suggested Videos, bringing related videos and ads to anyone who clicks on a video – a move that not only competed with Twitter's commercial video offering but also put it in direct competition with YouTube. TV Time: TV Time is a television-dedicated social network that allows users to keep track of the television series they watch, as well as films. It also allows them to express their reaction to the media they have seen, with episode-specific voting for favorite characters and emotional reactions to episodes, as well as commenting on episode-restricted pages. This way users are able to avoid spoilers while also finding a precise audience and community for each of their interactions, as opposed to bigger, non-television-dedicated social media platforms such as Facebook and Twitter, where the likelihood of unintentionally reading spoilers is much higher. TV Time offers an analytics service called "TVLytics" where the votes and reactions collected from users can be studied for research and television production purposes. Advertising: According to Businessinsider.com, there are a variety of applications for social TV, including support for TV ad sales, optimizing TV ad buys, making ad buys more efficient, complementing audience measurement, and eventually, audience forecasting and real-time optimization. Social TV data can ease access to focus groups and may create a positive feedback loop for generating ultra-sticky TV programming and multi-screen ad campaigns. In numbers: Viewers share their TV experience on social media in real time as events unfold: between 88 and 100 million Facebook users log in to the platform during the primetime hours of 8 pm to 11 pm in the US. The volume of social media engagement in TV is also rising – according to Nielsen SocialGuide, there was a 38% increase in tweets about TV in 2013, to 263m. For the 2014 Super Bowl, Twitter reported that a record 24.9 million tweets about the game were sent during the telecast, peaking at 381,605 tweets per minute. Facebook reported that 50 million people discussed the Super Bowl, generating 185 million interactions. The 2014 Oscars generated 5m tweets, viewed by an audience of 37m unique Twitter users and delivering 3.3bn impressions globally as conversation and key moments were shared virally across the platform. In 2014 the All England Lawn Tennis Club (AELTC), hosts of Wimbledon, used Grabyo to share video content across social. The videos were viewed 3.5 million times across Facebook and Twitter. It partnered with Grabyo again in 2015, and the videos generated over 48 million views across Facebook and Twitter. 
Television shows with social integration: Here are some examples of how TV executives are integrating social elements with TV shows: C-SPAN streamed tweets from US Senators and Representatives during the quorum call. The Voice had the judges of the program tweet during the show, and the posts scrolled across the bottom of the screen. The use of Twitter also led to an increase in viewers. Television shows with social integration: "Glee": Entertainment Weekly created a second-screen viewing platform for the Glee season 3 premiere. Related publications: Erika Jonietz. "Making TV Social, Virtually" MIT Technology Review. (January 11, 2010) AmigoTV (Alcatel-Lucent; Coppens et al.) – 2004 www.ist-ipmedianet.org/Alcatel_EuroiTV2004_AmigoTV_short_paper_S4-2.pdf Nextream (MIT Media Lab, Martin et al.) – 2010 Social Interactive Television: Immersive Shared Experiences and Perspectives (P. Cesar, D. Geerts, and K. Chorianopoulos (eds.)) – 2009 Social TV and the Emergence of Interactive TV – Multimedia Research Group – November 2010 Interactive Social TV on Service Oriented Environments: Challenges and Enablers (May 2011) Systems: Boxee – acquired by Samsung GetGlue – acquired by i.TV Grabyo KIT digital Miso TV Tank Top TV WiO Xbox Live
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Self-phase modulation** Self-phase modulation: Self-phase modulation (SPM) is a nonlinear optical effect of light–matter interaction. An ultrashort pulse of light, when travelling in a medium, will induce a varying refractive index of the medium due to the optical Kerr effect. This variation in refractive index will produce a phase shift in the pulse, leading to a change of the pulse's frequency spectrum. Self-phase modulation is an important effect in optical systems that use short, intense pulses of light, such as lasers and optical fiber communications systems. Self-phase modulation has also been reported for nonlinear sound waves propagating in biological thin films, where the phase modulation results from varying elastic properties of the lipid films. Theory with Kerr nonlinearity: The evolution along distance $z$ of the equivalent lowpass electric field $A(z)$ obeys the nonlinear Schrödinger equation which, in the absence of dispersion, is
$$\frac{\mathrm{d}A(z)}{\mathrm{d}z} = -j\gamma\,|A(z)|^2 A(z)$$
with $j$ the imaginary unit and $\gamma$ the nonlinear coefficient of the medium. The cubic nonlinear term on the right-hand side is called the Kerr effect, and is multiplied by $-j$ according to the engineer's notation used in the definition of the Fourier transform. Theory with Kerr nonlinearity: The power of the electric field is invariant along $z$, since
$$\frac{\mathrm{d}|A|^2}{\mathrm{d}z} = \frac{\mathrm{d}A}{\mathrm{d}z}A^* + A\frac{\mathrm{d}A^*}{\mathrm{d}z} = 0$$
with $*$ denoting conjugation. Since the power is invariant, the Kerr effect can manifest only as a phase rotation. In polar coordinates, with $A = |A|e^{j\varphi}$, it is
$$\frac{\mathrm{d}\bigl(|A|e^{j\varphi}\bigr)}{\mathrm{d}z} = \underbrace{\frac{\mathrm{d}|A|}{\mathrm{d}z}}_{=0}e^{j\varphi} + j|A|e^{j\varphi}\frac{\mathrm{d}\varphi}{\mathrm{d}z} = -j\gamma\,|A(z)|^3 e^{j\varphi}$$
such that
$$\frac{\mathrm{d}\varphi}{\mathrm{d}z} = -\gamma|A|^2 .$$
The phase $\varphi$ at coordinate $z$ therefore is
$$\varphi(z) = \varphi(0) - \underbrace{\gamma|A(0)|^2 z}_{\text{SPM}} .$$
Such a relation highlights that SPM is induced by the power of the electric field. In the presence of attenuation $\alpha$ the propagation equation is
$$\frac{\mathrm{d}A(z)}{\mathrm{d}z} = -\frac{\alpha}{2}A(z) - j\gamma\,|A(z)|^2 A(z)$$
and the solution is
$$A(z) = A(0)\,e^{-\frac{\alpha}{2}z}\,e^{-j\gamma|A(0)|^2 L_{\mathrm{eff}}(z)}$$
where $L_{\mathrm{eff}}(z)$ is called the effective length and is defined by
$$L_{\mathrm{eff}}(z) = \int_0^z e^{-\alpha x}\,\mathrm{d}x = \frac{1 - e^{-\alpha z}}{\alpha} .$$
Hence, with attenuation the SPM does not grow indefinitely along distance in a homogeneous medium, but eventually saturates to
$$\lim_{z\to+\infty}\varphi(z) = \varphi(0) - \gamma|A(0)|^2\frac{1}{\alpha} .$$
In the presence of dispersion the Kerr effect manifests as a phase shift only over short distances, depending on the amount of dispersion. SPM Frequency shift: For an ultrashort pulse with a Gaussian shape and constant phase, the intensity at time $t$ is given by
$$I(t) = I_0 \exp\!\left(-\frac{t^2}{\tau^2}\right)$$
where $I_0$ is the peak intensity, and $\tau$ is half the pulse duration. If the pulse is travelling in a medium, the optical Kerr effect produces a refractive index change with intensity:
$$n(I) = n_0 + n_2 \cdot I$$
where $n_0$ is the linear refractive index, and $n_2$ is the second-order nonlinear refractive index of the medium. As the pulse propagates, the intensity at any one point in the medium rises and then falls as the pulse goes past. This will produce a time-varying refractive index
$$\frac{\mathrm{d}n(I)}{\mathrm{d}t} = n_2\frac{\mathrm{d}I}{\mathrm{d}t} = n_2\,I_0\,\frac{-2t}{\tau^2}\exp\!\left(-\frac{t^2}{\tau^2}\right) .$$
This variation in refractive index produces a shift in the instantaneous phase of the pulse:
$$\phi(t) = \omega_0 t - kz = \omega_0 t - \frac{2\pi}{\lambda_0}\,n(I)\,L$$
where $\omega_0$ and $\lambda_0$ are the carrier frequency and (vacuum) wavelength of the pulse, and $L$ is the distance the pulse has propagated. The phase shift results in a frequency shift of the pulse. The instantaneous frequency $\omega(t)$ is given by
$$\omega(t) = \frac{\mathrm{d}\phi(t)}{\mathrm{d}t} = \omega_0 - \frac{2\pi L}{\lambda_0}\frac{\mathrm{d}n(I)}{\mathrm{d}t} ,$$
and from the equation for $\mathrm{d}n/\mathrm{d}t$ above, this is
$$\omega(t) = \omega_0 + \frac{4\pi L\,n_2 I_0}{\lambda_0\tau^2}\,t\,\exp\!\left(-\frac{t^2}{\tau^2}\right) .$$
SPM Frequency shift: Plotting $\omega(t)$ shows the frequency shift of each part of the pulse. 
The leading edge shifts to lower frequencies ("redder" wavelengths), the trailing edge to higher frequencies ("bluer"), and the very peak of the pulse is not shifted. For the centre portion of the pulse (between $t = \pm\tau/2$), there is an approximately linear frequency shift (chirp) given by
$$\omega(t) = \omega_0 + \alpha t$$
where $\alpha$ is
$$\alpha = \left.\frac{\mathrm{d}\omega}{\mathrm{d}t}\right|_{t=0} = \frac{4\pi L\,n_2 I_0}{\lambda_0\tau^2} .$$
SPM Frequency shift: It is clear that the extra frequencies generated through SPM broaden the frequency spectrum of the pulse symmetrically. In the time domain, the envelope of the pulse is not changed; however, in any real medium the effects of dispersion will simultaneously act on the pulse. In regions of normal dispersion, the "redder" portions of the pulse have a higher velocity than the "blue" portions, and thus the front of the pulse moves faster than the back, broadening the pulse in time. In regions of anomalous dispersion, the opposite is true, and the pulse is compressed temporally and becomes shorter. This effect can be exploited to some degree (until it digs holes into the spectrum) to produce ultrashort pulse compression. SPM Frequency shift: A similar analysis can be carried out for any pulse shape, such as the hyperbolic-secant-squared (sech²) pulse profile generated by most ultrashort pulse lasers. If the pulse is of sufficient intensity, the spectral broadening process of SPM can balance with the temporal compression due to anomalous dispersion and reach an equilibrium state. The resulting pulse is called an optical soliton. Applications of SPM: Self-phase modulation has stimulated many applications in the field of ultrashort pulses including, to cite a few: spectral broadening and supercontinuum generation, temporal pulse compression, and spectral pulse compression. The nonlinear properties of Kerr nonlinearity have also been beneficial for various optical pulse processing techniques such as optical regeneration or wavelength conversion. Mitigation strategies in DWDM systems: In long-haul single-channel and DWDM (dense wavelength-division multiplexing) systems, SPM is one of the most important reach-limiting nonlinear effects. It can be reduced by: lowering the optical power at the expense of decreasing the optical signal-to-noise ratio; and dispersion management, because dispersion can partly mitigate the SPM effect. 
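As a numerical illustration of the chirp formulas above, this short Python sketch evaluates the SPM-induced frequency shift for a Gaussian pulse; all parameter values are illustrative assumptions (typical fused-silica and telecom numbers), not values given in the text:

```python
# Evaluate the SPM-induced instantaneous frequency shift of a Gaussian pulse.
# Parameter values below are illustrative assumptions.
import numpy as np

n2 = 2.6e-20      # nonlinear refractive index of silica, m^2/W (typical value)
I0 = 1.0e16       # peak intensity, W/m^2 (assumed)
lam0 = 1.55e-6    # vacuum wavelength, m
L = 1.0           # propagated distance, m
tau = 50e-15      # half pulse duration, s

t = np.linspace(-3 * tau, 3 * tau, 1001)
# omega(t) - omega0 = (4*pi*L*n2*I0 / (lam0*tau^2)) * t * exp(-t^2/tau^2)
domega = (4 * np.pi * L * n2 * I0 / (lam0 * tau**2)) * t * np.exp(-(t / tau) ** 2)

# Linear chirp rate near the pulse centre: alpha = d(omega)/dt at t = 0.
alpha = 4 * np.pi * L * n2 * I0 / (lam0 * tau**2)
print(f"chirp rate alpha = {alpha:.3e} rad/s^2")
print(f"largest red shift  (leading edge)  = {domega.min():.3e} rad/s")
print(f"largest blue shift (trailing edge) = {domega.max():.3e} rad/s")
```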
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Acta Arithmetica** Acta Arithmetica: Acta Arithmetica is a scientific journal of mathematics publishing papers on number theory. It was established in 1935 by Salomon Lubelski and Arnold Walfisz. The journal is published by the Institute of Mathematics of the Polish Academy of Sciences.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Double champions in MMA** Double champions in MMA: Double belt, or double champion (or simultaneous champion in cases where the belts are won without abandoning or losing the other), is an achievement attained by MMA fighters in several weight categories. Controversies: Some sports experts argue that a double championship could be detrimental to the divisions the champion competes in. The progression of the divisions may be blocked, which can be considered unfair to other challengers. This is because MMA athletes cannot compete again within short periods of time (except in rare circumstances, such as fights that end quickly and without injury) and must have adequate time for recovery and training camp. Furthermore, athletes need time to adjust their weight, which can further delay title fights.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bempegaldesleukin** Bempegaldesleukin: Bempegaldesleukin (development code NKTR-214) is an experimental anti-cancer drug candidate. It is a PEGylated interleukin-2 (IL-2) acting as a CD122-preferential IL-2 pathway agonist designed to activate and proliferate CD8+ T cells and NK cells. It is being developed by Nektar Therapeutics. In August 2019 the FDA granted breakthrough therapy designation to bempegaldesleukin in combination with nivolumab for the treatment of advanced melanoma. It is in phase 3 clinical trials for melanoma and renal cell carcinoma. Mechanism of action: Bempegaldesleukin is a recombinant form of the human cytokine interleukin-2 conjugated to six releasable polyethylene glycol chains. PEGylation of IL-2 is utilized to alter its receptor binding. The PEG chains are located at the region of IL-2 that binds to the IL2Rα subunit of the heterotrimeric IL2Rαβγ complex, reducing its ability to bind and activate the heterotrimer. The IL2Rαβγ complex is constitutively expressed on regulatory T cells (Tregs). Therefore, without the use of mutations, PEGylation reduces the affinity for IL2Rαβγ to a greater extent than for IL2Rβγ (CD122), the receptor complex predominant on CD8+ T cells. When fully PEGylated, it is a pro-drug that has essentially no biological activity. Upon intravenous administration, the PEG chains slowly release to generate active cytokine species. Consequently, it increases the proliferation, activation, and effector function of CD8+ T cells and NK cells within the tumor microenvironment without expanding the undesirable intra-tumoral regulatory T cells. Development: Bempegaldesleukin is being investigated in combination with other anti-cancer agents. In February 2018 Nektar Therapeutics announced a development and commercialization collaboration with Bristol-Myers Squibb to evaluate the combination of bempegaldesleukin and nivolumab. In November 2018 Nektar announced a collaboration with Pfizer to evaluate the combination of bempegaldesleukin with avelumab and talazoparib or enzalutamide in multiple cancers. In March 2022 it was reported that the melanoma trials did not meet statistical significance at the first interim analysis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Carbonyl metallurgy** Carbonyl metallurgy: Carbonyl metallurgy is used to manufacture products of iron, nickel, steel, and other metals. Coatings are produced by vapor plating using metal carbonyl vapors. These are metal-ligand complexes in which carbon monoxide is bonded to individual metal atoms. Carbonyl metallurgy: Iron carbonyl is stable as iron pentacarbonyl, in which five carbon monoxide molecules are pendantly bonded to the iron atom, while nickel carbonyl is stable as nickel tetracarbonyl, which has four carbon monoxide molecules pendantly bonded to the nickel atom. Both can be formed by the exposure of the powdered metal to carbon monoxide gas at temperatures of around 75 degrees Celsius, as sketched in the equations below. Both metal carbonyls decompose near 175 °C, resulting in a vapor-plated metallic coating. The thickness of the vapor-plated deposit can be increased to the desired thickness by controlling the amount of metal carbonyl used and the duration of the plating process. Carbonyl metallurgy: Vale Inco produces over 100 million pounds (ca. 45,000 tonnes) of nickel metal annually by the carbonyl process. The carbonyl process has been used to produce molds in custom shapes for industry. Such molds have been used in plastic molding and other manufacturing techniques. William Jenkin developed many of the techniques and procedures used in carbonyl metallurgy. Carbonyl metallurgy is useful as a low-temperature metal coating technique that may find many applications in the future.
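The underlying chemistry can be sketched with the standard textbook equations for carbonyl formation and thermal decomposition (a general formulation, not figures specific to this article):
Fe + 5 CO → Fe(CO)5
Ni + 4 CO → Ni(CO)4
On heating, the complexes decompose, depositing the metal as a coating and releasing carbon monoxide for reuse:
Fe(CO)5 → Fe + 5 CO
Ni(CO)4 → Ni + 4 CO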
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Catamorphism** Catamorphism: In category theory, the concept of catamorphism (from the Ancient Greek: κατά "downwards" and μορφή "form, shape") denotes the unique homomorphism from an initial algebra into some other algebra. In functional programming, catamorphisms provide generalizations of folds of lists to arbitrary algebraic data types, which can be described as initial algebras. The dual concept is that of anamorphism, which generalizes unfolds. A hylomorphism is the composition of an anamorphism followed by a catamorphism. Definition: Consider an initial $F$-algebra $(A, \mathrm{in})$ for some endofunctor $F$ of some category into itself. Here $\mathrm{in}$ is a morphism from $FA$ to $A$. Since it is initial, we know that whenever $(X, f)$ is another $F$-algebra, i.e. a morphism $f$ from $FX$ to $X$, there is a unique homomorphism $h$ from $(A, \mathrm{in})$ to $(X, f)$. By the definition of the category of $F$-algebras, this $h$ corresponds to a morphism from $A$ to $X$, conventionally also denoted $h$, such that $h \circ \mathrm{in} = f \circ Fh$. In the context of $F$-algebras, the uniquely specified morphism from the initial object is denoted by $\mathrm{cata}\,f$ and hence characterized by the following relationship: $h = \mathrm{cata}\,f \iff h \circ \mathrm{in} = f \circ Fh$. Terminology and history: Another notation found in the literature is (|f|). The open brackets used are known as banana brackets, after which catamorphisms are sometimes referred to as bananas, as mentioned in Erik Meijer et al. One of the first publications to introduce the notion of a catamorphism in the context of programming was the paper "Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire", by Erik Meijer et al., which was in the context of the Squiggol formalism. Terminology and history: The general categorical definition was given by Grant Malcolm. Examples: We give a series of examples, and then a more global approach to catamorphisms, in the Haskell programming language. Iteration: Iteration-step prescriptions lead to the natural numbers as the initial object. Examples: Consider the functor fmaybe mapping a data type b to a data type fmaybe b, which contains a copy of each term from b as well as one additional term Nothing (in Haskell, this is what Maybe does). This can be encoded using one term and one function. So let an instance of a StepAlgebra also include a function from fmaybe b to b, which maps Nothing to a fixed term nil of b, and where the actions on the copied terms will be called next. Examples: As a silly example, consider the algebra on strings encoded as ("go!", \s -> "wait.. " ++ s), for which Nothing is mapped to "go!" and otherwise "wait.. " is prepended. As (Succ . Succ . Succ . Succ $ Zero) denotes the number four in Nat, the following will evaluate to "wait.. wait.. wait.. wait.. go!": foldSteps ("go!", \s -> "wait.. " ++ s) (Succ . Succ . Succ . Succ $ Zero). We can easily change the code to a more useful operation, say repeated application of an algebraic operation on numbers, just by changing the F-algebra (nil, next), which is passed to foldSteps. List fold: For a fixed type a, consider the functor mapping types b to the product type of those two types. We moreover also add a term Nil to this resulting type. An F-algebra shall now map Nil to some special term nil of b or "merge" a pair (any other term of the constructed type) into a term of b. This merging of a pair can be encoded as a function of type a -> b -> b. Examples: As an example, consider the algebra on numbers encoded as (3, \x -> \y -> x*y), for which the number from a acts on the number from b by plain multiplication. 
Then the following will evaluate to 3,000,000: foldrList (3, \x -> \y -> x*y) (Cons 10 $ Cons 100 $ Cons 1000 Nil) Tree fold: For a fixed type a, consider the functor mapping types b to a type that contains a copy of each term of a as well as all pairs of b's (terms of the product type of two instances of the type b). An algebra consists of a function to b, which either acts on an a term or two b terms. This merging of a pair can be encoded as two functions of type a -> b resp. b -> b -> b. Examples: General case Deeper category-theoretical studies of initial algebras reveal that the F-algebra obtained from applying the functor to its own initial algebra is isomorphic to it (Lambek's lemma). Examples: Strong type systems enable us to abstractly specify the initial algebra of a functor f as its fixed point a = f a. The recursively defined catamorphisms can now be coded in a single line, where the case analysis (like in the different examples above) is encapsulated by the fmap. Since the domain of the latter consists of objects in the image of f, the evaluation of the catamorphisms jumps back and forth between a and f a. Examples: Now again the first example, but now via passing the Maybe functor to Fix. Repeated application of the Maybe functor generates a chain of types, which, however, can be united by the isomorphism from the fixed point theorem. We introduce the term zero, which arises from Maybe's Nothing, and identify a successor function with repeated application of the Just. This way the natural numbers arise. Examples: Again, the following will evaluate to "wait.. wait.. wait.. wait.. go!": cata pleaseWait (successor.successor.successor.successor $ zero) And now again the tree example. For this we must provide the tree container data type so that we can set up the fmap (we did not have to do this for the Maybe functor, as it is part of the standard Prelude). The following will evaluate to 4: cata treeDepth $ meet (end "X") (meet (meet (end "YXX") (end "YXY")) (end "YY"))
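For readers without Haskell, here is a minimal Python sketch of the same idea under assumed encodings (Zero as None, Succ as a tagged pair); the generic cata recurses through the functor action and then applies the algebra, reproducing the "wait.. go!" example above:

```python
# A minimal Python sketch of a catamorphism for the natural numbers seen as
# the initial algebra of the Maybe functor. Encodings are assumptions made
# for illustration: Zero is None, Succ n is the tagged pair ("Succ", n).
Zero = None

def Succ(n):
    return ("Succ", n)

def fmap_maybe(f, x):
    # Functor action of Maybe: leave Zero alone, apply f under Succ.
    return None if x is None else ("Succ", f(x[1]))

def cata(alg, x):
    # Catamorphism: fold the structure bottom-up through the algebra alg.
    return alg(fmap_maybe(lambda y: cata(alg, y), x))

def please_wait(x):
    # F-algebra on strings: Zero -> "go!", Succ s -> "wait.. " ++ s.
    return "go!" if x is None else "wait.. " + x[1]

print(cata(please_wait, Succ(Succ(Succ(Succ(Zero))))))
# prints: wait.. wait.. wait.. wait.. go!
```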
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Xerox DocuShare** Xerox DocuShare: DocuShare is a content management system developed by Xerox Corporation. DocuShare makes use of open standards and allows for managing content, integrating it with other business systems, and developing customized and packaged software applications. History: Xerox DocuShare was originally developed by Xerox’s research centers as an application for internal use (named AmberWeb). The product was the first web-based document management tool offered to the market in 1997. History: Since its initial launch, DocuShare has added capabilities in workflow/business process management, production imaging, records and retention management, social collaboration, and enterprise scalability. With version 5.0 in 2006, Xerox launched DocuShare CPX and a flexible licensing model that allows customers to mix read-only, guest (public), basic content services, and full content management seats as needed. As of version 6.6, there are four separate DocuShare products. Functionality: The DocuShare Content Management Platform includes four products: DocuShare Express supports content management tailored to SMBs, for managing digital documents and converting paper content to digital. DocuShare provides document management, collaboration, image capture, and Web publishing capabilities to support information sharing in an enterprise or department. Add-ons include records management, lifecycle management, team workspaces, enterprise workflow, capture, and eForms. DocuShare Enterprise meets enterprise content management (ECM) requirements for large deployments. DocuShare Education is a special configuration for schools and higher education institutions. DocuShare Developer Environment – A J2EE SDK for integrating DocuShare with other business systems, and building applications centered on content. DocuShare Virtual Filing System – Combines software, hardware, and consulting services to convert paper-based filing into a digital content management system. DocuShare is also available as a hosted offering. Architecture and features: A 2005 article in InfoWorld reported that DocuShare 4 was simple to use but still delivered solid document and file management. DocuShare is a multi-tier platform, based on Java SE rather than Java EE, with an architecture and developer environment that allows interoperability. The platform uses a Tomcat server and various OEM engines including Autonomy Corporation's (for indexing, eForms and BPM) and records management from IBM. It uses a Java SE architecture with added workflow and search engines. It also supports wikis, blogs, and comments for social networking. For imaging, it includes a content intake manager (an XML parser), the ability to e-mail directly to a DocuShare collection, scan cover sheets (with DataGlyph technology formerly in Xerox FlowPort and PaperWorks), and optional Nuance optical character recognition (OCR). Architecture and features: A Verity search engine was used to search for documents and metadata, later replaced by the Autonomy search engine. Terminology In DocuShare, a "collection" is a folder for the storage of information. Multiple collections can be nested to form a tree. Architecture and features: Workflow/BPM Content Rules are pre-defined workflows that have been implemented into the product UI. The content rules can perform specific functions on a collection or document. 
They allow a user to pre-define a workflow by stepping through a process that will take place when a specific event happens: for example, if a document moves into a collection, the workflow can move it to another area, perform OCR, and perform steps based on the document's contents. Architecture and features: To create more complex workflows, the DocuShare Developer Environment provides a Workflow software development kit (SDK) and design tool. Architecture and features: Imaging As of 2007, Xerox DocuShare had a front-end imaging component in Scan Cover Sheets, which use proprietary DataGlyph technology. An Extensible Interface Platform (EIP) connector is also provided for Xerox MFP scanning. (EIP provides direct access to a repository from the touch panel of a Xerox MFP.) DocuShare can also be used with third-party imaging tools including ABBYY, Pharos, Kofax, Cardiff Teleform, SRC Conveyor, WaterWare ScanManager, Polgroup StrategicValueWare, NSI AutoStore, Xerox Smart Document Travel, SRC File Clerk, ScanFlowStore, eCopy, Visioneer OneTouch, EzeScan, Nuance PaperPort, and DSI Software Systems. These systems and others connect DocuShare with multi-function printers (MFPs) and scanners. Architecture and features: Security DocuShare supports the security controls required by governments and companies. The UK government produced a detailed description of all aspects of DocuShare, including security and asset protection.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Immunodeficiency 26** Immunodeficiency 26: Immunodeficiency 26 is a rare genetic syndrome. It is characterised by absent circulating B and T cells and normal natural killer cells. Signs and symptoms: The features of this condition include recurrent candidiasis and lower respiratory tract infections. Genetics: This condition is due to mutations in the DNA-PKcs gene and is inherited in an autosomal recessive fashion. The gene is located on the long arm of chromosome 8 (8q11.21) on the minus strand. It encodes a protein of 4128 amino acids with a predicted molecular weight of 469 kilodaltons. The encoded protein is a protein kinase that is activated by DNA. This protein acts as a sensor for damaged DNA. Diagnosis: Diagnosis is made by examination of the circulating lymphocytes and gene sequencing. Differential diagnosis Ataxia telangiectasia Artemis deficiency LIG4 syndrome Nijmegen breakage syndrome Severe combined immunodeficiency with Cernunnos X-linked agammaglobulinemia Epidemiology: This condition is rare. Only two cases have been described up to 2017. History: This condition was described in 2009 by van der Burg et al.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Linolenic acid** Linolenic acid: Linolenic acid is a type of naturally-occurring fatty acid. It can refer to either of two octadecatrienoic acids (i.e. with an 18-carbon chain and three double bonds, which are found in the cis configuration), or a mixture of the two. Linolenate (in the form of triglyceride esters of linolenic acid) is often found in vegetable oils; traditionally, such fatty acylates are reported as the fatty acids: α-Linolenic acid, an omega-3 (n-3) fatty acid γ-Linolenic acid, an omega-6 (n-6) fatty acid
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Azelaic acid** Azelaic acid: Azelaic acid (AzA) is an organic compound with the formula HOOC(CH2)7COOH. This saturated dicarboxylic acid exists as a white powder. It is found in wheat, rye, and barley. It is a precursor to diverse industrial products including polymers and plasticizers, as well as being a component of a number of hair and skin conditioners. AzA inhibits tyrosinase. Production: Azelaic acid is industrially produced by the ozonolysis of oleic acid. The side product is nonanoic acid. It is produced naturally by Malassezia furfur (also known as Pityrosporum ovale), a yeast that lives on normal skin. The bacterial degradation of nonanoic acid gives azelaic acid. Biological function: In plants, azelaic acid serves as a "distress flare" involved in defense responses after infection. It serves as a signal that induces the accumulation of salicylic acid, an important component of a plant's defensive response. Applications: Polymers and related materials Esters of this dicarboxylic acid find applications in lubrication and plasticizers. In lubricant industries it is used as a thickening agent in lithium complex grease. With hexamethylenediamine, azelaic acid forms Nylon-6,9, which finds specialized uses as a plastic. Applications: Medical Azelaic acid is used to treat mild to moderate acne, both comedonal acne and inflammatory acne. It belongs to a class of medication called dicarboxylic acids. It works by killing acne bacteria that infect skin pores. It also decreases the production of keratin, which is a natural substance that promotes the growth of acne bacteria. Azelaic acid is also used as a topical gel treatment for rosacea, due to its ability to reduce inflammation. It clears the bumps and swelling caused by rosacea. The mechanism of action is thought to be through the inhibition of hyperactive protease activity that converts cathelicidin into the antimicrobial skin peptide LL-37. In topical pharmaceutical preparations and scientific research AzA is typically used in concentrations between 15% and 20%, but some research demonstrates that in certain vehicle formulations the pharmaceutical effect of 10% azelaic acid can be fully comparable to that of some 20% creams. Applications: Acne treatment Azelaic acid is effective for mild to moderate acne when applied topically at a 15%-20% concentration. In patients with moderate acne, twice daily application over 3 months of 20% AzA significantly reduced the number of comedones, papules, and pustules; at this strength, it is considered to be as effective as benzoyl peroxide 5%, tretinoin 0.05%, erythromycin 2%, and oral tetracycline at 500-1000 mg. In a comparative review of the effects of topical AzA, salicylic acid, nicotinamide, sulfur, zinc, and alpha-hydroxy acid, AzA had more high-quality evidence of effectiveness than the rest. Results can be expected after 4 weeks of twice-daily treatment. The effectiveness of long-term use is unclear, but it has been recommended that AzA be used for at least 6 months continuously for maintenance. Applications: Whitening agent Azelaic acid is used for treatment of skin pigmentation, including melasma and postinflammatory hyperpigmentation, particularly in those with darker skin types. It has been recommended as an alternative to hydroquinone. As a tyrosinase inhibitor, azelaic acid reduces synthesis of melanin. 
According to one report in 1988, azelaic acid in combination with zinc sulfate in vitro was found to be a potent (90% inhibition) 5α-reductase inhibitor, similar to the hair loss drugs finasteride and dutasteride. In vitro research during the mid-1980s evaluating azelaic acid's depigmenting (whitening) capability concluded it is effective (cytotoxic to melanocytes) only at high concentrations. A 1996 review claimed that 20% AzA is as potent as 4% hydroquinone after three months of application, without the latter's adverse effects, and even more effective if applied along with tretinoin for the same period of time. Brand names: Brand names for azelaic acid include Dermaz 99, Crema Pella Perfetta (micronized azelaic acid, kojic dipalmitate, and liquorice extract), Azepur99, Azetec99, Azaclear (azelaic acid and niacinamide), AzClear Action, Azelex, White Action cream, Finacea, Finevin, Melazepam, Skinoren, Ezanic, Azelac, Azaderm, and (in Pakistan) Acnegen, Eziderm, Acnicam, and Azelexin.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vinyl halide** Vinyl halide: In organic chemistry, a vinyl halide is a compound with the formula CH2=CHX (X = halide). The term vinyl is often used to describe any alkenyl group. For this reason, alkenyl halides with the formula RCH=CHX are sometimes called vinyl halides. From the perspective of applications, the dominant member of this class of compounds is vinyl chloride, which is produced on the scale of millions of tons per year as a precursor to polyvinyl chloride. Polyvinyl fluoride is another commercial product. Related compounds include vinylidene chloride and vinylidene fluoride. Synthesis: Vinyl chloride is produced by dehydrochlorination of 1,2-dichloroethane. Due to their high utility, many approaches to vinyl halides have been developed, such as: reactions of vinyl organometallic species with halogens Takai olefination Stork-Zhao olefination - a modification of the Wittig reaction Olefin metathesis Reactions: Vinyl bromide and related alkenyl halides form the Grignard reagent and related organolithium reagents. Alkenyl halides undergo base elimination to give the corresponding alkyne. Most important is their use in cross-coupling reactions (e.g. Suzuki-Miyaura coupling, Stille coupling, Heck coupling, etc.).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Decene** Decene: Decene is an organic compound with the chemical formula C10H20. Decene contains a chain of ten carbon atoms with one double bond, making it an alkene. There are many isomers of decene depending on the position and geometry of the double bond. Dec-1-ene is the only isomer of industrial importance. As an alpha olefin, it is used as a comonomer in copolymers and is an intermediate in the production of epoxides, amines, oxo alcohols, synthetic lubricants, synthetic fatty acids and alkylated aromatics. The industrial processes used in the production of dec-1-ene are oligomerization of ethylene by the Ziegler process or the cracking of petrochemical waxes. In ethenolysis, methyl oleate, the methyl ester of oleic acid, converts to 1-decene and methyl 9-decenoate: CH3(CH2)7CH=CH(CH2)7CO2Me (methyl oleate) + CH2=CH2 (ethylene) → CH3(CH2)7CH=CH2 (1-decene) + CH2=CH(CH2)7CO2Me (methyl 9-decenoate) Dec-1-ene has been isolated from the leaves and rhizome of the plant Farfugium japonicum and has been detected as the initial product in the microbial degradation of n-decane.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Computational Biology and Chemistry** Computational Biology and Chemistry: Computational Biology and Chemistry is a peer-reviewed scientific journal published by Elsevier covering all areas of computational life sciences. The current editors-in-chief are Wentian Li (The Feinstein Institute for Medical Research) and Donald Hamelberg (Georgia State University). The journal was established in 1976 as Computers & Chemistry, with DeLos F. DeTar (Florida State University) as its first editor. It obtained its current title in 2003 under the editorship of Andrzej K Konopka and James Crabble (University of Bedfordshire). Abstracting and indexing: According to the Journal Citation Reports, the journal had a 2011 impact factor of 1.551, ranking it 42nd out of 85 journals in the category "Biology" and 36th out of 99 journals in the category "Computer Science, Interdisciplinary Applications".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mast cell leukemia** Mast cell leukemia: Mast cell leukemia is an extremely aggressive subtype of acute myeloid leukemia that usually occurs de novo but can, rarely, evolve from transformation of chronic myeloid leukemia into the more aggressive acute myeloid leukemia. In a small proportion of cases, acute mast cell leukemia may evolve from a more progressive form of systemic mastocytosis. The diagnosis of acute mast cell leukemia by the WHO criteria includes the requirement for a prevalence of 20% neoplastic mast cells in marrow and 10% in blood. If the mast cells represent less than 10% of blood cells, the tumor is called "aleukemic" mast cell leukemia. Signs and symptoms: Acute mast cell leukemia is a rapidly progressive disorder with leukemic mast cells in blood and in large numbers in marrow. The common signs and symptoms include fever, headache, and flushing of the face and trunk. The typical cutaneous mast cell infiltrates of urticaria pigmentosa are usually not present before, during, or after diagnosis in patients who have mast cell leukemia. Symptoms also include abdominal pain, bone pain, and peptic ulcer, which are more prevalent than in other subtypes of acute myeloid leukemia. These symptoms are due to the release of histamine from neoplastic mast cells. Enlargement of the liver and spleen, or hepatosplenomegaly, is characteristic. The mast cells also release many anticoagulants, such as heparin, which can lead to serious bleeding. Liver and splenic dysfunction also contributes to hemorrhage. Involvement of the bone can lead to osteoporosis. Abdominal ultrasound or computerized tomography (CT) scanning is used to look for hepatosplenomegaly and lymphadenopathy. Plain radiography and bone densitometry can be used to assess bone involvement and the presence of osteoporosis. Endoscopy and biopsy can be useful if gut involvement is suspected. Diagnosis: Cytochemistry Cytochemical properties of the leukemic cells must be typical of mast cell derivation (presence of metachromatic granules staining with alpha-naphthyl chloroacetate esterase, but not with peroxidase). Mast cell tryptase is an enzyme contained in mast cell granules. Mast cell numbers are best estimated by tryptase immunostaining because very poorly granulated cells may stain very weakly if at all for alpha-naphthol chloroacetate esterase. Diagnosis: Tumor markers The leukemic cells usually are strongly positive for CD13, CD33, CD68, and CD117. Characteristically, basophil (e.g. CD11b, CD123) and monocyte markers (CD14, CD15) are absent. The cells usually express CD2 and CD25. Malignant mast cells overexpress the anti-apoptosis gene, bcl-2. A KIT mutation is detected in most patients. Diagnosis: Biochemistry Total serum tryptase is elevated in mast cell leukemia. Normal total (alpha + beta) serum tryptase is approximately 6 μg/L (range 0 to 11 μg/L). Values of several hundred μg/L are characteristic of mast cell leukemia. Plasma and urinary histamine levels are frequently elevated in mast cell leukemia. Histidine decarboxylase (HDC) is the enzyme that catalyzes the reaction which produces histamine from histidine. Measurement of histidine decarboxylase in the marrow cells of patients with mast cell leukemia is a very sensitive marker of mast cells. Treatment: Immunoglobulin E (IgE) is important in mast cell function. Immunotherapy with anti-IgE immunoglobulin raised in sheep resulted in a transient decrease in the numbers of circulating mast cells in one patient with mast cell leukemia. 
Although splenectomy has led to brief responses in patients with mast cell leukemia, no firm conclusions as to the efficacy of this treatment are possible. Chemotherapy with combination of cytosine arabinoside and either idarubicin, daunomycin, or mitoxantrone as for acute myeloid leukemia has been used. Stem cell transplantation is an option, although no experience exists concerning responses and outcome. Prognosis: Acute mast cell leukemia is extremely aggressive and has a grave prognosis. In most cases, multi-organ failure including bone marrow failure develops over weeks to months. Median survival after diagnosis is only about 6 months.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Applied Sciences (journal)** Applied Sciences (journal): Applied Sciences is a semi-monthly peer-reviewed open-access scientific journal covering all aspects of applied physics, applied chemistry, applied biology, and engineering, environmental, and earth sciences. It was established in 2011 and is published by MDPI. The editor-in-chief is Takayoshi Kobayashi (University of Electro-Communications). Abstracting and indexing: According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.679.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**U-turn** U-turn: A U-turn in driving refers to performing a 180° rotation to reverse the direction of travel. It is called a "U-turn" because the maneuver looks like the letter U. In some areas, the maneuver is illegal, while in others, it is treated as a more ordinary turn, merely extended. In still other areas, lanes are occasionally marked "U-turn permitted" or even "U-turn only." Occasionally, on a divided highway, special U-turn ramps exist to allow traffic to make a U-turn, though often their use is restricted to emergency and police vehicles only. U-turn: In the United States, U-turn regulations vary by state: in Indiana U-turns are allowed as long as the driver follows all of the precautions normally ascribed to making a left turn (yielding right-of-way, etc.). Many places, including Texas and Georgia, have specially designed U-turn lanes (referred to as Texas U-turn lanes). In Michigan, U-turns are required for many left turns to and from divided highways, as part of the Michigan left maneuver. U-turn: In some special situations, U-turns can be regulated through the use of a traffic light, where it is the only directional choice and drivers in the specified lane cannot continue forward (“U-turn only” lanes). Prohibited U-turns: U-turns are often prohibited for various reasons. Sometimes a sign indicates the legality of U-turns. However, traffic regulations in many jurisdictions specifically prohibit certain types of U-turns. Laws vary by jurisdiction as to when a U-turn may or may not be legal. Examples of jurisdictions with codified U-turn prohibitions include the Canadian provinces of Alberta and British Columbia and the U.S. states of Colorado and Oregon. In Alberta, U-turns are prohibited in certain circumstances, for example (ref. Alberta Regulation 304/2002, Division 7): At the crest of a hill or on a curve unless the driver can see at least 150 m ahead Anywhere a sign prohibits a U-turn In urban areas between intersections At alleys and driveways At an intersection controlled by a traffic signal (unless signage or signals specifically allow this maneuver) By a school bus on an undivided highway or on a divided highway where the length of the bus is longer than the width of the median between the two carriageways Taiwan In Taiwan, Article 49 of the Act Governing the Punishment of Violation of Road traffic Regulations (zh:道路交通管理處罰條例) administratively fines a motorist 600 to 1800 New Taiwan dollars for any of the following unlawful U-turns: Making a U-turn on a curve, a slope, a narrow road, a narrow bridge, or a tunnel. Prohibited U-turns: Making a U-turn at a road segment signed No U-turn or painted with double solid yellow or white lines or no-overtaking lines. Making a U-turn at a road segment prohibiting left turns. Failing to go around the roundabout when making a U-turn at such an intersection. Prohibited U-turns: Before making a U-turn, failing to stop or signal a left turn as required, or making a U-turn without paying attention to vehicles or pedestrians passing by. In addition, a Taiwanese driver license is demerited one point for an unlawful U-turn pursuant to Article 63 of the same Act unless the license has been suspended or revoked. Furthermore, the same Act makes a U-turn on a railway level crossing a violation for drivers of motorized and non-motorized vehicles: Article 54: A driver of a motor vehicle shall be administratively fined 6000 to 12000 New Taiwan dollars for making a U-turn on a railway level crossing. 
Should an accident occur, the driver's license shall also be revoked for life pursuant to Article 67. This lifetime revocation used to be absolute, but the amendment of the law proclaimed on 28 December 2005 and effective on 1 July 2006 has allowed a possible waiver after serving at least six years of the revocation. Prohibited U-turns: Article 75: A driver of a non-motorized vehicle (e.g. a bicycle) shall be administratively fined 1200 to 2400 New Taiwan dollars for making a U-turn on a railway level crossing.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**202-CoV** 202-CoV: 202-CoV is a COVID-19 vaccine candidate developed by Shanghai Zerun Biotechnology Co., Ltd., a subsidiary of Walvax Biotech. It is one of several candidates under development by Walvax. Development: In May 2020, the Bill & Melinda Gates Foundation awarded Shanghai Zerun Biotechnology a $1,000,000 USD vaccine development grant to "support research and development for COVID-19 response". In July 2021, the Coalition for Epidemic Preparedness Innovations (CEPI) announced that it had partnered with Shanghai Zerun Biotechnology and its parent company, Walvax Biotech, to develop COVID-19 vaccine candidates against both the original strain of SARS-CoV-2 and its newer variants. As of October 2022, CEPI had provided up to $25.1 million USD towards 202-CoV, but had ceased further funding. The chimeric protein candidate remains in Phase I clinical trials.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Renews Head Formation** Renews Head Formation: The Renews Head Formation is a geologic formation in Newfoundland and Labrador. It preserves fossils dating back to the Ediacaran period.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BatchMaster Software** BatchMaster Software: BatchMaster Software is a software company that develops Enterprise Resource Planning (ERP) solutions. About: BatchMaster Software develops ERP software for the process manufacturing industry, such as Food & Beverage, Nutraceutical, Chemicals & Coatings, Cosmetics & Personal Care, and Pharmaceutical & Life Sciences. The company is headquartered in Laguna Hills, California, USA and has offices in New York, India, New Zealand and Mexico. It is a Microsoft Gold Certified Partner and a reseller of SAP Business One. History: BatchMaster was founded by Randy Peck as Pacific Micro Software Engineering and later changed the name to BatchMaster DOS. In 2000, the company was acquired by eWorkplace Solutions and was reincorporated as BatchMaster Software. The company then started a project to develop a Windows-based version of the application software. In 2001, Infocus Solutions Pvt. Ltd. (ISPL) was formed in Indore, India to finish the project. ISPL started its operation with a team of seven people, and within four years more than a hundred people were working for the organization. The company formally announced its India operations in 2006 and changed the name to BatchMaster Software Pvt. Ltd. Products: BatchMaster ERP is the flagship product of the company and offers integration with: SAP Business One Microsoft Dynamics GP QuickBooks Sage 100 and Sage 300
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bregman Lagrangian** Bregman Lagrangian: The Bregman-Lagrangian framework permits a systematic understanding of the matching rates associated with higher-order gradient methods in discrete and continuous time.
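The framework is commonly attributed to Wibisono, Wilson, and Jordan (2016). As context, and assuming their formulation (the scaling functions α_t, β_t, γ_t and the distance-generating function h below come from that paper, not from the sentence above), the Bregman Lagrangian for a convex objective f is:

```latex
\mathcal{L}(X, V, t) = e^{\alpha_t + \gamma_t}\left( D_h\!\left(X + e^{-\alpha_t} V,\; X\right) - e^{\beta_t} f(X) \right),
\qquad
D_h(y, x) = h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle .
```

Under the ideal-scaling conditions \dot{\beta}_t \le e^{\alpha_t} and \dot{\gamma}_t = e^{\alpha_t}, curves satisfying the corresponding Euler–Lagrange equation achieve f(X_t) - f(x*) = O(e^{-\beta_t}) in continuous time, and careful discretizations of this family recover the matching rates of higher-order gradient methods referred to above.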
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nectarivore** Nectarivore: In zoology, a nectarivore is an animal which derives its energy and nutrient requirements from a diet consisting mainly or exclusively of the sugar-rich nectar produced by flowering plants. Nectarivore: Nectar as a food source presents a number of benefits as well as challenges. It is essentially a solution (as much as 80%) of the simple sugars sucrose, glucose and fructose, which are easily ingested and digested, representing a rich and efficient source of nutrition. This solution is often diluted either by the plant that produces it or by rain falling on a flower, and many nectarivores possess adaptations to effectively rid themselves of any excess water ingested this way. Nectarivore: However, nectar is an incomplete source of nutrition. While it does contain proteins and amino acids, these are found in low quantities, and it is severely deficient in minerals and vitamins. Very few organisms consume nectar exclusively over their whole life cycle, either supplementing it with other sources, particularly insects (thus overlapping with insectivores) or only consuming it exclusively for a set period. Many species are nectar robbers or nectar thieves, performing no pollination while still consuming nectar. Many species are both nectar robbers and pollinators, depending on the plant species they encounter. Nectarivore: Nectar is produced by flowering plants to attract pollinators to visit the flowers and transport pollen between them. Flowers often have specialized structures that make the nectar accessible only for animals possessing appropriate morphological structures, and there are numerous examples of coevolution between nectarivores and the flowers they pollinate. For example, hummingbirds have long narrow beaks, and hawkmoths long proboscises, that can reach nectar at the bottom of long tubular flowers. The majority of nectar feeders are insects or birds, but instances can also be found in other animal groups. Insects: Nectarivory is extremely common in insects. Key families with large proportions of nectarivores include the Coleoptera, Lepidoptera, Diptera, Hymenoptera and Hemiptera. Some, but not all, are also pollinators: others engage in nectar robbing by avoiding the reproductive organs of plants altogether, particularly those with deep corollas, by piercing into the base of the flower to reach the nectary directly, such as carpenter bees and secondarily honey bees (who consume nectar from holes made by others), as well as ants, who frequently consume nectar and pollen where available despite actively inhibiting germination of pollen at the flowers they visit, to the detriment of the plant. Insects: Nectar-feeding insects gain enough water from nectar to rarely need to drink, though adult butterflies and moths may engage in puddling in order to obtain dissolved substances not abundant in nectar, particularly salts and amino acids. Insects: Some flying nectarivores, particularly larger bees, do not lose enough water by evaporation while on the wing to offset their high intake due to nectar-feeding, as well as water produced metabolically while flying. They must excrete while on the wing to prevent water loading, and may wait at the nest entrance to evaporate off some of their water load before flying out. Arachnids: There is evidence that some spiders, though normally thought to be exclusively carnivorous, consume nectar indirectly by consuming nectarivorous insects, and/or directly from flowers. 
This behavior is thought to be more common among spiders that live among foliage. A few make nectar their primary food source, such as Bagheera kiplingi, a member of the jumping spiders, while others such as the crab spiders feed more rarely and opportunistically. None of the spider groups observed feeding on nectar build webs; they are all wandering species. Birds: Nectar-feeding is widespread among birds, but no species consumes nectar exclusively. Most combine it with insectivory for a mixed diet. Of particular interest are three lineages of specialized nectarivorous birds: the hummingbirds (Trochilidae), sunbirds (Nectariniidae) and honeyeaters (Meliphagidae). These groups have adapted to permit a nectar-central diet, showing higher activity of digestive enzymes which break down sugars, higher rates of absorption of sugars, and altered kidney function. To maintain flight a bird must rapidly excrete much of the water content of the nectar it consumes. A hummingbird's kidneys are capable of rapidly producing large quantities of hyposmotic urine, i.e., urine containing a lower concentration of dissolved substances than the blood. Some other bird groups have one or more similar specializations – for instance, the lories, one group of Australasian parrots within the larger parrot family Psittacidae, possess similar digestive modifications. These are examples of parallel evolution. Birds: The Hawaiian honeycreepers have several species adapted to feed on nectar. The Hawaiian tree Metrosideros polymorpha is heavily dependent on pollination by the more or less nectarivorous honeycreepers. Mammals: Many species of bat feed on nectar, their lifestyle similar to that of nectarivorous birds. In the Americas there is significant overlap between flowers pollinated by bats and hummingbirds – both need similarly-composed nectar to keep up energy-intensive hovering flight. In this part of the world there is a particularly close association between some species of columnar cacti and bat species, who provide pollination in exchange for nectar with composition matching their nutritional needs. Nectarivorous bats might be at particular risk of extinction due to their reliance on particular species of flowering plants. A single marsupial species, the honey possum, feeds on nectar and pollen exclusively. It raises fewer young, which grow more slowly than those of other marsupials of its size, because of the time-consuming effort of nectar-drinking from many flowers to support itself. It may spend periods in deep sleep to reduce its need for food, and shows the typical nectarivore adaptations for excess water-removal.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Meter data management** Meter data management: Meter data management (MDM) refers to software that performs long-term data storage and management for the vast quantities of data delivered by smart metering systems. This data consists primarily of usage data and events that are imported from the head-end servers managing the data collection in advanced metering infrastructure (AMI) or automatic meter reading (AMR) systems. MDM is a component in the smart grid infrastructure promoted by utility companies. MDM may also incorporate meter data analytics, the analysis of data emitted by electric smart meters that record consumption of electric energy. MDM Systems: An MDM system will typically import the data, then validate, cleanse and process it before making it available for billing and analysis. Products for meter data include: Smart meter deployment planning and management; Meter and network asset monitoring and management; Automated smart meter provisioning (i.e. addition, deletion and updating of meter information at utility and AMR side) and billing cutover; Meter-to-Cash system, workforce management system, asset management and other systems. Furthermore, an MDM may provide reporting capabilities for load and demand forecasting, management reports, and customer service metrics. MDM Systems: An MDM provides application programming interfaces (APIs) between the MDM and the multiple destinations that rely on meter data. This is the first step to ensure that consistent processes and interpretations get applied to the data. Besides this common functionality, an advanced MDM may provide facilities for remote connect/disconnect of meters, power status verification/power restoration verification, and on-demand reading of remote meters. Data analysis: Smart meters send usage data to the central head end systems as often as every minute from each meter, whether installed at a residential, commercial or industrial customer. Utility companies sometimes analyze this voluminous data as well as collect it. Some of the reasons for analysis are to make efficient energy buying decisions based on the usage patterns, launching energy efficiency or energy rebate programs, energy theft detection, comparing and correcting metering service provider performance, and detecting and reducing unbilled energy. This data not only helps utility companies make their businesses more efficient, but also helps consumers save money by using less energy at peak times. So, it is both economical and green. Smart meter infrastructure is fairly new to the utilities industry. As utility companies collect more and more data over the years, they may uncover further uses for these detailed smart meter activities. Similar analysis can be applied to water and gas as well as electric usage. Data analysis: According to a 2012 web posting, data that is required for complete meter data analytics may not reside in the same database. Instead, it might reside in disparate databases among various departments of utility companies.
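To make the import/validate/cleanse/process pipeline described above concrete, here is a minimal sketch in Python; the record layout and the plausibility threshold are invented for the example and are not taken from any particular MDM product.

```python
from dataclasses import dataclass

@dataclass
class MeterRead:
    meter_id: str
    timestamp: str   # ISO 8601, as exported by a head-end system
    kwh: float       # interval usage

def validate(read: MeterRead, max_interval_kwh: float = 50.0) -> bool:
    """Flag physically implausible reads (negative or excessive usage)."""
    return 0.0 <= read.kwh <= max_interval_kwh

def cleanse(reads: list[MeterRead]) -> list[MeterRead]:
    """Replace invalid reads with an estimate (here, the mean of valid reads)."""
    valid = [r for r in reads if validate(r)]
    estimate = sum(r.kwh for r in valid) / len(valid) if valid else 0.0
    return [r if validate(r) else MeterRead(r.meter_id, r.timestamp, estimate)
            for r in reads]

reads = [MeterRead("M1", "2024-01-01T00:00", 1.2),
         MeterRead("M1", "2024-01-01T01:00", -3.0),  # e.g. a register rollover
         MeterRead("M1", "2024-01-01T02:00", 1.4)]
print([r.kwh for r in cleanse(reads)])  # [1.2, 1.3, 1.4]: bad read estimated
```

Real MDM systems use far richer estimation rules (interpolation, same-day-last-week profiles, and so on), but the validate-estimate-edit structure is the same.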
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dual Work Exchanger Energy Recovery** Dual Work Exchanger Energy Recovery: The Dual Work Exchanger Energy Recovery (DWEER) is an energy recovery device. It was developed in the 1990s by DWEER Bermuda and licensed by Calder AG for use in the Caribbean. Seawater reverse osmosis (SWRO) requires high pressure, and this device allows some of the energy in the reject stream to be reused. According to Calder AG, 97% of the energy in the reject stream is recovered. The DWEER system uses a piston-based, double-chamber, reciprocating, hydraulically driven pump and a patented valve system in a high-pressure batch process with large pressure vessels, similar to a locomotive, to capture and transfer the energy lost in the membrane reject stream. Its advantage is its high efficiency rate, but it suffers from complex and large mechanical components which are susceptible to corrosion from seawater due to their metal composition.
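For a rough sense of scale, the sketch below computes the hydraulic power recoverable from an assumed reject stream; the flow and pressure figures are illustrative assumptions, and only the 97% efficiency comes from the text above.

```python
# Hydraulic power in the brine reject stream: P = efficiency * Q * dp
flow_m3_per_h = 100.0        # assumed reject flow (illustrative)
pressure_bar = 60.0          # assumed reject pressure (illustrative)
efficiency = 0.97            # recovery efficiency cited for DWEER

q = flow_m3_per_h / 3600.0   # convert to m^3/s
dp = pressure_bar * 1e5      # convert to Pa
recovered_kw = efficiency * q * dp / 1000.0
print(f"{recovered_kw:.0f} kW recovered")  # about 162 kW for these figures
```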
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DEMOnstration Power Plant** DEMOnstration Power Plant: DEMO refers to a proposed class of nuclear fusion experimental reactors that are intended to demonstrate the net production of electric power from nuclear fusion. Most of the ITER partners have plans for their own DEMO-class reactors. With the possible exception of the EU and Japan, there are no plans for international collaboration as there was with ITER. Plans for DEMO-class reactors are intended to build upon the ITER experimental nuclear fusion reactor. The best-known and most thoroughly documented DEMO-class reactor design is that of the European Union (EU). The following parameters have been used as a baseline for design studies: the EU DEMO should produce at least 2000 megawatts (2 gigawatts) of fusion power on a continuous basis, and it should produce 25 times as much power as required for scientific breakeven, which does not include the power required to operate the reactor. The EU DEMO design of 2 to 4 gigawatts of thermal output will be on the scale of a modern electric power station. However, the nominal value of the steam turbine is 790 megawatts, which, after overcoming a 5% loss because of the coupling from the turbine to the synchronous generator, results in a nominal value for electrical power output of approximately 750 megawatts. To achieve its goals, if utilizing a conventional tokamak design, a DEMO reactor must have linear dimensions about 15% larger than ITER, and a plasma density about 30% greater than ITER. According to the timeline from EUROfusion, operation is planned to begin in 2051. It is estimated that subsequent commercial fusion reactors could be built for about a quarter of the cost of DEMO. However, the ITER experience suggests that development of a multi-billion US dollar tokamak-based technology innovation cycle able to develop fusion power stations that can compete with non-fusion energy technologies is likely to encounter the "valley of death" problem in venture capital, i.e., insufficient investment to go beyond prototypes, as DEMO tokamaks will need to develop new supply chains and are labor-intensive. DEMO's place in the development of fusion power: The 2019 US National Academies of Sciences, Engineering, and Medicine 'Final Report of the Committee on a Strategic Plan for U. S. Burning Plasma Research' noted, "a large DEMO device no longer appears to be the best long-term goal for the U.S. program. Instead, science and technology innovations and the growing interest and potential for private-sector ventures to advance fusion energy concepts and technologies suggest that smaller, more compact facilities would better attract industrial participation and shorten the time and lower the cost of the development path to commercial fusion energy". Approximately two dozen private-sector companies are now aiming to develop their own fusion reactors within the DEMO roadmap timetable. The US appears to be working towards one or more national DEMO-class fusion power plants on a cost-sharing basis. The 3 October 2019 UK Atomic Energy Authority announcement of its Spherical Tokamak for Energy Production (STEP) grid-connected reactor for 2040 suggests a combined DEMO/PROTO-phase machine apparently designed to leapfrog the ITER timetable. China's proposed CFETR machine, a grid-connected gigawatt-generating reactor, overlaps the DEMO timetable. 
Japan also has plans for a DEMO reactor, the JA-DEMO, via its upgraded JT-60, as does South Korea (K-DEMO). In November 2020, an independent expert panel reviewed EUROfusion's design and R&D work on the EU's DEMO, and EUROfusion confirmed it was proceeding with the next step of its Roadmap to Fusion Energy, namely the conceptual design of a DEMO in partnership with the European fusion community and industry, suggesting an EU-backed DEMO-phase machine that could formally bear the DEMO name. In June 2021, General Fusion announced it would accept the UK government's offer to host the world's first substantial public-private partnership fusion demonstration plant, at Culham Centre for Fusion Energy. The plant will be constructed from 2022 to 2025 and is intended to lead the way for commercial pilot plants in the late 2020s. The plant will be 70% of full scale and is expected to attain a stable plasma of 150 million degrees. History of the concept: The DEMO reactor concept goes back to the 1970s. A graph by W.M. Stacey shows that by 1979, there were completed DEMO designs by General Atomics and Oak Ridge National Laboratory. At a June 1986 meeting organized by the IAEA, participants agreed on the following concise definition for a DEMO reactor: "The DEMO is a complete electric power station demonstrating that all technologies required for a prototype commercial reactor work reliably enough to develop sufficient confidence for such commercial reactors to be competitive with other energy sources. The DEMO does not need to be economic itself nor does it have to be full scale reactor size." The following year, an IAEA document shows design parameters for a DEMO reactor in the US by Argonne National Laboratory, a DEMO reactor in Italy called FINTOR (Frascati, Ispra, Napoli Tokamak Reactor), a DEMO reactor at Culham (UK), and a European DEMO reactor called NET (Next European Torus). The major parameters of NET were 628 MW net electrical power and 2200 MW gross thermal power output, nearly the same as the current EU DEMO design. Timeline: The EU DEMO timeline has slipped several times, following slippage in the ITER timetable. The following timetable was presented at the IAEA Fusion Energy Conference in 2004 by Christopher Llewellyn Smith: Conceptual design was completed in 2017 Engineering design is to be complete by 2024 (after input from ITER D-T tests, and data from IFMIF - both delayed as of 2016) The first construction phase is to last from 2024 to 2033 The first phase of operation is to last from 2033 to 2038 The station is then to be expanded and updated (e.g. with phase 2 blanket design) The second phase of operation is to start in 2040. In 2012, the European Fusion Development Agreement (EFDA) presented a roadmap to fusion power with a plan showing the dependencies of DEMO activities on ITER and IFMIF. Timeline: Conceptual design to be complete in 2020 Engineering design complete, and decision to build, in 2030 Construction from 2031 to 2043 Operation from 2044, Electricity generation demonstration 2048. This 2012 roadmap was intended to be updated in 2015 and 2019. The EFDA was superseded by EUROfusion in 2013. The roadmap was subsequently updated in 2018. Conceptual design to be complete before 2030 Engineering design 2030-2040 Construction from 2040. This would imply operations commencing sometime in the 2050s. 
Technical considerations: When deuterium and tritium fuse, the two nuclei come together to form a resonant state which splits in turn into a helium nucleus (an alpha particle) and a high-energy neutron: ²₁H + ³₁H → ⁴₂He + ¹₀n + 17.6 MeV. DEMO will be constructed once designs which solve the many problems of current fusion reactors are engineered. These problems include: containing the plasma fuel at high temperatures, maintaining a great enough density of reacting ions, and capturing high-energy neutrons from the reaction without melting the walls of the reactor. Technical considerations: The activation energy for fusion is very large because the protons in each nucleus strongly repel one another; they are both positively charged. In order to fuse, the nuclei must be within 1 femtometre (1 × 10⁻¹⁵ metres) of each other, where quantum-tunnelling effects permit the parent nuclei to fuse together into the resonant state. The principle is to form a quasi-Maxwellian distribution for the deuterons and the tritons, at very high temperatures, where the nuclei in the tail of the Maxwellian undergo fusion, while the continuous elastic collisions among the other nuclei will not alter the state of the plasma. Technical considerations: DEMO, a tokamak reactor, requires both dense plasma and high temperatures for the fusion reaction to be sustained. High temperatures give the nuclei enough energy to overcome their electrostatic repulsion. This requires temperatures in the region of 100 MK, and is achieved using energy from various sources, including Ohmic heating (from electric currents induced in the plasma), microwaves, ion beams, or neutral beam injection. Technical considerations: Containment vessels melt at these temperatures, so the plasma is to be kept away from the walls using magnetic confinement. Once fusion has begun, high-energy neutrons at about 160 GK will flood out of the plasma along with X-rays, neither being affected by the strong magnetic fields. Since neutrons receive the majority of the energy from the fusion, they will be the reactor's main source of thermal energy output. The ultra-hot helium product at roughly 40 GK will remain behind (temporarily) to heat the plasma, and must make up for all the loss mechanisms (mostly bremsstrahlung X-rays from electron deceleration) which tend to cool the plasma rather quickly. Technical considerations: The tokamak containment vessel will have a lining composed of ceramic or composite tiles containing tubes in which warm liquid lithium metal will flow, cooling the lining. Lithium readily absorbs high-speed neutrons to form helium and tritium, becoming hot in the process. This increase in temperature is passed on to another (intermediate) coolant, possibly (pressurized) liquid water in a sealed, pressurized pipe. Heat from the intermediate coolant will be used to boil water in a heat exchanger. Steam from the heat exchanger will be used to drive turbines and generators, to create electric current. Waste heat energy in excess of the generated electrical energy is dumped into the environment. Helium byproduct is the "ash" of this fusion, and will not be allowed to accumulate too much in the plasma. Carefully measured amounts of deuterium and tritium are added back into the plasma and heated. Technical considerations: The lithium is processed to remove the helium and tritium, with the balance recycled to collect more heat and neutrons. 
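The lithium reactions referred to here are the standard tritium-breeding reactions, written out in conventional notation (they are not spelled out in the text itself): ⁶₃Li + ¹₀n → ⁴₂He + ³₁H + 4.8 MeV (slow neutrons) and ⁷₃Li + ¹₀n → ⁴₂He + ³₁H + ¹₀n − 2.5 MeV (fast neutrons). The helium and tritium mentioned above are the products of these reactions.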
Only a tiny amount of lithium is consumed. The DEMO project is planned to build upon and improve the concepts of ITER. Since it is only proposed at this time, many of the details, including heating methods and the method for the capture of high-energy neutrons, are still undetermined. Conceptual design: All aspects of DEMO were discussed in detail in a 2009 document by the Euratom-UKAEA Fusion Association. Four conceptual designs (PPCS A, B, C, and D) were studied. Challenges identified included: structural materials resistant to the high neutron flux; high-temperature superconductors, to avoid the need for large amounts of helium for cooling, which would challenge world helium reserves; and the need for high efficiency in the heating and current drive systems. In the 2012 timeline, the conceptual design should be completed in 2020. Radioactive waste: While fusion reactors like ITER and DEMO will produce neither transuranic nor fission product wastes, which together make up the bulk of the nuclear wastes produced by fission reactors, some of the components of the ITER and DEMO reactors will become radioactive due to neutrons impinging upon them. It is hoped that plasma facing materials will be developed so that wastes produced in this way will have much shorter half lives than the waste from fission reactors, with wastes remaining harmful for less than one century. Development of these materials is the prime purpose of the International Fusion Materials Irradiation Facility. The process of manufacturing tritium currently comes with production of long-lived waste. However, while early-stage ITER's tritium will mainly come from the current operation of heavy-water CANDU fission reactors, late-stage ITER (to some extent) and DEMO should be able to produce their own tritium thanks to tritium breeding, dispensing with the fission reactor currently used for this purpose. PROTO: PROTO was a proposal for a beyond-DEMO experiment, part of the European Commission long-term strategy for research of fusion energy. PROTO would act as a prototype power station, taking in any remaining technology refinements, and demonstrating electricity generation on a commercial basis. It was only expected after DEMO, beyond 2050, and will probably not be built as the second part of a DEMO/PROTO experiment, as it no longer appears in official documentation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Monitoring as a service** Monitoring as a service: Monitoring as a service (MaaS) is one of many cloud computing delivery models under anything as a service (XaaS). It is a framework that facilitates the deployment of monitoring functionalities for various other services and applications within the cloud. The most common application for MaaS is online state monitoring, which continuously tracks certain states of applications, networks, systems, instances or any element that may be deployable within the cloud.
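A minimal sketch of the online state monitoring described above, assuming a hypothetical HTTP health endpoint (the URL and polling interval are invented for the example; production MaaS offerings wrap this pattern in a managed, scalable service):

```python
import time
import urllib.request

def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

# Continuously track the state of a monitored service (hypothetical URL).
while True:
    state = "UP" if check("http://example.com/health") else "DOWN"
    print(time.strftime("%H:%M:%S"), state)
    time.sleep(30)  # polling interval
```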
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pedersen process** Pedersen process: The Pedersen process is a process of refining aluminum that first separates iron by reducing it to metal, and reacts alumina with lime to produce calcium aluminate, which is then leached with sodium hydroxide. It is more environmentally friendly than the better-known Bayer process, because instead of producing the caustic residue known as red mud, it produces pig iron as a byproduct. Red mud is considered both an economic and environmental challenge in the aluminum industry because it is considered a waste with little benefit: it harms the environment with its high pH and is costly to maintain, even when in a landfill. Iron, however, is used in the manufacture of steel, and has structural uses in civil engineering and chemical uses as a catalyst. History: The Pedersen process was invented by Harald Pedersen in the 1920s and used in Norway for over 40 years before shutting down, the Pedersen process being less economically competitive than the Bayer process. However, it is believed a modern Pedersen process could be economically viable with "low-quality" bauxite: even though "low-quality" bauxite has less alumina in the form of trihydrate gibbsite, it has more iron oxide, which would be converted to pig iron in the smelting process instead of red mud. Use in aluminum smelting: In most of today's smelting, aluminum ore, also known as bauxite, is first smelted into alumina through the Bayer process. This step could be replaced by the Pedersen process; either route results in alumina. Unlike the smelting processes of iron and coal into steel or copper and tin into bronze, which require thermal energy, alumina must be smelted with electrical energy. This is done through the Hall–Héroult process, producing 99.5–99.8% pure aluminum.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fast Virtual Disk** Fast Virtual Disk: Fast Virtual Disk (better known as FVD) is a virtualization-oriented disk image file format developed by IBM for the QEMU virtualization platform. It differs from existing paravirtualization-centric virtual disk image formats through a design that emphasizes lack of contention and separation of concerns between the host and guest kernels through deduplication of filesystem and block layer storage management. FVD can be written either directly to a physical or logical blockstore (avoiding host filesystem overheads), or to a regular host file system file. It strives to maintain similarity to raw disk layouts, eliminate host filesystem and disk image compression overheads, and minimize metadata-related overheads.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Curses (programming library)** Curses (programming library): curses is a terminal control library for Unix-like systems, enabling the construction of text user interface (TUI) applications. The name is a pun on the term "cursor optimization". It is a library of functions that manage an application's display on character-cell terminals (e.g., VT100). Overview: Using curses, programmers are able to write text-based applications without writing directly for any specific terminal type. The curses library on the executing system sends the correct control characters based on the terminal type. It provides an abstraction of one or more windows that maps onto the terminal screen. Each window is represented by a character matrix. The programmer sets up the desired appearance of each window, then tells the curses package to update the screen. The library determines a minimal set of changes that are needed to update the display and then executes these using the terminal's specific capabilities and control sequences. Overview: In short, this means that the programmer simply creates a character matrix of how the screen should look and lets curses handle the work. Overview: The curses API is described in several places. Most implementations of curses use a database that can describe the capabilities of thousands of different terminals. There are a few implementations, such as PDCurses, which use specialized device drivers rather than a terminal database. Most implementations use terminfo; some use termcap. Curses has the advantage of back-portability to character-cell terminals and simplicity. For an application that does not require bit-mapped graphics or multiple fonts, an interface implementation using curses will usually be much simpler and faster than one using an X toolkit. History: The first curses library was written by Ken Arnold and originally released with BSD UNIX, where it was used for several games, most notably Rogue. Some improvements were made to the BSD library in the 1990s as "4.4BSD" curses, e.g., to provide more than one type of video highlighting. However, those are not widely used. History: The name "curses" is a pun on cursor optimization. Sometimes it is incorrectly stated that curses was used by the vi editor. In fact the code in curses that optimizes moving the cursor from one place on the screen to another was borrowed from vi, which predated curses. According to Goodheart, Ken Arnold's original implementation of curses started by reusing functions from the termcap library, and adding to that. A few years later, Mary Ann Horton, who had maintained the vi and termcap sources at Berkeley, went to AT&T Corporation and made a different version using terminfo, which became part of UNIX System III and UNIX System V. Due to licensing restrictions on the latter, the BSD and AT&T versions of the library were developed independently. In addition to the termcap/terminfo improvement, other improvements were made in the AT&T version: video highlighting (bold, underline) The BSD version supported only standout. History: line-drawing The BSD version gave little support here. History: colors This was not supported in the BSD version. AT&T curses development appears to have halted in the mid-1990s when X/Open Curses was defined. In 1995, the BSD maintainer, Keith Bostic, officially deprecated the curses library in favor of ncurses. Development of ncurses and PDCurses continues. 
A version of BSD curses continues to be maintained in the NetBSD operating system (wide character support, termcap to terminfo migration, etc.). History: pcurses and PDCurses Different lines of development started by imitating AT&T curses, with at least three implementations: pcurses by Pavel Curtis (started in 1982); PDCurses (Public Domain curses) by Mark Hessling to support his editor THE (started in 1987) as well as Rexx/Curses; and PC curses (version 1.4 and earlier) by Bjorn Larsson, inspired by Pavel Curtis' library before 1990. ncurses ncurses (new curses) "originated as pcurses ... and was re-issued as ncurses 1.8.1 in late 1993". ncurses is the most widely known implementation of curses, and has motivated further development of other variations, such as BSD curses in the NetBSD project. Portability: Although the ncurses library was initially developed under Linux, OpenBSD, FreeBSD, and NetBSD, it has been ported to many other ANSI/POSIX UNIX systems, mainly by Thomas Dickey. PDCurses, while not identical to ncurses, uses the same function calls and operates the same way as ncurses does, except that PDCurses targets different devices, e.g., console windows for DOS, Win32, OS/2, as well as X11. Porting between the two is not difficult. For example, the roguelike game ADOM was written for Linux and ncurses, later ported to DOS and PDCurses. Curses-based software: Curses-based software is software whose user interface is implemented through the curses library, or a compatible library (such as ncurses). Curses is designed to facilitate GUI-like functionality on a text-only device, such as a PC running in console mode, a hardware ANSI terminal, a Telnet or SSH client, or similar. Curses-based software: Curses-based programs often have a user interface that resembles a traditional graphical user interface, including 'widgets' such as text boxes and scrollable lists, rather than the command line interface (CLI) most commonly found on text-only devices. This can make them more user-friendly than a CLI-based program, while still being able to run on text-only devices. Curses-based software can also have a lighter resource footprint and operate on a wider range of systems (both in terms of hardware and software) than their GUI-based counterparts. This includes old pre-1990 machines along with modern embedded systems using text-only displays. Curses-based software: Curses is most commonly associated with Unix-like operating systems, although implementations for Microsoft Windows also exist.
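The programming model described in the overview above (build up the desired window contents, then let the library compute a minimal update to the terminal) fits in a few lines. Here is a minimal sketch using Python's standard curses binding, a thin wrapper over ncurses on Unix-like systems:

```python
import curses

def main(stdscr):
    curses.curs_set(0)                        # hide the hardware cursor
    stdscr.clear()                            # set up the character matrix
    stdscr.addstr(0, 0, "Hello from curses!")
    stdscr.addstr(2, 0, "Press any key to exit.")
    stdscr.refresh()                          # curses sends the minimal update
    stdscr.getch()                            # wait for one keypress

curses.wrapper(main)  # wrapper handles terminal setup and safe teardown
```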
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ytterbium-doped lutetium orthovanadate** Ytterbium-doped lutetium orthovanadate: Ytterbium-doped lutetium orthovanadate, typically abbreviated Yb:LuVO4, is an active laser medium. The peak absorption cross section for the pi-polarization is 8.42×10⁻²⁰ cm² at 985 nm, and the stimulated emission cross section at 1020 nm is 1.03×10⁻²⁰ cm².
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Panasonic Lumix DMC-G2** Panasonic Lumix DMC-G2: The Panasonic Lumix DMC-G2 is a digital mirrorless interchangeable lens camera adhering to the Micro Four Thirds (MFT) system design standard developed by Olympus and Panasonic. It was announced in March 2010 along with the lesser-featured Panasonic Lumix DMC-G10. Introduced as successor to the Panasonic Lumix DMC-G1, the G2 included 720p HD video capability using both AVCHD Lite and Motion JPEG recording formats. Panasonic Lumix DMC-G2: The G2 has a resistive touchscreen to control many camera functions, including easy selection of a focus point within the live view frame. The touchscreen interface allows control duplicating the numerous dials and buttons on the G2. The G2 shipped with a new Panasonic 14–42 mm kit zoom lens, a lighter and less expensive version of the original Panasonic 14–45 mm kit zoom that shipped with the Panasonic G1. Panasonic Lumix DMC-G2: The United States MSRP with 14–42 mm kit zoom lens was US$800.00. Available colors were black, red and blue. The Micro Four Thirds system: The Micro Four Thirds (MFT) system design standard was jointly announced in 2008 by Olympus and Panasonic, as a further evolution of the similarly named predecessor Four Thirds System pioneered by Olympus. The Micro Four Thirds system standard uses the same sized sensor (nominal 4000 pixels by 3000 pixels) as the original Four Thirds system. One advantage of the smaller MFT system sensor (when compared to the APS-C and full-frame sensors of market leaders Canon and Nikon) is potentially smaller and lighter lenses. The smaller MFT sensor with reduced image circle allows the development of smaller and lighter native lenses. The MFT sensor has a crop factor of 2.0 when compared to 35mm film equivalent full frame sensors. By comparison, the more popular consumer (as opposed to professional) DSLRs such as those made by Canon, Nikon and Sony have 1.5 to 1.6 crop factor APS-C sensors, which means larger and heavier lens designs. For example, a typical Olympus MFT M.Zuiko 14-42mm f/3.5-5.6 kit lens weighs 112g, is 56mm in diameter and 50mm in length. The equivalent Canon APS-C DSLR EF-S 18-55mm f3.5–5.6 kit lens weighs 190g, and is 69mm in diameter and 80mm in length. While the older Four Thirds system design standard allowed the incorporation of a single lens reflex (SLR) camera design including a mirror box and pentaprism based optical viewfinder system, the MFT system design standard sought to pursue a technically different camera, and specifically slimmed down the key physical specifications, which eliminated the ability to include the traditional complex optical path and the bulky mirror box needed for a SLR optical viewfinder. Instead, MFT uses either a built-in (Panasonic) or optional (Olympus/Panasonic) compact electronic viewfinder (EVF) and/or LCD back panel displaying a live view from the main image sensor. Use of an EVF/back-panel LCD and the smaller Four Thirds image sensor format allows for smaller and lighter camera bodies and lenses. The MFT system standard also specifically includes seamless switching between still photography and HD video recording as a design criterion. The Micro Four Thirds system: MFT cameras are physically slimmer than most interchangeable lens cameras because the standard specifies a much-reduced lens mount flange to imaging sensor plane distance of just 20mm. Typically, this so-called flange focal distance is over 40mm on most interchangeable lens cameras. 
The MFT system's short flange focal distance allows, through use of an adapter, virtually any manufacturer's existing and legacy still camera interchangeable lenses (as well as some video and cine lenses) to be mounted on an MFT body, albeit with manual focus and manual aperture control. For example, many theoretically obsolete 35mm film camera lenses, as well as current lenses for APS-C and full frame DSLRs, are usable on MFT cameras. As an example, an older (i.e., used, obsolete and low priced), but still high quality, 50mm f/1.8 "standard" lens from a 35mm film camera can be used on an MFT camera body. With MFT sensors having a crop factor of 2.0, the old 50mm f/1.8 "standard" lens becomes a high-speed (although manual) 100mm-equivalent f/1.8 telephoto portrait lens. So the MFT system allows expensive lenses that have outlived their 35mm film format cameras to be re-used on a modern digital camera body capable of both still and HD video recording. Similarly, the MFT system design allows current DSLR lenses to be used as well, although only with manual focus and aperture control. Panasonic Lumix DMC-G2 features: Upon introduction in March 2010, the Panasonic Lumix DMC-G2 was marketed as the world's first interchangeable lens camera with an articulated, touch-control LCD. Also added were 720p HD video, a redesigned physical user interface with changed placement of dials and button controllers, and an electronic viewfinder. Notably, the G2 was not capable of full 1080p HD video, unlike the then top-of-the-line Panasonic GH1. The ability to choose the focus point by touching the desired area on the screen was implemented in all Panasonic MFT cameras introduced after the G2. Other manufacturers such as Sony, with its new NEX family of cameras, and Olympus, in its PEN E-P3 MFT camera, also incorporated touch screen camera controls. Panasonic Lumix DMC-G2 features: The "new" 14-42mm kit zoom lens was less expensive than the original optical image stabilized 14-45mm f/3.5-5.6 kit zoom lens that came with the G1. The 14-42mm kit lens is lighter but longer than the original 14-45mm kit lens, features a plastic rather than metal lens mount, and omits the on-off switch for the in-lens optical image stabilization system; however, the stabilization system can still be switched on and off through the camera menus. Many enthusiasts regard the 14-42mm kit lens as a step down in both optical image quality and build quality from the original 14-45mm kit lens. Panasonic Lumix DMC-G2 features: Body colors and MSRP The camera was available in three colors: black (suffix K), red (R) and blue (B). MSRP in the United States for the body and 14-42mm kit zoom lens was US$800.00. Successor model The G2 camera's successor model is the Panasonic Lumix DMC-G3, which was announced in May 2011. Video recording formats: AVCHD Lite format (.MTS files) and Motion JPEG format (.MOV files)
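The crop-factor arithmetic above reduces to a single multiplication: the 35mm-equivalent focal length is the actual focal length times the 2.0 MFT crop factor. A minimal Python sketch; the function name and sample values are illustrative, not drawn from Panasonic documentation:

```python
def full_frame_equivalent(focal_length_mm: float, crop_factor: float = 2.0) -> float:
    """Return the 35mm-equivalent focal length for a lens on a cropped sensor."""
    return focal_length_mm * crop_factor

# A legacy 50mm lens adapted to an MFT body frames like a 100mm lens:
print(full_frame_equivalent(50))                             # 100.0
# The G2's 14-42mm kit zoom covers a 28-84mm equivalent range:
print(full_frame_equivalent(14), full_frame_equivalent(42))  # 28.0 84.0
```

Note that the crop factor changes framing only; the lens's physical f-number is unchanged, which is why the adapted 50mm f/1.8 remains "high-speed".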
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Neotonality** Neotonality: Neotonality (or neocentricity) is an inclusive term referring to musical compositions of the twentieth century in which the tonality of the common-practice period (i.e. functional harmony and tonic-dominant relationships) is replaced by one or several nontraditional tonal conceptions, such as tonal assertion or contrapuntal motion around a central chord. Neotonality: Although associated with the neoclassicism of Stravinsky and Les Six in France and Hindemith in Germany, neotonality is a broader concept, encompassing such nationalist composers as Bartók and Kodály in Hungary, Janáček and Martinů in Czechoslovakia, Vaughan Williams in England, Chávez and Revueltas in Mexico, Villa-Lobos in Brazil, and Ginastera in Argentina. Figures with less nationalistic ties such as Prokofiev, Shostakovich, William Walton, Britten, and Samuel Barber also are counted amongst neotonal composers. Without establishing any one style or school, neotonality became the dominant international idea in the 1930s and 1940s ("new tonalities"). Many of these composers (e.g., Bartók, Hindemith, Prokofiev, and Stravinsky) combine features characteristic of common-practice tonality with features of atonality. The most common means of establishing a tonal centre in neotonality is by "assertion". This may involve repeating a central pitch or emphasizing it in some other way, for example through instrumentation, register, rhythmic elongation, or metric accent. No single method of tonal assertion ever became dominant in the 20th century. Another possibility is to retain some element of common-practice tonality, such as beginning and ending on the same triad, using tonic or dominant pedal points, or through the use of contrapuntal motion around some central chord.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Simca Profissional** Simca Profissional: The Simca Profissional was a successor to the Simca Alvorada, which was itself a stripped version of the entry-level Simca Chambord. Simca do Brasil had responded reluctantly to the demand by the Brazilian government under president Juscelino Kubitschek that every car manufacturer must offer an affordable basic version within their range. The idea was to give as many Brazilians as possible the opportunity to own a car. New incentive, new version: In 1965, the Brazilian government created a new public financing tool through its publicly owned bank Caixa Econômica Federal that would allow Brazilians to finance their vehicle over four years at a monthly interest rate of 1%. This was intended to attract a new range of clients, and Simca do Brasil looked into how to make the Alvorada even cheaper in order to make it attractive for, for example, taxicab drivers. Plastic replaces leather: The Simca Profissional appeared in 1965 with three colour options (yellow, green and cream white) and no chrome or trim (even the bumpers were painted dark gray). The already very simple interior of the Alvorada was downscaled further: the seats received plastic covers and the door panels were bare dark cardboard screwed onto the metal. But the Profissional was 30% cheaper than its far posher sibling, the all-chrome-and-leather Simca Chambord. The production numbers of this version apparently were never documented and, unlike the Simca Alvorada, the Simca Profissional had no distinct range of chassis numbers, so this version is mixed in with the production figures cited for the Simca Chambord. Plastic replaces leather: Production figures 1965-1966 = number of units produced not documented by Simca do Brasil
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Null (radio)** Null (radio): In radio electronics, a null is a direction in an antenna's radiation pattern where the antenna radiates almost no radio waves, so the far field signal strength is a local minimum. Nulls occur because different parts of an antenna radiate radio waves of different phase. In directions in which the antenna radiates equal-amplitude radio waves of opposite phase, the radio waves cancel, resulting in little or no radio power being radiated in that direction. In other directions the radio waves from different parts of the antenna are in phase and reinforce, resulting in a maximum of signal strength in the radiation pattern, called a lobe. Null (radio): In transmitting antennas designed to provide broad coverage, nulls can be a problem, preventing reception in a given area. Null fill in the vertical plane is used to prevent this. Nulling antenna: Nulls can also be used to advantage. In a radio receiver, the receiver's antenna can be adjusted so the direction of an interference source lies in a null of the antenna, to minimize reception of the interference. Nulling antenna: Nulls have also been used intentionally to prevent an antenna from broadcasting to a certain area. For example, CIII-DT-22, a repeater in Wheatley of the Toronto-based Global TV station CIII-DT, has a null towards Windsor to protect the broadcast rights of American stations in the bordering Detroit area. Radio direction finding (RDF) receivers use special antennas with very narrow, sharp nulls to find the location of transmitters. The antenna is rotated until the received signal is at a minimum; at that point the antenna's null points along the bearing line to the transmitter.
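The cancellation mechanism described above lends itself to a quick numerical check. The following Python sketch assumes two idealized, in-phase isotropic radiators spaced half a wavelength apart, an idealization chosen purely for illustration; a real antenna's pattern multiplies an element pattern onto this array factor:

```python
import numpy as np

# Two identical in-phase point sources separated by d = lambda/2.
# The wave from the farther element travels an extra d*cos(theta),
# giving it a phase lag of k*d*cos(theta) at a distant observer.
wavelength = 1.0
d = wavelength / 2
k = 2 * np.pi / wavelength

theta = np.radians(np.arange(0, 181))        # angle from the array axis, 1-degree steps
phase_diff = k * d * np.cos(theta)           # phase lag from the extra path length
field = np.abs(1 + np.exp(1j * phase_diff))  # relative far-field amplitude

for deg in (0, 45, 90):
    print(f"theta = {deg:3d} deg -> relative field = {field[deg]:.3f}")
# theta =  0 deg (along the array axis): the waves arrive in antiphase and
#                cancel, producing a null (field ~ 0)
# theta = 90 deg (broadside): equal paths, in-phase arrival, the lobe maximum (2)
```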
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Benzene** Benzene: Benzene is an organic chemical compound with the molecular formula C6H6. The benzene molecule is composed of six carbon atoms joined in a planar ring with one hydrogen atom attached to each. Because it contains only carbon and hydrogen atoms, benzene is classed as a hydrocarbon. Benzene is a natural constituent of petroleum and is one of the elementary petrochemicals. Due to the cyclic continuous pi bonds between the carbon atoms, benzene is classed as an aromatic hydrocarbon. Benzene is a colorless and highly flammable liquid with a sweet smell, and is partially responsible for the aroma of gasoline. It is used primarily as a precursor to the manufacture of chemicals with more complex structure, such as ethylbenzene and cumene, of which billions of kilograms are produced annually. Although benzene is a major industrial chemical, it finds limited use in consumer items because of its toxicity. Benzene is a volatile organic compound. History: Discovery The word "benzene" derives from "gum benzoin" (benzoin resin), an aromatic resin known since ancient times in Southeast Asia and known to European pharmacists and perfumers by the 16th century via trade routes. An acidic material was derived from benzoin by sublimation, and named "flowers of benzoin", or benzoic acid. The hydrocarbon derived from benzoic acid thus acquired the name benzin, benzol, or benzene. Michael Faraday first isolated and identified benzene in 1825 from the oily residue derived from the production of illuminating gas, giving it the name bicarburet of hydrogen. In 1833, Eilhard Mitscherlich produced it by distilling benzoic acid (from gum benzoin) and lime. He gave the compound the name benzin. In 1836, the French chemist Auguste Laurent named the substance "phène"; this word has become the root of the English word "phenol", which is hydroxylated benzene, and "phenyl", the radical formed by abstraction of a hydrogen atom (free radical H•) from benzene. History: In 1845, Charles Blachford Mansfield, working under August Wilhelm von Hofmann, isolated benzene from coal tar. Four years later, Mansfield began the first industrial-scale production of benzene, based on the coal-tar method. Gradually, the sense developed among chemists that a number of substances were chemically related to benzene, comprising a diverse chemical family. In 1855, Hofmann used the word "aromatic" to designate this family relationship, after a characteristic property of many of its members. In 1997, benzene was detected in deep space. History: Ring formula The empirical formula for benzene was long known, but its highly polyunsaturated structure, with just one hydrogen atom for each carbon atom, was challenging to determine. Archibald Scott Couper in 1858 and Johann Josef Loschmidt in 1861 suggested possible structures that contained multiple double bonds or multiple rings, but too little evidence was then available to help chemists decide on any particular structure. History: In 1865, the German chemist Friedrich August Kekulé published a paper in French (for he was then teaching in Francophone Belgium) suggesting that the structure contained a ring of six carbon atoms with alternating single and double bonds. The next year he published a much longer paper in German on the same subject.
Kekulé used evidence that had accumulated in the intervening years—namely, that there always appeared to be only one isomer of any monoderivative of benzene, and that there always appeared to be exactly three isomers of every disubstituted derivative—now understood to correspond to the ortho, meta, and para patterns of arene substitution—to argue in support of his proposed structure. Kekulé's symmetrical ring could explain these curious facts, as well as benzene's 1:1 carbon-hydrogen ratio. History: The new understanding of benzene, and hence of all aromatic compounds, proved to be so important for both pure and applied chemistry that in 1890 the German Chemical Society organized an elaborate appreciation in Kekulé's honor, celebrating the twenty-fifth anniversary of his first benzene paper. Here Kekulé spoke of the creation of the theory. He said that he had discovered the ring shape of the benzene molecule after having a reverie or day-dream of a snake biting its own tail (a symbol in ancient cultures known as the ouroboros). This vision, he said, came to him after years of studying the nature of carbon-carbon bonds. This was seven years after he had solved the problem of how carbon atoms could bond to up to four other atoms at the same time. Curiously, a similar, humorous depiction of benzene had appeared in 1886 in a pamphlet entitled Berichte der Durstigen Chemischen Gesellschaft (Journal of the Thirsty Chemical Society), a parody of the Berichte der Deutschen Chemischen Gesellschaft, only the parody had monkeys seizing each other in a circle, rather than snakes as in Kekulé's anecdote. Some historians have suggested that the parody was a lampoon of the snake anecdote, possibly already well known through oral transmission even if it had not yet appeared in print. Kekulé's 1890 speech in which this anecdote appeared has been translated into English. If the anecdote is the memory of a real event, circumstances mentioned in the story suggest that it must have happened early in 1862.In 1929, the cyclic nature of benzene was finally confirmed by the crystallographer Kathleen Lonsdale using X-ray diffraction methods. Using large crystals of hexamethylbenzene, a benzene derivative with the same core of six carbon atoms, Lonsdale obtained diffraction patterns. Through calculating more than thirty parameters, Lonsdale demonstrated that the benzene ring could not be anything but a flat hexagon, and provided accurate distances for all carbon-carbon bonds in the molecule. History: Nomenclature The German chemist Wilhelm Körner suggested the prefixes ortho-, meta-, para- to distinguish di-substituted benzene derivatives in 1867; however, he did not use the prefixes to distinguish the relative positions of the substituents on a benzene ring. It was the German chemist Carl Gräbe who, in 1869, first used the prefixes ortho-, meta-, para- to denote specific relative locations of the substituents on a di-substituted aromatic ring (viz, naphthalene). In 1870, the German chemist Viktor Meyer first applied Gräbe's nomenclature to benzene. History: Early applications In 1903, Ludwig Roselius popularized the use of benzene to decaffeinate coffee. This discovery led to the production of Sanka. This process was later discontinued. Benzene was historically used as a significant component in many consumer products such as liquid wrench, several paint strippers, rubber cements, spot removers, and other products. 
Manufacture of some of these benzene-containing formulations ceased in about 1950, although Liquid Wrench continued to contain significant amounts of benzene until the late 1970s. History: Occurrence Trace amounts of benzene are found in petroleum and coal. It is a byproduct of the incomplete combustion of many materials. For commercial use, until World War II, much benzene was obtained as a by-product of coke production (or "coke-oven light oil") for the steel industry. However, in the 1950s, increased demand for benzene, especially from the growing polymers industry, necessitated the production of benzene from petroleum. Today, most benzene comes from the petrochemical industry, with only a small fraction being produced from coal. Benzene has been detected on Mars. Structure: X-ray diffraction shows that all six carbon-carbon bonds in benzene are of the same length, at 140 picometres (pm). The C–C bond lengths are greater than a double bond (135 pm) but shorter than a single bond (147 pm). This intermediate distance is caused by electron delocalization: the electrons for C=C bonding are distributed equally between each of the six carbon atoms. Benzene has 6 hydrogen atoms, fewer than the corresponding parent alkane, hexane, which has 14. Benzene and cyclohexane have a similar structure; only the ring of delocalized electrons and the loss of one hydrogen per carbon distinguish benzene from cyclohexane. The molecule is planar. The molecular orbital description involves the formation of three delocalized π orbitals spanning all six carbon atoms, while the valence bond description involves a superposition of resonance structures. It is likely that this stability contributes to the peculiar molecular and chemical properties known as aromaticity. To accurately reflect the nature of the bonding, benzene is often depicted with a circle inside a hexagonal arrangement of carbon atoms. Structure: Derivatives of benzene occur so often as a component of organic molecules that the Unicode Consortium has allocated a symbol in the Miscellaneous Technical block with the code U+232C (⌬) to represent it with three double bonds, and U+23E3 (⏣) for a delocalized version. Benzene derivatives: Many important chemical compounds are derived from benzene by replacing one or more of its hydrogen atoms with another functional group. Examples of simple benzene derivatives are phenol, toluene, and aniline, abbreviated PhOH, PhMe, and PhNH2, respectively. Linking benzene rings gives biphenyl, C6H5–C6H5. Further loss of hydrogen gives "fused" aromatic hydrocarbons, such as naphthalene, anthracene, phenanthrene, and pyrene. The limit of the fusion process is the hydrogen-free allotrope of carbon, graphite. Benzene derivatives: In heterocycles, carbon atoms in the benzene ring are replaced with other elements. The most important variations contain nitrogen. Replacing one CH with N gives the compound pyridine, C5H5N. Although benzene and pyridine are structurally related, benzene cannot be converted into pyridine. Replacement of a second CH bond with N gives, depending on the location of the second N, pyridazine, pyrimidine, or pyrazine. Production: Four chemical processes contribute to industrial benzene production: catalytic reforming, toluene hydrodealkylation, toluene disproportionation, and steam cracking. According to the ATSDR Toxicological Profile for benzene, between 1978 and 1981, catalytic reformates accounted for approximately 44–50% of the total U.S. benzene production.
Production: Catalytic reforming In catalytic reforming, a mixture of hydrocarbons with boiling points between 60 and 200 °C is blended with hydrogen gas and then exposed to a bifunctional platinum chloride or rhenium chloride catalyst at 500–525 °C and pressures ranging from 8–50 atm. Under these conditions, aliphatic hydrocarbons form rings and lose hydrogen to become aromatic hydrocarbons. The aromatic products of the reaction are then separated from the reaction mixture (or reformate) by extraction with any one of a number of solvents, including diethylene glycol or sulfolane, and benzene is then separated from the other aromatics by distillation. The extraction step is designed to produce aromatics with the lowest content of non-aromatic components. Recovery of the aromatics, commonly referred to as BTX (benzene, toluene and xylene isomers), involves such extraction and distillation steps. Production: In similar fashion to catalytic reforming, UOP and BP commercialized a method for converting LPG (mainly propane and butane) to aromatics. Production: Toluene hydrodealkylation Toluene hydrodealkylation converts toluene to benzene. In this hydrogen-intensive process, toluene is mixed with hydrogen, then passed over a chromium, molybdenum, or platinum oxide catalyst at 500–650 °C and 20–60 atm pressure. Sometimes, higher temperatures are used instead of a catalyst (under similar reaction conditions). Under these conditions, toluene undergoes dealkylation to benzene and methane: C6H5CH3 + H2 → C6H6 + CH4. This irreversible reaction is accompanied by an equilibrium side reaction that produces biphenyl (also known as diphenyl) at higher temperature: 2 C6H6 ⇌ H2 + C6H5–C6H5. If the raw material stream contains many non-aromatic components (paraffins or naphthenes), those are likely decomposed to lower hydrocarbons such as methane, which increases the consumption of hydrogen. Production: A typical reaction yield exceeds 95%. Sometimes, xylenes and heavier aromatics are used in place of toluene, with similar efficiency. This is often called an "on-purpose" methodology for producing benzene, in contrast to conventional BTX (benzene-toluene-xylene) extraction processes. Toluene disproportionation Toluene disproportionation (TDP) is the conversion of toluene to benzene and xylene. Given that demand for para-xylene (p-xylene) substantially exceeds demand for other xylene isomers, a refinement of the TDP process called Selective TDP (STDP) may be used. In this process, the xylene stream exiting the TDP unit is approximately 90% p-xylene. In some systems, even the benzene-to-xylenes ratio is modified to favor xylenes. Production: Steam cracking Steam cracking is the process for producing ethylene and other alkenes from aliphatic hydrocarbons. Depending on the feedstock used to produce the olefins, steam cracking can produce a benzene-rich liquid by-product called pyrolysis gasoline. Pyrolysis gasoline can be blended with other hydrocarbons as a gasoline additive, or routed through an extraction process to recover BTX aromatics (benzene, toluene and xylenes). Production: Other methods Although of no commercial significance, many other routes to benzene exist. Phenol and halobenzenes can be reduced with metals. Benzoic acid and its salts undergo decarboxylation to benzene. The reaction of the diazonium compound derived from aniline with hypophosphorous acid gives benzene. Trimerisation of acetylene gives benzene. Complete decarboxylation of mellitic acid gives benzene.
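As a rough illustration of the hydrodealkylation stoichiometry and the quoted >95% yield, the sketch below computes a per-tonne mass balance in Python; the numbers are straightforward arithmetic from molar masses, not plant data:

```python
# C6H5CH3 + H2 -> C6H6 + CH4, molar masses in g/mol
M_TOLUENE, M_BENZENE, M_METHANE, M_H2 = 92.14, 78.11, 16.04, 2.016

def benzene_from_toluene(toluene_kg: float, yield_fraction: float = 0.95) -> dict:
    """Mass balance for a given toluene feed at an assumed single-pass yield."""
    mol_fed = toluene_kg * 1000 / M_TOLUENE   # mol of toluene fed
    mol_conv = mol_fed * yield_fraction       # mol converted to benzene (1:1 molar)
    return {
        "benzene_kg": mol_conv * M_BENZENE / 1000,
        "methane_kg": mol_conv * M_METHANE / 1000,
        "hydrogen_consumed_kg": mol_conv * M_H2 / 1000,
    }

print(benzene_from_toluene(1000))
# Roughly 805 kg of benzene and 165 kg of methane per tonne of toluene,
# consuming about 21 kg of hydrogen at 95% conversion.
```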
Uses: Benzene is used mainly as an intermediate to make other chemicals, above all ethylbenzene (and other alkylbenzenes), cumene, cyclohexane, and nitrobenzene. In 1988 it was reported that two-thirds of all chemicals on the American Chemical Society's lists contained at least one benzene ring. More than half of the entire benzene production is processed into ethylbenzene, a precursor to styrene, which is used to make polymers and plastics like polystyrene. Some 20% of the benzene production is used to manufacture cumene, which is needed to produce phenol and acetone for resins and adhesives. Cyclohexane consumes around 10% of the world's benzene production; it is primarily used in the manufacture of nylon fibers, which are processed into textiles and engineering plastics. Smaller amounts of benzene are used to make some types of rubbers, lubricants, dyes, detergents, drugs, explosives, and pesticides. In 2013, the biggest consumer country of benzene was China, followed by the USA. Benzene production is currently expanding in the Middle East and in Africa, whereas production capacities in Western Europe and North America are stagnating. Toluene is now often used as a substitute for benzene, for instance as a fuel additive. The solvent properties of the two are similar, but toluene is less toxic and has a wider liquid range. Toluene is also processed into benzene. Uses: Component of gasoline As a gasoline (petrol) additive, benzene increases the octane rating and reduces knocking. As a consequence, gasoline often contained several percent benzene before the 1950s, when tetraethyl lead replaced it as the most widely used antiknock additive. With the global phaseout of leaded gasoline, benzene has made a comeback as a gasoline additive in some nations. In the United States, concern over its negative health effects and the possibility of benzene entering the groundwater has led to stringent regulation of gasoline's benzene content, with limits typically around 1%. European petrol specifications now contain the same 1% limit on benzene content. The United States Environmental Protection Agency introduced new regulations in 2011 that lowered the benzene content in gasoline to 0.62%. In many European languages, the word for petroleum or gasoline is an exact cognate of "benzene". Reactions: The most common reactions of benzene involve substitution of a proton by other groups. Electrophilic aromatic substitution is a general method of derivatizing benzene. Benzene is sufficiently nucleophilic that it undergoes substitution by acylium ions and alkyl carbocations to give substituted derivatives. The most widely practiced example of this reaction is the ethylation of benzene. Reactions: Approximately 24,700,000 tons were produced in 1999. Highly instructive but of far less industrial significance is the Friedel-Crafts alkylation of benzene (and many other aromatic rings) using an alkyl halide in the presence of a strong Lewis acid catalyst. Similarly, the Friedel-Crafts acylation is a related example of electrophilic aromatic substitution. The reaction involves the acylation of benzene (or many other aromatic rings) with an acyl chloride using a strong Lewis acid catalyst such as aluminium chloride or iron(III) chloride. Reactions: Sulfonation, chlorination, nitration Using electrophilic aromatic substitution, many functional groups are introduced onto the benzene framework. Sulfonation of benzene involves the use of oleum, a mixture of sulfuric acid with sulfur trioxide.
Sulfonated benzene derivatives are useful detergents. In nitration, benzene reacts with the nitronium ion (NO2+), a strong electrophile produced by combining sulfuric and nitric acids. Nitrobenzene is the precursor to aniline. Chlorination is achieved with chlorine to give chlorobenzene in the presence of a Lewis acid catalyst such as aluminium trichloride. Reactions: Hydrogenation Via hydrogenation, benzene and its derivatives convert to cyclohexane and its derivatives. This reaction is achieved by the use of high pressures of hydrogen in the presence of heterogeneous catalysts, such as finely divided nickel. Whereas alkenes can be hydrogenated near room temperature, benzene and related compounds are more reluctant substrates, requiring temperatures above 100 °C. This reaction is practiced on a large scale industrially. In the absence of the catalyst, benzene is impervious to hydrogen. Hydrogenation cannot be stopped to give cyclohexene or cyclohexadienes, as these are superior substrates. The Birch reduction, a non-catalytic process, however, selectively hydrogenates benzene to the diene. Reactions: Metal complexes Benzene is an excellent ligand in the organometallic chemistry of low-valent metals. Important examples include the sandwich and half-sandwich complexes, respectively, Cr(C6H6)2 and [RuCl2(C6H6)]2. Health effects: Benzene is classified as a carcinogen, which increases the risk of cancer and other illnesses, and is also a notorious cause of bone marrow failure. Substantial quantities of epidemiologic, clinical, and laboratory data link benzene to aplastic anemia, acute leukemia, bone marrow abnormalities and cardiovascular disease. The specific hematologic malignancies that benzene is associated with include acute myeloid leukemia (AML), aplastic anemia, myelodysplastic syndrome (MDS), acute lymphoblastic leukemia (ALL), and chronic myeloid leukemia (CML). The American Petroleum Institute (API) stated as early as 1948 that "it is generally considered that the only absolutely safe concentration for benzene is zero". There is no safe exposure level; even tiny amounts can cause harm. The US Department of Health and Human Services (DHHS) classifies benzene as a human carcinogen. Long-term exposure to excessive levels of benzene in the air causes leukemia, a potentially fatal cancer of the blood-forming organs. In particular, acute myeloid leukemia or acute nonlymphocytic leukemia (AML & ANLL) is caused by benzene. IARC rated benzene as "known to be carcinogenic to humans" (Group 1). Health effects: As benzene is ubiquitous in gasoline and hydrocarbon fuels that are in use everywhere, human exposure to benzene is a global health problem. Benzene targets the liver, kidney, lung, heart and brain and can cause DNA strand breaks and chromosomal damage; hence it is teratogenic and mutagenic. Benzene causes cancer in animals, including humans. Benzene has been shown to cause cancer in both sexes of multiple species of laboratory animals exposed via various routes. Exposure to benzene: According to the Agency for Toxic Substances and Disease Registry (ATSDR) (2007), benzene is both a synthetically made and naturally occurring chemical, arising from processes that include volcanic eruptions, wild fires, synthesis of chemicals such as phenol, production of synthetic fibers, and fabrication of rubbers, lubricants, pesticides, medications, and dyes.
The major sources of benzene exposure are tobacco smoke, automobile service stations, exhaust from motor vehicles, and industrial emissions; however, ingestion and dermal absorption of benzene can also occur through contact with contaminated water. Benzene is hepatically metabolized and excreted in the urine. Measurement of air and water levels of benzene is accomplished through collection via activated charcoal tubes, which are then analyzed with a gas chromatograph. The measurement of benzene in humans can be accomplished via urine, blood, and breath tests; however, all of these have their limitations because benzene is rapidly metabolized in the human body. Exposure to benzene may lead progressively to aplastic anemia, leukemia, and multiple myeloma. OSHA regulates levels of benzene in the workplace. The maximum allowable amount of benzene in workroom air during an 8-hour workday, 40-hour workweek is 1 ppm. As benzene can cause cancer, NIOSH recommends that all workers wear special breathing equipment when they are likely to be exposed to benzene at levels exceeding the recommended (8-hour) exposure limit of 0.1 ppm. Exposure to benzene: Benzene exposure limits The United States Environmental Protection Agency has set a maximum contaminant level for benzene in drinking water at 0.005 mg/L (5 ppb), as promulgated via the U.S. National Primary Drinking Water Regulations. This regulation is based on preventing benzene leukemogenesis. The maximum contaminant level goal (MCLG), a nonenforceable health goal that would allow an adequate margin of safety for the prevention of adverse effects, is zero benzene concentration in drinking water. The EPA requires that spills or accidental releases into the environment of 10 pounds (4.5 kg) or more of benzene be reported. Exposure to benzene: The U.S. Occupational Safety and Health Administration (OSHA) has set a permissible exposure limit of 1 part of benzene per million parts of air (1 ppm) in the workplace during an 8-hour workday, 40-hour workweek. The short-term exposure limit for airborne benzene is 5 ppm for 15 minutes. These legal limits were based on studies demonstrating compelling evidence of health risk to workers exposed to benzene. The risk from exposure to 1 ppm for a working lifetime has been estimated as 5 excess leukemia deaths per 1,000 employees exposed. (This estimate assumes no threshold for benzene's carcinogenic effects.) OSHA has also established an action level of 0.5 ppm to encourage even lower exposures in the workplace. The U.S. National Institute for Occupational Safety and Health (NIOSH) revised the Immediately Dangerous to Life and Health (IDLH) concentration for benzene to 500 ppm. The current NIOSH definition for an IDLH condition, as given in the NIOSH Respirator Selection Logic, is one that poses a threat of exposure to airborne contaminants when that exposure is likely to cause death or immediate or delayed permanent adverse health effects or prevent escape from such an environment. The purpose of establishing an IDLH value is (1) to ensure that the worker can escape from a given contaminated environment in the event of failure of the respiratory protection equipment and (2) to set a maximum level above which only a highly reliable breathing apparatus providing maximum worker protection is permitted. In September 1995, NIOSH issued a new policy for developing recommended exposure limits (RELs) for substances, including carcinogens.
As benzene can cause cancer, NIOSH recommends that all workers wear special breathing equipment when they are likely to be exposed to benzene at levels exceeding the REL (10-hour) of 0.1 ppm. The NIOSH short-term exposure limit (STEL – 15 min) is 1 ppm. Exposure to benzene: The American Conference of Governmental Industrial Hygienists (ACGIH) adopted Threshold Limit Values (TLVs) for benzene at 0.5 ppm TWA and 2.5 ppm STEL. Exposure to benzene: Toxicology Biomarkers of exposure Several tests can determine exposure to benzene. Benzene itself can be measured in breath, blood or urine, but such testing is usually limited to the first 24 hours post-exposure due to the relatively rapid removal of the chemical by exhalation or biotransformation. Most people in developed countries have measurable baseline levels of benzene and other aromatic petroleum hydrocarbons in their blood. In the body, benzene is enzymatically converted to a series of oxidation products including muconic acid, phenylmercapturic acid, phenol, catechol, hydroquinone and 1,2,4-trihydroxybenzene. Most of these metabolites have some value as biomarkers of human exposure, since they accumulate in the urine in proportion to the extent and duration of exposure, and they may still be present for some days after exposure has ceased. The current ACGIH biological exposure limits for occupational exposure are 500 μg/g creatinine for muconic acid and 25 μg/g creatinine for phenylmercapturic acid in an end-of-shift urine specimen. Exposure to benzene: Biotransformations Although it is not a common substrate for metabolism, benzene can be oxidized by both bacteria and eukaryotes. In bacteria, a dioxygenase enzyme can add an oxygen to the ring, and the unstable product is immediately reduced (by NADH) to a cyclic diol with two double bonds, breaking the aromaticity. Next, the diol is reduced by NADH to catechol. The catechol is then metabolized to acetyl CoA and succinyl CoA, used by organisms mainly in the citric acid cycle for energy production. Exposure to benzene: The pathway for the metabolism of benzene is complex and begins in the liver. Several enzymes are involved. These include cytochrome P450 2E1 (CYP2E1), quinone oxidoreductase (NQO1, also known as DT-diaphorase or NAD(P)H dehydrogenase (quinone 1)), GSH, and myeloperoxidase (MPO). CYP2E1 is involved at multiple steps: converting benzene to oxepin (benzene oxide), phenol to hydroquinone, and hydroquinone to both benzenetriol and catechol. Hydroquinone, benzenetriol and catechol are converted to polyphenols. In the bone marrow, MPO converts these polyphenols to benzoquinones. These intermediates and metabolites induce genotoxicity by multiple mechanisms including inhibition of topoisomerase II (which maintains chromosome structure), disruption of microtubules (which maintain cellular structure and organization), generation of oxygen free radicals (unstable species) that may lead to point mutations, increasing oxidative stress, inducing DNA strand breaks, and altering DNA methylation (which can affect gene expression). NQO1 and GSH shift metabolism away from toxicity. NQO1 metabolizes benzoquinone toward polyphenols (counteracting the effect of MPO). GSH is involved with the formation of phenylmercapturic acid. Genetic polymorphisms in these enzymes may induce loss of function or gain of function. For example, mutations in CYP2E1 increase activity and result in increased generation of toxic metabolites. NQO1 mutations result in loss of function and may result in decreased detoxification.
Myeloperoxidase mutations result in loss of function and may result in decreased generation of toxic metabolites. GSH mutations or deletions result in loss of function and decreased detoxification. These genes may be targets for genetic screening for susceptibility to benzene toxicity. Exposure to benzene: Molecular toxicology The paradigm of toxicological assessment of benzene is shifting towards the domain of molecular toxicology, as it allows fundamental biological mechanisms to be understood in a better way. Glutathione seems to play an important role by protecting against benzene-induced DNA breaks, and it is being identified as a new biomarker for exposure and effect. Benzene causes chromosomal aberrations in the peripheral blood leukocytes and bone marrow, explaining the higher incidence of leukemia and multiple myeloma caused by chronic exposure. These aberrations can be monitored using fluorescence in situ hybridization (FISH) with DNA probes to assess the effects of benzene, along with hematological tests as markers of hematotoxicity. Benzene metabolism involves enzymes coded for by polymorphic genes. Studies have shown that genotype at these loci may influence susceptibility to the toxic effects of benzene exposure. Individuals carrying a variant of NAD(P)H:quinone oxidoreductase 1 (NQO1), microsomal epoxide hydrolase (EPHX) and a deletion of glutathione S-transferase T1 (GSTT1) showed a greater frequency of DNA single-stranded breaks. Exposure to benzene: Biological oxidation and carcinogenic activity One way of understanding the carcinogenic effects of benzene is by examining the products of biological oxidation. Pure benzene, for example, oxidizes in the body to produce an epoxide, benzene oxide, which is not excreted readily and can interact with DNA to produce harmful mutations. Exposure to benzene: Routes of exposure Inhalation Outdoor air may contain low levels of benzene from automobile service stations, wood smoke, tobacco smoke, the transfer of gasoline, exhaust from motor vehicles, and industrial emissions. About 50% of the entire nationwide (United States) exposure to benzene results from smoking tobacco or from exposure to tobacco smoke. By smoking 32 cigarettes per day, a smoker would take in about 1.8 milligrams (mg) of benzene. This amount is about 10 times the average daily intake of benzene by nonsmokers. Inhaled benzene is primarily expelled unchanged through exhalation. In one human study, 16.4 to 41.6% of retained benzene was eliminated through the lungs within five to seven hours after a two- to three-hour exposure to 47 to 110 ppm, and only 0.07 to 0.2% of the remaining benzene was excreted unchanged in the urine. After exposure to 63 to 405 mg/m3 of benzene for 1 to 5 hours, 51 to 87% was excreted in the urine as phenol over a period of 23 to 50 hours. In another human study, 30% of absorbed dermally applied benzene, which is primarily metabolized in the liver, was excreted as phenol in the urine. Exposure to benzene: Exposure from soft drinks Under specific conditions and in the presence of other chemicals, benzoic acid (a preservative) and ascorbic acid (vitamin C) may interact to produce benzene. In March 2006, the Food Standards Agency in the United Kingdom conducted a survey of 150 brands of soft drinks. It found that four contained benzene levels above World Health Organization limits. The affected batches were removed from sale. Similar problems were reported by the FDA in the United States.
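The occupational limits quoted in this section are expressed in ppm by volume, while monitoring equipment and regulations elsewhere often use mass concentration. A minimal Python sketch of the standard industrial-hygiene conversion, assuming 25 °C and 1 atm (molar volume about 24.45 L/mol); the limit values are the ones quoted above:

```python
MOLAR_VOLUME_L = 24.45   # L/mol for an ideal gas at 25 C and 1 atm
MW_BENZENE = 78.11       # g/mol

def ppm_to_mg_per_m3(ppm: float, molar_mass: float = MW_BENZENE) -> float:
    """Convert a vapor concentration in ppm (v/v) to mg/m^3."""
    return ppm * molar_mass / MOLAR_VOLUME_L

limits = [("OSHA PEL, 8 h", 1.0), ("OSHA STEL, 15 min", 5.0),
          ("NIOSH REL, 10 h", 0.1), ("ACGIH TLV-TWA", 0.5)]
for label, ppm in limits:
    print(f"{label}: {ppm} ppm ~ {ppm_to_mg_per_m3(ppm):.2f} mg/m^3")
# e.g. the 1 ppm OSHA PEL corresponds to roughly 3.19 mg/m^3 of benzene.
```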
Exposure to benzene: Contamination of water supply In 2005, the water supply to the city of Harbin in China, with a population of almost nine million people, was cut off because of major benzene contamination. Benzene leaked into the Songhua River, which supplies drinking water to the city, after an explosion at a China National Petroleum Corporation (CNPC) factory in the city of Jilin on 13 November 2005. Exposure to benzene: When plastic water pipes are subjected to high heat, the water may be contaminated with benzene. Genocide The Nazi German government used benzene administered via injection as one of its many methods for killing.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Endrin** Endrin: Endrin is an organochlorine compound with the chemical formula C12H8Cl6O that was first produced in 1950 by Shell and Velsicol Chemical Corporation. It was primarily used as an insecticide, as well as a rodenticide and piscicide. It is a colourless, odorless solid, although commercial samples are often off-white. Endrin was manufactured as an emulsifiable solution known commercially as Endrex. The compound became infamous as a persistent organic pollutant and for this reason is banned in many countries. In the environment, endrin exists as either endrin aldehyde or endrin ketone and can be found mainly in bottom sediments of bodies of water. Exposure to endrin can occur by inhalation, ingestion of substances containing the compound, or skin contact. Upon entering the body, it can be stored in body fats and can act as a neurotoxin on the central nervous system, which can cause convulsions, seizures, or even death. Although endrin is not currently classified as a mutagen, nor as a human carcinogen, it is still a toxic chemical with detrimental effects in other ways. Due to these toxic effects, the manufacturers cancelled all use of endrin in the United States by 1991. Food import concerns have been raised because some countries may have still been using endrin as a pesticide. History: J. Hyman & Company first developed endrin in 1950. Shell International was licensed in the United States and in the Netherlands to produce it. Velsicol was the other producer in the Netherlands. Endrin was used globally until the early 1970s. Due to its toxicity, it was banned or severely restricted in many countries. In 1982, Shell discontinued its manufacturing. In 1962, an estimated 2.3-4.5 million kilograms of endrin were sold by Shell in the USA. In 1970, Japan imported 72,000 kilograms of endrin. From 1963 until 1972, when its use was discontinued, Bali used 171 to 10,700 kilograms of endrin annually in rice paddy production. Soil samples from paddy fields in Taiwan were reported to show higher levels of organochlorine pesticides, including endrin, than samples from other Asian countries such as Thailand and Vietnam. During the 1950s-1970s, over two million kilograms of organochlorine pesticides were estimated to have been released into the environment per year. Endrin was banned in the United States on October 10, 1984. Taiwan banned endrin's use as a pesticide in 1971 and regulated it as a toxic chemical in 1989. In May 2004, the Stockholm Convention on Persistent Organic Pollutants came into effect and listed endrin as one of the 12 initial persistent organic pollutants (POPs) that have been causing adverse effects on humans and the environment. The convention requires the participating parties to take measures to eliminate or restrict the production of POPs. Production: The synthesis of endrin begins with the condensation of hexachlorocyclopentadiene with vinyl chloride. The product is then dehydrochlorinated. Following reaction with cyclopentadiene, isodrin is formed. Epoxide formation, by adding either peracetic acid or perbenzoic acid to the isodrin, is the final step in synthesizing endrin. Endrin is a stereoisomer of dieldrin with comparable properties, though endrin degrades more easily. Use: Endrin was formulated as emulsifiable concentrates (ECs), wettable powders (WPs), granules, field strength dusts (FSDs), and pastes.
The product could then be applied by aircraft or by handheld sprayers in its various formulations. Endrin has been used primarily as an agricultural insecticide on tobacco, apple trees, cotton, sugar cane, rice, cereal, and grains. It is effective against a variety of species, including cotton bollworms, corn borers, cutworms and grasshoppers. In addition, endrin has been employed as a rodenticide and avicide. In Malaysia, fish farms used a solution of endrin as a piscicide to rid mine pools and fish ponds of all fish prior to restocking. A study conducted from 1981 to 1983 in the US aimed to determine endrin's effects on non-target organisms when applied as a rodenticide in orchards. Most wildlife in and around the orchard was found to have endrin exposure, with endrin toxicity accounting for more than 24% of bird deaths recorded. Endrin was eventually banned in the US on October 10, 1984. Health effects: Exposure and metabolism Exposure to endrin can occur by inhalation, ingestion of substances containing the compound, or by skin contact. In addition to inhalation and skin contact, infants can be exposed by ingesting the breast milk of an exposed woman. In utero, fetuses are exposed by way of the placenta if the mother has been exposed. Upon entering the body, endrin is metabolized into anti-12-hydroxyendrin and other metabolites, which can be expelled in the urine and feces. Both anti-12-hydroxyendrin and its metabolite, 12-ketoendrin, are likely responsible for the toxicity of endrin. The rapid metabolism of endrin into these metabolites makes detection of endrin itself difficult unless exposure is very high. Health effects: Neurological effects Symptoms of endrin poisoning include headache, dizziness, nervousness, confusion, nausea, vomiting, and convulsions. Acute endrin poisoning in humans affects primarily the central nervous system. There, it can act as a neurotoxin that blocks the activity of inhibitory neurotransmitters. In cases of acute exposure, this may result in seizures, or even death. Because endrin can be stored in body fats, acute endrin poisoning can lead to recurrent seizures when stressors induce the release of endrin back into the body, even months after the initial exposure is terminated. People occupationally exposed to endrin may experience abnormal EEG readings even if they exhibit none of the clinical symptoms, possibly due to injury to the brain stem. These readings show bilateral synchronous theta waves with synchronous spike-and-wave complexes. EEG readings can take up to one month to return to normal. Health effects: Developmental effects Though endrin exposure has not been found to adversely affect fertility in mammals, an increase in fetal mortality has been observed in mice, rats, and mallard ducks. In those animals that have survived gestation, developmental abnormalities have been observed, particularly in rodents whose mothers were exposed to endrin early in pregnancy. In hamsters, the number of cases of fused ribs, cleft palate, open eyes, webbed feet, and meningoencephaloceles has increased. Along with open eyes and cleft palate, mice have developed fused ribs and exencephaly. Skeletal abnormalities in rodents have also been reported.
Health effects: Other effects Higher doses of endrin have been found to cause the following in rodents: renal tubular necrosis; inflammation of the liver, fatty liver, and liver necrosis; possible kidney degradation; and a decrease in body weight and body weight gain. Endrin is very toxic to aquatic organisms, namely fish, aquatic invertebrates, and phytoplankton. It was found to remain in the tissues of exposed fish for up to one month. Health effects: 1984 poisoning outbreak in Pakistan From July 14 to September 26, 1984, an outbreak of endrin poisoning occurred in 21 villages in and around Talagang, a subdistrict of the Punjab province of Pakistan. Eighty percent of the 194 known cases were children under the age of 15. Poisoned individuals had seizures along with vomiting, pulmonary congestion, and hypoxia, leaving 19 people dead. Some individuals had low-grade fevers (37.8 °C/100 °F, axillary) following seizures. The more seriously affected had less vomiting, but higher temperatures, than people who were less affected. Most patients could be controlled in under two hours using diazepam, phenobarbital, and atropine, though the more seriously affected patients required general anesthesia. Recovery took up to two days. Following treatment, patients reported not remembering their seizures. The outbreak affected men and women equally. Based on the demographics of the affected individuals and their area of residence, the outbreak was likely caused by endrin contamination of food. As members of these villages rarely had contact with one another, investigators determined that contaminated sugar shipped to the villages was the most probable cause, though no credible evidence was found to support this. Around this time, endrin was being used by cotton and sugar cane farmers in the Punjab region. A number of truck drivers stated that they had used the same trucks to deliver endrin to farmers and to pick up crops for Talagang, possibly leading to contamination. Environmental behavior: Insecticides like dieldrin and endrin have been shown to persist for decades in the environment. Definitive detection of the residues was not possible until 1971, when mass spectrometers started being used as detectors in gas chromatography. Detection of these chemicals in the environment was reported across the world up to 2005, even though the frequency of reported cases is low due to endrin's relatively small-scale use and very low concentrations. Endrin regularly enters the environment when applied to crops or when rain washes it off. It has been found in water, sediments, atmospheric air and the biotic environment, even after uses have stopped. Organochlorine pesticides strongly resist degradation and are poorly soluble in water but highly soluble in lipids; that is, they are lipophilic. This leads to bioaccumulation in the fatty tissues of organisms, mainly those dwelling in water. A high bioconcentration factor of 1335–10,000 has been reported in fish. Endrin binds very strongly to organic matter in soil and aquatic sediments due to its high adsorption coefficient, making it less likely to leach into groundwater, even though contaminated groundwater samples have been found. In 2009, the EPA released data indicating that endrin in soil could last 14 years or more. The extent of endrin's persistence depends highly on local conditions.
For example, high temperature (230 °C) or intense sunlight leads to more rapid breakdown of endrin into endrin ketone and endrin aldehyde; however, this breakdown amounts to less than 5%. Environmental behavior: Removal from the environment In the United States, endrin was mainly disposed of in land until U.S. federal regulations on land disposal of wastes containing endrin were applied in 1987. The primary routes by which endrin disappears from soil are volatilization and photodecomposition. Under ultraviolet light, endrin forms δ-ketoendrin, and the International Programme on Chemical Safety (IPCS) claims that in intense summer sun about 50% of endrin is isomerized to δ-ketoendrin in 7 days. Under anaerobic conditions, microbial degradation by fungi and bacteria takes place to form the same major end product. Mammalian metabolic studies with endrin are difficult because of the high toxicity of the compound. Baldwin identified two hydroxylated metabolites in the faeces of rats fed a diet containing 4 parts per million of endrin. At least one was the result of hydroxylation of the methylene bridge. The other might be the opposite isomer, or it could conceivably be the result of hydroxylation at another site. Endrin rarely occurs as a residue in tissues. What is found is the ketone, probably produced by metabolism of the alcohol derived from the methylene group. Environmental behavior: The Hazardous Substances Data Bank (HSDB) lists reductive dechlorination and incineration for field disposal of small quantities of endrin. In reductive dechlorination, endrin's chlorine atoms are completely replaced with hydrogen atoms, which is suspected to be more environmentally acceptable. Even though endrin binds very strongly to soil, phytoremediation has been proposed by a group of Japanese scientists using crops in the family Cucurbitaceae. As of 2009, the exact mechanisms behind the plant uptake of endrin were not understood; research into uptake mechanisms and the factors that influence uptake is needed for practical application. Regulation: United States In the United States, endrin has been regulated by the EPA, which set a freshwater acute criterion of 0.086 µg/L and a chronic criterion of 0.036 µg/L. In saltwater, the acute criterion is 0.037 µg/L and the chronic criterion 0.0023 µg/L. The human health contaminant criterion for water plus organism is 0.059 µg/L. The drinking water limit (maximum contaminant level) is set at 2 ppb. Use of endrin in fisheries has been advised against due to the zero tolerance for endrin levels in food products. For occupational exposures to endrin, OSHA and NIOSH have set exposure limits at 0.1 mg/m3. International organizations The WHO lists endrin as an obsolete pesticide in its 'Classification of Pesticides by Hazard' and did not assign any hazard class per the Globally Harmonized System of Classification and Labelling of Chemicals. Regulation: Taiwan Taiwan is not a party to the Stockholm Convention as of 2015, but has drafted its own "National Implementation Plan of the Stockholm Convention on Persistent Organic Pollutants", which was approved by the Executive Yuan in April 2008. The Central Competent Authorities of Taiwan set a limit of 20 mg/kg for soil pollution control. For marine environment quality, a standard of 0.002 mg/L has been set. For occupational exposures to endrin, a warning has been given that contact with the skin, eyes, and mucous membranes can contribute to the overall exposure.
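The bioconcentration factor cited under Environmental behavior translates directly into expected tissue burdens. A minimal sketch, assuming the simple steady-state relationship tissue concentration ≈ BCF × water concentration and using the EPA freshwater chronic criterion quoted above as a sample water value:

```python
def tissue_concentration(water_ug_per_L: float, bcf: float) -> float:
    """Steady-state fish tissue concentration (ug/kg) ~ BCF * water concentration."""
    return bcf * water_ug_per_L

water = 0.036  # ug/L, the US EPA freshwater chronic criterion cited above
for bcf in (1335, 10000):
    print(f"BCF {bcf:>5}: ~{tissue_concentration(water, bcf):.0f} ug/kg in fish tissue")
# Even at the chronic water-quality criterion, the reported BCF range predicts
# tissue levels thousands of times the water concentration.
```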
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ABT-724** ABT-724: ABT-724 is a drug which acts as a dopamine agonist, and is selective for the D4 subtype. It was developed as a possible drug for the treatment of erectile dysfunction, although poor oral bioavailability means alternative drugs such as ABT-670 may be more likely to be developed commercially. Nonetheless, it continues to be used in scientific research into the function of the D4 receptor.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cigarette receptacle** Cigarette receptacle: A cigarette receptacle is a container or device for extinguishing and disposing of cigarette waste. Other common names for cigarette receptacles include: ash urns, ash pans, cigarette butt receptacles, butt bins, butt holders, snuffers, smokers' poles, cigarette waste receptacles, smokers' waste receptacles, and ash/trash combinations. Originally provided as a courtesy to smokers in public places, cigarette receptacles are now commonplace as smoking bans and designated smoking areas require proper disposal methods. A typical receptacle can hold hundreds, even thousands, of disposed cigarette butts. Cigarette litter problem: Proper disposal of cigarette butts is promoted as both an environmental and a health issue. It is estimated that 4.5 trillion cigarette butts become litter every year. While cigarette smoking in the United States has decreased, cigarette butt litter remains the most littered item in the United States and globally. The overall littering rate for cigarette butts is 65%, and in all, tobacco products make up 38% of all roadway litter in the United States. Depending on composition, a cigarette butt can take as little as one month, or as long as three years or more, to biodegrade. Cigarette filters made of cellulose acetate do eventually biodegrade, while some environmental groups claim filters containing plastic never fully biodegrade. Cigarette butts contain toxic chemicals including nicotine, cadmium and benzene. Types: Cigarette receptacles for use in public and private establishments can be wall-mounted or free-standing, for indoor and outdoor use. Construction materials include metal (steel, stainless steel, aluminum), concrete, stone/epoxy aggregate, various plastics (polyethylene, recycled plastic), and fiberglass, with receptacles made of one material or a combination of several. Ash urns Ash urns are usually mounted on a pedestal, and present a medium such as sand or gravel for a smoker to snuff cigarette butts. This type of cigarette receptacle is also common in ash/trash combination units, with the urn placed on top of a trash bin below. Wall mounted cigarette receptacles Wall mounted cigarette receptacles are manufactured in a variety of shapes, sizes and butt disposal methods. Units with all-metal construction allow disposing of cigarette butts into a container with no other extinguishing media required. Some receptacles utilize a separate butt container for clean-out, while one-piece models simply dump used butts. Tube cigarette receptacles Tubular cigarette receptacles can be wall-mounted or free-standing, of various lengths and diameters. Their simple design allows butts to be deposited directly into the tube, and to extinguish on their own. Usually constructed of metal, no other medium, such as sand or water, is required. One- and two-piece construction is common. Types: Free standing cigarette receptacles Made in a variety of configurations and construction materials, free standing cigarette receptacles come in heavy pre-cast concrete, lighter weight stone/epoxy aggregate, and lighter still materials such as polyethylene, recycled plastic and fiberglass. Many free standing cigarette receptacles utilize an oxygen-restricting design that extinguishes still-burning butts. These style receptacles are usually made of molded plastic, polyethylene and fiberglass, with metal inner liners and metal pails to gather disposed butts. Due to their oxygen-depriving design, no additional extinguishing media are generally required.
Cigarette butts are deposited through a small opening and drop down a long neck into the collection chamber. The collection chamber typically houses the removable pail. Types: Other free standing receptacles, such as those made of pre-cast concrete and stone aggregate materials, rely on water, sand or gravel placed in the collection bin to assist in extinguishing disposed butts. Butt removal and clean-out is accomplished through a door in the receptacle's base, or the base and top separate to allow access to disposed butts. Enhanced design features: Some cigarette receptacles contain unique design features at the butt entry point. These include: limited entry designs, to discourage unwanted trash; covered openings, to eliminate rainwater overflow; large snuffer plates, for better hygiene and an easier target; and recessed snuffer screens, to prevent ashes from falling to the ground. Theme cigarette receptacles: Cigarette receptacles made of molded materials are often designed to match their surroundings. Sports themes such as receptacles decorated with golf balls and baseballs are common; other examples include nautical receptacles designed to look like buoys, receptacles resembling trees, and receptacles mimicking an architectural style. Accessories: Various size metal pails provide easy butt collection and removal. For receptacles with collection pails, odor-absorbing and fire-suppressing filters, placed inside the pails, use baking soda for odor control and also release CO2 for fire suppression. Weighted bases help free standing receptacles remain steady in inclement weather. In areas where security is a concern, tie-downs, security cables and locks keep the cigarette receptacle safe from theft and vandalism.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Egg jelly** Egg jelly: Egg jelly (extracellular layer, jelly coat) is a gelatinous layer that surrounds the oocytes of many organisms and releases species-specific chemoattractants that activate and guide sperm to the oocyte. The release of chemoattractants is species-dependent. For example, sperm in Lytechinus variegatus, the green sea urchin, are not chemotactically attracted to the jelly or the egg. The egg jelly is located immediately surrounding the vitelline envelope and consists primarily of a network of short peptides and sulfated fucan glycoproteins. These short peptides diffuse into the surrounding area and stimulate respiration and movement of the sperm to the egg. An example of such a peptide is resact, which has been studied as the primary means of attracting and orientating sperm to the eggs in sea urchins. The sulfated fucan glycoproteins play an important role in binding to sperm receptors and triggering the acrosomal reaction. Many other functions for the egg jelly have been proposed, including sperm agglomeration, protection from mechanical stress and polyspermy, and increasing the size of the egg to improve its chances of colliding with sperm. For echinoderms the jelly coat can increase the diameter of the egg by more than 100%, making it efficient in enhancing fertilization. In female P. shqipericus, the Albanian water frog, the jelly coat causes sperm to become motile and move faster. For this species of frog, the sperm must interact with the jelly coat for the egg to be successfully fertilized. Unlike the egg cell, jelly coats do not provide the embryo with nutrients. In addition to the sea urchin, egg jelly appears in many species, including invertebrates and mammals. Egg jelly can vary in composition and complexity, from the relatively homogeneous single-layer jelly of the sea urchin egg to the three-layer egg jelly of starfish. There is increasing concern about how ocean acidification will affect the fertilization of eggs. In H. tuberculata, low pH can impair the egg jelly's chemical influence on sperm motility and velocity.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ventral reticular nucleus** Ventral reticular nucleus: The ventral reticular nucleus is a continuation of the parvocellular nucleus in the brainstem. The ventral reticular nucleus has been shown to receive afferent projections from the dentate gyrus in rabbits. The rostral portion of the ventral reticular nucleus has been shown to mediate inspiration along with a portion of the lateral reticular nucleus.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Eyebar** Eyebar: In structural engineering and construction, an eyebar is a straight bar, usually of metal, with a hole ("eye") at each end for fixing to other components. Eyebars are used in structures such as bridges, in settings in which only tension, and never compression, is applied. The arrangement is also referred to as "pin-and-eyebar construction" in instances where pins are used. Structure: A closed eyebar will typically have a rectangular cross section of constant thickness throughout its length and a constant width for all but the ends. The ends will transition to a wider part that is terminated by a rounded end. In the center of this end will be a hole which will receive a cylindrical pin, which may have provision to accept one or more nuts or bolts. If of round cross section, the bar will typically be end-forged to create a head, which is then flattened by additional forging. The head may then be machined to a precise thickness and flatness. An alternative method for using round bar is to form a loop and to forge-weld (hammer weld) or electrically weld the free end to the main bar. Open eyebars are used in the cable anchorages of modern wire-cable suspension bridges; this allows the wires to be looped over the eye rather than requiring threading through a closed eye. Application: The bars may be fabricated with pin holes that are slightly undersized. If so, these are then reamed in the field. This field reaming ensures that stresses will be uniformly distributed among the several bars forming the truss element or the chain link. Corrosion-resistant treatment in the form of grease, white or red lead oil paste, or other water-excluding material may be added at the time of assembly. Application: Trusses: roofs and buildings Eyebars are used in portions of pin-jointed trusses where it can be established by engineering procedures that the bar will not be subjected to any stress other than tension under all expected conditions. Eyebars are used to supplement roof truss framing supports made of wood or metal. They are placed as the struts for the truss, located next to the king joist. Application: Chain link suspension spans Eyebar links have long been used in suspension bridges, with a number of eyebar links combined to form a highly redundant structure. This places the eyebar in a chain linkage that carries its load in tension rather than compression. However, low-redundancy chain link suspension spans fell into general disfavor as a result of the collapse of the Silver Bridge in 1967, which led to the deaths of 46 people. Application: (The current method of suspension bridge design is to use multiple strands of drawn wire to form substantial cables.) Fabrication: Eyebars may be cast, forged, or cut from rolled plate. If round stock is used, the eyes will usually be forged. Heat treatment (heating and rapid cooling) will result in a fine-grained microscopic crystal structure, enhancing the strength of the bar. Excessive hardness may induce brittleness, which should be avoided. The pins used to join bars will also be heat treated, usually to a degree of hardness exceeding that of the bars so that they will not shear under high stress. Fabrication: Piling Original eyebars were formed by "piling" thin pieces of iron on top of one another and forging them together in a furnace. Once joined, the piece was heated and hammered into a U shape over a die. 
To create the eye, the heated, bent iron was hammered into itself, closing the gap and forming the eye shape. This method was a quick and efficient way to create the bar; however, the bar could fail structurally if the piled iron was poorly heated or defective. Fabrication: Casting Piling was superseded by casting, wherein the eye and the bar are cast together in the same mold, creating a sounder piece with less chance for the bond to break apart. Newer methods of steel cutting, such as laser, plasma, and water-jet cutting, allow the production of steel items such as eyebars from prefabricated steel plates: Laser A strong laser is used to accurately cut a programmed design from steel. This method is quick and reduces waste, but also requires additional sanding and finishing before use. Fabrication: Plasma Oxygen gas funneled past an electrode creates an arc, which can be channeled down into the steel, allowing the metal to be cut. This method of cutting only works on conductive metals. Water-jet Similar to the laser, water-jet cutting utilizes a cutting machine but uses the force of water to cut through the steel. Water cutting creates smooth, near-finished cuts, lowering production time. Advantages of use: Eyebars were created during the early 1900s, when the cost of steel was high. The creation of the eyebar provided a simple solution for lessening the amount of steel needed in a bridge. Using a pin and eye method, less stress would theoretically be placed on the joining members. Problems in use: Issues occur for the following reasons: Improper fabrication A bar may not be made properly due to bad casting (if steel) or improper hammering (if iron). This error is evident where the head has snapped off from the bar or the head has cracked from a pin hole to the exterior side. Problems in use: Insufficient layering Eyebars placed as supports in bridges may not be layered enough. In the Silver Bridge catastrophe, only two eyebars were paired together as supports in each link of the chain. It was more common practice to pin four eyebars together, so that if one eyebar failed the remaining three could split the load rather than leaving just a single eyebar. In the case of Silver Bridge, the remaining eyebar also broke, which caused the bridge to collapse. Problems in use: General wear Like all metal, steel wears down over time. As a result, the steel pins in the eyes become loose and lose tension, which in turn compromises the integrity of the structure. Review of eyebar use: Owing to technological advancements in creating eyebars, iron eyebars and old cast steel eyebars are less common. The older bridges that use them, however, still need to be maintained and reviewed. Researchers such as Dewey Walls, Jr. of the Union Pacific Railroad have compiled resources on how to review eyebars, identify compromised locations, and properly repair the affected areas.
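Since an eyebar is sized for tension only, the basic design check is a one-line formula: axial stress equals load divided by cross-sectional area, kept below an allowable value. A minimal sketch of that check follows; every number in it is invented for illustration and does not come from any real bridge.

```python
# Illustrative tension check for an eyebar shank; all values are assumptions.

def tensile_stress_mpa(load_kn: float, width_mm: float, thickness_mm: float) -> float:
    """Axial stress = force / cross-sectional area, in MPa (N/mm^2)."""
    area_mm2 = width_mm * thickness_mm
    return load_kn * 1_000 / area_mm2  # kN -> N, and N/mm^2 == MPa

# Hypothetical bar: 250 mm wide, 50 mm thick, carrying 1,500 kN in tension.
stress = tensile_stress_mpa(load_kn=1_500, width_mm=250, thickness_mm=50)

ALLOWABLE_MPA = 150  # assumed allowable stress for an older steel, safety factor included
print(f"stress = {stress:.0f} MPa -> {'OK' if stress <= ALLOWABLE_MPA else 'overstressed'}")
```

The same check at the eye would use the net area remaining beside the pin hole, which is one reason eyebar heads are forged wider than the shank.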
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scribe (log server)** Scribe (log server): Scribe was a server for aggregating log data streamed in real-time from many servers. It was designed to be scalable, extensible without client-side modification, and robust to failure of the network or any specific machine. Scribe (log server): Scribe was developed at Facebook and released in 2008 as open source. Scribe servers are arranged in a directed graph, with each server knowing only about the next server in the graph. This network topology allows for adding extra layers of fan-in as a system grows, and batching messages before sending them between datacenters, without having any code that explicitly needs to understand datacenter topology, only a simple configuration. Scribe was designed with reliability in mind, but without requiring heavyweight protocols or extensive disk usage. Scribe spools data to disk on any node to handle intermittent connectivity or node failure, but doesn't sync a log file for every message. This creates the possibility of a small amount of data loss in the event of a crash or catastrophic hardware failure. However, this degree of reliability was suitable for most of Facebook's use cases.
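Clients sent entries to the nearest Scribe server over Apache Thrift. A minimal sketch of what that looked like from Python, assuming the Thrift runtime and the Thrift-generated scribe bindings are importable and a server is listening on the conventional port 1463; the host, category, and message are placeholders.

```python
# Sketch of a Scribe client; assumes bindings generated from scribe.thrift.
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from scribe import scribe  # Thrift-generated module

socket = TSocket.TSocket("localhost", 1463)      # the next server in the graph
transport = TTransport.TFramedTransport(socket)  # Scribe expects framed transport
protocol = TBinaryProtocol.TBinaryProtocol(transport, strictRead=False, strictWrite=False)
client = scribe.Client(protocol)

transport.open()
entry = scribe.LogEntry(category="app_events", message="user 42 logged in\n")
client.Log(messages=[entry])  # returns OK, or TRY_LATER under backpressure
transport.close()
```

Routing is driven by the category string: each server's configuration maps categories to stores, including the buffer store that spools to local disk when the downstream server is unreachable, which is the failure behavior described above.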
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HDDerase** HDDerase: HDDerase is a freeware utility that securely erases data on hard drives using the Secure Erase unit command built into the firmware of Parallel ATA and Serial ATA drives manufactured after 2001. HDDerase was developed by the Center for Magnetic Recording Research at the University of California, San Diego. HDDerase is designed for command-line use only. It differs from other data erasure programs such as Darik's Boot and Nuke, which attempt to erase data using block writes that cannot access certain portions of the hard drive. The internal firmware Secure Erase command can access data that is no longer accessible through software, such as bad blocks.
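HDDerase itself is a DOS-based tool, but the firmware command it invokes is the standard ATA Secure Erase, which can also be issued on Linux with hdparm. Below is a sketch of that two-step sequence (set a temporary password to unlock the ATA Security feature set, then erase), driven from Python; the device path and password are placeholders, and running it destroys all data on the target drive.

```python
# DANGER: ATA Secure Erase destroys all data on the target drive.
# Linux-side equivalent of the firmware command HDDerase triggers,
# via hdparm; "/dev/sdX" and the password are placeholders only.
import subprocess

DEVICE = "/dev/sdX"  # placeholder; never point this at a disk you care about
PASSWORD = "p"       # temporary password required by the ATA Security feature set

def run(*args: str) -> None:
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# Step 1: set a user password, which enables the security feature set.
run("hdparm", "--user-master", "u", "--security-set-pass", PASSWORD, DEVICE)
# Step 2: issue the erase; the drive's firmware then overwrites every
# sector, including areas block-level wipers cannot reach.
run("hdparm", "--user-master", "u", "--security-erase", PASSWORD, DEVICE)
```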
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flour sack** Flour sack: A flour sack or flour bag is a bag or sack for flour. Large bulk bags as well as smaller consumer sizes are available. Description: A flour sack or flour bag is a bag or sack for flour. Sacks range in size and material. Package types: Bulk packaging Flour is often shipped from the miller to bakeries, institutions, and other bulk users. Sizes range from 10 kg to 100 kg. One traditional construction was cheap cotton bags. These printed cotton bags were sometimes viewed as collectables; other times the flour sack fabric was repurposed into a variety of household items. Current practice is to use multi-wall paper sacks. Some include a layer of plastic film for barrier properties and insect control. Woven polypropylene bags are also used for high strength; at least one variety (Purdue Improved Crop Storage bags) also includes inner plastic bags. Consumer packaging Consumer packages are often bags or sacks constructed of paper. Plastic films are also used, sometimes with reclosable features. Stand-up pouches of flour have recently been introduced. Considerations: Contents A wide variety of wheat flours are available. Flour can also be made from other grains, roots, nuts, etc. Packaging engineers and food scientists need to understand the properties of the particular flour, the intended handling and logistics systems, and the desired shelf life. Package forms and materials can be matched to these needs. Insects Insects can be a problem. When available, a suitable insecticide can be used; care must be taken to ensure product safety. Hermetic plastic bags also help. When insect infestation is noted, one method of stopping further growth is to freeze the sacks of flour for several days. Cultural impact: Flour sack fabric has been used as a cheap source of fabric for consumers to create their own textiles. Printed cotton bags were sometimes viewed as collectables. Cultural impact: Various places were named after flour sacks, since they were so ubiquitous in so many cultures. Blatobulgium in Scotland and Pieniężno in Poland, for example, are possibly named after words for flour sack in different languages. The all-white tower in the old city of Ravensburg in Germany is called the Mehlsack ("flour sack"). Reuel Colt Gridley famously carried a 50-pound bag of flour on his shoulder after losing a political bet in Austin, Nevada. The sack of flour was later auctioned off, then re-donated, then re-auctioned again and again to raise money for the United States Sanitary Commission during the American Civil War. Auctioning this single flour sack eventually raised more than $250,000.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Post-chorus** Post-chorus: In music, particularly Western popular music, a post-chorus (or postchorus) is a section that appears after the chorus. The term can be used generically for any section that comes after a chorus, but more often refers to a section that has a similar character to the chorus, but is distinguishable in close analysis. The concept of a post-chorus has been particularly popularized and analyzed by music theorist Asaf Peres, who is followed in this article. Characterization: Characterizations of the post-chorus vary, but are broadly classed into simply a second chorus (in Peres's terms, a detached postchorus) or an extension of the chorus (in Peres's terms, an attached postchorus). Some restrict "post-chorus" to only cases where it is an extension of a chorus (attached postchorus), and do not consider the second part of two-part choruses (detached postchorus) as being a "post"-chorus. As with distinguishing the pre-chorus from a verse, it can be difficult to distinguish the post-chorus from the chorus. In some cases they appear separately – for example, the post-chorus only appears after the second and third chorus, but not the first – and thus are clearly distinguishable. In other cases they always appear together, and thus a "chorus + post-chorus" can be considered a subdivision of the overall chorus, rather than an independent section. Characterization: Characterization of a post-chorus varies, beyond "comes immediately after the chorus"; Peres characterizes it by two conditions: it maintains or increases sonic energy, otherwise it's a bridge or verse; and contains a melodic hook (vocal or instrumental), otherwise it's a transition (a toy sketch of these conditions appears at the end of this article). Examples: Detached post-choruses typically have distinct melody and lyrics from the chorus: "Chandelier" (Sia, 2014): the chorus begins and ends with "I'm gonna swing from the chandelier / From the chandelier", while the post-chorus repeats instead "holding on", in "I'm holding on for dear life" and "I'm just holding on for tonight", and has a new melody, but the same chord progression as the chorus. Lyrics of attached post-choruses typically repeat the hook/refrain from the chorus, with little additional content, often using vocables like "ah" or "oh". Examples include: "Umbrella" (Rihanna, 2007): the chorus begins "When the sun shine, we shine together" and runs through "You can stand under my umbrella / You can stand under my umbrella, ella, ella, eh, eh, eh", which is followed by three more repetitions of "Under my umbrella, ella, ella, eh, eh, eh", the last one adding another "eh, eh-eh". Here the division between chorus and post-chorus is blurred, as the "ella, ella" begins in the chorus, and was a play on the reverb effect. Examples: "Shape of You" (Ed Sheeran, 2017): the chorus runs "I'm in love with the shape of you ... Every day discovering something brand new / I'm in love with your body", and the post-chorus repeats vocables and the hook "Oh—I—oh—I—oh—I—oh—I / I'm in love with your body", then repeats the end of the chorus, switching "your body" to "the shape of you": "Every day discovering something brand new / I'm in love with the shape of you" "Girls Like You" (Maroon 5, 2018): the chorus runs "'Cause girls like you ... I need a girl like you, yeah, yeah ... 
I need a girl like you, yeah, yeah", and the post-chorus repeats the hook with added "yeah"s: "Yeah, yeah, yeah, yeah, yeah, yeah / I need a girl like you, yeah, yeah / Yeah yeah yeah, yeah, yeah, yeah / I need a girl like you". Hybrids are also common (Peres: hybrid postchorus), where the post-chorus keeps the hook from the chorus (like an attached postchorus), but introduces some additional content (hook or melody, like a detached postchorus).
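Peres's two conditions, plus the attached/detached split, read almost like a decision procedure. Below is a toy Python sketch of that procedure, as promised above; the section attributes are invented flags for illustration, since real analysis is done by ear rather than by booleans.

```python
# Toy encoding of Peres's characterization: a section right after the
# chorus is a postchorus only if it keeps the energy up AND has a hook.
from dataclasses import dataclass

@dataclass
class Section:
    follows_chorus: bool
    maintains_energy: bool    # otherwise it reads as a bridge or verse
    has_melodic_hook: bool    # otherwise it is merely a transition
    reuses_chorus_hook: bool  # distinguishes attached from detached
    adds_new_material: bool

def classify(s: Section) -> str:
    if not (s.follows_chorus and s.maintains_energy and s.has_melodic_hook):
        return "not a postchorus"
    if s.reuses_chorus_hook and s.adds_new_material:
        return "hybrid postchorus"
    return "attached postchorus" if s.reuses_chorus_hook else "detached postchorus"

# "Umbrella": repeats "ella, ella, eh" from the chorus -> attached.
print(classify(Section(True, True, True, reuses_chorus_hook=True, adds_new_material=False)))
```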
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bongcloud Attack** Bongcloud Attack: The Bongcloud Attack (or Bongcloud Opening) is an irregular chess opening that consists of the moves: 1. e4 e5 2. Ke2? It is considered a joke opening and is associated with internet chess humor. Twitch streamers such as grandmaster (GM) Hikaru Nakamura have used it in online blitz chess, including in games against high-level opponents, as has former world champion Magnus Carlsen. The name has also been applied to other opening sequences in which a player moves the king on move two. Background: The opening's name is thought to originate either from Chess.com user "Lenny_Bongcloud", who used the opening with little success, or more generally in reference to a bong, a device used to smoke cannabis, humorously implying that one would need to be intoxicated to think that using the opening is a legitimate strategy. The opening's usage in chess humor was furthered by Andrew Fabbro's joke manual Winning With the Bongcloud. Analysis: The Bongcloud Attack violates several accepted principles of chess strategy by forgoing castling, impeding the movement of both the queen and the light-squared bishop, leaving the king exposed, wasting a tempo, and doing nothing to improve White's position. The lack of any redeeming feature, unlike some other dubious openings, puts the Bongcloud well outside of conventional practice. High-level usage: GM Hikaru Nakamura has used the Bongcloud Attack in online blitz games. He streamed himself using the opening exclusively on a new Chess.com account and reached a 3000 rating. In 2018, Nakamura played the Bongcloud three times against GM Levon Aronian during the Chess.com Speed Chess Championship, winning one and losing two. Nakamura also played the Bongcloud against GM Vladimir Dobrov in the 3+1 section and GM Wesley So in the 1+1 section of the 2019 Speed Chess Championship, winning both games. On 19 September 2020, Nakamura used the opening against GM Jeffery Xiong in the final round of the St. Louis Rapid and Blitz tournament, played on Lichess with a 5+3 time control, and won the game. On 15 March 2021, Magnus Carlsen, playing white, led with the Bongcloud in a game against Nakamura at the Magnus Carlsen Invitational. Nakamura mirrored the opening with 2...Ke7, leading to a position nicknamed the Double Bongcloud. The game was intentionally drawn by threefold repetition after the players immediately repeated moves, the particular sequence they used being known as the "Hotbox Variation". The game occurred in the last round of the preliminary stage of the tournament, and both players had already qualified for the following knockout stage, making the game a dead rubber. It marked the first recorded occurrence of 1.e4 e5 2.Ke2 Ke7 in a major tournament. Despite its obvious disadvantages, usage of such a "joke" opening can also have a psychological impact: following Carlsen's win over Wesley So in a 2020 blitz tournament with a 3+2 time control, where Carlsen played 1.f3 (the Barnes Opening) followed by 2.Kf2 (a variant also named the "Bongcloud"), So noted that losing the game after such an opening had a crushing impact. The first use of the joke opening in a FIDE-rated game between top grandmasters occurred during the Chess.com Global Championship finals in November 2022, an in-person rapid event played on Chess.com. Trailing 3–0 in his knockout match against Hikaru Nakamura, Polish GM Jan-Krzysztof Duda played 1.e3 and 2.Ke2. Duda lost the game after missing some chances to equalise.
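For readers who want to poke at the position, the Double Bongcloud from the Carlsen–Nakamura game is easy to reproduce with the third-party python-chess library (assumed to be installed; it is unrelated to the sites mentioned above).

```python
# Replaying the Double Bongcloud with python-chess (pip install chess).
import chess

board = chess.Board()
for san in ["e4", "e5", "Ke2", "Ke7"]:  # 1.e4 e5 2.Ke2 Ke7
    board.push_san(san)

print(board.unicode())
# Both kings have moved, so neither side retains castling rights.
print("White may castle:", board.has_castling_rights(chess.WHITE))  # False
print("Black may castle:", board.has_castling_rights(chess.BLACK))  # False
```

The loss of castling rights shown in the last two lines is exactly the strategic concession the Analysis section describes.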
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nanostray 2** Nanostray 2: Nanostray 2 is a scrolling shooter video game for the Nintendo DS, and is the sequel to the original Nanostray. The game was released in 2008. Story: Taking place in the future, the supply ship E.S.S. Ariga is returning from its latest voyage when the awakening crew is alerted by a three-year-old distress call. The colonized area the Ariga is returning to has been contaminated by a techno-virus known as Nanostray. According to the distress call, the Nanostray virus had infected the colonists' technology, from computers to war-machines, and made each one hostile. A flight commander has been assigned to win back the infested areas and, with the help of Officer Diane Stewart aboard the Ariga, discover and destroy the source of the Nanostray virus. Gameplay: Addressing the complaints many had with the original's tacked-on touchscreen features, Nanostray 2 boasts three control schemes – classic control, left-handed touch control, and right-handed touch control, classic being the default scheme. The classic scheme employs the A and B buttons for primary and secondary weapons, the D-pad for movement, and the shoulder buttons (L and R) to change satellite drone placement. The touch control scheme employs the stylus/touch screen for movement, the D-pad or face buttons for use of the primary weapon, and the shoulder buttons for use of the secondary weapon. As in the previous game, gameplay focuses more on graphics quality than on touch-screen control. Customization is now a key part of the experience: at the start of a level, players can adjust which special weapons they'll take into the fight, alter the angle of their side-mounted guns (which can be mounted on the front, sides and rear of the ship), and even tune the ship's sensitivity to D-pad commands. Besides the main single-player mode, Nanostray 2 also has a Challenge mode, where players can try to get a set number of points, collect a set number of coins, or survive for a specific time limit. Gameplay: Modes Adventure – new to the Nanostray series is a developed story and voice acting. Here, 'Nanostray' is a virus that infects and controls machines for malicious purposes, and the player must collect samples and seek more information on the virus. To unlock other features, the player must first play through Adventure mode. Each level cleared in the Adventure mode is made available in the Arcade mode, and one or more challenges are added to the Challenge mode. After clearing the first stage (Teppeki Dock), the game allows the player to play the next three stages (Kaikan Outpost, Naizoh Habitat, and Shinkai Bay) in any order. The player can then do the same with the following three stages (Daitoshi Station, Kigan Belt, and Kohai City). After those stages have been cleared, the final stage (Himuro Base) is unlocked. However, if the player runs out of lives or continues, they must start back at the first level. Gameplay: Arcade – in Arcade mode, the objective is to score as many points as possible on the 'hard' difficulty. Stages in Arcade mode are unlocked after they are played in Adventure mode. A player's high scores can be uploaded via the Nintendo Wi-Fi Connection to online leaderboards. Challenge – four groups of challenges, eight strong each, are presented to the player. Challenges force the player to end the stages under different conditions, for example, reaching a minimum score, surviving a set amount of time, collecting a certain number of coins, etc. 
2-Player – the game's multiplayer mode is limited to play between two players in multiplayer cooperative (multi-card) and duel modes, both of which are played locally. The game also has single-card download capability, with two modes available. Simulator – for each group of challenges cleared, one mini-game is unlocked in Simulator mode. These mini-games include Nanobreak, Nanogrid, Nanorush, and Nanotorque. Gameplay: Weapons Players are limited to selecting which of six subweapons they would prefer. The primary weapon remains constant throughout gameplay, being a repeating laser bolt which can be augmented by satellites. Subweapons have different abilities, acting as lasers, mines, or remotely detonated devices. Each subweapon has a different power requirement, which draws from a limited supply on the player's ship. The power supply is replenished by collecting blue energy coins throughout a level. Reception: Nanostray 2 received "generally favorable reviews" according to the review aggregation website Metacritic. Some reviewers praised features such as the 3D graphics and solid gameplay, and others criticized the still-awkward-though-completely-optional touch-screen controls and the unusual positions of save points between levels. GameSpot praised it as "a dyed-in-the-wool shoot-'em-up that offers great action in a shiny, proficient package", while lamenting its "D pad controls [as] too sluggish" and its "Disappointing single-card play". IGN praised the game's graphics as "impressive...even the title screen" while lamenting its "enemy and vehicle design [as] uninspired". Game Informer gave the game an above-average review, while Electronic Gaming Monthly and Nintendo Power gave it mixed reviews, a few months before the game was released Stateside.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Customer base** Customer base: The customer base is a group of customers who repeatedly purchase the goods or services of a business. These customers are a main source of revenue for a company. The customer base may be considered a business's target market, where customer behaviors are well understood through market research or past experience. Relying on a customer base can make growth and innovation difficult. Companies with a customer base consisting mainly of large companies may increase their customer base by pursuing small and mid-size companies. Legal issues: From a legal point of view, the customer base is an accessible collection of confidential data on entities buying goods or using services of a particular entrepreneur, actually or contractually related to that entrepreneur (customers), of measurable economic value, enabling the conclusion or execution of contracts with those customers. A customer base within this meaning generally satisfies the conditions for recognizing it as a type of non-technical know-how. A customer base may be traded; in particular, it may be sold, and it is possible to authorize someone to use it. A customer base may also be contributed to a company as an in-kind contribution. Building the Base: All businesses begin with no customers. These start-ups begin with an abstract idea that slowly evolves into something someone will buy. As these products evolve from abstract ideas into primitive objects that are then further refined, the business that created the product begins to gain customers. The satisfied customers become the repeat buyers and core customers of the company. This is the process that creates the customer base. Most often, successful start-ups begin with low-end or down-market customers with low income and low costs. As the products or services that are being bought are polished and remade, a company gains higher-end customers who gain interest in the product as it reaches higher levels of functionality, use, or value. As the shift to these higher-priority customers continues, they begin to be a larger source of income for the company, and slowly become the main base to which the business lends the most importance. This process, of moving from low-end customers to more expensive and more profitable customers, is known as upstreaming, and is an integral part of the theory of disruptive innovation. Businesses work very competitively to keep their core market intact. The sellers will research their buyers to increase customer awareness. Keeping products customer-oriented has become so great a priority, in fact, that it has become a large focus of business schools to teach all types of business administrators, from manager to marketer, to keep the customer in mind for the improvement and creation of sellable products. It is very rare for an established company to lose its core customers to new entrants, and it has been stated that when an established company loses its consumer base in a sudden and straightforward way, it was not an ingenious move of the entrant that allowed this to happen, but rather a result of the established company “dropping the ball.” The Customer-Base Customer: As companies grow their customer base, and gain experience satisfying them, their customers grow accustomed to that business accomplishing a certain task for them. The company or product’s brand name may even correlate with the task the customer uses it for. 
Xerox, Kleenex, and Band-Aid are some extreme cases of brand names being used as the generic name of the product itself. In fact, as long as customers are continually satisfied with their purchases, the act of going to that company’s brand to accomplish a specific task becomes habitual. Repeat buyers and users are also useful for further reasons, as they are the source of “word of mouth” advertising. Studies have shown that customer satisfaction with a brand leads to more purchases, from both the same and new customers. A satisfied customer expresses their enjoyment of the product, or even shows a friend the product and has them try it out, while a dissatisfied customer may speak against a product or not mention it at all. Of course, the core consumer is the main spreader of the company’s brand name, and the more they use and like what they consume, the more those that surround them will gain interest and then potentially become customers themselves. Shifting of Customer Priority: Contented consumers eventually become fully saturated, and no longer desire the product to be upgraded as it had been before. Such a customer begins to lose interest, and stops being a regular buyer for the business. As a company tends to drift upmarket, many lower-end customers do not keep up. These customers then tend to turn to other companies for alternative products or services that have features they value over the original company's usual upgrades. The original company also allows these customers to leave, as it has shifted priority to higher-end customers. As old core customers lose priority, the company that sold to them does not fight very hard to keep them. Fighting for the old customers could risk losing the new, more profitable people. This allows new start-up businesses to start moving upstream by interesting these customers and attaining them for themselves, as the start-up goes through the same cycles that the established company went through. By chasing after higher-end customers and letting less profitable customers lose priority and be taken away by rising entrants, a business manages to shift its base to entirely new sets of people.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zirconium acetylacetonate** Zirconium acetylacetonate: Zirconium acetylacetonate is the coordination complex with the formula Zr(C5H7O2)4. It is a common acetylacetonate of zirconium. It is a white solid that exhibits high solubility in nonpolar organic solvents, but not in simple hydrocarbons. The complex is prepared by treating zirconium oxychloride with acetylacetone: ZrOCl2 + 4 Hacac → Zr(acac)4 + 2 HCl + H2O. The complex has a square antiprismatic geometry with eight nearly equivalent Zr–O bonds of length 2.19 Å. The molecular symmetry is D2, i.e. the complex is chiral. Compounds of high coordination number tend to be stereochemically nonrigid, as indicated by the observation of one methyl signal by proton NMR spectroscopy. More volatile than Zr(acac)4 is the related complex of 1,1,1-trifluoroacetylacetonate.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Exocrine pancreas cell** Exocrine pancreas cell: An exocrine pancreas cell is a pancreatic cell that produces enzymes that are secreted into the small intestine. These enzymes help digest food as it passes through the gastrointestinal tract. Exocrine cells include the acinar cells, which secrete the digestive enzymes, and the duct cells, which secrete a bicarbonate-rich solution and mucin.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Isotoluene** Isotoluene: The isotoluenes in organic chemistry are the non-aromatic toluene isomers with an exocyclic double bond. They are of some academic interest in relation to aromaticity and isomerisation mechanisms. The three basic isotoluenes are ortho-isotoluene or 5-methylene-1,3-cyclohexadiene (here labelled 1); para-isotoluene (2); and meta-isotoluene (3). Another structural isomer is the bicyclic compound 5-methylenebicyclo[2.2.0]hexene (4). The o- and p-isotoluenes isomerise to toluene, a reaction driven by aromatic stabilisation. It is estimated that these compounds are 96 kJ mol−1 less stable than toluene. The isomerisation of p-isotoluene to toluene takes place at 100 °C in benzene with bimolecular reaction kinetics by an intermolecular free radical reaction. The intramolecular isomerisation, a 1,3-sigmatropic reaction, is unfavorable because an antarafacial mode is enforced. Other dimer radical reaction products are formed as well. Isotoluene: The ortho-isomer is found to isomerise at 60 °C in benzene, also in a second-order reaction. The proposed reaction mechanism is a concerted intermolecular ene reaction. The reaction product is either toluene or a mixture of dimerized ene reaction products, depending on the exact reaction conditions. Ortho-isotoluene has been researched in connection with the mechanism of initiator-free polymerization of styrene.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gondola (retail)** Gondola (retail): A gondola (usually pronounced /ɡɒnˈdoʊlə/ in this context) is a freestanding fixture used by retailers to display merchandise. Gondolas typically consist of a flat base and a vertical component featuring notches, pegboards, or slatwalls. The vertical piece can be fitted with shelves, hooks, or other displays. Gondolas placed end-to-end can form rows of shelving, while stand-alone gondolas tend to be used for special themed displays. A gondola placed perpendicular to the end of a row of other gondolas can be used as an endcap. In Europe, gondola normally refers to double-sided shop shelving. In clothing stores, merchandising is carried out using specialized shelving for clothing, which makes it possible to highlight specific products and increase the average basket size at the checkout.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**High-speed door** High-speed door: High-speed doors are door systems, mainly used in industrial applications. They are technical enhancements of the generally known sectional doors, PVC fabric doors or roller shutters. The main difference is that their durable construction provides a higher operating speed, and they are able to sustain a higher number of cycles (opening and closing cycles) while requiring lower maintenance and repair costs. Depending on the intended field of application, horizontal or vertical operating door types are available. High-speed door: In North America, the Door and Access Systems Manufacturing Association (DASMA) defines high-performance doors as non-residential, powered doors, characterized by rolling, folding, sliding or swinging action, that are either high-cycle (minimum 100 cycles/day) or high-speed (minimum 20 inches (508 mm)/second), and that meet two of the following three criteria: made-to-order for exact size and custom features, designed to be able to withstand equipment impact (break-away if accidentally hit by a vehicle), or designed to sustain heavy usage with minimal maintenance. In Europe and the UK, high-speed doors are generally doors which operate at over 500 mm/second opening speed. Other common names include "Rapid Action Doors", "Fast Doors", "PVC Speed Doors" and "Speed Doors". The doors may be constructed from a variety of materials including PVC, aluminium, and steel insulated sections. Other popular variations include "Zipper Doors", which include a special zipper breakout system to prevent damage. High-speed doors are an evolution of the traditional roller shutter door (hence the alternative name they are often given of Rapid Roll Doors). They are primarily designed to give higher operating speeds, improved sealing, and a higher number of opening and closing cycles than traditional roller shutters, without compromising reliability and durability. To achieve this they generally have a strengthened drivetrain, a strong but lightweight PVC curtain (usually with a vision window for visibility) and a high-speed industrial motor. Usually they will roll up vertically to the top of the doorway, though there are specialist bi-parting, horizontally opening variations available. Application: High-speed doors are usually used wherever goods traffic occurs and where the doors have to fulfill special requirements. In the food and beverage industry, or the medical industry for example, special climatic conditions have to prevail; short opening and closing times reduce cooling loss, avoid airflow and enable a smooth operating procedure. They can also be designed in larger dimensions for the mining and aircraft industries. Application: Beverage industry: Intelligent airlock solutions can be achieved by means of high-speed doors. Two doors with highly transparent laths give a clear view throughout. Pressure and temperature differences can easily be controlled by an airlock in which the transporter enters and one door cannot be operated before the other has closed (a small sketch of this interlock logic appears below). This is also used in security applications. Application: Food, clean, and pharmaceutical processes: In the strict environmental constraints of pharmaceutical processes and the aggressive environments of food processes, where hygiene is imperative, doors must not only provide a structure made of stainless steel or composite materials to prevent corrosion, but must also ensure an exposure time as short as possible to reduce the risk of airborne contamination. 
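The interlock rule in the beverage-industry paragraph above ("one door cannot be operated before the other has closed") amounts to a small state machine. Below is a minimal sketch of that rule; the class and state names are invented for illustration and do not reflect any manufacturer's controller logic.

```python
# Two-door airlock interlock sketch: a door may open only while the
# opposite door is fully closed. Names and states are illustrative.
class Airlock:
    def __init__(self) -> None:
        self.state = {"outer": "closed", "inner": "closed"}

    def request_open(self, door: str) -> bool:
        other = "inner" if door == "outer" else "outer"
        if self.state[other] != "closed":
            return False            # interlock: refuse until the other door closes
        self.state[door] = "open"
        return True

    def close(self, door: str) -> None:
        self.state[door] = "closed"

lock = Airlock()
assert lock.request_open("outer")      # transporter enters the airlock
assert not lock.request_open("inner")  # blocked while the outer door is open
lock.close("outer")
assert lock.request_open("inner")      # now permitted
```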
Application: Car manufacturing industry: The automotive industry is one where high-speed doors are well known. High volumes of cars are produced in short periods of time. High-speed doors are vital for the logistics processes; high speed and low maintenance ensure optimum production. Special high-speed folding doors are used on the dynamo tuning cells. Profitability: High-speed doors may increase the efficiency of many companies. High opening speeds minimize the waiting time in front of the doors and thus accelerate the logistic processes, and they control temperatures and pressure differences while saving energy, isolating clean areas from airborne contamination but still optimizing traffic flow. Refrigeration: Cold storage requires easy transfer between zones of different temperature and humidity. High-speed doors help maintain temperature in a refrigerated space through the latest technologies available: unwanted temperature transfer from zone to zone is blocked by technologies designed for this sector. The flexibility and rapidity of high-speed doors are very important in this case. Application: Chemical Factory: Design requirements for explosion-proof electrical equipment are set out in ATEX legislation, which specifies the levels of electrical standards to be met, following conservative prescriptions. In places where people work with highly volatile and flammable products, the best advice is to rely on a guaranteed and reliable manufacturer of rapid flexible doors. Pneumatic components react quickly and in complete safety. Supermarket Sector: Rapid doors for food businesses meet the minimum requirements for hygiene and for the protection of unauthorized personnel, as for example in a supermarket, since it is quite likely that the area next to the door will contain unauthorized and untrained customers. Application: Hangar Zone: In the large openings of airports and naval ports where high-speed doors work, they are required to provide easy, quick and safe passage from inside to outside, and vice versa. Naval dockyards are a very difficult environment to operate in, being places where ship maintenance or protection from the sea may be requested at any time, and airport doors likewise need close to 100% uptime. Requirements: High strains, caused by the high operating speed (up to 4 m/s) and the frequency of openings, have to be taken into account during construction. In the same way, basic conditions like size and installation location add up to considerable requirements regarding safety and control technology. Requirements: A unique "roll-up" system generally distinguishes a high-speed door from a conventional roller door. The door's main objective is to produce a high opening speed, and the guiding system must allow smooth operation with minimal friction. Effortless movement will ensure the longevity of the door's moving parts and operating soundness. The latest versions of high-quality high-speed doors use a spiral guiding system, keeping the turns of the door blade apart through a whole operating cycle, ensuring effortless movement of the door's roller devices in the guiding system and contributing to the very high speed. These doors open fast and close more slowly largely to avoid getting their rigid closing edge caught by a forklift, at the expense of insulation. Requirements: A counterbalance system generally forms part of the door's construction. 
This is designed with spring or weight mechanisms in the side frames, ensuring an emergency opening function and aiding the opening speed. Regulation: In Europe, after a few incidents due to the excessive speed of the curtain and the stiffness of the embedded sensor, the European Community implemented new regulations (EN 13241-1). Among the various normative constraints, the requirement against mechanical crushing during closure makes it necessary to prevent contact with the closing door or, in case of contact, to limit the vertical dynamic load below a strict curve. Regulation: If the high-speed door does not meet one or more of these constraints, the closing speed must be limited to 0.5 m/s (1.64 ft/s). Only a few manufacturers, offering a fully flexible and harmless sealing edge, retained the possibility of closing at a higher speed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ISO 639-1** ISO 639-1: ISO 639-1:2002, Codes for the representation of names of languages—Part 1: Alpha-2 code, is the first part of the ISO 639 series of international standards for language codes. Part 1 covers the registration of two-letter codes. There are 183 two-letter codes registered as of June 2021. The registered codes cover the world's major languages. These codes are a useful international and formal shorthand for indicating languages. Many multilingual web sites use these codes to prefix URLs of specific language versions of their web sites: for example, "ru." before the website name indicates the Russian version of that website. See also IETF language tag. (Two-letter country-specific top-level-domain code suffixes are often different from these language-tag prefixes). ISO 639-1: ISO 639, the original standard for language codes, was approved in 1967. It was split into parts, and in 2002 ISO 639-1 became the new revision of the original standard. The last code added was ht, representing Haitian Creole, on 2003-02-26. The use of the standard was encouraged by IETF language tags, introduced in RFC 1766 in March 1995, and continued by RFC 3066 from January 2001 and RFC 4646 from September 2006. The current version is RFC 5646 from September 2009. Infoterm (International Information Center for Terminology) is the registration authority for ISO 639-1 codes. ISO 639-1: New ISO 639-1 codes are not added if an ISO 639-2 code exists, so systems that use ISO 639-1 and 639-2 codes, with 639-1 codes preferred, do not have to change existing codes. If an ISO 639-2 code that covers a group of languages is used, it might be overridden for some specific languages by a new ISO 639-1 code. There is no specification on treatment of macrolanguages (see ISO 639-3).
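The URL-prefixing convention above is simple to illustrate. Here is a small sketch with a hypothetical localized_host helper and a deliberately tiny sample of the code table (the real registry has 183 entries):

```python
# Hypothetical helper: prefix a hostname with an ISO 639-1 code, as many
# multilingual sites do ("ru.example.org"). The dict is a small sample,
# not the full registry.
ISO_639_1 = {"en": "English", "ru": "Russian", "fr": "French", "ht": "Haitian Creole"}

def localized_host(code: str, base_host: str) -> str:
    if code not in ISO_639_1:
        raise ValueError(f"{code!r} is not in this sample of ISO 639-1 codes")
    return f"{code}.{base_host}"

print(localized_host("ru", "wikipedia.org"))  # -> ru.wikipedia.org
```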
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cutscene** Cutscene: A cutscene or event scene (sometimes in-game cinematic or in-game movie) is a sequence in a video game that is not interactive, interrupting the gameplay. Such scenes are used to show conversations between characters, set the mood, reward the player, introduce newer models and gameplay elements, show the effects of a player's actions, create emotional connections, improve pacing or foreshadow future events. Cutscenes often feature "on the fly" rendering, using the gameplay graphics to create scripted events. Cutscenes can also be pre-rendered computer graphics streamed from a video file. Pre-made videos used in video games (either during cutscenes or during the gameplay itself) are referred to as "full motion videos" or "FMVs". Cutscenes can also appear in other forms, such as a series of images or as plain text and audio. History: The Sumerian Game (1966), an early mainframe game designed by Mabel Addis, introduced its Sumerian setting with a slideshow synchronized to an audio recording; it was essentially an unskippable introductory cutscene, but not an in-game cutscene. Taito's arcade video game Space Invaders Part II (1979) introduced the use of brief comical intermission scenes between levels, where the last invader who gets shot limps off screen. Namco's Pac-Man (1980) similarly featured cutscenes in the form of brief comical interludes, about Pac-Man and Blinky chasing each other. Shigeru Miyamoto's Donkey Kong (1981) took the cutscene concept a step further by using cutscenes to visually advance a complete story. Data East's laserdisc video game Bega's Battle (1983) introduced animated full-motion video (FMV) cutscenes with voice acting to develop a story between the game's shooting stages, which became the standard approach to game storytelling years later. The games Bugaboo (The Flea) in 1983 and Karateka (1984) helped introduce the cutscene concept to home computers. History: In the point-and-click adventure genre, Ron Gilbert introduced the cutscene concept with non-interactive plot sequences in Maniac Mansion (1987). Tecmo's Ninja Gaiden for the Famicom in 1988 and NES the following year featured over 20 minutes of anime-like "cinema scenes" that helped tell an elaborate story. In addition to an introduction and ending, the cutscenes were intertwined between stages and gradually revealed the plot to the player. The use of animation or full-screen graphics was limited, consisting mostly of still illustrations with sound effects and dialogue written underneath; however the game employed rather sophisticated shots such as low camera angles and close-ups, as well as widescreen letterboxing, to create a movie-like experience. History: Other early video games known to use cutscenes extensively include The Portopia Serial Murder Case in 1983; Valis in 1986; Phantasy Star and La Abadía del Crimen in 1987; Ys II: Ancient Ys Vanished – The Final Chapter, and Prince of Persia and Zero Wing in 1989. Since then, cutscenes have been part of many video games, especially in action-adventure and role-playing video games. History: Cutscenes became much more common with the rise of CD-ROM as the primary storage medium for video games, as its much greater storage space allowed developers to use more cinematically impressive media such as FMV and high-quality voice tracks. Types: Live-action cutscenes Live-action cutscenes have many similarities to films. 
For example, the cutscenes in Wing Commander IV used both fully constructed sets and well-known actors such as Mark Hamill and Malcolm McDowell for the portrayal of characters. Types: Some movie tie-in games, such as Electronic Arts' The Lord of the Rings and Star Wars games, have also extensively used film footage and other assets from the film production in their cutscenes. Another movie tie-in, Enter the Matrix, used film footage shot concurrently with The Matrix Reloaded that was also directed by the film's directors, the Wachowskis. In the DreamWorks Interactive (now known as Danger Close Games) 1996 point-and-click title The Neverhood Chronicles, full motion video cutscenes were made using the animation technique of stop motion and puppets sculpted out of plasticine, much like the game’s actual worlds and characters. The game’s creator, Douglas TenNapel, was in charge of filming the cutscenes, as stated in the game’s behind-the-scenes video. Types: Pre-rendered cutscenes Pre-rendered cutscenes are animated and rendered by the game's developers, and take advantage of the full array of techniques of CGI, cel animation or graphic novel-style panel art. Like live-action shoots, pre-rendered cutscenes are often presented in full motion video. Real time cutscenes Real time cutscenes are rendered on-the-fly using the same game engine as the graphics during gameplay. This technique is also known as machinima. Types: Real time cutscenes are generally of much lower detail and visual quality than pre-rendered cutscenes, but can adapt to the state of the game. For example, some games allow the player character to wear several different outfits, and appear in cutscenes wearing the outfit the player has chosen. It is also possible to give the player control over camera movement during real time cutscenes, as seen in Dungeon Siege, Metal Gear Solid 2: Sons of Liberty, Halo: Reach, and Kane & Lynch: Dead Men. Types: Mixed media cutscenes Many games use both pre-rendered and real time cutscenes as the developer feels is appropriate for each scene. Types: During the 1990s in particular, it was common for the techniques of live action, pre-rendering, and real time rendering to be combined in a single cutscene. For example, popular games such as Myst, Wing Commander III, and Phantasmagoria use film of live actors superimposed upon pre-rendered animated backgrounds for their cutscenes. Though Final Fantasy VII primarily uses real-time cutscenes, it has several scenes in which real-time graphics are combined with pre-rendered full motion video. Though rarer than the other two possible combinations, the pairing of live action video with real time graphics is seen in games such as Killing Time. Types: Interactive cutscenes Interactive cutscenes involve the computer taking control of the player character while prompts (such as a sequence of button presses) appear onscreen, requiring the player to follow them in order to continue or succeed at the action (a bare-bones sketch of this mechanic appears at the end of this article). This gameplay mechanic, commonly called quick time events, has its origins in interactive movie laserdisc video games such as Dragon's Lair, Road Blaster, and Space Ace. Criticism: Director Steven Spielberg, director Guillermo del Toro, and game designer Ken Levine, all of whom are avid video gamers, criticized the use of cutscenes in games, calling them intrusive. Spielberg states that making the story flow naturally into the gameplay is a challenge for future game developers. 
Hollywood writer Danny Bilson called cinematics the "last resort of game storytelling", as a person doesn't want to watch a movie when they are playing a video game. Game designer Raph Koster criticized cutscenes as being the part that has "the largest possibility for emotional engagement, for art dare we say", while also being the bit that can be cut with no impact on the actual gameplay. Koster claims that because of this, many of the memorable peak emotional moments in video games are actually not given by the game itself at all. It is a common criticism that cutscenes simply belong to a different medium. Others think of cutscenes as another tool designers can use to make engrossing video games. An article on GameFront cites a number of successful video games that make extensive use of cutscenes for storytelling purposes, referring to cutscenes as a highly effective way to communicate a storyteller's vision. Rune Klevjer states: "A cutscene does not cut off gameplay. It is an integral part of the configurative experience", saying that they will always affect the rhythm of a game, but if they are well implemented, cutscenes can be an excellent tool for building suspense or providing the player with helpful or crucial visual information.
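As promised under "Interactive cutscenes", here is a bare-bones sketch of the quick time event mechanic: each onscreen prompt expects a matching button press before a deadline, and a single miss fails the sequence. All names and timings are invented for illustration; real engines interleave this check with rendering and input polling.

```python
# Bare-bones quick time event check; values are illustrative only.
from typing import NamedTuple

class Prompt(NamedTuple):
    button: str
    deadline: float  # seconds from the start of the cutscene

def run_qte(prompts: list[Prompt], presses: list[tuple[str, float]]) -> bool:
    """presses: (button, timestamp) pairs recorded during the scene."""
    for prompt in prompts:
        hit = any(b == prompt.button and t <= prompt.deadline for b, t in presses)
        if not hit:
            return False  # missed prompt: the cutscene action fails
    return True

prompts = [Prompt("A", 1.0), Prompt("B", 2.5), Prompt("A", 4.0)]
print(run_qte(prompts, [("A", 0.8), ("B", 2.1), ("A", 3.9)]))  # True
print(run_qte(prompts, [("A", 0.8), ("B", 3.0)]))              # False
```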
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Orgastic potency** Orgastic potency: Within the work of the Austrian psychoanalyst Wilhelm Reich (1897–1957), orgastic potency is a human's natural ability to experience an orgasm with certain psychosomatic characteristics and resulting in full sexual gratification. For Reich, "orgastic impotence" is an acquired fear of sexual excitation, resulting in the inability to find full sexual gratification (not to be confused with anorgasmia, the inability to reach orgasm). This always resulted in neurosis, according to Reich, because that person could never discharge all built-up libido, which Reich regarded as actual biological or bioelectric energy. According to Reich, "not a single neurotic individual possesses orgastic potency" and, inversely, all people free from neuroses have orgastic potency. Reich coined the term orgastic potency in 1924 and described the concept in his 1927 book Die Funktion des Orgasmus, the manuscript of which he presented to Sigmund Freud on the latter's 70th birthday. Though Reich regarded his work as complementing Freud's original theory of anxiety neurosis, Freud was ambivalent in his reception. Freud's view was that there was no single cause of neurosis. Reich continued to use the concept as a foundation of a person's psychosexual health in his later therapeutic methods, such as character analysis and vegetotherapy. During the period 1933–1937, he attempted to ground his orgasm theory in physiology, both theoretically and experimentally, as he published in the articles The Orgasm as an Electrophysiological Discharge (1934), Sexuality and Anxiety: The Basic Antithesis of Vegetative Life (1934) and The Bioelectrical Function of Sexuality and Anxiety (1937). Background: Reich developed his orgasm theory between 1921 and 1924 and it formed the basis for much of his later work, including the theory of character analysis. The starting point of Reich's orgasm theory was his clinical observation of genital disturbance in all neurotics, which he presented in November 1923 in the paper "Über Genitalität vom Standpunkt der psychoanalytischen Prognose und Therapie" ("Genitality from the viewpoint of psycho-analytic prognosis and therapy"). That presentation was met with a chilling silence and much hostility, and was partially discredited because Reich could not adequately define normal sexual health. In response, and after a further year of research, Reich introduced the concept "orgastic potency" at the 1924 Psycho-analytic Congress in Salzburg in the paper "Die therapeutische Bedeutung des Genitallibidos" ("Further Remarks on the Therapeutic Significance of Genital Libido"). In addition to his own patients' love lives, Reich examined, through interviews and case records, 200 patients seen at the Vienna Psychoanalytic Polyclinic. Reich was impressed by the depth and frequency of the genital disturbances he observed. One example was a patient who had reported having a normal sex life, but on closer interviewing by Reich revealed that she did not experience orgasm during intercourse and had thoughts of murdering her partner following the act. Such observations made Reich very suspicious of superficial reports about sexual experience. 
His analysis of these cases led Reich to three conclusions: severe genital disturbance was present in all cases of neurosis; the severity of the genital disturbance correlated to the severity of the neurosis; and all patients who improved in therapy and remained symptom-free achieved a gratifying genital sex life. This led Reich to establish criteria for satisfactory sexual intercourse. Based on interviews with people who appeared to have satisfactory sex lives, he described the sex act as being optimally satisfactory only if it follows a specific pattern. Orgastic potency is Reich's term for the ability to have this maximally fulfilling type of sexual experience, which in the Reichian view is limited to those who are free from neuroses and appears to be shared by all people free of neuroses. Reich distinguished between complete release of accumulated sexual tensions in orgasm, resulting in the restoration of energy equilibrium, and orgastic impotence, in which the release of energy is incomplete. Reich argued that the inability of psychoneurotics to wholly discharge sexual energy caused a damming-up of sexual energy, providing in real time the physiological 'energy stasis' underlying the neurosis, with the psyche merely providing the historical content of the neurosis, which could not exist without the accompanying energy stasis. Definitions: Reich's precise definition for the phrase "orgastic potency" changed over time as he changed his understanding of the phenomenon. He first described it in detail in his 1927 book Die Funktion des Orgasmus. In the 1980 English translation of the book, Genitality in the Theory and Therapy of Neuroses, he defined orgastic potency as "the ability to achieve full resolution of existing sexual need-tension". In his 1940 book Die Entdeckung des Orgons, Erster Teil: Die Funktion des Orgasmus, published in English in 1942 as The Discovery of the Orgone, Volume 1: The Function of the Orgasm, he defined it as "the capacity to surrender to the flow of biological energy, free of any inhibitions; the capacity to discharge completely the dammed-up sexual excitation through involuntary, pleasurable convulsions of the body." His last published definition of orgastic potency, which is repeated in his 1960 published Selected Writings, is "the capacity for complete surrender to the involuntary convulsion of the organism and complete discharge of the excitation at the acme of the genital embrace." Reich related orgastic potency and orgastic impotence to, respectively, a healthy and an unhealthy sexual experience for the adult. He described that the healthy experience has specific biological and psychological characteristics; is identical for men and women; is characterised by love and the ability to express it; full, deep, pleasurable breathing is present; deep, delicious current-like sensations run up and down the body shortly before orgasm; and involuntary muscular movements are present before climax. Moreover, Reich defined the healthy sexual experience exclusively in terms of the sexual union between male and female. The difference between the presence and absence of orgastic potency in the sexual encounter, as described by Reich, has been summarised by Boadella. Recurrence in Reich's work: Reich expanded on the concept throughout his career. In his 1942 scientific autobiography The Discovery of Orgone, Vol. 
1: The Function of the Orgasm, Reich provided the following summary of his findings regarding orgastic potency: it is an outcome of health, he argued, because full orgastic potency can only come about if a person is psychologically free of neurosis (pleasure anxiety absent), physically free from "body armor" (chronic muscular contraction absent), socially free from compulsive morality and duty as imposed by authoritarian and mechanistic ways of life, and has the natural ability to love. According to one source, Reich held that the vast majority of people do not meet these criteria and thus lack orgastic potency. Recurrence in Reich's work: Character analysis In Reichian psychology, the individual lacking orgastic potency is seen to have developed a neurotic psychosomatic "armor" that blocks the experience of pleasure. This armor is differentiated into the functionally identical "character armor" and "muscular armor". Muscular armor prevents the sexual climax from being experienced throughout the body. For example, forms of armoring include pulling back the pelvis or tightening the thigh and buttock muscles. Reich used the terms "genital character" and "neurotic character" respectively to distinguish between two ideal character types: one with and one without orgastic potency. The genital character is the non-neurotic character structure, which is free from armor and, therefore, has the capacity for natural sexual and moral self-regulation, and experiences life as a fulfilment and unfolding of his or her natural tendencies and the struggle to achieve objectives. The neurotic character operates under a principle of compulsive moral regulation due to chronic energy stasis. The neurotic character's work and life are permeated by the struggle to suppress original and even more basic urges or tendencies. The various forms of neurotic character correspond to the equally many ways of suppressing such urges or tendencies that the human being in question considers to be dangerous or is ashamed of. Recurrence in Reich's work: Therapeutic resolving of armor The two goals of Reichian vegetotherapy are the attainment of orgastic potency (for sexual intercourse) and of the "orgasm reflex" during therapy. The orgasm reflex may be observed as waves of pleasure moving through the body, a series of spontaneous, involuntary movements, and signifies that the person is free of body armoring, entailing the ability to give and receive love in all its forms. Recurrence in Reich's work: Prevention through social reform The Invasion of Compulsory Sex-Morality, written in 1931, was Reich's first step in approaching the answer to the problem of mass neuroses in society, followed by The Mass Psychology of Fascism and The Sexual Revolution. The primary sociological issues with which Reich dealt included in particular the following three: How to prevent neurosis through correct upbringing and education. Recurrence in Reich's work: How to prevent sex-negative attitudes in society through sexual reform. How to prevent authoritarian repression through general social reform. Recurrence in Reich's work: Bio-electric experiments In 1934, Reich expanded his orgasm theory in the essay "Der Orgasmus als Elektro-physiologische Entladung" ("The Orgasm as an Electrophysiological Discharge"). Through clinical observations in his sex-counseling centers, Reich concluded that conceiving of the orgasm as only mechanical tension and relaxation could not explain why some experience gratification and others do not. 
Thus, based on the work of Friedrich Kraus and others, Reich proposed that the orgasm is a bio-electric discharge, and is part of what Reich termed the orgasm formula: mechanical tension → bioelectric charge → bioelectric discharge → mechanical relaxation. Recurrence in Reich's work: In 1934, Reich published the paper "Der Urgegensatz des vegetativen Lebens" ("Sexuality and Anxiety: The Basic Antithesis of Vegetative Life"). The paper is a literature study in which Reich explored "the physiology of the autonomic nervous system, the chemistry of anxiety, the electro-physiology of the body fluids and the hydro-mechanics of plasma movements in protozoa". In conclusion, Reich proposed a functional psychosomatic antithesis between the parasympathetic and sympathetic nervous systems, captured respectively as pleasure or movement "towards the world", and anxiety or movement "away from the world". The corollary is the idea that bioelectric energy displays an antithetic function: if it flows outward to the skin surface, causing a build-up of charge at the skin, it is experienced as pleasure; in contrast, if it flows inward, away from the skin surface, resulting in a lowering of charge at the skin, then it is experienced as an increase in central tension or anxiety. Finally, in 1937 Reich published Experimentelle Ergebnisse über die elektrische Funktion von Sexualität und Angst (The Bioelectrical Function of Sexuality and Anxiety), in which he believed he had experimentally verified the existence of what he first termed the "libidinal economy". The report summarised two years of research into the reaction of the skin to states of pleasure and anxiety. His claimed findings included the following: normal skin has a constant, basic electrical charge of 40 millivolts that does not change with mood states; erogenous zones have a wandering potential that can at times be much higher (200 millivolts) or lower, depending on the mood states; change in potential does not depend on the mechanical nature of the stimulus, but on changes in the subject's sensation or emotion; and, erogenous zones can have mechanical tension (be tumescent) without changes in levels of the charge, e.g. as in the case of a "cold erection". Recurrence in Reich's work: Orgone energy A common misconception about Reich's later developed orgone energy accumulator is that he claimed it could provide orgastic potency to those sitting inside the device. As Reich put it, "The orgone accumulator, as has been clearly stated in the relevant publications (The Cancer Biopathy, etc.), cannot provide orgastic potency." Reception: Academic and psychoanalytic reception According to Myron Sharaf, Reich's view that the capacity to unite tender and sensuous feelings is important for a healthy love relationship was not a new concept. Freud had noted this as early as 1912. However, Sharaf states that the involuntary physical aspects of the full genital discharge in Reich's work were new. He called the concept orgastic potency and the manner in which Reich "connected a series of psychological, social, and biological findings with the presence or absence of this function" unique to Reich. When Reich first introduced the orgasm theory at the psychoanalytic congress in Salzburg he was congratulated by Karl Abraham for successfully formulating the economic element of neurosis. However, Reich's presentation of the orgasm theory came exactly when psychoanalysis was moving away from the original Freudian instinct theory based on psychic energy. 
In his 1926 book Inhibitions, Symptoms and Anxiety, Freud completely abandoned his earlier position and wrote: "Anxiety never arises from repressed libido." Freud was ambivalent in his reception. When Reich presented him with the manuscript of Die Funktion des Orgasmus in May 1926, Freud replied, "That thick?" Later that year he wrote to Reich that the book was "valuable, rich in observation and thought", but in May 1928 wrote to Lou Andreas-Salomé: "We have here a Dr. Reich, a worthy but impetuous young man, passionately devoted to his hobby-horse, who now salutes in the genital orgasm the antidote to every neurosis. Perhaps he might learn from your analysis of K. to feel some respect for the complicated nature of the psyche." Reich was strongly influenced by Freud's distinction between psychoneuroses and actual neuroses, the latter being considered of a physiological origin, and the related libido as the energy of an unconscious sexual instinct. However, Reich emphasised the libido theory exactly when it was being discarded by psychoanalysis. Freud had reasoned that sexual maladaptation caused the active damming-up of "sexual stuff" and defined "actual neurosis" as anxiety based on dammed-up libido. However, Freud abandoned his view in the 1920s and postulated the never popularly accepted death instinct to explain the destructive behaviour that was earlier attributed to frustrated libido. Reich's view of the relationship between actual and psychoneuroses has not found its way into psychoanalytic thinking. However, it has the advantage of connecting psychopathology with physiology and, according to Charles Rycroft, this makes Reich the only psychoanalyst to provide any explanation as to why childhood pathogenic experiences (causing neuroses in classical psychoanalysis) do not disappear when neurotics leave their childhood environment. Sharaf writes that the theory was immediately unpopular within psychoanalysis. Paul Federn, Reich's training assistant, and Hermann Nunberg were particularly opposed to it. The German psychiatrist Arthur Kronfeld (1886–1941) wrote a positive review of Die Funktion des Orgasmus in 1927: "In this extremely valuable and instructive work the author has really succeeded in broadening as well as deepening Freud's theory of sex and of the neuroses. He broadens it by clarifying for the first time the significance of the genital orgasm for the development and the whole structure of the neuroses; he deepens it by giving Freud's theory of the actual neuroses an exact psychological and physiological meaning. I do not hesitate to consider this work of Reich's the most valuable contribution since Freud's The Ego and the Id." The most prominent Freudian to make clinical use of the concept orgastic potency was Eduard E. Hitschmann (1871–1957), the Director of the Psychoanalytic Polyclinic. Two further reactions to Reich's work in the psychoanalytic movement were either completely ignoring it or using the concept as if it was commonly accepted, but without referring to Reich as the source. As a result, the theme orgastic potency survived, but became divorced from the concepts in which Reich embedded it. For example, Charles Berg (1892–1957), in his Clinical Psychology - A Case Book of the Neuroses and their Treatment (1948), uses Reich's sex economic theory of anxiety as his own, without attributing it to Reich. Erik Erikson was another psychoanalytic writer who partially adopted Reich's concept without acknowledgement. 
In his bestselling Childhood and Society, Erikson wrote: "Genitality, then, consists in the unobstructed capacity to develop an orgastic potency so free of pregenital interferences that the genital libido ... is expressed in heterosexual mutuality ... and with a convulsion-like discharge of tension from the whole body." Otto Fenichel, in the classic textbook The Psychoanalytic Theory of Neuroses, uses aspects of Reich's orgasm theory but disguised the fact that they were Reich's contribution, and furthermore he hid the conflicts in the psychoanalytic movement that were explicit in Reich's work. A major entry mainly based on Fenichel's work appeared in the 1953 and 1970 editions of the Psychiatric Dictionary by L. Hinsie and R. Campbell: "Impotence, orgastic: The incapacity for achieving the orgasm or acme of satisfaction in the sexual act. Many neurotics cannot achieve adequate discharge of their sexual energy through the sexual act ... According to Fenichel, an important concomitant of orgastic impotence is that these patients are incapable of love." As of September 2012, there are no peer-reviewed articles in the PubMed database that discuss the concept of orgastic potency or Reich's orgasm theory. Reception: Reichian legacy The two colleagues of Reich who built most on Reich's orgasm theory and orgastic potency were the Danish psychiatrist Tage Philipson (1907–1961) and Alexander Lowen (1910–2008). They emphasised the importance of the human relationship in orgastic functions. Tage Philipson, in his 1952 book Kaerlighedslivet: Natur Eller Unnatur, studied natural and unnatural love-life. He wrote that "in healthy people sexuality and love will always be associated together. Sex will come from the heart and return to the heart ... the fully healthy person must be the person with completely free love feelings ... When this is the case other feelings will also be able to stream through the entire organism: hate, sorrow, anxiety, etc., and the orgasm, as the highest point of sexuality, will also be able to affect the entire organism." Alexander Lowen, in his 1966 book Love and Orgasm, distinguishes between achieving orgasm in the Kinsey meaning of sexual performance, and entering into a love relationship as a whole human, similar to Reich. Like Reich, Lowen considers the latter to be the expression of health, not a means to it. Theodore Peter Wolfe (Theodor Peter Wolfensberger) (1902–1954), an American pioneer in psychosomatic medicine and later colleague of Reich, thought that anxiety was the cause of both neuroses and psychosomatic distortions. When reading Reich's Die Funktion des Orgasmus he found in it what he called the key to understanding the dynamics of this relationship. In a review of Reich's sexual theories Elsworth Fredrick Baker (1903–1985), a psychiatrist and colleague of Reich, wrote that in particular Reich's sexual theories were commonly misinterpreted and misunderstood. While Reich was portrayed as advocating "a wild frantic promiscuity" to seek "mystical, ecstatic orgasm" that could cure all neuroses and physical ills, Baker continues, Reich in fact found that the healthy person needs less sexual activity and that the orgasm has a function to maintain health only for the healthy person. Comparing definitions of orgasm: The concepts of the sexual acme used in the 1948 and 1953 Kinsey reports and the 1966 research by Masters and Johnson were different from the one used by Reich. 
Reich directly related orgastic potency to the total response system, the personality, the contact-ability, and the total psychosomatic health of a person. In contrast, Kinsey and Masters and Johnson restricted their conclusions to phenomena that all sexual climaxes had in common. For example, Kinsey defined the male orgasm as "all cases of ejaculation" and the female orgasm as "the sudden and abrupt release ... from sexual tension, [excluding] the satisfaction that may result from sexual experience." In other words, Kinsey focusses on the physiology, anatomy and technique involved in inducing a discharge of tension. Therefore, Kinsey's usage of the term orgasm covers behaviour that in the Reichian typology ranges from orgastic potency to orgastic impotence. Furthermore, examples of physiological distinctions Reich made but which were not pursued by Kinsey and Masters and Johnson include the difference between local and total bodily responses, and between voluntary and involuntary movements. Comparing definitions of orgasm: Mature orgasm In 1905, Freud developed the psychoanalytic distinction between clitoral and vaginal orgasm, with only the latter being identified with psychosexual maturity. This distinction has since been challenged, among others, on physiological grounds. For example, Masters and Johnson wrote: "Are clitoral and vaginal orgasms truly separate and anatomic entities? From a biological point of view the answer to this question is an unequivocal NO." However, a clinically grounded qualitative distinction between psychosexual maturity and immaturity was only introduced with Reich's concept orgastic potency vs. orgastic impotence (instead of vaginal vs. clitoral). As Masters and Johnson focussed on phenomena shared by all sexual climaxes – ranging from what Reich categorised as orgastic potency to impotence – their finding has no direct relevance to or implications for Reich's distinction. Works by Wilhelm Reich: Sexology 1921: "Der Koitus und die Geschlechter", Zeitschrift für Sexualwissenschaft 8. Republished in English in 1975 as "Coition and the Sexes", Early Writings, Vol. 1, New York: FSG: 73–85, ISBN 0374513473. 1922: "Triebbegriffe von Forel bis Jung," Zeitschrift für Sexualwissenschaft 9. Republished in English in 1975 as "Drive and Libido Concepts from Forel to Jung" in Early Writings, Vol. 1, New York: FSG: 86–124, ISBN 0374513473. Works by Wilhelm Reich: 1923: "Zur Triebenergetik," Zeitschrift für Sexualwissenschaft 10. Republished in English in 1975 as "Concerning the Energy of Drives" in Early Writings, Vol. 1, New York: FSG: 143–157, ISBN 0374513473. Psychoanalysis In the following articles Reich discussed the positive and negative therapeutic reactions of patients to changes in their genitality: 1922: "Über Spezifität der Onanieformen", Internationale Zeitschrift für Psychoanalyse 8. Republished in English in 1975 as "Concerning Specific Forms of Masturbation" in Early Writings, Vol. 1, New York: FSG: 125–132, ISBN 0374513473. Works by Wilhelm Reich: 1924: "Über Genitalität vom Standpunkt der psychoanalytischen Prognose und Therapie", Internationale Zeitschrift für Psychoanalyse 10. Republished in English in 1975 as "On Genitality: From the Standpoint of Psychoanalytic Prognosis and Therapy" in Early Writings, Vol. 1, New York: FSG: 158–179, ISBN 0374513473. 1925: "Weitere Bemerkungen über die therapeutische Bedeutung der Genitallibido", Internationale Zeitschrift für Psychoanalyse 11. 
Republished in English in 1975 as "Further Remarks on the Therapeutic Significance of Genital Libido" in Early Writings, Vol. 1, New York: FSG: 199–221, ISBN 0374513473. 1926: "Über die Quellen der neurotischen Angst (Beitrag zur Theorie der psychoanalytischen Therapie) [On the Sources of Neurotic Anxiety (A Contribution to the Theory of Psychoanalytic Therapy)]", Internationale Zeitschrift für Psychoanalyse 12, and International Journal for Psychoanalysis 7: 381–391. Works by Wilhelm Reich: 1927: Die Funktion des Orgasmus: Zur Psychopathologie und zur Soziologie des Geschlechtslebens, Vienna: Internationaler Psychoanalytischer Verlag. Second, revised edition published in English in 1980 as Genitality in the Theory and Therapy of Neurosis, New York: FSG, ISBN 0374516413. Biology In the following articles Reich explored whether the orgasm theory was rooted in physiology: 1934: "Der Orgasmus als Elektro-physiologische Entladung", Zeitschrift für Politische Psychologie und Sexualökonomie 1: 29–43, Copenhagen. Republished in English in 1982 as "The Orgasm as an Electrophysiological Discharge", The Bioelectrical Investigation of Sexuality and Anxiety, New York: FSG: 3–20, ISBN 0374517282. Works by Wilhelm Reich: 1934: "Der Urgegensatz des vegetativen Lebens", Zeitschrift für Politische Psychologie und Sexualökonomie 1: 125–142, Copenhagen. Republished in English in 1982 as "Sexuality and Anxiety: The Basic Antithesis of Vegetative Life", The Bioelectrical Investigation of Sexuality and Anxiety, New York: FSG: 21–70, ISBN 0374517282. Works by Wilhelm Reich: 1937: Experimentelle Ergebnisse über die elektrische Funktion von Sexualität und Angst, Klinische und Experimentelle Berichte 4, Copenhagen: Sexpol Verlag. Republished in English in 1982 as "The Bioelectrical Function of Sexuality and Anxiety", The Bioelectrical Investigation of Sexuality and Anxiety, New York: FSG: 71–161, ISBN 0374517282. Synthesis 1942: The Discovery of the Orgone Vol. 1: The Function of the Orgasm, New York: Orgone Institute Press.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aperiodic graph** Aperiodic graph: In the mathematical area of graph theory, a directed graph is said to be aperiodic if there is no integer k > 1 that divides the length of every cycle of the graph. Equivalently, a graph is aperiodic if the greatest common divisor of the lengths of its cycles is one; this greatest common divisor for a graph G is called the period of G. Graphs that cannot be aperiodic: In any directed bipartite graph, all cycles have a length that is divisible by two. Therefore, no directed bipartite graph can be aperiodic. In any directed acyclic graph, it is a vacuous truth that every k divides the length of every cycle (because there are no directed cycles to divide), so no directed acyclic graph can be aperiodic. And in any directed cycle graph, there is only one cycle, so every cycle's length is divisible by n, the length of that cycle. Testing for aperiodicity: Suppose that G is strongly connected and that k divides the lengths of all cycles in G. Testing for aperiodicity: Consider the results of performing a depth-first search of G, starting at any vertex, and assigning each vertex v to a set Vi where i is the length (taken mod k) of the path in the depth-first search tree from the root to v. It can be shown (Jarvis & Shier 1996) that this partition into sets Vi has the property that each edge in the graph goes from a set Vi to another set V(i + 1) mod k. Conversely, if a partition with this property exists for a strongly connected graph G, k must divide the lengths of all cycles in G. Testing for aperiodicity: Thus, we may find the period of a strongly connected graph G by the following steps: Perform a depth-first search of G. For each edge e in G that connects a vertex on level i of the depth-first search tree to a vertex on level j, let ke = j - i - 1. Compute the greatest common divisor of the set of numbers ke (taking absolute values; tree edges contribute 0, which leaves the gcd unchanged). The graph is aperiodic if and only if the period computed in this fashion is 1. (A short implementation sketch follows at the end of this entry.) If G is not strongly connected, we may perform a similar computation in each strongly connected component of G, ignoring the edges that pass from one strongly connected component to another. Jarvis and Shier describe a very similar algorithm using breadth-first search in place of depth-first search; the advantage of depth-first search is that the strong connectivity analysis can be incorporated into the same search. Applications: In a strongly connected graph, if one defines a Markov chain on the vertices, in which the probability of transitioning from v to w is nonzero if and only if there is an edge from v to w, then this chain is aperiodic if and only if the graph is aperiodic. A Markov chain in which all states are recurrent has a strongly connected state transition graph, and the Markov chain is aperiodic if and only if this graph is aperiodic. Thus, aperiodicity of graphs is a useful concept in analyzing the aperiodicity of Markov chains. Applications: Aperiodicity is also an important necessary condition for solving the road coloring problem. According to the solution of this problem (Trahtman 2009), a strongly connected directed graph in which all vertices have the same outdegree has a synchronizable edge coloring if and only if it is aperiodic.
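The steps above translate directly into code. Below is a minimal sketch in Python, assuming the graph is strongly connected and given as a mapping from each vertex to its successors; the name graph_period is illustrative, not from Jarvis & Shier.

```python
from math import gcd

def graph_period(adj):
    """Return the period of a strongly connected directed graph.

    adj maps each vertex to an iterable of its successors. Each vertex
    is assigned its depth in a search tree rooted at an arbitrary
    vertex; the period is then the gcd over every edge (u, v) of
    |level(u) + 1 - level(v)|. Tree edges contribute 0, which leaves
    the gcd unchanged.
    """
    root = next(iter(adj))
    level = {root: 0}
    stack = [root]
    while stack:                              # depth-first traversal
        u = stack.pop()
        for v in adj[u]:
            if v not in level:
                level[v] = level[u] + 1
                stack.append(v)
    period = 0
    for u in adj:                             # gcd over all edges
        for v in adj[u]:
            period = gcd(period, abs(level[u] + 1 - level[v]))
    return period

# A directed 3-cycle has period 3; adding the edge 1 -> 0 creates a
# 2-cycle as well, and gcd(2, 3) = 1, so the graph becomes aperiodic.
print(graph_period({0: [1], 1: [2], 2: [0]}))     # 3
print(graph_period({0: [1], 1: [2, 0], 2: [0]}))  # 1
```

In the second example the extra edge produces cycles of lengths 2 and 3, so the computed period is their gcd, 1, matching the definition above.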
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Super Mario Maker 2** Super Mario Maker 2: Super Mario Maker 2 is a 2019 side-scrolling platform game and level creation system developed and published by Nintendo for the Nintendo Switch. It is the sequel to Super Mario Maker and was released worldwide on June 28, 2019. The gameplay is largely retained from that of its predecessor, in which players create their own custom courses using assets from various games across the Super Mario franchise and share them online. Super Mario Maker 2 introduces new features and course assets, including a single player story mode and new level assets based on Super Mario 3D World. Super Mario Maker 2: Like its predecessor, the game was met with positive reviews from critics, who praised its user interface, course editing tools, and music, although issues with the online multiplayer mode were criticized. As of December 2022, the game has shipped over 8.42 million copies, making it one of the best-selling games on the Nintendo Switch. Gameplay: Like its predecessor, Super Mario Maker 2 is a side-scrolling platform game in which players create their own courses using assets from across the Super Mario series and publish them onto the internet for others to play. Players can choose from a selection of prior Super Mario games to base their courses' visual style and gameplay on, including Super Mario Bros., Super Mario Bros. 3, Super Mario World, New Super Mario Bros. U, and the newly introduced Super Mario 3D World theme, which has been retooled to 2.5D to fit with the game's platforming style. Gameplay mechanics and enemy behaviors can vary between the styles, with some elements being limited to specific styles. The sequel adds various assets and tools, including assets and a course theme based on Super Mario 3D World. This theme is especially different from the four others, with many features and gameplay mechanics unique to it. Because of this style's differences from the others, the course has to be reset in order to switch to it. The sequel also introduces vertical courses, giving creators the ability to raise a course's vertical height limit. It also introduces local and online multiplayer modes, including co-op course creation, where up to two players can locally create stages together at the same time, as well as allowing up to four online players to complete user-made courses, cooperatively or competitively. The game also features a World Maker mode, where players create their own overworld maps, creating the equivalent of their own Super Mario game, called a Super World. The world style is locked as Super Mario World, but the courses themselves can be any style. Up to six Super Worlds can be saved but only one can be uploaded. One world can have up to five levels, including a castle, and a single Super World can have up to eight separate worlds. A world can also feature Toad houses, where the player plays minigames for extra lives, and Warp Pipes to get around the world quickly. Gameplay: Super Mario Maker 2 also features a new single player campaign known as Story Mode. The story follows Mario, Toadette, and several other Toads helping to rebuild Princess Peach's Castle, which had accidentally been reset by Undodog, a non-playable character. Players must traverse through over 100 Nintendo-created courses in order to collect enough coins to rebuild the castle. 
Non-player characters also offer players extra tasks and jobs throughout the mode. A Nintendo Switch Online subscription is required in order to access any online functionality in the game, including accessing player-created levels. Development and release: Super Mario Maker 2 was developed in-house at Nintendo's Kyoto Development Center, with planning beginning alongside development of the Nintendo Switch hardware itself. Most of the original development team reprised their roles for this sequel, including producer Hiroyuki Kimura, director Yosuke Oshino, and planner/game designer Shigefumi Hino. Nintendo producer Takashi Tezuka stated that the theme for the sequel was to expand on what could be done compared to its predecessor and try new things, which took the form of new course elements and new side content in the form of a full-fledged single player campaign. Tezuka also stated that as players continue to upload levels, he and the development staff would use these creations as a reference for adding content after launch, viewing the dynamic as a give-and-take between developers and consumers. Longtime Super Mario series composer Koji Kondo served as the game's sound director and composed some music. Additional music was composed by Atsuko Asahi, Toru Minegishi, and Sayako Doi. Super Mario Maker 2 was revealed during a Nintendo Direct presentation on February 13, 2019. It was released worldwide for the Nintendo Switch on June 28, 2019. Another Nintendo Direct was broadcast on May 15, 2019, which provided more information about new and returning features, gameplay modes, and pre-orders. In Europe, a capacitive stylus was included as part of the limited edition bundle of the game for customers who pre-ordered. Three major content updates were released for the game: The first update, released on October 2, 2019, added more multiplayer options, including playing with friends on local area networks or nearby networks. Development and release: The second update, released on December 5, 2019, added an extra mode themed around speedrunning Nintendo-created courses, with "ghost" images of the best timed performances shown for the player to know how well they compare against. Additional parts were added, such as a Master Sword power-up that Mario can pick up to become Link and gain a different set of moves, ice-encased coin blocks, platforms that greatly increase the player's speed, and invisible "P" blocks that are triggered by a "P" switch. Spikes and Pokeys were also added as new enemies. Development and release: The third and final update, released on April 22, 2020, added the ability to compose worlds to hold multiple courses, akin to the presentation of Super Mario World, with up to eight different worlds and up to forty levels. A new power-up gives Mario the ability to pick up and throw objects as in Super Mario Bros. 2. Additional power-ups added in this update include the Frog Suit for the Super Mario Bros. 3 style, the Power Balloon for Super Mario World, the Super Acorn in the New Super Mario Bros. U style, and the Boomerang Flower for the Super Mario 3D World game style. Five other new power-ups were added to the Super Mario 3D World style as well, including the Propeller Box, the Bullet Bill Mask, the Goomba Mask, the Red POW Box, and the Cannon Box. The Koopalings and Mechakoopas were added as new enemies, along with red-colored keys guarded by Phanto (an enemy from Super Mario Bros. 2) and new ON/OFF Switch-triggered blocks and mushroom trampolines in the Super Mario 3D World game style. 
Reception: Super Mario Maker 2 received generally favorable reviews, according to review aggregator Metacritic. The online multiplayer feature, however, was criticized for its performance issues. GameSpot, which gave the game an 8/10, stated that online lag frequently ruined the experience. Reception: Sales It was the best-selling game in Japan during its first two weeks of release, selling 279,357 physical copies. By the end of March 2021, the game had sold over 7.15 million copies worldwide, making it one of the best-selling games on the Switch. The 2023 CESA Games White Papers revealed that Super Mario Maker 2 had sold 8.42 million units as of December 31, 2022.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ticktack** Ticktack: Ticktack, or Tick-Tack, is an historical English tables game for two players using a board similar to that used today for Backgammon and other tables games. Like its much more elaborate French counterpart, Trictrac, it has the unusual feature that there are several different ways in which it can be won, including Toots and Rovers. History: Ticktack is mentioned as early as 1586 as a game played by English country gentlemen in inclement weather, along with three other games of the tables family: Lurch, Irish and Doublets. The earliest and only comprehensive set of rules appeared in 1672, written by Willughby. However, Cotton gives an overview in The Compleat Gamester of 1674, an account which was reprinted until 1754, after which the game faded from view, being reported in Halliwell-Phillips (1881) as archaic. Name: Willughby says that the name Ticktack came from the rule that if a man is touched, it must be played. Cotton agrees and likens it to "touch" and "take". However, the game appears to be related to French Trictrac – there are several common features – whose name was commonly thought to derive from the rattling noise of the dice being thrown against the side rail of the board. However, Fiske suggests it may be "merely alliterative reduplication (having reference to the route taken by the men), signifying a forward and back movement after the manner of 'zig-zag'; or it may be the application... of an onomatopoetic word already existing (signifying any sharp, clattering sound)." Players and equipment: Ticktack was a game for two players using two dice and 15 men apiece, and played on a tables board (see illustration) with 12 playing positions or points on each side. In Willughby's diagram of the board, the points are numbered from 1 to 12 on each side of the board, the numbers running in parallel. He refers to the board as having two tables, which are the two halves of the board – left and right. Rules: The notation used in Willughby's original MS is illustrated. In this case, Black sits at the top and assembles all 15 men on the home point: the 1 point at the top in this case. White sits at the bottom and assembles all 15 white pieces likewise on point 1 at the bottom of the board. Black's aim is to move the 15 black pieces clockwise around the board from their first point along the remaining 11 points on the home side and then in the reverse direction on the far side of the board towards the bearing table before bearing them off. Meanwhile, White moves anticlockwise from point 1 to point 12 on the home side, then around to the far side of the board to the bearing table on Black's side and bears off from there. To move their men, players roll the dice and assign each roll to one man, moving it the corresponding number of points forward. Two rolls may be combined, e.g. a 4 and a 3 may be used to move a man 7 points. Men may move to any point except one occupied by two or more opposing men. Willughby explains certain terms: Taking a point means playing 2 men onto it in the same turn (it therefore cannot be occupied by the opponent's men nor can its men be 'hit'). Rules: Binding a man is to add a second man to a point already occupied by a man of the same colour. Binding at length is when this is done using two throws for one man. Playing at length is simply to use two throws for one move. Playing at Home is when a player is playing men on the nearest side. Also called playing in his owne Tables. 
Playing from Home means moving all the men out from the starting point and off around the board. A blot is one man on a point and within 12 or fewer points of opposing men. A blot of a die is one man on a point within 6 or fewer points of opposing men. Rules: Hitting a blot is playing a man onto a point occupied by just one adverse man. A point may be occupied by as many men as a player desires. A man may be played to any vacant point; or to one that has men of the same colour or to one that is occupied by just one enemy man. A point occupied by two or more opposing men may not be played to nor may a man be 'played at length' (moved by the sum of two dice) if the intermediate point is so occupied. Rules: Winning An unusual feature of Ticktack is that there are five different ways of winning: Hitting a blot. This is the most common and is worth a single game. Tootes or Toots. Strictly this is when all the points in the last quadrant (the home quadrant) are taken by the player. However taking all the points in any of the four quadrants is a win and scores double. Optional. Rovers. If a player can occupy the 12th and 13th points from home with one man each and no other men have left the home point (point 1), this wins double. Optional. Rules: Two Corners. If a player either takes points 12 and 13 or points 1 and 24 (the opponent's point 1 in Willughby's notation) simultaneously, that player wins a double game. Keeping two men on point 1 is called keeping your sweetheart; if a player is forced to move one of them it is called breaking your sweetheart or losing your sweetheart. Rules: Bearing Off. If a player is able to bear off all 15 men over point 24 the game is won double. Willughby suggests this is very rare. If a player fails to spot that he could win, the opponent may say "Why not?" and claim the victory. A player may also raise by saying "I vie", whereupon the opponent must concede the game or hold by saying "I see it." The first vie doubles the game, the second trebles it and so on. Rules: Variation Cotton described a slightly different scheme. In the "plain game", players win a single game for hitting a blot or a double game either for filling up all the points in their second table or for taking the adversary's point 11. Cotton states that some play the game with Toots (= Tootes above), Boveries (= Rovers) and Flyers. Boveries means having a man on both one's own and one's opponent's point 11. The last-named feat is that of bringing a man around the tables before the opponent has moved out of the first table on the opponent's home side. Cotton does not mention bearing off. In summary, Cotton's scheme is: Hitting a blot - single win Filling the 2nd table - double win Point 11 - double win Toots (filling the 1st table) – optional Boveries – optional Flyers – optional Related games: Several sources equate Ticktack to the French game of Trictrac, which, although it has similar features, is considerably more complicated. It is possible, however, that Ticktack evolved as a simplified version of Trictrac. Literature: Baird, Caroline (2020). Games and Gaming in Early Modern Drama: Stakes and Hazards. Palgrave Macmillan. Boyer, Abel (1714). The Compleat French Master. London: Richard Sore. Cotton, Charles (1674). The Compleat Gamester. London: A.M. OCLC 558875155. Fiske, Willard (1905). Chess in Iceland and in Icelandic Literature: with Historical Notes on Other Table-Games. Florence: The Florentine Typographical Society. Literature: Forgeng, Jeff; Johnston, Dorothy; Cram, David, eds. (2003). 
Francis Willughby's Book of Games. Farnham: Ashgate. ISBN 1-85928-460-4. (Critical edition of Willughby's volume containing descriptions of games and pastimes, c.1660–1672. Manuscript in the Middleton collection, University of Nottingham; document reference Mi LM 14) Lee, Sir Sidney (1890). Stratford on Avon: From the Earliest Times to the Death of Shakespeare. Seeley and Co. Literature: Willughby, Francis (1672). Manuscript in the Middleton collection, University of Nottingham; document reference Mi LM 14.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sulfosalt mineral** Sulfosalt mineral: Sulfosalt minerals are sulfide minerals with the general formula AmBnXp, where A represents a metal such as copper, lead, silver, iron, and rarely mercury, zinc, vanadium; B usually represents a semi-metal such as arsenic, antimony, bismuth, and rarely germanium, or metals like tin and rarely vanadium; X is sulfur or rarely selenium and/or tellurium. The Strunz classification includes the sulfosalts in a sulfides and sulfosalts superclass. A group with similar-appearing formulas are the sulfarsenides (for example cobaltite (Co,Fe)AsS). In sulfarsenides the arsenic substitutes for sulfide anions whereas in the sulfosalts the arsenic substitutes for a metal cation. About 200 sulfosalt minerals are known. Examples include: A3BX3 type Pyrargyrite Ag3SbS3 Proustite Ag3AsS3 Tetrahedrite Cu12Sb4S13 Tennantite Cu12As4S13 A3BX4 type Enargite Cu3AsS4 Sulvanite Cu3VS4 Samsonite Ag4MnSb2S6 Geocronite Pb14(Sb,As)6S23 Gratonite Pb9As4S15 A2BX3 type Bournonite PbCuSbS3 Seligmannite PbCuAsS3 Aikinite PbCuBiS3 ABX2 type Boulangerite Pb5Sb4S11 Matildite AgBiS2 Smithite AgAsS2 Chalcostibite CuSbS2 Emplectite CuBiS2 Teallite PbSnS2 A2B2X5 type Ramdohrite Ag3Pb6Sb11S24 Jamesonite Pb4FeSb6S14 Cosalite Pb2Bi2S5 A2B3X6 type Andorite PbAgSb3S6 Lindstromite Pb3Cu3Bi7S15 AB2X4 type Zinkenite Pb9Sb22S42 Berthierite FeSb2S4 Cylindrite Pb3Sn4FeSb2S14 Nickel–Strunz Classification -02- Sulfosalts: IMA-CNMNC proposes a new hierarchical scheme (Mills et al., 2009). This list uses the Classification of Nickel–Strunz (mindat.org, 10 ed, pending publication). Abbreviations: "*" - discredited (IMA/CNMNC status). "?" - questionable/doubtful (IMA/CNMNC status). Nickel–Strunz Classification -02- Sulfosalts: "REE" - Rare-earth element (Sc, Y, La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu) "PGE" - Platinum-group element (Ru, Rh, Pd, Os, Ir, Pt) 03.C Aluminofluorides, 06 Borates, 08 Vanadates (04.H V[5,6] Vanadates), 09 Silicates: Neso: insular (from Greek νῆσος nēsos, island) Soro: grouping (from Greek σωρός sōros, heap, mound (especially of corn)) Cyclo: ring (from Greek κύκλος kyklos, wheel, ring, round) Ino: chain (from Greek ἴς [genitive: ἰνός inos], fibre) Phyllo: sheet (from Greek φύλλον phyllon, leaf) Tekto: three-dimensional framework (from Greek stem τεκτ- tekt- in words having to do with carpentry) Nickel–Strunz code scheme: NN.XY.##x NN: Nickel–Strunz mineral class number X: Nickel–Strunz mineral division letter Y: Nickel–Strunz mineral family letter ##x: Nickel–Strunz mineral/group number, x add-on letter (a small code-parsing sketch follows at the end of this entry) Class: sulfosalts 02.G Sulfarsenites, sulfantimonites, sulfobismuthites 02.G: IMA2007-010 02.GA Neso-sulfarsenites, etc., without additional S: 05 Proustite, 05 Pyrargyrite; 10 Xanthoconite, 10 Pyrostilpnite; 15 Samsonite; 20 Wittichenite, 20 Skinnerite, 25 Malyshevite, 25 Lisiguangite, 25 Muckeite, 25 Lapieite; 30 Aktashite, 30 Nowackiite, 30 Gruzdevite; 35 Laffittite; 40 Stalderite, 40 Routhierite; 45 Erniggliite; 50 Seligmannite, 50 Soucekite, 50 Bournonite 02.GB Neso-sulfarsenites, etc.: 05 Argentotennantite, 05 Giraudite, 05 Goldfieldite, 05 Freibergite, 05 Hakite, 05 Tennantite, 05 Tetrahedrite; 10 Selenostephanite, 10 Stephanite; 15 Cupropearceite, 15 Selenopolybasite, 15 Cupropolybasite, 15 Polybasite, 15 Pearceite, 15 Antimonpearceite, 15 Arsenpolybasite, 20 Galkhaite 02.GC Poly-sulfarsenites: 05 Hatchite, 05 Wallisite; 10 Sinnerite, 15 Watanabeite, 20 Simonite, 25 Quadratite, 30 Smithite, 35 Trechmannite, 40a Aleksite, 40b Kochkarite, 40c Rucklidgeite, 40c 
Poubaite, 40d Saddlebackite, 40e Babkinite; 45 Tvalchrelidzeite, 50 Mutnovskite 02.H Sulfosalts of SnS Archetype 02.HA With Cu, Ag, Fe (without Pb): 05 Emplectite, 05 Chalcostibite; 10 Miargyrite, 15 Livingstonite; 20 Berthierite, 20 Clerite, 20 Garavellite; 25 Baumstarkite, 25 Aramayoite 02.HB With Cu, Ag, Hg, Fe, Sn and Pb: 05a Krupkaite, 05a Aikinite, 05a Hammarite, 05a Gladite, 05a Friedrichite, 05a Lindstromite, 05a Pekoite, 05a Paarite, 05a Emilite, 05a Salzburgite, 05b Meneghinite, 05c Jaskolskiite; 10a Kobellite, 10a Tintinaite, 10b Giessenite, 10b Izoklakeite, 10c Eclarite; 15 Jamesonite, 15 Benavidesite; 20a Nagyagite, 20b Buckhornite, 20c Museumite, 20d Berryite, 20e Watkinsonite 02.HC With only Pb: 05a Sartorite, 05a Twinnite, 05a Guettardite, 05b Baumhauerite, 05b Baumhauerite-2a, 05c Liveingite, 05d Dufrenoysite, 05d Veenite, 05d Rathite, 05e Chabourneite, 05f Pierrotite, 05f Parapierrotite, 05g Marumoite; 10a Fuloppite, 10b Bismutoplagionite*, 10b Plagionite, 10c Heteromorphite, 10d Semseyite, 10d Rayite; 15 Boulangerite, 15 Falkmanite, 15 Plumosite*; 20 Robinsonite, 25 Moeloite, 30 Dadsonite, 35 Zoubekite, 35 Owyheeite 02.HD With Tl: 05 Lorandite, 05 Weissbergite; 15 Christite, 20 Jankovicite, 25 Rebulite, 30 Imhofite, 35 Edenharterite, 40 Jentschite, 45 Hutchinsonite, 50 Bernardite, 55 Sicherite, 60 Gabrielite 02.HE With alkalies, H2O: 05 Gerstleyite 02.HF With SnS and PbS archetype structure units: 20 Vrbaite; 25a Abramovite, 25a Levyclaudite, 25a Cylindrite, 25b Coiraite, 25b Incaite, 25b Potosiite, 25b Franckeite; 30 Lengenbachite 02.J Sulfosalts of PbS Archetype 02.JA Galena derivatives with little or no Pb: 05a IMA2005-036, 05a IMA2008-058, 05a Cupropavonite, 05a Pavonite, 05b Grumiplucite, 05c Kudriavite, 05d Cupromakovickyite, 05d Makovickyite, 05e Benjaminite, 05f Mummeite, 05g Borodaevite, 05h Mozgovaite; 10a Cuprobismutite, 10b Kupcikite, 10c Hodrushite, 10d Pizgrischite, 10e Paderaite; 15 Cuboargyrite, 15 Schapbachite; 20 Bohdanowiczite, 20 Matildite, 20 Volynskite 02.JB Galena derivatives, with Pb: 05 Diaphorite, 10 Cosalite; 15 Marrite, 15 Freieslebenite; 20 Cannizzarite, 20 Wittite; 25a Junoite, 25b Felbertalite, 25c Nordstromite, 25d Proudite, 25g Nuffieldite, 25i IMA2008-053, 25i Neyite, 25j Rouxelite; 30a Jordanite, 30a Geocronite, 30b Kirkiite, 30c Tsugaruite; 35a Zinkenite, 35b Scainiite, 35c Pillaite, 35d Pellouxite; 40a Bursaite?, 40a Gustavite, 40a Lillianite, 40a Xilingolite, 40a Treasurite, 40a Vikingite, 40a Fizelyite, 40a Andorite, 40a Roshchinite, 40a Uchucchacuaite, 40a Ramdohrite, 40b Aschamalmite, 40b Eskimoite, 40b Heyrovskyite, 40c Ourayite, 40d Schirmerite, 40e Ustarasite; 45 Angelaite, 45 Galenobismutite, 45 Weibullite; 55 Gratonite, 60 Marrucciite, 65 Vurroite 02.JC Galena derivatives, with Tl: 05 Ellisite, 10 Gillulyite 02.K Sulfarsenates, Sulfantimonates 02.KA Sulfarsenates with (As,Sb)S4 tetrahedra: 05 Enargite, 05 Stibioenargite*, 05 Petrukite; 10 Briartite, 10 Famatinite, 10 Luzonite, 10 Permingeatite, 10 Barquillite; 15 Fangite 02.KB Sulfarsenates with additional S: 05 Billingsleyite 02.L Unclassified Sulfosalts 02.LA Without essential Pb: 10 Dervillite, 15 Daomanite*, 20 Vaughanite, 25 Criddleite, 30 Fettelite, 35 Chameanite, 40 Arcubisite, 45 Mgriite, 50 Benleonardite, 55 Tsnigriite, 60 Borovskite, 65 Jonassonite 02.LB With essential Pb: 05 Miharaite, 20 Ardaite, 30 Madocite, 35 Larosite; 40 Petrovicite, 40 Mazzettiite; 45 Crerarite, 50 Launayite, 55 Playfairite, 60 Sorbyite, 65 Sterryite 02.M 02.MA Oxysulfosalts of Alkalies and 
Alkali Earths: 05 Ottensite, 05 Cetineite; 10 Sarabauite 02.X Unclassified Strunz Sulfosalts 02.XX Unknown: 00 Tazieffite, 00 Horobetsuite*, 00 Kitaibelite?, 00 Parajamesonite?, 00 Sakharovaite?, 00 Volfsonite* Synthetic sulfosalts: Many sulfosalts can be prepared in the laboratory, including many that do not occur in nature.
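The Nickel–Strunz code scheme described above is regular enough to parse mechanically. The following is a small illustrative sketch in Python (the name parse_strunz is an assumption, not from the source); it handles only full NN.XY.##x codes, not the truncated class or division prefixes such as 02.G that also appear in the list.

```python
import re

# NN.XY.##x: class number, division letter, family letter,
# mineral/group number, and optional add-on letter.
PATTERN = re.compile(
    r"^(?P<class_no>\d{2})\."
    r"(?P<division>[A-Z])(?P<family>[A-Z])\."
    r"(?P<number>\d{2})(?P<addon>[a-z]?)$"
)

def parse_strunz(code):
    """Split a full Nickel-Strunz code into its named components."""
    m = PATTERN.match(code)
    if m is None:
        raise ValueError(f"not a full NN.XY.##x code: {code!r}")
    return m.groupdict()

print(parse_strunz("02.HB.05a"))
# {'class_no': '02', 'division': 'H', 'family': 'B',
#  'number': '05', 'addon': 'a'}
```

For example, aikinite's code 02.HB.05a parses into class 02 (sulfosalts), division H, family B, group 05, with add-on letter a.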
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Coenzyme Q5, methyltransferase** Coenzyme Q5, methyltransferase: Coenzyme Q5, methyltransferase, more commonly known as COQ5, is an enzyme involved in the electron transport chain. COQ5 is located within the mitochondrial matrix and is a part of the biosynthesis of ubiquinone. Function: COQ5 has the role of catalyst in the C-methylation in the coenzyme Q biosynthesis, on the benzoic ring of CoQ6, the biosynthetic intermediate, in both humans and the yeast Saccharomyces cerevisiae. COQ5 is one of eleven polypeptides in yeast that are essential for Q production. Moreover, it assembles with the CoQ-synthome, a multi-subunit complex. In humans, primary Q deficiency results from mutations in many COQ genes, and diseases such as mitochondrial, cardiovascular, kidney and neurodegenerative diseases result from the decrease in Q biosynthesis. Development of soluble COQ5 proteins can be applied to other mitochondrial proteins. Coenzyme Q10 deficiency is associated with COQ5; therefore, to maintain CoQ10 levels in human cells, COQ5 is required. Catalytic activity: Catalyzes C-methylation in the ubiquinone biosynthetic process. Mechanism: COQ5 is an S-adenosyl methionine (SAM)-dependent methyltransferase (SAM-MTase) catalyzing the C-methylation step, converting 2-methoxy-6-polyprenyl-1,4-benzoquinone (DDMQH2) to 2-methoxy-5-methyl-6-polyprenyl-1,4-benzoquinone (DMQH2) in the CoQ6 biosynthesis pathway. Mechanism: In the catalytic mechanism of COQ5, based on structural analyses, the first step before methyl transfer is that Arg201 abstracts a hydrogen from a water molecule, forming a negatively charged oxygen atom which deprotonates the C5 atom of DDMQH2. Looking at the DDMQH2 substrate and Asn202, the hydroxyl group on the C4 atom and the side chain form a hydrogen bond, which leads to the formation of the O4′ anion. The stability of the C5 anion is a result of the negative charge being delocalized on the π bond conjugation system. Tyr78 acts as a catalytic base, and Tyr78, Arg201 and Asn202 are invariant in COQ5 homologues.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Moldable wood** Moldable wood: Moldable wood is a strong and flexible cellulose-based material. Moldable wood can be folded into different shapes without breaking or snapping. The patented synthesis is based on the deconstruction and softening of the wood's lignin, then re-swelling the material in a rapid "water-shock" process that produces a wrinkled cell wall structure. The result of this unique structure is a flexible wood material that can be molded or folded, with the final shape locked in place by simple air-drying. This discovery broadens the potential applications of wood as a sustainable structural material. This research, which was a collaborative effort between the University of Maryland, Yale University, Ohio State University, USDA Forest Service, University of Bristol, University of North Texas, ETH Zurich, and the Center for Materials Innovation, was published on the cover of Science in October 2021.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**NOBOX** NOBOX: Homeobox protein NOBOX, also known as newborn ovary homeobox protein, is a protein that in humans is encoded by the NOBOX gene. The official symbol (NOBOX) and the official full name (NOBOX oogenesis homeobox) are maintained by the HGNC. The NOBOX gene is conserved in chimpanzee, Rhesus monkey, cow, mouse, and rat. There are 175 organisms that have orthologs with the human gene NOBOX. It is capable of regulating other genes that are important in the development of follicles. In its absence, follicles do not develop and oocyte numbers decrease, which leads to infertility. Discovery: NOBOX was discovered by in silico subtraction in 2002, when Suzumori et al. searched for novel genes involved in early mammalian folliculogenesis. It is one of several genes that appeared in the search in expressed sequence tag (EST) databases of mouse. It was then cloned and characterised for its genomic structure. Gene location: The human NOBOX is located in chromosome 7q35 while the mouse NOBOX is in proximal chromosome 6. Protein structure: The human NOBOX gene spans 14 kb and contains 8 exons. It has a proline-rich C terminus and contains putative SH3 and WW domains. This C terminus is believed to be critical in its transcriptional activities when bound to oocyte-specific genes. NOBOX belongs to the family of proteins that contain a homeodomain. The homeodomain is a stretch of 32 specific amino acids in primates downstream of the NOBOX Arg303 residue and is very well conserved among the species. It contains an asparagine residue at position 51 which is important for its interactions with DNA base pairs. Function: NOBOX is a homeobox gene that is preferentially expressed in oocytes. In mice, it is essential for folliculogenesis and regulation of oocyte-specific genes. Regulation of these oocyte-specific genes is through direct binding of NOBOX to their promoter regions via specific consensus sequences, the NOBOX DNA binding elements (NBEs). Three NBEs have been identified: 5'-TAATTG-3', 5'-TAGTTG-3', and 5'-TAATTA-3' (a toy motif-scanning sketch follows at the end of this entry). A knockout study comparing NOBOX-deficient with wild-type ovaries in newborn female mice revealed that 74% (28/38 genes) were downregulated more than 5-fold and 15% (5/33 genes) were upregulated more than 5-fold. However, the microRNA population is not affected by NOBOX in newborn ovaries. NOBOX also plays an important role in the suppression of male-determining genes such as Dmrt1. Its deficiency can cause rapid loss of postnatal oocytes, and in its absence in female mice, follicles are replaced by fibrous tissue. Recently, a new role of NOBOX in controlling the G2/M arrest was discovered. Mutations and clinical significance: A mutation in the NOBOX gene is associated with premature ovarian failure (POF), also known as premature ovarian insufficiency (POI). It is a condition in which the ovaries lose their normal function before the age of 40. It is a heritable disease in up to 30% of patients and is characterised by secondary infertility, amenorrhea, hypoestrogenism, and elevated follicle-stimulating hormone levels in the serum (FSH>40IU/liter). It affects ≈1% of women below 40 years old. A study conducted on 96 white women with POF revealed one heterozygous mutation in the NOBOX homeodomain, p.Arg355His, in one patient. This mutation was absent in the control population and significantly disrupts the binding of NOBOX to the NBE. Arg355 is critical to DNA binding and is conserved in the homeodomain of the NOBOX from zebrafish to humans. 
Moreover, its significant negative effect suggests that the NOBOX homeodomain may function as a dimer, but its rare occurrence suggests a low contribution to POF. Further investigations on POF were conducted on Caucasian, African, Chinese, and Japanese women diagnosed with POF. Several NOBOX loss-of-function mutations were observed in Caucasian and African women, accounting for 6.2%, 5.6% and 6.4% of cases. These results suggest that the NOBOX gene is a strong autosomal candidate for POF and that its genetic mechanism involves haploinsufficiency. However, these mutations were not found in Chinese and Japanese women, making it a less common explanation for POF in the region. The POF syndrome is a highly heterogeneous clinical disorder, but a recent study showed the first homozygous mutation associated with NOBOX loss-of-function. One patient out of a population of 96 diagnosed with POF in China was found to have a novel homozygous truncating variant in the NOBOX gene. This truncated variant caused defective transcriptional activation of GDF9, a well-known target of NOBOX, which led to the loss of NOBOX's ability to induce G2/M arrest. This finding challenges the view that mutation is a less common explanation for POF in Asian populations. Mutations and clinical significance: Understanding the mutations in the NOBOX homeodomain is important for researchers and clinicians to develop diagnostic and therapeutic approaches for POF such as genetic control of mammalian reproductive life-span, regulation of fertility, and generation of mature eggs in the lab. Interactions: GDF9 POU5F1 DNMT10 FOXL2 FIGLA RSPO2 DMRT1
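Because the three NBE consensus sequences are short, fixed strings, scanning a promoter sequence for them is straightforward. Below is a toy sketch in Python, purely illustrative and not from the source; it searches only the given strand and, for brevity, ignores the reverse complement.

```python
# The three NOBOX DNA binding elements (NBEs) named above.
NBES = ("TAATTG", "TAGTTG", "TAATTA")

def find_nbes(seq):
    """Return (position, motif) pairs for every NBE occurrence in seq."""
    seq = seq.upper()
    hits = []
    for i in range(len(seq) - 5):
        window = seq[i:i + 6]          # all NBEs are 6 bases long
        if window in NBES:
            hits.append((i, window))
    return hits

# A made-up promoter fragment containing one NBE at offset 3.
print(find_nbes("ggcTAATTGcca"))       # [(3, 'TAATTG')]
```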
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Suppliers and Parts database** Suppliers and Parts database: The Suppliers and Parts database is an example relational database that is referred to extensively in the literature and described in detail in C. J. Date's An Introduction to Database Systems, 8th ed. It is a simple database comprising three tables: Supplier, Part and Shipment, and is often used as a minimal exemplar of the interrelationships found in a database. Suppliers and Parts database: The Supplier relation holds information about suppliers. The SID attribute identifies the supplier, while the other attributes each hold one piece of information about the supplier. The Part relation holds information about parts. Likewise, the PID attribute identifies the part, while the other attributes hold information about the part. The Shipment relation holds information about shipments. The SID and PID attributes identify the supplier of the shipment and the part shipped, respectively. The remaining attribute indicates how many parts were shipped. Referential constraints, known as foreign keys, ensure that these attributes can only hold values that are also found in the corresponding attributes of the Supplier and Part relations. It is assumed that only one shipment exists for each supplier/part pairing, which isn't realistic for real-world scenarios. This is intentionally oversimplified for pedagogical purposes, as is the entire database. SQL: The following SQL schema is one possible expression of the Suppliers-and-Parts database. Notes: The ID attributes are simple integers, but they could be (among other things) UUIDs or a system-defined identifier type that holds system-generated values. The choice of VARCHAR(10) is arbitrary and would be too small for real-world use. The application of the NOT NULL constraint to all attributes is a design decision based on the view that NULLs are to be avoided. It is not, strictly speaking, a requirement of the schema.
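One possible concrete expression of that schema is sketched below as a self-contained Python script using the standard-library sqlite3 module, so the constraints can actually be exercised. The table and key names (Supplier, Part, Shipment, SID, PID) follow the description above; the remaining attribute names (SName, Status, City, PName, Color, Weight, Qty) are illustrative assumptions consistent with the notes, not necessarily Date's exact choices.

```python
import sqlite3

# In-memory database; attribute names other than SID/PID are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite3 enforces FKs only with this pragma

conn.executescript("""
CREATE TABLE Supplier (
    SID    INTEGER      NOT NULL PRIMARY KEY,
    SName  VARCHAR(10)  NOT NULL,
    Status INTEGER      NOT NULL,
    City   VARCHAR(10)  NOT NULL
);

CREATE TABLE Part (
    PID    INTEGER      NOT NULL PRIMARY KEY,
    PName  VARCHAR(10)  NOT NULL,
    Color  VARCHAR(10)  NOT NULL,
    Weight REAL         NOT NULL,
    City   VARCHAR(10)  NOT NULL
);

-- At most one shipment per supplier/part pairing, hence the composite key.
CREATE TABLE Shipment (
    SID INTEGER NOT NULL REFERENCES Supplier(SID),
    PID INTEGER NOT NULL REFERENCES Part(PID),
    Qty INTEGER NOT NULL,
    PRIMARY KEY (SID, PID)
);
""")

# A shipment may only reference an existing supplier and part.
conn.execute("INSERT INTO Supplier VALUES (1, 'Smith', 20, 'London')")
conn.execute("INSERT INTO Part VALUES (1, 'Nut', 'Red', 12.0, 'London')")
conn.execute("INSERT INTO Shipment VALUES (1, 1, 300)")
conn.commit()
```

Inserting a Shipment row with an unknown SID or PID, or a second row for the same (SID, PID) pair, raises an IntegrityError, which is exactly the behaviour the referential and key constraints described above are meant to guarantee.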
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**UBE2J1** UBE2J1: Ubiquitin-conjugating enzyme E2 J1 is a protein that in humans is encoded by the UBE2J1 gene. The modification of proteins with ubiquitin is an important cellular mechanism for targeting abnormal or short-lived proteins for degradation. Ubiquitination involves at least three classes of enzymes: ubiquitin-activating enzymes, or E1s; ubiquitin-conjugating enzymes, or E2s; and ubiquitin-protein ligases, or E3s. This gene encodes a member of the E2 ubiquitin-conjugating enzyme family. The enzyme is located in the membrane of the endoplasmic reticulum (ER) and may contribute to the quality-control process of ER-associated degradation by the ubiquitin-proteasome system.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Panel painting** Panel painting: A panel painting is a painting made on a flat panel of wood, either a single piece or a number of pieces joined together. Until canvas became the more popular support medium in the 16th century, panel painting was the normal method, when not painting directly onto a wall (fresco) or on vellum (used for miniatures in illuminated manuscripts). Wood panels were also used for mounting vellum paintings. History: Panel painting is very old; it was a very prestigious medium in Greece and Rome, but very few examples of ancient panel paintings have survived. A series of 6th-century BC painted tablets from Pitsa (Greece) represent the oldest surviving Greek panel paintings. Most classical Greek paintings that were famous in their day seem to have been of a size comparable to smaller modern works – perhaps up to a half-length portrait size. However, for a generation in the second quarter of the fifth century BC there was a movement, called the "new painting" and led by Polygnotus, for very large painted friezes, apparently painted on wood, decorating the interiors of public buildings with very large and complicated subjects containing numerous figures at least half life-size, and including battle scenes. We can only attempt to imagine what these looked like from some detailed literary descriptions and vase-paintings that appear to echo their compositions. The first-century BC to third-century AD Fayum mummy portraits, preserved in the exceptionally dry conditions of Egypt, provide the bulk of surviving panel painting from the Imperial Roman period – about 900 face or bust portraits survive. The Severan Tondo, also from Roman Egypt (about 200 AD), is one of the handful of non-funerary Graeco-Roman specimens to survive. Wood has always been the normal support for the icons of Byzantine art and the later Orthodox traditions, the earliest of which (all in Saint Catherine's Monastery) date from the 5th or 6th centuries and are the oldest panel paintings that appear to be of the highest quality of their time. Encaustic and tempera are the two techniques used in antiquity. Encaustic largely ceased to be used after the early Byzantine icons. History: Although there seem from literary references to have been some panel paintings produced in Western Europe through the centuries between Late Antiquity and the Romanesque period, and Byzantine icons were imported, there are next to no survivals in an unaltered state. In the 12th century panel painting experienced a revival. Altarpieces seem to have begun to be used during the 11th century, with the possible exception of a few earlier examples. They became more common in the 13th century because of new liturgical practices—the priest and congregation were now on the same side of the altar, leaving the space behind the altar free for the display of a holy image—and thus altar decorations were in demand. The habit of placing decorated reliquaries of saints on or behind the altar, as well as the tradition of decorating the front of the altar with sculptures or textiles, preceded the first altarpieces. The earliest forms of panel painting were dossals (altar backs), altar fronts and crucifixes. All were painted with religious images, commonly the Christ or the Virgin, with the saints appropriate to the dedication of the church and the local town or diocese, or to the donor. Donor portraits, including members of the donor's family, are also often shown, usually kneeling to the side.
They were for some time a cheaper alternative to the far more prestigious equivalents in metalwork, decorated with gems, enamels, and perhaps ivory figures, most of which have long been broken up for their valuable materials. Painted panels for altars are most numerous in Spain, especially Catalonia, which is explained by the poverty of the country at this time, as well as the lack of Reformation iconoclasm. The 13th and 14th centuries in Italy were a great period of panel painting, mostly altarpieces or other religious works. However, it is estimated that of all the panel paintings produced there, 99.9 percent have been lost. The vast majority of Early Netherlandish paintings are on panel, and these include most of the earliest portraits, such as those by Jan van Eyck, and some other secular scenes. However, one of the earliest surviving oils on canvas is a French Madonna with angels of about 1410 in the Gemäldegalerie, Berlin, which is also very early for oil painting. In these works the frame and panel are sometimes a single piece of wood, as with Portrait of a Man (Self Portrait?) by van Eyck (National Gallery, London), where the frame was also painted, including an inscription done illusionistically to resemble carving. History: By the 15th century, with the increased wealth of Europe, and later the appearance of humanism and a changing attitude about the function of art and patronage, panel painting went in new directions. Secular art opened the way to the creation of chests, painted beds, birth trays and other furniture. Many such works are now detached and hung framed on walls in museums. Many double-sided wings of altarpieces have also been sawn into two one-sided panels. History: Canvas took over from panel in Italy by the first half of the 16th century, a change led by Mantegna and the artists of Venice (which made the finest canvas at this point, for sails). In the Netherlands the change took about a century longer, and panel paintings remained common, especially in Northern Europe, even after the cheaper and more portable canvas had become the main support medium. The young Rubens and many other painters preferred panel for the greater precision that could be achieved with a totally solid support, and many of his most important works also used it, even for paintings over four metres long in one dimension. His panels are of notoriously complicated construction, containing as many as seventeen pieces of wood (Het Steen, National Gallery, London). For smaller cabinet paintings, copper sheets (often old printmaking plates) were another rival support from the end of the 16th century, used by many artists including Adam Elsheimer. Many Dutch painters of the Golden Age used panel for their small works, including Rembrandt on occasion. By the 18th century it had become unusual to paint on panel, except for small works to be inset into furniture and the like; but, for example, the National Gallery in London has two Goya portraits on panel. History: Many other painting traditions also painted, and still paint, on wood, but the term is usually only used to refer to the Western tradition described above. Panel construction and preparation: The technique is known to us through Cennino Cennini's "The Craftsman's Handbook" (Il libro dell' arte), published in 1390, and other sources. It changed little over the centuries. It was a laborious and painstaking process: A carpenter would construct a solid wood piece the size of the panel needed.
Usually a radially cut piece was preferred (across rather than along the length of the tree, the opposite of most timber cuts), with the outer sapwood excluded. In Italy it was usually seasoned poplar, willow or linden. It would be planed and sanded and, if needed, joined with other pieces to obtain the desired size and shape. Panel construction and preparation: The wood would be coated with a mixture of animal-skin glues and resin and covered with linen (the mixture and linen combination was known as a "size"); this might be done by a specialist, or in the artist's studio. Once the size had dried, layer upon layer of gesso would be applied, each layer sanded down before the next was applied, sometimes as many as 15 layers, before a smooth hard surface emerged, not unlike ivory. After the 16th century this stage was not always carried out, or darker grounds were used. Painting techniques: Once the panel construction was complete, the design was laid out, usually in charcoal. The usual ancient painting technique was encaustic, used at Fayum and in the earliest surviving Byzantine icons, which are at Saint Catherine's Monastery. This uses heated wax as the medium for the pigments. It was replaced before the end of the first millennium by tempera, which uses an egg-yolk medium. Using small brushes dipped in a mixture of pigment and egg yolk, the paint was applied in very small, almost transparent brushstrokes. Thin layers of paint would be used to create volumetric forms. By the beginning of the 15th century, oil painting had been developed. This was a more forgiving medium and allowed the exceptional detail of Early Netherlandish art. It used a very painstaking multi-layered technique, where the painting, or a particular part of it, had to be left for a couple of days for one layer to dry before the next was applied. Conservation and scientific analysis: Wood panels, especially if kept with too little humidity, often warp and crack with age, and from the 19th century, when reliable techniques were developed, many have been transferred to canvas or modern board supports. Conservation and scientific analysis: Wood panel is now rather more useful to art historians than canvas, and in recent decades there has been great progress in extracting the information it holds. Many fakes have been discovered and mistaken datings corrected. Specialists can identify the tree species used, which varied according to the area where the painting was made. Carbon-dating techniques can give an approximate date range (typically of about 20 years), and dendrochronology sequences have been developed for the main source areas of timber for panels. Italian paintings used local or sometimes Dalmatian wood, most often poplar, but including chestnut, walnut, oak and other woods. The Netherlands ran short of local timber early in the 15th century, and most Early Netherlandish masterpieces are on Baltic oak, often Polish, cut north of Warsaw and shipped down the Vistula, across the Baltic to the Netherlands. Southern German painters often used pine, and mahogany imported into Europe was used by later painters, with examples by Rembrandt and Goya. Conservation and scientific analysis: In theory, dendrochronology gives an exact felling date, but in practice allowances have to be made for a seasoning period of several years, and a small panel may be from the centre of the tree, with no way of knowing how many rings outside the panel there were.
So dendrochronological conclusions tend to be expressed as a "terminus post quem", or earliest possible date, with a tentative estimate of an actual date that may be twenty or more years later. Conservation and scientific analysis: The so-called Panel Paintings Initiative is a multi-year collaboration between the Getty Conservation Institute, the Getty Foundation, and the J. Paul Getty Museum. The Panel Paintings Initiative is a response to the growing recognition that significant collections of paintings on wood panels may be at risk in coming decades due to the waning numbers of conservators and craftspeople with the highly specialized skills required for the conservation of these complex works of art. Types of wood: Artists would typically use wood native to the region. Albrecht Dürer (1471–1528), for example, painted on poplar when he was in Venice and on oak when in the Netherlands and southern Germany. Leonardo da Vinci (1452–1519) used oak for his paintings in France; Hans Baldung Grien (1484/5–1545) and Hans Holbein (1497/8–1543) used oak while working in southern Germany and England. In the Middle Ages, spruce and lime were used in the Upper Rhine and often in Bavaria. Outside of the Rhineland, softwood (such as pinewood) was mainly used. Of a group of twenty Norwegian altar frontals from the Gothic period (1250–1350), fourteen were made of fir, two of oak, and four of pine (Kaland 1982). Large altars made in Denmark during the fifteenth century used oak for the figures as well as for the painted wings. Lime was popular with Albrecht Altdorfer (c. 1480–1538), Baldung Grien, Christoph Amberger (d. 1562), Dürer, and Lucas Cranach the Elder (1472–1553). Cranach often used beech wood, an unusual choice. In Northern Europe, poplar is very rarely found, but walnut and chestnut are not uncommon. In the northeast and south, coniferous trees such as spruce, various types of fir, and pine have been used. Fir is known to have been used in the Upper and Middle Rhine, Augsburg, Nuremberg, and Saxony. Pinewood was used mainly in Tirol, and beech wood only in Saxony. However, in general, oak was the most common substrate used for panel making in the Low Countries, northern Germany, and the Rhineland around Cologne. In France, until the seventeenth century, most panels were made from oak, although a few made of walnut and poplar have been found. Types of wood: The oak favored as a support by the painters of the northern school was, however, not always of local origin. In the seventeenth century about four thousand full-grown oak trees were needed to build a medium-sized merchant ship; thus, imported wood was necessary. Oak coming from Königsberg as well as Gdańsk is often found among works by Flemish and Dutch artists from the 15th through the 17th centuries; the origin can be established by the patterns of growth rings. In the last decade of the seventeenth century, Wilhelmus Beurs, a Dutch writer on painting techniques, considered oak to be the most useful wooden substrate on which to paint. However, exceptions are seen rather early in the seventeenth century: sometimes walnut, pearwood, cedarwood, or Indian woods were used. Mahogany was already in use by a number of painters during the first decades of the seventeenth century and was used often in the Netherlands in the nineteenth century. Even so, when canvas or copper was not used, the main oeuvre of the northern school was painted on oak panels.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Snapforce CRM** Snapforce CRM: Snapforce CRM is a comprehensive customer relationship management (CRM) SaaS application, developed by Snapforce.com. Its primary use case is customer management and sales automation, although it can also be configured with telephony support. Additional software components include customer databases, customer interaction tracking, reporting, and workflow automation. Deployment options: Snapforce CRM can be configured to handle inbound and outbound calling; calls are logged to the prospect or customer record automatically in real time. Recognition: In May 2014 the company became the first CRM software provider to offer telephony services as a native feature. Snapforce CRM was recognized as one of ten "Top Players" in the Customer Relationship Management Market Report 2015.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MonoMouse** MonoMouse: MonoMouse is a handheld electronic magnifier manufactured by Bierley, designed to help people with a visual impairment to read printed text or images. It was originally developed by Ian Bierley in 2003 for use by his mother, who had glaucoma, and was later released as a commercial product supplied through opticians in 2005. The MonoMouse is shaped like an oversized computer mouse and connects to any television through either the SCART connector in Europe or the RCA connector in the rest of the world. It is an example of assistive technology that enables people with eye conditions such as macular degeneration to read printed text and see printed images.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lexicographically minimal string rotation** Lexicographically minimal string rotation: In computer science, the lexicographically minimal string rotation or lexicographically least circular substring is the problem of finding the rotation of a string possessing the lowest lexicographical order of all such rotations. For example, the lexicographically minimal rotation of "bbaaccaadd" would be "aaccaaddbb". It is possible for a string to have multiple lexicographically minimal rotations, but for most applications this does not matter as the rotations must be equivalent. Finding the lexicographically minimal rotation is useful as a way of normalizing strings. If the strings represent potentially isomorphic structures such as graphs, normalizing in this way allows for simple equality checking. Lexicographically minimal string rotation: A common implementation trick when dealing with circular strings is to concatenate the string to itself instead of having to perform modular arithmetic on the string indices. Algorithms: The Naive Algorithm The naive algorithm for finding the lexicographically minimal rotation of a string is to iterate through successive rotations while keeping track of the most lexicographically minimal rotation encountered. If the string is of length n, this algorithm runs in O(n²) time in the worst case. Booth's Algorithm An efficient algorithm was proposed by Booth (1980). Algorithms: The algorithm uses a modified preprocessing function from the Knuth-Morris-Pratt string search algorithm. The failure function for the string is computed as normal, but the string is rotated during the computation, so some indices must be computed more than once as they wrap around. Once all indices of the failure function have been successfully computed without the string rotating again, the minimal lexicographical rotation is known to be found and its starting index is returned. The correctness of the algorithm is somewhat difficult to understand, but it is easy to implement; a sketch is given at the end of this entry. Algorithms: Of interest is that removing all lines of code which modify the value of k results in the original Knuth-Morris-Pratt preprocessing function, as k (representing the rotation) will remain zero. Booth's algorithm runs in O(n) time, where n is the length of the string. The algorithm performs at most 3n comparisons in the worst case, and requires auxiliary memory of length n to hold the failure function table. Algorithms: Shiloach's Fast Canonization Algorithm Shiloach (1981) proposed an algorithm improving on Booth's result in terms of performance. It was observed that if there are q equivalent lexicographically minimal rotations of a string of length n, then the string must consist of q equal substrings of length d=n/q. The algorithm requires only n + d/2 comparisons and constant space in the worst case. Algorithms: The algorithm is divided into two phases. The first phase is a quick sieve which rules out indices that are obviously not starting locations for the lexicographically minimal rotation. The second phase then finds the lexicographically minimal rotation start index from the indices which remain. Duval's Lyndon Factorization Algorithm Duval (1983) proposed an efficient algorithm involving the factorization of the string into its component Lyndon words, which runs in linear time with a constant memory requirement. Variants: Shiloach (1979) proposed an algorithm to efficiently compare two circular strings for equality without a normalization requirement.
An additional application which arises from the algorithm is the fast generation of certain chemical structures without repetitions.
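To make the two approaches described above concrete, here is a short Python sketch of both the naive method and Booth's algorithm. The function names are illustrative; the second function follows the usual presentation of Booth's algorithm and returns the starting index of the least rotation. For simplicity this sketch allocates a failure table over the doubled string, rather than the length-n table mentioned above.

```python
def least_rotation_naive(s: str) -> int:
    """O(n^2): index of the lexicographically minimal rotation of s."""
    t = s + s  # concatenation trick avoids modular index arithmetic
    return min(range(len(s)), key=lambda i: t[i:i + len(s)])

def least_rotation_booth(s: str) -> int:
    """Booth's algorithm, O(n): index of the minimal rotation of s."""
    t = s + s
    f = [-1] * len(t)       # failure function, as in KMP preprocessing
    k = 0                   # start of the least rotation found so far
    for j in range(1, len(t)):
        c = t[j]
        i = f[j - k - 1]
        while i != -1 and c != t[k + i + 1]:
            if c < t[k + i + 1]:
                k = j - i - 1   # a smaller rotation begins here
            i = f[i]
        if c != t[k + i + 1]:   # here i == -1
            if c < t[k]:        # mismatch against the first character
                k = j
            f[j - k] = -1
        else:
            f[j - k] = i + 1
    return k

s = "bbaaccaadd"
i = least_rotation_booth(s)
assert s[i:] + s[:i] == "aaccaaddbb"  # matches the example above
assert least_rotation_naive(s) == i
```

As noted above, deleting the lines that modify k leaves an ordinary Knuth-Morris-Pratt preprocessing loop, since k then stays zero throughout.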
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Solenoid bolt** Solenoid bolt: A solenoid bolt is a type of electromechanical locking mechanism, characterized by the use of a solenoid to throw the bolt. Sophisticated solenoid bolt locks may use microprocessors to perform voltage regulation, reduce power consumption, and/or provide access control. Depending on the strength of the solenoid, some models can provide a holding force on the order of 1000 kg. A solenoid bolt can be designed either to fail open (the lock opens on power loss) or to fail closed (the device remains locked on power loss). Some models may be suitable for high-security sites.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stoner–Wohlfarth astroid** Stoner–Wohlfarth astroid: In magnetism the Stoner–Wohlfarth astroid curve is a curve that separates regions with two minima of the free energy density from those with only one energy minimum. It is a geometric representation of the Stoner–Wohlfarth model. This curve is of particular importance as discontinuous changes of the magnetization can take place when crossing it. One important property of the astroid is that tangents to the astroid represent magnetization directions with extremal energy, i.e. either local minima or local maxima. For a system with a uniaxial anisotropy the tangent(s) that are closest to the easy axis lead to stable solutions, i.e. minimal energy. History: The astroid solution was first proposed by John P. Slonczewski in an unpublished IBM research memorandum. It has been extended to single-domain magnets with more general two-dimensional magnetic anisotropy and three-dimensional anisotropy.
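The geometry described above can be made explicit with a short calculation, sketched here in the standard reduced units of the Stoner–Wohlfarth model. The notation is the conventional one rather than anything taken from the sources cited in this entry: θ is the angle of the magnetization from the easy axis, and h_x, h_y are the hard- and easy-axis components of the applied field in units of the anisotropy field H_K = 2K_u/(μ0 M_s).

```latex
% Reduced energy of a uniaxial single-domain magnet:
e(\theta) = \tfrac{1}{2}\sin^2\theta - h_x \sin\theta - h_y \cos\theta

% The astroid is where a minimum and a maximum of e merge, i.e. where the
% first and second derivatives vanish simultaneously:
e'(\theta)  = \sin\theta\cos\theta - h_x \cos\theta + h_y \sin\theta = 0
e''(\theta) = \cos 2\theta + h_x \sin\theta + h_y \cos\theta = 0

% Solving this pair for the field components gives the parametric form
h_x = \sin^3\theta, \qquad h_y = -\cos^3\theta,

% and eliminating \theta yields the astroid
h_x^{2/3} + h_y^{2/3} = 1 .
```

Fields inside this curve leave the energy with two minima; as the field crosses the curve, one minimum merges with a maximum and disappears, which is where the discontinuous changes of the magnetization mentioned above take place.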
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shopping mall** Shopping mall: A shopping mall (or simply mall) is a North American term for a large indoor shopping center, usually anchored by department stores. The term "mall" originally meant a pedestrian promenade with shops along it (that is, the term was used to refer to the walkway itself, which was merely bordered by such shops), but in the late 1960s, it began to be used as a generic term for the large enclosed shopping centers that were becoming commonplace at the time. In the U.K., such complexes are considered shopping centres (Commonwealth English: shopping centre), though "shopping centre" covers many more sizes and types of centers than the North American "mall". Other countries may follow U.S. usage (Philippines, India, and U.A.E.) while still others (Australia, etc.) follow U.K. usage. In Canadian English, and often in Australia and New Zealand, the term 'mall' may be used informally, but 'shopping centre' or merely 'centre' will feature in the name of the complex (such as Toronto Eaton Centre). The term 'mall' is less commonly a part of the name of the complex. Shopping mall: Many malls have declined considerably in North America, particularly in subprime locations, and some have closed and become so-called "dead malls". Successful exceptions have added entertainment and experiential features, added big-box stores as anchors, or converted to other specialized shopping center formats such as power centers, lifestyle centers, factory outlet centers, and festival marketplaces. In Canada, shopping centres have frequently been replaced with mixed-use high-rise communities. Types: The International Council of Shopping Centers, based in New York City, classifies two types of shopping centers as malls: regional malls and superregional malls. Regional mall A regional mall, per the International Council of Shopping Centers, is a shopping mall with 400,000 sq ft (37,000 m2) to 800,000 sq ft (74,000 m2) of gross leasable area and at least two anchor stores. Super-regional mall A super-regional mall, per the International Council of Shopping Centers, is a shopping mall with over 800,000 sq ft (74,000 m2) of gross leasable area, three or more anchors, a mass merchant, more variety, and fashion apparel, and serves as the dominant shopping venue for the region (25 miles or 40 km) in which it is located. Types: Not malls Not classified as malls are smaller formats such as strip malls and neighborhood shopping centers, and specialized formats such as power centers, festival marketplaces, and outlet centers. Conversely, in some countries, many shopping centers less than half or a quarter of the size of the U.S. minimum to be considered a mall, 400,000 sq ft (37,000 m2), have "mall" in their names – for example in Namibia or Zambia. Types: The world's largest malls with over 500,000 square metres (5,400,000 sq ft) of gross leasable area are in the Philippines, Thailand, and China – more than half again as large as previous contenders such as the Dubai Mall. List of types of shopping centers (including malls) The International Council of Shopping Centers classifies Asia-Pacific, European, U.S., and Canadian shopping centers into a series of defined types based on gross leasable area (GLA) or net leasable area (NLA); not all of the types apply to Europe. History: Forerunners to the shopping mall Shopping centers in general may have their origins in public markets and, in the Middle East, covered bazaars.
In 1798, the first covered shopping passage was built in Paris, the Passage du Caire. The Burlington Arcade in London was opened in 1819. The Arcade in Providence, Rhode Island, opened in 1828 and claims to be the first shopping arcade in the United States. Following on from the covered shopping arcades that first appeared in Western Europe, the Galleria Vittorio Emanuele II in Milan, which opened in 1877, was larger in scale than its predecessors, and inspired the use of the term galleria for many other shopping arcades and malls. In the mid-20th century, with the rise of the suburb and automobile culture in the United States, a new style of shopping center was created away from downtowns. Early shopping centers designed for the automobile include Market Square, Lake Forest, Illinois (1916), and Country Club Plaza, Kansas City, Missouri (1924). The suburban shopping center concept evolved further in the United States after World War II with larger open-air shopping centers anchored by major department stores, such as the 550,000-square-foot (51,000 m2) Broadway-Crenshaw Center in Los Angeles, built in 1947 and anchored by a five-story Broadway and a May Company California. History: Downtown pedestrian malls and use of the term mall In the late 1950s and into the 1960s, the term "shopping mall" was first used, but in the original sense of the word "mall", meaning a pedestrian promenade in the U.S., or in U.K. usage, a "shopping precinct". Early downtown pedestrianized malls included the Kalamazoo Mall (the first, in 1959), "Shoppers' See-Way" in Toledo, Lincoln Road Mall in Miami Beach, and Santa Monica Mall (1965). Although Bergen Mall opened in 1957 using the name "mall" and inspired other suburban shopping centers to rebrand themselves as malls, these types of properties were still referred to as "shopping centers" until the late 1960s. History: Enclosed malls The enclosed shopping center, which would eventually be known as the shopping mall, did not appear in the mainstream until the mid-1950s. One of the earliest examples was the Valley Fair Shopping Center in Appleton, Wisconsin, which opened on March 10, 1955. Valley Fair featured a number of modern features including central heating and cooling, a large outdoor parking area, semi-detached anchor stores, and restaurants. Later that year the world's first fully enclosed shopping mall was opened in Luleå, in northern Sweden (architect: Ralph Erskine) and was named Shopping; the region now claims the highest shopping center density in Europe. The idea of a regionally-sized, fully enclosed shopping complex was pioneered in 1956 by the Austrian-born architect and American immigrant Victor Gruen. This new generation of regional-size shopping centers began with the Gruen-designed Southdale Center, which opened in the Twin Cities suburb of Edina, Minnesota, United States in October 1956. For pioneering the soon-to-be enormously popular mall concept in this form, Gruen has been called the "most influential architect of the twentieth century" by Malcolm Gladwell. The first retail complex to be promoted as a "mall" was Paramus, New Jersey's Bergen Mall. The center opened with an open-air format on November 14, 1957, and was enclosed in 1973. Aside from Southdale Center, significant early enclosed shopping malls were Harundale Mall (1958) in Glen Burnie, Maryland, Big Town Mall (1959) in Mesquite, Texas, Chris-Town Mall (1961) in Phoenix, Arizona, and Randhurst Center (1962) in Mount Prospect, Illinois.
History: Other early malls moved retailing away from the dense, commercial downtowns into the largely residential suburbs. This formula (enclosed space with stores attached, away from downtown, and accessible only by automobile) became a popular way to build retail across the world. Gruen himself came to abhor this effect of his new design; he decried the creation of enormous "land wasting seas of parking" and the spread of suburban sprawl. Even though malls mostly appeared in suburban areas in the U.S., some U.S. cities facilitated the construction of enclosed malls downtown as an effort to revive city centers and allow them to compete effectively with suburban malls. Examples included Main Place Mall in Buffalo (1969) and The Gallery (1977, now Fashion District Philadelphia) in Philadelphia. Other cities created open-air pedestrian malls. History: In the United States, developers such as A. Alfred Taubman of Taubman Centers extended the concept further in 1980, with terrazzo tiles at the Mall at Short Hills in New Jersey, indoor fountains, and two levels allowing a shopper to make a circuit of all the stores. Taubman believed carpeting increased friction, slowing down customers, so it was removed. Fading daylight through glass panels was supplemented by gradually increased electric lighting, making it seem like the afternoon was lasting longer, which encouraged shoppers to linger. History: Decline of shopping malls In the United States, in the mid-1990s, malls were still being constructed at a rate of 140 a year. But in 2001, a PricewaterhouseCoopers study found that underperforming and vacant malls, known as "greyfield" and "dead mall" estates, were an emerging problem. In 2007, a year before the Great Recession, no new malls were built in America, for the first time in 50 years. City Creek Center Mall in Salt Lake City, which opened in March 2012, was the first to be built since the recession. Malls began to lose consumers to open-air power centers and lifestyle centers during the 1990s, as consumers preferred to park right in front of and walk directly into big-box stores with lower prices and without the overhead of traditional malls (i.e., long enclosed corridors). Another issue was that the growth-crazed American commercial real estate industry had simply built too many nice places to shop, far more than could be reasonably justified by the actual growth of the American population, retail sales, or any other economic indicator. The number of American shopping centers exploded from 4,500 in 1960 to 70,000 by 1986 to just under 108,000 by 2010. Thus, the number of dead malls increased significantly in the early 21st century. The economic health of malls across the United States has been in decline, as revealed by high vacancy rates. From 2006 to 2010, the percentage of malls considered by real estate experts to be "dying" (a vacancy rate of at least 40%), unhealthy (20–40%), or in trouble (10–20%) all increased greatly, and these high vacancy rates only partially decreased from 2010 to 2014. In 2014, nearly 3% of all malls in the United States were considered to be "dying" (40% or higher vacancy rates) and nearly one-fifth of all malls had vacancy rates considered "troubling" (10% or higher). Some real estate experts say the "fundamental problem" is a glut of malls in many parts of the country creating a market that is "extremely over-retailed".
By the time shopping mall operator Unibail-Rodamco-Westfield decided to exit the American market in 2022, the United States had an average of 24.5 square feet of retail space per capita (in contrast to 4.5 square feet per capita in Europe). In 2019, The Shops & Restaurants at Hudson Yards opened as an upscale mall in New York City with "a 'Fifth Avenue' mix of shops", with retailers such as H&M, Zara, and Sephora below them. It and American Dream, both of which opened in 2019, were the first two malls built in the United States since City Creek Center. Online shopping has also emerged as a major competitor to shopping malls. In the United States, online shopping has accounted for an increasing share of total retail sales. In 2013, roughly 200 out of 1,300 malls across the United States were going out of business. To combat this trend, developers have converted malls to other uses, including attractions such as parks, movie theaters, gyms, and even fishing lakes. In the United States, the 600,000-square-foot Highland Mall will be a campus for Austin Community College. In France, the So Ouest mall outside of Paris was designed to resemble elegant, Louis XV-style apartments and includes 17,000 square metres (180,000 sq ft) of green space. The Australian mall company Westfield launched an online mall (and later a mobile app) with 150 stores, 3,000 brands and over 1 million products. The COVID-19 pandemic also significantly impacted the retail industry. Government regulations temporarily closed malls, increased entrance controls, and imposed strict public sanitation requirements. Design: Vertical malls High land prices in populous cities have led to the concept of the "vertical mall", in which space allocated to retail is configured over a number of stories accessible by elevators or escalators (usually both) linking the different levels of the mall. The challenge of this type of mall is to overcome the natural tendency of shoppers to move horizontally and encourage shoppers to move upwards and downwards. The concept of a vertical mall was originally conceived in the late 1960s by the Mafco Company, former shopping center development division of Marshall Field & Co. The Water Tower Place skyscraper in Chicago, Illinois, was built in 1975 by Urban Retail Properties. It contains a hotel, luxury condominiums, and office space, and sits atop a block-long base containing an eight-level atrium-style retail mall that fronts on the Magnificent Mile. Vertical malls are common in densely populated conurbations in East and Southeast Asia. Hong Kong in particular has numerous examples, such as Times Square, Dragon Centre, Apm, Langham Place, ISQUARE, Hysan Place and The One. Design: A vertical mall may also be built where the geography prevents building outward or there are other restrictions on construction, such as historical buildings or significant archeology. The Darwin Shopping Centre and associated malls in Shrewsbury, UK, are built on the side of a steep hill, around the former town walls; consequently the shopping center is split over seven floors vertically – two locations horizontally – connected by elevators, escalators and bridge walkways. Some establishments incorporate such designs into their layout, such as Shrewsbury's former McDonald's, split into four stories with multiple mezzanines which featured medieval castle vaults – complete with arrowslits – in the basement dining rooms.
Components: Food court A common feature of shopping malls is a food court: this typically consists of a number of fast food vendors of various types, surrounding a shared seating area. Components: Department stores When the shopping mall format was developed by Victor Gruen in the mid-1950s, signing larger department stores was necessary for the financial stability of the projects, and to draw retail traffic that would result in visits to the smaller stores in the mall as well. These larger stores are termed anchor stores or draw tenants. In physical configuration, anchor stores are normally located as far from each other as possible to maximize the amount of traffic from one anchor to another. Regional differences: Mall versus shopping center/centre Shopping mall is a term used predominantly in North America; some other countries (India, U.A.E., etc.) follow U.S. usage, while others (Australia, etc.) follow U.K. usage. Regional differences: In the United States, Persian Gulf countries, and India, the term shopping mall is usually applied to enclosed retail structures (and is generally abbreviated to simply mall), while shopping center/centre usually refers to open-air retail complexes; both types of facilities usually have large parking lots, face major traffic arterials, and have few pedestrian connections to surrounding neighbourhoods. Outside of North America, "shopping precinct" and "shopping arcade" are also used. Regional differences: In Canada, "shopping centre" is often used officially (as in Square One Shopping Centre), but conversationally, "mall" is mostly used. Europe There are a reported 222 malls in Europe. In 2014, these malls had combined sales of US$12.47 billion. This represented a 10% bump in revenues from the prior year. U.K. and Ireland In the United Kingdom and Ireland, both open-air and enclosed centers are commonly referred to as shopping centres. Mall primarily refers to either a shopping mall – a place where a collection of shops all adjoin a pedestrian area – or an exclusively pedestrianized street that allows shoppers to walk without interference from vehicle traffic. Regional differences: The majority of British enclosed shopping centres, the equivalent of a U.S. mall, are located in city centres, usually found in old and historic shopping districts and surrounded by subsidiary open-air shopping streets. Large examples include West Quay in Southampton; Manchester Arndale; Bullring Birmingham; Liverpool One; Trinity Leeds; Buchanan Galleries in Glasgow; St James Quarter in Edinburgh; and Eldon Square in Newcastle upon Tyne. In addition to the inner-city shopping centres, large UK conurbations will also have large out-of-town "regional malls" such as the Metrocentre in Gateshead; Meadowhall Centre, Sheffield, serving South Yorkshire; the Trafford Centre in Greater Manchester; White Rose Centre in Leeds; the Merry Hill Centre near Dudley; and Bluewater in Kent. These centres were built in the 1980s and 1990s, but planning regulations prohibit the construction of any more. Out-of-town shopping developments in the UK are now focused on retail parks, which consist of groups of warehouse-style shops with individual entrances from outdoors. Planning policy prioritizes the development of existing town centres, although with patchy success. Westfield London (White City) is the largest shopping centre in Europe.
Regional differences: Russia In Russia, on the other hand, as of 2013 a large number of new malls had been built near major cities, notably the MEGA malls such as the Mega Belaya Dacha mall near Moscow. In large part they were financed by international investors and were popular with shoppers from the emerging middle class. Management and legal issues: Shopping property management firms A shopping property management firm is a company that specializes in owning and managing shopping malls. Most shopping property management firms own at least 20 malls. Some firms use a similar naming scheme for most of their malls; for example, Mills Corporation puts "Mills" in most of its mall names and SM Prime Holdings of the Philippines puts "SM" in all of its malls, as well as anchor stores such as The SM Store, SM Appliance Center, SM Hypermarket, SM Cinema, and SM Supermarket. In the UK, The Mall Fund changes the name of any centre it buys to "The Mall (location)", using its pink-M logo; when it sells a mall, the centre reverts to its own name and branding, such as the Ashley Centre in Epsom. Similarly, following its rebranding from Capital Shopping Centres, intu Properties renamed many of its centres to "intu (name/location)" (such as intu Lakeside); again, malls removed from the network revert to their own brand (see for instance The Glades in Bromley). Management and legal issues: Legal issues One controversial aspect of malls has been their effective displacement of traditional main streets or high streets. Some consumers prefer malls, with their parking garages, controlled environments, and private security guards, over central business districts (CBD) or downtowns, which frequently have limited parking, poor maintenance, outdoor weather, and limited police coverage. In response, a few jurisdictions, notably California, have expanded the right of freedom of speech to ensure that speakers will be able to reach consumers who prefer to shop, eat, and socialize within the boundaries of privately owned malls. The Supreme Court decision Pruneyard Shopping Center v. Robins, issued on 9 June 1980, affirmed the decision of the California Supreme Court in a case that arose out of a free speech dispute between the Pruneyard Shopping Center in Campbell, California, and several local high school students. World's largest malls: The world's largest shopping malls are ranked by their gross leasable area (GLA); the largest have a GLA of at least 250,000 m2 (2,700,000 sq ft). Combination retail and wholesale shopping malls Some wholesale market complexes also function as shopping malls in that they contain retail space which operates as stores in normal malls do, but also act as producer vendor outlets that can take large orders for export.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mortification (theology)** Mortification (theology): Mortification in Christian theology refers to the subjective experience of sanctification, the objective work of God between justification and glorification. It means the 'putting to death' of sin in a believer's life (Colossians 3:5). Reformed theologian J.I. Packer describes it in the following way: "The Christian is committed to a lifelong fight against the world, the flesh and the devil. Mortification is his assault on the second." Christians believe that this internal work against sin is empowered by the Holy Spirit and is therefore also part of regeneration. Historical Interpretations of Mortification: Roman Catholicism Roman Catholic theology frames mortification within its teaching on the spiritual life. According to the Catholic Encyclopedia, "What it slays is the disease of the soul, and by slaying this it restores and invigorates the soul's true life." Mortification is also practiced by some Catholic subgroups for the purpose of saving sinners from hell, as devotees of Our Lady of Fátima believe the Virgin Mary asked her child visionaries to do. Historical Interpretations of Mortification: Calvinism and Reformed theology John Calvin observed that if believers died with Jesus, then He would destroy our sinful earthly members and their lust, "so that they may no longer perform their functions." Mortification in Reformed theology has been generally understood to be the subjective experience of sanctification.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Necrotizing fasciitis** Necrotizing fasciitis: Necrotizing fasciitis (NF), also known as flesh-eating disease, is a bacterial infection that results in the death of parts of the body's soft tissue. It is a severe disease of sudden onset that spreads rapidly. Symptoms usually include red or purple skin in the affected area, severe pain, fever, and vomiting. The most commonly affected areas are the limbs and perineum. Typically, the infection enters the body through a break in the skin such as a cut or burn. Risk factors include poor immune function such as from diabetes or cancer, obesity, alcoholism, intravenous drug use, and peripheral artery disease. It does not typically spread between people. The disease is classified into four types, depending on the infecting organism. Between 55 and 80% of cases involve more than one type of bacteria. Methicillin-resistant Staphylococcus aureus (MRSA) is involved in up to a third of cases. Medical imaging is often helpful to confirm the diagnosis. Necrotizing fasciitis may be prevented with proper wound care and handwashing. It is usually treated with surgery to remove the infected tissue, and intravenous antibiotics. Often, a combination of antibiotics is used, such as penicillin G, clindamycin, IV vancomycin, and gentamicin. Delays in surgery are associated with a much higher risk of death. Despite high-quality treatment, the risk of death is between 25 and 35%. Necrotizing fasciitis occurs in about 0.4 people per 100,000 per year in the U.S., and about 1 per 100,000 in Western Europe. Both sexes are affected equally. It becomes more common among older people and is rare in children. It has been described at least since the time of Hippocrates. The term "necrotizing fasciitis" first came into use in 1952. Signs and symptoms: Symptoms may include fever, swelling, and complaints of excessive pain. The initial skin changes are similar to cellulitis or abscess, thus making the diagnosis at early stages difficult. Hardening of the skin and soft tissue and swelling beyond the area of skin changes are commonly present in those with early necrotizing changes. The redness and swelling usually blend into surrounding normal tissues. The overlying skin may appear shiny and tense. Other signs which are more suggestive of necrotizing changes (but present in later stages in 7 to 44% of cases) are: formation of bullae, bleeding into the skin which is present before skin necrosis (skin turning from red to purple and black due to thrombosis of blood vessels), presence of gas in tissues, and reduced or absent sensation over the skin (due to the necrosis of the underlying nerves). Rapid progression to shock despite antibiotic therapy is another indication of necrotizing fasciitis. Necrotizing changes affecting the groin are known as Fournier gangrene. However, those who are immunocompromised (because of cancer, corticosteroid use, radiotherapy, chemotherapy, HIV/AIDS, or prior organ or bone marrow transplantation) may not show typical symptoms. Immunocompromised persons also have twice the risk of death from necrotizing infections, so higher suspicion should be maintained in this group. Cause: Risk factors More than 70% of cases are recorded in people with at least one of these clinical situations: immunosuppression, diabetes, alcoholism/drug abuse/smoking, malignancies, and chronic systemic diseases.
For reasons that are unclear, it occasionally occurs in people with an apparently normal general condition. Necrotizing fasciitis can occur in any part of the body, but it is more commonly seen at the extremities, perineum, and genitals. Only a few such cases arise from the chest and abdomen. Trauma is the usual cause of the infection, such as from intravenous drug injection, insulin injection, animal and insect bites, catheter insertion over the skin, or a fistula connecting skin to the internal body organs. Skin infections such as abscesses and ulcers can also be complicated by necrotizing fasciitis. Spreading of infection through the blood has been suggested for those with streptococcal pharyngitis. For infection of the perineum and genitals (Fournier gangrene), trauma, surgery, urinary tract infection, stones, and Bartholin gland abscess are the usual causes. The risk of developing necrotizing fasciitis from a wound can be reduced by good wound care and handwashing. Cause: Bacteria Types of soft-tissue necrotizing infection can be divided into four classes according to the types of bacteria infecting the soft tissue. This classification system was first described by Giuliano and his colleagues in 1977. Type I infection: This is the most common type of infection, and accounts for 70 to 80% of cases. It is caused by a mixture of bacterial types, usually in abdominal or groin areas. This type of infection is usually caused by various species of Gram-positive cocci (Staphylococcus aureus, Streptococcus pyogenes, and enterococci), Gram-negative rods (Escherichia coli, Pseudomonas aeruginosa), and anaerobes (Bacteroides and Clostridium species). Populations of those affected are typically older with medical comorbidities such as diabetes mellitus, obesity, and immunodeficiency. Usually, trauma is not the cause of such infections. A previous history of abscess infection or gut perforation with bacterial translocation may be elicited. Clostridial infection accounts for 10% of type I infections. The Clostridium species involved are Clostridium perfringens, Clostridium septicum, and Clostridium sordellii, which typically cause gas gangrene (also known as myonecrosis). Clostridium perfringens produces two deadly toxins: alpha-toxin and theta-toxin. Alpha-toxin causes excessive platelet aggregation which blocks blood vessels and deprives the vital organs of oxygen supply. This creates an acidic, oxygen-deficient environment for the proliferation of bacteria. When alpha-toxin is absorbed by soft tissues, it can inhibit the migration of white blood cells from blood vessels into the soft tissue, thus impairing phagocyte function. The two toxins together can cause destruction of red blood cells in blood vessels, damage to the integrity of the blood vessels, and suppression of heart function. Clostridium sordellii can also produce two major toxins: all known virulent strains produce the essential virulence factor lethal toxin (TcsL), and a number also produce haemorrhagic toxin (TcsH). TcsL and TcsH are both members of the large clostridial cytotoxin (LCC) family. The key Clostridium septicum virulence factor is a pore-forming toxin called alpha-toxin, though it is unrelated to the Clostridium perfringens alpha-toxin. Myonecrotic infections caused by these clostridial species commonly occur in injecting heroin users. Those with clostridial infections typically have severe pain at the wound site, where the wound typically drains foul-smelling blood mixed with serum (serosanguineous discharge).
Shock can progress rapidly after initial injury or infection, and once the state of shock is established, the chance of dying exceeds 50%. Another bacterium associated with similarly rapid disease progression is group A streptococcus (mostly Streptococcus pyogenes). Meanwhile, other bacterial infections require two or more days to become symptomatic. Type II infection: This infection accounts for 20 to 30% of cases, mainly involving the extremities. It mainly involves Streptococcus pyogenes bacteria, alone or in combination with staphylococcal infections. Both types of bacteria can progress rapidly and manifest as toxic shock syndrome. Streptococcus species produce M protein, which acts as a superantigen, stimulating a massive systemic immune response which is not effective against the bacterial antigen, precipitating shock. Type II infection more commonly affects young, healthy adults with a history of injury. Type III infection: Vibrio vulnificus, a bacterium found in saltwater, is a rare cause of this infection, which occurs through a break in the skin. Disease progression is similar to type II but sometimes with few visible skin changes. Type IV infection: Some authors have described the type IV infection as fungal in nature. Diagnosis: Early diagnosis is difficult, as the disease often looks early on like a simple superficial skin infection. While a number of laboratory and imaging modalities can raise the suspicion for necrotizing fasciitis, none can rule it out. The gold standard for diagnosis is a surgical exploration in a setting of high suspicion. When in doubt, a small incision can be made into the affected tissue, and if a finger easily separates the tissue along the fascial plane, the diagnosis is confirmed and an extensive debridement should be performed. Diagnosis: Medical imaging Imaging has a limited role in the diagnosis of necrotizing fasciitis. The time delay in performing imaging is a major concern. Plain radiography may show subcutaneous emphysema (gas in the subcutaneous tissue), which is strongly suggestive of necrotizing changes, but it is not sensitive enough to detect all cases, because necrotizing skin infections caused by bacteria other than clostridia usually do not show subcutaneous emphysema. If the diagnosis is still in doubt, computed tomography (CT) scans and magnetic resonance imaging (MRI) are more sensitive modalities than plain radiography. However, neither CT nor MRI is sensitive enough to rule out necrotizing changes completely. A CT scan may show fascial thickening, edema, subcutaneous gas, and abscess formation. On MRI, when fluid collection with deep fascia involvement, thickening, or enhancement with contrast injection is seen, necrotizing fasciitis should be strongly suspected. Meanwhile, ultrasonography can show superficial abscess formation, but is not sensitive enough to diagnose necrotizing fasciitis. CT scan is able to detect about 80% of cases, while MRI may pick up slightly more. Diagnosis: Scoring system A white blood cell count greater than 15,000 cells/mm3 and a serum sodium level less than 135 mmol/L have a sensitivity of 90% in detecting necrotizing soft-tissue infection. There is also a 99% chance of ruling out necrotizing changes if the values show otherwise. Various scoring systems have been developed to determine the likelihood of necrotizing fasciitis, but a scoring system developed by Wong and colleagues in 2004 is the most commonly used.
It is the laboratory risk indicator for necrotizing fasciitis (LRINEC) score, which can be used to risk-stratify people having signs of severe cellulitis or abscess to determine the likelihood of necrotizing fasciitis being present. It uses six laboratory values: C-reactive protein, total white blood cell count, hemoglobin, sodium, creatinine, and blood glucose. A score of 6 or more indicates that necrotizing fasciitis should be seriously considered. The scoring criteria are: CRP (mg/L) ≥150 scores 4 points; a WBC count (×10³/mm³) below 15 scores 0 points, 15–25 scores 1 point, and above 25 scores 2 points; hemoglobin (g/dL) above 13.5 scores 0 points, 11–13.5 scores 1 point, and below 11 scores 2 points; sodium below 135 mmol/L scores 2 points; creatinine above 141 µmol/L scores 2 points; and glucose above 10 mmol/L scores 1 point. However, the scoring system has not been validated. The values would be falsely positive if other inflammatory conditions are present. Therefore, the values derived from this scoring system should be interpreted with caution. About 10% of patients with necrotizing fasciitis in the original study still had a LRINEC score <6. A validation study showed that patients with a LRINEC score ≥6 have a higher rate of both death and amputation. Prevention: Necrotizing fasciitis can be partly prevented by good wound care and handwashing. Treatment: Surgical debridement (cutting away affected tissue) is the mainstay of treatment for necrotizing fasciitis. Early medical treatment is often presumptive; thus, antibiotics should be started as soon as this condition is suspected. Tissue cultures (rather than wound swabs) are taken to determine appropriate antibiotic coverage, and antibiotics may be changed in light of results. Besides blood pressure control and hydration, support should be initiated for those with unstable vital signs and low urine output. Treatment: Surgery Aggressive wound debridement should be performed early, usually as soon as the diagnosis of necrotizing soft-tissue infection (NSTI) is made. Surgical incisions often extend beyond the areas of induration (the hardened tissue) to remove the damaged blood vessels that are responsible for the induration. However, cellulitic soft tissues are sometimes spared from debridement for later skin coverage of the wound. More than one operation may be used to remove additional necrotic tissue. In some cases when an extremity is affected by an NSTI, amputation may be the surgical treatment of choice. After wound debridement, adequate dressings should be applied to prevent exposure of bones, tendons, and cartilage, so that such structures do not dry out, and to promote wound healing. For necrotizing infection of the perineal area (Fournier's gangrene), wound debridement and wound care in this area can be difficult because of the excretory products that often render this area dirty and affect the wound-healing process. Therefore, regular dressing changes with a fecal management system can help to keep the wound at the perineal area clean. Sometimes, colostomy may be necessary to divert the excretory products to keep the wound at the perineal area clean. Treatment: Antibiotics Empiric antibiotics are usually initiated as soon as the diagnosis of NSTI has been made, and then later changed to culture-guided antibiotic therapy.
Prevention: Necrotizing fasciitis can be partly prevented by good wound care and handwashing. Treatment: Surgical debridement (cutting away affected tissue) is the mainstay of treatment for necrotizing fasciitis. Early medical treatment is often presumptive; thus, antibiotics should be started as soon as this condition is suspected. Tissue cultures (rather than wound swabs) are taken to determine appropriate antibiotic coverage, and antibiotics may be changed in light of results. Besides blood pressure control and hydration, supportive care should be initiated for those with unstable vital signs and low urine output. Treatment: Surgery Aggressive wound debridement should be performed early, usually as soon as the diagnosis of necrotizing soft tissue infection (NSTI) is made. Surgical incisions often extend beyond the areas of induration (the hardened tissue) to remove the damaged blood vessels that are responsible for the induration. However, cellulitic soft tissues are sometimes spared from debridement for later skin coverage of the wound. More than one operation may be used to remove additional necrotic tissue. In some cases when an extremity is affected by an NSTI, amputation may be the surgical treatment of choice. After wound debridement, adequate dressings should be applied to prevent exposure of bones, tendons, and cartilage, so that such structures do not dry out, and to promote wound healing. For necrotizing infection of the perineal area (Fournier's gangrene), wound debridement and wound care can be difficult because excretory products often render this area dirty and affect the wound-healing process. Therefore, regular dressing changes with a fecal management system can help to keep the wound at the perineal area clean; sometimes, colostomy may be necessary to divert the excretory products. Treatment: Antibiotics Empiric antibiotics are usually initiated as soon as the diagnosis of NSTI has been made, and then later changed to culture-guided antibiotic therapy. In the case of NSTIs, empiric antibiotics are broad-spectrum, covering gram-positive (including MRSA), gram-negative, and anaerobic bacteria. While studies have compared moxifloxacin (a fluoroquinolone) and amoxicillin-clavulanate (a penicillin) and evaluated the appropriate duration of treatment (varying from 7 to 21 days), no definitive conclusions on the efficacy of treatment, the ideal duration of treatment, or the adverse effects could be made, due to poor-quality evidence. Treatment: Add-on therapy Hyperbaric oxygen: While human and animal studies have shown that high oxygen tension in tissues helps to reduce edema, stimulate fibroblast growth, increase the killing ability of white blood cells, inhibit bacterial toxin release, and increase antibiotic efficacy, no high-quality trials exist to support or refute the use of hyperbaric oxygen therapy in patients with NSTIs. Treatment: Intravenous immunoglobulin (IVIG): No clear difference between using IVIG and placebo has been shown in the treatment of NSTIs, and one study showed serious adverse effects with IVIG use, including acute kidney injury, allergic reactions, aseptic meningitis syndrome, haemolytic anaemia, thrombi, and transmissible agents. AB103: One study assessed the efficacy of a new type of treatment that affects the immune response, called AB103. The study showed no difference in mortality with use of this therapy, but it is difficult to draw definitive conclusions due to low-quality evidence. Supportive therapy: Supportive therapy, often including intravenous hydration, wound care, anticoagulants to prevent thromboembolic events, and pain control, should always be provided to patients when appropriate. Epidemiology: Necrotizing fasciitis affects about 0.4 in every 100,000 people per year in the United States. About 1,000 cases of necrotizing fasciitis occur per year in the United States, but the rates have been increasing. This could be due to increasing awareness of the condition leading to increased reporting, or to increasing bacterial virulence or bacterial resistance against antibiotics. In some areas of the world, it is as common as one in every 100,000 people. Higher rates of necrotizing fasciitis are seen in those with obesity or diabetes, and those who are immunocompromised or alcoholic, or have peripheral artery disease. However, the disease may also occur in young, healthy adults with no underlying illnesses. NSAIDs may increase the rates of necrotizing infections by modifying the body's immune response, because NSAIDs inhibit the cyclooxygenase-1 and cyclooxygenase-2 enzymes, which are important in producing thromboxane and prostaglandin E2. Prostaglandin is responsible for fever, inflammation, and pain; the inhibition of prostaglandin E2 production reduces the inflammatory response and leukocyte adhesion, and thus reduces the immune response against bacterial invasion, giving rise to soft-tissue infection. History: In the fifth century BCE, Hippocrates described necrotizing soft tissue infection as a disease where those affected would have "erysipelas all over the body while the cause was only a trivial accident. Bones, flesh, and sinew (cord, tendon, or nerve) would fall off from the body and there were many deaths". The first English description of necrotizing soft-tissue infection was by British surgeon Leonard Gillespie and British physicians Gilbert Blaine and Thomas Trotter in the 18th century.
At that time, necrotizing soft-tissue infections were known variously as "phagedaenic ulcer" (ulceration that spreads and destroys surrounding tissue), "gangrenous phagedena", "gangrenous ulcer", "malignant ulcer", "putrid ulcer", "fulminating gangrene", "necrotizing erysipelas", "gangrenous erysipelas", "crepitant cellulitis", "gangrenous cellulitis", "Meleney cellulitis", "necrotizing synergistic cellulitis", "hemolytic streptococcal gangrene", "progressive bacterial synergistic gangrene", or "necrotizing abscess". Later, "hospital gangrene" became more commonly used. In 1871, Confederate States Army surgeon Joseph Jones reported 2,642 cases of hospital gangrene with a mortality rate of 46%. In 1883, Dr Jean-Alfred Fournier described the necrotizing infection of the perineum and scrotum, now called Fournier gangrene. The term "necrotizing fasciitis" was first coined by Wilson in 1952. Its definition has become broader, to include not only infection of fascia, but also other soft-tissue infection. Despite being disfavored by the medical community, the term "galloping gangrene" is frequently used in sensationalistic news media to refer to outbreaks of necrotizing fasciitis. Society and culture: Notable cases 1994: Lucien Bouchard, former premier of Québec, Canada, who was infected while leader of the federal official opposition Bloc Québécois party, lost a leg to the illness. Society and culture: 1994: A cluster of cases occurred in Gloucestershire, in the west of England. Of five confirmed and one probable infection, two died. The cases were believed to be connected. The first two had acquired the Streptococcus pyogenes bacteria during surgery; the remaining four were community-acquired. The cases generated much newspaper coverage, with lurid headlines such as "Flesh Eating Bug Ate My Face". Society and culture: 1997: Ken Kendrick, former agent and partial owner of the San Diego Padres and Arizona Diamondbacks, contracted the disease. He had seven surgeries in a little more than a week and later fully recovered. 2004: Don Rickles, American stand-up comedian, actor, and author, known especially for his insult comedy, contracted the disease in his left leg. He had six operations and later recovered. The condition confined him in his later years to performing comedy from a chair. 2004: Eric Allin Cornell, winner of the 2001 Nobel Prize in Physics, lost his left arm and shoulder to the disease. 2005: Alexandru Marin, an experimental particle physicist, professor at MIT, Boston University, and Harvard University, and researcher at CERN and JINR, died from the disease. 2006: Alan Coren, British writer and satirist, announced in his Christmas column for The Times that his long absence as a columnist had been caused by his contracting the disease while on holiday in France. 2009: R. W. Johnson, British journalist and historian, contracted the disease in March after injuring his foot while swimming. His leg was amputated above the knee. Society and culture: 2011: Jeff Hanneman, guitarist for the thrash metal band Slayer, contracted the disease. He died of liver failure two years later, on May 2, 2013, and it was speculated that his infection was the cause of death. However, on May 9, 2013, the official cause of death was announced as alcohol-related cirrhosis. Hanneman and his family had apparently been unaware of the extent of the condition until shortly before his death. Society and culture: 2011: Peter Watts, Canadian science fiction author, contracted the disease.
On his blog, Watts reported, "I'm told I was a few hours away from being dead ... If there was ever a disease fit for a science-fiction writer, flesh-eating disease has got to be it. This ... spread across my leg as fast as a Star Trek space disease in time-lapse." 2014: Daniel Gildenlöw, Swedish singer and songwriter for the band Pain of Salvation, spent several months in a hospital after being diagnosed with necrotizing fasciitis on his back in early 2014. After recovering, he wrote the album In the Passing Light of Day, a concept album about his experience during the hospitalization. Society and culture: 2014: Ricky Bartlett, CBS Radio morning host, had his left leg amputated. He contracted the disease during a trip to Wyoming and South Dakota, USA. He lost his right leg in 2022 to a bone disease associated with the flesh-eating disease he had contracted. 2015: Edgar Savisaar, Estonian politician, had his right leg amputated. He contracted the disease during a trip to Thailand. Society and culture: 2018: Alex Smith, an American football quarterback for the Washington Football Team of the National Football League (NFL), contracted the disease after being injured during a game. He suffered an open compound fracture in his lower leg, which became infected. Smith narrowly avoided amputation, and eventually returned to playing professional football in October 2020. Smith's injury and recovery are the subject of the ESPN documentary "E60 Presents: Project 11".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Casting television** Casting television: Casting television is a genre of competition television program in which the winner receives a prize such as a recording contract or a place in a further competition. Slovakia: Slovakia's competition is Superstars, and the prizes are a car, cash, and recording contracts. Stars such as Zdenka Predna came to prominence this way. United Kingdom: "Popstars" is perhaps the most famous of the lot; other shows with a similar focus are Stars in Their Eyes and Andrew Lloyd Webber's How Do You Solve a Problem Like Maria?. United States: Following directly on the UK template is the American Idol competition. Republic of Ireland: Ireland's version, called You're a Star, had a unique twist: the winner went on to the Eurovision Song Contest as the prize. However, after the disastrous showing of the McCall Twins, that format was abandoned. The most famous winner, and the most successful in Eurovision (though he did not win), was Mickey Joe Harte.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Haemophilus parainfluenzae** Haemophilus parainfluenzae: Haemophilus parainfluenzae is a species of Haemophilus. It is one of the HACEK organisms. H. parainfluenzae is an opportunistic pathogen that has been associated with endocarditis, bronchitis, otitis, conjunctivitis, pneumonia, abscesses, and genital tract infections. Natural genetic transformation: H. parainfluenzae biotypes I and II are capable of natural genetic transformation, a bacterial adaptation for DNA transfer. In order for a bacterium to bind, take up, and recombine exogenous DNA into its genome, it must enter a special physiological state termed natural competence. In H. parainfluenzae, competence is induced during the late stationary phase of growth. Natural DNA transformation may play a major role in the exchange of genetic information among H. parainfluenzae isolates. Treatment: Acute H. parainfluenzae infections must be treated with antibiotics. Beta-lactam agents such as amoxicillin and ampicillin are effective against H. parainfluenzae. The duration of antibiotic therapy depends on the severity of the infection. In 40% of infective endocarditis cases caused by H. parainfluenzae, the best treatment is valve replacement.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lamborghini Athon** Lamborghini Athon: The Lamborghini Athon is a concept car designed by Bertone for Lamborghini. Performance capabilities and features: The Lamborghini Athon is a fully functional, driveable concept car. Under its hood sits a 3.0 L DOHC V8 engine from the Lamborghini Silhouette, with two valves per cylinder, producing a maximum of 260 hp (194 kW) at 7,500 rpm and 237 lb⋅ft (321 N⋅m) of torque, with a compression ratio of ten to one. The transmission is an all-synchromesh five-speed gearbox with a single-plate, hydraulically assisted clutch and an axle ratio of 14/35. The Bertone SpA design includes an integral chassis and steel body. The suspension is independent, with coil springs and telescopic shock absorbers. The pneumatically actuated brakes are Girling ventilated discs, and the cast magnesium wheels are by Campagnolo. The front tyres are Michelin 195/50 VR 15, and the rears are 275/40 VR 15. The Lamborghini Athon weighs 2,390 lb and has an 80-litre fuel tank. In terms of performance, the Lamborghini Athon is able to reach a top speed of 170 mph (273.6 km/h) and can go from 0 to 60 mph (97 km/h) in 7.3 seconds. The RM Sotheby's company auctioned the Lamborghini Athon at the Concorso d'Eleganza Villa d'Este on May 21, 2011. It sold for US$487,000, and its present-day estimated value, according to RM Auctions, is between US$213,000 and US$312,000. The Lamborghini Athon as a concept car: The Bertone company, a private company based in Italy, created the Lamborghini Athon to show its everlasting support for the Lamborghini company, according to the Turin coachbuilder's press release. The Lamborghini Athon was given its name because the car is a spider made for fair weather; the name references the Egyptian cult of the sun. The Lamborghini Athon as a concept car: Design aspects Marc Deschamps, a Frenchman, led the design process for the Lamborghini Athon, his first concept car for the Bertone studio. He was chosen to lead the design after Marcello Gandini left the position of design coordinator at Bertone in 1979. The car was based on the silhouette sport-type aesthetic and resembled some of the looks of the Lamborghini Urraco. Marc Deschamps honored the prior design of Bertone's concept cars; he specifically made the Lamborghini Athon much like the concept cars Bertone created in the 1970s. He included "sculpted geometric volumes" defined by clear edges and cut lines. Marc Deschamps also did not follow what is universally known as the traditional spider design. The Lamborghini Athon, a proclaimed spider, has its cabin located in a forward position, as opposed to the traditional mid-set cabin of a normal spider. Another detail that sets the Athon apart from the original aesthetics of a spider is the height and position of the rear deck compared to those of the sloping hood. This design concept would later be used when the Bertone company created the Jalpa Speedster. The design of the Lamborghini Athon also influenced media and movie productions: the Athon was referenced when making props for the films Tron, Total Recall, and RoboCop. The Lamborghini Athon as a concept car: Signature Marc Deschamps Athon design Marc Deschamps, also inspired by Nuccio Bertone, added a few more unique features to the car's body.
For example, Marc Deschamps created the doors so they would have a noticeable gap between the doors and the door sills. He also designed the tail lights with very thin grooves, to ensure they did not interfere with the solid rear end of the car. Also unique to the car is the design of the steering wheel and touch-screen panels. The steering wheel was designed with a single spoke, and to its left a mounted pod held the secondary controls. The touch-screen panels were equipped with electronic readouts. Vegalie, an Italian supplier, created the instrument design of the Lamborghini Athon, making the windshield wipers, turn signals, and indicator switches, which are within close reach of the steering wheel. The Lamborghini Athon's design honors Fillipo Perini and his devout love of the Lamborghini Silhouette's aesthetic appearance; his impact as a designer for Lamborghini is seen in the Athon's sloping front hood. The Lamborghini Athon passed to the Bertone company as the Lamborghini company was in financial difficulty and in the process of liquidation. The Lamborghini Athon was retired to the Bertone museum in Rubiana, Italy, directly after it was showcased at the Turin Auto Show. Bertone occasionally removed the car from its museum and displayed it to the public at a few select shows. Although some of its mechanical components have had minor repairs, the Lamborghini Athon was never restored, and so it is offered in its original condition. Impact on Lamborghini as a company: The Athon was created during Lamborghini's financial crisis, which threatened to end with the company's liquidation. As a result, the Athon's greatest impact on the company arguably came when Bertone put it in its museum: the press associated with this move brought more attention to the Athon and to Lamborghini as a company.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Retinal waves** Retinal waves: Retinal waves are spontaneous bursts of action potentials that propagate in a wave-like fashion across the developing retina. These waves occur before rod and cone maturation and before vision can occur. The signals from retinal waves drive activity in the dorsal lateral geniculate nucleus (dLGN) and the primary visual cortex. The waves are thought to propagate across neighboring cells in random directions determined by periods of refractoriness that follow the initial depolarization. Retinal waves are thought to have properties that define early connectivity of circuits and synapses between cells in the retina. There is still much debate about the exact role of retinal waves: some contend that the waves are instructive in the formation of retinogeniculate pathways, while others argue that the activity is necessary, but not instructive, in the formation of retinogeniculate pathways. Discovery: One of the first scientists to theorize the existence of spontaneous cascades of electrical activity during retinal development was computational neurobiologist David J. Willshaw. He proposed that adjacent cells generate electrical activity in a wave-like formation through layers of interconnected presynaptic and postsynaptic cells. Activity propagating through a close span of pre- and postsynaptic cells is thought to result in strong electrical activity, in comparison to pre- and postsynaptic cells that are farther apart, which results in weaker activity. Willshaw thought this difference in firing strength and the location of cells was responsible for determining the activity's boundaries. The lateral movement of firing from neighboring cell to neighboring cell, starting in one random area of cells and moving throughout both the pre- and postsynaptic layers, is thought to be responsible for the formation of the retinotopic map. To simulate the cascade of electrical activity, Willshaw wrote a computer program to demonstrate the movement of electrical activity between pre- and postsynaptic cell layers. What Willshaw called "spontaneous patterned electrical activity" is today referred to as "retinal waves." From this purely theoretical concept, Italian scientists Lucia Galli and Lamberto Maffei used animal models to observe electrical activity in ganglion cells of the retina. Before Galli and Maffei, retinal ganglion cell activity had never been recorded during prenatal development. To study ganglion activity, Galli and Maffei used premature rat retinas, between embryonic days 17 and 21, to record electrical activity. Several isolated, single cells were used for this study. The recordings showed that the activity originated in the ganglion cells. Galli and Maffei speculated that the electrical activity seen in the retinal ganglion cells may be responsible for the formation of retinal synaptic connections and for the projections of retinal ganglion cells to the superior colliculus and lateral geniculate nucleus (LGN). As the idea of retinal waves became established, neurobiologist Carla Shatz used calcium imaging and microelectrode recording to visualize the movement of action potentials in a wave-like formation. For more information on calcium imaging and microelectrode recording, see the section below. The calcium imaging showed ganglion cells initiating the formation of retinal waves, along with adjacent amacrine cells, which take part in the movement of the electrical activity.
Microelectrode recordings were also thought to show LGN neurons being driven by the wave-like formation of electrical activity across neighboring retinal ganglion cells. From these results, it was suggested that the waves of electrical activity were responsible for driving the pattern of spatiotemporal activity and also played a role in the formation of the visual system during prenatal development. Rachel Wong is another researcher involved in the study of retinal waves. Wong speculated that electrical activity within the retina is involved in the organization of retinal projections during prenatal development; more specifically, the electrical activity may be responsible for the segregation and organization of the dLGN. Wong also speculated that specific parts of the visual system, such as the ocular dominance columns, require some form of electrical activity in order to develop completely. She also believed that deciphering the signals encoded by retinal waves may allow scientists to better understand how retinal waves play a role in retinal development. Some of the most recent research attempts to better understand the encoded signals of retinal waves during development. According to research conducted by Evelyne Sernagor, retinal waves are not just necessary for their spontaneous electrical activity but are also responsible for encoding information used in the formation of spatiotemporal patterns, allowing retinal pathways to become more refined. Using turtles to test this concept, Sernagor used calcium imaging to look at the change in retinal waves during various stages of retinal development. From the study, at the very first stages of development, retinal waves fire quickly and repeatedly, causing what is thought to be a large wave of action potentials across the retina. However, as the turtle nears completion of development, the retinal waves gradually stop spreading and instead become stationary patches of activity in clumps of retinal ganglion cells. This is thought to be a result of GABA changing from excitatory to inhibitory during continued retinal development. Whether the change in retinal wave formation during development is unique to turtles is still largely unknown. Observation of waves in other systems: Spontaneous generation and propagation of waves is seen elsewhere in developing circuits. Similar synchronized spontaneous activity early in development has been seen in neurons of the hippocampus, spinal cord, and auditory nuclei. Patterned activity shaping neuronal connections, and the control of synaptic efficiency in multiple systems including the retina, are important for understanding the interaction between presynaptic and postsynaptic cells that creates the precise connections essential to the function of the nervous system. Development: During development, synaptic communication between amacrine cells, other retinal interneurons, and ganglion cells is important, as it acts as a substrate for retinal waves. Development: There are three stages of development that characterize retinal wave activity in mammals.
Before birth, the waves are mediated by non-synaptic currents; from birth until about 10 days after birth, they are mediated by the neurotransmitter acetylcholine acting on nicotinic acetylcholine receptors; and during the third period, from 10 days after birth to 2 weeks, they are mediated by ionotropic glutamate receptors. Chemical synapses during the cholinergic wave period involve starburst amacrine cells (SACs) releasing acetylcholine onto other SACs, which then propagate waves. During this period, cholinergic wave production exceeds wave production via gap junctions, whose signals are much reduced. This signaling happens before bipolar cells form connections in the inner plexiform layer. SACs are thought to be the source of retinal waves because spontaneous depolarizations have been observed in them without synaptic excitation. Cholinergic wave activity eventually dies out, and the release of glutamate from bipolar cells generates waves. Bipolar cells differentiate later than amacrine and ganglion cells, which could be the cause of this change in wave behavior. Development: The change from cholinergic mediation to glutamatergic mediation occurs when bipolar cells make their first synaptic connections with ganglion cells. Glutamate, the neurotransmitter contained in bipolar cells, generates spontaneous activity in ganglion cells. Waves are still present after bipolar cells establish synaptic connections with amacrine and ganglion cells. Additional activity involved in retinal waves includes the following. In certain species, GABA appears to play a role in the frequency and duration of the bursts in ganglion cells. The interactions between cells vary in different test subjects and at different maturity levels, especially the complex interactions mediated by amacrine cells. Activity propagated via gap junctions has not been observed in all test subjects; for example, research has shown that ferret retinal ganglion cells are not coupled. Other studies have shown that extracellular excitatory agents such as potassium could be instrumental in wave propagation. Research suggests that synaptic networks of amacrine and ganglion cells are necessary for the production of waves. Broadly put, waves are produced and continue over a relatively long developmental period, during which new cellular components of the retina and new synapses are added. Variation in the mechanisms of retinal waves accounts for diversity in the connections between cells and the maturation of processes in the retina. Activity pattern of waves: Waves are generated at random but are limited spatially by a refractory period in cells after bursts of action potentials have been produced. After a wave has been propagated in one place, another wave cannot be propagated in the same place until the refractory period has passed. Wave-induced refractory areas last about 40 to 60 seconds. Research suggests that every region of the retina has an equal probability of generating and propagating a wave. The refractory period also determines the velocity (distance between wave fronts per unit of time) and periodicity (average time interval between wave-induced calcium transients or depolarizations recorded in a particular neuron in the ganglion cell layer) of the waves. Activity pattern of waves: The density of refractory cells corresponds to how fast retinal waves propagate; for instance, if the number or density of refractory cells is low, the velocity of propagation will be high.
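These propagation rules (random local initiation, spread to non-refractory neighbors, and a refractory period that shapes wave boundaries, velocity, and periodicity) can be illustrated with a toy cellular-automaton sketch. The grid size, refractory duration, and spark probability below are illustrative assumptions, not measured biological values, and the model is purely conceptual:

```python
import random

# Toy cellular automaton illustrating retinal-wave-like propagation.
# Cell states: 0 = resting, 1 = firing, negative = refractory countdown.
SIZE, REFRACTORY, SPARK_P = 30, 50, 0.0005  # hypothetical parameters

grid = [[0] * SIZE for _ in range(SIZE)]

def step(grid):
    new = [row[:] for row in grid]
    for y in range(SIZE):
        for x in range(SIZE):
            cell = grid[y][x]
            if cell == 1:                 # firing -> enter refractory period
                new[y][x] = -REFRACTORY
            elif cell < 0:                # refractory countdown back to rest
                new[y][x] = cell + 1
            else:                         # resting: recruited by a neighbor, or sparks
                neighbor_firing = any(
                    grid[y + dy][x + dx] == 1
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy or dx) and 0 <= y + dy < SIZE and 0 <= x + dx < SIZE
                )
                if neighbor_firing or random.random() < SPARK_P:
                    new[y][x] = 1
    return new

for t in range(200):
    grid = step(grid)
    if t % 50 == 0:
        print(t, sum(row.count(1) for row in grid), "cells firing")
```

In this sketch, waves sweep outward from random spark sites but cannot re-enter recently active (refractory) regions, so each wave has a spatially limited domain, echoing the behavior described above.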
Experimental procedures: Visualization of waves Two primary methods of visualizing retinal waves are calcium imaging and the multielectrode array (MEA). Calcium imaging allows analysis of wave patterns over a larger area of the retina than multielectrode recording. Such imaging has allowed researchers to investigate the spatiotemporal properties of waves as well as wave mechanism and function in development. Experimental procedures: Disrupting waves There are three main techniques currently used to disrupt retinal waves: intraocular injection of pharmacological substances that alter wave patterns, use of immunotoxins that eliminate certain classes of amacrine cells, or use of knockout mouse lines that have altered spontaneous firing patterns. Several pharmacological agents can be used to disrupt retinal activity. Tetrodotoxin (TTX) can be injected near the optic tract to block incoming retinal activity in addition to the outgoing activity of lateral geniculate neurons. Intraocular injections of epibatidine, a cholinergic agonist, can be used to block spontaneous firing in half of all retinal ganglion cells and cause uncorrelated firing in the remaining half. Effects of the pharmacological agents on retinal ganglion cell activity are observed using either MEA or calcium imaging. Experimental procedures: Immunotoxins can be used to target starburst amacrine cells, the retinal interneurons responsible for cholinergic retinal waves. The third method is to use knockout mice with altered spontaneous firing patterns. The most common mouse line for this method is the neuronal nicotinic acetylcholine receptor beta-2 subunit knockout (β2-nAChR-KO). β2-nAChR-KO mice have been observed to have reduced eye-specific retinotopic refinement, similar to epibatidine injection, as well as no correlated waves, as observed with calcium imaging and MEA recording. Controversial role in neuronal development: There is still much controversy about whether retinal waves play an 'instructive' or a 'permissive' role in the formation of eye-specific projections in the retinogeniculate pathway. Injection of pharmacological agents prevents the formation of eye-specific retinogeniculate inputs, which indicates that retinal waves play some role in the formation. β2-nAChR-KO mice have been found to have altered patterns of spontaneous firing. It is important to note that while experiments done in knock-out lines to date have helped to explain some things about retinal waves, only experiments done in vivo, at normal body temperature and in a normal chemical environment, can truly determine the pattern of firing in the knock-out animals. Controversial role in neuronal development: Instructive argument Retinal wave activity has been found to coincide with the period in which eye-specific retinogeniculate projections are formed. This temporal overlap would be necessary for a causal relationship. TTX injections in fetal cats prevented the formation of eye-specific retinogeniculate projections, which indicates that neuronal activity is necessary for the formation of eye-specific layers. After treatment with epibatidine, the lack of correlated firing in the remaining half of retinal ganglion cells, despite their robust firing, together with the lack of eye-specific layer formation, can be taken as evidence that the waves play an instructive role.
Calcium imaging observation following immunotoxin use showed that some correlated firing still remained, whereas coupled voltage-clamp recording showed a significant reduction in correlated firing. The remaining correlated firing could explain the formation of eye-specific retinogeniculate projections that was found. Using calcium imaging and MEA recording, these cells have been shown to have no correlated firing. Instead, reduced firing rates have been observed, and depolarization in one cell seemed to inhibit surrounding cells. The altered firing pattern of the β2-nAChR-KO mice is also controversial, as there has been some evidence that correlated firing still occurs in the knock-out mice, as detailed in the next section. Controversial role in neuronal development: Permissive argument Retinal waves have been found while eye-specific retinogeniculate pathways are formed; however, it is important to note that in all species studied to date, retinal waves begin prior to, and continue after, these eye-specific pathways are formed. It is also noted that some species in which retinal waves have been documented have projections that are crossed. This suggests that retinal waves can be present and yet not play an instructive role in eye-specific inputs. There are several issues to be considered when looking at data from the use of pharmacological substances to block retinal activity. First, the long-term effects of treatment with TTX are unknown, as it is not yet possible to monitor retinal activity for a long duration in an intact animal. The finding that long-term injection of TTX did not inhibit, and instead merely delayed, eye-specific layer formation could then be explained by the reduced effects of TTX on retinal activity over a longer duration. Whether blocking all retinal activity prevents eye-specific projection formation thus remains to be determined. Furthermore, since immunotoxin treatment to kill starburst amacrine cells shows no difference in the formation of eye-specific retinogeniculate projections while treatment with epibatidine does, it could suggest that some sort of retinal activity is essential for eye-specific layer formation, but not retinal waves specifically. One study showed that β2-nAChR-KO mice did still have robust retinal wave activity, contrary to previous reports; however, it found that the retinal waves were propagated via gap junctions in the knock-out line, instead of the cholinergic transmission that wild-type mice display.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Liquid breathing** Liquid breathing: Liquid breathing is a form of respiration in which a normally air-breathing organism breathes an oxygen-rich liquid (such as a perfluorocarbon), rather than breathing air, by selecting a liquid that can hold a large amount of oxygen and is capable of CO2 gas exchange. This requires certain physical properties, such as respiratory gas solubility, density, viscosity, vapor pressure, and lipid solubility, which some perfluorochemicals (PFCs) have. Thus, it is critical to choose the appropriate PFC for a specific biomedical application, such as liquid ventilation, drug delivery or blood substitutes. The physical properties of PFC liquids vary substantially; however, the one common property is their high solubility for respiratory gases. In fact, these liquids carry more oxygen and carbon dioxide than blood. In theory, liquid breathing could assist in the treatment of patients with severe pulmonary or cardiac trauma, especially in pediatric cases. Liquid breathing has also been proposed for use in deep diving and space travel. Despite some recent advances in liquid ventilation, a standard mode of application has not yet been established. Approaches: Because liquid breathing is still a highly experimental technique, there are several proposed approaches. Approaches: Total liquid ventilation Although total liquid ventilation (TLV) with completely liquid-filled lungs can be beneficial, the complex liquid-filled tube system required is a disadvantage compared to gas ventilation: the system must incorporate a membrane oxygenator, a heater, and pumps to deliver tidal-volume aliquots of conditioned perfluorocarbon (PFC) to, and remove them from, the lungs. One research group led by Thomas H. Shaffer has maintained that with the use of microprocessors and new technology, it is possible to maintain better control of respiratory variables, such as liquid functional residual capacity and tidal volume, during TLV than with gas ventilation. Consequently, total liquid ventilation necessitates a dedicated liquid ventilator similar to a medical ventilator, except that it uses a breathable liquid. Many prototypes are used for animal experimentation, but experts recommend continued development of a liquid ventilator toward clinical applications. Approaches: A specific preclinical liquid ventilator (Inolivent) is currently under joint development in Canada and France. The main application of this liquid ventilator is the ultra-fast induction of therapeutic hypothermia after cardiac arrest, which has been demonstrated to be more protective than slower cooling methods after experimental cardiac arrest. Approaches: Partial liquid ventilation In contrast, partial liquid ventilation (PLV) is a technique in which a PFC is instilled into the lung to a volume approximating functional residual capacity (approximately 40% of total lung capacity). Conventional mechanical ventilation delivers tidal-volume breaths on top of it. This mode of liquid ventilation currently seems technologically more feasible than total liquid ventilation, because PLV could utilise technology currently in place in many neonatal intensive-care units (NICUs) worldwide. Approaches: The influence of PLV on oxygenation, carbon dioxide removal and lung mechanics has been investigated in several animal studies using different models of lung injury.
Clinical applications of PLV have been reported in patients with acute respiratory distress syndrome (ARDS), meconium aspiration syndrome, congenital diaphragmatic hernia and respiratory distress syndrome (RDS) of neonates. In order to conduct PLV correctly and effectively, it is essential to properly dose a patient to a specific lung volume (10–15 ml/kg) to recruit alveolar volume, and to redose the lung with PFC liquid (1–2 ml/kg/h) to oppose PFC evaporation from the lung. If PFC liquid is not maintained in the lung, PLV cannot effectively protect the lung from the biophysical forces associated with the gas ventilator. Approaches: New application modes for PFC have been developed. Partial liquid ventilation (PLV) involves filling the lungs with a liquid. This liquid is a perfluorocarbon, such as perflubron (brand name Liquivent). The liquid has some unique properties. It has a very low surface tension, similar to the surfactant substances produced in the lungs to prevent the alveoli from collapsing and sticking together during exhalation. It also has a high density, oxygen readily diffuses through it, and it may have some anti-inflammatory properties. In PLV, the lungs are filled with the liquid, and the patient is then ventilated with a conventional ventilator using a protective lung-ventilation strategy. The hope is that the liquid will help transport oxygen to parts of the lung that are flooded and filled with debris, help remove this debris, and open up more alveoli, improving lung function. The study of PLV involves comparison to a protocolized ventilator strategy designed to minimize lung damage. Approaches: PFC vapor Vaporization of perfluorohexane, with two anesthetic vaporizers calibrated for perfluorohexane, has been shown to improve gas exchange in oleic acid-induced lung injury in sheep. Predominantly, PFCs with high vapor pressure are suitable for vaporization. Aerosol-PFC With aerosolized perfluorooctane, significant improvement of oxygenation and pulmonary mechanics was shown in adult sheep with oleic acid-induced lung injury. In surfactant-depleted piglets, persistent improvement of gas exchange and lung mechanics was demonstrated with aerosol-PFC. The aerosol device is of decisive importance for the efficacy of PFC aerosolization, as aerosolization of PF5080 (a less purified FC77) has been shown to be ineffective using a different aerosol device in surfactant-depleted rabbits. Both partial liquid ventilation and aerosol-PFC reduced the pulmonary inflammatory response. Human usage: Medical treatment The most promising area for the use of liquid ventilation is the field of pediatric medicine. The first medical use of liquid breathing was the treatment of premature babies and adults with acute respiratory distress syndrome (ARDS) in the 1990s. Liquid breathing was used in clinical trials after the development by Alliance Pharmaceuticals of the fluorochemical perfluorooctyl bromide, or perflubron for short. Current methods of positive-pressure ventilation can contribute to the development of lung disease in pre-term neonates, leading to diseases such as bronchopulmonary dysplasia. Liquid ventilation removes many of the high pressure gradients responsible for this damage.
Furthermore, perfluorocarbons have been demonstrated to reduce lung inflammation, to improve ventilation-perfusion matching, and to provide a novel route for the pulmonary administration of drugs. In order to explore drug delivery techniques that would be useful for both partial and total liquid ventilation, more recent studies have focused on PFC drug delivery using a nanocrystal suspension. The first image is a computer model of a PFC liquid (perflubron) combined with gentamicin molecules. Human usage: The second image shows experimental results comparing both plasma and tissue levels of gentamicin after an intratracheal (IT) and an intravenous (IV) dose of 5 mg/kg in a newborn lamb during gas ventilation. Note that the plasma levels of the IV dose greatly exceed those of the IT dose over the 4-hour study period, whereas the lung tissue levels of gentamicin delivered by intratracheal (IT) suspension uniformly exceed those of the intravenous (IV) delivery approach after 4 hours. Thus, the IT approach allows more effective delivery of the drug to the target organ while maintaining a safer systemic level. Both images represent the in-vivo time course over 4 hours. Numerous studies have now demonstrated the effectiveness of PFC liquids as a delivery vehicle to the lungs. Clinical trials with premature infants and adults have been conducted. Since the safety of the procedure and its effectiveness were apparent from an early stage, the US Food and Drug Administration (FDA) gave the product "fast track" status (meaning an accelerated review of the product, designed to get it to the public as quickly as is safely possible) due to its life-saving potential. Clinical trials showed that using perflubron with ordinary ventilators improved outcomes as much as using high-frequency oscillating ventilation (HFOV). But because perflubron was not better than HFOV, the FDA did not approve perflubron, and Alliance is no longer pursuing the partial liquid ventilation application. Whether perflubron would improve outcomes when used with HFOV, or has fewer long-term consequences than HFOV, remains an open question. Human usage: In 1996 Mike Darwin and Steven B. Harris proposed using cold liquid ventilation with perfluorocarbon to quickly lower the body temperature of victims of cardiac arrest and other brain trauma, to allow the brain to better recover. The technology came to be called gas/liquid ventilation (GLV), and was shown to be able to achieve a cooling rate of 0.5 °C per minute in large animals. It has not yet been tried in humans. Human usage: Most recently, hypothermic brain protection has been associated with rapid brain cooling. In this regard, a new therapeutic approach is the use of an intranasal perfluorochemical spray for preferential brain cooling. The nasopharyngeal (NP) approach is unique for brain cooling due to its anatomic proximity to the cerebral circulation and arteries. Based on preclinical studies in adult sheep, it was shown that, independent of region, brain cooling was faster with NP-perfluorochemical cooling than with conventional whole-body cooling using cooling blankets. To date, there have been four human studies, including a completed randomized intra-arrest study (200 patients). Results clearly demonstrated that prehospital intra-arrest transnasal cooling is safe and feasible, and is associated with an improvement in cooling time. Proposed uses: Diving Gas pressure increases with depth, rising 1 bar (14.5 psi; 100 kPa) every 10 meters, to over 1,000 bar at the bottom of the Mariana Trench.
Diving becomes more dangerous as depth increases, and deep diving presents many hazards. All surface-breathing animals are subject to decompression sickness, including aquatic mammals and free-diving humans (see taravana). Breathing at depth can cause nitrogen narcosis and oxygen toxicity. Holding the breath while ascending after breathing at depth can cause air embolisms, burst lung, and collapsed lung. Proposed uses: Special breathing gas mixes such as trimix reduce the risk of nitrogen narcosis but do not eliminate it. Heliox eliminates the risk of nitrogen narcosis but introduces the risk of helium tremors below about 500 feet (150 m). Atmospheric diving suits maintain body and breathing pressure at 1 bar, eliminating most of the hazards of descending, ascending, and breathing at depth. However, the rigid suits are bulky, clumsy, and very expensive. Proposed uses: Liquid breathing offers a third option, promising the mobility available with flexible dive suits and the reduced risks of rigid suits. With liquid in the lungs, the pressure within the diver's lungs could accommodate changes in the pressure of the surrounding water without the huge partial-pressure gas exposures required when the lungs are filled with gas. Liquid breathing would not result in the saturation of body tissues with high-pressure nitrogen or helium that occurs with the use of non-liquids, and thus would reduce or remove the need for slow decompression. Proposed uses: A significant problem, however, arises from the high viscosity of the liquid and the corresponding reduction in its ability to remove CO2. All uses of liquid breathing for diving must involve total liquid ventilation (see above). Total liquid ventilation, however, has difficulty moving enough liquid to carry away CO2, because no matter how great the total pressure is, the amount of partial CO2 gas pressure available to dissolve CO2 into the breathing liquid can never be much more than the pressure at which CO2 exists in the blood (about 40 mm of mercury (Torr)). At these pressures, most fluorocarbon liquids require minute-ventilation volumes of about 70 mL/kg of liquid (about 5 L/min for a 70 kg adult) to remove enough CO2 for normal resting metabolism. This is a great deal of fluid to move, particularly as liquids are more viscous and denser than gases (for example, water is about 850 times the density of air). Any increase in the diver's metabolic activity also increases CO2 production and the breathing rate, which is already at the limits of realistic flow rates in liquid breathing. It seems unlikely that a person could move 10 liters/min of fluorocarbon liquid without assistance from a mechanical ventilator, so "free breathing" may be unlikely. However, it has been suggested that a liquid breathing system could be combined with a CO2 scrubber connected to the diver's blood supply; a US patent has been filed for such a method.
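The minute-ventilation arithmetic above can be made concrete with a small worked sketch. The 70 mL/kg/min figure is taken from the text; the function name, body masses, and activity scaling are illustrative assumptions, not physiology:

```python
# Rough liquid minute-ventilation requirement for CO2 clearance in
# total liquid ventilation, per the ~70 mL/kg/min figure cited above.
PFC_ML_PER_KG_PER_MIN = 70  # from the text, for resting metabolism

def required_pfc_flow_l_per_min(body_mass_kg, metabolic_factor=1.0):
    """Estimate the PFC minute volume (L/min) needed to clear CO2.

    metabolic_factor crudely scales CO2 production above resting
    levels (an illustrative simplification, not a physiological model).
    """
    return body_mass_kg * PFC_ML_PER_KG_PER_MIN * metabolic_factor / 1000.0

print(required_pfc_flow_l_per_min(70))       # ~4.9 L/min at rest ("about 5 L/min")
print(required_pfc_flow_l_per_min(70, 2.0))  # ~9.8 L/min with doubled metabolism
```

Even the resting figure is near the limit of what a person could plausibly move unassisted, which is why the text above suggests a mechanical ventilator would be required.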
Proposed uses: Space travel Liquid immersion provides a way to reduce the physical stress of G forces. Forces applied to fluids are distributed as omnidirectional pressures. Because liquids cannot be practically compressed, they do not change density under high acceleration such as that experienced in aerial maneuvers or space travel. A person immersed in liquid of the same density as tissue has acceleration forces distributed around the body, rather than applied at a single point such as a seat or harness straps. This principle is used in a new type of G-suit called the Libelle G-suit, which allows aircraft pilots to remain conscious and functioning at more than 10g acceleration by surrounding them with water in a rigid suit. Acceleration protection by liquid immersion is limited by the differential density of body tissues and immersion fluid, limiting the utility of this method to about 15g to 20g. Proposed uses: Extending acceleration protection beyond 20g requires filling the lungs with fluid of density similar to water. An astronaut totally immersed in liquid, with liquid inside all body cavities, would feel little effect from extreme G forces, because the forces on a liquid are distributed equally and in all directions simultaneously. However, effects would still be felt because of density differences between different body tissues, so an upper acceleration limit would remain. Proposed uses: Liquid breathing for acceleration protection may never be practical because of the difficulty of finding a suitable breathing medium of density similar to water that is compatible with lung tissue. Perfluorocarbon fluids are twice as dense as water, and hence unsuitable for this application. Examples in fiction: Literary works Alexander Beliaev's 1928 science fiction novel Amphibian Man is based on a scientist and maverick surgeon who gives his son, Ichthyander (etymology: "fish" + "man"), a life-saving transplant: a set of shark gills. There is a film based on the novel. L. Sprague de Camp's 1938 short story "The Merman" hinges on an experimental process to make lungs function as gills, thus allowing a human being to "breathe" under water. Hal Clement's 1973 novel Ocean on Top portrays a small underwater civilization living in a 'bubble' of oxygenated fluid denser than seawater. Joe Haldeman's 1975 novel The Forever War describes liquid immersion and breathing in great detail as a key technology allowing space travel and combat with acceleration up to 50 G. In the Star Trek: The Next Generation novel The Children of Hamlin (1988), the crew of the Enterprise-D encounter an alien race whose ships contain a breathable liquid environment. Peter Benchley's 1994 novel White Shark centers on a Nazi scientist's experimental attempts to create an amphibious human, whose lungs are surgically modified to breathe underwater and who is trained to reflexively do so after being flooded with a fluorocarbon solution. Judith and Garfield Reeves-Stevens' 1994 Star Trek novel Federation explains that before the invention of the inertial dampener, the stresses of high-G acceleration required starship pilots to be immersed in liquid-filled capsules, breathing an oxygen-rich saline solution to prevent their lungs from being crushed. Nicola Griffith's novel Slow River (1995) features a sex scene occurring within a twenty-cubic-foot, silvery-pink perfluorocarbon pool, with the sensation described as "like breathing a fist". Ben Bova's novel Jupiter (2000) features a craft in which the crew are suspended in a breathable liquid that allows them to survive in the high-pressure environment of Jupiter's atmosphere. In Scott Westerfeld's sci-fi novel The Risen Empire (2003), the lungs of soldiers performing insertion from orbit are filled with an oxygen-rich polymer gel with embedded pseudo-alveoli and a rudimentary artificial intelligence. The novel Mechanicum (2008) by Graham McNeill, Book 9 in the Horus Heresy book series, describes physically crippled Titan (gigantic war machine) pilots encased in nutrient fluid tanks.
This allows them to continue operating beyond the limits normally imposed by the body. Examples in fiction: In Liu Cixin's novel The Dark Forest (2008), the warships of humanity in the 23rd century flood their compartments with an oxygen-rich liquid called 'deep-sea acceleration fluid' to protect the crew against the forces of extreme acceleration that the ships undergo. Ships enter a 'deep-sea state' in which the crew are immersed in the fluid and sedated before acceleration can commence. Examples in fiction: In the 2009 novel The Lost Symbol by Dan Brown, Robert Langdon (the protagonist) is completely submerged in breathable liquid mixed with hallucinogenic chemicals and sedatives as a torture and interrogation technique by Mal'akh (the antagonist). He goes through a near-death experience when he inhales the liquid and blacks out, losing control over his body, but is soon revived. Examples in fiction: In Greg van Eekhout's 2014 novel California Bones, two characters are put into tanks filled with liquid: "They were given no breathing apparatus, but the water in the tank was rich with perfluorocarbon, which carried more oxygen than blood." In author A.L. Mengel's science fiction novel The Wandering Star (2016), several characters breathe oxygenated fluid during a dive to explore an underwater city. They submerge in high-pressure "bubbles" filled with the perfluorocarbon fluid. Examples in fiction: In Tiamat's Wrath, a 2019 novel in The Expanse series by James S. A. Corey, the Laconian empire utilizes a ship with full-immersion liquid-breathing pods that allow the crew to undergo significantly increased g-forces. As powerful and fuel-efficient fusion engines in the series have made the survivability of the crew the only practical limitation on a ship's acceleration, this makes the ship the fastest in all of human-colonized space. Examples in fiction: Films and television The aliens in the Gerry Anderson UFO series (1970-1971) use liquid-breathing spacesuits. The 1989 film The Abyss by James Cameron features a character using liquid breathing to dive thousands of feet without compressing. The Abyss also features a scene with a rat submerged in, and breathing, fluorocarbon liquid, filmed in real life. Examples in fiction: In the 1995 anime Neon Genesis Evangelion, the cockpits of the titular mecha are filled with a fictional oxygenated liquid called LCL, which is required for the pilot to mentally sync with an Evangelion, as well as providing direct oxygenation of their blood and dampening the impacts of battle. Once the cockpit is flooded, the LCL is ionized, bringing its density, opacity, and viscosity close to those of air. Examples in fiction: In the movie Mission to Mars (2000), a character is depicted as being immersed in apparently breathable fluid before a high-acceleration launch. Examples in fiction: In season 1, episode 13 of Seven Days (1998-2001), chrononaut Frank Parker is seen breathing a hyper-oxygenated perfluorocarbon liquid that is pumped through a sealed full-body suit that he is wearing. This suit and liquid combination allows him to board a Russian submarine through open ocean at a depth of almost 1,000 feet. Upon boarding the submarine, he removes his helmet, expels the liquid from his lungs, and is able to breathe air again. Examples in fiction: In an episode of the Adult Swim cartoon series Metalocalypse (2006-2013), the other members of the band submerge guitarist Toki in a "liquid oxygen isolation chamber" while recording an album in the Mariana Trench.
In an episode of the Syfy Channel show Eureka (2006-2012), Sheriff Jack Carter is submerged in a tank of "oxygen rich plasma" to be cured of the effects of a scientific accident. In the anime series Aldnoah.Zero (2014-2015), episode 5 shows that Slaine Troyard was in a liquid-filled capsule when he crashed. Princess Asseylum witnessed the crash, helped him get out of the capsule, then used CPR on him to draw the liquid out of his lungs. Video games In the classic 1995 PC turn-based strategy game X-COM: Terror from the Deep, "Aquanauts" fighting in deep-ocean conditions breathe a dense oxygen-carrying fluid. In the EVE Online universe (2003), pilots in capsules (escape pods that function as the control center for the spacecraft) breathe an oxygen-rich, nano-saturated, glucose-based suspension solution.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Parallel communication** Parallel communication: In data transmission, parallel communication is a method of conveying multiple binary digits (bits) simultaneously using multiple conductors. This contrasts with serial communication, which conveys only a single bit at a time; this distinction is one way of characterizing a communications link. Parallel communication: The basic difference between a parallel and a serial communication channel is the number of electrical conductors used at the physical layer to convey bits. Parallel communication implies more than one such conductor. For example, an 8-bit parallel channel will convey eight bits (or a byte) simultaneously, whereas a serial channel would convey those same bits sequentially, one at a time. If both channels operated at the same clock speed, the parallel channel would be eight times faster. A parallel channel may have additional conductors for other signals, such as a clock signal to pace the flow of data, a signal to control the direction of data flow, and handshaking signals. Parallel communication: Parallel communication is and always has been widely used within integrated circuits, in peripheral buses, and in memory devices such as RAM. Computer system buses, on the other hand, have evolved over time: parallel communication was commonly used in earlier system buses, whereas serial communications are prevalent in modern computers. Examples of parallel communication systems: Internal buses: memory bus, system bus, and front-side bus IBM System/360 Direct Control Feature (1964): standard System/360 had an eight-bit-wide port, and the process-control variant Model 44 had a 32-bit width. Legacy computer peripheral buses: ISA, ATA, SCSI, PCI, and the once-ubiquitous IEEE-1284 / Centronics "printer port" Laboratory instrumentation bus IEEE-488 (see more examples at computer bus) Comparison with serial links: Before the development of high-speed serial technologies, the choice of parallel links over serial links was driven by these factors: Speed: Superficially, the speed of a parallel data link is equal to the number of bits sent at one time multiplied by the bit rate of each individual path; doubling the number of bits sent at once doubles the data rate. In practice, clock skew reduces the speed of every link to the slowest of all of the links. Comparison with serial links: Cable length: Crosstalk creates interference between the parallel lines, and the effect worsens with the length of the communication link. This places an upper limit on the length of a parallel data connection that is usually shorter than that of a serial connection. Comparison with serial links: Complexity: Parallel data links are easily implemented in hardware, making them a logical choice. Creating a parallel port in a computer system is relatively simple, requiring only a latch to copy data onto a data bus. In contrast, most serial communication must first be converted back into parallel form by a universal asynchronous receiver/transmitter (UART) before it may be directly connected to a data bus. The decreasing cost and better performance of integrated circuits have led to serial links being used in favor of parallel links; for example, IEEE 1284 printer ports gave way to USB, Parallel ATA to Serial ATA, and SCSI to FireWire and Thunderbolt, which are now the most common connectors for transferring data from audiovisual (AV) devices such as digital cameras or professional-grade scanners that once required purchasing a SCSI HBA.
Comparison with serial links: One major advantage of having fewer wires/pins in a serial cable is the significant reduction in the size and complexity of the connectors and in the associated costs. Designers of devices such as smartphones benefit from the development of connectors/ports that are small, durable, and still provide adequate performance. Comparison with serial links: On the other hand, there has been a resurgence of parallel data links in RF communication. Rather than transmitting one bit at a time (as in Morse code and BPSK), well-known techniques such as PSM, PAM, and multiple-input multiple-output (MIMO) communication send a few bits in parallel (each such group of bits is called a "symbol"). Such techniques can be extended to send an entire byte at once (256-QAM).
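As a hypothetical illustration of the "entire byte at once" idea (not taken from the article), the sketch below maps each 8-bit byte onto one point of a 16×16 square 256-QAM constellation: four bits select the in-phase (I) level and four the quadrature (Q) level, so a single symbol carries the whole byte. Real modems add Gray coding, pulse shaping, and amplitude scaling, all omitted here.

```python
# Hypothetical illustration: one byte -> one 256-QAM symbol on a 16x16 grid.

def byte_to_qam256(byte: int) -> complex:
    """Map 0..255 to a constellation point centered on zero."""
    assert 0 <= byte <= 255
    i_level = (byte >> 4) & 0xF        # high nibble -> in-phase axis
    q_level = byte & 0xF               # low nibble  -> quadrature axis
    # Shift levels 0..15 to the symmetric amplitudes -15, -13, ..., +15.
    return complex(2 * i_level - 15, 2 * q_level - 15)

def qam256_to_byte(symbol: complex) -> int:
    """Inverse mapping: recover the byte from a (noise-free) symbol."""
    i_level = int((symbol.real + 15) / 2)
    q_level = int((symbol.imag + 15) / 2)
    return (i_level << 4) | q_level

for b in (0x00, 0xA7, 0xFF):
    s = byte_to_qam256(b)
    assert qam256_to_byte(s) == b      # round-trips: one symbol = one byte
    print(f"byte {b:#04x} -> symbol {s}")
```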
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Paraspeckle** Paraspeckle: In cell biology, a paraspeckle is an irregularly shaped compartment of the cell, approximately 0.2-1 μm in size, found in the interchromatin space of the nucleus. First documented in HeLa cells, where there are generally 10-30 per nucleus, paraspeckles are now known to exist in all human primary cells, transformed cell lines and tissue sections. Their name is derived from their distribution in the nucleus; the "para" is short for parallel and the "speckle" refers to the splicing speckles to which they are always in close proximity. Their function is still not fully understood, but they are thought to regulate gene expression by sequestering proteins or mRNAs with inverted repeats in their 3′ UTRs. Structure: Paraspeckles are organised into core-shell spheroidal structures: seven proteins on a scaffold of the lncRNA NEAT1 (the 23 kb isoform termed NEAT1_2 or NEAT1v2). In 2016, West et al. proposed the currently accepted model of the paraspeckle, based on their findings using super-resolution microscopy. Their model states that the NEAT1_2 scaffold folds into a V-shaped unit. Many of these units are then assembled into a core-shell spheroid by FUS proteins. The core proteins SFPQ, NONO and PSPC1 associate tightly with the assembled structure. Finally, the shell forms, composed of partially co-localised TDP43 proteins. Because NEAT1 is integral to paraspeckle assembly, assembly is thought to occur in close proximity to NEAT1 transcription sites. It has been noted that paraspeckles have a great deal in common, in both features and structure, with cytoplasmic stress granules, another type of membrane-less organelle. This conclusion arose from the fact that both contain common component proteins, become more abundant under stress, appear to function by sequestering other proteins, and have distinct core and shell regions with predictably localised molecules. Localization: Paraspeckles are dynamic structures that are altered in response to changes in cellular metabolic activity. They are transcription-dependent. All five of the proposed protein components have RNA recognition motifs (RRMs) and, in the absence of RNA polymerase II transcription, the paraspeckle disappears and all of its associated components form a crescent-shaped perinucleolar cap in the nucleolus. This phenomenon is demonstrated during the cell cycle: paraspeckles are present during interphase and throughout mitosis except for telophase, because when the two daughter nuclei form there is no RNA Pol II transcription, so the protein components instead form a perinucleolar cap. The localization patterns were also reproduced in experiments using transcription-inhibiting drugs. Function: The role of the paraspeckle is not yet fully understood. It has been suggested that the activity of NONO (a.k.a. p54nrb), a protein component, depends on its localisation within the nucleus. Thus, one explanation of the paraspeckle's function is that it provides ordered localisation of its component proteins and thereby helps direct their activity. In turn, this is believed to give the paraspeckle a regulatory function over transcription. Also, a meta-analysis by Fox et al. (2018) links the paraspeckle's regulatory role to its ability to sequester, or steal, component proteins and RNAs, which causes other nuclear compartments to be depleted.
Current research into the paraspeckle's function mainly targets the roles of several of its components as indicators of larger cellular use; this article focuses mainly on the roles of paraspeckle proteins and NEAT1. Function: Physiological The main insight into their physiological function comes from their location. Prominent paraspeckles are found only in a subpopulation of cells in murine tissues, e.g. luteal cells or cells at the tip of the gut epithelium. Hence, based on their location, paraspeckles are thought to play a role in cancer regulation, reproduction and viral management. Function: One focus has been the paraspeckle's role in cancer and cell stress scenarios. Wang, Li and Huang (2019) record that quantities of NEAT1, and thus of paraspeckles, are increased in digestive system tumours and respiratory cancers. Furthermore, expression of NEAT1 is associated with tumour size, stage of cancer, ability to spread and overall patient survival. Meanwhile, failure to regulate NEAT1 production has been linked to non-cancerous conditions, such as neurodegenerative diseases like Parkinson's and Alzheimer's. However, the function of NEAT1 and paraspeckles is not always beneficial: they have been shown to enhance the malignancy and stemness of breast tumours by increasing expression of the WNT4 gene. NEAT1 also affects pregnancy and fertility, especially in female mammals, whose luteal cells are regulated by paraspeckles. Dysregulation can cause malformation, or potentially no formation, of the corpus luteum, leading to infertility, smaller litters, and fewer viable pregnancies. In a study by Chai Y, Liu J, Zhang Z, Liu L (2016), knockout mice (lacking NEAT1) exhibited malfunctions in epithelial cell proliferation, causing mothers to lactate poorly and reducing litter survival even further. Interestingly, these knockout mice exhibit a stochastic effect: the corpus luteum forms in some animals but not in others. This reinforces the view that paraspeckles are inducible by cell stress and that environmental triggers have an impact. Function: From a viral aspect, NEAT1 levels have an observable impact on infections within cells by many different RNA viruses, including Japanese encephalitis, rabies, HIV, influenza, and Hantaan, as well as the DNA-encoded herpes simplex virus. Wang, Li and Huang (2019) suggest that NEAT1_2/paraspeckles act as a promoter of cell defence, triggering and aiding the cellular defence mechanism. Molecular From the molecular perspective, this article examines the paraspeckle's function through NEAT1, NONO (p54nrb) and SFPQ (PSF). Function: One aspect of the molecular function is the paraspeckle's ability to sequester other molecules, affecting transcription. This is done both by NEAT1 and by some constituent proteins. NEAT1 is primarily responsible for the paraspeckle's architecture and for providing stability to the protein components. Yet it has also been shown to regulate gene expression, by recruiting transcription factors, sequestering them from gene promoters and ultimately altering transcription. Furthermore, Wang, Li and Huang (2019) state that NEAT1 can regulate expression by associating with RNA-binding proteins, which regulates RNA splicing events and can alter protein stability.
Another form of molecule sequestering occurs through NONO and SFPQ; both proteins can bind double-stranded RNA formed from transcribed inverted repeat motifs. Another aspect of molecular function is NEAT1's localisation of paraspeckle proteins to direct their activity. In a study by Hirose, T. et al. (2014), when NEAT1_2 levels increase, paraspeckles elongate. This, in turn, increases not only paraspeckle length but also the demand for more paraspeckle proteins to build the tertiary structures required for proper functioning, which reduces nucleoplasmic protein availability. Their study noted that this affects the role of free paraspeckle proteins such as SFPQ, which normally represses IL-8, an immune-responsive gene, and can activate the ADARB2 gene. Thus, gene regulation can be manipulated not just through sequestering of non-constituent proteins but also of paraspeckle constituent proteins.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lamb chop and pineapple diet** Lamb chop and pineapple diet: The Lamb chop and pineapple diet was an American high-protein fad diet that was popular in the 1920s. The idea behind the diet was that lamb chops provide sufficient protein for strength and pineapples enough sugar for energy, while the fruit acid would absorb or destroy any leftover fat from the lamb chops. In 1924, it was adopted and promoted by Nita Naldi and other Hollywood celebrities. She claimed the diet made her lose twenty pounds. It was later reported that the diet made Naldi sick for weeks, and she abandoned it after suffering from dizziness and hunger. Like other imbalanced fad diets, it was not recommended by nutritionists, as it failed to provide the essential nutrients that the body needs to function properly.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**1L-myo-inositol 1-phosphate cytidylyltransferase** 1L-myo-inositol 1-phosphate cytidylyltransferase: 1L-myo-inositol 1-phosphate cytidylyltransferase (EC 2.7.7.74; also known as CTP:inositol-1-phosphate cytidylyltransferase, IPCT, or L-myo-inositol-1-phosphate cytidylyltransferase; it occurs as part of the bifunctional CTP:inositol-1-phosphate cytidylyltransferase/CDP-inositol:inositol-1-phosphate transferase, IPCT/DIPPS) is an enzyme with systematic name CTP:1L-myo-inositol 1-phosphate cytidylyltransferase. This enzyme catalyses the following chemical reaction: CTP + 1L-myo-inositol 1-phosphate ⇌ diphosphate + CDP-1L-myo-inositol. This enzyme is involved in the biosynthesis of bis(1L-myo-inositol) 1,3'-phosphate.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**User review** User review: A user review is a review conducted by any person who has access to the internet and publishes their experience to a review site or social media platform following product testing or the evaluation of a service. User reviews are commonly provided by consumers who volunteer to write the review, rather than professionals who are paid to evaluate the product or service. User reviews might be compared to professional nonprofit reviews from a consumer organization, or to promotional reviews from an advertiser or company marketing a product. The growth of social media platforms has facilitated interaction between consumers after a review has been placed on online communities such as blogs, internet forums or other popular platforms. Purpose of user reviews: User reviews guide the decision-making of stakeholders, including consumers, producers, and competitors, regarding the good or service experienced by the user providing the review. Purchase decisions can draw on easy access to product information through reviews from users who have first-hand experience of the information or tangible good. Producers of goods and services can benefit from user reviews through word-of-mouth (WOM) recognition that enhances their reputation, but they can also be disparaged by them. For goods whose value is derived from knowledge and information, user reviews provide a "wealth of experience information" and therefore expand the pool of potential consumers. Economic effect: In some markets, user reviews are considered more trustworthy than professional or firm-initiated marketing. Economic effect: Consumer Through user reviews, consumers seeking to make a purchase decision are able to independently analyse and evaluate their choices. Consumers can identify the specific product attributes that provide the highest utility by comparing their own needs with the personal experiences other users describe. Through the online network, a consumer's positive interpretation of a user review is likely to increase the chance of purchase, whereas a negative interpretation is likely to broaden the consumer's search. Economic effect: Producer User reviews are seen as a 'driving force' in marketing, in direct correlation with sales of a good or service. Positive user reviews of a good or service are likely to increase demand for the product through positive attitudes and behaviour toward the company. Research has shown that negative user reviews have a more widespread impact than positive ones. Both the volume and valence of reviews have been recorded to affect demand for goods and services, and they also serve as an opportunity for improvement for management and production chains. Economic effect: Competitor By interpreting user reviews, competitors are able to understand their competition's strengths and weaknesses from a user's perspective. The distribution of personal experience through user reviews gives competitors an advantageous opportunity to improve their own product based on a rival's feedback. By exposing personal experiences, user reviews give the market a chance to analyse a company's weaknesses and exploit them as an opportunity, sometimes at the expense of the company originally reviewed. Fake reviews: Advertisers, marketers, and other competitive stakeholders have motivation to produce fake positive user reviews for products they wish to promote, or fake negative user reviews for products they wish to disparage.
In a fake user review, an actor will create a user account based on some marketing persona and post a user review purporting to be from a real person with the traits of the persona. Marketing companies that sell fake reviews train workers to write them in realistic ways and to post them from multiple accounts in order to increase credibility. This is a misuse of the user review system, which is universally intended to invite reviews from typical users, not paid fake personas. Alternatively, a real user may provide a fake review of a good or service they have not experienced. A 2021 study from the University of California, Los Angeles documented large markets where sellers on Amazon purchase fake reviews in private Facebook groups. These reviews increase the ratings and sales of products and are widely used by sellers. One way to prevent fake reviews is to create barriers that favor long-term, identified users who understand and support the community rules of a review site. Amazon has also sued fake reviewers. By setting boundaries for membership, such as verifying a user's details or requiring paid membership, companies can deter fake reviewers. In 2016, the Australian Competition & Consumer Commission fined Electrodry $215,000 for inciting its franchisees to post fake online reviews to boost their ratings on online review websites. Evaluation of user reviews: Various systems have been proposed to evaluate the quality of user reviews so that consumers can access the best ones, avoid lower-quality ones, and prevent the mixing of honestly provided reviews with less honest reviews from advertisers or people with an agenda other than impartial evaluation. Consumers perceive user reviews using good grammar and a persuasive writing style to be of higher quality than those written in other ways. The relationship between user reviews and the quality of a product is uncertain. For some levels of quality in some circumstances, there may be no relationship between quality and ratings. For top levels of quality, one study found that user ratings matched scientific ratings a little more than half the time. Furthermore, people reading user reviews tend to perceive them to be as objective as scientific testing, especially when there is an average user review score. Given a large set of user reviews by different people, there are text analytics algorithms that can accurately predict which reviews come from the same individual authors. Sentiment analysis can be used to predict the extent to which a review is favorable or critical, as sketched in the example below. Motivations for contributing a user review: Research suggests that the motivation to provide a user review commonly stems from psychological attitudes and behaviour. Uses and gratifications theory is a discipline that considers why anyone would volunteer time to create a user review. Some researchers suggest that individuals who value social benefits, self-enhancement, concern for others and the need for gratification are more likely to provide user reviews. Providing a user review is suggested to fulfil a sense of belonging by conforming to the majority or minority opinion formed from personal experience. Review bombing occurs when user reviews are posted en masse in order to more strongly influence the creator of a product or its sales, in response to an actual or perceived slight against the customers. In some situations, research suggests that competitors take advantage of anonymous review systems to negatively influence and control the intensity of their competition.
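As a concrete illustration of the sentiment-analysis point above, here is a minimal, hypothetical lexicon-based sketch. It is not drawn from any cited study: the word lists and scoring are invented for illustration, and production systems typically use trained models rather than fixed lexicons.

```python
# Minimal lexicon-based sentiment sketch (illustrative only; the word
# lists are invented, and real systems use trained models instead).

POSITIVE = {"great", "excellent", "love", "reliable", "recommend"}
NEGATIVE = {"bad", "broken", "terrible", "refund", "disappointed"}

def review_polarity(text: str) -> float:
    """Return a score in [-1, 1]: negative = critical, positive = favorable."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(review_polarity("Excellent product, I love it and recommend it!"))       # 1.0
print(review_polarity("Broken on arrival, terrible support, want a refund."))  # -1.0
```

A review-platform operator could use a score like this to flag strongly negative reviews for follow-up, or to aggregate polarity across a product's reviews alongside the star rating.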
Case studies: Many researchers have profiled user reviews on Yelp. Research has shown that user reviews often influence consumer purchases in the hospitality industry. User reviews have also brought criticism and scrutiny to health care practices; before the advent of user reviews, health care providers were rarely criticized or evaluated by users.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded