**Symbol table** Symbol table: In computer science, a symbol table is a data structure used by a language translator such as a compiler or interpreter, where each identifier (or symbol), constant, procedure and function in a program's source code is associated with information relating to its declaration or appearance in the source. In other words, the entries of a symbol table store the information related to the entry's corresponding symbol. Background: A symbol table may only exist in memory during the translation process, or it may be embedded in the output of the translation, such as in an ABI object file for later use. For example, it might be used during an interactive debugging session, or as a resource for formatting a diagnostic report during or after execution of a program. Description: The minimum information contained in a symbol table used by a translator or intermediate representation (IR) includes the symbol's name and its location or address. For a compiler targeting a platform with a concept of relocatability, the table will also contain relocatability attributes (absolute, relocatable, etc.) and needed relocation information for relocatable symbols. Symbol tables for high-level programming languages may store the symbol's type (string, integer, floating-point, etc.), its size, its dimensions, and its bounds. Not all of this information is included in the output file, but some of it may be provided for use in debugging. In many cases, the symbol's cross-reference information is stored with or linked to the symbol table. Most compilers print some or all of this information in symbol table and cross-reference listings at the end of translation. Implementation: Numerous data structures are available for implementing tables. Trees, linear lists and self-organizing lists can all be used to implement a symbol table. The symbol table is accessed by most phases of a compiler, beginning with lexical analysis and continuing through optimization. Implementation: A compiler may use one large symbol table for all symbols, or use separate or hierarchical symbol tables for different scopes. For example, in a scoped language such as Algol or PL/I a symbol "p" can be declared separately in several procedures, perhaps with different attributes. The scope of each declaration is the section of the program in which references to "p" resolve to that declaration. Each declaration represents a unique identifier "p". The symbol table must have some means of differentiating references to the different "p"s. Implementation: A common data structure used to implement symbol tables is the hash table. The time for searching in hash tables is independent of the number of elements stored in the table, so it is efficient for a large number of elements. It also simplifies the classification of literals in tabular format by including the classification in the calculation of the hash key. As the lexical analyser spends a great proportion of its time looking up the symbol table, this activity has a crucial effect on the overall speed of the compiler. A symbol table must be organised in such a way that entries can be found as quickly as possible. Hash tables are usually used to organise a symbol table, where the keyword or identifier is 'hashed' to produce an array subscript. Collisions are inevitable in a hash table, and a common way of handling them is to store the synonym in the next available free space in the table.
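A minimal sketch in Python (an illustration, not from the original article) of the two implementation ideas above: each scope is a hash table (a Python dict), and scopes form a chain so that a name such as "p", declared separately in nested scopes, resolves to the innermost declaration.

```python
class SymbolTable:
    def __init__(self, parent=None):
        self.symbols = {}     # name -> attributes; dict lookup is O(1) on average
        self.parent = parent  # enclosing scope, or None for the global scope

    def declare(self, name, **attrs):
        self.symbols[name] = attrs  # e.g. type, size, address

    def resolve(self, name):
        scope = self
        while scope is not None:  # walk outward through enclosing scopes
            if name in scope.symbols:
                return scope.symbols[name]
            scope = scope.parent
        raise KeyError(f"undeclared identifier: {name!r}")

global_scope = SymbolTable()
global_scope.declare("p", type="float", size=8)
procedure_scope = SymbolTable(parent=global_scope)
procedure_scope.declare("p", type="int", size=4)   # a different "p" in an inner scope

print(procedure_scope.resolve("p"))  # {'type': 'int', 'size': 4}
print(global_scope.resolve("p"))     # {'type': 'float', 'size': 8}
```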
Applications: An object file will contain a symbol table of the identifiers it contains that are externally visible. During the linking of different object files, a linker will identify and resolve these symbol references. Usually all undefined external symbols will be searched for in one or more object libraries. If a module is found that defines that symbol, it is linked together with the first object file, and any undefined external identifiers are added to the list of identifiers to be looked up. This process continues until all external references have been resolved. It is an error if one or more remain unresolved at the end of the process. Applications: While reverse engineering an executable, many tools refer to the symbol table to check what addresses have been assigned to global variables and known functions. If the symbol table has been stripped or cleaned out before being converted into an executable, tools will find it harder to determine addresses or understand anything about the program. Example: Consider the following program written in C: A C compiler that parses this code will contain at least the following symbol table entries: In addition, the symbol table may also contain entries generated by the compiler for intermediate expression values (e.g., the expression that casts the i loop variable into a double, and the return value of the call to function bar()), statement labels, and so forth. Example: SysV ABI: An example of a symbol table can be found in the SysV Application Binary Interface (ABI) specification, which mandates how symbols are to be laid out in a binary file, so that different compilers, linkers and loaders can all consistently find and work with the symbols in a compiled object. Example: SysV ABI: The SysV ABI is implemented in the GNU binutils' nm utility. This format uses a sorted memory address field, a "symbol type" field, and a symbol identifier (called "Name"). The symbol types in the SysV ABI (and nm's output) indicate the nature of each entry in the symbol table. Each symbol type is represented by a single character. For example, symbol table entries representing initialized data are denoted by the character "d" and symbol table entries for functions have the symbol type "t" (because executable code is located in the text section of an object file). Additionally, the capitalization of the symbol type indicates the type of linkage: lower-case letters indicate that the symbol is local and upper-case indicates external (global) linkage. Example: the Python symbol table: The Python programming language includes extensive support for creating and manipulating symbol tables. Properties that can be queried include whether a given symbol is a free variable or a bound variable, whether it is block scope or global scope, whether it is imported, and what namespace it belongs to. Example: Dynamic symbol tables: Some programming languages allow the symbol table to be manipulated at run-time, so that symbols can be added at any time. Racket is an example of such a language. Both the LISP and the Scheme programming languages allow arbitrary, generic properties to be associated with each symbol. The Prolog programming language is essentially a symbol-table manipulation language; symbols are called atoms, and the relationships between symbols can be reasoned over. Similarly, OpenCog provides a dynamic symbol table, called the atomspace, which is used for knowledge representation.
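The Python support mentioned above lives in the standard-library symtable module. A short sketch of the queries it supports (the sample source and names such as outer/inner are made up for illustration):

```python
import symtable

src = """
import math
g = 1
def outer():
    x = 2
    def inner():
        return x + g   # x is a free variable here; g resolves to the global
    return inner
"""

top = symtable.symtable(src, "<example>", "exec")

def walk(table, depth=0):
    # Print each scope, then the properties of every symbol it contains.
    print("  " * depth + f"{table.get_type()} scope: {table.get_name()}")
    for sym in table.get_symbols():
        print("  " * depth + f"  {sym.get_name()}: "
              f"local={sym.is_local()}, global={sym.is_global()}, "
              f"free={sym.is_free()}, imported={sym.is_imported()}")
    for child in table.get_children():
        walk(child, depth + 1)

walk(top)
```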
**Castell's sign** Castell's sign: Castell's sign is a medical sign assessed to evaluate splenomegaly, and is typically part of an abdominal examination. It is an alternative physical examination maneuver to percussion over Traube's space. Castell's sign: Splenomegaly, although associated with numerous diseases, remains one of the more elusive physical exam findings in the abdomen. Conditions such as infectious mononucleosis, thalassemia, and cirrhotic liver disease may all involve splenomegaly, and as a result a reliable sign associated with this condition has been sought for generations. Currently, several such signs of splenomegaly exist, the utility of each of which has been debated in the medical literature. The presence or absence of splenomegaly, however, can be reliably appreciated on physical exam using Castell's sign in conjunction with other clinical information, increasing the positive predictive value of the test. When used in a decision-making rubric, Castell's sign becomes a valuable part of deciding whether to pursue further imaging. Technique: Castell's method involves first placing the patient in the supine position. With the patient in full inspiration and then full expiration, percuss the area of the lowest intercostal space (eighth or ninth) in the left anterior axillary line. If the note changes from resonant on full expiration to dull on full inspiration, the sign is regarded as positive. The resonant note heard upon full expiration is likely to be due to the air-filled stomach or splenic flexure of the colon. When the patient inspires, the spleen moves inferiorly along the posterolateral abdominal wall. If the spleen is enlarged enough that the inferior pole reaches the eighth or ninth intercostal space, a dull percussion note will be appreciated, indicating splenomegaly. Technique: Some limitations, however, were also reported by Castell in his original paper. First, the presence of gross splenomegaly or profuse fluid in the stomach or colon may lead to the absence of a resonant percussion note on full expiration. Also, later articles have questioned the maneuver's reliability in more obese individuals and noted that it depends on how long after a meal the patient is examined. Interpretation: The 1993 systematic review by the Rational Clinical Examination found that Castell's sign was the most sensitive physical examination maneuver for detecting splenomegaly when comparing palpation, Nixon's sign (another percussion sign), and Traube's space percussion: sensitivity = 82%, specificity = 83%. In asymptomatic patients where there is a very low clinical suspicion for splenomegaly, physical examination alone is unlikely to rule in splenomegaly due to the inadequate sensitivity of the examination. Similar to many other findings in medicine, Castell's sign must be combined with clinical findings to rule in splenomegaly. To achieve a positive predictive value over 90%, the pretest probability must be 70%. Grover et al. recommend a greater than 10% preexamination clinical suspicion of splenic enlargement to effectively rule in the diagnosis of splenomegaly with physical exam. However, a 10% pretest probability only yields a positive predictive value of 35%. To rule out an enlarged spleen, a pretest probability of 30% or less will yield a negative predictive value over 90% (see the worked calculation below). Given the paucity of physical exam findings to evaluate possible splenomegaly, Castell's sign is the most sensitive, and is thus a good tool to teach in an advanced-type physical diagnosis course.
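These figures follow from Bayes' rule applied to the review's sensitivity (0.82) and specificity (0.83); a worked check:

\[
\mathrm{PPV} = \frac{\mathrm{sens}\cdot p}{\mathrm{sens}\cdot p + (1-\mathrm{spec})(1-p)},
\qquad
\mathrm{NPV} = \frac{\mathrm{spec}\,(1-p)}{\mathrm{spec}\,(1-p) + (1-\mathrm{sens})\,p}
\]

\[
p=0.70:\ \mathrm{PPV} = \frac{0.82\times 0.70}{0.82\times 0.70 + 0.17\times 0.30} = \frac{0.574}{0.625} \approx 0.92
\]

\[
p=0.10:\ \mathrm{PPV} = \frac{0.082}{0.082+0.153} \approx 0.35,
\qquad
p=0.30:\ \mathrm{NPV} = \frac{0.83\times 0.70}{0.83\times 0.70+0.18\times 0.30} = \frac{0.581}{0.635} \approx 0.91
\]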
Castell's sign has been shown to be superior in sensitivity to other spleen percussion signs as well as to palpation, which is not likely to be useful given the extreme enlargement necessary to feel the spleen below the costal margin. Castell's sign is thus, in the appropriate clinical scenario, an important part of the abdominal physical exam. History: Donald O. Castell first described his sign in the 1967 paper “The Spleen Percussion Sign”, published in the Annals of Internal Medicine. Castell, a George Washington Medical School graduate, is also a Navy-trained gastroenterologist. While stationed at the Great Lakes naval base in northern Illinois, Castell studied 20 male patients, 10 of whom had a positive percussion [Castell's] sign and 10 patient controls with negative percussion signs. The spleen of each patient was then quantitatively measured using chromium-labeled erythrocytes and a radioisotope photoscan of the spleen. Castell showed that patients in the control group had a mean spleen size of 75 cm2 with a range of 57 cm2 to 75 cm2, while those who had a positive percussion sign had a mean spleen size of 93 cm2 with a range of 77 cm2 to 120 cm2. Castell concluded that his technique of spleen percussion was thus useful in identifying “slight to moderate degrees of splenic enlargement” and, as a result, constituted a “valuable diagnostic technique.”
**Loose snow avalanche** Loose snow avalanche: A loose snow avalanche is an avalanche formed in snow with little internal cohesion among individual snow crystals. Usually very few fatalities occur from loose snow avalanches: they tend to break beneath the person and are usually small, sometimes with a path only a few centimeters wide, and as a result are sometimes called "harmless sloughs" that at most usually cause the person merely to fall. However, depending on the terrain, loose snow avalanches can grow large, and have been known to carry people off a cliff and into a crevasse, bury them in a gully, and even completely destroy houses and other buildings. Ideal conditions for a loose snow avalanche are steep slope angles of 40 degrees or more, persistent sub-zero temperatures and low humidity, moderate to heavy snowfall, and winds light enough not to affect the density of the snow. This produces light, fluffy snow that packs poorly or not at all. Small loose snow avalanches can even be a sign of stability of the snowpack: slab avalanches are triggered by a hard layer of snow over a very soft and weak layer, whereas loose snow avalanches consist of a very soft layer of snow on a hard layer or on the ground.
**Solar eclipse of April 18, 1977** Solar eclipse of April 18, 1977: An annular solar eclipse took place at the Moon's descending node of its orbit on Monday, April 18, 1977. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. An annular solar eclipse occurs when the Moon's apparent diameter is smaller than the Sun's, blocking most of the Sun's light and causing the Sun to look like an annulus (ring). An annular eclipse appears as a partial eclipse over a region of the Earth thousands of kilometres wide. Annularity was visible in South West Africa (today's Namibia), Angola, Zambia, southeastern Zaire (today's Democratic Republic of the Congo), northern Malawi, Tanzania, the Seychelles and the whole British Indian Ocean Territory. Related eclipses: Eclipses in 1977: A partial lunar eclipse on Monday, 4 April 1977. An annular solar eclipse on Monday, 18 April 1977. A penumbral lunar eclipse on Tuesday, 27 September 1977. A total solar eclipse on Wednesday, 12 October 1977. Solar eclipses of 1975–1978: There were 8 solar eclipses (at 6-month intervals) between May 11, 1975 and October 2, 1978. Related eclipses: Saros 138: It is a part of Saros cycle 138, repeating every 18 years, 11 days, and containing 70 events. The series started with a partial solar eclipse on June 6, 1472. It contains annular eclipses from August 31, 1598, through February 18, 2482, with a hybrid eclipse on March 1, 2500. It has total eclipses from March 12, 2518, through April 3, 2554. The series ends at member 70 as a partial eclipse on July 11, 2716. The longest duration of totality will be only 56 seconds, on April 3, 2554. Related eclipses: Inex series: This eclipse is a part of the long-period inex cycle, repeating at alternating nodes every 358 synodic months (≈ 10,571.95 days, or 29 years minus 20 days). Their appearance and longitude are irregular due to a lack of synchronization with the anomalistic month (period of perigee). However, groupings of 3 inex cycles (≈ 87 years minus 2 months) come close (≈ 1,151.02 anomalistic months), so eclipses are similar in these groupings. Related eclipses: Metonic series: The metonic series repeats eclipses every 19 years (6939.69 days), lasting about 5 cycles. Eclipses occur on nearly the same calendar date. In addition, the octon subseries repeats 1/5 of that, or every 3.8 years (1387.94 days). All eclipses in this series occur at the Moon's descending node.
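The quoted cycle lengths can be checked from the mean lunar months (using the standard mean values of 29.530589 days for the synodic month and 27.554550 days for the anomalistic month, which the source does not state explicitly):

\[
358 \times 29.530589\ \text{d} \approx 10{,}571.95\ \text{d} \approx 28.94\ \text{yr} \quad (\text{29 years minus about 20 days})
\]

\[
3 \times 358 = 1074\ \text{synodic months} \approx 31{,}715.85\ \text{d} \approx \frac{31{,}715.85}{27.554550} \approx 1{,}151.0\ \text{anomalistic months}
\]

\[
235 \times 29.530589 \approx 6{,}939.69\ \text{d}\ (\text{19-year Metonic cycle}), \qquad 6{,}939.69 / 5 \approx 1{,}387.94\ \text{d}\ (\text{octon})
\]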
**Peirce's criterion** Peirce's criterion: In robust statistics, Peirce's criterion is a rule for eliminating outliers from data sets, which was devised by Benjamin Peirce. Outliers removed by Peirce's criterion: The problem of outliers: In data sets containing real-numbered measurements, the suspected outliers are the measured values that appear to lie outside the cluster of most of the other data values. The outliers would greatly change the estimate of location if the arithmetic average were to be used as a summary statistic of location. The problem is that the arithmetic mean is very sensitive to the inclusion of any outliers; in statistical terminology, the arithmetic mean is not robust. Outliers removed by Peirce's criterion: In the presence of outliers, the statistician has two options. First, the statistician may remove the suspected outliers from the data set and then use the arithmetic mean to estimate the location parameter. Second, the statistician may use a robust statistic, such as the median statistic. Peirce's criterion is a statistical procedure for eliminating outliers. Outliers removed by Peirce's criterion: Uses of Peirce's criterion: The statistician and historian of statistics Stephen M. Stigler wrote the following about Benjamin Peirce: "In 1852 he published the first significance test designed to tell an investigator whether an outlier should be rejected (Peirce 1852, 1878). The test, based on a likelihood ratio type of argument, had the distinction of producing an international debate on the wisdom of such actions (Anscombe, 1960, Rider, 1933, Stigler, 1973a)." Peirce's criterion is derived from a statistical analysis of the Gaussian distribution. Unlike some other criteria for removing outliers, Peirce's method can be applied to identify two or more outliers. Outliers removed by Peirce's criterion: "It is proposed to determine in a series of m observations the limit of error, beyond which all observations involving so great an error may be rejected, provided there are as many as n such observations. The principle upon which it is proposed to solve this problem is, that the proposed observations should be rejected when the probability of the system of errors obtained by retaining them is less than that of the system of errors obtained by their rejection multiplied by the probability of making so many, and no more, abnormal observations." Hawkins provides a formula for the criterion. Outliers removed by Peirce's criterion: Peirce's criterion was used for decades at the United States Coast Survey. Outliers removed by Peirce's criterion: "From 1852 to 1867 he served as the director of the longitude determinations of the U. S. Coast Survey and from 1867 to 1874 as superintendent of the Survey. During these years his test was consistently employed by all the clerks of this, the most active and mathematically inclined statistical organization of the era." Peirce's criterion was also discussed in William Chauvenet's book. Applications: An application for Peirce's criterion is removing poor data points from observation pairs in order to perform a regression between the two observations (e.g., a linear regression). Peirce's criterion does not depend on observation data (only characteristics of the observation data), which makes it a highly repeatable process that can be calculated independently of other processes. This feature makes Peirce's criterion for identifying outliers ideal in computer applications because it can be written as a callable function.
Applications: Previous attempts: In 1855, B. A. Gould attempted to make Peirce's criterion easier to apply by creating tables of values derived from Peirce's equations. A disconnect still exists between Gould's algorithm and the practical application of Peirce's criterion. Applications: In 2003, S. M. Ross (University of New Haven) re-presented Gould's algorithm (now called "Peirce's method") with a new example data set and work-through of the algorithm. This methodology still relies on using look-up tables, which have been updated in this work (Peirce's criterion table). In 2008, an attempt to write pseudo-code was made by the Danish geologist K. Thomsen. While this code provided some framework for Gould's algorithm, users were unsuccessful in calculating values reported by either Peirce or Gould. Applications: In 2012, C. Dardis released the R package "Peirce" with various methodologies (Peirce's criterion and the Chauvenet method) with comparisons of outlier removals. Dardis and fellow contributor Simon Muller successfully implemented Thomsen's pseudo-code into a function called "findx". The code is presented in the R implementation section below. References for the R package are available online, as well as an unpublished review of the R package results. In 2013, a re-examination of Gould's algorithm and the utilisation of advanced Python programming modules (i.e., numpy and scipy) made it possible to calculate the squared-error threshold values for identifying outliers. Applications: Python implementation: In order to use Peirce's criterion, one must first understand the input and return values. Regression analysis (or the fitting of curves to data) results in residual errors (or the difference between the fitted curve and the observation points). Therefore, each observation point has a residual error associated with a fitted curve. By taking the square (i.e., the residual error raised to the power of two), residual errors are expressed as positive values. If the squared error is too large (i.e., due to a poor observation), it can cause problems with the regression parameters (e.g., slope and intercept for a linear curve) retrieved from the curve fitting. Applications: It was Peirce's idea to statistically identify what constituted an error "too large", and therefore an "outlier", which could be removed from the observations to improve the fit between the observations and a curve. K. Thomsen identified that three parameters were needed to perform the calculation: the number of observation pairs (N), the number of outliers to be removed (n), and the number of regression parameters (e.g., coefficients) used in the curve-fitting to get the residuals (m). The end result of this process is to calculate a threshold value (of squared error) whereby observations with a squared error smaller than this threshold should be kept and observations with a squared error larger than this value should be removed (i.e., as an outlier). Applications: Because Peirce's criterion does not take observations, fitting parameters, or residual errors as an input, the output must be re-associated with the data. Taking the average of all the squared errors (i.e., the mean squared error) and multiplying it by the threshold squared error (i.e., the output of this function) will result in the data-specific threshold value used to identify outliers.
Applications: The following Python code returns x-squared values for a given N (first column) and n (top row) in Table 1 (m = 1) and Table 2 (m = 2) of Gould 1855; a sketch of this function is given below. Owing to the Newton-method iteration, look-up tables, such as N versus log Q (Table III in Gould, 1855) and x versus log R (Table III in Peirce, 1852 and Table IV in Gould, 1855), are no longer necessary. Applications: Python code, Java code, and R implementation: Thomsen's code was successfully written into the function call "findx" by C. Dardis and S. Muller in 2012, which returns the maximum error deviation, x. To complement the Python code presented in the previous section, the R equivalent of "peirce_dev" is also presented here, which returns the squared maximum error deviation, x². These two functions return equivalent values, by either squaring the value returned by the "findx" function or taking the square root of the value returned by the "peirce_dev" function. Differences occur with error handling. For example, the "findx" function returns NaNs for invalid data, while "peirce_dev" returns 0 (which allows computations to continue without additional NA value handling). Also, the "findx" function does not support any error handling when the number of potential outliers increases towards the number of observations (it throws a missing-value error and a NaN warning). Applications: Just as with the Python version, the squared error (i.e., x²) returned by the "peirce_dev" function must be multiplied by the mean squared error of the model fit to get the squared-delta value (i.e., Δ²). Use Δ² to compare the squared-error values of the model fit. Any observation pairs with a squared error greater than Δ² are considered outliers and can be removed from the model. An iterator should be written to test increasing values of n until the number of outliers identified (comparing Δ² to model-fit squared errors) is less than those assumed (i.e., Peirce's n).
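The code listings themselves are not reproduced in this extract. The following Python sketch implements the procedure as described, Gould's iterative scheme with inputs N (observation pairs), n (outliers to remove), and m (fit parameters), returning the squared threshold x²; it follows the published "peirce_dev" approach but should be read as a reconstruction, not necessarily the exact original listing.

```python
import numpy
import scipy.special

def peirce_dev(N: int, n: int, m: int) -> float:
    """Return Peirce's squared threshold error deviation x^2.

    N -- total number of observation pairs
    n -- number of outliers to be removed
    m -- number of regression parameters used in the fit
    """
    if N <= 1:
        return 0.0
    # Ratio of the probabilities of the two systems of errors (Gould's Q).
    Q = (n ** (n / N) * (N - n) ** ((N - n) / N)) / N
    r_new, r_old = 1.0, 0.0
    while abs(r_new - r_old) > N * 2.0e-16:
        # Solve for lambda given the current estimate of R.
        ldiv = r_new ** n if r_new ** n != 0.0 else 1.0e-6
        lam = (Q ** N / ldiv) ** (1.0 / (N - n))
        # Squared error threshold implied by lambda.
        x2 = 1.0 + (N - m - n) / n * (1.0 - lam ** 2)
        if x2 < 0.0:  # no real solution: threshold collapses to zero
            return 0.0
        # Update R from the Gaussian tail probability of the threshold.
        r_old = r_new
        r_new = numpy.exp((x2 - 1.0) / 2.0) * scipy.special.erfc(
            numpy.sqrt(x2) / numpy.sqrt(2.0))
    return x2

# Example use, per the text: scale by the fit's mean squared error to get the
# data-specific threshold Delta^2 (mean_squared_error is hypothetical here).
# delta2 = peirce_dev(60, 2, 1) * mean_squared_error
```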
**Fumonisin B2** Fumonisin B2: Fumonisin B2 is a fumonisin mycotoxin produced by the fungi Fusarium verticillioides (formerly Fusarium moniliforme) and Aspergillus niger. It is a structural analog of fumonisin B3 and lacks one hydroxy group compared to fumonisin B1. Fumonisin B2 is more cytotoxic than fumonisin B1. Fumonisin B2 inhibits sphingosine acyltransferase. Fumonisin B2 and other fumonisins frequently contaminate maize and other crops, and it has recently been shown, using LC–MS/MS, that FB2 can contaminate coffee beans as well.
**Ophicleide (organ stop)** Ophicleide (organ stop): Ophicleide ( OFF-ih-klyde) and Contra Ophicleide are powerful pipe organ reed pipes used as organ stops. The name comes from the early brass instrument, the ophicleide, forerunner of the euphonium. Ophicleide (organ stop): The Ophicleide is generally at 16 ft (4.9 m) pitch, and the Contra Ophicleide at 32 ft (9.8 m). While they can be 8 ft (2.4 m) or 16 ft (4.9 m) reeds in a manual division, they are most commonly found in the pedal division of the organ. If the classic voicing technique and use of terminology are followed, they are voiced to develop both maximum fundamental tone (as in the Bombarde) and a strong overtone series (as in the Posaune), making the Ophicleide and Contra Ophicleide among the most powerful and loudest organ stops. Generally the only types of stop more powerful are the various forms of Trompette en chamade. However, the Ophicleides require an extremely large instrument to balance their sound, and so are rarely built today, except into the largest of organs (about one hundred ranks and up). The Grand Ophicleide in the Boardwalk Hall Organ, Atlantic City, New Jersey, is recognized as the loudest organ stop in the world, voiced on 100 in (2.5 m) of wind pressure. Its tone is described by Guinness World Records as having "a pure trumpet note of ear-splitting volume, more than six times the volume of the loudest locomotive whistle."
**Product category volume** Product category volume: In marketing, product category volume (PCV) is the weighted measure of distribution based on store sales within the product category. PCV is a refinement of all commodity volume (ACV). It examines the share of the relevant product category sold by stores in which a given product has gained distribution. Distribution metrics quantify the availability of products sold through retailers, usually as a percentage of all potential outlets. Often, outlets are weighted by their share of category sales or “all commodity” sales. For marketers who sell through resellers, distribution metrics reveal a brand's percentage of market access. Balancing a firm's efforts in “push” (building and maintaining reseller and distribution support) and “pull” (generating customer demand) is an ongoing strategic concern for marketers. Purpose: Product category volume measures a firm's ability to convey a product to its customers in terms of total category sales among outlets carrying the brand. It helps marketers understand whether a given product is gaining distribution in outlets where customers look for its category, as opposed to simply high-traffic stores where the product may get lost in the aisles. When detailed sales data are available, PCV can provide a strong indication of the market share within a category to which a given brand has access. If sales data are not available, marketers can calculate an approximate PCV by using the square footage devoted to the relevant category as an indication of the importance of that category to a particular outlet or store type. Construction: Product category volume (PCV) is the percentage share (or dollar value) of category sales made by stores that stock at least one SKU of the brand in question, in comparison with all stores in their universe. Product category volume (PCV) distribution (%) = 100 × total category sales of outlets carrying brand ($) ÷ total category sales of all outlets ($). Product category volume (PCV) distribution ($) = total category sales of outlets carrying brand ($).
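A worked example with hypothetical figures: suppose the outlets stocking a brand account for $20 million of category sales, out of $50 million of category sales across all outlets. Then

\[
\mathrm{PCV}(\%) = 100 \times \frac{\$20\,\mathrm{M}}{\$50\,\mathrm{M}} = 40\%, \qquad \mathrm{PCV}(\$) = \$20\,\mathrm{M}
\]

That is, the brand has access to 40% of the category's volume, even if it is stocked by far fewer than 40% of outlets.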
**Rose hip seed oil** Rose hip seed oil: Rose hip seed oil is a pressed seed oil, extracted from the seeds of the wild rose bush Rosa rubiginosa (Spanish: rosa mosqueta) in the southern Andes. Rosehip seed oil can also be extracted from Rosa canina, a wild rose species native to Europe, northwest Africa, and western Asia. The fruits of the rosehip have long been used in folk medicine. Rosehips have prophylactic and therapeutic actions against the common cold, infectious diseases, gastrointestinal disorders, urinary tract diseases, and inflammatory diseases. Nutrition: The oil contains provitamin A (mostly beta-carotene). It has been wrongly said to contain retinol (vitamin A), a vitamin made solely by animals from provitamin A. It does, however, contain levels (up to 0.357 mg/L) of tretinoin, or all-trans retinoic acid, a vitamin A acid that retinol converts to. Similarly, while the fruit is rich in vitamin C, the oil does not contain any, as vitamin C is water-soluble. Rose hip seed oil is high in the essential fatty acids linoleic acid (omega-6) and α-linolenic acid (omega-3). Nutrition: Rose hips are remarkable fruits for their traditional pharmaceutical uses, which may be partly attributed to their rich profile of bioactives, especially antioxidant phenolics (Olsson et al., 2005). The seed lipids of rose hips contain high amounts of polyunsaturated fatty acids (Szentmihalyi et al., 2002). Rose hips are popular due to their food, phytomedicine, and cosmo-nutraceutical uses (Uggla et al., 2003). The fruits (rose hips) of Rosa canina in particular contain a high content of vitamin C and proanthocyanidins and are used for various food and pharmaceutical applications (Osmianski et al., 1986). Nutrition: This chapter mainly focuses on the traditional pharmaceutical and food science applications of rose hips and the essential oil of a widely distributed species of rose hips, R. canina L. (https://www.researchgate.net/publication/283507224_Rose_Hip_Rosa_canina_L_oils/link/5a6dfc610f7e9bd4ca6d46bd/download) Uses: Researchers have tested the efficacy of topical rose hip seed oil together with oral fat-soluble vitamins on different inflammatory dermatitides such as eczema, neurodermatitis, and cheilitis, with promising findings for the topical use of rose hip seed oil on these inflammatory dermatoses. Due to its high content of UFAs and antioxidants, rose hip oil offers relatively high protection against inflammation and oxidative stress. Research on rose hip oil has shown that it reduces skin pigmentation, discolouration, acne lesions, scars and stretch marks, as well as retaining the moisture of the skin and delaying the appearance of wrinkles. Cosmetologists recommend wild rose seed oil as a natural skin-vitaliser. A 2014 study on the nutritional and phytochemical composition of the rose hip seed and the fatty acid and sterol compositions of the seed oil showed that rose hip seed and seed oil were good sources of phytonutrients. Consumption of foods rich in phytonutrients is recommended to reduce the risk of chronic diseases. The nutritional composition and the presence of bioactive compounds make the rose hip seed a valuable source of phytonutrients. The rose hip seed was highly rich in carbohydrates and ascorbic acid, and the rose hip seed oil was highly rich in polyunsaturated fatty acids and phytosterols. The rose hip seed and seed oil proved to have antioxidant activity.
The findings of the study indicated that the rose hip seed and seed oil may be proposed as ingredients in functional food formulations and dietary supplements.
**Flame test** Flame test: A flame test is an analytical procedure used in chemistry to detect the presence of certain elements, primarily metal ions, based on each element's characteristic flame emission spectrum (which may be affected by the presence of chloride ion). The color of flames in general also depends on temperature and oxygen supply; see flame color. Process: The test involves introducing a sample of the element or compound to a hot, non-luminous flame and observing the color of the flame that results. The idea of the test is that sample atoms evaporate and, being hot, emit light while in the flame. The solvent of the solution evaporates first, leaving finely divided solid particles which move to the hottest region of the flame, where gaseous atoms and ions are produced through the dissociation of molecules. Here electrons are excited by the heat, and they spontaneously emit photons as they decay to lower energy states. A bulk sample emits light too, but its light is not good for analysis; the sample is therefore usually held on a support such as a platinum wire, cleaned repeatedly with hydrochloric acid to remove traces of previous analytes. The compound is usually made into a paste with concentrated hydrochloric acid, as metal halides, being volatile, give better results. Different flames should be tried to avoid wrong data due to "contaminated" flames, or occasionally to verify the accuracy of the color. In high-school chemistry courses, wooden splints are sometimes used, mostly because solutions can be dried onto them, and they are inexpensive. Nichrome wire is also sometimes used. When using a splint, one must be careful to wave the splint through the flame rather than holding it in the flame for extended periods, to avoid setting the splint itself on fire. The use of a cotton swab or melamine foam (used in "eraser" cleaning sponges) as a support has also been suggested. Process: Sodium is a common component or contaminant in many compounds and its spectrum tends to dominate over others. The test flame is often viewed through cobalt blue glass to filter out the yellow of sodium and allow for easier viewing of other metal ions. Results: The flame test is relatively quick and simple to perform and can be carried out with the basic equipment found in most chemistry laboratories. However, the range of elements positively detectable under these conditions is small, as the test relies on the subjective experience of the experimenter rather than any objective measurements. The test has difficulty detecting small concentrations of some elements, while too strong a result may be produced for certain others, which tends to mask fainter colors. Results: Although the flame test only gives qualitative information, not quantitative data about the proportion of elements in the sample, quantitative data can be obtained by the related techniques of flame photometry or flame emission spectroscopy. Flame atomic absorption spectroscopy instruments, made by e.g. PerkinElmer or Shimadzu, can be operated in emission mode according to the instrument manuals. Common elements: Some common elements and their corresponding colors are listed in the table below. Gold, silver, platinum, palladium, and a number of other elements do not produce a characteristic flame color, although some may produce sparks (as do metallic titanium and iron); salts of beryllium and gold reportedly deposit pure metal on cooling.
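The color table itself is not reproduced in this extract; a few well-known examples (standard qualitative-analysis values, added here for illustration):

Lithium — crimson red
Sodium — intense yellow
Potassium — lilac (violet)
Calcium — brick red / orange-red
Strontium — crimson red
Barium — pale green
Copper(II), non-halide — green to blue-green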
**CXCR4** CXCR4: C-X-C chemokine receptor type 4 (CXCR-4), also known as fusin or CD184 (cluster of differentiation 184), is a protein that in humans is encoded by the CXCR4 gene. The protein is a CXC chemokine receptor. Function: CXCR-4 is an alpha-chemokine receptor specific for stromal-derived factor-1 (SDF-1, also called CXCL12), a molecule endowed with potent chemotactic activity for lymphocytes. CXCR4 is one of several chemokine co-receptors that HIV can use to infect CD4+ T cells. HIV isolates that use CXCR4 are traditionally known as T-cell tropic isolates. Typically, these viruses are found late in infection. It is unclear whether the emergence of CXCR4-using HIV is a consequence or a cause of immunodeficiency. CXCR4 is upregulated during the implantation window in natural and hormone replacement therapy cycles in the endometrium, producing, in the presence of a human blastocyst, a surface polarization of the CXCR4 receptors, suggesting that this receptor is implicated in the adhesion phase of human implantation. CXCR4's ligand SDF-1 is known to be important in hematopoietic stem cell homing to the bone marrow and in hematopoietic stem cell quiescence. It has also been shown that CXCR4 signalling regulates the expression of CD20 on B cells. Until recently, SDF-1 and CXCR4 were believed to be a relatively monogamous ligand-receptor pair (other chemokines are promiscuous, tending to use several different chemokine receptors). Recent evidence demonstrates that ubiquitin is also a natural ligand of CXCR4. Ubiquitin is a small (76-amino acid) protein highly conserved among eukaryotic cells. It is best known for its intracellular role in targeting ubiquitylated proteins for degradation via the ubiquitin proteasome system. Evidence in numerous animal models suggests ubiquitin is an anti-inflammatory immune modulator and an endogenous opponent of proinflammatory damage-associated molecular pattern molecules. It is speculated that this interaction may be through CXCR4-mediated signalling pathways. MIF is an additional ligand of CXCR4. CXCR4 is present in newly generated neurons during embryogenesis and adult life, where it plays a role in neuronal guidance. The levels of the receptor decrease as neurons mature. CXCR4 mutant mice have an aberrant neuronal distribution. This has been implicated in disorders such as epilepsy. CXCR4 dimerization is dynamic and increases with concentration. Clinical significance: Drugs that block the CXCR4 receptor appear to be capable of "mobilizing" hematopoietic stem cells into the bloodstream as peripheral blood stem cells. Peripheral blood stem cell mobilization is very important in hematopoietic stem cell transplantation (as a recent alternative to transplantation of surgically harvested bone marrow) and is currently performed using drugs such as G-CSF. G-CSF is a growth factor for neutrophils (a common type of white blood cells), and may act by increasing the activity of neutrophil-derived proteases such as neutrophil elastase in the bone marrow, leading to proteolytic degradation of SDF-1. Plerixafor (AMD3100) is a drug, approved for routine clinical use, which directly blocks the CXCR4 receptor. It is a very efficient inducer of hematopoietic stem cell mobilization in animal and human studies.
In a small human clinical trial to evaluate the safety and efficacy of fucoidan ingestion (brown seaweed extract), 3 g daily of 75% w/w oral fucoidan for 12 days increased the proportion of CD34+CXCR4+ cells from 45 to 90% and increased serum SDF-1 levels, which could be useful in CD34+ cell homing/mobilization via the SDF-1/CXCR4 axis. CXCR4 has been associated with WHIM syndrome. WHIM-like mutations in CXCR4 were recently identified in patients with Waldenström's macroglobulinemia, a B-cell malignancy. The presence of CXCR4 WHIM mutations has been associated with clinical resistance to ibrutinib in patients with Waldenström's macroglobulinemia. While CXCR4's expression is low or absent in many healthy tissues, it has been demonstrated to be expressed in over 23 types of cancer, including breast cancer, ovarian cancer, melanoma, and prostate cancer. Expression of this receptor in cancer cells has been linked to metastasis to tissues containing a high concentration of CXCL12, such as the lungs, liver and bone marrow. However, in breast cancer, where SDF1/CXCL12 is also expressed by the cancer cells themselves along with CXCR4, CXCL12 expression is positively correlated with disease-free (metastasis-free) survival. CXCL12-(over-)expressing cancers might not sense the CXCL12 gradient released from the metastasis target tissues, since the receptor, CXCR4, is saturated with the ligand produced in an autocrine manner. Another explanation of this observation is provided by a study that shows the ability of CXCL12- (and CCL2-) producing tumors to entrain neutrophils that inhibit seeding of tumor cells in the lung. Drug response: Chronic exposure to THC has been shown to increase T lymphocyte CXCR4 expression on both CD4+ and CD8+ T lymphocytes in rhesus macaques. It has been shown that BCR signalling inhibitors also affect the CXCR4 pathway and thus CD20 expression. Interactions: CXCR4 has been shown to interact with USP14.
**RAD54B** RAD54B: DNA repair and recombination protein RAD54B is a protein that in humans is encoded by the RAD54B gene. The protein encoded by this gene belongs to the DEAD-like helicase superfamily. It shares similarity with Saccharomyces cerevisiae RAD54 and RDH54, both of which are involved in homologous recombination and repair of DNA. This protein binds to double-stranded DNA, and displays ATPase activity in the presence of DNA. This gene is highly expressed in testis and spleen, which suggests active roles in meiotic and mitotic recombination. Homozygous mutations of this gene were observed in primary lymphoma and colon cancer. Interactions: RAD54B has been shown to interact with RAD51. Cancer: The RAD54B gene is somatically mutated or deleted in numerous types of cancer, including colorectal cancer (~3.3%), breast cancer (~3.4%), and lung cancer (~2.6%). In North America, these three cancers alone account for about 20,500 individuals diagnosed annually with RAD54B-defective cancer. In a pre-clinical study, colon cancer cells defective in RAD54B were found to be selectively killed by inhibitors of the DNA repair protein PARP1. Inhibitors of PARP1 likely impede alternative DNA repair responses that might otherwise compensate for loss of the RAD54B pathway in cancer cells. Thus RAD54B-deficient cancer cells treated with a PARP1 inhibitor are apparently more vulnerable to killing by naturally occurring DNA damage than non-cancerous cells without a RAD54B defect (see the article Synthetic lethality).
**Methylotroph** Methylotroph: Methylotrophs are a diverse group of microorganisms that can use reduced one-carbon compounds, such as methanol or methane, as the carbon source for their growth, as well as multi-carbon compounds that contain no carbon-carbon bonds, such as dimethyl ether and dimethylamine. This group of microorganisms also includes those capable of assimilating reduced one-carbon compounds by way of carbon dioxide using the ribulose bisphosphate pathway. These organisms should not be confused with methanogens, which, on the contrary, produce methane as a by-product from various one-carbon compounds such as carbon dioxide. Methylotroph: Some methylotrophs can degrade the greenhouse gas methane, in which case they are called methanotrophs. The abundance, purity, and low price of methanol compared to commonly used sugars make methylotrophs competent organisms for production of amino acids, vitamins, recombinant proteins, single-cell proteins, co-enzymes and cytochromes. Metabolism: The key intermediate in methylotrophic metabolism is formaldehyde, which can be diverted to either assimilatory or dissimilatory pathways. Methylotrophs produce formaldehyde through oxidation of methanol and/or methane. Methane oxidation requires the enzyme methane monooxygenase (MMO). Methylotrophs with this enzyme are given the name methanotrophs. The oxidation of methane (or methanol) can be assimilatory or dissimilatory in nature. If dissimilatory, the formaldehyde intermediate is oxidized completely to CO2 to produce reductant and energy. If assimilatory, the formaldehyde intermediate is used to synthesize a three-carbon (C3) compound for the production of biomass. Many methylotrophs use multi-carbon compounds for anabolism, thus limiting their use of formaldehyde to dissimilatory processes; however, methanotrophs are generally limited to C1 metabolism. Metabolism: Catabolism: Methylotrophs use the electron transport chain to conserve energy produced from the oxidation of C1 compounds. An additional activation step is required in methanotrophic metabolism to allow degradation of chemically stable methane. This oxidation to methanol is catalyzed by MMO, which incorporates one oxygen atom from O2 into methane and reduces the other oxygen atom to water, requiring two equivalents of reducing power. Methanol is then oxidized to formaldehyde through the action of methanol dehydrogenase (MDH) in bacteria, or a non-specific alcohol oxidase in yeast. Electrons from methanol oxidation are passed to a membrane-associated quinone of the electron transport chain to produce ATP. In dissimilatory processes, formaldehyde is completely oxidized to CO2 and excreted. Formaldehyde is oxidized to formate via the action of formaldehyde dehydrogenase (FALDH), which provides electrons directly to a membrane-associated quinone of the electron transport chain, usually cytochrome b or c. In the case of NAD+-associated dehydrogenases, NADH is produced. Finally, formate is oxidized to CO2 by cytoplasmic or membrane-bound formate dehydrogenase (FDH), producing NADH and CO2; the overall route is summarized below. Anabolism: The main metabolic challenge for methylotrophs is the assimilation of single carbon units into biomass. Through de novo synthesis, methylotrophs must form carbon-carbon bonds between one-carbon (C1) molecules. This is an energy-intensive process, which facultative methylotrophs avoid by using a range of larger organic compounds. However, obligate methylotrophs must assimilate C1 molecules.
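The dissimilatory (catabolic) route just described can be summarized as a chain of oxidations. A sketch of the stoichiometry as given above; the MMO step is shown with NADH as the reductant (the "two equivalents of reducing power"), which is an assumption appropriate for soluble MMO:

\[
\mathrm{CH_4} \xrightarrow{\ \mathrm{MMO}\ } \mathrm{CH_3OH} \xrightarrow{\ \mathrm{MDH}\ } \mathrm{HCHO} \xrightarrow{\ \mathrm{FALDH}\ } \mathrm{HCOOH} \xrightarrow{\ \mathrm{FDH}\ } \mathrm{CO_2}
\]

\[
\mathrm{CH_4} + \mathrm{O_2} + \mathrm{NADH} + \mathrm{H^+} \longrightarrow \mathrm{CH_3OH} + \mathrm{H_2O} + \mathrm{NAD^+}
\]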
There are four distinct assimilation pathways, with the common theme of generating one C3 molecule. Bacteria use three of these pathways, while fungi use one. All four pathways incorporate 3 C1 molecules into multi-carbon intermediates, then cleave one intermediate into a new C3 molecule. The remaining intermediates are rearranged to regenerate the original multi-carbon intermediates. Metabolism: Bacteria: Each species of methylotrophic bacteria has a single dominant assimilation pathway. The three characterized pathways for carbon assimilation are the ribulose monophosphate (RuMP) and serine pathways of formaldehyde assimilation, as well as the ribulose bisphosphate (RuBP) pathway of CO2 assimilation. Metabolism: Ribulose bisphosphate (RuBP) cycle: Unlike the other assimilatory pathways, bacteria using the RuBP pathway derive all of their organic carbon from CO2 assimilation. This pathway was first elucidated in photosynthetic autotrophs and is better known as the Calvin cycle. Shortly thereafter, methylotrophic bacteria that could grow on reduced C1 compounds were found using this pathway. First, 3 molecules of ribulose 5-phosphate are phosphorylated to ribulose 1,5-bisphosphate (RuBP). The enzyme ribulose bisphosphate carboxylase (RuBisCO) carboxylates these RuBP molecules, which produces 6 molecules of 3-phosphoglycerate (PGA). The enzyme phosphoglycerate kinase phosphorylates PGA into 1,3-diphosphoglycerate (DPGA). Reduction of 6 DPGA by the enzyme glyceraldehyde phosphate dehydrogenase generates 6 molecules of the C3 compound glyceraldehyde 3-phosphate (GAP). One GAP molecule is diverted towards biomass while the other 5 molecules regenerate the 3 molecules of ribulose 5-phosphate. Metabolism: Ribulose monophosphate (RuMP) cycle: A new pathway was suspected when RuBisCO was not found in the methanotroph Methylomonas methanica. Through radio-labelling experiments, it was shown that M. methanica used the ribulose monophosphate (RuMP) pathway. This has led researchers to propose that the RuMP cycle may have preceded the RuBP cycle. Like the RuBP cycle, this cycle begins with 3 molecules of ribulose 5-phosphate. However, instead of phosphorylating ribulose 5-phosphate, 3 molecules of formaldehyde form a C-C bond through an aldol condensation, producing 3 C6 molecules of 3-hexulose 6-phosphate (hexulose phosphate). One of these molecules of hexulose phosphate is converted into GAP and either pyruvate or dihydroxyacetone phosphate (DHAP). The pyruvate or DHAP is used towards biomass, while the other 2 hexulose phosphate molecules and the molecule of GAP are used to regenerate the 3 molecules of ribulose 5-phosphate. Metabolism: Serine cycle: Unlike the other assimilatory pathways, the serine cycle uses carboxylic acids and amino acids as intermediates instead of carbohydrates. First, 2 molecules of formaldehyde are added to 2 molecules of the amino acid glycine. This produces two molecules of the amino acid serine, the key intermediate of this pathway. These serine molecules eventually produce 2 molecules of 2-phosphoglycerate, with one C3 molecule going towards biomass and the other being used to regenerate glycine. Notably, the regeneration of glycine requires a molecule of CO2 as well; the serine pathway therefore also differs from the other 3 pathways by its requirement of both formaldehyde and CO2. Yeasts: Methylotrophic yeast metabolism differs from that of bacteria primarily in the enzymes used and the carbon assimilation pathway.
Unlike bacteria, which use bacterial MDH, methylotrophic yeasts oxidize methanol in their peroxisomes with a non-specific alcohol oxidase. This produces formaldehyde as well as hydrogen peroxide. Compartmentalization of this reaction in peroxisomes likely sequesters the hydrogen peroxide produced. Catalase is produced in the peroxisomes to deal with this harmful by-product. Metabolism: Dihydroxyacetone (DHA) cycle: The dihydroxyacetone (DHA) pathway, also known as the xylulose monophosphate (XuMP) pathway, is found exclusively in yeast. This pathway assimilates 3 molecules of formaldehyde into 1 molecule of DHAP, using 3 molecules of xylulose 5-phosphate as the key intermediate. DHA synthase acts as a transferase (transketolase) to transfer part of xylulose 5-phosphate to DHA. Then these 3 molecules of DHA are phosphorylated to DHAP by triokinase. Like the other cycles, 3 C3 molecules are produced, with 1 molecule being directed for use as cell material. The other 2 molecules are used to regenerate xylulose 5-phosphate. Environmental Implications: As key players in the carbon cycle, methylotrophs work to reduce global warming primarily through the uptake of methane and other greenhouse gases. In aqueous environments, methanogenic bacteria produce 40-50% of the world's methane. Symbiosis between methanogens and methanotrophic bacteria greatly decreases the amount of methane released into the atmosphere. This symbiosis is also important in the marine environment. Marine bacteria are very important to food webs and biogeochemical cycles, particularly in coastal surface waters but also in other key ecosystems such as hydrothermal vents. There is evidence of widespread and diverse groups of methylotrophs in the ocean that have potential to significantly impact marine and estuarine ecosystems. One-carbon compounds used as a carbon and energy source by methylotrophs are found throughout the ocean. These compounds include methane, methanol, methylated amines, methyl halides, and methylated sulfur compounds, such as dimethylsulfide (DMS) and dimethylsulfoxide (DMSO). Some of these compounds are produced by phytoplankton and some come from the atmosphere. Studies incorporating a wider range of one-carbon substrates have found increasing diversity of methylotrophs, suggesting that the diversity of this bacterial group has not yet been fully explored. Because these compounds are volatile and impact the climate and atmosphere, research on the interaction of these bacteria with these one-carbon compounds can also help understanding of air-sea fluxes of these compounds, which impact climate predictions. For example, it is uncertain whether the ocean acts as a net source or sink of atmospheric methanol, but a diverse set of methylotrophs use methanol as their main energy source. In some regions, methylotrophs have been found to be a net sink of methanol, while in others a product of methylotroph activity, methylamine, has been found to be emitted from the ocean and to form aerosols. The net direction of these fluxes depends on the utilization by methylotrophs. Environmental Implications: Studies have found that methylotrophic capacity varies with the productivity of a system, so the impacts of methylotrophy are likely seasonal. Because some of the one-carbon compounds used by methylotrophs, such as methanol and TMAO, are produced by phytoplankton, their availability will vary temporally and seasonally depending on phytoplankton blooms, weather events, and other ecosystem inputs.
This means that methylotrophic metabolism is expected to follow similar dynamics, which will then impact biogeochemical cycles and carbon fluxes. Impacts of methylotrophs were also found in deep-sea hydrothermal vents. Methylotrophs, along with sulfur oxidizers and iron oxidizers, expressed key proteins associated with carbon fixation. These types of studies will contribute to further understanding of deep-sea carbon cycling and the connectivity between deep-ocean and surface carbon cycling. The expansion of omics technologies has accelerated research on the diversity of methylotrophs, their abundance and activity in a variety of environmental niches, and their interspecies interactions. Further research must be done on these bacteria and the overall effect of bacterial drawdown and transformation of one-carbon compounds in the ocean. Current evidence points to a potentially substantial role for methylotrophs in the ocean, in the cycling of carbon but also potentially in the global nitrogen, sulfur and phosphorus cycles, as well as in the air-sea flux of carbon compounds, which could have global climate impacts. The use of methylotrophs in the agricultural sector is another way in which they can potentially impact the environment. Traditional chemical fertilizers supply nutrients not readily available from soil but can have some negative environmental impacts and are costly to produce. Methylotrophs have high potential as alternative biofertilizers and bioinoculants due to their ability to form mutualistic relationships with several plant species. Methylotrophs provide plants with nutrients such as soluble phosphorus and fixed nitrogen and also play a role in the uptake of said nutrients. Additionally, they can help plants respond to environmental stressors through the production of phytohormones. Methylotrophic growth also inhibits the growth of harmful plant pathogens and induces systemic resistance. Methylotrophic biofertilizers, used either alone or together with chemical fertilizers, have been shown to increase both crop yield and quality without loss of nutrients.
**Location intelligence** Location intelligence: In business intelligence, location intelligence (LI), or spatial intelligence, is the process of deriving meaningful insight from geospatial data relationships to solve a particular problem. It involves layering multiple data sets spatially and/or chronologically, for easy reference on a map, and its applications span industries, categories and organizations. Location intelligence: Maps have been used to represent information throughout the ages, but what might be referenced as the first example of true location 'intelligence' was in London in 1854, when John Snow was able to debunk theories about the spread of cholera by overlaying a map of the area with the location of water pumps, narrowing the source to a single water pump. This layering of information over a map was able to identify relationships between different sets of geospatial data. Location intelligence: Location or geographical information system (GIS) tools enable spatial experts to collect, store, analyze and visualize data. Location intelligence experts can use a variety of spatial and business analytical tools to measure optimal locations for operating a business or providing a service. Location intelligence experts begin with defining the business ecosystem, which has many interconnected economic influences. Such economic influences include but are not limited to culture, lifestyle, labor, healthcare, cost of living, crime, economic climate and education. Further definitions: The term "location intelligence" is often used to describe the people, data and technology employed to geographically "map" information. These mapping applications, like Polaris Intelligence, can transform large amounts of data linked to location (e.g. POIs, demographics, geofences) into color-coded visual representations (heat maps and thematic maps of variables of interest) that make it easy to see trends and generate meaningful intelligence. The creation of location intelligence is directed by domain knowledge, formal frameworks, and a focus on decision support. Location cuts across everything, i.e. devices, platforms, software and apps, and is one of the most important ingredients for understanding context in sync with social data, mobile data, user data and sensor data. Further definitions: Location intelligence is also used to describe the integration of a geographical component into business intelligence processes and tools, often incorporating spatial database and spatial OLAP tools. Further definitions: In 2012, Wayne Gearey from the real estate industry (JLL) offered the first applied course on location intelligence at the University of Texas at Dallas, in which he defined location intelligence as the process for selecting the optimal location that will support workplace success and address a variety of business and financial objectives. Pitney Bowes MapInfo Corporation describes location intelligence as follows: "Spatial information, commonly known as "Location", relates to involving, or having the nature of where. Spatial is not constrained to a geographic location; however most common business uses of spatial information deal with how spatial information is tied to a location on the earth. Merriam-Webster® defines Intelligence as "The ability to learn or understand, or the ability to apply knowledge to manipulate one's environment."
Combining these terms alludes to how you achieve an understanding of the spatial aspect of information and apply it to achieve a significant competitive advantage." Esri's definition is as follows: "Location intelligence is achieved via visualization and analysis of data. By adding layers of geographic data—such as demographics, traffic, and weather—to a smart map or dashboard, organizations can use intelligence tools to identify where an event has taken place, understand why it is happening, and gain insight into what caused it." The Yankee Group, in its white paper "Location Intelligence in Retail Banking", defines it as "...a business management term that refers to spatial data visualization, contextualization and analytical capabilities applied to solve a business problem." Commercial applications: Location intelligence is used by a broad range of industries to improve overall business results. Applications include:
Communications and telecommunications: network planning and design, boundary identification, identifying new customer markets.
Financial services: optimizing branch locations, market analysis, share-of-wallet and cross-sell activities, mergers & acquisitions, industry sector analysis, risk management.
Government: census updates, law enforcement crime analysis, emergency response, environmental and land management, electoral redistricting, tax jurisdiction assignment, urban planning.
Healthcare: site selection, market segmentation, network analysis, growth assessments, spread of disease.
Higher education: student recruitment, alumni and donor tracking, campus mapping.
Hotels and restaurants: customer profile analysis, site selection, target marketing, expansion planning.
Insurance: address validation, underwriting and risk management, claims management, marketing and sales analysis, market penetration studies.
K-12 education: school site selection, enrollment planning, school attendance area modification (boundary changes), school consolidation, district consolidation, student achievement plotting.
Media: target market identification, subscriber demographics, media planning.
Real estate: site reports, comprehensive site analysis, demographic analysis, growth pattern analysis, retail modeling, presentation-quality maps.
Retail: site selection, maximizing per-store sales, identifying under-performing stores, market analysis, retail leakage and supply gap analysis.
Transportation: transport planning, route monitoring.
**Fluperolone** Fluperolone: Fluperolone is a synthetic glucocorticoid corticosteroid which was never marketed. An acetate ester of fluperolone, fluperolone acetate, in contrast, has been marketed.
**Angustific acid** Angustific acid: Angustific acid A and angustific acid B are antiviral compounds isolated from Kadsura angustifolia. They are triterpenoids.
**Sony Digital Paper DPTS1** Sony Digital Paper DPTS1: The Sony Digital Paper DPT-S1 or Sony DPT-S1 is a discontinued 13.3-inch (approaching A4) E Ink e-reader by Sony, aimed at professional business users. The DPT-S1 can display only PDF files at their native size and lacks the ability to display any other e-book formats. The reader has been criticized as too expensive for most consumers, with an initial price of US$1,100, falling to $700 by the end of its run. The reader is lightweight and has low power consumption, a Wi-Fi connection, and a stylus for making notes or highlights. Sony announced the discontinuation of the DPT-S1 in late 2016. Its successors are the Sony DPT-RP1 (released 2017, 13.3-inch screen) and Sony DPT-CP1 (released 2018, 10.3-inch screen), both within the Sony DPT line of products. Specifications: The 13.3-inch E Ink Mobius electronic paper screen has a resolution of 1200 × 1600 pixels, with a capacitive touchscreen. The device has a 1 GHz ARM Cortex-A8 microprocessor, built on an SoC made by Freescale. The amount of RAM was never published. Its internal storage, 4 GB, is shared between system and user; however, it is possible to expand the storage with a microSD card. It weighs 358 g (0.8 pounds) and is 6.8 mm thick. Novel to the DPT-S1 was its ability to interface with specific corporate networks using added encryption, which allowed legal professionals to integrate it into their workflow: handwritten annotations could be embedded into PDFs and propagated when the files were copied.
**Keyboard bass** Keyboard bass: Keyboard bass (shortened to keybass and sometimes referred to as a synth bass) is the use of a smaller, low-pitched keyboard with fewer notes than a regular keyboard or pedal keyboard to substitute for the deep notes of a bass guitar or double bass in music. History: Early keyboard bass The pipe organ is the forefather of keyboard bass instruments. Its bass pedal keyboard was developed in the 13th century, and the keys for the hands are also capable of playing very low pipe tones. History: 1960s The earliest dedicated keyboard bass instrument was the 1960 Fender Rhodes piano bass. The piano bass was essentially an electric piano containing the same pitch range as the most widely used notes on an electric bass (or the double bass), which could be used to perform bass lines. It could be placed on top of a piano or organ, or mounted on a stand. The Doors' Ray Manzarek, for example, placed his Fender Rhodes piano bass on top of his Vox Continental or Gibson G-101 organ to play bass lines. Around the same time, Hohner of Germany introduced a purely electronic bass keyboard, the Basset, which had a two-octave keyboard and rudimentary controls allowing a choice of tuba or string bass sounds. The Basset was in due course replaced by the Bass 2 and, in the mid-1970s, the Bass 3. All three were transistorized; the Basset was among the earliest solid-state electronic instruments. Similar instruments were produced in Japan under the "Raven" and "Rheem Kee Bass" (sic) names. History: 1970s and 1980s In the 1970s, a variant form of keyboard bass, bass pedals, became popular. Bass pedals are pedal keyboards operated by musicians using their feet. Guitarists and bassists such as Genesis' Mike Rutherford, Yes' Chris Squire, John Paul Jones of Led Zeppelin (during acoustic sets), Geddy Lee of Rush, Sting of The Police, and Atomic Rooster organist Vincent Crane used bass pedals to play bass lines. Stevie Wonder pioneered the use of synthesizer keyboard bass, notably on "Boogie on Reggae Woman". Funk, R&B, G-rap and hip hop musicians such as George Clinton & Parliament, Funkadelic, Roger & Zapp, Dr. Dre, E-40, EPMD, and Kashif used synth bass. During these decades the keyboard bass in its original form was still in use by some bands such as the B-52's, who used a Korg SB-100 "Synth-Bass". History: 1990s-2020s Since the 1990s, MIDI keyboard controllers, often the smaller 25-note models, have been used in some groups to play bass lines with virtual instruments, mostly synths and samplers. Keyboard bass instruments are a common alternative to bass guitars in rap, modern R&B, pop and electronic dance music genres such as house music. MIDI keyboards are used by bedroom producers and studio musicians alike, thanks to their affordability, portability, and the fact that they can be used to control multiple virtual instruments rather than simply bass. As well, bassists from bands such as No Doubt sometimes perform bass lines on 25-note MIDI keyboards. Jack White of The White Stripes uses a vintage Rhodes piano bass live, particularly on performances of "My Doorbell". During Lady Gaga's The Monster Ball Tour, keyboardist and bassist Lanar "Kern" Brantley played synth bass on the Roland GAIA and Roland V-Synth.
**Morning care** Morning care: Morning care is a hygiene routine provided by personal support workers, nursing assistants, nurses, and other workers for patients and residents of care facilities each morning. The care routine typically includes washing the face, combing hair, shaving, putting on cosmetics, toileting, getting dressed, and similar activities. Nurses may also check the patients' temperature, check medical equipment, replenish IV bags, change dressings, or do other daily or semi-daily tasks at this time. Morning care: Most morning care duties are basic activities of daily living. Different people require different levels of support for morning care, depending on their performance status. Some people may be able to complete morning care with little or no support from healthcare workers, while others may require the worker to perform all the tasks completely. Patient preferences may dictate aspects of morning care, such as the order of tasks done, the type of soap used, or whether bathing is a morning or afternoon activity. Some basic housekeeping, such as changing bedsheets, may be done at the same time.
**User environment management** User environment management: User environment management (also abbreviated to UEM) is the management of a computer user's experience within their desktop environment. The user environment: In a modern workplace, an organisation grants each user access to an operating system and the applications required for their role, applying corporate policy to ensure the user has appropriate access levels. This typically includes items such as the file systems, printers, and applications they should and should not have access to. Within this framework, each user has a preferred way of operating, and they make several changes to enable them to work most efficiently. Common user changes include email signatures, language settings, and the environment's "look and feel." The combination of corporate policy and user preference is described as the "user personality." Users develop an attachment to their PCs, although “their attachment is not to the device itself, but to the way in which they do their jobs today”. Managing user personalization is a complex task that involves considering numerous factors and variables. As the desktop computing environment has evolved, the methods for delivering the desktop and applications have also expanded, adding to the complexity of managing the user's personality. History: The personal computer was originally introduced to the workplace as a standalone device. Over time, these devices were networked, and network-attached storage was introduced to enable the sharing of resources and information. Several advancements and new technologies from software companies have extended and improved this model. Citrix offers the ability to store the desktop environment centrally and publish it to remote users. Microsoft acquired some of this technology to develop its terminal server solution. History: Virtualization is a technology that evolved from mainframe computers, moved into x86 architecture servers, and now enables virtualized desktop environments. This advancement is largely driven by VMware and Citrix. A further technology, application streaming, offers an alternative method for delivering applications to users. Softricity was a leader in this technology before being purchased by Microsoft, which brought the solution to market as Microsoft Application Virtualization. The current environment: An IT administrator now has a variety of options when delivering a desktop and applications to a user: personal computers, virtual desktops, terminal servers, application virtualization, and application streaming. Typically a combination of these is used to address all the requirements and constraints placed on an organization. Market analysts suggest that these technologies are complementary and will exist in tandem, rather than the newest technologies dominating. One growing segment of the market is the increase in provisioned and virtualized desktops, which can be managed centrally and address many of the limitations associated with distributed desktop computing. Several analysts have stated that the future of PC desktops will be heterogeneous (i.e. differing Windows desktop delivery methods will coexist). Key to this is how system administrators design the user experience: the closer the basic look and feel is to how users have previously interacted with their desktops, the more accepting of the new technology they will be. 
“The nature of pooling or sharing desktops means that each user does not always log on to the same virtual desktop each time they log in and as a result, organizations need to properly plan for this version of musical chairs. It is absolutely critical when using pooled virtual machines (sometimes called dynamic pools) that you have a method of deploying applications and settings to users that is fast, robust and automated.” It is from this growing requirement that user environment management developed. Critical to the user environment is ensuring that user profiles are portable, in one manner or another, from one session to the next. User environment management: User environment management is a software solution which enables corporate policy and user preference data, the ‘user personality’, to be abstracted from the delivered operating system and applications and centrally managed. This personality can then be associated with the variety of delivery mechanisms an organization uses ‘on-demand’, enabling dynamic personalization of provisioned desktops and easier migration of users to newer technologies such as virtual desktops. User environment management can be applied to all Citrix, VMware and Microsoft delivery methods, including virtual, provisioned, streamed and published environments. User environment management: Due to the extensive nature of user environment management, there are a number of solutions in the market which address only part of the problem, such as Group Policy Preferences and DesktopAuthority (Dell) for policy management. Tranxition Software offers migration of profile settings with data and profile policy control. A few companies offer a comprehensive user environment management solution, i.e. the ability both to control profile settings and to offer a portable user experience. These solutions work across multiple Windows workspaces, including physical, virtual, and cloud environments. The advantage of cross-environment support is a single user environment that works across all of an organization's Windows workspaces. This approach eases moving users from one workspace to the next, in some cases even across Windows OS version changes. Vendors that support this comprehensive view include Unisys, Ivanti and Liquidware. Both VMware and Citrix also have basic tools, but these are mainly available within their own workspace offerings and are generally siloed for use within those specific workspaces only.
**Most Improved Player** Most Improved Player: In some sports, a Most Improved Player award is given to players who have improved the most over the year. Such awards include:
Greek Basket League Most Improved Player
Israeli Basketball Premier League Most Improved Player
NBA Most Improved Player Award
NBA G League Most Improved Player Award
PWI Most Improved Wrestler of the Year
WNBA Most Improved Player Award
Other uses: "Most Improved Player" (The Good Place), an episode of the American comedy television series The Good Place
**White roll** White roll: White roll is the white line that borders the top of the upper lip. It is an adnexal mass of specialized glands and fat. The white roll occurs naturally in nearly everyone, although it may be less white and less visible in dark-skinned individuals. A well-defined white roll indicates youthfulness and is considered aesthetically pleasing. With age, the white roll often becomes less defined. This is due to sun damage, changes in facial fat, and decreasing collagen in the area right above the lip. The white roll can also be accentuated using injectable fillers. Sometimes the white roll is absent or damaged, due to trauma or cleft lip, in which case it can be reconstructed with plastic surgery. However, it is difficult to achieve satisfactory results using surgery, because malalignment of even a millimeter is noticeable and unattractive. It is therefore sometimes tattooed instead.
**World Summit on Evolution** World Summit on Evolution: The World Summit on Evolution is an evolutionary biology meeting hosted at the Galapagos Islands by Universidad San Francisco de Quito (USFQ), an Ecuadorian private liberal arts university. Its focus is on recent research and new advances in our understanding of evolution and the diversity of life. The summit hosts more than 150 participants presenting invited and submitted talks, poster sessions and scientific-outreach talks. It has been called "The Woodstock of Evolution", bringing together experts and students from widely different areas of evolutionary biology that rarely meet. It has attracted researchers working on evolution from over 15 different countries, including Peter and Rosemary Grant, Niles Eldredge, Antonio Lazcano, Douglas Futuyma, Lynn Margulis, Ada Yonath, William H. Calvin and Daniel Dennett. Objectives: Join experts from different branches of evolutionary biology to discuss the impact of recent discoveries and to integrate them into the basic concepts of evolution. Objectives: Through a series of presentations and discussions the participants ask the big questions: What is the evidence for the theory of evolution? How has each field, with its respective approaches, deepened our understanding? And where are the future horizons? Bringing together international experts and students for debate helps to answer these questions and may lead to decisions that will shape the direction of evolutionary science in the foreseeable future. Objectives: Remind the scientific community of the importance of the Galapagos Islands and the discoveries produced thanks to their particular natural resources. Present the islands as a living and dynamic laboratory of evolution. Promote Ecuador, its research community and its academic institutions. Subjects: Origin and diversification of life: how the first living cells originated, clues provided by RNA, and new paradigms in prokaryotic and early eukaryotic evolution. Evolution of plants and animals: the origin of animals and fungi, the evolution of tropical plants, and social behavior in animals. Human evolution: how the study of the human diet and digestive system explains human evolution, and Darwin's ideas applied to human evolution. Evolution and infectious agents: the origin and evolution of AIDS, and how bacteria acquire pathogenic features. Creationism and intelligent design: containing the spread of creationism and intelligent design, while improving the public's understanding of evolution throughout the Americas and elsewhere. Location: The World Summit on Evolution takes place at the Galapagos Academic Institute for the Arts and Sciences (GAIAS), part of the Universidad San Francisco de Quito. GAIAS was established in 2002 in the capital town of the Galapagos province, Puerto Baquerizo Moreno, on the island of San Cristobal, one of the largest of the Galapagos Islands. Its 4.5-hectare campus is the only one located on the historically significant Galapagos Islands. GAIAS was founded with the aim of becoming a first-rate institution for international students and researchers. The Galapagos Islands inspired Charles Darwin to formulate his evolutionary theory, which revolutionized human understanding of the diversity of species, including our own. His ideas were presented in On the Origin of Species. The Galapagos Islands remain important for the scientific studies that have been carried out since his visit. 
Past and future summits:
9–12 June 2005 - First World Summit on Evolution
22–26 August 2009 - Second World Summit on Evolution. The second summit was launched to celebrate Charles Darwin's 200th birthday and included the first meeting of the Sociedad Iberoamericana de Biología Evolutiva (SIBE), which led to the establishment of academic and intellectual bonds between Spanish- and Portuguese-speaking specialists in evolutionary biology.
1–5 June 2013 - Third World Summit on Evolution. The summit adopted the theme ‘Why Does Evolution Matter’. Two hundred attendees met to listen to 12 keynote speakers, 20 oral presentations and 31 posters by faculty, postdocs, and graduate and undergraduate students. The summit encompassed five sessions: evolution and society, pre-cellular evolution and the RNA world, behavior and environment, genome, and microbes and diseases. USFQ and GAIAS officially launched the Lynn Margulis Center for Evolutionary Biology and showcased the Galapagos Science Center, developed in partnership with the University of North Carolina at Chapel Hill.
**Kvikk case** Kvikk case: The Kvikk case concerns a variety of birth defects in the children of men who served on HNoMS Kvikk. An investigation found that the ship's electronic systems were not to blame; no other cause has been established. Suspicion arose when two former officers met by chance in the orthopedic department at Haukeland University Hospital in Bergen, and it was later revealed that, in all, eleven children had been born with birth defects between 1987 and 1994. In the end, the case involved 17 affected children, and it was also discovered that the birth defects had already begun appearing in 1983. Among the claimed birth defects are clubfoot, thumb hypoplasia, hip dysplasia, congenital heart defects, structural brain damage, cataracts, and other defects. Some of the children have also had developmental delays and behavioral problems. Kvikk was the only vessel in the Norwegian navy used for electronic warfare (EW), and one widely discussed theory was that the powerful electromagnetic radiation from the boat's radio communication masts and radar led to several of those who served aboard having children with clubfoot, and in some cases stillborn children. The idea was that the powerful radiation possibly damaged genetic material in the sperm of the men who worked aboard. A total of 17 out of 85 children of officers who served on Kvikk were born with birth defects. Another theory about the cause of the deformities notes that Kvikk was the only vessel used to experiment with different types of camouflage paint. 1987-94: Kvikk used for electronic warfare: In 1987 Kvikk was equipped for electronic warfare, partly by receiving an extra radar transmitter astern, rated at 750 watts, which was then used very actively to create radar jamming during exercises and tests. Kvikk went out of service as an EW vessel in December 1994. 1987-94: Kvikk used for electronic warfare: Risk of radiation injuries The Norwegian Armed Forces knew before the case came up that very powerful radiation had been measured on Kvikk. Heavy radiation exceeding NATO's radiation hazard limits had also been measured on other naval vessels, such as HNoMS Narvik and HNoMS Tjeld, as well as at land installations. Kvikk, however, is a much smaller vessel than those, so the distance between the radiation sources and the crew was smaller. It was also not uncommon for the crew on Kvikk to stay around the mast during noise transmissions; they were thus directly exposed to strong electromagnetic radiation from the mast, and it has been speculated that this may have affected their genetic material. 1987-94: Kvikk used for electronic warfare: As an additional risk factor, the radar on Kvikk had a stabilizer made to keep the radar beam level with the horizon so that it would also work in choppy seas, but this mechanism had a weakness that made the radar tip over many times, sending radiation directly down onto the deck. The crew were therefore in many cases directly exposed to radiation at very close range while on deck. In addition, four fathers who worked as electronics service technicians at the Haakonsvern workshop had children with chromosome abnormalities. 
Their work included correcting errors and deficiencies in telecommunications equipment on Kvikk and other naval vessels, and during testing they were therefore exposed to high levels of electromagnetic radiation, mostly from radar and communications equipment. 1987-94: Kvikk used for electronic warfare: Research from the 1960s and 1970s showed that non-ionizing radiation in the microwave range can trigger genotoxic mechanisms in the germ cells of animals, which are then relayed to the offspring, and practical examples have shown that radiation has led to infertility in humans, but little recent research supports this. The Norwegian Navy, the Norwegian Radiation Protection Authority (NRPA) and a research group at the Norwegian University of Science and Technology have concluded that there is no demonstrable link between the non-ionizing radiation on board and the children being born with birth defects. The parents in the case have stated that they do not trust the research. 1996: The case comes forward: The case became known when Haakonsvern Navy Base in 1996 issued a press release stating that an unusually high number of children of employees at the naval base had been born with birth defects. Verdens Gang (VG) was the first newspaper to take up the case, and it also uncovered the relationship between the children born with birth defects and their fathers' service on Kvikk. VG immediately published a headline that read "Crown Prince Haakon Magnus of Norway is one of the many who are currently aboard the MTB vessels and may be exposed to radiation." The navy quickly decided to investigate whether there was a correlation between the electromagnetic radiation aboard Kvikk and the genes of those who had served there, but by then Kvikk had already been broken up - only nine days after the original press release from Haakonsvern. 1998: Initial research by the navy: After three officers notified the navy inspectorate in February 1996 that they had had children with birth defects after serving on Kvikk, the navy the same year issued an internal message that it did not want to hear any more about such abnormalities from naval staff. The navy nonetheless began to investigate whether the damage in this and similar cases could be due to radar and radio radiation. In 1998 it concluded that this was not the case and that there was no relationship between serving on the ship and having children with birth defects, so there was no basis for liability. In its report, the navy went far toward rejecting any possible link between the fathers' service on Kvikk and their children's birth defects, suggesting statistical clustering as an explanation.
**Model year** Model year: The model year (sometimes abbreviated "MY") is a method of describing the version of a product which has been produced over multiple years. The model year may or may not be the same as the calendar year in which the product was manufactured. Automobiles: United States and Canada Automobiles in the United States and Canada are identified and regulated by model year, whereas other markets use the production date (month/year) to identify specific vehicles, and model codes in place of the "year" (model year) in the North American make-model-year identifier. Automobiles: In technical documents generated within the auto industry and its regulating agencies such as the U.S. National Highway Traffic Safety Administration and United States Environmental Protection Agency and Transport Canada and Environment Canada, the letters "MY" often precede the year (as in "MY2019" or "MY93"). Even without this prefix, however, in the North American context it is usually the model year rather than the vehicle's calendar year of production that is being referred to. Automobiles: The new model year typically begins in August to September of the preceding calendar year, though it can begin considerably earlier, as with the fourth-generation 2022 Acura MDX, which started production in January 2021. The traditional timing was partly due to the advertising of a new model being coordinated with the launch of the new television season in late September: television networks depended heavily on advertising from automakers, and the car companies wanted to launch their new models at a high-profile time of year. Imported cars use the model year convention in the U.S. and Canada, even when this system is not used for the same cars when sold in other countries. Automobiles: The concept of yearly styling updates (a practice adopted from the fashion industry) was introduced to General Motors' range of cars by Alfred P. Sloan in the 1920s. This was an early form of planned obsolescence in the car industry, where yearly styling changes meant consumers could easily discern a car's newness, or lack of it. Other major changes to the model range usually coincided with the launch of the new model year; for example, the 1928 model year of the Ford Model A began production in October 1927, and the 1955 model year of the Ford Thunderbird began production in September 1954. The model year coincided with the calendar year until the mid-1930s, when then-president Franklin D. Roosevelt signed an executive order to release vehicle model years in the fall of the preceding year in order to standardize employment in the automotive industry. The practice of beginning production of next year's model before the end of the year has become a long-standing tradition in America. For purposes such as VINs and emissions control, regulations allow cars of a given model year to be sold starting on January 2 of the previous calendar year. For example, a 2019 model year vehicle can legally go on sale starting January 2, 2018. This has resulted in a few cars of the following model year being introduced in advertisements during the NFL Super Bowl in February. A notable example of an early model year launch was the Ford Mustang, introduced as an early 1965 model (informally referred to as "1964½") in April 1964 at the World's Fair, several months before the usual start of the 1965 model year in August 1964. Automobiles: For recreational vehicles, the U.S. 
Federal Trade Commission allows a manufacturer to use a model year up to two years before the date that the vehicle was manufactured. Automobiles: Other countries In other countries, it is more common to identify specific vehicles by their year and month of production, and cars of a particular type by their generation, using terms such as "Mark III" or the manufacturer's code for that kind of car (such as "BL" being the code for a Mazda 3 built between November 2008 and June 2013). Automobiles: In Europe, model years are used less as a descriptor, partly because since the 1980s many vehicles have been introduced at the Geneva Motor Show in March, the Frankfurt Motor Show in September or the Paris Motor Show in September. New models have increasingly been launched in June or July. As with the rest of Europe, the motor industry in the United Kingdom did not regularly make use of model years in the way common in the US, since cars were not as regularly updated or altered. Some exceptions existed; for instance, in the 1950s and 1960s the Rootes Group deliberately copied American practice and performed annual small alterations to its key models such as the Hillman Minx and the Humber Super Snipe. However, these were still not identified by model years but by series numbers, sometimes with alphabetical designations (such as the Minx Series IIIA, IIIB and IIIC) to distinguish what were mostly cosmetic updates rather than mechanical or structural improvements. As in America, the British motor industry did generally announce new models (or updates to existing ones) in September. However, this was the norm long before it became practice in the US and did not originate with the television season. Instead, it began because the long-established practice in the manufacturing industries of the English Midlands, especially Coventry and Birmingham, where the British car industry developed out of the established bicycle and machine tool trades, was to close factories and give workers a two-week holiday in August or September. This was used as a chance to renew tooling in the factory and was an ideal time to introduce new products, which would begin production when the workers returned and the factory restarted. Thus the working year in the car industry ran from September to September. New or improved models would be announced in the summer and would be displayed at the British Motor Show, held in October. Here they would be seen by the wider industry and buying public for the first time, just as the cars produced in the previous weeks began reaching the dealerships ready for sale. Therefore, car models intended for sale during, say, 1960 would be announced and displayed in the third quarter of 1959, with sales beginning before the end of the year, and any improvements intended for 1961 would be announced in September 1960 and displayed at the 1960 Motor Show in October. Automobiles: This convention was not absolute; for instance, the original Vauxhall Victor was officially announced in February 1957 with sales beginning later the same month, and subsequent additions and updates to the Victor range were all introduced in February - notably, Vauxhall's factory was outside the traditional centre of the industry, being in Luton, and so did not follow the common working calendar. 
Being owned by General Motors, Vauxhall also generally made minor changes to its cars year by year, even referring to 'model years' in some of its literature, but these did not have the same official weight or significance to buyers as they did in America. During the 1960s British car makers began giving journalists access to upcoming models earlier in the year to get announcements out ahead of their rivals and clear of the busy September period. This developed into increasingly lavish and sophisticated media and marketing events happening earlier and earlier in the year. Changes to working practices, the inroads made into the British market by car makers from other countries and the decline in market share of British firms finally led to the tradition of new models being introduced in September being abandoned, although the British Motor Show continued to be held in October. Automobiles: VIN encoding The standardized format of the vehicle identification number (VIN) used in the United States and Canada includes the model year of the vehicle as the 10th character. The actual date that the vehicle was produced is not part of the VIN, but it is required to be shown on the vehicle safety certification label. Other products: In addition to automobiles, various other products are also often identified by model year.
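To make the VIN convention described above concrete, here is a minimal, hedged sketch of decoding the 10th character. The 30-character code cycle below (letters with I, O, Q, U and Z skipped, then the digits 1-9; 0 is never used) is the standard North American scheme, but the example VIN is made up, and a real decoder would need extra information to disambiguate the 30-year cycle (commonly, whether the 7th VIN position is a letter or a digit).

```python
# 30-character model-year code cycle used in the 10th VIN position;
# the letters I, O, Q, U, Z and the digit 0 are never used.
YEAR_CODES = "ABCDEFGHJKLMNPRSTVWXY123456789"

def model_years(vin, first=1980, last=2030):
    """Return all candidate model years encoded by the 10th VIN character.

    The code repeats every 30 years, so more than one candidate may be
    returned for a wide enough [first, last] window.
    """
    code = vin[9].upper()
    offset = YEAR_CODES.index(code)  # raises ValueError for invalid codes
    return [y for y in range(first, last + 1) if (y - 1980) % 30 == offset]

# Hypothetical VIN (not a real vehicle): '2' in position 10 -> [2002].
print(model_years("1FAFP40452F100000"))
```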
**Hydrodynamic reception** Hydrodynamic reception: In animal physiology, hydrodynamic reception refers to the ability of some animals to sense water movements generated by biotic (conspecifics, predators, or prey) or abiotic sources. This form of mechanoreception is useful for orientation, hunting, predator avoidance, and schooling. Frequent encounters with conditions of low visibility can prevent vision from being a reliable information source for navigation and sensing objects or organisms in the environment; sensing water movements is one resolution to this problem. This sense is common in aquatic animals, the most cited example being the lateral line system, the array of hydrodynamic receptors found in fish and aquatic amphibians. Arthropods (including crayfish and lobsters) and some mammals (including pinnipeds and manatees) can use sensory hairs to detect water movements. Systems that detect hydrodynamic stimuli are also used for sensing other stimuli. For example, sensory hairs are also used for the tactile sense, detecting objects and organisms up close rather than via water disturbances from afar. Relative to other sensory systems, our knowledge of hydrodynamic sensing is rather limited. This could be because humans do not have hydrodynamic receptors, which makes it difficult for us to appreciate the importance of such a system. Generating and measuring a complex hydrodynamic stimulus can also be difficult. Overview of hydrodynamic stimuli: Definition “Hydrodynamic” refers to the motion of water against an object that causes a force to be exerted upon it. A hydrodynamic stimulus is therefore a detectable disturbance caused by objects moving in a fluid. The geometry of the disturbance depends on properties of the object (shape, size, velocity) and also on properties of the fluid, such as viscosity and velocity. These water movements are not only relevant to the animals that can detect them; they are also the subject of a branch of physics, fluid dynamics, with importance in areas such as meteorology, engineering, and astronomy. Overview of hydrodynamic stimuli: A frequent hydrodynamic stimulus is a wake, consisting of the eddies and vortices that an organism leaves behind as it swims, affected by the animal's size, swimming pattern, and speed. Although the strength of a wake decreases over time as it moves away from its source, the vortex structure of a goldfish's wake can persist for about thirty seconds, and increased water velocity can be detected several minutes after production. Overview of hydrodynamic stimuli: Uses of hydrodynamic stimuli Since movement of an object through water inevitably creates movement of the water itself, and this resulting water motion persists and travels, the detection of hydrodynamic stimuli is useful for sensing conspecifics, predators, and prey. Many studies start from the question of how an aquatic organism can capture prey despite darkness or the apparent lack of visual or other sensory systems, and find that sensing the hydrodynamic stimuli left by prey is probably responsible. As for detection of conspecifics, harbor seal pups will enter the water with their mother, but eventually ascend to obtain oxygen, and then dive again to rejoin the mother. Observations suggest that the tracking of water movements produced by the mother and other pups allows this rejoining to occur. 
Through these trips and the following of conspecifics, pups might learn routes to avoid predators and good places to find food, showing the possible significance of hydrodynamic detection to these seals. Overview of hydrodynamic stimuli: Hydrodynamic stimuli also function in exploration of the environment. For example, blind cave fish create disturbances in the water and use distortions of this self-generated field to complete spatial tasks, such as avoiding surrounding obstacles. Overview of hydrodynamic stimuli: Visualizing hydrodynamic stimuli Since water movements are difficult for humans to observe, researchers can visualize the hydrodynamic stimuli that animals detect via particle image velocimetry (PIV). This technique tracks fluid motions by particles put into the water that can be more easily imaged compared to the water itself. The direction and speed of water movement can be defined quantitatively. This technique assumes that the particles will follow the flow of the water. Invertebrates: To detect water movement, many invertebrates have sensory cells with cilia that project from the body surface and make direct contact with surrounding water. Typically, the cilia include one kinocilium surrounded by a group of shorter stereocilia. Deflection of stereocilia toward the kinocilium by movement of water around the animal stimulates some sensory cells and inhibits others. Water velocity is thus related to the amount of deflection of certain stereocilia, and sensory cells send information about this deflection to the brain via firing rates of afferent nerves. Cephalopods, including the squid Loligo vulgaris and cuttlefish Sepia officinalis, have ciliated sensory cells arranged in lines at different locations on the body. Although these cephalopods have only kinocilia and no stereocilia, the sensory cells and their arrangement are analogous to the hair cells and lateral line in vertebrates, indicating convergent evolution. Invertebrates: Arthropods are different from other invertebrates as they use surface receptors in the form of mechanosensory setae to function in both touch and hydrodynamic sensing. These receptors can also be deflected by solid objects or water flow. They are located on different body regions depending on the animal, such as on the tail for crayfish and lobsters. Neural excitation occurs when setae are moved in one direction, while inhibition occurs with movement in the opposite direction. Fish: Fish and some aquatic amphibians detect hydrodynamic stimuli via their lateral line organs. This system consists of an array of sensors called neuromasts arranged along the length of the fish's body. Neuromasts can be free-standing (superficial neuromasts) or within fluid-filled canals (canal neuromasts). The sensory cells within neuromasts are polarized hair cells within a gelatinous cupula. The cupula, and the stereocilia within, are moved a certain amount depending on the movement of the surrounding water. Afferent nerve fibers are excited or inhibited depending on whether the hair cells they arise from are deflected in the preferred or opposite direction. Lateral line receptors form somatotopic maps within the brain informing the fish of amplitude and direction of flow at different points along the body. These maps are located in the medial octavolateral nucleus (MON) of the medulla and in higher areas such as the torus semicircularis. 
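The particle image velocimetry technique described in the overview above boils down, at its core, to cross-correlating small image windows between two frames: the offset of the correlation peak estimates the mean particle displacement, and hence the water velocity, in that window. The following is only a hedged, minimal sketch of that idea on synthetic data, not any published PIV pipeline; window tiling, sub-pixel peak refinement, and outlier filtering are all omitted.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Estimate the mean particle displacement between two image windows.

    The peak of the circular cross-correlation of the mean-removed
    windows gives the most likely integer-pixel shift from A to B.
    """
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed displacements.
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dy, dx

# Synthetic demo: sparse random "particles", second frame shifted by (3, 5).
rng = np.random.default_rng(0)
frame_a = (rng.random((64, 64)) > 0.97).astype(float)
frame_b = np.roll(frame_a, (3, 5), axis=(0, 1))
print(window_displacement(frame_a, frame_b))  # expected: (3, 5)
```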
Mammals: Detection of hydrodynamic stimuli in mammals typically occurs through use of hairs (vibrissae) or “push-rod” mechanoreceptors, as in platypuses. When hairs are used, they are often in the form of whiskers and contain a follicle-sinus complex (F-SC), making them different from the hairs with which humans are most familiar. Mammals: Pinnipeds Pinnipeds, including sea lions and seals, use their mystacial vibrissae (whiskers) for active touch, including size and shape discrimination, and texture discrimination in seals. When used for touch, these vibrissae are moved to the forward position and kept still while the head moves, thus moving the vibrissae on the surface of an object. This is in contrast to rodents, which move the whiskers themselves to explore objects. More recently, research has been done to see if pinnipeds can use these same whiskers to detect hydrodynamic stimuli in addition to tactile stimuli. While this ability has been verified behaviorally, the specific neural circuits involved have not yet been determined. Mammals: Seals Research on the ability of pinnipeds to detect hydrodynamic stimuli was first done on harbor seals (Phoca vitulina). It had been unclear how seals could find food in dark waters. It was found that a harbor seal that could use only its whiskers for sensory information (due to being blindfolded and wearing headphones) could respond to weak hydrodynamic stimuli produced by an oscillating sphere within the range of frequencies that fish would generate. As with active touch, whiskers are not moved during sensing, but are projected forward and remain in that position. Mammals: To find whether seals could actually follow hydrodynamic stimuli using their vibrissae rather than just detect them, a blindfolded harbor seal with headphones can be released into a tank in which a toy submarine has left a hydrodynamic trail. After protracting its vibrissae to the most forward position and making lateral head movements, the seal can locate and follow a trail of 40 meters even when sharp turns are added to the trail. When whisker movements are prevented with a mask covering the muzzle, the seal cannot locate and follow the trail, indicating use of information obtained by the whiskers. Mammals: Trails produced by live animals are more complex than those produced by a toy submarine, so the ability of seals to follow trails produced by other seals can also be tested. A seal is capable of following the center of such a trail, either following the direct path of the trail or using an undulatory pattern involving crossing the trail repeatedly. This latter pattern might allow the seal to track a fish swimming in a zigzagging motion, or assist with tracking weak trails by comparing the surrounding water with the prospective trail. Other studies have shown that the harbor seal can distinguish between the hydrodynamic trails left by paddles of different sizes and shapes, a finding in agreement with what the lateral line in goldfish is capable of doing. Discrimination between different fish species might have adaptive value if it allows seals to capture those with the highest energy content. Seals can also detect a hydrodynamic trail produced by a fin-like paddle up to 35 seconds old with an accuracy rate greater than chance. Accuracy diminishes as the trail becomes older. Mammals: Sea lions The California sea lion (Zalophus californianus) has mystacial vibrissae that differ from those of seals, but it can detect and follow a trail made by a small toy submarine. 
Sea lions use an undulatory pattern of tracking similar to that of seals, but do not perform as well with increased delay before they are allowed to swim and locate the trail. Mammals: Species differences in vibrissae These studies raise the question of how detection of hydrodynamic stimuli in these animals is possible given the movement of the vibrissae due to water flow during swimming. Whiskers vibrate with a certain frequency based on swim speed and properties of the whisker. The water disturbance caused by this vibrissal movement should, because of its proximity, overshadow any stimulus produced by a distant fish. For seals, one proposal is that they might sense changes in the baseline frequency of vibration to detect hydrodynamic stimuli produced by another source. However, a more recent study shows that the morphology of the seal's vibrissae actually prevents vortices produced by the whiskers from creating excessive water disturbances. In harbor seals, the structure of the vibrissal shaft is undulated (wavy) and flattened. This specialization is also found in most true seals. In contrast, the whiskers of the California sea lion are circular or elliptical in cross-section and are smooth. Mammals: When seals swim with their vibrissae projected forward, the flattened, undulated structure prevents the vibrissae from bending backward or vibrating to produce water disturbances. Thus, the seal suppresses noise from the whiskers through a unique whisker structure. Sea lions, by contrast, appear to monitor modulations of the characteristic frequency of the whiskers to obtain information about hydrodynamic stimuli. This different mechanism might be responsible for the sea lion's worse performance in tracking an aging hydrodynamic trail: since the whiskers of the sea lion must recover their characteristic frequency after it is altered by a hydrodynamic stimulus, the whiskers' temporal resolution could be reduced. Mammals: Manatees Similar to the vibrissae of seals and sea lions, the hairs of Florida manatees are also used for detecting tactile and hydrodynamic stimuli. However, manatees are unique in that these tactile hairs are located over the whole post-cranial body in addition to the face. These hairs have different densities at different locations of the body, with higher density on the dorsal side and density decreasing ventrally. The effect of this distribution on spatial resolution is unknown. This system, distributed over the whole body, could localize water movements analogously to a lateral line. Mammals: Research is currently being done to test detection of hydrodynamic stimuli in manatees. While the anatomy of the follicle-sinus complexes of manatees has been well studied, there is much to learn about the neural circuits involved, if such detection is possible, and the way in which the hairs encode information about the strength and location of a stimulus via timing differences in firing. Mammals: Platypuses In contrast to the sinus hairs that other mammals use to detect water movements, evidence indicates that platypuses use specialized mechanoreceptors on the bill called “push-rods”. These look like small domes on the surface; they are the ends of rods that are attached at the base but can otherwise move freely. Mammals: Using these push-rods in combination with electroreceptors, also on the bill, allows the platypus to find prey with its eyes closed. 
While researchers initially believed that the push-rods could only function when something is in contact with the bill (implicating their use for a tactile sense), it is now believed that they can also be used at a distance to detect hydrodynamic stimuli. The information from push-rods and electroreceptors combines in the somatosensory cortex in a structure with stripes similar to the ocular dominance columns for vision. In the third layer of this structure, sensory inputs from push-rods and electroreceptors may combine so that the platypus can use the time difference between the arrival of each type of signal at the bill (with hydrodynamic stimuli arriving after electrical signals) to determine the location of prey. That is, different cortical neurons could encode the delay between detection of electrical and hydrodynamic stimuli. However, a specific neural mechanism for this is not yet known. Mammals: Other mammals The family Talpidae includes the moles, shrew moles, and desmans. Most members of this family have Eimer's organs, touch-sensitive structures on the snout. The desmans are semi-aquatic and have small sensory hairs that have been compared to the neuromasts of the lateral line. These hairs are termed “microvibrissae” due to their small size, ranging from 100 to 200 micrometers. They are located with the Eimer's organs on the snout and might sense water movements. Soricidae, a sister family of Talpidae, contains the American water shrew. This animal can obtain prey during the night despite the darkness. To discover how this is possible, a study controlling for the use of electroreception, sonar, and echolocation showed that this water shrew is capable of detecting water disturbances made by potential prey. This species probably uses its vibrissae for hydrodynamic (and tactile) sensing, based on behavioral observations and the whiskers' large cortical representation. Mammals: While not well studied, the rakali (Australian water rat) may also be able to detect water movements with its vibrissae, as these have a large amount of innervation, though further behavioral studies are needed to confirm this. While tying the presence of whiskers to hydrodynamic reception has allowed the list of mammals with this special sense to grow, more research still needs to be done on the specific neural circuits involved.
**Arsenate reductase (cytochrome c)** Arsenate reductase (cytochrome c): Arsenate reductase (cytochrome c) (EC 1.20.2.1, arsenite oxidase) is an enzyme with systematic name arsenite:cytochrome c oxidoreductase. This enzyme catalyses the following chemical reaction: arsenite + H₂O + 2 oxidized cytochrome c ⇌ arsenate + 2 reduced cytochrome c + 2 H⁺. Arsenate reductase is a molybdoprotein isolated from alpha-proteobacteria that contains iron-sulfur clusters.
**Opaque set** Opaque set: In discrete geometry, an opaque set is a system of curves or other set in the plane that blocks all lines of sight across a polygon, circle, or other shape. Opaque sets have also been called barriers, beam detectors, opaque covers, or (in cases where they have the form of a forest of line segments or other curves) opaque forests. Opaque sets were introduced by Stefan Mazurkiewicz in 1916, and the problem of minimizing their total length was posed by Frederick Bagemihl in 1959. For instance, visibility through a unit square can be blocked by its four boundary edges, with length 4, but a shorter opaque forest blocks visibility across the square with length approximately 2.639. It is unproven whether this is the shortest possible opaque set for the square, and for most other shapes this problem similarly remains unsolved. The shortest opaque set for any bounded convex set in the plane has length at most the perimeter of the set, and at least half the perimeter. For the square, a slightly stronger lower bound than half the perimeter is known. Another convex set whose opaque sets are commonly studied is the unit circle, for which the shortest connected opaque set has length 2+π. Without the assumption of connectivity, the shortest opaque set for the circle has length at least π and at most approximately 4.7998. Several published algorithms claiming to find the shortest opaque set for a convex polygon were later shown to be incorrect. Nevertheless, it is possible to find an opaque set with a guaranteed approximation ratio in linear time, or to compute the subset of the plane whose visibility is blocked by a given system of line segments in polynomial time. Definitions: Every set S in the plane blocks the visibility through a superset of S, its coverage C. C consists of the points for which all lines through the point intersect S. If a given set K forms a subset of the coverage of S, then S is said to be an opaque set, barrier, beam detector, or opaque cover for K. If, additionally, S has a special form, consisting of finitely many line segments whose union forms a forest, it is called an opaque forest. There are many possible opaque sets for any given set K, including K itself, and many possible opaque forests. For opaque forests, or more generally for systems of rectifiable curves, their length can be measured in the standard way. For more general point sets, one-dimensional Hausdorff measure can be used, and agrees with the standard length in the cases of line segments and rectifiable curves. Most research on this problem assumes that the given set K is a convex set. When it is not convex but merely a connected set, it can be replaced by its convex hull without changing its opaque sets. Some variants of the problem restrict the opaque set to lie entirely inside or entirely outside K. In this case, it is called an interior barrier or an exterior barrier, respectively. When this is not specified, the barrier is assumed to have no constraints on its location. Versions of the problem in which the opaque set must be connected or form a single curve have also been considered. It is not known whether every convex set K has a shortest opaque set, or whether instead the lengths of its opaque sets might approach an infimum without ever reaching it. Every opaque set for K can be approximated arbitrarily closely in length by an opaque forest, and it has been conjectured that every convex polygon has an opaque forest as its shortest opaque set, but this has not been proven. 
Bounds: When the region to be covered is a convex set, the length of its shortest opaque set must be at least half its perimeter and at most its perimeter. For some regions, additional improvements to these bounds can be made. Bounds: Upper bound If K is a bounded convex set to be covered, then its boundary ∂K forms an opaque set whose length is the perimeter |∂K|. Therefore, the shortest possible length of an opaque set is at most the perimeter. For sets K that are strictly convex, meaning that there are no line segments on the boundary, and for interior barriers, this bound is tight. Every point on the boundary must be contained in the opaque set, because every boundary point has a tangent line through it that cannot be blocked by any other points. The same reasoning shows that for interior barriers of convex polygons, all vertices must be included. Therefore, the minimum Steiner tree of the vertices is the shortest connected opaque set, and the traveling salesperson path of the vertices is the shortest single-curve opaque set. However, for interior barriers of non-polygonal convex sets that are not strictly convex, or for barriers that are not required to be connected, other opaque sets may be shorter; for instance, it is always possible to omit the longest line segment of the boundary. In these cases, the perimeter or Steiner tree length provides an upper bound on the length of an opaque set. Bounds: Lower bound There are several proofs that an opaque set for any convex set K must have total length at least |∂K|/2, half the perimeter. One of the simplest involves the Crofton formula, according to which the length of any curve is proportional to its expected number of intersection points with a random line from an appropriate probability distribution on lines. It is convenient to simplify the problem by approximating K by a strictly convex superset, which can be chosen to have perimeter arbitrarily close to the original set. Then, except for the tangent lines to K (which form a vanishing fraction of all lines), a line that intersects K crosses its boundary twice. Therefore, if a random line intersects K with probability p, the expected number of boundary crossings is 2p. But each line that intersects K intersects its opaque set, so the expected number of intersections with the opaque set is at least p, which is at least half that for K. By the Crofton formula, the lengths of the boundary and barrier have the same proportion as these expected numbers. This lower bound of |∂K|/2 on the length of an opaque set cannot be improved to have a larger constant factor than 1/2, because there exist examples of convex sets that have opaque sets whose length is close to this lower bound. In particular, for very long thin rectangles, one long side and two short sides form a barrier, with total length that can be made arbitrarily close to half the perimeter. Therefore, among lower bounds that consider only the perimeter of the coverage region, the bound of |∂K|/2 is best possible. However, getting closer to |∂K|/2 in this way involves considering a sequence of shapes rather than just a single shape, because for any convex set K that is not a triangle, there exists a δ > 0 such that all opaque sets have length at least |∂K|/2 + δ.
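The same halving argument can be written compactly via the Cauchy–Crofton formula. The display below is only a sketch of the step just described; the normalization constant c and the rigid-motion-invariant line measure μ are the standard ones, left implicit in the prose above.

```latex
% Cauchy–Crofton: length as an integral over lines, where n_S(l) counts
% the points of intersection of the line l with the set S.
\operatorname{len}(S) = c \int n_S(\ell)\, d\mu(\ell)

% Almost every line meeting a strictly convex K crosses its boundary twice,
% while every line meeting K must meet an opaque set B at least once, so
\operatorname{len}(\partial K) = 2c\,\mu\{\ell : \ell \cap K \neq \emptyset\},
\qquad
\operatorname{len}(B) \ge c\,\mu\{\ell : \ell \cap K \neq \emptyset\}
                     = \tfrac{1}{2}\operatorname{len}(\partial K).
```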
Specific shapes: For a triangle, as for any convex polygon, the shortest connected opaque set is its minimum Steiner tree. In the case of a triangle, this tree can be described explicitly: if the widest angle of the triangle is 2π/3 (120°) or more, it uses the two shortest edges of the triangle, and otherwise it consists of three line segments from the vertices to the Fermat point of the triangle. However, without assuming connectivity, the optimality of the Steiner tree has not been demonstrated. Izumi has proven a small improvement to the perimeter-halving lower bound for the equilateral triangle. For a unit square, the perimeter is 4, the perimeter minus the longest edge is 3, and the length of the minimum Steiner tree is 1 + √3 ≈ 2.732. However, a shorter, disconnected opaque forest is known, with length approximately 2.639. It consists of the minimum Steiner tree of three of the square's vertices, together with a line segment connecting the fourth vertex to the center. Ross Honsberger credits its discovery to Maurice Poirier, a Canadian schoolteacher, but it was already described in 1962 and 1964 by Jones. It is known to be optimal among forests with only two components, and has been conjectured to be the best possible more generally, but this remains unproven. The perimeter-halving lower bound of 2 for the square, already proven by Jones, can be improved slightly, to 2.00002, for any barrier that consists of at most countably many rectifiable curves, improving similar previous bounds that constrained the barrier to be placed only near the given square. The case of the unit circle was described in a 1995 Scientific American column by Ian Stewart, with a solution of length 2 + π, optimal for a single curve or connected barrier but not for an opaque forest with multiple curves. Vance Faber and Jan Mycielski credit this single-curve solution to Menachem Magidor in 1974. By 1980, E. Makai had already provided a better three-component solution, with length approximately 4.7998, rediscovered by John Day in a follow-up to Stewart's column. The unknown length of the optimal solution has been called the beam detection constant.
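The square values quoted above can be checked directly. Using the standard length formula for the Steiner tree of a triangle with side lengths a, b, c and area Δ (valid when no angle reaches 120°), applied to the right isosceles triangle formed by three corners of the unit square, and then adding the corner-to-center segment of length √2/2:

```latex
\begin{align*}
L_{\mathrm{Steiner}} &= \sqrt{\tfrac12\left(a^{2}+b^{2}+c^{2}\right) + 2\sqrt{3}\,\Delta}
  = \sqrt{\tfrac12(1+1+2) + 2\sqrt{3}\cdot\tfrac12}
  = \sqrt{2+\sqrt{3}} = \frac{\sqrt{6}+\sqrt{2}}{2}, \\
L_{\mathrm{total}} &= \frac{\sqrt{6}+\sqrt{2}}{2} + \frac{\sqrt{2}}{2}
  = \sqrt{2} + \frac{\sqrt{6}}{2} \approx 2.6390.
\end{align*}
```

The Steiner tree of all four corners, by contrast, has length 1 + √3 ≈ 2.732, which is why dropping the connectivity requirement pays off here.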
Algorithms: Two published algorithms claim to generate the optimal opaque forest for arbitrary polygons, based on the idea that the optimal solution has a special structure: a Steiner tree for one triangle in a triangulation of the polygon, and a segment in each remaining triangle from one vertex to the opposite side, of length equal to the height of the triangle. This structure matches the conjectured structure of the optimal solution for a square. Although the optimal triangulation for a solution of this form is not part of the input to these algorithms, it can be found by the algorithms in polynomial time using dynamic programming. However, these algorithms do not correctly solve the problem for all polygons, because some polygons have shorter solutions with a different structure than the ones they find. In particular, for a long thin rectangle, the minimum Steiner tree of all four vertices is shorter than the triangulation-based solution that these algorithms find. No known algorithm is guaranteed to find a correct solution to the problem, regardless of its running time. Despite this setback, the shortest single-curve barrier of a convex polygon, which is the traveling salesperson path of its vertices, can be computed exactly in polynomial time by a dynamic programming algorithm, in models of computation for which sums of radicals can be computed exactly. There has also been more successful study of approximation algorithms for the problem, and of determining the coverage of a given barrier. Algorithms: Approximation: By the general bounds for opaque forest length in terms of perimeter, the perimeter of a convex set approximates its shortest opaque forest to within a factor of two in length. In two papers, Dumitrescu, Jiang, Pach, and Tóth provide several linear-time approximation algorithms for the shortest opaque set for convex polygons, with better approximation ratios than two: For general opaque sets, they provide an algorithm whose approximation ratio is strictly below two. The general idea of the algorithm is to construct a "bow and arrow"-like barrier from the minimum-perimeter bounding box of the input, consisting of a polygonal chain stretched around the polygon from one corner of the bounding box to the opposite corner, together with a line segment connecting a third corner of the bounding box to the diagonal of the box. Algorithms: For opaque sets consisting of a single arc, they likewise provide an algorithm whose approximation ratio is better than two. The resulting barrier is defined by a supporting line of the input shape. The input projects perpendicularly onto an interval of this line, and the barrier connects the two endpoints of this interval by a U-shaped curve stretched tight around the input, like the optimal connected barrier for a circle. The algorithm uses rotating calipers to find the supporting line for which the length of the resulting barrier is minimized. Algorithms: For connected opaque sets, they provide an algorithm whose approximation ratio is at most 1.5716. This method combines the single-arc barrier with special treatment for shapes that are close to an equilateral triangle, for which the Steiner tree of the triangle is a shorter connected barrier. Algorithms: For interior barriers, they provide an algorithm whose approximation ratio is at most 1.7168. The idea is to use a generalization suggested by Shermer of the structure of the incorrect earlier algorithms (a Steiner tree on a subset of the points, together with height segments for a triangulation of the remaining input), with a fast approximation for the Steiner tree part of the construction. Additionally, because the shortest connected interior barrier of a convex polygon is given by the minimum Steiner tree, it has a polynomial-time approximation scheme. Algorithms: Coverage: The region covered by a given forest can be determined as follows: Find the convex hull of each connected component of the forest. Algorithms: For each vertex p of the hulls, sweep a line circularly around p, subdividing the plane into wedges within which the sweep line crosses one of the hulls and wedges within which the sweep line crosses the plane without obstruction; the union of the covered wedges forms a set Cp. Find the intersection of all of the sets Cp; this intersection is the coverage of the forest. If the input consists of n line segments forming m connected components, then each of the n sets Cp consists of at most 2m wedges. It follows that the combinatorial complexity of the coverage region, and the time to construct it, is O(m²n²) in big O notation. Although optimal in the worst case for inputs whose coverage region has combinatorial complexity matching this bound, this algorithm can be improved heuristically in practice by a preprocessing phase that merges overlapping pairs of hulls until all remaining hulls are disjoint, in time O(n log² n).
If this reduces the input to a single hull, the more expensive sweeping and intersecting algorithm need not be run: in this case the hull is the coverage region. Curve-free opaque sets: Mazurkiewicz (1916) showed that it is possible for an opaque set to avoid containing any nontrivial curves and still have finite total length. A simplified construction of Bagemihl (1959) produces an example for the unit square. The construction begins with line segments that form an opaque set with an additional property: the segments of negative slope block all lines of non-negative slope, while the segments of positive slope block all lines of non-positive slope. The initial segments with this property can be taken to be four disjoint segments along the diagonals of the square. Then, the construction repeatedly subdivides these segments while maintaining this property. At each level of the construction, each line segment is split by a small gap near its midpoint into two line segments, with slope of the same sign, that together block all lines of the opposite sign that were blocked by the original line segment. The limit set of this construction is a Cantor space that, like all intermediate stages of the construction, is an opaque set for the square. With quickly decreasing gap sizes, the construction produces a set whose Hausdorff dimension is one, and whose one-dimensional Hausdorff measure (a notion of length suitable for such sets) is finite. The distance sets of the boundary of a square, or of the four-segment shortest known opaque set for the square, both contain all distances in the interval from 0 to √2. However, by using similar fractal constructions, it is also possible to find fractal opaque sets whose distance sets omit infinitely many of the distances in this interval, or that (assuming the continuum hypothesis) form a set of measure zero. History: Opaque sets were originally studied by Stefan Mazurkiewicz in 1916. Other early works on opaque sets include the papers of H. M. Sen Gupta and N. C. Basu Mazumdar in 1955, and of Frederick Bagemihl in 1959, but these are primarily about the distance sets and topological properties of barriers rather than about minimizing their length. In a postscript to his paper, Bagemihl asked for the minimum length of an interior barrier for the square, and subsequent work has largely focused on versions of the problem involving length minimization. These length-minimization problems have been posed repeatedly, with multiple colorful formulations: digging a trench of as short a length as possible to find a straight buried telephone cable, trying to find a nearby straight road while lost in a forest, swimming to a straight shoreline while lost at sea, efficiently painting walls to render a glass house opaque, etc. History: The problem has also been generalized to sets that block all geodesics on a Riemannian manifold, or that block lines through sets in higher dimensions. In three dimensions, the corresponding question asks for a collection of surfaces of minimum total area that blocks all visibility across a solid. However, for some solids, such as a ball, it is not clear whether such a collection exists, or whether instead the area has an infimum that cannot be attained.
**Spontaneous osteonecrosis of the knee** Spontaneous osteonecrosis of the knee: Spontaneous osteonecrosis of the knee (SONK) is the result of vascular arterial insufficiency to the medial femoral condyle of the knee resulting in necrosis and destruction of bone. It is often unilateral and can be associated with a meniscal tear. Signs and symptoms: The condition is usually characterized by a sudden onset of knee pain, worse at night or during weight-bearing activities such as standing or running. Nevertheless, it can also occur during rest or without any weight-bearing. About 94% of cases affect the medial condyle of the femur, because the blood supply to the medial condyle is poorer than that to the lateral condyle. The condition may deteriorate, causing an asymmetrical walking or running pattern. Affected patients sometimes have a history of osteoporosis or osteopenia. Localised tenderness over the medial knee is the most common finding. The condition usually occurs on one side, without a previous history of trauma. The differential diagnosis of SONK includes osteoarthritis, tear of the medial meniscus, and tibial plateau fracture; SONK usually has a sudden onset of knee pain, while osteoarthritis has a progressive, gradual onset. Cause: It is more common in females over the age of 50, with possible risk factors of corticosteroid use, lupus, alcoholism, pancreatitis, sickle cell anemia, and rheumatoid arthritis. Diagnosis: In the early stages of the disease, there are no obvious X-ray findings. A radiolucent area in the epiphyseal region and flattening of the femoral condyle can be found in late stages of the disease. MRI has been proven to be both sensitive and specific for the disease. Both T1 and T2 MRI imaging show bone marrow oedema, subchondral low signal, a subchondral crescent-shaped linear focus, and focal epiphyseal contour depression. Treatment: Total knee arthroplasty (TKA) is the standard of care. However, in SONK, often just one side of the knee joint is afflicted, so unicompartmental knee arthroplasty (UKA) can be considered as an alternative that leads to a shorter recovery time. A meta-analysis concluded that UKA was "an excellent alternative to TKA" with few complications and good survivorship.
**Strangulation in domestic violence** Strangulation in domestic violence: Strangulation in the context of domestic violence is a potentially lethal form of assault. Unconsciousness may occur within seconds of strangulation and death within minutes. Strangulation can be difficult to detect and until recently was often not treated as a serious crime. However, in many jurisdictions, strangulation is now a specific criminal offense, or an aggravating factor in assault cases. Differences from choking: Although sometimes the words are used interchangeably, "strangulation" and "choking" are not the same thing. Choking is when air flow is blocked by food or a foreign object in the trachea – something that can be addressed by the Heimlich maneuver. Strangulation, by contrast, is defined by reduced air flow and/or blood flow to or from the brain via the intentional external compression of blood vessels or the airway in the neck. Notably, however, many victims of strangulation refer to the assault as "choking". Differences from choking: Both manual strangulation (i.e., gripping the throat with one's hands) and ligature strangulation (e.g., belts, scarves) have been reported in intimate partner violence cases. Epidemiology: A systematic review of 23 articles based on 11 surveys in 9 countries (N=74,785, about two-thirds of whom were women) found that 3.0% to 9.7% of women reported that they had at some time been strangled by an intimate partner. A total of 0.4% to 2.4% – with 1.0% being typical – reported that they had experienced it in the past year, and women were between 2 times and 14 times more likely to be strangled by an intimate partner than were men. The most recent national survey in the U.S. that asked about strangulation by an intimate partner asked 16,507 adults (55% of whom were women) if a partner had tried to hurt them by choking or suffocating them. A total of 9.7% responded that a partner had done so at some point in their lifetime; 0.9% reported that it had happened during the past year. The prevalence of strangulation appears to be decreasing in Canada, the only country with multiple cross-sectional surveys that measure strangulation. The first major study of surviving victims of strangulation assault found that 99% of the 300 victims in criminal cases involving "choking" were female. In 2000, a meta-analytic review of gender differences in physical aggression against a heterosexual partner concluded that "…'choke or strangle' is very clearly a male act, whether based on self- or partner reports." A similar conclusion was reached in a 2014 multi-nation review: "…women are more likely than men to report that they were strangled by an intimate partner." A series of studies conducted in Canada found the same gender discrepancy and reported that strangulation by an intimate is more common among disabled persons, cohabiting (vs. married) persons, and those in a step- (vs. biological) family. Women who had been abused by an intimate partner reported higher rates of strangulation. Strangulation is sometimes fatal. According to a large U.S. case-control study, prior strangulation is a substantial and unique predictor of attempted and completed homicide of women by a male intimate partner. The study showed that the odds of becoming an attempted homicide victim increased 7-fold and the odds of becoming a homicide victim increased 8-fold for women who had been strangled by their partner.
When over three dozen other characteristics of the victim, perpetrator, and incident were taken into account, however, strangulation was no longer a unique predictor. Strangulation is so common in battering (50% or more of battered women report having been strangled) that it does not differentiate between abuse in which the victim survives and abuse in which the victim dies. Experience of strangulation: Strangulation has been likened to drowning, and researchers at the University of Pennsylvania have likened non- or near-fatal strangulation to waterboarding, which is widely considered a form of torture. Experience of strangulation: A special issue of the Domestic Violence Report devoted to the crime of strangulation states: "Many domestic violence offenders and rapists do not strangle their partners to kill them; they strangle them to let them know they can kill them—any time they wish. Once victims know this truth, they live under the power and control of their abusers day in and day out." Outcomes: Strangulation can produce minor injuries, serious bodily injury, and death. Evidence of the assault can be difficult to detect because many victims may not have visible injuries and/or their symptoms may be nonspecific. Outcomes: Victims may have internal injuries, such as laryngo-tracheal injuries, digestive tract injuries, vascular injuries, nervous system injuries and orthopedic injuries. Clinical symptoms of these internal injuries may include neck and sore-throat pain, voice changes (hoarse or raspy voice or the inability to speak), coughing, swallowing abnormalities, and changes in mental status, consciousness and behavior. Neurological symptoms may include vision changes, dimming, blurring, decrease of peripheral vision and seeing "stars" or "flashing lights." Post-anoxic encephalopathy, psychosis, seizures, amnesia, cerebrovascular accident and progressive dementia may be indicative of neuropsychiatric effects. Signs of life-threatening or near-fatal strangulation may include sight impairment, loss of consciousness, urinary or fecal incontinence and petechiae (pinpoint hemorrhages). Even victims with seemingly minimal injuries and/or symptoms may die hours, days, or weeks later because of progressive, irreversible encephalopathy. Some visible signs of strangulation a victim may incur include injuries to the face, eyes, ears, nose, mouth, chin, neck, head, scalp, chest and shoulders: redness, scratches or abrasions, fingernail impressions in the skin, deep fingernail claw marks, ligature marks ("rope burns"), thumbprint-shaped bruises, blood-red eyes, pinpoint red spots called "petechiae", or blue fingernails. Laws: Because of involvement of the medical profession, specialized training for police and prosecutors, and ongoing research, strangulation has become a focus of policymakers and professionals working to reduce intimate partner violence and sexual assault. As of November 2014, 44 U.S. states, the District of Columbia, the federal government and two territories have some form of strangulation or impeded-breathing statute. Twenty-three states and one territory have enacted legislation making strangulation a felony. One state legislature, Utah, passed a joint resolution which made legislative findings that can help prosecutors apply existing assault statutes, with a special emphasis on non-fatal strangulation assaults. In 2013, Congress re-authorized the Violence Against Women Act and added, for the first time, strangulation and suffocation as a specific federal felony.
Improving detection and intervention: Starting in 1995, the work of Gael Strack and Casey Gwinn in San Diego has helped identify and address challenges in detecting, investigating, and prosecuting strangulation and suffocation offenses in intimate partner violence, sexual assault, elder abuse, and child abuse cases. In 2011, Strack and Gwinn created the Training Institute on Strangulation Prevention, the most comprehensive training program in the United States for the documentation, investigation, and prosecution of non- and near-fatal strangulation assaults. They have published multiple state-specific books to guide the investigation and prosecution of non- and near-fatal strangulation assaults.
**Intermodulation intercept point** Intermodulation intercept point: The intermodulation intercept point in electronics is a measure of an electrical device's linearity. When the device is driven by two sinusoidal tones, it is the theoretical power level at which the extrapolated power of the desired tone and that of the nth-order (where n is odd) intermodulation product intersect.
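For the common third-order case (n = 3), the intercept point can be estimated from a single two-tone measurement, because the fundamental output grows 1 dB per dB of input while the nth-order product grows n dB per dB, so the two extrapolated lines cross at OIPn = P_fund + (P_fund − P_IMD)/(n − 1). A minimal Python sketch of that standard calculation; the measurement values in the example are invented for illustration:

```python
def output_intercept_point(p_fund_dbm: float, p_imd_dbm: float, order: int = 3) -> float:
    """Estimate the nth-order output intercept point (OIPn) in dBm.

    With the device driven by two equal tones, the fundamental output power
    rises 1 dB per dB of input, while the nth-order intermodulation product
    rises n dB per dB.  Extrapolating both lines to their crossing gives
    OIPn = P_fund + (P_fund - P_IMD) / (n - 1).
    """
    return p_fund_dbm + (p_fund_dbm - p_imd_dbm) / (order - 1)

# Illustrative numbers: fundamental at -10 dBm per tone, third-order
# products at -70 dBm  ->  OIP3 = -10 + 60/2 = +20 dBm.
print(output_intercept_point(-10.0, -70.0))  # 20.0
```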
**Clerk of the course** Clerk of the course: A clerk of the course is an official in various types of racing. Horse racing: In horse racing, the clerk of the course is the person responsible for track management and raceday preparation at a racecourse. Important tasks of the role include: deciding whether the course is fit to race; declaring the official going on the day of racing; monitoring the going in the run-up to the race, and covering or watering the track as necessary to maintain a particular going; protecting sections of the turf from overuse; and, on National Hunt courses, preparing and managing fences. They may also work with racecourse management on optimising the racecourse's fixture list. Auto racing: In auto racing, the clerk of the course is a designated official in charge of managing various aspects of circuit operations, including communication with course marshals, dispatching safety and rescue teams, oversight of track conditions, deploying and withdrawing the safety car, and determining whether or not to suspend a race in case of dangerous conditions. Generally, the clerk of the course is directly subordinate to the race director or chief steward.
**Fujitsu Celsius** Fujitsu Celsius: The Fujitsu Celsius is a line of laptop and workstation computers manufactured by Fujitsu. The brand name has also been used for graphics accelerators. The laptops have Intel Core vPro, i5, or i7 processors, while the workstations have one or two Intel Xeon processors. History and usage: The computers are intended for applications such as computer-aided design, digital content creation, geographical information systems work, architecture, engineering, financial forecasting, flux balance analysis, scientific simulation, electronic design, and virtual reality. History and usage: Fujitsu Celsius equipment was used to stitch together thousands of individual photographic images to create large-scale 360-degree panoramic images: an 80-gigapixel image of London was published in November 2010, its creators stating that "using this excellent workstation allowed this record-breaking photo to be created a few weeks faster than would have been possible on any other available PC." The 320-gigapixel photomosaic of London published in February 2013 was prepared on a Celsius R920 in three months' time. Models: Desktops: Celsius R630; Celsius R920 – a 2012 model, able to accept up to 512 GB of RAM. Laptops: Celsius H710 and Celsius H910 – 2011 models, described as workstation laptops offering a variety of configurations (Steve Anderson, "Fujitsu Brings Out Pair Of New Laptops, The Celsius H710 And The Celsius H910", TFTS, May 23, 2011, http://nexus404.com/Blog/2011/05/23/fujitsu-brings-pair-laptops-celsius/).
**Carbonated soda treatment of phytobezoars** Carbonated soda treatment of phytobezoars: Carbonated soda treatment of phytobezoars is the use of carbonated soda to try to dissolve a phytobezoar. Bezoars consist of a solid, formed mass trapped in the gastrointestinal system, usually in the stomach; they can also form in other locations. Carbonated soda has been proposed for the treatment of gastric phytobezoars. In about 50% of cases studied, carbonated soda alone was found to be effective in gastric phytobezoar dissolution. However, this treatment can result in small bowel obstruction in a minority of cases, necessitating surgical intervention. Bezoars are one of many stomach disorders that can present with similar symptoms. Gastric phytobezoars are a form of intestinal blockage and are seen in those with poor gastric motility. The preferred treatment of bezoars includes different therapies and/or fragmentation to avoid surgery. Phytobezoars are the most common type and consist of various undigested substances including lignin, cellulose, tannins, celery, pumpkin skin, grape skins, prunes, raisins, vegetables and fruits. Phytobezoars can form after eating persimmons and pineapples; persimmon bezoars, referred to as diospyrobezoars, are more difficult to treat. Treatment: Carbonated soda may help to dissolve phytobezoars. It can be given by a naso-gastric tube in children. Carbonated soda can also be given by mouth and during endoscopy. It is effective on its own in about half of the cases; combined with endoscopic fragmentation in most of the remaining cases, the final success rate reaches up to 91.3%. In some cases, regular use of Coca-Cola resulted in no recurrence 3–15 months after the first episode. Treatment has varied widely. Coca-Cola has been administered either as a drinking beverage or as a lavage. Some patients are treated with various combinations of drink, injection and irrigation. The volume of Coca-Cola in treatment varies, along with the daily dose and duration of treatment: dosages have varied from 500 mL up to 3000 mL, and treatment periods from 24 hours to 6 weeks. When lavage is used, a double-lumen nasogastric tube or two separate tubes using 3000 mL of Coca-Cola is administered during a 12-hour period. Alternative treatments are the use of cellulase, acetylcysteine, papain, pancreatic enzymes, saline solution, 0.1 N HCl and sodium bicarbonate. The protocol for the treatment of phytobezoars with Coca-Cola, i.e., dosage and timing, has not been standardized; further investigation has been encouraged. Contraindications: Trichobezoars do not respond to treatment with Coca-Cola; instead, this type may have to be surgically removed. Persimmon diospyrobezoars are sometimes resistant to Coca-Cola and require a different treatment. This can include endoscopic fragmentation and/or surgical approaches, especially in urgent cases where the patient exhibits gastrointestinal bleeding. Adverse effects and interactions: Adverse effects have been observed with the use of papain, such as gastric ulcer, hyponatremia and oesophageal perforation. These effects have not been observed with the use of Coca-Cola. Glucose levels during the administration of Coca-Cola have not been addressed. Pharmacology and interactions: In addition to Coca-Cola, meat tenderizer has been used to dissolve bezoars of the stomach. When treatment with Coca-Cola is combined with endoscopic methods, the success of treatment approaches 90%.
The mechanism by which Coca-Cola dissolves the bezoar is based upon its low pH, its CO2 bubbles, and its sodium bicarbonate content. Pharmacology and interactions: "...patients given a continuous infusion of Coca-Cola by nasogastric tube over 12 hours showed complete resolution of bezoars. If you cannot find a can of Coke, perhaps Pepsi will do the trick, assuming it does not cause dysPEPSIa." Some clinicians have described the mode of action as based upon the acidification of the gastric contents and the release of CO2, which causes disintegration. Three and a half liters given nasogastrically over 12 hours has been found to dissolve these bezoars. Coca-Cola has a pH of 2.6, due to carbonic and phosphoric acids, which resemble gastric acid; gastric acid is believed to facilitate the digestion of fibers. In Coca-Cola, NaHCO3 has a mucolytic effect, and the CO2 bubbles help to dissolve the bezoar. Coca-Cola reduces the size and softens the make-up of the bezoar, and, combined with other treatments, enhances its dissolution. History: A phytobezoar was first successfully treated with Coca-Cola lavage in 2002.
**Lithium disilicate** Lithium disilicate: Lithium disilicate (Li2Si2O5) is a chemical compound that is a glass ceramic. It is widely used as a dental ceramic due to its strength, machinability and translucency. Use: Lithium disilicate has found applications in dentistry as a dental ceramic material for restorations such as crowns, bridges, and veneers. Lithium disilicate has an unusual microstructure that consists of many small, randomly oriented, interlocking plate-like and needle-like crystals. This structure causes cracks to be deflected, blunted, and/or branched, which prevents cracks from growing. Lithium disilicate has a biaxial flexural strength in the range of 360 MPa to 400 MPa; in comparison, for metal ceramics this is around 80 to 100 MPa, for veneered zirconia it is approximately 100 MPa, and for leucite glass ceramic it is approximately 150 to 160 MPa. It has high hardness (5.92 ± 0.18 GPa) and fracture toughness (3.3 ± 0.14 MPa·m1/2). In addition, it can be made to have an appearance that very closely resembles that of natural human teeth. Use: Lithium disilicate is also used as a non-conductive seal, enamel or feed-through insulator in nickel superalloys or stainless steel, as it has a high coefficient of thermal expansion.
**Performance portability** Performance portability: Performance portability refers to the ability of computer programs and applications to operate effectively across different platforms. Developers of performance-portable applications seek to support multiple platforms without impeding performance, and ideally while minimizing platform-specific code. It is sought after within the HPC (high-performance computing) community; however, there is no universal or agreed-upon way to measure it. There is some contention as to whether portability refers to the portability of an application or the portability of the source code. Performance can be measured in two ways: by comparing an optimized version of an application with its portable version, or by comparing an application's measured performance with a theoretical peak derived from how many FLOPs are performed and how much data is moved from main memory to the processor. The diversity of hardware makes developing software that works across a wide variety of machines increasingly important for the longevity of the application. Contentions: The term performance portability is frequently used in industry and generally refers to: "(1) the ability to run one application across multiple hardware platforms; and (2) achieving some notional level of performance on these platforms." For example, at the 2016 DOE (United States Department of Energy) Centers of Excellence Performance Portability Meeting, John Pennycook (Intel) stated: "An application is performance portable if it achieves a consistent level of performance [e.g., defined by execution time or other figure of merit, not percentage of peak FLOPS (floating-point operations per second)] across platforms relative to the best known implementation on each platform." More directly, Jeff Larkin (NVIDIA) noted that performance portability was when "The same source code will run productively on a variety of different architectures." Performance portability is a key topic of discussion within the HPC community. Collaborators from industry, academia, and DOE national laboratories meet annually at the Performance, Portability, and Productivity at HPC Forum, launched in 2016, to discuss ideas and progress toward performance portability goals on current and future HPC platforms. Relevance: Performance portability retains relevance among developers due to constantly evolving computing architectures that threaten to make applications designed for current hardware obsolete. Performance portability represents the assumption that a developer's singular codebase will continue to perform within acceptable limits on newer architectures and on a variety of current architectures on which the code hasn't yet been tested. The increasing diversity of hardware makes developing software that works across a wide variety of machines necessary for longevity and continued relevance. One prominent proponent of performance portability is the United States Department of Energy's (DOE) Exascale Computing Project (ECP). The ECP's mission of creating an exascale computing ecosystem requires a diverse array of hardware architectures, which has made performance portability an ongoing concern and something that must be prepared for in order to effectively use exascale supercomputers.
At the 2016 DOE Centers of Excellence Performance Portability Meeting, Bert Still (Lawrence Livermore National Laboratory) stated that performance portability was "a critical ongoing issue" for the ECP due to their continuing use of diverse platforms. Since 2016 the DOE has hosted workshops exploring the continued importance of performance portability. Companies and groups in attendance at the 2017 meeting include the National Energy Research Scientific Computing Center (NERSC), Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), International Business Machines (IBM), Argonne National Laboratory (ANL), Los Alamos National Laboratory (LANL), Intel, and NVIDIA. Measuring Performance Portability: Quantifying when a program reaches performance portability depends on two factors. The first factor, portability, can be measured as the total lines of code that are used across multiple architectures vs. the total lines of code that are intended for a single architecture. There is some contention as to whether portability refers to the portability of an application (i.e., does it run everywhere or not) or the portability of source code (i.e., how much code is specialized). The second factor, performance, can be measured in a few ways. One method is to compare the performance of a platform-optimized version of an application vs. the performance of a portable version of the same application. Another method is to construct a roofline performance model, which provides the theoretical peak performance of an application based on how many FLOPs are performed vs. the data moved from main memory to the processor over the course of program execution. There are currently no universal standards for what truly makes code or an application performance portable, and no agreement about whether proposed measurement methods accurately capture the concerns that are relevant to code teams. During the 2016 DOE (United States Department of Energy) Centers of Excellence Performance Portability Meeting, speaker David Richards, from Lawrence Livermore National Laboratory, stated that "A code is performance portable when the application team says its performance portable!" A study from 2019 titled Performance Portability across Diverse Computer Architectures analyzed multiple parallel programming models across a diverse set of architectures in order to determine the current state of performance portability. The study concluded that, when writing performance-portable code, it is important to use open (standard) programming models supported by multiple vendors across multiple hardware platforms, to expose maximal parallelism at all levels of the algorithm and application, and to develop and improve codes on multiple platforms simultaneously; in addition, multi-objective auto-tuning can help find suitable parameters in a flexible codebase to achieve good performance on all platforms. Studies from 2022 postulate that an adequate and inclusive definition of the performance portability of a parallel application is desirable but rather complex, and that it is doubtful whether such a definition would be accepted by most researchers and developers in the scientific community. Furthermore, the changes that have occurred in the past two decades in the development of parallel programming models, especially with the addition of new portable performance abstractions to current versions and those that will be added in the coming years, outline a new trend in the field.
Measuring Performance Portability: This trend indicates that the performance portability that parallel programming models will provide to applications will be more significant than the performance portability that applications can provide on their own. In other words, it is proposed that parallel programming models will become more descriptive than prescriptive, thus transferring a great deal of responsibility from the programmer to the programming model implementation and its underlying compiler, which ultimately determine the degree of performance portability of the application. This is a fundamental conceptual change in how applications will be developed in the foreseeable future. As a result of these changes, it is necessary to raise the abstraction level of the definition of performance portability. Measuring Performance Portability: In other words, these studies propose a definition for performance portability that is parallel-programming-model-centric rather than application-centric. Framework and Non-Framework Solutions: There are a number of programming applications and systems that help programmers make their applications performance portable. Some frameworks that claim to support functional portability include OpenCL, SYCL, Kokkos, RAJA, Java, OpenMP, and OpenACC. These programming interfaces support multi-platform multiprocessing programming in particular programming languages. Some non-framework solutions include self-tuning and domain-specific languages.
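One widely cited way to combine the measurement ideas above is the metric proposed by Pennycook and colleagues: score an application by the harmonic mean of its per-platform efficiency (architectural efficiency against a roofline bound, or efficiency relative to the best-known implementation), with a score of zero if the application fails to run on any platform of interest. The Python sketch below illustrates both the roofline bound and the harmonic-mean score; all platform numbers are invented for illustration:

```python
def roofline_bound_gflops(peak_gflops, mem_bw_gbs, flops, bytes_moved):
    """Attainable performance from a simple roofline model:
    min(compute peak, memory bandwidth * arithmetic intensity)."""
    intensity = flops / bytes_moved            # FLOPs per byte
    return min(peak_gflops, mem_bw_gbs * intensity)

def performance_portability(efficiencies):
    """Harmonic mean of per-platform efficiencies in (0, 1];
    0 if the application fails to run on some platform (None)."""
    if any(e is None or e <= 0 for e in efficiencies):
        return 0.0
    return len(efficiencies) / sum(1.0 / e for e in efficiencies)

# Invented example: measured GFLOP/s on three platforms, divided by the
# roofline bound for a kernel doing 1e9 FLOPs over 4e9 bytes (0.25 F/B).
platforms = [
    # (peak GFLOP/s, memory BW GB/s, measured GFLOP/s)
    (7000.0,  900.0, 180.0),   # hypothetical "GPU A"
    (3000.0,  200.0,  40.0),   # hypothetical "CPU B"
    (9000.0, 1600.0, 320.0),   # hypothetical "GPU C"
]
effs = [meas / roofline_bound_gflops(peak, bw, 1e9, 4e9)
        for peak, bw, meas in platforms]
print(performance_portability(effs))   # 0.8 for these invented numbers
```

Because the harmonic mean is dominated by the worst platform, a single poorly performing target drags the score down sharply, which matches the intuition that a "performance portable" code should have no bad platforms.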
**Lacrosse strategy** Lacrosse strategy: The game of lacrosse is played using a combination of offensive and defensive strategies. Offensively, the objective of the game is to score by shooting the ball into an opponent's goal, using the lacrosse stick to catch, carry, and pass the ball. Defensively, the objective is to keep the opposing team from scoring and to dispossess them of the ball through the use of stick checking and body contact or positioning. Settled offense: the 2-3-1: The most common offense used in settled situations is known as the "2-3-1" (sometimes counted in the opposite direction, as a 1-3-2, or shortened to the 1-3 or 13). The numbering begins with the two midfielders at the top of the field, then continues to the two attackmen on the wings and the midfielder on the crease, and finally the last attackman located at "X", the position behind the goal. Settled offense: the 2-3-1: The offensive team should pass the ball around the perimeter and look for weaknesses in the defense. They will also rotate, in two triangles. The midfielders will rotate in a triangle, across the top, and to and from the crease. The attackmen will rotate to and from X, and across the crease to the opposite wing. A player may "carry" the ball in a rotation as well; for example, a midfielder at the top right will carry to the top left, while the top left midfielder will cut to the crease, and the crease midfielder will cut into the space the ball carrier came from. Settled offense: the 2-3-1: Driving to the goal is aided by the triangle rotations, as it is difficult for the defense to keep track of the cutting players and the player who is attempting to drive to the goal. Some players prefer to drive from the midfield positions, as they do not have to turn to shoot, and they are often driving against a short-stick defender. Other players prefer to drive from the wings and X, as it is often easier to pass to other players, who are more likely to be facing the goal for an easier shot. Other settled offenses: There are other offenses which are commonly used. The second most common offense is probably the 1-4-1, where there is only one midfielder at the top of the field, two midfielders on the crease, two attackmen on the wings, and an attackman at X. The formation somewhat resembles a cross. This offense is often considered better for driving, because it has a large amount of open space. The offense also allows the two players on the crease to attempt to set picks to get open, and to screen the goalie on shots from the outside. Other settled offenses: Another common offense is known as the "invert", and is more commonly used at the college level. This offense is structurally the same as a 2-3-1, but the midfielders and attackmen switch, or invert, their positions. This is designed to allow midfielders to drive from the wings and from X against a short-stick midfielder, instead of against a long-stick defenseman who would normally defend the player at these positions. It also allows attackmen to drive from the top of the field, where they may have a better chance of scoring, because they are facing the goal. Other settled offenses: However, this offense can leave the team open to a fast break, because when the ball is turned over, the midfielders are very far from their own defensive half. Some teams elect to send their attackmen back to their defensive half if there is a fast break, although this has the obvious disadvantage of forcing a primarily offensive player to play defense.
Other settled offenses: Another offense is the 1-3-2, which is somewhat similar to the 2-3-1 flipped upside down. In this offense, there is one midfielder at the top of the field, two midfielders on the wings and an attackman on the crease, and two attackmen below the goal line. This offense is designed to allow the attackmen more opportunities to drive from below the goal, and allows the top midfielder a very open field to drive to the goal. Other settled offenses: Another, less commonly used offense is the 2-2-2. In this offense there are two midfielders at the top left and top right corners, a midfielder and an attackman on the crease, and two attackmen at the bottom right and bottom left corners. This offense resembles an X. This is a very complicated offense, and is quite rare even at the college level. It is designed to allow the crease players to set picks and cut, as in the 1-4-1, as well as allow players to drive from the corners. When properly implemented, it is very difficult to defend: if an individual defender is beaten on a drive, another defender must come from the crease, leaving a player open close to the goal, or from a corner, in which case help may not arrive in time. It is also very demanding for the offense, as offensive players must routinely make long passes between the corners, and must all be able to handle the ball competently, as they cannot make a short outlet pass if they get into trouble. Defense in settled situations: As in basketball, there are two basic defensive styles: man-to-man and zone. Man-to-man, or simply man defense, is more commonly used in lacrosse. This is due to the lack of a shot clock at most levels of competition, so a man defense will tend to force more turnovers. In a man defense, every defensive player will be responsible for one offensive player, as well as having a support responsibility in the "slide" system. A slide is when a defensive player is beaten on a drive, and another player must slide over to stop the player with the ball, while the other defensive players attempt to cover the uncovered players until the defense is recovered. Teams can use a "slide from adjacent", meaning that a defender will have responsibility to stop the driving player if he breaks through on his side; in this system, if the attackman at X is driving to the right, the defenseman covering the right wing attackman would slide, whereas if he drove left, the defenseman covering the left wing attackman would slide. The other option is to implement a "slide from the crease", whereby the defensive player on the crease always slides, and one of the other defenders picks up his man. Defense in settled situations: The other option is to use a zone defense. The most common, generic zone defense is known as a 3-3 (similar to the 2-3 in basketball). The area above the goal line is divided like a rectangle into 6 zones. The long-stick defensemen will cover the bottom 3 zones, and the midfielders will cover the top 3 zones. Unlike in most man defenses, defenders do not go very far below the goal line, so often an attackman at X can simply hold the ball and wait for an open pass. This is why some teams run four long poles (one of them an LSM) on the corners and two midfielders in the middle; then, when the ball goes to X, the low midfielder drops and plays him behind the goal, the top midfielder fills into the low position until the ball is moved from X, and then the midfielders go back to their original positions in the 3-3.
Defense in settled situations: Other zone defenses are more tailored to the offense that is being used, such as the 2-3-1 zone, in which the defensive players mirror the offensive players' positioning, but when offensive players rotate, they are "passed off" to the next zone, rather than being followed as in a man defense system. Defense in settled situations: Overall, man defenses are favored as they allow players to strip the ball and attempt to intercept passes more aggressively. However, they also require faster athletes, and it can be difficult to recover when a defender is beaten. As a result, some teams choose to run a zone defense. Zone defenses also can be easier for younger athletes, and are used more commonly at the youth level. Zone defenses can also be used when a team has a large lead, and wants to make sure the opposing team cannot score goals quickly. Fast-break situations: Fast breaks occur when an offensive player has the ball and comes into the opposing team's half of the field without anyone covering him. Fast breaks usually occur because a player caught an outlet pass from the goalie, won a face-off, or stripped the ball on defense and carries it the other way. Fast-break situations: One way of aligning for a fast break is the "L". In this case, one attackman aligns to the top right, another to the right just above the goal line, and the last on the left just above the goal line, forming an L. The fast-break player who is carrying the ball comes into the top left. If the fast-break player is coming on the right, then the top attackman will simply switch to the left side. The fast-break player will attempt to draw a defenseman, then pass to the top attackman. Fast-break situations: The other common way of aligning is in a V. In this case, one attackman is aligned on the crease, and the other two are aligned at top right and top left. The fast-break player comes down the middle of the field, and can pass to the left or right. The goal of both fast-break offenses is to draw a defensive player, and quickly pass, until a player is uncovered and open for a shot. Fast-break situations: There is only one commonly used defensive system for fast breaks, the triangle zone. Defensive players begin in a triangle, with one player at the top, or "point", and two players low. Once the ball arrives, defensive players will rotate to where the ball is passed, and do everything they can to prevent a goal until help arrives. It is very important that defensemen remain close to the goalie, or it will be easier to get an offensive player open for a shot. Unsettled clears: The term "clear" refers to when the defense gets the ball in their half of the field, and tries to get the ball across half-field to their offensive zone. The term "ride" refers to the efforts of the opposing attackmen and midfielders to recover the ball before it can be brought into their defensive zone. When the defense gets the ball in an unsettled situation, such as a defensive player picking up a ground ball, intercepting a pass, or the goalie making a save, the defense will try to spread out and get open for a pass. When a defensive player picks the ball up behind the goal, or on a far wing, he will often pass to the goalie, because the goalie cannot be checked while standing in the crease, and thus has several seconds to look for an open pass.
Generally, the defensemen will try to move out to the "alley" between the box line and the sideline, and midfielders will try to break upfield and towards the corners, all in an attempt to get a player open, and to force the opposing players to spread themselves out. Unsettled clears: The players who are riding will generally try to harass the player with the ball before he can pass to the goalie or pass upfield, and will drop back, generally to around the box line, if they are unsuccessful at doing this. The reason for dropping back to the box line is that the defense has 20 seconds from when they pick the ball up to clear the ball across midfield. This means that the riding players can force a turnover by stripping the ball or intercepting a pass, or by forcing the clearing team to make many passes and take more than 20 seconds in clearing the ball. Rides and clears in settled situations: A settled clear occurs when one team gets the ball in their defensive half of the field after a stoppage in play. Because of the stoppage, both teams will have time to set their players up in an optimal fashion. As a result, teams generally use scripted plays in settled rides and clears. Rides and clears in settled situations: One clear that is commonly used in settled situations is known as an "L" clear. In this clear, a midfielder begins with the ball, the goalie is in the middle of the field, one defenseman is in the "alley" even with the midfielder, another defenseman is in the top left corner, and the last defenseman is above the box line in the middle of the field. One midfielder is in the top left of the field, and the final midfielder is across the midfield line, above the defenseman. Rides and clears in settled situations: There are several different systems that the riding team may use. In a zone ride, the riding players divide into 6 different zones, with the attackmen generally around the box line, and the midfielders around the half-field line. Zone rides generally aim to force the clearing team to make long passes, or to take more than 20 seconds to cross the half-field line. Zone rides are usually easier for less defensively skilled attackmen, and are less likely to result in fast breaks. Rides and clears in settled situations: Another system is to use man-to-man defense, usually leaving the goalie or a defenseman with weak skills unguarded, and forcing them to bring the ball up on their own. Man-to-man rides often work well with attackmen who are also very good at defense. The danger in this system is that the unguarded player may simply break through and easily clear the ball, or that the man-to-man coverage may be blown, giving up a fast break. Rides and clears in settled situations: An aggressive variation of the man-to-man system is known as the "ten man ride". In this system, the riding team's goalie will come out of the goal to cover an attackman, allowing a defenseman to cover a midfielder, which in turn allows a midfielder to cover a defenseman, and finally one of the attackmen to cover the goalie. The danger in this system is that it is possible that none of the clearing midfielders will cross the mid-field line, so that the defenseman will not be able to stay onside and cover him (remember, teams must always have 4 players on their defensive side of the field). It is also possible that the clearing team may catch the goalie out of the cage, and be able to take an open shot. This riding system is often used when a team is down and needs to recover the ball to catch up.
Man-up and Man-down: Man-up and man-down, also known as power play, or extra-man opportunity (EMO), refers to situations where one team is shorthanded as a result of a penalty. The offensive team attempts to take advantage of their extra player to score a goal, while the defensive team will try to stop them from scoring until they get back to even strength. Man-up and Man-down: Because they are a player short, the defense, called man-down defense (MDD), must resort to a zone, and there are several zones they can choose from. One is known as the box-and-one defense, because one defenseman covers the man on the crease, while the other 4 players form a box. This defense is used against an offense with only one player on the crease. Another zone is the 2-3, which is almost exactly like the basketball defense of the same name, with three low zones and two higher zones. Man-up and Man-down: The offense often chooses to run the same base offense as they run in settled situations, such as a 2-3-1, although offenses with two players on the crease, such as the 1-4-1, are less common. Another common offense that is run in man-up situations is the 3-2-1, also known as the "circle" offense, because no player is on the crease, and all of the players are on the perimeter in a circle. Lastly, there is the 3-3 offense, which has no player at X, and 3 midfielders across the top. One drawback to the 3-3 is that, on a shot, there is no attackman at X to back the shot up if it misses the goal. Man-up and Man-down: A common offensive tactic in man-up situations is the use of the carry, whereby an offensive player carries the ball from one perimeter position to another. This can be disorienting for a defense, because they must "pass off" the player to another zone, or else the whole defense must rotate. Often, after a carry, a player may pass back to the position he came from, because there may be a hole there. The offense will also try to draw a defensive player out to them, with the threat of a shot, and make a quick pass in an attempt to get an open shot. Man-up and Man-down: Defensive strategy centers on staying close to the goal, and not allowing shots from close range. As a result of being a man down, the defense must be less aggressive against long-range shooters, so they may allow long-range shots that have a low percentage of scoring. This is similar to allowing a player to shoot a 3-pointer in a basketball fast break: the shot is lower-percentage than a close shot, so the defense allows it rather than overplaying the shooter and allowing a layup. Defensive players must also be careful not to be stretched from their zone, by going too high or too far outside, and allowing another player to sneak in at the other end of their zone. Substitutions: In lacrosse, substitution can occur in a variety of ways. Substitutions: The primary method for substituting players is through the special substitution box, an area located between the two team benches that allows for on-the-fly substitutions. All on-the-fly substitutions must be made through this "box". Substitutions can also be made when the ball goes out on either sideline. In this case, if the coach desires to substitute players, he can ask for a horn to signify this "regular substitution". Play is delayed until the team(s) substituting have the players they want on the field. "Regular substitution" can also occur after goals are scored and penalties are reported to the scorer's table; these do not require a horn request.
However, since 2013, horns no longer exist in the NCAA rules. Since 2014, horns have also been removed from the National Federation of State High School Associations rulebook. This means that a coach is no longer allowed to request a horn to make a substitution when the ball goes out of bounds on the sideline. Horns still exist at the youth level; however, coaches at other levels have had to adapt to playing lacrosse without horns. In addition, the substitution box has been expanded to 20 yards, to allow for increased on-the-fly substitution. The horn is now only used for signalling issues at the table that require the officials' attention, such as a player illegally entering the field, and the ends of periods.
**Galactography** Galactography: Galactography or ductography (or galactogram, ductogram) is a medical diagnostic procedure for viewing the milk ducts. The procedure involves the radiography of the ducts after injection of a radiopaque substance into the duct system through the nipple. The procedure is used for investigating the pathology of nipple discharge. Galactography is capable of detecting smaller abnormalities than mammograms, MRI or ultrasound tests. With galactography, a larger part of the ductal system can be visualized than with the endoscopic investigation of a duct (called galactoscopy or ductoscopy). Galactography: Causes for nipple discharge include duct ectasia, intraductal papilloma, and occasionally ductal carcinoma in situ or invasive ductal carcinoma. The standard treatment of galactographically suspicious breast lesions is to perform a surgical intervention on the concerned duct or ducts: if the discharge clearly stems from a single duct, then the excision of the duct (microdochectomy) is indicated; if the discharge comes from several ducts or if no specific duct could be determined, then a subareolar resection of the ducts (Hadfield's procedure) is performed instead. To avoid infection, galactography should not be performed when the nipple discharge contains pus.
**Nasal dental click** Nasal dental click: The dental nasal click is a click consonant found primarily among the languages of southern Africa. The symbol in the International Phonetic Alphabet for a nasal dental click with a velar rear articulation is ⟨ŋ͡ǀ⟩ or ⟨ŋ͜ǀ⟩, commonly abbreviated to ⟨ŋǀ⟩, ⟨ᵑǀ⟩ or ⟨ǀ̃⟩; a symbol abandoned by the IPA but still preferred by some linguists is ⟨ŋ͡ʇ⟩ or ⟨ŋ͜ʇ⟩, abbreviated ⟨ŋʇ⟩, ⟨ᵑʇ⟩ or ⟨ʇ̃⟩. For a click with a uvular rear articulation, the equivalents are ⟨ɴ͡ǀ, ɴ͜ǀ, ɴǀ, ᶰǀ⟩ and ⟨ɴ͡ʇ, ɴ͜ʇ, ɴʇ, ᶰʇ⟩. Nasal dental click: Sometimes the accompanying letter comes after the click letter, e.g. ⟨ǀŋ⟩ or ⟨ǀᵑ⟩; this may be a simple orthographic choice, or it may imply a difference in the relative timing of the releases. Features: Features of the dental nasal click: The airstream mechanism is lingual ingressive (also known as velaric ingressive), which means a pocket of air trapped between two closures is rarefied by a "sucking" action of the tongue, rather than being moved by the glottis or the lungs/diaphragm. The release of the forward closure produces the "click" sound. Voiced and nasal clicks have a simultaneous pulmonic egressive airstream. Features: Its place of articulation is dental, which means it is articulated with either the tip or the blade of the tongue at the upper teeth, termed respectively apical and laminal. Note that most stops and liquids described as dental are actually denti-alveolar. Its phonation is voiced, which means the vocal cords vibrate during the articulation. It is a nasal consonant, which means air is allowed to escape through the nose, either exclusively (nasal stops) or in addition to through the mouth. It is a central consonant, which means it is produced by directing the airstream along the center of the tongue, rather than to the sides. Occurrence: Dental nasal clicks are found primarily in the various Khoisan language families of southern Africa and in some neighboring Bantu languages, such as Yeyi and Fwe. Glottalized dental nasal click: All Khoisan languages, and a few Bantu languages, have glottalized nasal clicks. These are formed by closing the glottis so that the click is pronounced in silence; however, any preceding vowel will be nasalized.
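For working with these transcriptions in software, the letters can be composed from their standard Unicode codepoints; a minimal sketch (the codepoints are standard Unicode assignments, not something defined by the article above):

```python
# Composing the IPA transcription of the (velar) nasal dental click
# from its Unicode parts. Codepoints are standard Unicode assignments.
ENG       = "\u014B"  # latin small letter eng (the velar nasal)
TIE_ABOVE = "\u0361"  # combining double inverted breve (tie bar)
CLICK_BAR = "\u01C0"  # latin letter dental click

symbol = ENG + TIE_ABOVE + CLICK_BAR   # renders as the tied digraph
print(symbol, [hex(ord(c)) for c in symbol])
```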
**Mechanical filter (respirator)** Mechanical filter (respirator): Mechanical filters are a class of filter for air-purifying respirators that mechanically stop particulates from reaching the wearer's nose and mouth. They come in multiple physical forms. Mechanism of operation: Mechanical filter respirators retain particulate matter, such as dust created during woodworking or metal processing, when contaminated air is passed through the filter material. Wool is still used today as a filter, along with plastic, glass, cellulose, and combinations of two or more of these materials. Since the filters cannot be cleaned and reused and have a limited lifespan, cost and disposability are key factors. Single-use, disposable and replaceable-cartridge models exist. Mechanical filters remove contaminants from air in the following ways: (1) interception, when particles following a line of flow in the airstream come within one radius of a fiber and adhere to it; (2) impaction, when larger particles, unable to follow the curving contours of the airstream, are forced to embed in one of the fibers directly; this increases with diminishing fiber separation and higher air flow velocity; (3) an enhancing mechanism called diffusion, where gas molecules collide with the smallest particles, especially those below 100 nm in diameter, which are thereby impeded and delayed in their path through the filter; this effect is similar to Brownian motion, increases the probability that particles will be stopped by either of the two mechanisms above, and becomes dominant at lower air flow velocities; (4) the use of electret filter material (usually electrospun plastic fibers) to attract or repel particles with an electrostatic charge, so that they are more likely to collide with the filter surface; (5) the use of certain coatings on the fibers (such as salt) that kill or deactivate infectious particles colliding with them; (6) gravity, allowing particles to settle into the filter material (this effect is typically negligible); and (7) the use of the particles themselves, once the filter has been loaded, as a filter medium for other particles. Considering only particulates carried on an air stream and a fiber mesh filter, diffusion predominates below a particle diameter of 0.1 μm. Impaction and interception predominate above 0.4 μm. In between, near the most penetrating particle size of 0.3 μm, diffusion and interception predominate. For maximum efficiency of particle removal and to decrease resistance to airflow through the filter, particulate filters are designed to keep the velocity of air flow through the filter as low as possible. This is achieved by manipulating the slope and shape of the filter to provide a larger surface area. High-efficiency particulate air (HEPA) filters are filters meeting certain efficiency standards: a HEPA filter must remove at least 99.97% (US) or 99.95% (EU) of all airborne particulates with an aerodynamic diameter of 0.3 μm. Particles both smaller and larger are easier to catch, and thus are removed with higher efficiency. People often assume that particles smaller than 0.3 μm would be more difficult to filter efficiently; however, the physics of Brownian motion at such smaller sizes boosts filter efficiency. Mechanism of operation: Materials Mechanical filters can be made of a fine mesh of synthetic polymer fibers produced by melt blowing. The fibers are charged as they are blown to produce an electret, and then layered to form a nonwoven polypropylene fabric.
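The particle-size regimes just described can be summarized in a few lines; a minimal sketch using only the thresholds stated above (diffusion dominant below about 0.1 μm, impaction and interception above about 0.4 μm, and diffusion plus interception near the 0.3 μm most-penetrating particle size):

```python
# Dominant capture mechanisms by particle diameter (micrometres),
# using only the regime boundaries stated in the text above.
def dominant_mechanisms(diameter_um: float) -> str:
    if diameter_um < 0.1:
        return "diffusion"
    if diameter_um > 0.4:
        return "impaction and interception"
    # Near the most-penetrating particle size (~0.3 um),
    # diffusion and interception predominate.
    return "diffusion and interception"

for d in (0.05, 0.3, 1.0):
    print(f"{d} um: {dominant_mechanisms(d)}")
```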
Mechanism of operation: Exhalation valves Some masks have check valves that let exhaled air out unfiltered. The certification grade of a mask (such as N95 or FFP2) applies to the mask itself and makes no guarantee about the air expelled by the wearer through the valve. A mask with a valve still reduces inward leakage, improving the wearer's protection. Unfiltered exhalation valves are sometimes found in both filtering facepiece and elastomeric respirators; PAPRs by nature cannot filter exhaled air. As a result, these masks are believed to be incapable of source control, that is, protecting others against an infection in the wearer's breath. They are not generally designed for healthcare use, as of 2017. Despite the aforementioned belief, 2020 research by NIOSH and the CDC showed that an uncovered exhalation valve already provides source control on a level similar to, or even better than, surgical masks. During the COVID-19 pandemic, masks with unfiltered exhalation valves did not meet the requirements of some mandatory mask orders. It is possible to seal some unfiltered exhalation valves or to cover them with an additional surgical mask; this might be done where mask shortages make it necessary. Uses: Filtering facepiece respirators Filtering facepiece respirators (FFPs) are disposable face masks produced from a whole piece of filtering material. FFPs (such as N95 masks) are discarded when they become unsuitable for further use due to considerations of hygiene, excessive resistance, or physical damage. Mass production of filtering facepieces started in 1956. The air was purified with nonwoven filtering material consisting of polymeric fibers carrying a strong electrostatic charge. The respirator was used in the nuclear industry, and then in other branches of the economy. Over roughly 60 years, more than 6 billion respirators were manufactured. Unfortunately, the developers overestimated the efficiency (APF 200-1000, compared to the modern value of 10–20), which led to serious errors in the choice of personal protective equipment by employers. Uses: Elastomeric respirators Elastomeric respirators are reusable devices with exchangeable cartridge filters that offer protection comparable to N95 masks. The filters must be replaced when soiled, contaminated, or clogged. They may have exhalation valves. Full-face versions of elastomeric respirators seal better and protect the eyes. Fitting and inspection are essential to effectiveness. Powered air-purifying respirators (PAPRs) PAPRs are masks with an electrically powered blower that blows air through a filter to the wearer. Because they create positive pressure, they need not be tightly fitted. PAPRs typically do not filter exhaust from the wearer. Shortcomings: The electrostatic filters in respirators are much easier to breathe through than cloth masks; however, when respirators are worn with additional coverings, such as surgical mask material, they can make breathing harder for the wearer. As a result, exposure to carbon dioxide may exceed its OELs (0.5% by volume for an 8-hour shift; 1.4% for 15 minutes of exposure), with CO2 levels inside reaching up to 2.6% for elastomeric respirators and up to 3.5% for FFRs. These are mean values for several models; some models may expose the wearer to more carbon dioxide.
These values are comparable to the CO2 levels that normally occur within the trachea, and the volume inside a respirator facepiece is a fraction of the total volume inhaled with each breath, so the total CO2 concentration for each breath is much less than the concentration within the small volume of the facepiece itself. Skin irritation and acne (from humidity and skin contact) can be an annoyance. The UK HSE textbook recommends limiting the use of respirators without air supply to one hour, while OSHA recommends respirator use for up to eight hours. Shortcomings: Almost all filtration methods perform poorly outdoors when environmental airborne water levels are high: saturation and clogging increase breathing resistance, and the collection of water on the electrostatic filter fibers can reduce the efficiency of the filter. Bidirectional air flow (as on masks without an exhalation valve) compounds this problem further. Design standards are typically intended for 'indoor' settings only. Filtration standards: U.S. standards (N95 and others) In the United States, the National Institute for Occupational Safety and Health defines nine categories of particulate filters according to their NIOSH air filtration rating: the prefix letters N (not resistant to oil), R (somewhat resistant to oil) and P (oil-proof), each combined with a minimum filtration efficiency of 95%, 99% or 99.97% (as in N95 or P100). Not all of the defined categories have actually been applied to products. Additionally, HE (high-efficiency) filters are the class of particulate filter used with powered air-purifying respirators. These are 99.97% efficient against 0.3 micron particles, the same as a P100 filter. During the COVID-19 pandemic, the US Occupational Safety and Health Administration issued an equivalency table, giving similar foreign standards for each US standard. Filtration standards: In the United States, N95 respirators are designed and/or made by companies such as 3M, Honeywell, Cardinal Health, Moldex, Kimberly-Clark, Alpha Pro Tech, Gerson, Prestige Ameritech and Halyard Health. In Canada, N95s are made by AMD Medicom, Vitacore, Advanced Material Supply, Eternity and Mansfield Medical. The Taiwanese company Makrite makes N95s as well as similar respirators for a number of other countries. Degil is a label for some of Makrite's respirators. Filtration standards: European standards (FFP2 and others) European standard EN 143 defines the 'P' classes of particle filters that can be attached to a face mask, and European standard EN 149 defines the classes of "filtering half masks" or "filtering facepieces" (FFP), that is, respirators that are entirely or substantially constructed of filtering material: FFP1, FFP2 and FFP3, with required filter efficiencies of at least 80%, 94% and 99%, respectively. Both EN 143 and EN 149 test filter penetration with dry sodium chloride and paraffin oil aerosols after storing the filters at 70 °C (158 °F) and −30 °C (−22 °F) for 24 h each. The standards include testing mechanical strength, breathing resistance and clogging. EN 149 tests the inward leakage between the mask and face, where 10 human subjects perform 5 exercises each; the truncated mean of average leakage from 8 individuals must not exceed the specified values (§ 8.5). In Germany, FFP2 respirators are made by companies such as Dräger, Uvex and Core Medical. In Belgium, Ansell makes FFP2 masks. In France, the company Valmy makes them. In the United Kingdom, the company Hardshell has recently begun making FFP2 masks. Filtration standards: Other standards (KN95 and others) Respirator standards around the world loosely fall into the two camps of US- and EU-like grades.
According to 3M, respirators made according to the following standards are equivalent to US N95 or European FFP2 respirators "for filtering non-oil-based particles such as those resulting from wildfires, PM 2.5 air pollution, volcanic eruptions, or bioaerosols (e.g. viruses)": Chinese KN95 (GB2626-2006): similar to the US standard, with categories KN (non-oily particles) and KP (oily particles) in 90/95/100 versions, and EU-style leakage requirements. In China, KN95 respirators are made by companies such as Guangzhou Harley, Guangzhou Powecom, Shanghai Dasheng and FLTR. Filtration standards: Korean 1st Class (KMOEL-2017-64), also referred to as "KF94": EU-like grades, KF 80/94/99 for second/first/special class. In Korea, KF94 respirators are made by companies such as LG, Soomlab, Airqueen, Kleannara, Dr. Puri, Bluna and BOTN. The Hong Kong company Masklab also makes KF-style respirators. Filtration standards: Australian/New Zealand P2 (AS/NZ 1716:2012): similar to EU grades. The NPPTL has also published a guideline for using non-NIOSH masks instead of the N95 in the COVID-19 response; the OSHA has a similar document. The following respirator standards are considered similar to the US N95: Japanese DS2/RS2 (JMHLW-Notification 214, 2018): EU-like grades with a two-letter prefix; the first letter, D/R, stands for disposable or replaceable, and the second letter, S/L, stands for dry (NaCl) or oily (DOP oil) particles. Japanese DS2 respirators are made by companies such as Hogy Medical, Koken, Shigematsu, Toyo Safety, Trusco, Vilene and Yamamoto Safety. Filtration standards: Mexican N95 (NOM-116-2009): same grades as NIOSH. Brazilian PFF2 (ABNT/NBR 13698:2011): EU-like grades. Disinfection and reuse: Hard filtering facepiece respirator masks are generally designed to be disposable, for 8 hours of continuous or intermittent use. One laboratory found a decrease in fit quality after five consecutive donnings. Once they are physically too clogged to breathe through, they must be replaced. Disinfection and reuse: Hard filtering facepiece respirator masks are sometimes reused, especially during pandemics, when there are shortages. Infectious particles could survive on the masks for up to 24 hours after the end of use, according to studies using models of SARS-CoV-2. In the COVID-19 pandemic, the US CDC recommended that if masks run short, each health care worker should be issued five masks, one to be used per day, such that each mask spends at least five days stored in a paper bag between uses. If there are not enough masks to do this, they recommend sterilizing the masks between uses. Some hospitals have been stockpiling used masks as a precaution. The US CDC issued guidelines on stretching N95 supplies, recommending extended use over re-use. They highlighted the risk of infection from touching the contaminated outer surface of the mask, which even professionals frequently do unintentionally, and recommended washing hands every time before touching the mask. To reduce mask surface contamination, they recommended face shields, and asking patients to wear masks too ("source masking"). Apart from time, other methods of disinfection have been tested. Physical damage to the masks has been observed when microwaving them, microwaving them in a steam bag, letting them sit in moist heat, and hitting them with excessively high doses of ultraviolet germicidal irradiation (UVGI).
Chlorine-based methods, such as chlorine bleach, may cause residual smell, offgassing of chlorine when the mask becomes moist, and, in one study, physical breakdown of the nosepads, causing increased leakage. Fit and comfort do not seem to be harmed by UVGI, moist heat incubation, or microwave-generated steam. Some methods may not visibly damage the mask, but they ruin the mask's ability to filter. This has been seen in attempts to sterilize by soaking in soap and water, heating dry to 160 °C (320 °F), treating with 70% isopropyl alcohol, and applying hydrogen peroxide gas plasma (made under a vacuum with radio waves). The static electrical charge on the microfibers (which attracts or repels particles passing through the mask, making them more likely to move sideways and hit and stick to a fiber; see electret) is destroyed by some cleaning methods. UVGI (ultraviolet light), boiling water vapour, and dry oven heating do not seem to reduce filter efficiency, and these methods successfully decontaminate masks. UVGI, ethylene oxide, dry oven heating and vaporized hydrogen peroxide are currently the most-favoured methods in use in hospitals, but none have been properly tested. Where enough masks are available, cycling them and reusing a mask only after letting it sit unused for five days is preferred. It has been shown that masks can also be sterilized by ionizing radiation: gamma radiation and high-energy electrons penetrate deeply into the material and can be used to sterilize large batches of masks within a short time. The masks can be sterilized up to two times but have to be recharged after every sterilization, as the surface charge is lost upon irradiation. Disinfection and reuse: A recent development is a composite fabric that can deactivate both biological and chemical threats.
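As a compact recap of the filtration-standards section above, the following sketch collects the standards this article describes as roughly comparable to US N95 / EU FFP2 for non-oil-based particles; it summarizes the claims above (per 3M and the NPPTL as cited), not an authoritative regulatory mapping:

```python
# Standards listed above as roughly equivalent to US N95 / EU FFP2
# for non-oil-based particles. A summary of this article's claims only.
N95_FFP2_EQUIVALENTS = {
    "US":           ("N95",              "NIOSH"),
    "EU":           ("FFP2",             "EN 149"),
    "China":        ("KN95",             "GB2626-2006"),
    "South Korea":  ("KF94 (1st Class)", "KMOEL-2017-64"),
    "Japan":        ("DS2/RS2",          "JMHLW-Notification 214, 2018"),
    "Australia/NZ": ("P2",               "AS/NZ 1716:2012"),
    "Mexico":       ("N95",              "NOM-116-2009"),
    "Brazil":       ("PFF2",             "ABNT/NBR 13698:2011"),
}

for region, (grade, standard) in N95_FFP2_EQUIVALENTS.items():
    print(f"{region}: {grade} ({standard})")
```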
**Free Parking** Free Parking: Free Parking is a Parker Brothers card game inspired by the "Free Parking" space of the Monopoly board game. Game play: The game is played by two to four players, and game play centers on using time on a parking meter to gain points; the first to 200 points wins. Each player has their own parking meter and a hand of cards. A player begins a turn by drawing a card, always drawing enough to reach six cards in their hand. The player then plays one of the following cards on his turn: Point cards, which deduct time from the player's meter in exchange for points. They come in multiples of 10 up through 60 points, and have pictures corresponding to illustrations in the Monopoly game, as well as captions such as "with your banker" or "meeting your cousin at the bus station" – errands on which to spend the time on one's meter. Game play: Feed the Meter cards, which add time to the player's meter. They come in increments of 20, 30, 40, and 60 minutes. Free Parking, which protects him from Officer Jones until his next turn. The player of this card flips his meter so that the Free Parking symbol faces outward (rather than toward the player, the default setting). At the start of his next turn, he is allowed to play a point card without deducting time from his meter. Game play: Time Expires, which forces another player to reduce his meter to 0 minutes (making him "in violation"). There are two cards that may be played at any time and do not count as cards played on one's turn: Officer Jones, which may be played against any player who is "in violation." When used, the selected player must discard one of his played point cards, lowering his current point total. One house rule is to make the Officer Jones card universal – that is, when played, it affects all players "in violation", even potentially the one who played it. Game play: Talk Your Way Out of It, which cancels any action against the one who plays it, including Officer Jones and many Second Chance cards. In addition to these cards, on his turn, a player may choose to draw an orange Second Chance card (derived from Monopoly's Chance cards). These cards cause a variety of effects, including moving meters up and down, taking and giving point cards, and even trading hands, meters, or places between players. The images on the Second Chance cards, like those on the point cards, have their origins in the Monopoly game. Game play: If a player so desires, instead of drawing and playing a card on his turn, he may opt to exchange three cards. He discards three cards from his hand and then draws three new cards from the pile, bringing his hand back to five cards – the same number as if he had drawn up to six and played a card on his turn. When taking this option, the player forgoes his opportunity to play a card, but not to take a Second Chance card; he may still elect to take one following the card exchange.
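To make the meter-and-points arithmetic concrete, here is a toy sketch of the two basic card effects described above; the class, the method names and the pairing of a point value with a specific time cost are invented for illustration, since the rules above only state that point cards deduct meter time in exchange for points:

```python
# A toy model of the Free Parking meter mechanic described above.
# Names and the exact time cost of a point card are illustrative.
class Player:
    def __init__(self):
        self.minutes = 0   # time on the parking meter
        self.points = 0

    def feed_meter(self, minutes):
        """Feed the Meter card: add time to the meter."""
        self.minutes += minutes

    def play_point_card(self, points, cost_minutes):
        """Point card: spend meter time for points. Returns True on a win."""
        if self.minutes >= cost_minutes:
            self.minutes -= cost_minutes
            self.points += points
        return self.points >= 200   # first to 200 points wins

p = Player()
p.feed_meter(60)
print(p.play_point_card(points=30, cost_minutes=30), p.points, p.minutes)
```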
**Journal of Liposome Research** Journal of Liposome Research: The Journal of Liposome Research is a peer-reviewed academic journal that publishes original research on liposomes and related systems, lipid-based delivery systems, lipid biology, and both synthetic and physical lipid chemistry. The journal also publishes special issues focusing on particular topics and themes within its general scope, as well as abstracts and conference proceedings, including those from the International Liposome Society. The journal is owned by Informa plc.
**Meta Content Framework** Meta Content Framework: Meta Content Framework (MCF) is a specification of a content format for structuring metadata about web sites and other data. History: MCF was developed by Ramanathan V. Guha at Apple Computer's Advanced Technology Group between 1995 and 1997. Rooted in knowledge-representation systems such as CycL, KRL, and KIF, it sought to describe objects, their attributes, and the relationships between them. One application of MCF was HotSauce, also developed by Guha while at Apple. It generated a 3D visualization of a web site's table of contents, based on MCF descriptions. By late 1996, a few hundred sites were creating MCF files, and Apple HotSauce allowed users to browse these MCF representations in 3D. When the research project was discontinued, Guha left Apple for Netscape, where, in collaboration with Tim Bray, he adapted MCF to use XML and created the first version of the Resource Description Framework (RDF). MCF format: An MCF file consists of one or more blocks, each corresponding to an entity. A block begins with the entity's identifier, a unique identifier for that entity (more on the scope of the identifier below) that is used to refer to it. The following lines each specify a property and one or more values, separated by commas. Each value can be a reference to another entity (via its identifier), a string (enclosed in double quotes) or a number. NOTE: The identifier must not include a comma (,) and must not be enclosed within double quotes. MCF format: A common parsing failure is due to an odd number of unescaped double quotes in text. For instance, "foo bar" baz" needs to be written "foo bar\" baz". Commas within double quotes are not treated as value separators. Every entity has at least one property: typeOf.
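As an illustration of the rules just described (an identifier line followed by property lines with comma-separated values; strings in double quotes with \" escapes; commas inside quotes not treated as separators), here is a minimal sketch of a hypothetical MCF-style block and a parser for it. The concrete syntax shown is an assumption for illustration, not the normative MCF grammar.

```python
# A minimal sketch of an MCF-style block and parser, based only on the
# prose rules above. The concrete syntax is an illustrative assumption.
SAMPLE = """\
doc_homepage
typeOf: Document
name: "Example, Inc. home page"
size: 2048
linksTo: doc_about, doc_contact
"""

def split_values(text):
    """Split on commas that are not inside double quotes."""
    values, current, in_quotes = [], "", False
    i = 0
    while i < len(text):
        ch = text[i]
        if in_quotes and ch == "\\" and i + 1 < len(text):
            current += text[i:i + 2]      # keep escaped chars such as \"
            i += 2
            continue
        if ch == '"':
            in_quotes = not in_quotes
        if ch == "," and not in_quotes:
            values.append(current)
            current = ""
        else:
            current += ch
        i += 1
    values.append(current)
    return values

def parse_value(token):
    token = token.strip()
    if token.startswith('"') and token.endswith('"'):
        return token[1:-1].replace('\\"', '"')   # string value
    try:
        return int(token)                        # numeric value
    except ValueError:
        return ("ref", token)                    # reference to another entity

def parse_block(block_text):
    lines = block_text.strip().splitlines()
    identifier, properties = lines[0].strip(), {}
    for line in lines[1:]:
        prop, _, rest = line.partition(":")
        properties[prop.strip()] = [parse_value(v) for v in split_values(rest)]
    return identifier, properties

print(parse_block(SAMPLE))
```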
**Dinicotinic acid** Dinicotinic acid: Dinicotinic acid (pyridine-3,5-dicarboxylic acid) is a heterocyclic organic compound, more precisely a heteroaromatic one. It is one of many pyridinedicarboxylic acids and consists of a pyridine ring carrying two carboxy groups in the 3- and 5-positions. Preparation and properties: Dinicotinic acid can be formed by heating pyridine-2,3,5,6-tetracarboxylic acid or carbodinicotinic acid (pyridine-2,3,5-tricarboxylic acid). The acid is sparingly soluble in water and ether. Its melting point of 323 °C is the highest among the pyridinedicarboxylic acids. Upon heating, it decarboxylates and decomposes to nicotinic acid:
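Spelled out as a balanced equation (a standard loss of one molecule of CO2, with the molecular formulas following from the structures named above):

```latex
% Thermal decarboxylation of dinicotinic acid to nicotinic acid
\mathrm{C_5H_3N(COOH)_2} \;\xrightarrow{\Delta}\; \mathrm{C_5H_4N(COOH)} + \mathrm{CO_2}
```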
**Octavia Dobre** Octavia Dobre: Octavia A. Dobre is a professor and research chair at Memorial University. She is a Fellow of the Engineering Institute of Canada and a Fellow of the IEEE. She is the editor-in-chief of the IEEE Open Journal of the Communications Society and a former editor-in-chief of IEEE Communications Letters. She is also a member of the board of governors of the IEEE Communications Society. Research: Dobre's research interests lie in wireless communications, optical communications, underwater communications, and signal processing for communications.
**Text file** Text file: A text file (sometimes spelled textfile; an old alternative name is flatfile) is a kind of computer file that is structured as a sequence of lines of electronic text. A text file exists stored as data within a computer file system. In operating systems such as CP/M and MS-DOS, where the operating system does not keep track of the file size in bytes, the end of a text file is denoted by placing one or more special characters, known as an end-of-file (EOF) marker, as padding after the last line. On modern operating systems such as Microsoft Windows and Unix-like systems, text files do not contain any special EOF character, because the file systems on those operating systems keep track of the file size in bytes. Most text files need end-of-line delimiters, which are handled in a few different ways depending on the operating system. Some operating systems with record-oriented file systems may not use newline delimiters and will primarily store text files with lines separated as fixed- or variable-length records. Text file: "Text file" refers to a type of container, while plain text refers to a type of content. At a generic level of description, there are two kinds of computer files: text files and binary files. Data storage: Because of their simplicity, text files are commonly used for storage of information. They avoid some of the problems encountered with other file formats, such as endianness, padding bytes, or differences in the number of bytes in a machine word. Further, when data corruption occurs in a text file, it is often easier to recover and continue processing the remaining contents. A disadvantage of text files is that they usually have low entropy, meaning that the information occupies more storage than is strictly necessary. Data storage: A simple text file may need no additional metadata (other than knowledge of its character set) to assist the reader in interpretation. A text file may contain no data at all, which is the case for a zero-byte file. Encoding: The ASCII character set is the most common compatible subset of character sets for English-language text files, and is generally assumed to be the default file format in many situations. It covers American English, but for the British pound sign, the euro sign, or characters used outside English, a richer character set must be used. In many systems, this is chosen based on the default locale setting of the computer the file is read on. Prior to UTF-8, this traditionally meant single-byte encodings (such as ISO-8859-1 through ISO-8859-16) for European languages and wide character encodings for Asian languages. Encoding: Because encodings necessarily have only a limited repertoire of characters, often very small, many are only usable to represent text in a limited subset of human languages. Unicode is an attempt to create a common standard for representing all known languages, and most known character sets are subsets of the very large Unicode character set. Although there are multiple character encodings available for Unicode, the most common is UTF-8, which has the advantage of being backwards-compatible with ASCII; that is, every ASCII text file is also a UTF-8 text file with identical meaning. UTF-8 also has the advantage that it is easily auto-detectable. Thus, a common operating mode of UTF-8-capable software, when opening files of unknown encoding, is to try UTF-8 first and fall back to a locale-dependent legacy encoding when it definitely isn't UTF-8.
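A minimal sketch of that fallback strategy, assuming a hypothetical application that chooses "latin-1" as its locale-dependent legacy encoding:

```python
# Try a strict UTF-8 decode first; fall back to a legacy single-byte
# encoding only when the bytes are definitely not valid UTF-8.
def read_text(path, legacy_encoding="latin-1"):
    with open(path, "rb") as f:
        data = f.read()
    try:
        return data.decode("utf-8")      # strict: raises on invalid UTF-8
    except UnicodeDecodeError:
        return data.decode(legacy_encoding)
```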
Formats: On most operating systems, the name text file refers to a file format that allows only plain text content with very little formatting (e.g., no bold or italic types). Such files can be viewed and edited on text terminals or in simple text editors. Text files usually have the MIME type text/plain, usually with additional information indicating the encoding. Formats: Microsoft Windows text files MS-DOS and Microsoft Windows use a common text file format, with each line of text separated by a two-character combination: carriage return (CR) and line feed (LF). It is common for the last line of text not to be terminated with a CR-LF marker, and many text editors (including Notepad) do not automatically insert one on the last line. Formats: On Microsoft Windows operating systems, a file is regarded as a text file if the suffix of the name of the file (the "filename extension") is .txt. However, many other suffixes are used for text files with specific purposes. For example, source code for computer programs is usually kept in text files that have file name suffixes indicating the programming language in which the source is written. Formats: Most Microsoft Windows text files use ANSI, OEM, Unicode or UTF-8 encoding. What Microsoft Windows terminology calls "ANSI encodings" are usually single-byte ISO/IEC 8859 encodings (i.e., "ANSI" in the Microsoft Notepad menus really means the system code page, a non-Unicode legacy encoding), except in locales such as Chinese, Japanese and Korean that require double-byte character sets. ANSI encodings were traditionally used as default system locales within Microsoft Windows before the transition to Unicode. By contrast, OEM encodings, also known as DOS code pages, were defined by IBM for use in the original IBM PC text mode display system. They typically include graphical and line-drawing characters common in DOS applications. "Unicode"-encoded Microsoft Windows text files contain text in the UTF-16 Unicode Transformation Format. Such files normally begin with a byte order mark (BOM), which communicates the endianness of the file content. Although UTF-8 does not suffer from endianness problems, many Microsoft Windows programs (e.g., Notepad) prepend the contents of UTF-8-encoded files with a BOM, to differentiate UTF-8 from other 8-bit encodings. Formats: Unix text files On Unix-like operating systems, the text file format is precisely described: POSIX defines a text file as a file that contains characters organized into zero or more lines, where lines are sequences of zero or more non-newline characters plus a terminating newline character, normally LF. Additionally, POSIX defines a printable file as a text file whose characters are printable or space or backspace according to regional rules. This excludes most control characters, which are not printable. Formats: Apple Macintosh text files Prior to the advent of macOS, the classic Mac OS system regarded the content of a file (the data fork) as a text file when its resource fork indicated that the type of the file was "TEXT". Lines of classic Mac OS text files are terminated with CR characters. Being a Unix-like system, macOS uses the Unix format for text files. The Uniform Type Identifier (UTI) used for text files in macOS is "public.plain-text"; additional, more specific UTIs are "public.utf8-plain-text" for UTF-8-encoded text, "public.utf16-external-plain-text" and "public.utf16-plain-text" for UTF-16-encoded text, and "com.apple.traditional-mac-plain-text" for classic Mac OS text files.
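To make the three line-ending conventions above concrete, here is a small sketch; the sample strings are invented, and bytes.splitlines() is used because it recognizes CR, LF and CR-LF alike:

```python
# The raw line terminators used by each platform family described above.
conventions = {
    "Windows / MS-DOS (CR LF)": b"line one\r\nline two\r\n",
    "Unix and macOS (LF)":      b"line one\nline two\n",
    "Classic Mac OS (CR)":      b"line one\rline two\r",
}

for name, raw in conventions.items():
    # splitlines() understands all three terminator conventions.
    print(name, "->", raw.splitlines())
```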
Rendering: When opened by a text editor, human-readable content is presented to the user. This often consists of the file's plain text visible to the user. Depending on the application, control codes may be rendered either as literal instructions acted upon by the editor or as visible escape characters that can be edited as plain text. Though there may be plain text in a text file, control characters within the file (especially the end-of-file character) can keep the plain text from being displayed by a particular method.
**Self-handicapping** Self-handicapping: Self-handicapping is a cognitive strategy by which people avoid effort in the hope of keeping potential failure from hurting self-esteem. It was first theorized by Edward E. Jones and Steven Berglas, according to whom self-handicaps are obstacles created, or claimed, by the individual in anticipation of failing performance. Self-handicapping can be seen as a method of preserving self-esteem, but it can also be used for self-enhancement and to manage the impressions of others. This conservation or augmentation of self-esteem is due to changes in causal attributions, or the attributions for success and failure, that self-handicapping affords. There are two methods that people use to self-handicap: behavioral and claimed self-handicaps. People withdraw effort or create obstacles to success so they can maintain public and private self-images of competence. Self-handicapping: Self-handicapping is a widespread behavior among humans that has been observed in a variety of cultures and geographic areas. For instance, students frequently engage in self-handicapping behavior to avoid feeling bad about themselves if they do not perform well in class. Self-handicapping behavior has also been observed in the business world. The effects of self-handicapping can be both large and small, and found in virtually any environment wherein people are expected to perform. Overview and relevance: The first method people use to self-handicap is making a task harder for themselves, for fear of not successfully completing it, so that if they do fail, they can simply place the blame on the obstacles rather than on themselves. This is known to researchers as behavioral handicapping, in which the individual actually creates obstacles to performance. Examples of behavioral handicaps include alcohol consumption, the selection of unattainable goals, and refusal to practice a task or technique (especially in sports and the fine arts). Related behaviors include procrastination, self-fulfilling prophecies of negative expectations, learned helplessness, success avoidance, failures in self-regulation, addictions and risky behaviors. Overview and relevance: The second way that people self-handicap is by coming up with justifications for their potential failures, so that if they do not succeed in the task, they can point to their excuses as the reasons for their failures. This is known as claimed self-handicapping, in which the individual merely states that an obstacle to performance exists. Examples of claimed self-handicaps include declarations that one is experiencing physical symptoms. When people engage in these behaviors, they do so in order to protect their own self-esteem or to reduce or inhibit unpleasant emotions. These patterns of behavior fall under different categories of personality disorders, including dependent, borderline, and obsessive personality disorders. Overview and relevance: Self-handicapping behavior allows individuals to externalize failures but internalize success, accepting credit for achievements while keeping excuses available for failings. Individuals who showed signs of unstable self-esteem were more likely to exhibit self-handicapping behaviors in an attempt to externalize failure and internalize success through their actions or their choice of performance setting.
An example of self-handicapping is the student who spends the night before an important exam partying rather than studying. The student fears failing his exam and appearing incapable. In partying the night before the exam, the student has engaged in self-defeating behavior and increased the likelihood of poor exam performance. However, in the event of failure, the student can offer fatigue and a hangover, rather than lack of ability, as plausible explanations. Furthermore, should the student receive positive feedback about his exam, his achievement is enhanced by the fact that he succeeded despite the handicap. When faced with the possibility of failure, students exhibit behaviors such as reducing overall effort, not allotting the proper amount of time to work, or postponing and procrastinating their work. The end goal is to find a way to blame academic failures on anything but their own abilities (Torok et al., 2018). These findings run counter to Festinger's social comparison theory (Festinger, 1954), which holds that it is a fundamental human motive to seek information from one's environment and gain feedback about one's capabilities, rather than setting oneself up for failure and refusing to take the blame. Overview and relevance: A theory more closely related to the findings on self-handicapping, with a similar background and basis, is Covington's self-worth theory. According to Covington's theory, schools today have what is called a "zero-sum scoring system": the rewards available in a classroom are limited, and when one student wins, other students are always meant to lose (Covington, 1992). Individuals' self-worth is based on their perceived performance and abilities. Covington and Omelich describe effort as a "double-edged sword": students feel a sense of urgency to make an effort in order to avoid being punished by their instructor, which causes feelings of negativity and guilt; at the same time, making an effort carries the chance of humiliation or shame if or when the effort results in failure. If their effort proves unsuccessful, they believe they will be seen as an unsuccessful person. According to Covington, this means that students have only two choices. The first is to refuse to make an effort and accept the negativity of failure or perceived punishment. The second is to make an effort, which creates a sense of vulnerability and opens the door to being judged as unintelligent or lacking the proper abilities. This is why so many students use the self-handicapping mechanism as a last-resort effort to protect themselves and their perceived positive self-image. Overview and relevance: Individual differences People differ in the extent to which they self-handicap, and most research on individual differences has used the Self-Handicapping Scale (SHS). The SHS was developed as a means of measuring individuals' tendency to employ excuses or create handicaps as a means of protecting their self-esteem. Research to date shows that the SHS has adequate construct validity. For example, individuals who score high on the SHS put in less effort and practice less when concerned about their ability to perform well in a given task.
They are also more likely than those rated as low self-handicappers (LSH) to mention obstacles or external factors that may hinder their success prior to performing. A number of characteristics have been related to self-handicapping (e.g. hypochondriasis), and research suggests that those more prone to self-handicapping may differ motivationally from those who do not rely on such defensive strategies. For example, fear of failure, a heightened sensitivity to shame and embarrassment upon failure, motivates self-handicapping behavior. Students who fear failure are more likely to adopt performance goals in the classroom, or goals focused on demonstrating competence or avoiding the demonstration of incompetence; such goals heighten one's sensitivity to failure. A student, for example, may approach course exams with the goal of not performing poorly, as this would suggest a lack of ability. To avoid ability attributions and the shame of failure, the student fails to adequately prepare for an exam. While this may provide temporary relief, it renders the student's conceptions of his own ability more uncertain, resulting in further self-handicapping. Overview and relevance: Gender differences While research suggests that claimed self-handicaps are used by men and women alike, several studies have reported significant differences. While some research assessing differences in reported self-handicapping has revealed no gender differences, or greater self-handicapping among females, the vast majority of research suggests that males are more inclined to behaviourally self-handicap. These differences are further explained by the different value men and women ascribe to the concept of effort. Major theoretical approaches: The roots of research on self-handicapping can be traced back to Adler's studies of self-esteem. In the late 1950s, Goffman and Heider published research concerning the manipulation of outward behavior for the purpose of impression management. It was not until 30 years later that self-handicapping behavior was attributed to internal factors; until this point, self-handicapping only encompassed the use of external factors, such as alcohol and drugs. Self-handicapping is usually studied in an experimental setting, but is sometimes studied in an observational environment. Major theoretical approaches: Previous research has established that self-handicapping is motivated by uncertainty about one's ability or, more generally, anticipated threats to self-esteem. Self-handicapping can be exacerbated by self-presentational concerns but also occurs in situations where such concerns are at a minimum. Major empirical findings: Experiments on self-handicapping have depicted the reasons why people self-handicap and the effects that it has on them. Self-handicapping has been observed in both laboratory and real-world settings. Studying the psychological and physical effects of self-handicapping has allowed researchers to witness the dramatic effects that it has on attitude and performance. Major empirical findings: Jones and Berglas gave people positive feedback following a problem-solving test, regardless of actual performance. Half the participants had been given fairly easy problems, while the others were given difficult problems. Participants were then given the choice between a "performance-enhancing drug" and a drug that would inhibit performance.
Those participants who received the difficult problems were more likely to choose the impairing drug, while participants who faced easy problems were more likely to choose the enhancing drug. It is argued that the participants presented with hard problems, believing that their success had been due to chance, chose the impairing drug because they were looking for an external attribution (what might be called an "excuse") for expected poor performance in the future, as opposed to an internal attribution. More recent research finds that, generally, people are willing to use handicaps to protect their self-esteem (e.g., discounting failings) but are more reluctant to employ them for self-enhancement (e.g., to take further credit for their success). Major empirical findings: Rhodewalt, Morf, Hazlett, and Fairfield (1991) selected participants who scored high or low on the Self-Handicapping Scale (SHS) and who had high or low self-esteem. They presented participants with a handicap and then with success or failure feedback, and asked participants to make attributions for their performance. The results showed that both self-protection and self-enhancement occurred, but only as a function of level of self-esteem and level of tendency to self-handicap. Participants who were high self-handicappers, regardless of their level of self-esteem, used the handicap as a means of self-protection, but only those participants with high self-esteem used the handicap to self-enhance. In a further study, Rhodewalt (1991) presented the handicap to only half of the participants and gave success and failure feedback. The results provided evidence for self-protection but not for self-enhancement. Participants in the failure-feedback, handicap-absent group attributed their failures to their own lack of ability and reported lower self-esteem than those in the handicap-present, failure-feedback condition. Furthermore, the handicap-present failure group reported levels of self-esteem equal to those of the successful group. This evidence highlights the importance of self-handicaps in self-protection, although it offers no support for the handicap acting to self-enhance. Another experiment, by Martin Seligman and colleagues, examined whether there was a correlation between explanatory styles and the performance of swimmers. After being given false bad times on their preliminary events, the swimmers who justified their poor performance to themselves in a pessimistic way did worse on subsequent performances. In contrast, the subsequent performances of those swimmers who had more optimistic attributions concerning their poor swimming times were not affected. Those who had positive attributions were more likely to succeed after being given false times because they were self-handicapping: they attributed their failure to an external force rather than blaming themselves. Therefore, their self-esteem remained intact, which led to their success in subsequent events. This experiment demonstrates the positive effects that self-handicapping can have on an individual, because when the swimmers attributed the failure to an external factor, they did not internalize the failure and let it psychologically affect them. Major empirical findings: Previous research has looked at the consequences of self-handicapping and has suggested that self-handicapping leads to a more positive mood (at least in the short term), or at least guards against a drop in positive mood after failure.
Thus, self-handicapping may serve as a means of regulating one's emotions in the course of protecting one's self-esteem. However, based on past evidence that positive mood motivates self-protective attributions for success and failure and increases the avoidance of negative feedback, recent research has focused on mood as an antecedent to self-handicapping, expecting positive mood to increase self-handicapping behaviour. Results have shown that people who are in a positive mood are more likely to engage in self-handicapping, even at the cost of jeopardizing future performance. Major empirical findings: Research suggests that among those who self-handicap, self-imposed obstacles may relieve the pressure of a performance and allow one to become more engaged in a task. While this may enhance performance in some situations for some individuals, in general, research indicates that self-handicapping is negatively associated with performance, self-regulated learning, persistence and intrinsic motivation. Additional long-term costs of self-handicapping include worse health and well-being, more frequent negative moods and higher use of various substances. Zuckerman and Tsai assessed self-handicapping, well-being, and coping among college students on two occasions over several months. Self-handicapping assessed on the first occasion predicted coping with problems by denial, blaming others and criticizing oneself, as well as depression and somatic complaints. Depression and somatic complaints also predicted subsequent self-handicapping. Thus, the use of self-handicapping may lead not only to uncertainty as to one's ability but also to ill-being, which in turn may lead to further reliance on self-handicapping. Applications: There are many real-world applications of this concept. For example, if people predict they are going to perform poorly on tasks, they create obstacles, such as taking drugs or consuming alcohol, so that they feel they have diverted the blame from themselves if they actually do fail. In addition, people also self-handicap by creating ready-made excuses in case they fail. For example, if a student feels that he is going to perform badly on a test, he might make up an excuse for his potential failure, such as telling his friends that he does not feel well the morning of the test. Applications: Occurrence in sports Because students in Physical Education (PE) are required to display their physical abilities overtly, and incompetence can be readily observed by others, previous research has suggested that PE is an ideal setting in which to observe self-handicapping. Because of its prevalence in the sporting world, self-handicapping behaviour has become of interest to sports psychologists interested in increasing sports performance. A study published in 2017 showed that self-esteem had a negative effect on self-handicapping; it also found that self-esteem had a positive effect on mastery goals and a negative effect on performance-avoidance goals. The findings of this study suggest that improving an individual's self-esteem and working towards mastery goals, while lowering the number of performance-avoidance goals, should be pertinent strategies for reducing self-handicapping in physical education.
Recent research has examined the relationship between behavioural and claimed self-handicaps and athletic performance, as well as the effects self-handicapping has on anxiety and fear of failure before athletic performance. Controversies: One controversy was revealed in a study done at the University of Wyoming. Previous research indicated a negative correlation between self-handicapping behaviors and boosting one's self-esteem; it had also been shown that people who focus on their own positive attributes are less likely to self-handicap. This study, however, demonstrates that the claim is only partially accurate, because the reduction in self-handicapping is apparent only in an area unrelated to the present self-esteem risk. As a result, the attempt to protect self-esteem becomes a detriment to future success in that area.
**Cambering** Cambering: Cambering is a phenomenon typically seen at a valley crest or plateau margin, whereby blocks of competent strata such as sandstone stretch, tilt or rotate with respect to underlying incompetent rock layers such as clay or mudstone. It results from the weaker underlying strata deforming under the weight of the strata above them. Cambering is associated with valley bulging and the development of gulls on the upper slopes. Cambering: Valley bulging is the development of an anticlinal structure in the underlying weaker strata, the long axis of which is broadly coincident with the orientation of the valley. Valley bulging was first described in England within the upper Derwent catchment in Derbyshire and is also encountered within the Cotswolds. Cambering: Gulls are fractures in competent strata that typically form parallel to a valley side in association with cambering. They occur on a range of scales, with widths from millimetres to tens of metres. The gulls may be voids open to the sky, or they may be partially or wholly filled with unconsolidated material such as earth or brecciated rock. Multiple gulls and areas of cambering occur in the Cotswolds, where, for example, the limestones of the Inferior Oolite overlie relatively weak Liassic mudstones. The development of one or more of this suite of features has been linked to the rapid incision of a valley through competent strata into less competent strata below, particularly in periglacial conditions. Recognition of these features is important for engineering geologists advising developers of physical infrastructure such as roads, bridges and dams in such areas.
**Tricin** Tricin: Tricin is a chemical compound. It is an O-methylated flavone, a type of flavonoid. It can be found in rice bran and sugarcane. Glycosides: Tricin 4'-glucoside (tricin 4'-O-beta-D-glucopyranoside, CAS number 71855-50-0), tricin 5-glucoside (tricin 5-O-beta-D-glucopyranoside, CAS number 32769-00-9), and tricin 7-O-glucoside (tricin 7-O-beta-D-glucopyranoside, CAS number 32769-01-0). Biosynthesis: The biosynthesis of flavones has not yet been elucidated in full; however, most of the mechanistic and enzymatic steps have been discovered and studied. In biosynthesizing tricin, there is first stepwise addition of malonyl-CoA via the polyketide pathway and p-coumaroyl-CoA via the phenylpropanoid pathway. These additions are mediated by the sequential action of chalcone synthase and chalcone isomerase to yield naringenin chalcone and then the flavanone naringenin, respectively. CYP93G1 of the CYP450 superfamily in rice then desaturates naringenin into apigenin. After this step, it is proposed that flavonoid 3',5'-hydroxylase (F3'5'H) converts apigenin into tricetin. Upon formation of tricetin, 3'-O-methyltransferase and 5'-O-methyltransferase add methoxy groups to tricetin to form tricin. Other compounds formed from tricin: Three flavonolignans derived from tricin have been isolated from the oat Avena sativa.
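As a compact restatement of the proposed pathway, the sketch below lists each step as a (substrate, enzyme, product) tuple; the steps follow the prose above, and the tuple representation itself is purely illustrative:

```python
# The tricin biosynthesis steps described above, written as
# (substrate, enzyme, product) tuples. Purely illustrative.
PATHWAY = [
    ("malonyl-CoA + p-coumaroyl-CoA", "chalcone synthase",  "naringenin chalcone"),
    ("naringenin chalcone",           "chalcone isomerase", "naringenin"),
    ("naringenin",                    "CYP93G1",            "apigenin"),
    ("apigenin",                      "F3'5'H (proposed)",  "tricetin"),
    ("tricetin",                      "3'- and 5'-O-methyltransferases", "tricin"),
]

for substrate, enzyme, product in PATHWAY:
    print(f"{substrate} --[{enzyme}]--> {product}")
```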
**Xerox art** Xerox art: Xerox art (sometimes, more generically, called copy art, electrostatic art, scanography or xerography) is an art form that began in the 1960s. Prints are created by putting objects on the glass, or platen, of a copying machine and pressing "start" to produce an image. If the object is not flat, or the cover does not totally cover the object, or the object is moved, the resulting image is distorted in some way. The curvature of the object, the amount of light that reaches the image surface, and the distance of the cover from the glass all affect the final image. Often, with proper manipulation, rather ghostly images can be made. Basic techniques include: Direct Imaging, the copying of items placed on the platen (a normal copy); Still Life Collage, a variation of direct imaging with items placed on the platen in a collage format focused on what is in the foreground and background; Overprinting, the technique of constructing layers of information, one over the previous, by printing onto the same sheet of paper more than once; Copy Overlay, a technique of working with or interfering in the color separation mechanism of a color copier; Colorizing, varying color density and hue by adjusting the exposure and color balance controls; Degeneration, making a copy of a copy so that the image degrades as successive copies are made; and Copy Motion, the creation of effects by moving an item or image on the platen during the scanning process. Each machine also creates different effects. Accessible art: Xerox art appeared shortly after the first Xerox copying machines were made. It is often used in collage, mail art and book art. Publishing collaborative mail art in small editions of Xerox art and mailable book art was the purpose of the International Society of Copier Artists (I.S.C.A.), founded by Louise Odes Neaderland. Accessible art: Throughout the history of copy art, San Francisco and Rochester are mentioned frequently. Rochester was known as the Imaging Capital of the World with Eastman Kodak and Xerox, while many artists with innovative ideas created cutting-edge works in San Francisco. Alongside the computer boom, a copy art explosion was taking place. Copy shops were springing up all over San Francisco, and access to copiers made it possible to create inexpensive art of unique imagery. Multiple prints of assemblage and collage meant artists could share work more freely. Print on demand meant making books and magazines at the corner copy shop, without censorship and with only a small outlay of funds. Comic book artists could quickly use parts of their work over and over. Early history 1960s–1970s: The first artists recognized to make copy art are Charles Arnold, Jr., and Wallace Berman. Charles Arnold, Jr., an instructor at Rochester Institute of Technology, made the first photocopies with artistic intent in 1961, using a large Xerox camera on an experimental basis. Berman, called the "father" of assemblage art, would use a Verifax photocopy machine (Kodak) to make copies of images, which he would often juxtapose in a grid format. Berman was influenced by his San Francisco Beat circle and by Surrealism, Dada, and the Kabbalah. Sonia Landy Sheridan began teaching the first course in the use of copiers at the Art Institute of Chicago in 1970. In the 1960s and 1970s, Esta Nesbitt was one of the earliest artists experimenting with Xerox art. She invented three xerography techniques, named transcapsa, photo-transcapsa, and chromacapsa.
Nesbitt worked closely with Anibal Ambert and Merle English at Xerox Corporation, and the company sponsored her art research from 1970 until 1972. Seth Siegelaub and Jack Wendler made Untitled (Xerox Book) with artists Carl Andre, Robert Barry, Douglas Huebler, Joseph Kosuth, Sol LeWitt, Robert Morris, and Lawrence Weiner in 1968. Copy artists' dependence upon the same machines does not mean that they share a common style or aesthetic. Artists as various as Ian Burn (a conceptual/process artist who made another Xerox Book in 1968), Laurie-Rae Chamberlain (a punk-inspired colour Xeroxer exhibiting in the mid-1970s) and Helen Chadwick (a feminist artist using her own body as subject matter in the 1980s) have employed photocopiers for very different purposes. Early history 1960s–1970s: Other artists who have made significant use of the machines include Carol Key, Sarah Willis, Joseph D. Harris, Tyler Moore, and the Copyart Collective of Camden, as well as: in continental Europe, Guy Bleus, Alighiero Boetti (Nove Xerox AnneMarie, 1969), Bruno Munari (Xerografie series, begun in 1963), M. Vänçi Stirnemann, Vittore Baroni, and Piermario Ciani; in the UK, Graham Harwood, Tim Head, David Hockney, Alison Marchant, and Russell Mills; in Brazil, Paulo Bruscky, León Ferrari, Hudinilson Jr., Eduardo Kac, Letícia Parente, and Mário Ramiro; in Canada, Evergon; and in the US, Pati Hill, Ginny Lloyd, Tom Norton, and Sonia Landy Sheridan. In the mid-1970s Pati Hill did art experiments with an IBM copier. Hill's resulting xerox artwork was exhibited at the Centre Pompidou, Paris, the Musée d'Art Moderne de la Ville de Paris, and the Stedelijk Museum, Amsterdam, among other venues in Europe and the US. Recognition of the art form: San Francisco had an active Xerox arts scene that started in 1976 at the LaMamelle gallery with the All Xerox exhibit, and in 1980 the International Copy Art Exhibition, curated and organized by Ginny Lloyd, was also held at the LaMamelle gallery. The exhibition traveled to San Jose, California, and Japan. Lloyd also made the first copy art billboard (the first of three) with a grant from the Eyes and Ears Foundation. Recognition of the art form: A gallery named Studio 718 moved into the Beat poet area of San Francisco's North Beach neighborhood. It shared space in part with Postcard Palace, where several copy artists sold postcard editions; the space also housed a Xerox 6500. At around the same time, color copy calendars produced in multiple editions by Barbara Cushman sold at her store and gallery, A Fine Hand. Recognition of the art form: In 1980, Marilyn McCray curated the Electroworks Exhibit held at the Cooper-Hewitt Museum in New York and the International Museum of Photography at George Eastman House. On view at the Cooper-Hewitt were more than 250 examples of prints, limited-edition books, graphics, animation, textiles, and 3-D pieces produced by artists and designers. Galeria Motivation of Montreal, Canada, held an exhibit of copy art in 1981. PostMachina, an exhibit in Bologna, Italy, held in 1984, featured copy art works. In May 1987, artist and curator George Muhleck wrote in Stuttgart about the international exhibition "Medium: Photocopie" that it inquired into "new artistic ways of handling photocopy." The book which accompanied the exhibition was sponsored mainly by the Goethe Institut of Montreal, with additional support from the Ministère des Affaires Culturelles du Québec. Recognition of the art form: The complete collection of I.S.C.A.
Quarterlies is housed at the Jaffe Book Arts Collection of the Special Collections of the Wimberly Library at Florida Atlantic University in Boca Raton, Florida. The collection began in 1989 with several volumes donated by the Bienes Museum of the Modern Book, in Fort Lauderdale, FL. The Jaffe hosted an exhibition in 2010 of copy art by Ginny Lloyd, showcasing her works and copy art collection. She lectures and teaches workshops at the Jaffe on copy art history and techniques. She previously taught the workshop in 1981 at Academie Aki, Other Books and So Archive, and the Jan Van Eyck Academie in the Netherlands; the Image Resource Center in Cleveland; and the University of California, Berkeley. Recognition of the art form: In 2017–2018, the Whitney Museum of American Art in New York presented Experiments in Electrostatics: Photocopy Art from the Whitney's Collection, 1966–1986, organized by curatorial fellow Michelle Donnelly. Current artwork: Copiers add to the arts, as can be seen in surrealist Jan Hathaway's combining of color xerography with other media, Carol Heifetz Neiman's layering of Prismacolor pencil through successive runs of a color photocopy process (1988–1990), or R.L. Gibson's use of large-scale xerography, such as in Psychomachia (2010). Current artwork: In 1991, independent filmmaker Chel White completed a 4-minute animated film titled "Choreography for Copy Machine (Photocopy Cha Cha)". All of the film's images were created solely by using the unique photographic capabilities of a Sharp mono-colour photocopier to generate sequential pictures of hands, faces, and other body parts. Layered colors were created by shooting the animation through photographic gels. The film achieves a dream-like aesthetic with elements of the sensual and the absurd. The Berlin International Film Festival describes it as "a swinging essay about physiognomy in the age of photo-mechanical reproduction." The Austin Film Society dubs it "Doubtlessly the best copy machine art with delightfully rhythmic sequences of images, all to a cha-cha-cha beat." The film screened in a special program at the 2001 Sundance Film Festival, and was awarded Best Animated Short Film at the 1992 Ann Arbor Film Festival. Manufacturers of the machines are an obvious source of funding for artistic experimentation with copiers, and such companies as Rank, Xerox, Canon and Selex have been willing to lend machines, sponsor shows and pay for artists-in-residence programs.
**Glutaryl chloride** Glutaryl chloride: Glutaryl chloride or pentanedioyl dichloride is an organic compound with the formula C5H6Cl2O2, or (CH2)3(COCl)2. It is the diacid chloride derivative of glutaric acid. It is a colorless liquid although commercial samples can appear darker.
**Solar cycle (calendar)** Solar cycle (calendar): The solar cycle is a 28-year cycle of the Julian calendar, and a 400-year cycle of the Gregorian calendar, with respect to the week. It occurs because leap years occur every 4 years, typically observed by adding a day to the month of February, making it February 29th. There are 7 possible days of the week on which a year can begin, and leap years repeat every 4 years, making a 7 × 4 = 28-year sequence. This cycle also occurs in the Gregorian calendar, but it is interrupted by years such as 1800, 1900, 2100, 2200, 2300 and 2500, which are divisible by four but which are common years. This interruption has the effect of skipping 16 years of the solar cycle between February 28 and March 1. Because the Gregorian cycle of 400 years has exactly 146,097 days, i.e. exactly 20,871 weeks, one can say that the Gregorian so-called solar cycle lasts 400 years. Calendar years are usually marked by Dominical letters indicating the first Sunday in a new year, thus the term solar cycle can also refer to a repeating sequence of Dominical letters. Unless a leap year is skipped by one of the Gregorian century exceptions, a sequence of calendars is reused every 28 years. Sun-based calendars are first thought to have been used by the Egyptians, who based theirs on the annual heliacal rising of the Dog Star (Sirius) and the flooding of the Nile River.
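The 400-year figure can be verified directly from the leap-year rule. The following Python sketch (illustrative, not part of the original article) counts the days in 400 consecutive Gregorian years and confirms they form a whole number of weeks, which is why weekday patterns repeat exactly:

```python
def is_leap(year):
    # Gregorian rule: every 4th year, except centuries not divisible by 400
    # (1900 was a common year; 2000 was a leap year).
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

days = sum(366 if is_leap(y) else 365 for y in range(2001, 2401))
print(days)      # 146097
print(days % 7)  # 0 -> exactly 20,871 weeks, so the cycle closes after 400 years
```

Since 97 of every 400 Gregorian years are leap years, the total is 400 × 365 + 97 = 146,097 days, and 146,097 / 7 = 20,871 weeks exactly.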
**Variable compression ratio** Variable compression ratio: Variable compression ratio (VCR) is a technology to adjust the compression ratio of an internal combustion engine while the engine is in operation. This is done to increase fuel efficiency while under varying loads. Variable compression engines allow the volume above the piston at top dead centre to be changed. Higher loads require lower ratios to increase power, while lower loads need higher ratios to increase efficiency, i.e. to lower fuel consumption. For automotive use this needs to be done as the engine is running, in response to the load and driving demands. The 2019 Infiniti QX50 is the first commercially available vehicle that uses a variable compression ratio engine. Advantages: Gasoline engines have a limit on the maximum pressure during the compression stroke, after which the fuel/air mixture detonates rather than burns. To achieve higher power outputs at the same speed, more fuel must be burned and therefore more air is needed. To achieve this, turbochargers or superchargers are used to increase the inlet pressure. This would result in detonation of the fuel/air mixture unless the compression ratio was decreased, i.e. the volume above the piston made greater. This can be done to a greater or lesser extent, with massive increases in power being possible. The downside of this is that under light loading the engine can lack power and torque. The solution is to be able to vary the inlet pressure and adjust the compression ratio to suit. This gives the best of both worlds: a small, efficient engine capable of great power on demand. In addition, VCR allows free use of different fuels besides petrol, e.g. LPG or ethanol. Advantages: Cylinder displacement is altered by using a hydraulic system connected to the crankshaft, and adjusted according to the load and acceleration required. Production: Variable compression engines have existed for decades, but only in laboratories for the purposes of studying combustion processes. These designs usually have a second adjustable piston set in the head opposing the working piston. Production: In 2018 Infiniti began production of their variable compression turbo engine, which uses a mechanical linkage to achieve the variability. It was installed in their QX50 SUV. The engine can produce any compression ratio from 8:1 to 14:1. The highest torque is achieved at 8:1, giving high acceleration, while the best gas mileage (fuel efficiency) is achieved at 14:1. The electronic engine controller responds to the pressure on the gas pedal, in real time, altering the compression ratio seamlessly. Although this engine has a displacement of 2.0 L and is an inline-four engine, it does not use balance shafts to eliminate the secondary vibrations; it is inherently balanced by the mechanical linkage. Two-stroke engines: Due to the comparative simplicity of cylinder head design (lacking intake valves), variable compression is somewhat easier to implement in two-stroke engines. From the late 1990s onward, models which expand on this idea, such as those from Yamaha, have been available; these dynamically vary the size of the combustion chamber. In the 2000s this technology saw some renewed interest because of its ability to burn a wide range of fuels (e.g. including alcohols), as in the Lotus Omnivore. A much earlier commercialized two-stroke engine, but very small (18 cc) and not powerful enough to be very successful, was the Lohmann engine produced in the early 1950s as a retrofit engine for bicycles [4].
This engine had a one-piece cylinder head and sleeve, whose distance from the crankshaft was adjusted by a jackscrew operated by cables from a twist grip on the handlebar. Compression adjustment was essential to the operation of this engine because it used compression ignition of a fuel mixture which was introduced prior to the compression stroke and which therefore ignited whenever the compression brought it to a sufficient temperature. This meant that the compression needed would vary with air temperature, engine temperature, and fuel type: with too much compression the engine would suffer premature ignition, and with too little it would fail to ignite at all. Thus the operator had to adjust the compression continually as operating conditions varied. The Lohmann engine was produced for only about five years because the control of compression (simultaneously with fuel flow) required considerable practice, and because even at optimal adjustment it provided no more power than a moderately fit rider could provide without assistance. Engine designs: The first VCR engine built and tested was by Harry Ricardo in the 1920s. This work led to him devising the octane rating system that is still in use today. Many companies have been undertaking their own research into VCR engines, including Saab, Nissan, Volvo, PSA/Peugeot-Citroën and Renault. The 2019 Infiniti QX50 is available with a production version of the turbocharged variable compression engine. Engine designs: Peugeot MCE-5 The Peugeot design works by varying the effective length of the con-rods connecting the piston to the crank. When the con-rod is shorter, the compression ratio is lower, and vice versa. On the left-hand side of the diagram is the conventional piston of an internal combustion engine. On the right is a hydraulic cylinder with a double-acting piston. This acts through a rod-crank system with a gear wheel, whose movement adjusts the effective con-rod length and thus the compression ratio in the left cylinder. Engine designs: Saab SVC SAAB Automobile rekindled interest in variable compression when they introduced their SVC engine to the world at the Geneva motor show in 2000. SAAB had been working with the Office of Advanced Automotive Technologies to produce a modern petrol VCR engine that showed an efficiency comparable with that of a Diesel. The SAAB SVC was an advanced and workable addition to the world of VCR engines, but it never reached production due to the company's bankruptcy in 2011. Engine designs: The design, an implementation of the Larsen VCR engine, consisted of a monobloc head, which contained all of the valve gear, and the crankshaft/crankcase assembly. These parts were connected by a pivot which allowed 4 degrees of movement controlled by a hydraulic actuator. This mechanism allows the distance between the crankshaft centre line and the cylinder crown to be varied. Unlike the Peugeot design, the effective connecting rod length is fixed. Engine designs: A supercharger was chosen in preference to a turbocharger to achieve the necessary response time and high boost pressure. Engine designs: To alter Vc, the SVC 'lowers' the cylinder head closer to the crankshaft. It does this by replacing the typical one-part engine block with a two-part unit, with the crankshaft in the lower block and the cylinders in the upper portion. The two blocks are hinged together at one side (imagine a book, lying flat on a table, with the front cover held an inch or so above the title page).
By pivoting the upper block around the hinge point, the Vc (imagine the air between the front cover of the book and the title page) can be modified. In practice, the SVC adjusts the upper block through a small range of motion, using a hydraulic actuator.
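The ratios quoted above can be translated into concrete volumes via the defining relation CR = (Vd + Vc) / Vc, where Vd is the swept volume of one cylinder and Vc the clearance volume above the piston at top dead centre. The short Python sketch below is illustrative only; the 500 cc per-cylinder figure is an assumption obtained by dividing the stated 2.0 L displacement across four cylinders:

```python
def clearance_volume(swept_cc, compression_ratio):
    # From CR = (Vd + Vc) / Vc it follows that Vc = Vd / (CR - 1).
    return swept_cc / (compression_ratio - 1)

# Assumed per-cylinder swept volume: 2.0 L / 4 cylinders = ~500 cc.
for cr in (8, 14):
    print(f"{cr}:1 -> Vc = {clearance_volume(500, cr):.1f} cc")
# 8:1  -> Vc = 71.4 cc
# 14:1 -> Vc = 38.5 cc
```

Under these assumptions, moving between 8:1 and 14:1 means changing the volume above the piston at top dead centre by only about 33 cc per cylinder, which is why a small change in effective con-rod length (Peugeot) or block tilt (Saab) is enough to cover the whole range.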
**Radical 5** Radical 5: Radical 5 or radical second (乙部) meaning "second" is one of the 6 of the 214 Kangxi radicals that are composed of only one stroke. However, this radical is mainly used to categorize miscellaneous characters not otherwise belonging to any radical, mainly those featuring a hook or fold, and 乙 is the character with the fewest strokes. In the ancient Chinese cyclic character numeral system, 乙 represents the second Celestial stem (天干 tiāngān). In the Kangxi Dictionary, there are 42 characters (out of 49,030) to be found under this radical. Radical 5: In mainland China, 乙, along with 14 other associated indexing components including 乚, is grouped under a new radical 乛 (乛部), which is the 5th principal indexing component in the Table of Indexing Chinese Character Components predominantly adopted by Simplified Chinese dictionaries. Usually, only several out of the 15 variant components are listed under radical 乛 in dictionary indexes. Derived characters: In the Unihan Database, 亀 (the Japanese simplified form of 龜) falls under Radical 5 + 10 strokes, while other variants of 龜 (including Simplified Chinese 龟) fall under Radical 213 (龜 "turtle"), causing an inconsistency. However, in most Japanese dictionaries, 亀 is treated as a variant of Radical 213 (龜) and indexed as Radical 213 + 0 strokes. Sinogram: As an independent character, 乙 is a Jōyō kanji, or a kanji used in writing the Japanese language; it is a secondary-school kanji. It is also used in the Chinese language, where it means "secondary" and is mainly used in compounds.
**Attilio Meucci** Attilio Meucci: Attilio Meucci is an Italian statistician and financial engineer, who specializes in quantitative risk management and quantitative portfolio management. Education: Attilio Meucci earned a BA in Physics from the University of Milan, an MA in Economics from Bocconi University, and a PhD in Mathematics from the University of Milan. Career: Meucci was the chief risk officer at KKR; the chief risk officer and head of portfolio construction at Kepos Capital LP; head of research at Bloomberg LP's portfolio analytics and risk platform; a researcher at POINT, Lehman Brothers' portfolio analytics and risk platform; a trader at the hedge fund Relative Value International; and a consultant at Bain & Co, a strategic consulting firm. Meucci is the founder of Advanced Risk and Portfolio Management (ARPM), under whose umbrella he designed and teaches the six-day Advanced Risk and Portfolio Management Bootcamp (ARPM Bootcamp), and manages the charity One More Reason.
**Endocrine Research** Endocrine Research: Endocrine Research is a peer-reviewed medical journal that covers endocrinology in the broadest context. Subjects of interest include: receptors and mechanisms of action of hormones; methodological advances in the detection and measurement of hormones; and the structure and chemical properties of hormones. Editor: The editor-in-chief of Endocrine Research is Michael Katz (San Antonio, Texas).
**Donepezil** Donepezil: Donepezil, sold under the brand name Aricept among others, is a medication used to treat dementia of the Alzheimer's type. It appears to result in a small benefit in mental function and ability to function. Use, however, has not been shown to change the progression of the disease. Treatment should be stopped if no benefit is seen. It is taken by mouth or via a transdermal patch. Common side effects include nausea, trouble sleeping, aggression, diarrhea, feeling tired, and muscle cramps. Serious side effects may include abnormal heart rhythms, urinary incontinence, and seizures. Donepezil is a centrally acting reversible acetylcholinesterase inhibitor and is structurally unrelated to other anticholinesterase agents. Donepezil was approved for medical use in the United States in 1996. It is available as a generic medication. In 2020, it was the 112th most commonly prescribed medication in the United States, with more than 5 million prescriptions. Medical uses: Alzheimer's disease There is no evidence that donepezil or other similar agents alter the course or progression of Alzheimer's disease. Six-to-twelve-month controlled studies have shown modest benefits in cognition or behavior. The UK National Institute for Clinical Excellence (NICE) recommends donepezil as an option in the management of mild to moderate Alzheimer's disease. The person should, however, be reviewed frequently, and if there is no significant benefit the treatment should be stopped. In 2006, the U.S. Food and Drug Administration (FDA) also approved donepezil for treatment of mild, moderate and severe dementia in Alzheimer's disease. Medical uses: Other Lewy body dementia: Some studies have shown benefits of donepezil for the treatment of cognitive and behavioral symptoms in Lewy body dementia. Traumatic brain injury: Some research suggests an improvement in memory dysfunction in patients with traumatic brain injury with donepezil use. Vascular dementia: Studies have shown that donepezil may improve cognition in patients with vascular dementia, but not overall global functioning. Dementia associated with Parkinson disease: Some evidence suggests that donepezil can improve cognition, executive function, and global status in Parkinson disease dementia. Adverse effects: In clinical trials the most common adverse events leading to discontinuation were nausea, diarrhea, and vomiting. Other side effects included difficulty sleeping, muscle cramps and loss of appetite. Most side effects were observed in patients taking the 23 mg dose compared to 10 mg or lower doses. Side effects are mild and transient in most patients, lasting up to three weeks and usually improving even with continued use. Donepezil, like other cholinesterase inhibitors, can cause nightmares due to enhanced activation of the visual association cortex during REM sleep. Dosing donepezil in the morning can reduce the frequency of nightmares. Adverse effects: Precautions Donepezil should be used with caution in people with heart disease, cardiac conduction disturbances, chronic obstructive pulmonary disease, asthma, severe cardiac arrhythmia and sick sinus syndrome. People with peptic ulcer disease or taking NSAIDs should use it with caution because an increased risk of gastrointestinal bleeding has been noted. Slow heartbeat and fainting in people with heart problems were also seen. These symptoms may appear more frequently when initiating treatment or increasing the donepezil dose.
Although occurrence of seizures is rare, people who have a predisposition to seizures should be treated with caution. If daily donepezil has been suspended for 7 days or less, restarting at the same dose is recommended, while if the suspension lasts longer than 7 days, retitration from 5 mg daily is suggested. Mechanism of action: Donepezil binds and reversibly inactivates the cholinesterases, thus inhibiting hydrolysis of acetylcholine. This increases acetylcholine concentrations at cholinergic synapses. The precise mechanism of action of donepezil in patients with Alzheimer's disease is not fully understood. Certainly, Alzheimer's disease involves a substantial loss of the elements of the cholinergic system, and it is generally accepted that the symptoms of Alzheimer's disease are related to this cholinergic deficit, particularly in the cerebral cortex and other areas of the brain. In addition to its actions as an acetylcholinesterase inhibitor, donepezil has been found to act as a potent agonist of the σ1 receptor (Ki = 14.6 nM), and has been shown to produce specific antiamnestic effects in animals mainly via this action. Some noncholinergic mechanisms have also been proposed. Donepezil upregulates the nicotinic receptors in cortical neurons, adding to its neuroprotective properties. It reversibly inhibits voltage-activated sodium currents, delayed rectifier potassium currents, and fast transient potassium currents, although this action is unlikely to contribute to its clinical effects. Synergy: Donepezil has been claimed to act synergistically with the experimental agents FK962 (CAS 283167-06-6) and FK960 (CAS 133920-70-4), potentially through activation of somatostatinergic neurotransmission. History: Research leading to the development of donepezil began in 1983 at Eisai, and in 1996, Eisai received approval from the United States Food and Drug Administration (FDA) for donepezil under the brand Aricept, which it co-marketed with Pfizer. The team at Eisai was led by Hachiro Sugimoto. As of 2011, Aricept was the world's best-selling Alzheimer's disease treatment. The first generic donepezil became available in November 2010, with the US FDA approval of a formulation prepared by Ranbaxy Labs. Research: Donepezil has been tested in other cognitive disorders, including Lewy body dementia and vascular dementia, but it is not currently approved for these indications. Donepezil has also been found to improve sleep apnea in people with Alzheimer's. It also improves gait in people with mild Alzheimer's. Donepezil has also been studied in people with mild cognitive impairment, schizophrenia, attention deficit disorder, post-coronary artery bypass surgery cognitive impairment, cognitive impairment associated with multiple sclerosis, CADASIL syndrome, and Down syndrome. A three-year National Institutes of Health trial in people with mild cognitive impairment reported that donepezil was superior to placebo in delaying the rate of progression to dementia during the initial 18 months of the study, but this was not sustained at 36 months. In a secondary analysis, a subgroup of individuals with the apolipoprotein E4 genotype showed sustained benefits with donepezil throughout the study. At this time, though, donepezil is not indicated for prevention of dementia. Research: Cognitive enhancement Donepezil has shown mixed results for improving cognitive abilities in healthy individuals.
A 2009 double-blind, placebo-controlled study (n=24) investigating donepezil's effects across a variety of memory tests reported an improvement in spatial memory accuracy both before (90 minutes after dosing) and at theoretical Tmax (210 minutes after dosing). However, a later 2011 paper featuring two double-blind, placebo-controlled experiments evaluating donepezil's effects in older but healthy subjects reported impairment after acute (5 hours after dose) and chronic (4 weeks) donepezil administration. Research: ADHD The addition of donepezil to existing ADHD medications has shown mixed results. In those with Tourette syndrome and ADHD, donepezil may reduce tics while having no effect on ADHD symptoms. Pervasive developmental disorder Donepezil, along with other cholinesterase inhibitors, is suggested as having potential for treating the troublesome behaviors (irritability, hyperactivity, and difficulty with social communication) typically seen in those with pervasive developmental disorder, pervasive developmental disorder not otherwise specified, and autism-spectrum disorder. Research: Anorexia nervosa Donepezil is furthermore suggested as a feasible therapeutic option for anorexia nervosa. Emerging literature reports that a subset of patients suffering from restrictive anorexia nervosa have enhanced habit formation compared with healthy controls. Habit formation is modulated by striatal cholinergic interneurons. Based on the physiopathology of anorexia nervosa, namely in terms of cholinergic deficiencies, the effects of donepezil and other drugs that act as cholinesterase inhibitors could thus be effective in the treatment of the disorder.
**The Science Fiction Film Source Book** The Science Fiction Film Source Book: The Science Fiction Film Source Book is a book by David Wingrove published in 1985. Plot summary: The Science Fiction Film Source Book is a book consisting of a list of science fiction film plot summaries, with information about producers, directors, and more. Reception: Dave Langford reviewed The Science Fiction Film Source Book for White Dwarf #73, and stated that "To pick a random example, the entry on Wizards dismisses it as 'comic-orientated' without even mentioning the influence of Vaughn Bode, or Ian Miller's powerfully effective backgrounds. Nitpicking, though, is a game with no ending."
**Shu Jie Lam** Shu Jie Lam: Shu Jie Lam is a Malaysian-Chinese research chemist specialising in biomolecular engineering. She researches star-shaped polymers designed to attack antibiotic-resistant bacteria (superbugs) as an alternative to conventional antibiotics.
**Alexandrov's uniqueness theorem** Alexandrov's uniqueness theorem: The Alexandrov uniqueness theorem is a rigidity theorem in mathematics, describing three-dimensional convex polyhedra in terms of the distances between points on their surfaces. It implies that convex polyhedra with distinct shapes from each other also have distinct metric spaces of surface distances, and it characterizes the metric spaces that come from the surface distances on polyhedra. It is named after Soviet mathematician Aleksandr Danilovich Aleksandrov, who published it in the 1940s. Statement of the theorem: The surface of any convex polyhedron in Euclidean space forms a metric space, in which the distance between two points is measured by the length of the shortest path from one point to the other along the surface. Within a single shortest path, distances between pairs of points equal the distances between corresponding points of a line segment of the same length; a path with this property is known as a geodesic. Statement of the theorem: This property of polyhedral surfaces, that every pair of points is connected by a geodesic, is not true of many other metric spaces, and when it is true the space is called a geodesic space. The geodesic space formed from the surface of a polyhedron is called its development. Statement of the theorem: The polyhedron can be thought of as being folded from a sheet of paper (a net for the polyhedron) and it inherits the same geometry as the paper: for every point p within a face of the polyhedron, a sufficiently small open neighborhood of p will have the same distances as a subset of the Euclidean plane. The same thing is true even for points on the edges of the polyhedron: they can be modeled locally as a Euclidean plane folded along a line and embedded into three-dimensional space, but the fold does not change the structure of shortest paths along the surface. However, the vertices of the polyhedron have a different distance structure: the local geometry of a polyhedron vertex is the same as the local geometry at the apex of a cone. Any cone can be formed from a flat sheet of paper with a wedge removed from it by gluing together the cut edges where the wedge was removed. The angle of the wedge that was removed is called the angular defect of the vertex; it is a positive number less than 2π. The defect of a polyhedron vertex can be measured by subtracting the face angles at that vertex from 2π. For instance, in a regular tetrahedron, each face angle is π/3, and there are three of them at each vertex, so subtracting them from 2π leaves a defect of π at each of the four vertices. Statement of the theorem: Similarly, a cube has a defect of π/2 at each of its eight vertices. Descartes' theorem on total angular defect (a form of the Gauss–Bonnet theorem) states that the sum of the angular defects of all the vertices is always exactly 4π. In summary, the development of a convex polyhedron is geodesic, homeomorphic (topologically equivalent) to a sphere, and locally Euclidean except for a finite number of cone points whose angular defect sums to 4π. Alexandrov's theorem gives a converse to this description. It states that if a metric space is geodesic, homeomorphic to a sphere, and locally Euclidean except for a finite number of cone points of positive angular defect (necessarily summing to 4π), then there exists a convex polyhedron whose development is the given space.
Moreover, this polyhedron is uniquely defined from the metric: any two convex polyhedra with the same surface metric must be congruent to each other as three-dimensional sets. Limitations: The polyhedron representing the given metric space may be degenerate: it may form a doubly-covered two-dimensional convex polygon (a dihedron) rather than a fully three-dimensional polyhedron. In this case, its surface metric consists of two copies of the polygon (its two sides) glued together along corresponding edges. Limitations: Although Alexandrov's theorem states that there is a unique convex polyhedron whose surface has a given metric, it may also be possible for there to exist non-convex polyhedra with the same metric. An example is given by the regular icosahedron: if five of its triangles are removed, and are replaced by five congruent triangles forming an indentation into the polyhedron, the resulting surface metric stays unchanged. The development of any polyhedron can be described concretely by a collection of two-dimensional polygons together with instructions for gluing them together along their edges to form a metric space, and the conditions of Alexandrov's theorem for spaces described in this way are easily checked. However, the edges where two polygons are glued together could become flat and lie in the interior of faces of the resulting polyhedron, rather than becoming polyhedron edges. (For an example of this phenomenon, see the illustration of four hexagons glued to form an octahedron.) Therefore, even when the development is described in this way, it may not be clear what shape the resulting polyhedron has, what shapes its faces have, or even how many faces it has. Alexandrov's original proof does not lead to an algorithm for constructing the polyhedron (for instance by giving coordinates for its vertices) realizing the given metric space. In 2008, Bobenko and Izmestiev provided such an algorithm. Their algorithm can approximate the coordinates arbitrarily accurately, in pseudo-polynomial time. Related results: One of the first existence and uniqueness theorems for convex polyhedra is Cauchy's theorem, which states that a convex polyhedron is uniquely determined by the shape and connectivity of its faces. Alexandrov's theorem strengthens this, showing that even if the faces are allowed to bend or fold, without stretching or shrinking, then their connectivity still determines the shape of the polyhedron. In turn, Alexandrov's proof of the existence part of his theorem uses a strengthening of Cauchy's theorem by Max Dehn to infinitesimal rigidity. An analogous result to Alexandrov's holds for smooth convex surfaces: a two-dimensional Riemannian manifold whose Gaussian curvature is everywhere positive and totals 4π can be represented uniquely as the surface of a smooth convex body in three dimensions. The uniqueness of this representation is a result of Stephan Cohn-Vossen from 1927, with some regularity conditions on the surface that were removed in later research. Its existence was proven by Alexandrov, using an argument involving limits of polyhedral metrics. Aleksei Pogorelov generalized both these results, characterizing the developments of arbitrary convex bodies in three dimensions. Another result of Pogorelov on the geodesic metric spaces derived from convex polyhedra is a version of the theorem of the three geodesics: every convex polyhedron has at least three simple closed quasigeodesics.
These are curves that are locally straight lines except when they pass through a vertex, where they are required to have angles of less than π on both sides of them. The developments of ideal hyperbolic polyhedra can be characterized in a similar way to Euclidean convex polyhedra: every two-dimensional manifold with uniform hyperbolic geometry and finite area, combinatorially equivalent to a finitely-punctured sphere, can be realized as the surface of an ideal polyhedron.
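Descartes' theorem on total angular defect, used in the statement of the theorem above, is easy to check numerically. The following Python sketch (illustrative, not part of the original article) computes the per-vertex defect, 2π minus the sum of the face angles at the vertex, for the five Platonic solids and confirms that the total is always 4π:

```python
import math

# (face angle at a vertex, faces meeting at each vertex, number of vertices)
platonic = {
    "tetrahedron":  (math.pi / 3,     3, 4),
    "cube":         (math.pi / 2,     3, 8),
    "octahedron":   (math.pi / 3,     4, 6),
    "dodecahedron": (3 * math.pi / 5, 3, 20),
    "icosahedron":  (math.pi / 3,     5, 12),
}
for name, (angle, faces, vertices) in platonic.items():
    defect = 2 * math.pi - faces * angle  # angular defect at one vertex
    total = vertices * defect             # summed over all vertices
    print(f"{name}: total defect = {total / math.pi:.10f} * pi")  # always 4 * pi
```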
**Sexual fetishism** Sexual fetishism: Sexual fetishism or erotic fetishism is a sexual fixation on a nonliving object or nongenital body part. The object of interest is called the fetish; the person who has a fetish for that object is a fetishist. A sexual fetish may be regarded as a non-pathological aid to sexual excitement, or as a mental disorder if it causes significant psychosocial distress for the person or has detrimental effects on important areas of their life. Sexual arousal from a particular body part can be further classified as partialism. While medical definitions restrict the term sexual fetishism to objects or body parts, fetish can, in common discourse, also refer to sexual interest in specific activities. Definitions: In common parlance, the word fetish is used to refer to any sexually arousing stimuli, not all of which meet the medical criteria for fetishism. This broader usage of fetish covers parts or features of the body (including obesity and body modifications), objects, situations and activities (such as smoking or BDSM). Paraphilias such as urophilia, necrophilia and coprophilia have been described as fetishes. Originally, most medical sources defined fetishism as a sexual interest in non-living objects, body parts or secretions. The publication of the DSM-III in 1980 changed that by excluding arousal from body parts in its diagnostic criteria for fetishism. In 1987, a revised edition of the DSM-III (DSM-III-R) introduced a new diagnosis for body part arousal, called partialism. The DSM-IV retained this distinction. Martin Kafka argued that partialism should be merged into fetishism because of overlap between the two conditions, and the DSM-5 subsequently did so in 2013. The ICD-10 definition (World Health Organization's International Classification of Diseases) is still limited to non-living objects. Types: In a review of 48 cases of clinical fetishism in 1983, fetishes included clothing (58.3%), rubber and rubber items (22.9%), footwear (14.6%), body parts (14.6%), leather (10.4%), and soft materials or fabrics (6.3%). A 2007 study counted members of Internet discussion groups with the word fetish in their name. Of the groups about body parts or features, 47% belonged to groups about feet (podophilia), 9% about body fluids (including urophilia, scatophilia, lactaphilia, menophilia, mucophilia), 9% about body size, 7% about hair (hair fetish), and 5% about muscles (muscle worship). Less popular groups focused on navels (navel fetishism), legs, body hair, mouth, and nails, among other things. Of the groups about clothing, 33% belonged to groups about clothes worn on the legs or buttocks (such as stockings or skirts), 32% about footwear (shoe fetishism), 12% about underwear (underwear fetishism), and 9% about whole-body wear such as jackets. Less popular object groups focused on headwear, stethoscopes, wristwear, pacifiers, and diapers (diaper fetishism). Erotic asphyxiation is the use of choking to increase the pleasure in sex. The fetish also has a solitary form, in which a person chokes themselves during masturbation, known as auto-erotic asphyxiation. This usually involves a person being constricted by a homemade device that is tight enough to give them pleasure but not tight enough to suffocate them to death.
This is dangerous because, in seeking greater pleasure, the user may tighten the device too far; with no one present to help, the result can be strangulation. Devotism involves being attracted to body modifications on another person that are the result of amputation. Devotism is only a sexual fetish when the person who has the fetish considers the amputated body part on another person the object of sexual interest. Cause: Fetishism usually becomes evident during puberty, but may develop prior to that. No single cause for fetishism has been conclusively established. Some explanations invoke classical conditioning. In several experiments, men have been conditioned to show arousal to stimuli like boots, geometric shapes or penny jars by pairing these cues with conventional erotica. According to John Bancroft, conditioning alone cannot explain fetishism, because it does not result in fetishism for most people. He suggests that conditioning combines with some other factor, such as an abnormality in the sexual learning process. Theories of sexual imprinting propose that humans learn to recognize sexually desirable features and activities during childhood. Fetishism could result when a child is imprinted with an overly narrow or incorrect concept of a sex object. Imprinting seems to occur during the child's earliest experiences with arousal and desire, and is based on "an egocentric evaluation of salient reward- or pleasure-related characteristics that differ from one individual to another." Neurological differences may play a role in some cases. Vilayanur S. Ramachandran observed that the region processing sensory input from the feet lies immediately next to the region processing genital stimulation, and suggested an accidental link between these regions could explain the prevalence of foot fetishism. In one unusual case, an anterior temporal lobectomy relieved an epileptic man's fetish for safety pins. Various explanations have been put forth for the rarity of female fetishists. Most fetishes are visual in nature, and males are thought to be more sexually sensitive to visual stimuli. Roy Baumeister suggests that male sexuality is unchangeable, except for a brief period in childhood during which fetishism could become established, while female sexuality is fluid throughout life. Diagnosis: The ICD-10 defines fetishism as a reliance on non-living objects for sexual arousal and satisfaction. It is only considered a disorder when fetishistic activities are the foremost source of sexual satisfaction, and become so compelling or unacceptable as to cause distress or interfere with normal sexual intercourse. The ICD's research guidelines require that the preference persists for at least six months, and is markedly distressing or acted on. Under the DSM-5, fetishism is sexual arousal from nonliving objects or specific nongenital body parts, excluding clothes used for cross-dressing (as that falls under transvestic disorder) and sex toys that are designed for genital stimulation. In order to be diagnosed with fetishistic disorder, the arousal must persist for at least six months and cause significant psychosocial distress or impairment in important areas of their life. In the DSM-IV, sexual interest in body parts was distinguished from fetishism under the name partialism (diagnosed as Paraphilia NOS), but it was merged with fetishistic disorder for the DSM-5. The ReviseF65 project has campaigned for the ICD diagnosis to be abolished completely to avoid stigmatizing fetishists.
Sexologist Odd Reiersøl argues that distress associated with fetishism is often caused by shame, and that being subject to diagnosis only exacerbates that. He suggests that, in cases where the individual fails to control harmful behavior, they instead be diagnosed with a personality or impulse control disorder. Treatment: According to the World Health Organization, fetishistic fantasies are common and should only be treated as a disorder when they impair normal functioning or cause distress. Goals of treatment can include elimination of criminal activity, reduction in reliance on the fetish for sexual satisfaction, improving relationship skills, reducing or removing arousal to the fetish altogether, or increasing arousal towards more acceptable stimuli. The evidence for treatment efficacy is limited and largely based on case studies, and no research on treatment for female fetishists exists. Cognitive behavioral therapy is one popular approach. Cognitive behavioral therapists teach clients to identify and avoid antecedents to fetishistic behavior, and substitute non-fetishistic fantasies for ones involving the fetish. Aversion therapy and covert conditioning can reduce fetishistic arousal in the short term, but require repetition to sustain the effect. Multiple case studies have also reported treating fetishistic behavior with psychodynamic approaches. Antiandrogens may be prescribed to lower sex drive. Cyproterone acetate is the most commonly used antiandrogen, except in the United States, where it may not be available. A large body of literature has shown that it reduces general sexual fantasies. Side effects may include osteoporosis, liver dysfunction, and feminization. Case studies have found that the antiandrogen medroxyprogesterone acetate is successful in reducing sexual interest, but can have side effects including osteoporosis, diabetes, deep vein thrombosis, feminization, and weight gain. Some hospitals use leuprorelin and goserelin to reduce libido, and while there is presently little evidence for their efficacy, they have fewer side effects than other antiandrogens. A number of studies support the use of selective serotonin reuptake inhibitors (SSRIs), which may be preferable over antiandrogens because of their relatively benign side effects. Pharmacological agents are an adjunctive treatment, usually combined with other approaches for maximum effect. Relationship counselors may attempt to reduce dependence on the fetish and improve partner communication using techniques like sensate focusing. Partners may agree to incorporate the fetish into their activities in a controlled, time-limited manner, or set aside only certain days to practice the fetishism. If the fetishist cannot sustain an erection without the fetish object, the therapist might recommend orgasmic reconditioning or covert sensitization to increase arousal to normal stimuli (although the evidence base for these techniques is weak). Occurrence: The prevalence of fetishism is not known with certainty. Fetishism is more common in males. In a 2011 study, 30% of men reported fetishistic fantasies, and 24.5% had engaged in fetishistic acts. Of those reporting fantasies, 45% said the fetish was intensely sexually arousing. In a 2014 study, 26.3% of women and 27.8% of men acknowledged any fantasies about "having sex with a fetish or non-sexual object".
A content analysis of the sample's favorite fantasies found that 14% of the male fantasies involved fetishism (including feet, nonsexual objects, and specific clothing), and 4.7% focused on a specific body part other than feet. None of the women's favorite fantasies had fetishistic themes. Another study found that 28% of men and 11% of women reported fetishistic arousal (including feet, fabrics, and objects "like shoes, gloves, or plush toys"). 18% of men in a 1980 study reported fetishistic fantasies. Fetishism to the extent that it becomes a disorder appears to be rare, with less than 1% of general psychiatric patients presenting fetishism as their primary problem. It is also uncommon in forensic populations. History: The word fetish derives from the French fétiche, which comes from the Portuguese feitiço ("spell"), which in turn derives from the Latin facticius ("artificial") and facere ("to make"). A fetish is an object believed to have supernatural powers, or in particular, a human-made object that has power over others. Essentially, fetishism is the attribution of inherent value or powers to an object. Fétichisme was first used in an erotic context by Alfred Binet in 1887. A slightly earlier concept was Julien Chevalier's azoophilie. History: Early perspectives on cause Alfred Binet suspected fetishism was the pathological result of associations. He argued that, in certain vulnerable individuals, an emotionally rousing experience with the fetish object in childhood could lead to fetishism. Richard von Krafft-Ebing and Havelock Ellis also believed that fetishism arose from associative experiences, but disagreed on what type of predisposition was necessary. The sexologist Magnus Hirschfeld followed another line of thought when he proposed his theory of partial attractiveness in 1920. According to his argument, sexual attractiveness never originates in a person as a whole but always is the product of the interaction of individual features. He stated that nearly everyone had special interests and thus suffered from a healthy kind of fetishism, while only detaching and overvaluing of a single feature resulted in pathological fetishism. Today, Hirschfeld's theory is often mentioned in the context of gender role specific behavior: females present sexual stimuli by highlighting body parts, clothes or accessories; males react to them. History: Sigmund Freud believed that sexual fetishism in men derived from the unconscious fear of the mother's genitals, from men's universal fear of castration, and from a man's fantasy that his mother had had a penis but that it had been cut off. He did not discuss sexual fetishism in women. In 1951, Donald Winnicott presented his theory of transitional objects and phenomena, according to which childish actions like thumb sucking and objects like cuddly toys are the source of manifold adult behavior, amongst many others fetishism. He speculated that the child's transitional object became sexualized. Other animals: Human fetishism has been compared to Pavlovian conditioning of sexual response in other animals. Sexual attraction to certain cues can be artificially induced in rats. Both male and female rats will develop a sexual preference for neutrally or even noxiously scented partners if those scents are paired with their early sexual experiences. Injecting morphine or oxytocin into a male rat during its first exposure to scented females has the same effect.
Rats will also develop sexual preferences for the location of their early sexual experiences, and can be conditioned to show increased arousal in the presence of objects such as a plastic toy fish. One experiment found that rats which are made to wear a Velcro tethering jacket during their formative sexual experiences exhibit severe deficits in sexual performance when not wearing the jacket. Similar sexual conditioning has been demonstrated in gouramis, marmosets and Japanese quails. Possible boot fetishism has been reported in two different primates from the same zoo. Whenever a boot was placed near the first, a common chimpanzee born in captivity, he would invariably stare at it, touch it, become erect, rub his penis against the boot, masturbate, and then consume his ejaculate. The second, a guinea baboon, would become erect while rubbing and smelling the boot, but not masturbate or touch it with his penis.
**Rossby number** Rossby number: The Rossby number (Ro), named for Carl-Gustav Arvid Rossby, is a dimensionless number used in describing fluid flow. The Rossby number is the ratio of inertial force to Coriolis force, the terms |v · ∇v| ~ U²/L and |Ω × v| ~ UΩ in the Navier–Stokes equations respectively. It is commonly used in geophysical phenomena in the oceans and atmosphere, where it characterizes the importance of Coriolis accelerations arising from planetary rotation. It is also known as the Kibel number. The Rossby number is defined as Ro = U/(Lf), where U and L are respectively characteristic velocity and length scales of the phenomenon, and f = 2Ω sin φ is the Coriolis frequency, with Ω being the angular frequency of planetary rotation and φ the latitude. Rossby number: A small Rossby number signifies a system strongly affected by Coriolis forces, and a large Rossby number signifies a system in which inertial and centrifugal forces dominate. For example, in tornadoes the Rossby number is large (≈ 10³), in low-pressure systems it is low (≈ 0.1–1), and in oceanic systems it is of the order of unity, but depending on the phenomenon it can range over several orders of magnitude (≈ 10⁻²–10²). As a result, in tornadoes the Coriolis force is negligible, and balance is between pressure and centrifugal forces (called cyclostrophic balance). Cyclostrophic balance also commonly occurs in the inner core of a tropical cyclone. In low-pressure systems, centrifugal force is negligible, and balance is between Coriolis and pressure forces (called geostrophic balance). In the oceans all three forces are comparable (called cyclogeostrophic balance). For a figure showing spatial and temporal scales of motions in the atmosphere and oceans, see Kantha and Clayson. When the Rossby number is large (either because f is small, such as in the tropics and at lower latitudes; or because L is small, that is, for small-scale motions such as flow in a bathtub; or for large speeds), the effects of planetary rotation are unimportant and can be neglected. When the Rossby number is small, the effects of planetary rotation are large, and the net acceleration is comparably small, allowing the use of the geostrophic approximation.
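As a worked example of the definition, the Python sketch below evaluates Ro = U/(Lf) for the two regimes contrasted above. The velocity and length scales are assumed, order-of-magnitude values chosen for illustration, not figures from the original article:

```python
import math

OMEGA = 7.2921e-5  # Earth's angular frequency of rotation, in rad/s

def rossby_number(U, L, latitude_deg):
    # Ro = U / (L * f), with Coriolis frequency f = 2 * OMEGA * sin(latitude).
    f = 2 * OMEGA * math.sin(math.radians(latitude_deg))
    return U / (L * f)

# Mid-latitude low-pressure system: U ~ 10 m/s, L ~ 1000 km, at 45 degrees N.
print(rossby_number(10, 1_000_000, 45))  # ~0.1  -> geostrophic balance applies

# Tornado: U ~ 30 m/s, L ~ 300 m, at 35 degrees N.
print(rossby_number(30, 300, 35))        # ~10^3 -> Coriolis force negligible
```

With these assumed scales the low-pressure system gives Ro ≈ 0.1 and the tornado Ro ≈ 10³, matching the orders of magnitude quoted above.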
**Yakovlevian torque** Yakovlevian torque: Yakovlevian torque (also known as occipital bending (OB) or counterclockwise brain torque) is the tendency of the right side of the human brain to be warped slightly forward relative to the left and the left side of the human brain to be warped slightly backward relative to the right. This is responsible for certain asymmetries, such as how the lateral sulcus of the human brain is often longer and less curved on the left side of the brain relative to the right. Stated in another way, Yakovlevian torque can be defined by the existence of right-frontal and left-occipital petalias, which are protrusions of the surface of one hemisphere relative to the other. It is named for Paul Ivan Yakovlev (1894–1983), a Russian-American neuroanatomist from Harvard Medical School. Effects: Handedness A 2012 literature review showed that morphometry studies had consistently found that handedness-related effects corresponded to the extent of the Yakovlevian torque; increased torque, as measured by increased size of the right-frontal petalia and the left-occipital petalia, tends to be more common in right-handed individuals. Individuals with mixed-handedness or left-handedness show reduced levels of Yakovlevian torque. Developmental stuttering Reduced right-frontal and left-occipital petalias and reversed petalia asymmetries (that is, left-frontal and right-occipital petalias) have been associated with developmental stuttering in both adults and pre-adolescent boys. This may be tied to the lateral sulcus housing Broca's area, which plays a significant role in production of language. Effects: Bipolar disorder Increased size of the left-occipital petalia, resulting from an abnormally high degree of Yakovlevian torque, has been associated with bipolar disorder. Maller et al. 2015 found that increased asymmetry of the occipital lobe, or occipital bending, was four times more prevalent in subjects with bipolar disorder than in healthy controls. This applied both to patients with bipolar disorder type I and type II. Presence in primates: Yakovlevian torque is found in modern humans and fossil hominids, appearing reliably as early as Homo erectus. The patterning of petalias in extinct human ancestors is examined via endocasts, wherein a cast is made of the cranial vault: the asymmetries of human ancestors can be measured from these casts because petalias leave impressions inside the cranial vault. Some authors have reported that similar petalia patterns are found in a number of primates including Old World monkeys, New World monkeys and Great apes, but others report different protrusions; these differences seem to be tied to which techniques are used to measure the petalia, so it is not well understood if all primates demonstrate Yakovlevian torque.
**Gene Dogs** Gene Dogs: Gene Dogs are fictional characters appearing in American comic books published by Marvel Comics, particularly in the Marvel UK imprint. Originally members of an elite counter-terrorism squad known as Team Omega, five dying soldiers become the Gene Dogs after an experimental medical process saves their lives by modifying their DNA, a process which also grants them superhuman abilities. After their transformation, the squad continue to work as a team, now employed by a secret new European defense organization, S.T.O.R.M. Gene Dogs: The characters first appeared in Gene Dogs #1 (Oct. 1993), the first part of a four-issue limited series that was promoted as part of Marvel UK's "Gene Pool" event. The Gene Dogs were created by John Freeman and Dave Taylor. Publication history: Along with Gun Runner and Genetix, Gene Dogs was one of three limited series launched together under the 'Gene Pool' banner and "linked by a common thread - genetic mutation". As part of this promotion, the first issue of each series was shipped polybagged with collectable "Gene Cards" profiling the characters. Publication history: Like the other Marvel UK titles of the time, the Gene Dogs series was set in the shared Marvel Universe and used Marvel UK's villainous Mys-Tech corporation as an important part of its plot. Gene Dogs also briefly crossed over with Genetix, with the Genetix team guest-starring in Gene Dogs #2 (Nov. 1993) and the Gene Dogs later appearing in Genetix. Publication history: Writer John Freeman has since commented that Gene Dogs was "intended to be a British X-Men". Fictional history: The soldiers who would come to be called the Gene Dogs were originally part of an elite counter-terrorist squad called Team Omega. During a mission in the Congo, they were attacked by an unknown creature and infected with a deadly virus. In order to save their lives, they were spliced with animal DNA, as well as receiving bio-wetware chips in the cerebral cortex to enhance their abilities. Members: The Gene Dogs consist of: Tyr (Marc Devlin) - Spliced with dinosaur DNA, he has superhuman strength and durability. Tyr is bad-tempered and violent. He was Panther's lover and resented Cat for taking her place. Pacer (Carlos De Silva) - He has the genes of an unknown predatory animal. In addition to heightened physical abilities and fighting skills, he also has superior tracking skills. Howitzer (Shaka) - Spliced with a large sea mammal, he is immensely strong but requires a special cooling suit to maintain his body temperature. He specializes in heavy weapons and is the most calm and level-headed of the group. Kestrel (Annie Jones) - Able to create crystalline wings which allow flight and have razor-sharp edges. She is also a computer expert. Cat (Emma Malone) - Expert fighter. She also has telepathic abilities. Cat was sent to learn who the traitor in the group was. Panther (Corinne Walton) - A traitor working for Mys-Tech. She was Tyr's lover and had enhanced senses and athletic abilities.
**Pill puzzle** Pill puzzle: The pill jar puzzle is a probability puzzle which asks for the expected number of half-pills remaining when the last whole pill is removed from a jar initially containing n whole pills, where one proceeds by repeatedly removing a pill from the jar at random. If the pill removed is a whole pill, it is broken into two half pills; one half pill is consumed and the other is returned to the jar. If the pill removed is a half pill, it is simply consumed and nothing is returned to the jar. Mathematical derivation: The problem becomes easy to solve once a binary variable Xk is defined, with Xk = 1 if the kth half pill remains inside the jar after all the whole pills have been removed. Here the kth half pill is the half returned to the jar when the kth whole pill is drawn and broken. Xk = 1 exactly when, out of the n − k + 1 relevant pills (the n − k remaining whole pills plus the kth half pill), the half pill is the last to be drawn. This occurs with probability 1/(n − k + 1). Mathematical derivation: The expected value is then given by E(X1) + E(X2) + ... + E(Xn). Since E(Xk) = P(Xk = 1) = 1/(n − k + 1), the sought expected value is 1/n + 1/(n − 1) + 1/(n − 2) + ... + 1 = Hn (the nth harmonic number).
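The result is easy to check numerically. Below is a minimal Python sketch (the function names and the trial count are illustrative choices, not part of the original puzzle) that simulates the jar process and compares the sample mean against the nth harmonic number:

```python
import random
from fractions import Fraction

def simulate_half_pills(n, trials=100_000):
    """Monte Carlo estimate of the expected number of half pills left
    in the jar at the moment the last whole pill has been removed."""
    total = 0
    for _ in range(trials):
        whole, half = n, 0
        while whole > 0:
            # Each pill currently in the jar is equally likely to be drawn.
            if random.random() < whole / (whole + half):
                whole -= 1  # a whole pill is drawn and broken:
                half += 1   # one half is eaten, the other returned
            else:
                half -= 1   # a half pill is drawn and eaten
        total += half       # half pills remaining after the last whole pill
    return total / trials

def harmonic(n):
    """The exact answer: the nth harmonic number H_n."""
    return float(sum(Fraction(1, k) for k in range(1, n + 1)))

for n in (5, 10, 20):
    print(n, round(simulate_half_pills(n), 3), round(harmonic(n), 3))
```

For each n, the two printed values should agree to within Monte Carlo noise.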
**International Business Communication Standards** International Business Communication Standards: The International Business Communication Standards (IBCS) are practical proposals for the design of business communication, published for free use under a Creative Commons license (CC BY-SA). In most cases, applying IBCS means the proper conceptual, perceptual and semantic design of charts and tables. Requirements: Business communication meets the IBCS Standards if it complies with the three rule sets comprising the three pillars of IBCS: Conceptual rules help to clearly relay content by using an appropriate storyline. They are based on the work of authors such as Barbara Minto. They owe wide acceptance to their scientific, experimental, or practical experience basis. They correspond with the SUCCESS rule sets SAY and STRUCTURE. Requirements: Perceptual rules help to clearly relay content by using an appropriate visual design. They are based on the work of authors such as William Playfair, Willard Cope Brinton, Gene Zelazny, Edward Tufte and Stephen Few. Again, these rules owe wide acceptance to their scientific, experimental, and/or practical experience basis. They correspond with the SUCCESS rule sets EXPRESS, SIMPLIFY, CONDENSE, and CHECK. Requirements: Semantic rules help to clearly relay content by using a uniform notation (IBCS Notation). They are based on the work of Rolf Hichert and other contributors of the IBCS Association. As they are manifested by convention, semantic rules must first be more widely accepted to become a standard. They correspond with the SUCCESS rule set UNIFY. IBCS Notation: IBCS Notation is the designation for the semantic rule set suggested by IBCS. IBCS Notation covers the unification of terminology (e.g. words, abbreviations, and number formats), descriptions (e.g. messages, titles, legends, and labels), dimensions (e.g. measures, scenarios, and time periods), analyses (e.g. scenario analyses and time series analyses), and indicators (e.g. highlighting indicators and scaling indicators). IBCS Association: The review and further development of the IBCS is an ongoing process controlled by the IBCS Association. The IBCS Association is a non-profit organization that publishes the Standards for free and engages in extensive consultation and discussion prior to issuing new versions. This includes worldwide solicitation for public comment. Release of IBCS Version 1.0: The active members accepted the released Version 1.0 of the IBCS Standards at the General Assembly on June 18, 2015 in Amsterdam. Current themes of the further development of the IBCS Standards were discussed at the Annual Conference in Warsaw on June 3, 2016. Version 1.1 of the Standards was confirmed by the active members at the Annual Conference in Barcelona on June 1, 2017. More than 80 professionals from 12 countries attended the Annual Conference. IBCS Association: The Annual Conference in London on June 8, 2018 took place at the headquarters of the Institute of Chartered Accountants in England and Wales. The Annual Conference 2019 was held in Vienna, where keynote speaker Yuri Engelhardt talked about "The language of graphics and visual notations." The Annual Conferences of 2020 and 2021 were held virtually; more than 200 participants from many countries attended these conferences. The Annual Conference 2022 was held in hybrid form, in Berlin and online; about 300 participants from 44 countries attended.
**Meyer hardness test** Meyer hardness test: The Meyer hardness test is a hardness test based upon the projected area of an impression. The hardness, H, is defined as the maximum load, Pmax, divided by the projected area of the indent, Ap: H = Pmax / Ap. Meyer hardness test: This is a more fundamental measurement of hardness than other hardness tests, which are based on the surface area of an indentation. The principle behind the test is that the mean contact pressure required to indent the material is taken as the measure of the material's hardness. Units of megapascals (MPa) are frequently used for reporting Meyer hardness, but any unit of pressure can be used. The test was originally defined for spherical indenters, but can be applied to any indenter shape. It is often the definition used in nanoindentation testing. An advantage of the Meyer test is that it is less sensitive to the applied load, especially compared to the Brinell hardness test. For cold worked materials the Meyer hardness is relatively constant and independent of load, whereas for the Brinell hardness test it decreases with higher loads. For annealed materials the Meyer hardness increases continuously with load due to strain hardening. Based on Meyer's law, hardness values from this test can be converted into Brinell hardness values, and vice versa. The Meyer hardness test was devised by Eugene Meyer of the Materials Testing Laboratory at the Imperial School of Technology, Charlottenburg, Germany, circa 1908.
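Since the definition is just a ratio of load to projected area, a short worked example helps. The following Python sketch (the load and indent diameter are illustrative numbers, not taken from any standard) computes the Meyer hardness for a spherical indenter, whose indent projects as a circle of diameter d with area πd²/4:

```python
import math

def meyer_hardness(load_n, indent_diameter_m):
    """Meyer hardness: load divided by the *projected* area of the indent.

    For a spherical indenter the indent projects as a circle of
    diameter d, so the projected area is pi * d**2 / 4.
    """
    projected_area = math.pi * indent_diameter_m**2 / 4
    return load_n / projected_area  # pascals

# Illustrative numbers: a 1.5 kN load leaving a 1.0 mm diameter indent.
P = 1.5e3   # load in newtons
d = 1.0e-3  # indent diameter in meters
print(f"Meyer hardness: {meyer_hardness(P, d) / 1e6:.0f} MPa")  # ~1910 MPa
```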
**Three-finger toxin** Three-finger toxin: Three-finger toxins (abbreviated 3FTx) are a superfamily of small toxin proteins found in the venom of snakes. Three-finger toxins are in turn members of a larger superfamily of three-finger protein domains which includes non-toxic proteins that share a similar protein fold. The group is named for its common structure consisting of three beta strand loops connected to a central core containing four conserved disulfide bonds. The 3FP protein domain has no enzymatic activity and is typically between 60 and 74 amino acid residues long. Despite their conserved structure, three-finger toxin proteins have a wide range of pharmacological effects. Most members of the family are neurotoxins that act on cholinergic intercellular signaling; the alpha-neurotoxin family interacts with muscle nicotinic acetylcholine receptors (nAChRs), the kappa-bungarotoxin family with neuronal nAChRs, and muscarinic toxins with muscarinic acetylcholine receptors (mAChRs). Structure: The three-finger toxin superfamily is defined by a common tertiary structure consisting of three beta strand-containing loops (designated loops I, II, and III) projecting from a small hydrophobic core containing four conserved disulfide bonds. This structure is thought to resemble a hand with three fingers, giving rise to the name. The proteins are typically 60–74 amino acid residues long, though some have additional N- or C-terminal extensions. An additional disulfide bond may be present in either loop I or loop II. The superfamily can be broadly divided into three classes: short-chain toxins have under 66 residues and four core disulfide bonds. Structure: long-chain toxins have at least 66 residues, a disulfide bond in loop II, and possibly a C-terminal extension. non-conventional toxins have a disulfide bond in loop I and possibly terminal extensions. Structure: Oligomerization Most 3FTx proteins are monomers. However, some 3FTx subgroups form functional non-covalent homodimers. The kappa-bungarotoxin group is the best-characterized dimeric 3FTx, and interacts through an antiparallel dimer interface composed of the outer strand of loop III. Haditoxin is another example of a dimeric 3FTx; it is a member of the short-chain group and has a similar dimer interface but distinct pharmacology compared to the long-chain-like kappa-bungarotoxins. A few examples of covalently linked dimers have also been described. These proteins, from the non-conventional group, are linked through intermolecular disulfide bonds. Some, such as irditoxin, are heterodimers linked by cysteines in loops I and II. Others, such as alpha-cobratoxin, can form both homodimers and heterodimers which have distinct pharmacological activities in vitro, though their functional significance is unclear due to their very low concentration in venom. Function: Despite their conserved shared structure, 3FTx proteins have a wide range of pharmacological effects mediating their toxicity. Many members of the family are neurotoxins that bind to receptor proteins in the cell membrane, particularly nicotinic acetylcholine receptors. Others, including the second-largest 3FTx subgroup, are cardiotoxins. Function: Cellular targets Nicotinic acetylcholine receptors Many of the most well-characterized 3FTx proteins exert their toxic effects through binding to nicotinic acetylcholine receptors (nAChRs), a family of ligand-gated ion channels.
3FTx binding interferes with cholinergic intercellular signaling, particularly at neuromuscular junctions, and causes paralysis. The alpha-neurotoxin family is a group of 3FTx proteins that bind muscle nAChRs, preventing the binding of the neurotransmitter acetylcholine. Alpha-bungarotoxin, the alpha-neurotoxin from the many-banded krait (Bungarus multicinctus), has a long history of use in molecular biology research; it was through the study of this toxin that nAChRs were isolated and characterized, which facilitated the study of the subunit composition of tissue-specific nAChRs and the detailed pharmacological understanding of the neuromuscular junction. In general, short-chain 3FTx members of this group bind muscle nAChRs only, and long-chain members bind both muscle and neuronal receptors. This 3FTx group is sometimes referred to as the "curaremimetic" toxins due to the similarity of their effects with the plant alkaloid curare. Other groups of 3FTx proteins also bind to different nAChR subtypes; for example, kappa-neurotoxins, which are long-chain dimers, bind neuronal nAChRs, and haditoxin, which is a short-chain dimer, binds both muscle and neuronal subtypes. Non-conventional 3FTx proteins also often bind nAChRs; these were thought to be weaker toxins when first discovered, but the class has been found to possess a range of binding affinities. Recently, a new class of nAChR antagonist 3FTx proteins called omega-neurotoxins has been described. Function: Muscarinic acetylcholine receptors A smaller class of 3FTx proteins binds instead to muscarinic acetylcholine receptors, a family of G-protein-coupled receptors. Muscarinic toxins can be either receptor agonists or receptor antagonists, and in some cases the same 3FTx protein is an agonist at one receptor subtype and an antagonist at another. Muscarinic toxins are generally of the short-chain type. Acetylcholinesterase A class of 3FTx proteins called fasciculins binds the enzyme acetylcholinesterase and inhibits its activity by blocking access of acetylcholine to the enzyme's active site, thereby preventing acetylcholine breakdown. This class derives its name from its clinical effect, causing muscle fasciculations. Function: Cardiac targets The second-largest class of 3FTx proteins causes toxicity in cardiac myocytes and can cause increased heart rate and eventually cardiac arrest. These cardiotoxins also often have generalized cytotoxic effects and are sometimes known as cytolysins. The protein targets in myocytes are not generally known for this class, though some members may cause physical damage to the cell by establishing pores in the cell membrane. Another class, called the beta-cardiotoxins, causes decreased heart rate and is thought to function as beta blockers, antagonists of the beta-1 and beta-2 adrenergic receptors. Function: Less common targets There are known 3FTx proteins that target a variety of additional protein targets to exert their toxic effects. For example, L-type calcium channels are targeted by calciseptine, and platelet aggregation is inhibited via interactions with adhesion proteins by dendroaspin and related proteins. In some cases no toxicity is observed as a result of the 3FTx-target interaction; for example, the mambalgin family of 3FTx proteins interacts with acid-sensing ion channels to produce analgesia without apparent toxic effect in laboratory tests.
Function: Orphan 3FTx proteins Bioinformatics-based surveys of known protein sequences have identified a number of sequences likely to form a 3FTx protein structure but whose function has not been experimentally characterized. Thus, it is not known whether these "orphan" proteins are in fact toxins or what their cellular targets might be. Genomic studies of gene expression in snakes have shown that members of protein families traditionally considered toxins are widely expressed in snake body tissues and that this expression pattern occurs outside the highly venomous superfamily Caenophidia. Function: Structure-function activity relationships Because 3FTx proteins of similar structure bind a diverse range of cellular protein targets, the relationships between 3FTx protein sequence and their biological activity have been studied extensively, especially among the alpha-neurotoxins. Known functional sites conferring binding affinity and specificity are concentrated in the loops of 3FTx proteins. For example, the crystal structure of alpha-bungarotoxin in complex with the extracellular domain of the alpha-9 nAChR subunit indicates a protein-protein interaction mediated through loops I and II, with no contacts formed by loop III. Interaction surfaces have been mapped for a number of toxins and vary in which loops participate in binding; erabutoxin A uses all three loops to bind nAChRs, while the dendroaspin interaction with adhesion proteins is mediated by three residues in loop III. In some 3FTx proteins with a C-terminal extension, these residues also participate in forming key binding interactions. The cardiotoxin/cytolysin 3FTx subgroup has a somewhat different set of functionally significant residues due to its distinct mechanism of action, likely involving interactions with phospholipids in the cell membrane, as well as possible functionally significant interactions with other cell-surface molecules such as glycosaminoglycans. A hydrophobic patch of residues contiguous in tertiary structure but distributed over all three loops has been identified as functionally significant in combination with a set of conserved lysine residues conferring local positive charge. Because of their structural similarity and functional diversity, 3FTx proteins have been used as model systems for the study of protein engineering. Their high binding specificity against targets of pharmacological interest, lack of enzymatic activity, and low immunogenicity have also prompted interest in their potential as drug leads. Evolution: Although three-finger proteins in general are widely distributed among metazoans, three-finger toxins appear only in snakes. They are usually considered to be restricted to the Caenophidia lineage (the taxon containing all venomous snakes), though at least one putative 3FTx homolog has been identified in the genome of the Burmese python, a member of a sister taxon. Early work in analyzing protein homology by sequence alignment in the 1970s suggested 3FTx proteins may have evolved from an ancestral ribonuclease; however, more recent molecular phylogeny studies indicate that 3FTx proteins evolved from non-toxic three-finger proteins. Among venomous snakes, the distribution of 3FTx proteins varies; they are particularly enriched in venom from the family Elapidae.
In the king cobra (Ophiophagus hannah) and Eastern green mamba (Dendroaspis angusticeps), 3FTx proteins make up about 70% of the protein toxins in venom; in the desert coral snake (Micrurus tschudii) the proportion is reported to be as high as 95%. Genes encoding three-finger toxins are thought to have evolved through gene duplication. Traditionally, this has been conceptualized as repeated events of duplication followed by neofunctionalization and recruitment to gene expression patterns restricted to venom glands. However, it has been argued that this process should be extremely rare and that subfunctionalization better explains the observed distribution. More recently, non-toxic 3FP proteins have been found to be widely expressed in many different tissues in snakes, prompting the alternative hypothesis that proteins of restricted expression in saliva were selectively recruited for toxic functionality. There is evidence that most types of 3FTx proteins have been subject to positive selection (that is, diversifying selection) in their recent evolutionary history, possibly due to an evolutionary arms race with prey species. Notable exceptions are the dimeric kappa-bungarotoxin family, likely as a result of evolutionary constraints on the dimer interface, and the cardiotoxin/cytotoxin family, in which a larger fraction of the protein's residues are believed to have functional roles.
**Tuxera** Tuxera: Tuxera Inc. (natively Tuxera Oy) is a Finnish company that develops and sells file systems, flash management and networking software. The company was founded in 2008 and is headquartered in Espoo, Finland. Tuxera's other offices are located in the US, South Korea, Japan, Hungary, Germany, Taiwan and China. The company focuses on data management software for embedded systems: industry-standard file system technologies (APFS, exFAT, FAT, HFS+, NTFS), other embedded proprietary file systems, flash translation layer software, and networking stacks. Tuxera also has network file systems that support enterprise storage use cases. History: The origin of the company dates back to open-source NTFS development in the late 1990s. NTFS had been introduced in 1993 by Microsoft as the file system for Windows NT. At that time Anton Altaparmakov emerged as the lead developer and maintainer of the Linux NTFS kernel driver. Meanwhile, Szabolcs Szakacsits continued to lead a platform-independent project under the name NTFS-3G. In 2006, NTFS-3G became the first such driver to gain full read and write support. Commercial activity started in 2007 and the company was founded the next year. In 2009 the company signed agreements with Microsoft, which was followed by global expansion and collaboration with chipset vendors and software platform companies. After several years of contributions to the Linux kernel, Tuxera joined the Linux Foundation in 2011. In 2019, the company became a board member of the SD Association. Tuxera also acquired Datalight that year, adding more file systems and flash management software to their offering. Later, in 2021, Tuxera acquired HCC Embedded, adding more deeply embedded networking and storage software focused on real-time operating systems and micro-controllers. Embedded software products: Microsoft NTFS by Tuxera (formerly Tuxera NTFS) Tuxera develops a fully compatible NTFS file system driver for commercial use, primarily by OEMs and other device manufacturers. It is deployed in car IVIs (in-vehicle infotainment systems), smart TVs, set-top boxes, smartphones, tablets, routers, NAS and other devices. It is available for Android and other Linux platforms, QNX, WinCE, Series 40, Nucleus RTOS and VxWorks. Supported architectures are ARM, MIPS, PowerPC, SuperH and x86. Embedded software products: Microsoft exFAT by Tuxera (formerly Tuxera exFAT) Tuxera exFAT technology is used for SDXC memory card support. Tuxera was the first independent vendor to receive legal access to exFAT and TexFAT specifications, source code and verification tools from Microsoft. Tuxera exFAT can be found in automotive infotainment systems, and in Android phones and tablets from ASUS, Fujitsu, Panasonic, Pantech and others. Microsoft FAT by Tuxera (formerly Tuxera FAT) Tuxera FAT software provides interoperability and support for storage types such as SD memory card, CF card, Memory Stick, SSD, HDD via USB, SATA, eSATA, MMC and others. It is used by chipset and hardware manufacturers, and by software and system integrators, for full compliance with Microsoft patent licenses and the GPL. NTFS-3G NTFS-3G is the original free-software "community edition" driver used widely in Linux distributions, including Fedora, Ubuntu, and others. On April 12, 2011 it was announced that the ntfsprogs project was merged with NTFS-3G. Reliance Velocity (formerly VelocityFS by Tuxera and Tuxera Flash File System) Tuxera also develops and commercializes its own proprietary flash file system.
Due to its fail-safe technology, it can be found, for instance, in vehicles, integrated with the event data recorder to make sure the data recorded from sensors is consistent even in case of a crash. Tuxera FAT+ In 2017, Tuxera introduced FAT+, a file system implementation for Universal Flash Storage cards and removable storage that is compatible with FAT32 but without the 4 GiB file size limitation. It is royalty free for UFS card host devices and is a standard recommended by the Universal Flash Storage Association. Consumer products: AllConnect (discontinued) AllConnect was a mobile app for streaming music, photos and videos from Android and iOS devices to DLNA receivers (smart TVs, set-top-boxes, wireless speakers, etc.). It was launched on November 12, 2013 under the name Streambels. In April 2020, Tuxera discontinued development of the AllConnect technology and removed the Android and iOS apps from their respective stores. Consumer products: Microsoft NTFS for Mac by Tuxera (formerly Tuxera NTFS for Mac) Microsoft NTFS for Mac by Tuxera allows macOS computers to read and write NTFS partitions. By default, macOS provides only read access to NTFS partitions. The latest stable version of the driver is 2022, including support for Apple silicon, Intel and PowerPC Macs. Microsoft NTFS for Mac by Tuxera is bundled together with Tuxera Disk Manager to facilitate formatting and maintenance of NTFS volumes in macOS. The software supports NTFS extended attributes and works with virtualization and encryption solutions including Parallels Desktop and VMware Fusion. Consumer products: SD Memory Card Formatter Tuxera, in association with the SD Association, developed the official formatting application for Secure Digital memory cards, which is available as a free download for Windows and macOS.
**Signal patch** Signal patch: A protein signal patch contains information to send a given protein to the indicated location in the cell. It is made up of amino acid residues that are distant to one another in the primary sequence, but come close to each other in the tertiary structure of the folded protein. Signal patches, unlike some signal sequences, are not cleaved from the mature protein after sorting. They are very difficult to predict. Nuclear localization signals are often signal patches although signal sequences also exist. They are found on proteins destined for the nucleus and enable their selective transport from the cytosol into the nucleus through the nuclear pore complexes.
**Boehm system** Boehm system: The Boehm system is a system of keywork for the flute, created by inventor and flautist Theobald Boehm between 1831 and 1847. History: Immediately prior to the development of the Boehm system, flutes were most commonly made of wood, with an inverse conical bore, eight keys, and tone holes (the openings where the fingers are placed to produce specific notes) that were small in size, and thus easily covered by the fingertips. Boehm's work was inspired by an 1831 concert in London, given by soloist Charles Nicholson, who, with his father in the 1820s, had introduced a flute constructed with larger tone holes than were used in previous designs. This large-holed instrument could produce a greater volume of sound than other flutes, and Boehm set out to produce his own large-holed design. History: In addition to large holes, Boehm provided his flute with "full venting", meaning that all keys were normally open (previously, several keys were normally closed, and opened only when the key was operated). Boehm also wanted to locate tone holes at acoustically optimal points on the body of the instrument, rather than at locations conveniently covered by the player's fingers. To achieve these goals, Boehm adapted a system of axle-mounted keys with a series of "open rings" (called brille, German for "eyeglasses", as they resembled the type of eyeglass frames common during the 19th century) that were fitted around other tone holes, such that the closure of one tone hole by a finger would also close a key placed over a second hole. History: In 1832 Boehm introduced a new conical-bore flute, which achieved a fair degree of success. Boehm, however, continued to look for ways to improve the instrument. Finding that an increased volume of air produced a stronger and clearer tone, he replaced the conical bore with a cylindrical bore, finding that a parabolic contraction of the bore near the embouchure hole improved the instrument's low register. He also found that optimal tone was produced when the tone holes were too large to be covered by the fingertips, and he developed a system of finger plates to cover the holes. These new flutes were at first made of silver, although Boehm later produced wooden versions. The cylindrical Boehm flute was introduced in 1847, with the instrument gradually being adopted almost universally by professional and amateur players in Europe and around the world during the second half of the 19th century. The instrument was adopted for the performance of orchestral and chamber music, opera and theater, wind ensembles (e.g., military and civic bands), and most other music which might be loosely described as relating to "Western classical music" (including, for example, jazz). Many further refinements have been made, and countless design variations are common among flutes today (the "offset G" key, addition of the low B foot, etc.). The concepts of the Boehm system have been applied across the range of flutes available, including piccolos, alto flutes, bass flutes, and so on, as well as other wind instruments. The material of the instrument may vary (many piccolos are made of wood, and some very large flutes are wooden or even made of PVC). History: The flute is perhaps the oldest musical instrument, other than the human voice itself. Very many flutes, both transversely blown and end-blown "fipple" flutes, are currently produced which are not built on the Boehm model. History: The fingering system for the saxophone closely resembles the Boehm system.
A key system inspired by Boehm's for the clarinet family is also known as the "Boehm system", although it was developed by Hyacinthe Klosé and not Boehm himself. The Boehm system was also adapted for a small number of flageolets. Boehm did work on a system for the bassoon, and Boehm-inspired oboes have been made, but non-Boehm systems remain predominant for these instruments. The Albert system is another key system for the clarinet.
**Generative metrics** Generative metrics: Generative metrics is the collective term for three distinct theories of verse structure (focusing on the English iambic pentameter) advanced between 1966 and 1977. Inspired largely by the example of Noam Chomsky's Syntactic Structures (1957) and Chomsky and Morris Halle's The Sound Pattern of English (1968), these theories aim principally at the formulation of explicit linguistic rules that will generate all possible well-formed instances of a given meter (e.g. iambic pentameter) and exclude any that are not well-formed. T.V.F. Brogan notes that of the three theories, "[a]ll three have undergone major revision, so that each exists in two versions, the revised version being preferable to the original in every case." Halle–Keyser: The earliest (and most-discussed) theory of generative metrics is that put forth by Morris Halle and Samuel Jay Keyser — first in 1966 with respect to Chaucer's iambic pentameter, and in its full and revised form in 1971's English Stress: Its Forms, Its Growth, and Its Role in Verse. Halle and Keyser conceive of the iambic pentameter line as a series of (nominally) 10 Weak and Strong positions:

W S W S W S W S W S

but to accommodate acephalous lines, and feminine and triple endings, use this full formulation:

(W) S W S W S W S W S (x) (x)

where the first Weak position is optional, and the final 2 positions (which must be unstressed) are also optional. They then define their signal concept, the Stress Maximum, as a stressed syllable "located between two unstressed syllables in the same syntactic constituent within a line of verse". Finally, the fit between syllables and the positions they occupy is evaluated by these 2 hierarchical sets of correspondence rules: (i) A position (S or W) corresponds to either 1) a single syllable, or 2) a sonorant sequence incorporating at most two vowels (immediately adjoining to one another, or separated by a sonorant consonant). AND (ii) 1) Stressed syllables occur in S positions and in all S positions; or 2) Stressed syllables occur only in S positions, but not necessarily in all S positions; or 3) Stress Maxima occur only in S positions, but not necessarily in all S positions. Rules are evaluated in order. If rules (i)-1 or (ii)-1 or (ii)-2 are broken, this indicates increasing complexity of the line. But if (i)-2 or (ii)-3 are broken, the line is unmetrical. (Note that some sources erroneously state that the presence of a Stress Maximum makes a line unmetrical; this is false. In Halle & Keyser's theory a Stress Maximum in a W position makes a line unmetrical.) An example of Halle and Keyser's scansion is the line "How many bards gild the lapses of time!", whose ten syllables occupy the positions W S W S W S W S W S. Stresses (indicated in their notation by a slash "/") fall on "ma(ny)", "bards", "gild", "lap(ses)" and "time", with "lap" a Stress Maximum (indicated by "M"). A single underline indicates a violation of (ii)-1; a double underline indicates a violation of (ii)-1 & 2. In addition, the Stress Maximum "lap", since it occurs on a W position, violating (ii)-3, should get a third underline, rendering the line unmetrical. (Because of display limitations, this is here indicated by striking out the "M".) Joseph C. Beaver, Dudley L. Hascall, and others have attempted to modify or extend the theory. Halle–Keyser: Criticism The Halle–Keyser system has been criticized because it can identify passages of prose as iambic pentameter. Later generative metrists pointed out that poets have often treated non-compound words of more than one syllable differently from monosyllables and compounds of monosyllables.
Any normally weak syllable may be stressed as a variation if it is a monosyllable, but not if it is part of a polysyllable except at the beginning of a line or a phrase. Thus Shakespeare wrote:

× × / / × / × / × /
For the four winds blow in from every coast

but wrote no lines of the form of:

× × / / × / × / × /
As gazelles leap a never-resting brook

The stress patterns are the same, and in particular, the normally weak third syllable is stressed in both lines; the difference is that in Shakespeare's line the stressed third syllable is a one-syllable word, "four", whereas in the un-Shakespearean line it is part of a two-syllable word, "gazelles". (The definitions and exceptions are more technical than stated here.) Pope followed such a rule strictly, Shakespeare fairly strictly, Milton much less, and Donne not at all—which may be why Ben Jonson said Donne deserved hanging for "not keeping of accent". Derek Attridge has pointed out the limits of the generative approach; it has "not brought us any closer to understanding why particular metrical forms are common in English, why certain variations interrupt the metre and others do not, or why metre functions so powerfully as a literary device." Generative metrists also fail to recognize that a normally weak syllable in a strong position will be pronounced differently, i.e. "promoted" and so no longer "weak." Magnuson–Ryder: A Distinctive Feature Analysis of verse was advanced by Karl Magnuson and Frank Ryder in 1970 and revised in 1971, based on their earlier work on German verse, and ultimately deriving from phonological distinctive feature principles of the Prague School. They similarly propose that iambic pentameter consists of a 10-position line of Odd and Even slots:

O E O E O E O E O E

However, in other meters these slots retain their identities of odd = "not metrically prominent" and even = "metrically prominent", so that (for example) trochaic tetrameter has the structure:

E O E O E O E O

They then label each syllable in the verse line, according to the presence (+) or absence (-) of 4 linguistic features: Word Onset, Weak, Strong, Pre-Strong. Each type of position has an "expected" set of values for these features. Thus:

     O  E  O  E  O  E  O  E  O  E
WO   +  -  +  +  +  -  -  +  +  +
WK   -  +  +  -  -  -  +  -  +  +
ST   +  -  -  +  +  +  -  +  -  -
PS   -  -  -  -  +  -  -  -  -  -
     Batter my heart, three-personed God, for you

The expected values are then compared to the actual values of the verse line. "Since the expectation matrix can never be fulfilled completely, it follows that one must assume all poetry to be unmetrical in some degree, and the task of prosody is to find the constraints upon the conditions under which a feature may occur in a nonaffirming relation to the matrix. These constraints are the Base Rules." Their revised theory claims to generate the vast majority of canonical English iambic pentameter using only 2 features — Strong (ST) and Pre-strong (PS) — and only 2 Base Rules constraining neighboring syllables in E O slots: 1. If the E slot contains [+PS], then the following O slot must contain [+PS]. Magnuson–Ryder: 2. If the E slot contains [-ST] and the following O slot contains [-PS], then that O slot must also contain [-ST]. — with the limitation that these Base Rules do not apply across line-juncture or a major syntactic boundary. Criticism T.V.F. Brogan says of the theory, "It is fair to say that so far their approach has been considered unfruitful by most metrists."
However, Derek Attridge considers that David Chisholm's modification of Magnuson–Ryder — along with Kiparsky's theory — "capture the details of English metrical practice more accurately than any of their [generative] predecessors". Kiparsky: Paul Kiparsky's theory, introduced in 1975 and radically revised in 1977, contrasts decisively with previous generative theories on certain key points. Though retaining the now-familiar 10-position line, he reintroduces metrical feet (a concept explicitly denied by other generative metrists) by "bracketing" Weak and Strong positions:

(W S) (W S) (W S) (W S) (W S)

Furthermore, Kiparsky's account "is based on a specific theory of English stress elaborated by Liberman and Prince (1977) as a counter-proposal to Chomsky and Halle's Sound Pattern of English." Conversely, he considers the syllables in a verse line to have a complex hierarchical structure — analogous to a core proposition in Chomsky's transformational grammar — as opposed to the previous theories which gave syllables a strictly linear treatment. Kiparsky: Once the verse text has been parsed and its syllables assigned "W" and "S" labels and hierarchical relationships, it can be compared with the metrical structure of the line (also labeled "W" and "S" and with its own less complex bracketed relationships — as above). "Labeling mismatches" may render the line more complex or unmetrical: different rules reflect different poets' practice. " 'Bracketing' mismatches occur when the two patterns of W and S agree but the brackets to each pattern are out of sync — as with trochaic words in an iambic line." (These only render the line more complex.) The most essential test of metricality is "that the more closely an S-syllable in W-position is bound (in the Liberman-Prince tree-notation) to the syllable that precedes it, the more metrically disruptive it is." Criticism Peter L. Groves has objected that "[a]ccording to Kiparsky, a line will be unmetrical for the vast majority of English poets (including Shakespeare) if [as below] it contains an S-syllable in W-position immediately preceded by a W-syllable which it commands; thus an innocuous line like [that below, from Othello] is ruled categorically unmetrical:"

W S
Give renew'd fire to our extincted Spirits
(W S)(W S) (W S) (W S) (W S)X

Sources: Attridge, Derek (1982). The Rhythms of English Poetry. New York: Longman. ISBN 0-582-55105-6. Beaver, Joseph C. (1974). "Generative Metrics". In Preminger, Alex; et al. (eds.). Princeton Encyclopedia of Poetry and Poetics (Enlarged ed.). Princeton, NJ: Princeton University Press. pp. 931–933. ISBN 0-691-01317-9. Sources: Brogan, T.V.F. (1999) [1981]. English Versification, 1570–1980: A Reference Guide With a Global Appendix (Hypertext ed.). Baltimore: Johns Hopkins University Press. ISBN 0-8018-2541-5. (The publisher and ISBN are for the original printed edition.) Brogan, T.V.F. (1993). "Generative Metrics". In Preminger, Alex; Brogan, T.V.F.; et al. (eds.). The New Princeton Encyclopedia of Poetry and Poetics. New York: MJF Books. pp. 451–453. ISBN 1-56731-152-0. Sources: Groves, Peter L. (1998). Strange Music: The Metre of the English Heroic Line. ELS Monograph Series No. 74. Victoria, BC: University of Victoria. ISBN 0-920604-55-2. Halle, Morris; Keyser, Samuel Jay (1972). "English III: The Iambic Pentameter". In Wimsatt, W.K. (ed.). Versification: Major Language Types. New York: New York University Press. pp. 217–237. ISBN 978-0814791554. Magnuson, Karl; Ryder, Frank G.
(1970). "The Study of English Prosody: An Alternative Proposal". College English. 31 (8): 789–820. doi:10.2307/374226. JSTOR 374226. Magnuson, Karl; Ryder, Frank G. (1971). "Second Thoughts on English Prosody". College English. 33 (2): 198–216. doi:10.2307/374746. JSTOR 374746.
**Coulomb blockade** Coulomb blockade: In mesoscopic physics, a Coulomb blockade (CB), named after Charles-Augustin de Coulomb's electrical force, is the decrease in electrical conductance at small bias voltages of a small electronic device comprising at least one low-capacitance tunnel junction. Because of the CB, the conductance of a device may not be constant at low bias voltages, but may disappear for biases under a certain threshold, i.e. no current flows. Coulomb blockade: Coulomb blockade can be observed by making a device very small, like a quantum dot. When the device is small enough, electrons inside the device will create a strong Coulomb repulsion preventing other electrons from flowing. Thus, the device will no longer follow Ohm's law, and the current-voltage relation of the Coulomb blockade looks like a staircase. Even though the Coulomb blockade can be used to demonstrate the quantization of the electric charge, it remains a classical effect and its main description does not require quantum mechanics. However, when few electrons are involved and an external static magnetic field is applied, Coulomb blockade provides the ground for a spin blockade (like Pauli spin blockade) and valley blockade, which include quantum mechanical effects due to spin and orbital interactions respectively between the electrons. Coulomb blockade: The devices can comprise either metallic or superconducting electrodes. If the electrodes are superconducting, Cooper pairs (with a charge of minus two elementary charges, −2e) carry the current. In the case that the electrodes are metallic or normal-conducting, i.e. neither superconducting nor semiconducting, electrons (with a charge of −e) carry the current. In a tunnel junction: The following section is for the case of tunnel junctions with an insulating barrier between two normal conducting electrodes (NIN junctions). In a tunnel junction: The tunnel junction is, in its simplest form, a thin insulating barrier between two conducting electrodes. According to the laws of classical electrodynamics, no current can flow through an insulating barrier. According to the laws of quantum mechanics, however, there is a nonvanishing (larger than zero) probability for an electron on one side of the barrier to reach the other side (see quantum tunnelling). When a bias voltage is applied, this means that there will be a current, and, neglecting additional effects, the tunnelling current will be proportional to the bias voltage. In electrical terms, the tunnel junction behaves as a resistor with a constant resistance, also known as an ohmic resistor. The resistance depends exponentially on the barrier thickness. Typically, the barrier thickness is on the order of one to several nanometers. In a tunnel junction: An arrangement of two conductors with an insulating layer in between not only has a resistance, but also a finite capacitance. The insulator is also called a dielectric in this context; the tunnel junction thus behaves as a capacitor. In a tunnel junction: Due to the discreteness of electrical charge, current through a tunnel junction is a series of events in which exactly one electron passes (tunnels) through the tunnel barrier (we neglect cotunneling, in which two electrons tunnel simultaneously). The tunnel junction capacitor is charged with one elementary charge by the tunnelling electron, causing a voltage buildup U = e/C, where C is the capacitance of the junction.
If the capacitance is very small, the voltage buildup can be large enough to prevent another electron from tunnelling. The electric current is then suppressed at low bias voltages and the resistance of the device is no longer constant. The increase of the differential resistance around zero bias is called the Coulomb blockade. Observation: In order for the Coulomb blockade to be observable, the temperature has to be low enough so that the characteristic charging energy (the energy that is required to charge the junction with one elementary charge) is larger than the thermal energy of the charge carriers. In the past, for capacitances above 1 femtofarad (10−15 farad), this implied that the temperature had to be below about 1 kelvin. This temperature range is routinely reached, for example, by helium-3 refrigerators. Thanks to quantum dots as small as a few nanometers, Coulomb blockade has since been observed above liquid-helium temperature, up to room temperature. To make a tunnel junction in parallel-plate capacitor geometry with a capacitance of 1 femtofarad, using an oxide layer of electric permittivity 10 and thickness one nanometer, one has to create electrodes with dimensions of approximately 100 by 100 nanometers. This range of dimensions is routinely reached, for example, by electron beam lithography and appropriate pattern transfer technologies, like the Niemeyer–Dolan technique, also known as the shadow evaporation technique. The integration of quantum dot fabrication with standard industrial technology has been achieved for silicon. A CMOS process for mass production of single-electron quantum dot transistors with channel sizes down to 20 nm × 20 nm has been implemented. Single-electron transistor: The simplest device in which the effect of Coulomb blockade can be observed is the so-called single-electron transistor. It consists of two electrodes known as the drain and the source, connected through tunnel junctions to one common electrode with a low self-capacitance, known as the island. The electrical potential of the island can be tuned by a third electrode, known as the gate, which is capacitively coupled to the island. In the blocking state no accessible energy levels are within tunneling range of an electron (in red) on the source contact. All energy levels on the island electrode with lower energies are occupied. Single-electron transistor: When a positive voltage is applied to the gate electrode the energy levels of the island electrode are lowered. The electron (green 1.) can tunnel onto the island (2.), occupying a previously vacant energy level. From there it can tunnel onto the drain electrode (3.) where it inelastically scatters and reaches the drain electrode Fermi level (4.). The energy levels of the island electrode are evenly spaced with a separation of ΔE. This gives rise to a self-capacitance C of the island, defined as C = e²/ΔE. Single-electron transistor: To achieve the Coulomb blockade, three criteria have to be met: the bias voltage must be lower than the elementary charge divided by the self-capacitance of the island, Vbias < e/C; the thermal energy in the source contact plus the thermal energy in the island, i.e. kBT, must be below the charging energy, kBT < e²/(2C), or else the electron will be able to pass the QD via thermal excitation; and the tunneling resistance, Rt, should be greater than h/e², which is derived from Heisenberg's uncertainty principle.
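These order-of-magnitude requirements are easy to reproduce numerically. The short Python sketch below takes the 1 fF capacitance mentioned in the text (everything else is standard physical constants) and evaluates the charging energy, the temperature below which it exceeds the thermal energy, and the other two single-electron-transistor criteria:

```python
# Single-electron blockade criteria for an illustrative 1 fF island.
e  = 1.602176634e-19  # elementary charge, C
kB = 1.380649e-23     # Boltzmann constant, J/K
h  = 6.62607015e-34   # Planck constant, J*s

C = 1e-15  # island/junction capacitance: 1 femtofarad

E_C   = e**2 / (2 * C)  # charging energy e^2/(2C)
T_max = E_C / kB        # temperature at which k_B*T equals E_C
V_th  = e / C           # bias threshold e/C
R_min = h / e**2        # minimum tunneling resistance h/e^2

print(f"charging energy   : {E_C / e * 1e6:.1f} ueV")      # ~80 ueV
print(f"requires T below  : {T_max:.2f} K")                # ~0.93 K
print(f"bias threshold e/C: {V_th * 1e6:.0f} uV")          # ~160 uV
print(f"R_t must exceed   : {R_min / 1e3:.1f} kOhm")       # ~25.8 kOhm
```

The ~0.93 K figure matches the statement above that, for capacitances of 1 femtofarad and above, the blockade only becomes observable below about 1 kelvin.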
Coulomb blockade thermometer: A typical Coulomb blockade thermometer (CBT) is made from an array of metallic islands, connected to each other through a thin insulating layer. A tunnel junction forms between the islands, and as voltage is applied, electrons may tunnel across this junction. The tunneling rates and hence the conductance vary according to the charging energy of the islands as well as the thermal energy of the system. Coulomb blockade thermometer: The Coulomb blockade thermometer is a primary thermometer based on the electric conductance characteristics of tunnel junction arrays. The parameter V½ = 5.439·N·kBT/e, the full width at half minimum of the measured differential conductance dip over an array of N junctions, together with the physical constants, provides the absolute temperature. Ionic Coulomb blockade: Ionic Coulomb blockade (ICB) is the special case of CB appearing in the electro-diffusive transport of charged ions through sub-nanometer artificial nanopores or biological ion channels. ICB is broadly similar to its electronic counterpart in quantum dots, but presents some specific features defined by the possibly different valence z of the charge carriers (permeating ions vs electrons) and by the different origin of the transport engine (classical electrodiffusion vs quantum tunnelling). Ionic Coulomb blockade: In the case of ICB, the Coulomb gap ΔE is defined by the dielectric self-energy of the incoming ion inside the pore/channel, and hence ΔE depends on the ion valence z. ICB appears strong (ΔE ≫ kBT), even at room temperature, for ions with z ≥ 2, e.g. for Ca2+ ions. ICB has recently been experimentally observed in sub-nanometer MoS2 pores. In biological ion channels ICB typically manifests itself in such valence selectivity phenomena as Ca2+ conduction bands (vs fixed charge Qf) and the concentration-dependent divalent blockade of sodium current.
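Inverting the V½ relation gives the direct temperature readout that makes the CBT a primary thermometer. A minimal sketch, assuming an illustrative 100-junction array and a made-up measured dip width of 2.3 mV (neither value is from the text):

```python
e  = 1.602176634e-19  # elementary charge, C
kB = 1.380649e-23     # Boltzmann constant, J/K

def cbt_temperature(v_half, n_junctions):
    """Temperature from the full width at half minimum (in volts) of the
    conductance dip of an N-junction array: V_half = 5.439*N*kB*T/e."""
    return v_half * e / (5.439 * n_junctions * kB)

# Hypothetical reading: a 100-junction array showing a 2.3 mV wide dip.
print(f"T = {cbt_temperature(2.3e-3, 100) * 1e3:.1f} mK")  # ~49 mK
```

No calibration against another thermometer is needed, only the physical constants e and kB, which is what "primary" means here.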
**Esophagitis** Esophagitis: Esophagitis, also spelled oesophagitis, is a disease characterized by inflammation of the esophagus. The esophagus is a tube composed of a mucosal lining, and longitudinal and circular smooth muscle fibers. It connects the pharynx to the stomach; swallowed food and liquids normally pass through it. Esophagitis can be asymptomatic; or it can cause epigastric and/or substernal burning pain, especially when lying down or straining; and it can make swallowing difficult (dysphagia). The most common cause of esophagitis is the reverse flow of acid from the stomach into the lower esophagus: gastroesophageal reflux disease (GERD). Signs and symptoms: The symptoms of esophagitis include: Heartburn – a burning sensation in the lower mid-chest Nausea Dysphagia – swallowing is painful, with difficulty passing or inability to pass food through the esophagus Vomiting (emesis) Abdominal pain Cough Complications If the disease remains untreated, it can cause scarring and discomfort in the esophagus. If the irritation is not allowed to heal, esophagitis can result in esophageal ulcers. Esophagitis can develop into Barrett's esophagus and can increase the risk of esophageal cancer. Causes: Esophagitis itself cannot be spread; however, the infections underlying infectious esophagitis can be spread by those who have them. Esophagitis can develop due to many causes. GERD is the most common cause of esophagitis because of the backflow of acid from the stomach, which can irritate the lining of the esophagus. Other causes include: Medicines – Can cause esophageal damage that can lead to esophageal ulcers Nonsteroidal anti-inflammatory drugs (NSAIDs) – aspirin, naproxen sodium, and ibuprofen. Known to irritate the GI tract. Antibiotics – doxycycline and tetracycline Quinidine Bisphosphonates – used to treat osteoporosis Steroids Potassium chloride Chemical injury by alkaline or acid solutions Physical injury resulting from nasogastric tubes. Alcohol use disorder – Can wear down the lining of the esophagus. Crohn's disease – a type of IBD and an autoimmune disease that can cause esophagitis if it attacks the esophagus. Stress – Can cause higher levels of acid reflux Radiation therapy – Can affect the immune system. Allergies (food, inhalants) – Allergies can stimulate eosinophilic esophagitis. Infection – People with immunodeficiencies have a higher chance of developing esophagitis. Vitamins and supplements (iron, vitamin C, and potassium) – Supplements and minerals can be hard on the GI tract. Vomiting – Acid can irritate the esophagus. Hernias – A hiatal hernia can protrude through the diaphragm muscle and can keep stomach acid and food from draining quickly. Surgery Eosinophilic esophagitis, a more chronic condition with a theorized autoimmune component Mechanism: The esophagus is a muscular tube made of both voluntary and involuntary muscles. It is responsible for peristalsis of food. It is about 8 inches long and passes through the diaphragm before entering the stomach. The esophagus is made up of three layers: from the inside out, they are the mucosa, submucosa, and muscularis externa. The mucosa, the innermost layer and lining of the esophagus, is composed of stratified squamous epithelium, lamina propria, and muscularis mucosae. At the end of the esophagus is the lower esophageal sphincter, which normally prevents stomach acid from entering the esophagus. Mechanism: If the sphincter is not sufficiently tight, it may allow acid to enter the esophagus, causing inflammation of one or more layers.
Esophagitis may also occur if an infection is present, which may be due to bacteria, viruses, or fungi; or it may be caused by diseases that affect the immune system. Irritation can be caused by GERD, vomiting, surgery, medications, hernias, and radiation injury. Inflammation can cause the esophagus to narrow, which makes swallowing food difficult and may result in food bolus impaction. Diagnosis: Esophagitis can be diagnosed by upper endoscopy, biopsy, upper GI series (or barium swallow), and laboratory tests. An upper endoscopy is a procedure to look at the esophagus by using an endoscope. While looking at the esophagus, the doctor is able to take a small biopsy. The biopsy can be used to confirm inflammation of the esophagus. Diagnosis: An upper GI series uses a barium contrast, fluoroscopy, and an X-ray. During a barium X-ray, a solution or pill containing barium is taken before getting an X-ray. The barium makes the organs more visible and can reveal any narrowing, inflammation, or other abnormalities that may be causing the disease. The upper GI series can be used to find the cause of GI symptoms. The study is called an esophagram if only the throat and esophagus are examined. Laboratory tests can be done on biopsies removed from the esophagus and can help determine the cause of the esophagitis. Laboratory tests can help diagnose a fungal, viral, or bacterial infection. Scanning for white blood cells can help diagnose eosinophilic esophagitis. Diagnosis: Some lifestyle indicators for this disease include stress, unhealthy eating, smoking, drinking, family history, allergies, and immunodeficiency. Types Reflux esophagitis Although it is usually assumed that inflammation from acid reflux is caused by the irritant action of hydrochloric acid on the mucosa, one study suggests that the pathogenesis of reflux esophagitis may be cytokine-mediated. Diagnosis: Infectious esophagitis Esophagitis happens due to a viral, fungal, parasitic or bacterial infection. It is more likely to happen to people who have an immunodeficiency. Types include: Fungal Candida (Esophageal candidiasis) Viral Herpes simplex (Herpes esophagitis) Cytomegalovirus Drug-induced esophagitis Damage to the esophagus due to medications. If the esophagus is not coated or if the medicine is not taken with enough liquid, it can damage the tissues. Diagnosis: Eosinophilic esophagitis Eosinophilic esophagitis is caused by a high concentration of eosinophils in the esophagus. The presence of eosinophils in the esophagus may be due to an allergen and is often correlated with GERD. The direction of cause and effect between inflammation and acid reflux is poorly established, with recent studies (in 2016) hinting that reflux does not cause inflammation. This esophagitis can be triggered by allergies to food or to inhaled allergens. This type is still poorly understood. Diagnosis: Lymphocytic esophagitis Lymphocytic esophagitis is a rare and poorly understood entity associated with an increased amount of lymphocytes in the lining of the esophagus. It was first described in 2006. Disease associations may include Crohn's disease, gastroesophageal reflux disease and coeliac disease. It causes similar changes on endoscopy as eosinophilic esophagitis, including esophageal rings, a narrow-lumen esophagus, and linear furrows. Caustic esophagitis Caustic esophagitis is tissue damage of chemical origin. This occasionally occurs through occupational exposure (via breathing of fumes that mix into the saliva, which is then swallowed) or through pica.
It occurred in some teenagers during the fad of intentionally eating Tide pods. By severity The severity of reflux esophagitis is commonly classified into four grades (A through D, by increasing extent of the mucosal breaks) according to the Los Angeles Classification. Prevention: Since there can be many causes underlying esophagitis, it is important to try to find the cause to help to prevent esophagitis. To prevent reflux esophagitis, avoid acidic foods, caffeine, eating before going to bed, alcohol, fatty meals, and smoking. To prevent drug-induced esophagitis, drink plenty of liquids when taking medicines, take an alternative drug, and do not take medicines while lying down, before sleeping, or in large numbers at one time. Esophagitis is more prevalent in adults and occurs across all demographics. Treatment: Lifestyle changes Losing weight, stopping smoking and alcohol use, lowering stress, avoiding sleeping or lying down after eating, raising the head of the bed, taking medicines correctly, avoiding certain medications, and avoiding foods that cause the reflux that might be causing the esophagitis. Treatment: Medications Antacids To treat reflux esophagitis, over-the-counter antacids, medications that reduce acid production (H-2 receptor blockers), and proton pump inhibitors are recommended to help block acid production and to let the esophagus heal. Some prescription medications to treat reflux esophagitis include higher-dose H-2 receptor blockers, proton pump inhibitors, and prokinetics, which help with the emptying of the stomach. However, prokinetics are no longer licensed for GERD because their evidence of efficacy is poor, and following a safety review, licensed use of domperidone and metoclopramide is now restricted to short-term use in nausea and vomiting only. Treatment: For subtypes To treat eosinophilic esophagitis, avoiding any allergens that may be stimulating the eosinophils is recommended. As for medications, proton pump inhibitors and steroids can be prescribed. Steroids that are used to treat asthma can be swallowed to treat eosinophilic esophagitis due to nonfood allergens. The removal of food allergens from the diet is included to help treat eosinophilic esophagitis. Treatment: For infectious esophagitis, medicine is prescribed based on what type of infection is causing the esophagitis. These medicines are prescribed to treat bacterial, fungal, viral, and/or parasitic infections. Procedures An endoscopy can be used to remove diseased fragments. Surgery can be done to remove the damaged part of the esophagus. For reflux esophagitis, a fundoplication can be done to help strengthen the lower esophageal sphincter and keep it from allowing backflow of the stomach contents into the esophagus. For esophageal stricture, a gastroenterologist can perform a dilation of the esophagus. As of 2020, evidence for magnetic sphincter augmentation is poor. Prognosis: The prognosis for a person with esophagitis depends on the underlying causes and conditions. If a patient has a more serious underlying cause such as a digestive system or immune system issue, it may be more difficult to treat. Normally, the prognosis would be good with no serious illnesses. If there is more than one cause, the prognosis may only be fair. Terminology: The term is from Greek οἰσοφάγος "gullet" and -itis "inflammation".
**Majumdar–Ghosh model** Majumdar–Ghosh model: The Majumdar–Ghosh model is a one-dimensional quantum Heisenberg spin model in which the nearest-neighbour antiferromagnetic exchange interaction is twice as strong as the next-nearest-neighbour interaction. It is a special case of the more general $J_1$–$J_2$ model, with $J_1 = 2J_2$. The model is named after Indian physicists Chanchal Kumar Majumdar and Dipan Ghosh. The Majumdar–Ghosh model is notable because its ground states (lowest energy quantum states) can be found exactly and written in a simple form, making it a useful starting point for understanding more complex spin models and phases. Definition: The Majumdar–Ghosh model is defined by the following Hamiltonian: $\hat{H} = J \sum_{j=1}^{N} \vec{S}_j \cdot \vec{S}_{j+1} + \frac{J}{2} \sum_{j=1}^{N} \vec{S}_j \cdot \vec{S}_{j+2}$ where $\vec{S}_j$ is a quantum spin operator with quantum number S = 1/2. Definition: Other conventions for the coefficients may be taken in the literature, but the most important fact is that the ratio of first-neighbour to second-neighbour couplings is 2 to 1. As a result of this ratio, it is possible to express the Hamiltonian (shifted by an overall constant) equivalently in the form $\hat{H} = \frac{J}{4} \sum_{j=1}^{N} \left(\vec{S}_{j-1} + \vec{S}_j + \vec{S}_{j+1}\right)^2$ The summed quantity is none other than the quadratic Casimir operator for the representation of the spin algebra on the three consecutive sites $j-1, j, j+1$, which in turn can be decomposed into a direct sum of spin-1/2 and spin-3/2 representations. It has the eigenvalues $\frac{1}{2}\left(\frac{1}{2}+1\right) = \frac{3}{4}$ on the spin-1/2 subspace and $\frac{3}{2}\left(\frac{3}{2}+1\right) = \frac{15}{4}$ on the spin-3/2 subspace. Ground states: It has been shown that the Majumdar–Ghosh model has two minimum energy states, or ground states, namely the states in which neighboring pairs of spins form singlet configurations. Ground states: The wavefunction for each ground state is a product of these singlet pairs. This explains why there must be at least two ground states with the same energy, since one may be obtained from the other by merely shifting, or translating, the system by one lattice spacing. Furthermore, it has been shown that these (and linear combinations of them) are the unique ground states. Generalizations: The Majumdar–Ghosh model is one of a small handful of realistic quantum spin models that may be solved exactly. Moreover, its ground states are simple examples of what are known as valence-bond solids (VBS). Thus the Majumdar–Ghosh model is related to another famous spin model, the AKLT model, whose ground state is the unique one-dimensional spin-one (S=1) valence-bond solid. Generalizations: The Majumdar–Ghosh model is also a useful example of the Lieb–Schultz–Mattis theorem, which roughly states that an infinite, one-dimensional, half-odd-integer spin system must either have no energy spacing (or gap) between its ground and excited states or else have more than one ground state. The Majumdar–Ghosh model has a gap and falls under the second case. The isotropy of the model is actually not important to the fact that it has an exactly dimerised ground state. For example, $\hat{H} = J \sum_{j=1}^{N} \left(X_j X_{j+1} + Y_j Y_{j+1} + \delta Z_j Z_{j+1}\right) + \frac{J}{2} \sum_{j=1}^{N} \left(X_j X_{j+2} + Y_j Y_{j+2} + \delta Z_j Z_{j+2}\right)$ also has the same aforementioned exactly dimerised ground state for all real $\delta > -1/2$.
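Because each three-site Casimir term above is bounded below by its spin-1/2 eigenvalue 3/4, the original Hamiltonian (whose shift constant works out to $9JN/16$) is bounded below by $E_0 = \frac{J}{4}\cdot\frac{3N}{4} - \frac{9JN}{16} = -\frac{3}{8}JN$, and the dimer states saturate this bound. This can be verified numerically by exact diagonalization of a small ring; the following is a minimal Python/NumPy sketch (illustrative, not from the source), assuming $J = 1$, $N = 8$ and periodic boundary conditions:

```python
import numpy as np

# Single-site spin-1/2 operators (hbar = 1).
Sx = np.array([[0.0, 0.5], [0.5, 0.0]])
Sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
Sz = np.array([[0.5, 0.0], [0.0, -0.5]])
I2 = np.eye(2)

N = 8    # sites on the ring (periodic boundary conditions)
J = 1.0  # nearest-neighbour coupling; next-nearest coupling is J/2

def site_op(op, j):
    """Embed a one-site operator at site j into the full 2^N-dimensional space."""
    mats = [I2] * N
    mats[j % N] = op
    full = mats[0]
    for m in mats[1:]:
        full = np.kron(full, m)
    return full

def dot_term(i, j):
    """The Heisenberg coupling S_i . S_j as a 2^N x 2^N matrix."""
    return sum(site_op(S, i) @ site_op(S, j) for S in (Sx, Sy, Sz))

# Majumdar-Ghosh Hamiltonian: H = J sum_j S_j.S_{j+1} + (J/2) sum_j S_j.S_{j+2}
H = sum(J * dot_term(j, j + 1) + (J / 2) * dot_term(j, j + 2) for j in range(N))

energies = np.linalg.eigvalsh(H)
print("ground-state energy:", energies[0])  # expect -3*J*N/8 = -3.0
print("degeneracy:", int(np.sum(np.isclose(energies, energies[0]))))  # expect 2
```

For N = 8 this should print a ground energy of about −3 (= −3JN/8) with a two-fold degenerate ground level, matching the two dimer coverings obtained from one another by a shift of one lattice spacing.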
**Laius complex** Laius complex: The Laius complex revolves around the paternal wish for filicide, particularly for the extinction of the male heir, in an attempt to ensure one will have no successors. Mythological background: Indo-European mythology contains a number of stories of foundlings, like Cyrus the Great or Romulus and Remus, outcast after a prophecy that they will replace the dynasty into which they are born. In Greek mythology, Cronus (Roman Saturn) had devoured his young because of his fear that one would supersede him. Laius in the story of Oedipus similarly casts the latter out to die as an infant because of (in the words of Sophocles) "some wicked spell … saying the child would kill its father". From myth to complex: Whereas Freud had laid stress on Oedipus's filial violence against his father, George Devereux in 1953 introduced the term 'Laius complex' to cover the corresponding feelings on the part of the father – what he called the "'counter-oedipal' (Laius) complex". Later explorations of masculinity have placed the aggressive aspects of the Laius complex within the broader frame of mammalian aggression against their young: what Gershon Legman called "the killing of the male (i.e. sexually uninteresting) children by the father". Two specific psychosexual aspects of the complex have been particularly highlighted. One lays stress on the magical thinking behind the complex – the unconscious belief that if one has no successors, one will be effectively immortal. The other emphasises the narcissism in the Laius/Oedipus relationship – the belief that there is only room for a single figure to exist in life, leading inevitably to the destruction of the one or the other competitor, father or son. How far the playing down of the Laius neurosis (in orthodox psychoanalysis) can be linked to what Julia Kristeva called Freud's "paternal vision of childhood" remains for the 21st century an open question. Bracha L. Ettinger introduced a Laius complex 'per se', not in terms of 'counter-Oedipus' but prior to it, in the frame of feminist psychoanalysis, in terms of the analyst's desire to destroy the patient and exploit the patient's creativity and sexuality, not as a counter-transferential reaction to the patient (as 'son' or 'daughter'), but as a direct transference of the analyst toward the patient. Manifestations of such a Laius complex during treatment are, according to Ettinger, close to psychosis (not neurosis), and can lead to the production of a psychotic folie à deux.
**Open Insulin Project** Open Insulin Project: The Open Insulin Project is a community of researchers and advocates working to develop an open-source protocol for producing insulin that is affordable, has transparent pricing, and is community-owned. History: The Open Insulin Project was started in 2015 by Anthony Di Franco, himself a type 1 diabetic, in response to the unreasonably high prices of insulin in the US. The project has been housed in Counter Culture Labs, a community laboratory and maker space in the Bay Area. Other collaborators include ReaGent, BioCurious and BioFoundry. Goals: The project aims to develop both the methodology and hardware to allow communities and individuals to produce medical-grade insulin for the treatment of diabetes. These methods will be low-cost in order to combat the high price of insulin in places like the US. There is also potential for small-scale distributed production that may allow for improved insulin access in places with poor availability infrastructure. Access to insulin remains so insufficient around the globe that "Half of all people who need insulin lack the financial or logistical means to obtain adequate supplies". Motivation: Researcher Frederick Banting famously refused to put his name on the patent after the discovery of insulin; the original 1923 patent was later sold by his collaborators for just $1 to the University of Toronto in an effort to make insulin as available as possible. Despite this, for various reasons, there remains no generic version of insulin available in the US. Insulin remains controlled by a small number of large pharmaceutical companies and is sold at prices unaffordable to many who rely on it to live, particularly those without insurance. This lack of availability has led to fatalities, such as that of Alec Smith, who died in 2017 due to lack of insulin. The Open Insulin Project is motivated by the urgent need to protect the health of those with diabetes, regardless of their economic or employment status, by developing low-cost methods for insulin production available for anyone to use. Progress and status: The project has genetically engineered microorganisms to produce long-acting (glargine) and short-acting (lispro) insulin analogs using standard techniques in biotechnology, and according to their December 2018 release, the "first major milestone ― the production of insulin at lab scale ― is almost complete". The cost to produce insulin via Open Insulin methods is estimated by the project to be such that "roughly $10,000 should be enough to get a group started with the equipment needed to produce enough insulin for 10,000 people". A more recent estimate (May 2020) by the Open Insulin Foundation puts the one-time equipment cost at $200,000 for used equipment (working out to $7–$20 per patient) and up to $1,000,000 for new equipment ($73 per patient). The average price per vial was estimated to be $7, with each patient needing two vials per month.
**Moroxydine** Moroxydine: Moroxydine is an antiviral drug that was originally developed in the 1950s as an influenza treatment. It has potential applications against a number of RNA and DNA viruses. Structurally, moroxydine is a heterocyclic biguanidine. It was reported in March 2014 that three kindergartens in two provinces of China had been found to be secretly dosing their students with moroxydine hydrochloride to try to prevent them from becoming ill. The kindergartens were paid only for the days that pupils attended and wanted to ensure that they maximised their earnings.
**Italian robotics** Italian robotics: Robotics in Italy is a high-technology area in which Italy hosts numerous research centers. History: The origins of this technology, which requires knowledge of many sciences to be applied, lie in Italy, as in the rest of the world, in the Italian Renaissance with the studies of Leonardo da Vinci. The first documented design of a robot, specifically of an android, was produced by Leonardo da Vinci in 1495. Robotic Surgery: Almost all the Italian regions are equipped with robots in the operating room, and about 18 thousand robotic surgery operations were carried out in 2017. "Da Vinci robotic surgery", explains Walter Artibani, Director of the Urology Unit of the Integrated AOU of Verona and Secretary General of the Italian Society of Urology, "is emblematic of minimally invasive surgery." Robotic Surgery: The robot allows a precision not comparable with other techniques and makes it possible to overcome the limits linked to the difficulty of treating pathologies in anatomical sites that are hard to reach with laparoscopy. Italian urology is regarded as a centre of excellence in the field of robotics. Robotic Surgery: In urology the reasons for success are many and simple: the precision of the robot allows greater ease of access to more complex anatomies, demolitive and reconstructive precision, less blood loss, a reduction in post-operative hospitalization, and a reduction in side effects (erectile dysfunction and incontinence). Added to this are characteristics such as immersive three-dimensional vision, able to magnify up to 10 times the normal vision of the human eye. Robotic Exoskeleton: In December 2013 the Italian Institute of Technology of Genoa started a research program called Robot Rehab that focuses on robotic rehabilitation by developing exoskeletons for the disabled, prosthetic devices and new rehabilitation instruments. This program is part of an important agreement with INAIL to provide potential applications in the Italian national health system in the short term. Robotic Plant: The Plantoid is a machine or a synthetic organism designed to behave, act and grow like a plant. The concept was published for the first time in 2010, and a prototype for the European Space Agency is now under development. One of the first prototypes was realized by the Micro-Biorobotics Center of the Italian Institute of Technology in Pontedera in 2015. Robotic Runner: R1 is a humanoid robot built by the Italian Institute of Technology in Genoa. Robotic Runner: On Sunday, October 14, 2018, at 10 a.m., at the start of the "StraGenova del cuore" (the non-competitive race dedicated to the memory of the 43 victims of the collapse of Ponte Morandi), R1 took part in the special role of mascot, with the task of symbolically giving the start to the race. Cyborg in Italy: The first experiment of its kind in the world took place in Italy in 2014, when a bionic hand that perceives touch was implanted in Dennis Aabo Sørensen, a 36-year-old Dane. The bionic hand, called LifeHand2, works with a sensitivity similar to the natural one: it was the first time that an artificial limb allowed the wearer to perceive and recognize the objects he touches. The result, published in the journal Science Translational Medicine, is an Italian research project that originated from a vast international collaboration coordinated by Silvestro Micera of the Polytechnic of Lausanne.
Cyborg in Italy: The project was developed largely in Italy by the BioRobotics Institute of the Sant'Anna School of Advanced Studies in Pisa, in collaboration with the University of Freiburg in Germany. The first Italian woman to be implanted with the touch-sensitive bionic hand, made by the group of Silvestro Micera of the Scuola Superiore Sant'Anna and the Polytechnic of Lausanne, was Almerina Mascarello, who lives in Veneto and who had lost her left hand in an accident. The operation was performed in June 2016 at the Policlinico Gemelli in Rome by the group of the neurologist Paolo Maria Rossini, and the experiment lasted six months.
**Nemerle** Nemerle: Nemerle is a general-purpose, high-level, statically typed programming language designed for platforms using the Common Language Infrastructure (.NET/Mono). It offers functional, object-oriented, aspect-oriented, reflective and imperative features. It has a simple C#-like syntax and a powerful metaprogramming system. In June 2012, the core developers of Nemerle were hired by the Czech software development company JetBrains, where the team focused on developing Nitra, a framework to implement extant and new programming languages. Both the Nemerle language and Nitra have seemingly been abandoned or discontinued by JetBrains; Nitra has not been updated by its original creators since 2017, and Nemerle is now maintained entirely by the Russian Software Development Network, independently from JetBrains, although no major updates have been released yet and development is progressing very slowly. Neither Nemerle nor Nitra has been mentioned or referenced by JetBrains for years. Nemerle: Nemerle is named after the Archmage Nemmerle, a character in the fantasy novel A Wizard of Earthsea by Ursula K. Le Guin. Features: Nemerle's most notable feature is the ability to mix styles of programming that are object-oriented and functional. Programs may be structured using object-oriented concepts such as classes and namespaces, while methods can (optionally) be written in a functional style. Other notable features include: strong type inference; a flexible metaprogramming subsystem (using macros); full support for object-oriented programming (OOP), in the style of C#, Java, and C++; and full support for functional programming, in the style of ML, OCaml, and Haskell, with higher-order functions, pattern matching, algebraic types, local functions, tuples and anonymous types, and partial application of functions. The metaprogramming system allows for great compiler extensibility, embedding domain-specific languages, partial evaluation, and aspect-oriented programming, taking a high-level approach to lift as much of the burden as possible from programmers. The language combines all Common Language Infrastructure (CLI) standard features, including parametric polymorphism, lambdas, extension methods etc. Accessing the libraries included in the .NET or Mono platforms is as easy as in C#. Features: Type inference; everything is an expression; tuples; pattern matching; functional types and local functions; and variants. Variants (called data types or sum types in SML and OCaml) are forms of expressing data of several different kinds. Metaprogramming: Nemerle's macro system allows for creating, analysing, and modifying program code during compilation. Macros can be used in the form of a method call or as a new language construct; many constructs within the language (if, for, foreach, while, using etc.) are themselves implemented using macros. Braceless syntax: Similarly to the braceless syntax later added to Scala, Nemerle allows the programmer to optionally use a whitespace-sensitive syntax based on the off-side rule, similar to Python, in which a curly-brace snippet can be rewritten using indentation alone. Notably, it is not possible to break expressions or alternative clauses in matches over multiple lines without using a backslash \. In order to activate this syntax, the user must add #pragma indent to the top of the file or use the compiler option -i. IDE: Nemerle can be integrated into the integrated development environment (IDE) Visual Studio 2008.
It also has a fully free IDE based on the Visual Studio 2008 Shell (like the Visual Studio Express Editions) and SharpDevelop. Nemerle can also be integrated into Visual Studio (up until 2017) using add-ins and extensions. Examples: Hello, World! The traditional Hello World! can be implemented in a C#-like fashion as a class with a Main method, or more simply as a single top-level WriteLine statement. Examples of macros Macros allow generating boilerplate code with added static checks performed by the compiler. They reduce the amount of code that must be written by hand, make code generation safer, and allow programs to generate code with compiler checks, while keeping source code relatively small and readable. Examples: String formatting The string formatting macro simplifies variable-to-string manipulations using $ notation. Declarative code generation StructuralEquality, Memoize, json, and with are macros which generate code at compile time. Though some of them (StructuralEquality, Memoize) can look like C# attributes, they are examined by the compiler during compilation and transformed into the appropriate code using logic predefined by their macros. Examples: Database accessibility Using Nemerle macros for SQL, you can write the query directly instead of assembling it by hand, and this is not just hiding some operations in a library, but additional work performed by the compiler to understand the query string, the variables used there, and the columns returned from the database. The ExecuteReaderLoop macro will generate code roughly equivalent to what you would have to type manually. Moreover, it connects to the database at compilation time to check that your SQL query really makes sense. Examples: New language constructs With Nemerle macros you can also introduce new syntax into the language: a macro can define the ford (EXPR ; EXPR) EXPR construct, which can then be used like ford (i ; n) print (i); Nemerle with ASP.NET Nemerle can be either embedded directly into ASP.NET pages or stored in a separate file and referenced with a single line. PInvoke Nemerle can take advantage of native platform libraries. The syntax is very similar to C#'s and that of other .NET languages. Here is the simplest example:
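The code listing that followed did not survive extraction. As a hedged reconstruction, a minimal Nemerle PInvoke example in the C#-like style the article describes might look as follows (the choice of msvcrt.dll and its puts/_flushall functions is illustrative, not taken from the source):

```nemerle
using System;
using System.Runtime.InteropServices;

class PlatformInvokeTest
{
    // Bind two C runtime functions from the native msvcrt library.
    // Note Nemerle's "name : type" annotations, with the return type
    // written after the parameter list.
    [DllImport("msvcrt.dll")]
    public extern static puts(c : string) : int;

    [DllImport("msvcrt.dll")]
    internal extern static _flushall() : int;

    public static Main() : void
    {
        // Imported functions are called like ordinary static methods;
        // "_ =" discards the returned int to silence unused-value warnings.
        _ = puts("Test");
        _ = _flushall();
    }
}
```

As in C#, the DllImport attribute binds each extern declaration to a native symbol; the only visible difference from the C# equivalent is the type-annotation style.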
**Metre per second squared** Metre per second squared: The metre per second squared is the unit of acceleration in the International System of Units (SI). As a derived unit, it is composed from the SI base units of length, the metre, and time, the second. Its symbol is written in several forms as m/s2, m·s−2 or m s−2, or, less commonly, as m/s/s. As acceleration, the unit is interpreted physically as change in velocity or speed per time interval, i.e. metre per second per second, and is treated as a vector quantity. Example: An object experiencing a constant acceleration of one metre per second squared (1 m/s2) from a state of rest reaches a speed of 5 m/s after 5 seconds and 10 m/s after 10 seconds. The average acceleration a can be calculated by dividing the speed v (m/s) by the time t (s), so the average acceleration in this example would be a = v/t = (10 m/s)/(10 s) = 1 (m/s)/s = 1 m/s2. Related units: Newton's second law states that force equals mass multiplied by acceleration. Related units: The unit of force is the newton (N), and mass has the SI unit kilogram (kg). One newton equals one kilogram metre per second squared. Therefore, the unit metre per second squared is equivalent to newton per kilogram, N·kg−1, or N/kg. Thus, the Earth's gravitational field (near ground level) can be quoted as 9.8 metres per second squared, or the equivalent 9.8 N/kg. Related units: Acceleration can be measured in ratios to gravity, such as g-force, and peak ground acceleration in earthquakes. Unicode character: The "metre per second squared" symbol is encoded by Unicode at code point U+33A8 ㎨ SQUARE M OVER S SQUARED. This is for compatibility with East Asian encodings and not intended to be used in new documents.
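Restating the worked example and the newton-per-kilogram equivalence from the sections above in equation form:

```latex
% Average acceleration from the Example section: 10 m/s reached in 10 s.
a = \frac{\Delta v}{\Delta t}
  = \frac{10\ \mathrm{m/s}}{10\ \mathrm{s}}
  = 1\ (\mathrm{m/s})/\mathrm{s}
  = 1\ \mathrm{m/s^2}

% Newton's second law (F = ma) gives the N/kg equivalence:
1\ \mathrm{N/kg} = 1\ \frac{\mathrm{kg \cdot m/s^2}}{\mathrm{kg}} = 1\ \mathrm{m/s^2}
```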
**Colnect** Colnect: Colnect Collectors Club Community is a website containing wiki-like collectables catalogs. It allows collectors to manage their personal collection using these catalogs and automatically match their swap/wish-lists with those of other collectors. Colnect provides a marketplace dedicated to buying and selling collectibles. Colnect's phone cards catalog is the biggest in the world. History: Colnect was founded in 2002 as Islands Phonecards Database with the aim of creating a catalog of all phonecards. Since autumn 2008, stamps and coins have been supported in addition to phone cards; in the meantime, 43 types of collectables have come to be represented. Features: The collectables catalogs on Colnect are created by collectors using the site. New items are added by contributors and verified by volunteering editors. Though any collector may add comments on a catalog item, actual changes are made only by site-trusted editors. All users can see the catalog information (issue dates, print runs, pictures, etc.). Registered users can additionally manage their personal collection by marking each item as belonging to their collection, swap list or wish list while browsing through the catalogs. These users can automatically match their swap list with the wish list of another user and vice versa. Users can purchase a Premium membership with additional features; contributors get it for free. Premium members can download any list of collectables and open it as a spreadsheet. Colnect provides a marketplace for collectibles, publicly available since December 27, 2017. Sales on the marketplace are connected to the centralized catalogs: sellers can post items for sale directly from the catalog, and buyers can view information about a collectible on the item's page. Statistics: The site reports that it caters for collectors from 113 countries. Hundreds of them help the site voluntarily, including the translators who have translated Colnect into 62 languages. As of 2018, Colnect lists 435 Chinese transportation tickets, issued by 147 different Chinese companies. Rewards: On April 25, 2009, Colnect was announced as the winner of the European Startup 2.0 competition, out of about 200 competing companies and 11 finalists. On December 31, 2009, Colnect was a close runner-up in the TechAviv Peer Awards startup companies competition, losing to 5min by a single vote.
**Candocuronium iodide** Candocuronium iodide: Candocuronium iodide (INN, formerly chandonium, HS-310) is an aminosteroid neuromuscular-blocking drug. Its use in anesthesia for endotracheal intubation and to provide skeletal muscle relaxation during surgery or mechanical ventilation was briefly evaluated in clinical studies in India. However, further development was discontinued due to attendant cardiovascular effects, primarily tachycardia about the same as that seen with the clinically established pancuronium bromide. Candocuronium demonstrated a short duration of action in the body and a rapid onset of action. It had little to no ganglion-blocking activity, with a greater potency than pancuronium. Background: As with other neuromuscular-blocking agents, candocuronium preferentially antagonizes, competitively, the nicotinic subtype of acetylcholine receptors. The agent was developed by the laboratory of Harkishan Singh, Panjab University, Chandigarh, India, as part of the search for a non-depolarizing replacement for the most popular clinical depolarizing agent, suxamethonium (succinylcholine). Design of candocuronium: The mono- and bis-quaternary azasteroid series of compounds to which candocuronium belongs are based on the same principle that led to aminosteroids such as pancuronium, vecuronium and rocuronium: use of the steroid skeleton to provide a somewhat rigid distance between the two quaternary ammonium centers, with appendages incorporating fragments of choline or acetylcholine. The discovery program initiated by Singh initially led to the synthesis of the bis-quaternary non-depolarizing agent HS-342 (4,17a-dimethyl-4,17a-diaza-D-homo-5α-androstane dimethiodide), which was equipotent with tubocurarine and had one-third its duration of action, but was not suitable for further clinical evaluation. Modifications of the HS-342 structure led to two other notable agents, HS-347 and HS-310 (subsequently named chandonium, then candocuronium). HS-347 was equipotent with tubocurarine but exhibited considerable ganglion-blocking activity; candocuronium appeared to be suitably placed for clinical trials following encouraging preclinical evaluations. Modifications to the candocuronium design: Candocuronium did not provide the desired profile, and a further extension of research was undertaken to overcome its limitations. This led to four more potentially useful compounds, HS-692, HS-693, HS-704 and HS-705, whose onset and duration were indistinguishable from candocuronium's, but all of which demonstrated profound vagolytic effects and much weaker potencies than candocuronium. To improve on potency, further modifications of the candocuronium nucleus were undertaken, leading to the identification of yet another potentially useful compound, HS-626. Upon further preclinical evaluation, HS-626 demonstrated a slightly more desirable neuromuscular-blocking profile than that of candocuronium, but its overall improvement was insufficient to warrant advancement to clinical testing. Modifications at 3- and 16-positions of androstane nucleus: The discovery of candocuronium led to numerous related neuromuscular-blocking agents with short durations of action but also having attendant undesirable cardiovascular effects. The Marshall group then explored other modifications at the 3- and 16-positions of the androstane nucleus, yielding an agent that could proceed through expanded evaluation to clinical testing.
**Lean-burn** Lean-burn: Lean-burn refers to the burning of fuel with an excess of air in an internal combustion engine. In lean-burn engines the air–fuel ratio may be as lean as 65:1 (by mass). The air–fuel ratio needed to stoichiometrically combust gasoline, by contrast, is 14.64:1. The excess of air in a lean-burn engine results in far lower hydrocarbon emissions. High air–fuel ratios can also be used to reduce losses caused by other engine power management systems such as throttling losses. Principle: A lean-burn mode is a way to reduce throttling losses. An engine in a typical vehicle is sized for providing the power desired for acceleration, but must operate well below that point in normal steady-speed operation. Ordinarily, the power is cut by partially closing a throttle. However, the extra work done in pumping air through the throttle reduces efficiency. If the fuel/air ratio is reduced, then lower power can be achieved with the throttle closer to fully open, and the efficiency during normal driving (below the maximum torque capability of the engine) can be higher. Principle: Engines designed for lean-burning can employ higher compression ratios and thus provide better performance, more efficient fuel use and lower exhaust hydrocarbon emissions than those found in conventional gasoline engines. Ultra-lean mixtures with very high air–fuel ratios can only be achieved by direct injection engines. Principle: The main drawback of lean-burning is that a complex catalytic converter system is required to reduce NOx emissions. Lean-burn engines do not work well with modern three-way catalytic converters—which require a pollutant balance at the exhaust port so they can carry out oxidation and reduction reactions—so most modern engines tend to cruise and coast down at or near the stoichiometric point. Chrysler Electronic Lean-Burn: From 1976 through 1989, Chrysler equipped many vehicles with their Electronic Lean-Burn (ELB) system, which consisted of a spark control computer and various sensors and transducers. The computer adjusted spark timing based on manifold vacuum, engine speed, engine temperature, throttle position over time, and incoming air temperature. Engines equipped with ELB used fixed-timing distributors without the traditional vacuum and centrifugal timing advance mechanisms. The ELB computer also directly drove the ignition coil, eliminating the need for a separate ignition module. Chrysler Electronic Lean-Burn: ELB was produced in both open-loop and closed-loop variants; the open-loop systems produced exhaust clean enough for many vehicle variants so equipped to pass 1976 and 1977 US Federal emissions regulations, and Canadian emissions regulations through 1980, without a catalytic converter. The closed-loop version of ELB used an oxygen sensor and a feedback carburetor, and was phased into production as emissions regulations grew more stringent starting in 1981, but open-loop ELB was used as late as 1990 in markets with lax emissions regulations, on vehicles such as the Mexican Chrysler Spirit. The spark control and engine parameter sensing and transduction strategies introduced with ELB remained in use through 1995 on Chrysler vehicles equipped with throttle-body fuel injection. Heavy-duty gas engines: Lean-burn concepts are often used for the design of heavy-duty natural gas, biogas, and liquefied petroleum gas (LPG) fuelled engines.
These engines can either be full-time lean-burn, where the engine runs with a weak air–fuel mixture regardless of load and engine speed, or part-time lean-burn (also known as "lean mix" or "mixed lean"), where the engine runs lean only during low load and at high engine speeds, reverting to a stoichiometric air–fuel mixture in other cases. Heavy-duty gas engines: Heavy-duty lean-burn gas engines admit twice as much air as theoretically needed for complete combustion into the combustion chambers. The extremely weak air–fuel mixtures lead to lower combustion temperatures and therefore lower NOx formation. While lean-burn gas engines offer higher theoretical thermal efficiencies, transient response and performance may be compromised in certain situations. However, advances in fuel control and closed-loop technology by companies like North American Repower have led to the production of modern CARB-certified lean-burn heavy-duty engines for use in commercial vehicle fleets. Lean-burn gas engines are almost always turbocharged, resulting in high power and torque figures not achievable with stoichiometric engines due to their high combustion temperatures. Heavy-duty gas engines: Heavy-duty gas engines may employ precombustion chambers in the cylinder head. A lean gas and air mixture is first highly compressed in the main chamber by the piston. A much richer, though much smaller, volume of gas/air mixture is introduced into the precombustion chamber and ignited by a spark plug. The flame front then spreads to the lean gas/air mixture in the cylinder. Heavy-duty gas engines: This two-stage lean-burn combustion produces low NOx and no particulate emissions. Thermal efficiency is better, as higher compression ratios are achieved. Manufacturers of heavy-duty lean-burn gas engines include MTU, Cummins, Caterpillar, MWM, GE Jenbacher, MAN Diesel & Turbo, Wärtsilä, Mitsubishi Heavy Industries, Dresser-Rand Guascor, Waukesha Engine and Rolls-Royce Holdings. Honda lean-burn systems: One of the newest lean-burn technologies available in automobiles currently in production uses very precise control of fuel injection, a strong air–fuel swirl created in the combustion chamber, a new linear air–fuel sensor (LAF-type O2 sensor) and a lean-burn NOx catalyst to further reduce the NOx emissions that increase under lean-burn conditions and meet NOx emissions requirements. Honda lean-burn systems: This stratified-charge approach to lean-burn combustion means that the air–fuel ratio is not equal throughout the cylinder. Instead, precise control over fuel injection and intake flow dynamics allows a greater concentration of fuel closer to the spark plug tip (richer), which is required for successful ignition and flame spread for complete combustion. The remainder of the cylinder's intake charge is progressively leaner, with an overall average air–fuel ratio falling into the lean-burn category of up to 22:1. Honda lean-burn systems: The older Honda engines that used lean-burn (not all did) accomplished this by having a parallel fuel and intake system that fed a pre-chamber the "ideal" ratio for initial combustion. This burning mixture was then opened to the main chamber, where a much larger and leaner mix then ignited to provide sufficient power. During the time this design was in production, this system (CVCC, Compound Vortex Controlled Combustion) primarily allowed lower emissions without the need for a catalytic converter.
These were carbureted engines, and the relatively imprecise fuel metering of carburetion limited the fuel-economy potential of the concept, which multi-port fuel injection (MPI) now allows to be realized. Honda lean-burn systems: The newer Honda stratified-charge (lean-burn) engines operate on air–fuel ratios as high as 22:1. The amount of fuel drawn into the engine is much lower than in a typical gasoline engine, which operates at 14.7:1, the chemical stoichiometric ideal for complete combustion when averaging gasoline to the petrochemical industry's accepted standard of C8H18. Honda lean-burn systems: By the necessity of the limits of the physics and chemistry of combustion as they apply to a current gasoline engine, this lean-burn ability must be limited to light-load and lower-RPM conditions. A "top" speed cut-off point is required, since leaner gasoline mixtures burn more slowly and, for power to be produced, combustion must be complete by the time the exhaust valve opens. Honda lean-burn systems: Applications: 1992–95 Civic VX; 1996–2005 Civic HX; 2002–05 Civic Hybrid; 2000–06 Insight (manual transmission, and Japanese-spec CVT only). Toyota lean-burn engines: In 1984, Toyota released the 4A-ELU engine. This was the first engine in the world to use a lean-burn combustion control system with a lean mixture sensor, called "TTC-L" (Toyota Total Clean-Lean-Burn) by Toyota. Toyota also referred to an earlier lean-burn system as the "Turbulence Generating Pot" (TGP). TTC-L was used in Japan on the Toyota Carina T150 (replacing the TTC-V (Vortex) exhaust gas recirculation approach used earlier), Toyota Corolla E80, and Toyota Sprinter. The lean mixture sensor was provided in the exhaust system to detect air–fuel ratios leaner than the theoretical air–fuel ratio. The fuel injection volume was then accurately controlled by a computer using this detection signal to achieve lean air–fuel ratio feedback. Toyota lean-burn engines: For optimal combustion, the following items were applied: program-independent injection that accurately changed the injection volume and timing for individual cylinders, platinum plugs for improving ignition performance with lean mixtures, and high-performance igniters. The lean-burn versions of the 1587cc 4A-FE and 1762cc 7A-FE 4-cylinder engines have 2 inlet and 2 exhaust valves per cylinder. Toyota uses a set of butterflies to restrict flow in every second inlet runner during lean-burn operation. This creates a large amount of swirl in the combustion chamber. Injectors are mounted in the head, rather than conventionally in the intake manifold. The compression ratio is 9.5:1. Toyota lean-burn engines: The 1998cc 3S-FSE engine is a direct-injection petrol lean-burn engine with a compression ratio of 10:1. Nissan lean-burn engines: Nissan QG engines are a lean-burn aluminum DOHC 4-valve design with variable valve timing and optional NEO Di direct injection. The 1497cc QG15DE has a compression ratio of 9.9:1 and the 1769cc QG18DE one of 9.5:1. Mitsubishi Vertical Vortex (MVV): In 1991, Mitsubishi developed and began producing the MVV (Mitsubishi Vertical Vortex) lean-burn system, first used in Mitsubishi's 1.5 L 4G15 straight-4 single-overhead-cam 1,468-cc engine. The vertical vortex engine has an idle speed of 600 rpm and a compression ratio of 9.4:1, compared with respective figures of 700 rpm and 9.2:1 for the conventional version.
The lean-burn MVV engine can achieve complete combustion with an air–fuel ratio as high as 25:1; this boasted a 10–20% gain in fuel economy (on the Japanese 10-mode urban cycle) in bench tests compared with its conventional MPI powerplant of the same displacement, which means lower CO2 emissions. The heart of Mitsubishi's MVV system is the linear air–fuel ratio exhaust gas oxygen sensor. Compared with standard oxygen sensors, which essentially are on-off switches set to a single air–fuel ratio, the lean oxygen sensor is more of a measurement device covering the air–fuel ratio range from about 15:1 to 26:1. To speed up the otherwise slow combustion of lean mixtures, the MVV engine uses two intake valves and one exhaust valve per cylinder. The separate, specially shaped (twin intake port design) intake ports are the same size, but only one port receives fuel from an injector. This creates two vertical vortices of identical size, strength and rotational speed within the combustion chamber during the intake stroke: one vortex of air, the other of an air/fuel mixture. The two vortices also remain independent layers throughout most of the compression stroke. Near the end of the compression stroke, the layers collapse into uniform minute turbulences, which effectively promote lean-burn characteristics. More importantly, ignition occurs in the initial stages of breakdown of the separate layers, while substantial amounts of each layer still exist. Because the spark plug is located closer to the vortex consisting of the air/fuel mixture, ignition arises in an area of the pentroof-design combustion chamber where fuel density is higher. The flame then spreads through the combustion chamber via the small turbulences. This provides stable combustion even at normal ignition-energy levels, thereby realizing lean-burn. The engine computer stores optimum air–fuel ratios for all engine-operating conditions, from lean (for normal operation) to richest (for heavy acceleration) and all points in between. Full-range oxygen sensors (used for the first time) provide essential information that allows the computer to properly regulate fuel delivery. Diesel engines: All diesel engines can be considered to be lean-burning with respect to the total volume; however, the fuel and air are not well mixed before combustion. Most of the combustion occurs in rich zones around small droplets of fuel. Locally rich combustion is a source of particulate matter (PM) emissions.
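The stoichiometric figures quoted in this article (14.64:1 or 14.7:1 for gasoline, treated as C8H18) follow from the balanced combustion equation C8H18 + 12.5 O2 → 8 CO2 + 9 H2O. The short Python sketch below (illustrative, assuming air is about 23.2% oxygen by mass) reproduces the arithmetic; note that pure octane works out slightly leaner, near 15.1:1, while blended pump gasoline averages the article's 14.7:1:

```python
# Stoichiometric air-fuel ratio (by mass) for octane, C8H18:
#   C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O
M_C, M_H, M_O2 = 12.011, 1.008, 31.998    # molar masses, g/mol
O2_MASS_FRACTION_OF_AIR = 0.232           # air is ~23.2% oxygen by mass

fuel = 8 * M_C + 18 * M_H                 # ~114.2 g per mole of C8H18
oxygen = 12.5 * M_O2                      # O2 mass needed per mole of fuel
air = oxygen / O2_MASS_FRACTION_OF_AIR    # total air mass carrying that O2

afr = air / fuel
print(f"stoichiometric AFR, pure octane: {afr:.1f}:1")           # ~15.1:1
print(f"excess air in a 22:1 lean mixture: {22 / afr - 1:.0%}")  # ~46%
```

By the same arithmetic, the 22:1 Honda-style and 25:1 MVV mixtures carry roughly half again as much air as a stoichiometric charge, which is what suppresses combustion temperature and NOx formation.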
**B7 (protein)** B7 (protein): B7 is a type of integral membrane protein found on activated antigen-presenting cells (APCs) that, when paired with either a CD28 or CD152 (CTLA-4) surface protein on a T cell, can produce a costimulatory signal or a coinhibitory signal to enhance or decrease, respectively, the activity of an MHC–TCR signal between the APC and the T cell. Binding of the B7 of the APC to CTLA-4 of T cells causes inhibition of the activity of T cells. There are two major types of B7 proteins: B7-1 or CD80, and B7-2 or CD86. It is not known if they differ significantly from each other. So far, CD80 has been found on dendritic cells, macrophages, and activated B cells, and CD86 (B7-2) on B cells. The proteins CD28 and CTLA-4 (CD152) each interact with both B7-1 and B7-2. Costimulation: There are several steps to activation of the immune system against a pathogen. The T-cell receptor must first interact with the major histocompatibility complex (MHC) surface protein. The CD4 or CD8 proteins on the T-cell surface form a complex with the CD3 protein, which can then recognize the MHC. This is also called "Signal 1", and its main purpose is to guarantee antigen specificity of the T cell activation. Costimulation: However, MHC binding itself is insufficient for producing a T cell response. In fact, lack of further stimulatory signals sends the T cell into anergy. The costimulatory signal necessary to continue the immune response can come from B7–CD28 and CD40–CD40L interactions. When CD40 on the APC binds CD40L (CD154) on the T cell, signals are sent back to both the APC and the T cell. (1) The signal from the APC to the T cell informs the T cell that it must express CD28 on its surface. Costimulation: (2) The signal from the T cell to the APC informs the APC to express B7 (which can be either B7-1 or B7-2). It is the B7–CD28 interaction that leads to activation of the T cell. Importantly, the B7–CD28 binding additionally instructs the T cell to produce CTLA-4 (the competitor for CD28). Since CTLA-4 also binds to B7, it decreases the B7 available to bind to CD28. The B7–CTLA-4 binding suppresses T cell activation. The balance between the opposing signals generated by B7–CD28 and B7–CTLA-4 binding regulates the intensity of the T cell response. There are other activation signals which play a role in immune responses. In the TNF family of molecules, the protein 4-1BB (CD137) on the T cell may bind to 4-1BB ligand (4-1BBL) on the APC. Costimulation: The B7 (B7-1/B7-2) protein is present on the APC surface, and it interacts with the CD28 receptor on the T cell surface. This is one source of "Signal 2" (cytokines can also contribute to T-cell activation, called "Signal 3"). This interaction produces a series of downstream signals which promote the target T cell's survival and activation. Costimulation: Blockade of CD28 is effective in stopping T cell activation, a mechanism that the immune system itself uses to down-regulate T cell activation. T cells can also express the surface protein CTLA-4 (CD152), which likewise binds B7, but with twenty times greater affinity for B7 proteins, and which lacks the ability to activate T cells. As a result, the T cell is blocked from receiving the B7 protein signal and is not activated. CTLA-4-knockout mice are unable to stop immune responses, and develop a fatal massive lymphocyte proliferation. Members of the family: Apart from B7-1 and B7-2, there are other proteins grouped in the B7 family, as summarized in the following table.
**Iridium tetrachloride** Iridium tetrachloride: Iridium tetrachloride is an inorganic compound with the approximate formula IrCl4(H2O)n. It is a water-soluble dark brown amorphous solid. A well defined derivative is ammonium hexachloroiridate ((NH4)2IrCl6). It is used to prepare catalysts, such as the Henbest Catalyst for transfer hydrogenation of cyclohexanones.
**Mariptiline** Mariptiline: Mariptiline (EN-207) is a tricyclic antidepressant (TCA) which was developed in the early 1980s, but was never marketed.
**Salicylaldehyde dehydrogenase** Salicylaldehyde dehydrogenase: In enzymology, a salicylaldehyde dehydrogenase (EC 1.2.1.65) is an enzyme that catalyzes the chemical reaction salicylaldehyde + NAD+ + H2O ⇌ salicylate + NADH + 2 H+. The 3 substrates of this enzyme are salicylaldehyde, NAD+, and H2O, whereas its 3 products are salicylate, NADH, and H+. This enzyme belongs to the family of oxidoreductases, specifically those acting on the aldehyde or oxo group of the donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is salicylaldehyde:NAD+ oxidoreductase. This enzyme participates in naphthalene and anthracene degradation.
**Tonic vibration reflex** Tonic vibration reflex: Tonic vibration reflex is a sustained contraction of a muscle subjected to vibration. This reflex is caused by vibratory activation of muscle spindles — muscle receptors sensitive to stretch. Tonic vibration reflex: Tonic vibration reflex is evoked by placing a vibrator — which in this case is typically an electrical motor with an eccentric load on its shaft — on a muscle's tendon. 30–100 Hz vibration activates receptors of the skin, tendons and, most importantly, muscle spindles. Muscle spindle discharges are sent to the spinal cord through afferent nerve fibers, where they activate polysynaptic reflex arcs, causing the muscle to contract. Tonic vibration reflex: The effects of sustained vibratory stimulation on muscle contraction, posture and kinesthetic perceptions are much more complex than merely contraction of the muscle being vibrated. Russian scientists Victor Gurfinkel, Mikhail Lebedev, Andrew Polyakov and Yuri Levick used vibratory stimulation to study human posture control and spectral characteristics of electromyographic (EMG) activity.
**Zatosetron** Zatosetron: Zatosetron (LY-277,359) is a drug which acts as an antagonist at the 5-HT3 receptor. It is orally active and has a long duration of action, producing antinauseant effects but without stimulating the rate of gastrointestinal transport. It is also an effective anxiolytic in both animal studies and human trials, although with some side effects at higher doses.
**Octadecyltrimethoxysilane** Octadecyltrimethoxysilane: Octadecyltrimethoxysilane (OTMS) is an organosilicon compound. This colorless liquid is used for preparing hydrophobic coatings and self-assembled monolayers. It is sensitive toward water, irreversibly degrading to a siloxane polymer. It places a C18H39SiO3 "cap" on oxide surfaces. The formation of OTMS monolayers is used for converting hydrophilic surfaces to hydrophobic surfaces, e.g. for use in certain areas of nanotechnology and analytical chemistry.
**Riccardo Rebonato** Riccardo Rebonato: Riccardo Rebonato is Professor of Finance at EDHEC Business School and EDHEC-Risk Institute, Scientific Director of the EDHEC Risk Climate Impact Institute (ERCII), and author of journal articles and books on Mathematical Finance, covering derivatives pricing, risk management, asset allocation and climate change. Prior to this, he was Global Head of Rates and FX Analytics at PIMCO. Professor Rebonato is a specialist in asset pricing and its applications to bond portfolio management, fixed-income derivatives and the impact of climate change on asset prices and risk management. He is Series Editor for the Elements in Quantitative Finance, Cambridge University Press. Riccardo Rebonato: Academically, he is an editor of financial journals and was until 2016 a visiting lecturer at Oxford University and adjunct professor at Imperial College’s Tanaka Business School. He used to sit on the board of directors of the International Swaps and Derivatives Association (ISDA) and the board of trustees for the Global Association of Risk Professionals (GARP). He is currently on the Board of the Nine Dots Prize. Previously, he was global head of market risk and global head of the Quantitative Research Team at the Royal Bank of Scotland (RBS), and sat on the Investment Committee of RBS Asset Management. He was Head of the Complex Derivatives Trading Desk and Research Group at Barclays Capital. Riccardo Rebonato: He holds a doctorate in nuclear engineering from Politecnico di Milano 'Leonardo da Vinci', Italy and a PhD in condensed matter physics/science of materials from Stony Brook University, NY. He was Junior Research Fellow in Physics at Corpus Christi College, Oxford (1988-1989), and Post-Doctoral Fellow at the Physical Chemistry Laboratory, Oxford University (1987-1989).
**English pewter** English pewter: While the term pewter covers a range of tin-based alloys, the term English pewter has come to represent a strictly controlled alloy, specified by BS EN 611-1 and British Standard 5140, consisting mainly of tin (ideally 92%), with the balance made up of antimony and copper. Significantly, it is free of lead and nickel. Although the exact percentages vary between manufacturers, a typical standard for present-day pewter is approximately 91% tin, 7.5% antimony and 1.5% copper. English pewter: By the 15th century, the Worshipful Company of Pewterers controlled pewter constituents in England. This company originally had two grades of pewter, but in the 16th century a third grade was added. The first type, known as "fine metal", was used for tableware. It consisted of tin with as much copper as it could absorb, which is about 1%. The second type, known as "trifling metal" or "trifle", was used for holloware. It is made up of fine metal with approximately 4% lead. The last type of pewter, known as "lay" or "ley" metal, was used for items that were not in contact with food or drink. It consisted of tin with 15% lead. These three alloys were used, with little variation, until the 20th century. Lead was removed from the composition in 1974 by BS 5140, reinforced by the European standard BS EN 611 in 1994. English pewter: Until the end of the 18th century, the only method of manufacture was by casting and the soldering of components. From the last quarter of the 18th century, improvements in alloys (e.g. britannia metal) and techniques allowed objects to be made from pewter by stamping and spinning.
**Fan-gating** Fan-gating: Fan-gating (also known as "like-gating") is the practice of acquiring more fans for a Facebook page by requiring Facebook users to "like" the page in order to access specific content associated with the page. This content is typically exclusive features, promotional offers, games or other material. On August 8, 2014, Facebook updated its platform policies to forbid businesses and developers from using like-gating on new apps, while existing apps needed to comply by November 5, 2014. Issues with overuse: If fan-gating is overused with an inadequate payoff for users who agree, it can lead to "like" fatigue, and users may become bored if they encounter fan-gates frequently. The main problem is that fans acquired this way may have no real interest in the brand's content: fan-gating skews a page's Facebook statistics and damages the user experience, resulting in less time spent on the fan page, which in turn affects advertisers. Response from Facebook: In August 2014 Facebook updated its platform policy to combat fan-gating, with the explanation that artificial incentives benefit neither people nor advertisers. Since then, many webmasters and app developers have shifted away from the practice of like-gating.
**Xanthorhamnin** Xanthorhamnin: Xanthorhamnin is a chemical compound. It can be isolated from buckthorn berries (Rhamnus catharticus). The aglycone of xanthorhamnin is rhamnetin.
**Keyboardist** Keyboardist: A keyboardist or keyboard player is a musician who plays keyboard instruments. Until the early 1960s musicians who played keyboards were generally classified as either pianists or organists. Since the mid-1960s, a plethora of new musical instruments with keyboards have come into common usage, such as synthesizers and digital pianos, requiring a more general term for a person who plays them. In the 2010s, professional keyboardists in popular music often play a variety of different keyboard instruments, including piano, tonewheel organ, synthesizer, and clavinet. Some keyboardists may also play related instruments such as piano accordion, melodica, pedal keyboard, or keyboard-layout bass pedals. Notable electronic keyboardists: There are many famous electronic keyboardists in metal, rock, pop and jazz music. A complete list can be found at List of keyboardists. Notable electronic keyboardists: The use of electronic keyboards grew in popularity throughout the 1960s, with many bands using the Hammond organ, Mellotron, and electric pianos such as the Fender Rhodes. The Doors became the first rock group to use the Moog synthesizer on a record, on 1967's "Strange Days". Other bands, including the Moody Blues, the Rolling Stones and the Beatles, would go on to add it to their records, both to provide sound effects and as a musical instrument in its own right. In 1966, Billy Ritchie became the first keyboard player to take a lead role in a rock band, replacing guitar, and thereby preparing the ground for others such as Ray Manzarek, Keith Emerson and Rick Wakeman. In the late 1960s, French musician Jean Michel Jarre, a pioneer of modern electronic music, started to experiment with synthesizers and other electronic devices. As synthesizers became more affordable and less unwieldy, many more bands and producers began using them, eventually paving the way for bands that consisted solely of synthesizers and other electronic instruments such as drum machines by the late 1970s and early 1980s. Some of the first bands to use this setup were Kraftwerk, Suicide and the Human League. Rock groups also began using synthesizers and electronic keyboards alongside the traditional line-up of guitar, bass and drums, particularly progressive rock groups such as Yes, Genesis, Emerson, Lake & Palmer and Pink Floyd. Fleetwood Mac, who had originated as a blues rock band, moved towards pop and soft rock and became known for synthesizer-infused hits in the 1980s such as "Everywhere" and "Little Lies". Notable electronic keyboardists: Keyboardists are often hired in cover bands and tribute bands to replicate the original keyboard parts and other instrumental parts, such as strings or a horn section, where it would be logistically difficult or too expensive to hire people to play the actual instruments.
**Allison Transmission** Allison Transmission: Allison Transmission is an American manufacturer of commercial-duty automatic transmissions and hybrid propulsion systems. Allison products are specified by over 250 vehicle manufacturers and are used in many market sectors, including bus, refuse, fire, construction, distribution, military, and specialty applications. With headquarters in Indianapolis, Indiana, Allison Transmission has regional offices all over the world and manufacturing facilities in Indianapolis, Chennai, India, and Szentgotthárd, Hungary. History: Racing team Allison began in 1909 when James A. Allison, along with three business partners, helped fund and build the Indianapolis Motor Speedway. In 1911, Allison's new track held the first Indianapolis 500-mile race. In addition to funding several race teams, James Allison founded the Speedway Racing Team Company on September 14, 1915 and quickly gained a reputation for his work on race cars and automotive technology in general. Allison built a shop near the track and changed the team's name to the Allison Experimental Company; the shop later became Plant No. 1. History: Wartime aviation When World War I began, Allison suspended racing, and the Allison Experimental Company began machining parts, tools, and masters for the Liberty airplane engine — the main power plant used in the US war effort. After the war, Allison entered a car in the 1919 Indy 500 and won. It was the last race Allison's team ever entered, as he turned his company's attention to aviation engineering, renaming it the Allison Engineering Company; the aviation-focused company developed steel-backed bronze sleeve bearings for the crankshaft and connecting rods, and high-speed reduction gearing to turn propellers and Roots-type blowers. The company's reputation and expertise in aviation were the major factor in General Motors' decision to buy the company following James Allison's death in 1928. History: Shortly after the sale to General Motors on April 1, 1929, Allison engineers began work on a 12-cylinder engine to replace the aging Liberty engines. The result was the V1710 12-cylinder aircraft engine, and it made the company, renamed the Allison Division of GM in 1934 and also known as the Allison Engine Company, a major force in aviation. Plant 3, a 360,000 sq ft (33,000 m2) factory to build V1710 engines, was built in 1939. Due to demand during World War II, Allison would add a second factory (Plant 5) and 23,000 new employees; by the end of the war, Allison had built 70,000 V1710 engines. History: Early transmission development Alongside the development and production of the V1710, engineers at GM began designing the CD-850 cross-drive steering transmission for tracked military vehicles in 1941; the design was completed in 1944 and Allison was awarded the contract to manufacture the prototypes. In February 1945, General Motors formed the Allison Transmission Engineering Section, dividing the subsidiary into Aircraft Operations and Transmission Operations in 1946. The CD-850 combined range change, steering and braking. Allison stopped producing the CD-850 in 1986, but a licensed version was produced in Spain for more than a decade afterward. General Motors began developing automatic transmissions with a hydraulic torque converter in the 1930s under its Product Study Group, offering one as an option for Oldsmobile for the first time in 1940. After World War II, Allison Transmission turned its attention to civilian transportation.
Beginning in 1948, Allison designed, developed, and manufactured the first automatic transmissions for heavy-duty vehicles, including delivery trucks, city buses, and locomotives. In addition, Allison marketed transmissions for off-highway heavy-duty vehicles under the Powershift TORQMATIC brand, with the first TG series transmissions produced in July 1948.
History: V-Drive: At approximately the same time the CD-850 was going into production, in 1945, the GMC Truck and Coach Division requested that GM develop a V-Drive transmission with a torque converter for transit bus use, replacing the Spicer manual transmission then offered. These buses had rear-mounted engines, and the engine compartment was kept as small as possible to maximize passenger space; the V-Drive transmission was named for the 63° angle of intersection between the transmission's input shaft (from the engine) and output shaft (to the rear axle). Development of the V-Drive transmission was led by Bob Schaefer, an emigrant from Germany who had joined GM in 1942 after helping to lead the Twin Disc Company, one of the licensees of the Ljungström hydraulic torque converter. Schaefer was reassigned from the Detroit Transmission Division to Allison in 1946.
History: The first production V-Drive transmissions were delivered in October 1947; the first major contract, in 1948, was for 900 buses for New York City. The VS-2, introduced in 1955, added a two-speed input splitter, and a version with both hydraulic and direct clutches (the VH) followed in 1958. Production of the original V-Drive transmissions concluded in July 1976, after 65,389 had been built.
History: Commercial transmissions: In addition to the transit bus market, Allison began developing automatic transmissions for commercial trucks in 1953. This effort resulted in the MT-25, whose designation indicated the intended application ("M"edium "T"rucks) and the maximum input power, 250 hp (190 kW). The MT-25 was a 6-speed automatic, using a two-speed high/low splitter ahead of a three-speed double planetary gear train (2 × 3 = 6 forward ratios). The splitter was equipped with a hydraulic retarder. Because of the automatic transmission's additional cost, sales were initially slow until Allison began targeting markets that required both on- and off-road driving as well as frequent stops and starts, such as concrete mixers and garbage trucks, in the early 1960s. The MT-25 was first fitted as an option branded POWERMATIC by Chevrolet, exclusive to that make for the first year, but was soon offered by other truck manufacturers, including Ford (1957), Reo (1958), Dodge (1958), Diamond T (1959), White (1961), and International Harvester (1961); production of the MT-25 continued into the early 1970s. The MT-25 was supplemented in September 1970 by a second-generation, lighter-duty automatic transmission, the four-speed AT-540, which Allison had developed jointly with the Hydramatic Division in the late 1960s; the AT-540 was targeted specifically at on-highway use and shared design features with automobile transmissions to reduce the cost penalty of equipping on-highway trucks with automatics. Later, the MT-25 itself was replaced by the MT-640, and a heavier-duty version, the HT-740, was introduced; both the new MT and HT lines were derived from the AT-540. As an option, the MT-6nn and HT-7nn series transmissions could be equipped with a lower fifth gear for severe off-road conditions.
In 1970, GM combined the Allison and Detroit Diesel divisions into the Detroit Diesel Allison Division of GM. The 500-series transmissions (AT-540, etc.) were rated to accept input power of up to 235 hp (175 kW) and were intended for vehicles up to 30,000 lb (14,000 kg) gross vehicle weight (GVW). The medium-duty 600-series had increased ratings of 300 hp (220 kW) and 73,280 lb (33,240 kg) GVW, while the heavy-duty 700-series was rated for 445 hp (332 kW) and 80,000 lb (36,000 kg) GVW. In 1976, a 700-series V-Drive transmission, the V730, was introduced for buses. The AT/MT/HT lines were still being produced in 1998. Allison also produced off-highway transmissions in the 1960s, starting with the "Dual Path Powershift" DP 8000 series. The first electronic controls were fitted to the off-highway DP 8000 series transmission in 1971. Electronic controls (branded the Allison Transmission Electronic Control, or ATEC, system) were added to the MT/HT/V730 in 1983, improving fuel economy by controlling shifts more precisely.
History: World Transmission: The third-generation six-speed World Transmission (WT) was introduced in 1991, replacing the second-generation AT/MT/HT/V730 lines. Development of the WT had begun in the mid-1980s, prior to the sale of Detroit Diesel to Roger Penske in 1987. The WT used the WT electronic control (WTEC) system to operate the internal clutches during shifting, with a control unit that adapts to variations during use. The WT line was split into MD (medium duty), HD (heavy duty, introduced in 1993), and B (T-drive buses) lines; the MD and HD lines were later renamed the 3000 and 4000 Series, respectively.
History: As of 1998 in the United States, Allison had built 92% of the transmissions in school buses, 75% of transit bus transmissions, 65% of heavy-duty garbage truck transmissions, and 32% of all medium-duty truck transmissions. Allison followed the WT (3000 and 4000 Series) line with the 1000 and 2000 Series starting in 1999. The 1000 Series incorporated many features from the WT line for light-duty trucks, including the electronic control system, and was initially available as an option with the 6.6L GM/Isuzu Duramax diesel engine and the 8.1L Vortec gasoline engine in trucks based on the GMT800 platform. In 2007, GM sold Allison Transmission to private equity firms Carlyle Group and Onex Corporation for US$5.6 billion.
Timeline:
1940s
1949—Allison begins production of the CD-850 tank transmission, the division's most historically significant transmission
December 1949—First rail car transmission is produced; installed in the Budd Rail Car
1950s
1954—First off-highway transmissions (CRT-5530/CRT-3330)
1955—Allison develops the MT-25/POWERMATIC transmission for on-highway use with Chevrolet
1960s
October 1960—First Allison XT-1410-2 transmission is produced
June 1961—Allison announces the MT Series transmissions
July 1962—Allison TT-2000 Hydro Powershift transmission is introduced
March 1965—Introduction of the dual-path DP-8000, the largest single-package Allison Powershift transmission to date
November 1966—Lithium-chlorine fuel cell is unveiled
June 1967—Allison begins production of the new DP-8960 for large off-highway trucks
October 1967—First prototype of the Allison-equipped U.S. Army main battle tank is unveiled in Washington, D.C.
February 1969—Allison introduces an electric gearshift control system for off-highway vehicles
July 1969—Apollo 11 astronauts make man's first landing on the Moon; propellant tanks built by Allison are part of the Service Module
1970s
September 1970—Merger with Detroit Diesel Engine to form the Detroit Diesel Allison Division, headquartered in Detroit
January 1971—Allison introduces its first 4-speed automatic transmission for 72,000 lb (33,000 kg) GVW highway vehicles, the Allison HT-740
April 1973—First fully automatic transmission for large trucks, scrapers, and other heavy-duty off-highway vehicles is introduced, the Allison CLBT 750
1974—First European office is established
1980s
October 1982—A new-generation heavy-duty automatic transmission, the Allison DP 8962, is announced, incorporating more than 15 new internal technology changes
May 1983—GM sells the Allison Gas Turbine Division; Allison becomes part of the newly formed GM Power Products and Defense Operations Group
June 1986—First X200 military transmission is released
December 1987—Detroit Diesel Allison becomes Allison Transmission, Division of General Motors
1990s
February 1991—Allison introduces the electronically controlled World Transmissions
November 1995—Allison adopts lean manufacturing principles and begins implementing the Allison Production System (APS), a cellular manufacturing system; some 10,000 machines and pieces of support equipment are rearranged across all plants
1999—Hybrid bus program is demonstrated for the New York City Transit Authority
June 1999—Allison introduces the 1000 Series and 2000 Series fully automatic transmissions
2000s
2000—Hybrid electric program is launched
September 2000—Test Track 2000 is the first customer ride-and-drive simulating real-world operating conditions; held at Walt Disney World in Orlando, Florida
January 2001—Allison unveils first-of-its-kind parallel hybrid technology
November 2003—Allison's Ultimate Truck Driving Adventure takes the ride-and-drive experience to extremes in the high desert of Nevada
November 2003—Allison Vocational Models are released to better serve specific applications
May 2005—Shanghai Customization Center is opened
June 2007—GM announces the sale of Allison Transmission to private equity firms The Carlyle Group and Onex Corporation in a deal valued at $5.6 billion; the transaction closes on August 7, 2007
2008—Allison introduces on-board prognostics on model-year 2009 automatic transmissions
2009—Allison takes an approximately 10% stake in Torotrak, a UK-based manufacturer of infinitely variable transmissions (IVT)
2010s
2010—Manufacturing plant opens in Chennai, India, which also becomes a regional headquarters with executive, marketing, and sales offices
June 2010—Allison dedicates a new hybrid manufacturing plant in Indianapolis, Indiana
March 15, 2012—Initial public offering of 26.3 million shares of Allison Transmission stock at $23/share on the New York Stock Exchange under the symbol ALSN
October 27, 2013—Allison 10-speed TC10 transmission becomes available for order at Navistar
At the time, revenues were $1.985 billion, a decrease from 2014.
Products: Allison markets its transmissions by vocational series according to the intended use; for example, the Tractor Series is sold for and installed in Class 8 tractors, while the Motorhome Series is marketed to manufacturers of recreational vehicles.
A transmission is given a designation specific to its vocational series but is otherwise mechanically identical to transmissions sold under other vocational series; for example, the Bus Series B210 / B220 / B295 transmissions are also sold, with identical gearing, under other series designations. Collectively, these are grouped into the 1000/2000 Series transmission family; transmissions within a family share the same basic dimensions, power input capabilities, and weight. Allison transmission families include the 1000/2000 Series, 3000 Series, 4000 Series, 5000 Series, 6000 Series, 8000 Series, 9000 Series, and Tractor Series. Each transmission family is given a generational designation based on the electronic control system; parts are generally not interchangeable between generations within a specific family:
Gen 1 / Gen 2 (aka World Transmission, or World Transmission Electronic Controller (WTEC) / WTEC II)—1991–98
Gen 3—1998–2004/05
Gen 4—2004/05–2012
Gen 5—2012–present
Products: Hybrid bus series: First generation: GM-Allison introduced hybrid vehicle technology for transit buses in 2003. Allison hybrid transit bus products were initially branded as the Allison Electric Drives EP System, which included the following components:
EV Drive Unit, integrating the generator and electric motor
Inverter (Dual Power Inverter Module, DPIM), integrating the charger and converter
Battery (Energy Storage System, ESS, or Energy Storage Unit)
Hybrid Control Modules (Transmission Control Module, TCM; and Vehicle Control Module, VCM)
Allison characterizes the system as a "Two-Mode Compound Split Parallel Hybrid Architecture". As installed in buses, the EP System has two operating modes, or speed ranges, with the changeover generally occurring between 15 and 25 mph (24 and 40 km/h). Under full throttle, the vehicle's initial launch in the low-speed mode is boosted by the output motor; as vehicle speed increases, the input motor begins to dominate, until output is delivered almost entirely mechanically. Through 2011, GM intended to introduce 16 passenger car and truck hybrid models based on the Allison split-mode system. The primary benefit of the Allison hybrid system is recapturing kinetic energy during regenerative braking and storing it as electrical energy, which can later be converted back to kinetic energy through the output motor to assist in accelerating the vehicle, reducing demand on the engine and consequently fuel consumption. Fuel economy is improved by up to 60%, and acceleration can also be improved compared to a conventional bus. To the operator, the hybrid system is automatic and requires no special training. Under normal in-motion operation, engine speed is controlled by the TCM, which commands a torque and speed point based on the needs of the hybrid system; during startup and shutdown, the TCM commands only a speed requirement.
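To make the two-mode behavior concrete, the sketch below models the mode changeover and regenerative braking described above. It is a minimal illustration, not Allison's actual TCM logic: the function names, the fixed 20 mph changeover point, and the torque-split fractions are all assumptions chosen for readability.

```python
# Illustrative sketch of a two-mode power-split hybrid controller.
# NOT Allison's TCM logic: the names, the 20 mph changeover threshold,
# and the torque-split fractions are assumptions for illustration only.

MODE_CHANGEOVER_MPH = 20.0  # text states 15-25 mph; midpoint assumed


def split_drive_torque(speed_mph: float, demand: float) -> dict:
    """Split a driver torque demand (0..1) between engine and output motor."""
    if speed_mph < MODE_CHANGEOVER_MPH:
        # Low-speed mode: the output motor boosts the initial launch.
        return {"mode": 1, "engine": 0.5 * demand, "output_motor": 0.5 * demand}
    # High-speed mode: the input motor dominates and power flow
    # becomes almost entirely mechanical.
    return {"mode": 2, "engine": 0.9 * demand, "output_motor": 0.1 * demand}


def regen_braking_power(speed_mph: float, brake_demand: float,
                        max_regen_kw: float = 160.0) -> float:
    """Kinetic energy recaptured as electrical power during braking.

    max_regen_kw mirrors the DPIM continuous rating quoted below;
    the linear ramp with speed is an assumption.
    """
    ramp = min(speed_mph / MODE_CHANGEOVER_MPH, 1.0)
    return max_regen_kw * brake_demand * ramp


if __name__ == "__main__":
    print(split_drive_torque(10.0, 1.0))   # launch: output motor assists
    print(split_drive_torque(40.0, 0.6))   # cruise: mostly mechanical
    print(regen_braking_power(30.0, 0.8))  # braking: energy to the ESS
```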
The EV Drive Unit is installed in lieu of a conventional transmission and acts as an electronically controlled continuously variable transmission; it integrates two motor-generators (Motor A and Motor B, on the input and output, respectively), three planetary gear sets, one rotating clutch, and one stationary clutch. From the engine, power is transferred to the input shaft through an input damper instead of the torque converter found in a conventional automatic transmission. The input shaft is coupled to the main shaft and Motor A through a planetary gearset (P1), and Motor A is coupled to Motor B through another planetary gearset (P2). Motor B is coupled to the output shaft through a third planetary gearset (P3) and the stationary (C1) and rotating (C2) clutches. Both motors are three-phase AC induction motors and automatically switch from motoring to generating when the mechanical rotation frequency exceeds the stator field frequency, that is, when slip becomes negative.
There are two drive units available (the EP40, or H 40 EP, and the EP50, or H 50 EP). The H40 is intended for regular transit bus use, while the H50 is for articulated and suburban coaches; they are similar in size and application to the B400 and B500 Bus Series transmissions, respectively. The H40 has a continuous input capacity of 280 hp (210 kW) and 910 lb⋅ft (1,230 N⋅m) of torque, while the respective H50 input limits are 330 hp (250 kW) and 1,050 lb⋅ft (1,420 N⋅m). The DPIM includes an inverter for each motor; its continuous and peak outputs are 160 and 300 kW, respectively. The ESS uses nickel-metal hydride batteries, is air-cooled by internal fans, and weighs approximately 915 lb (415 kg). The ESS is made of three sub-strings wired in parallel, with a storage capacity of 450 A and 624 VDC. Each sub-string uses two 312 V sub-packs in series, each made of 40 7.8-volt modules (40 × 7.8 V = 312 V per sub-pack; two in series yield 624 V). Six battery control information modules (BCIM), one in each sub-pack, monitor temperature. The DPIM and ESS have been improved since their initial introduction, and newer models can generally replace earlier units. In addition, newer installations include a DC-DC converter, a solid-state device that converts the high-voltage traction motor energy to 12/24 V accessory power.
As of 2008, more than 2,700 GM-Allison hybrid buses were operating in 81 cities in the U.S., Canada, and Europe, including:
TransLink (Vancouver)
Dresden, Germany
King County Metro Transit Authority
Massachusetts Bay Transportation Authority
Minneapolis-Saint Paul Metro Transit
Regional Transportation Commission of Southern Nevada
Southeastern Pennsylvania Transportation Authority
Washington Metropolitan Area Transit Authority
Regional Transportation District (Denver, CO)
Maryland Transit Administration
Indianapolis Public Transportation Corporation
Chicago Transit Authority
Products: Hybrid bus series: Second generation: Allison introduced its second-generation eGen Flex diesel-electric hybrid drive unit in 2022, partnering with Gillig; the first units will be delivered to IndyGo, serving Indianapolis. eGen Flex is available in multiple models, designated eGen Flex 40, 40 CertPlus, 40 Max, or 40 Max CertPlus (equivalent to the H 40 in physical size, input, and output capabilities), or eGen Flex 50, 50 CertPlus, 50 Max, or 50 Max CertPlus (equivalent to the H 50). The "Max" models can operate on electric power alone for up to 10 mi (16 km), depending on the axle ratio and duty cycle.
Products: Electric axles: In 2020, Allison introduced a line of motor-integrated electric axles, branded eGen Power. The first model, the 100D, was designated for its gross axle weight rating (GAWR) of 10.4 t (23,000 lb) and its dual ("D") electric motors; the 100D has continuous and peak power outputs of 424 and 648 kW (569 and 869 hp), respectively, with a maximum torque of 46,800 N⋅m (34,500 lbf⋅ft).
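The dual-unit figures quoted for the 100D are straightforward conversions. A minimal sanity check follows; the conversion constants and print formatting are standard definitions supplied here, not taken from the source.

```python
# Sanity-check the 100D's dual-unit power and torque figures.
# Conversion constants are standard definitions, not from the source.
KW_PER_HP = 0.745699872   # kilowatts per mechanical horsepower
NM_PER_LBFFT = 1.3558179  # newton-metres per pound-foot

for kw in (424, 648):     # continuous and peak power, respectively
    print(f"{kw} kW ≈ {kw / KW_PER_HP:.0f} hp")
# Prints 569 hp and 869 hp, matching the quoted figures.

print(f"46,800 N·m ≈ {46800 / NM_PER_LBFFT:,.0f} lbf·ft")
# Prints ≈ 34,518 lbf·ft, consistent with the rounded 34,500 in the text.
```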
In 2021, Allison expanded the range with the 100S (a single-motor variant of the 100D, with continuous and peak power outputs of 212 and 324 kW (284 and 434 hp), respectively, and a maximum of 23,500 N⋅m (17,300 lbf⋅ft) of torque) and the 130D (a variant of the 100D with a higher 13 t (29,000 lb) GAWR for the European and Asia-Pacific markets). The Allison eGen Power integrated axle also includes a multi-speed gearbox to optimize both launch and cruising speeds; it was designed as a drop-in replacement for existing axles in medium- and heavy-duty trucks and buses, allowing more flexibility in battery placement.
Products: Current products (Gen 5): (The tables of current Gen 5 models and accompanying notes are not reproduced here.)
Products: Discontinued: Off-highway: The model designations for off-highway transmissions marketed under the Powershift TORQMATIC brand followed the format AAAA 1234 (the full designation legend is not reproduced here); for example, the TT 2220 was a twin-turbine 2000 series automatic transmission with two forward speeds and a maximum input torque capacity of 250 lb⋅ft (340 N⋅m).
Products: Discontinued: On-highway:
First generation:
Allison V transmission—VH, VH2, VH4, VH5, VH6, VH7, VH9, VS1, VS2-6, VS2-8
Allison MT transmission—MT25, MT30, MT31, MT40, MT41, MT42
Allison HT transmission—HT70
Second generation:
Allison M and MH marine reverse and reduction gear
Allison AT transmission—AT540, AT542, AT543, AT545 (4 speeds)
Allison MT transmission—MT640, MT643, MT644, MT647, MT648, MT650, MT653DR, MT654CR, MTB643, MTB644, MTB647, MTB648, MTB653DR, MTB654CR
Allison HT transmission—HT740D, HT740RS, HT741, HT746, HT747, HT748, HTB748, HT750CRD, HT750DRD, HT754CRD, HT755CRD, HT755DRD, HTB755CRD, HTB755DRD
Allison V transmission—V730, V731, VR731, VR731RH
Third generation:
Allison World Transmission (MD and HD)—MD3060, MD3060P, MD3560, MD3560P, MD3066, MD3066P, HD4060, HD4060P, HD4560, HD4560P
**Meiocardia vulgaris** Meiocardia vulgaris: Meiocardia vulgaris is a species of bivalve in the family Glossidae. Taxonomy: Lovell Augustus Reeve described this species in 1845, placing it in the genus Isocardia. In 1994, S. Kosuge & T. Kase described M. delicata, a junior synonym; it was synonymized in 1995 by Akihiko Matsukuma and Tadashige Habe. Reeve gave this species the specific epithet vulgaris ("common") to reflect "the abundant importation of this once rare and highly praised shell." Distribution: The type localities of M. vulgaris and its junior synonym are China and Okinawa, Japan, respectively. Its distribution includes China, Taiwan, the Philippines, Malaysia, Indonesia, Queensland (Australia), the Andaman Islands, Myanmar, Northeast India, Oman, Zanzibar, and Madagascar.
**Hot foil trick** Hot foil trick: The hot foil trick is a magic trick in which the magician places a small piece of tin or aluminium foil in a volunteer's hand; the foil rapidly increases in temperature until the volunteer has to drop it to avoid scalding their hand, and it is reduced to ashes on the ground. The effect is achieved by surreptitiously exposing the foil, shortly before performing the trick, to a chemical (such as mercury(II) chloride) that causes it to oxidise rapidly. The trick can be very dangerous, since many of the chemicals used to perform it are highly toxic; mercury(II) chloride was at one point commonly sold in magic stores in the United States, but as of 2009 most such shops had stopped stocking it due to its toxicity.
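The text above does not spell out the mechanism, but the commonly cited chemistry for the aluminium-foil version is amalgamation: mercury(II) chloride deposits mercury onto the foil, which disrupts the protective oxide layer and lets the aluminium oxidise rapidly and exothermically in air. The following reactions are offered as a sketch of that assumed mechanism, not as a statement from the source:

```latex
% Mercury(II) chloride displaces mercury onto the aluminium surface;
% the amalgam prevents a protective oxide layer from re-forming,
% so the metal then oxidises freely and exothermically in air.
\begin{align}
  3\,\mathrm{HgCl_2} + 2\,\mathrm{Al} &\rightarrow 2\,\mathrm{AlCl_3} + 3\,\mathrm{Hg} \\
  4\,\mathrm{Al} + 3\,\mathrm{O_2} &\rightarrow 2\,\mathrm{Al_2O_3} \qquad (\Delta H < 0)
\end{align}
```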
**Microtubule associated serine/threonine kinase 3** Microtubule associated serine/threonine kinase 3: Microtubule associated serine/threonine kinase 3 is a protein that in humans is encoded by the MAST3 gene.