**Oblaat** Oblaat: In Japan, oblaat (オブラート, oburāto) is a thin, edible layer of starch used to wrap some candies and pharmaceuticals, similar to capsules. Description: Many types of Japanese candy are wrapped in oblate film, an edible, thin cellophane-like film made of rice starch. It has no taste or odor, and is transparent. It helps preserve gelatinous sweets by absorbing humidity. In America, these films are called oblate discs, blate papes, and edible films. They are most commonly used to take powdered herbs, supplements, and medications, allowing the user to consume multiple grams at one time more quickly and pleasantly than with capsules or other methods. Etymology: The name comes from the Dutch word oblaat, referring to sacramental bread, in turn from Latin oblātum ("offered"). History: Oblaat was introduced to Japan by Dutch pharmaceutical companies in the late 19th century to wrap bad-tasting medicine so that it could be swallowed without tasting any bitter powder. Oblaat's moisture-absorbing properties have since given rise to its use as a candy wrapper, keeping pieces of candy from sticking together.
**Lazabemide** Lazabemide: Lazabemide (proposed trade names Pakio, Tempium) is a reversible and selective inhibitor of monoamine oxidase B (MAO-B) that was under development as an antiparkinsonian agent but was never marketed.
**Utility functions on divisible goods** Utility functions on divisible goods: This page compares the properties of several typical utility functions of divisible goods. These functions are commonly used as examples in consumer theory. The functions are ordinal utility functions, which means that their properties are invariant under positive monotone transformation. For example, the Cobb–Douglas function $x^{w_x} y^{w_y}$ could also be written as $w_x \log x + w_y \log y$. Such functions only become interesting when there are two or more goods (with a single good, all monotonically increasing functions are ordinally equivalent). The utility functions are exemplified for two goods, $x$ and $y$. $p_x$ and $p_y$ are their prices. $w_x$ and $w_y$ are constant positive parameters and $r$ is another constant parameter. $u_y$ is a utility function of a single commodity ($y$). $I$ is the total income (wealth) of the consumer. Acknowledgements: This page has been greatly improved thanks to comments and answers in Economics StackExchange.
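To make the ordinal-invariance point concrete, here is a minimal Python sketch showing that a Cobb–Douglas function and its log transform rank consumption bundles identically. The parameter values and bundles are illustrative assumptions, not from the page:

```python
import math

# Cobb-Douglas utility and its positive monotone (log) transform.
# Parameter values and bundles below are illustrative assumptions.
w_x, w_y = 0.3, 0.7

def u(x, y):
    return x ** w_x * y ** w_y                      # Cobb-Douglas form

def u_log(x, y):
    return w_x * math.log(x) + w_y * math.log(y)    # log transform

bundles = [(1, 4), (2, 2), (3, 1), (5, 0.5)]
# Both functions induce the same preference ordering over the bundles:
assert sorted(bundles, key=lambda b: u(*b)) == sorted(bundles, key=lambda b: u_log(*b))
print(sorted(bundles, key=lambda b: u(*b)))
```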
**Kelvin–Voigt material** Kelvin–Voigt material: A Kelvin–Voigt material, also called a Voigt material, is the simplest model of a viscoelastic material showing typical rubbery properties. It is purely elastic on long timescales (slow deformation), but shows additional resistance to fast deformation. It is named after the British physicist and engineer Lord Kelvin and the German physicist Woldemar Voigt. Definition: The Kelvin–Voigt model, also called the Voigt model, is represented by a purely viscous damper and a purely elastic spring connected in parallel as shown in the picture. If, instead, we connect these two elements in series we get a model of a Maxwell material. Since the two components of the model are arranged in parallel, the strains in each component are identical: $\varepsilon_\text{Total} = \varepsilon_S = \varepsilon_D$, where the subscript D indicates the stress–strain in the damper and the subscript S indicates the stress–strain in the spring. Similarly, the total stress will be the sum of the stress in each component: $\sigma_\text{Total} = \sigma_S + \sigma_D$. Definition: From these equations we get that in a Kelvin–Voigt material, stress $\sigma$, strain $\varepsilon$ and their rates of change with respect to time $t$ are governed by equations of the form: $\sigma(t) = E\varepsilon(t) + \eta \frac{d\varepsilon(t)}{dt}$, or, in dot notation: $\sigma = E\varepsilon + \eta\dot{\varepsilon}$, where $E$ is a modulus of elasticity and $\eta$ is the viscosity. The equation can be applied either to the shear stress or the normal stress of a material. Effect of a sudden stress: If we suddenly apply some constant stress $\sigma_0$ to a Kelvin–Voigt material, then the deformation approaches the deformation of the purely elastic material, $\sigma_0/E$, with the difference decaying exponentially: $\varepsilon(t) = \frac{\sigma_0}{E}\left(1 - e^{-t/\tau_R}\right)$, where $t$ is time and $\tau_R = \eta/E$ is the retardation time. If we free the material at time $t_1$, then the elastic element retards the material back until the deformation becomes zero. The retardation obeys the following equation: $\varepsilon(t > t_1) = \varepsilon(t_1)\, e^{-(t - t_1)/\tau_R}$. The picture shows the dependence of the dimensionless deformation $E\varepsilon(t)/\sigma_0$ on the dimensionless time $t/\tau_R$. In the picture the stress on the material is loaded at time $t = 0$ and released at the later dimensionless time $t_1^* = t_1/\tau_R$. Since all the deformation is reversible (though not suddenly) the Kelvin–Voigt material is a solid. Effect of a sudden stress: The Voigt model predicts creep more realistically than the Maxwell model, because in the infinite time limit the strain approaches a constant: $\lim_{t \to \infty} \varepsilon = \sigma_0/E$, while a Maxwell model predicts a linear relationship between strain and time, which is most often not the case. Although the Kelvin–Voigt model is effective for predicting creep, it is not good at describing the relaxation behavior after the stress load is removed. Dynamic modulus: The complex dynamic modulus of the Kelvin–Voigt material is given by: $E^\star(\omega) = E + i\eta\omega$. Thus, the real and imaginary components of the dynamic modulus are: $E_1 = \Re[E^\star(\omega)] = E$ and $E_2 = \Im[E^\star(\omega)] = \eta\omega$. Note that $E_1$ is constant, while $E_2$ is directly proportional to frequency (where the apparent viscosity, $\eta$, is the constant of proportionality).
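As a quick numerical illustration of the creep and recovery formulas above, here is a minimal Python sketch; the material parameters and release time are illustrative assumptions, not values from the text:

```python
import numpy as np

# Sketch of the Kelvin-Voigt creep/recovery response derived above.
# E, eta, sigma0 and t1 are illustrative assumptions.
E = 1.0e6          # modulus of elasticity (Pa)
eta = 2.0e6        # viscosity (Pa*s)
sigma0 = 1.0e3     # suddenly applied constant stress (Pa)
tau_R = eta / E    # retardation time (s)
t1 = 5.0 * tau_R   # time at which the stress is released (s)

t = np.linspace(0.0, 10.0 * tau_R, 500)

# Creep under constant stress: eps(t) = (sigma0/E) * (1 - exp(-t/tau_R));
# after release at t1 the strain decays: eps(t1) * exp(-(t - t1)/tau_R).
eps = np.where(
    t <= t1,
    (sigma0 / E) * (1.0 - np.exp(-t / tau_R)),
    (sigma0 / E) * (1.0 - np.exp(-t1 / tau_R)) * np.exp(-(t - t1) / tau_R),
)

print("strain approaches sigma0/E =", sigma0 / E)
print("max strain in simulation   =", eps.max())
```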
**Catabolite Control Protein A** Catabolite Control Protein A: Catabolite Control Protein A (CcpA) is a master regulator of carbon metabolism in gram-positive bacteria. It is a member of the LacI/GalR transcription regulator family. In contrast to most LacI/GalR proteins, CcpA is allosterically regulated principally by a protein–protein interaction, rather than a protein–small molecule interaction. CcpA interacts with the phosphorylated forms of HPr and Crh, which are formed when high concentrations of glucose or fructose-1,6-bisphosphate are present in the cell. Interaction with HPr or Crh modulates the DNA sequence specificity of CcpA, allowing it to bind operator DNA to modulate transcription. The small molecules glucose-6-phosphate and fructose-1,6-bisphosphate are also known allosteric effectors, fine-tuning CcpA function. Structure: The DNA-binding functional unit of CcpA consists of a homodimer. The N-terminal region of each monomer forms a DNA-binding site while the C-terminal portion forms a "regulatory" domain. A short linker connects the N-terminal DNA-binding domain and the C-terminal regulatory domain, and partially contacts DNA when bound. The LacI/GalR subfamily can be functionally subdivided based on the presence or absence of a "YxxPxxxAxxL" motif in the linker sequence; CcpA belongs to the subdivision containing this motif. The regulatory domain is further subdivided into an N-terminal and a C-terminal subdomain. Small-molecule effector binding occurs in the cleft between these subdomains. Binding to phosphorylated HPr/Crh occurs along the regulatory domain's N-subdomain.
**Ofqual exam results algorithm** Ofqual exam results algorithm: In 2020, Ofqual, the regulator of qualifications, exams and tests in England, produced a grades standardisation algorithm to combat grade inflation and moderate the teacher-predicted grades for A level and GCSE qualifications in that year, after examinations were cancelled as part of the response to the COVID-19 pandemic. History: In late March 2020, Gavin Williamson, the secretary of state for education in Boris Johnson's Conservative government, instructed the head of Ofqual, Sally Collier, to "ensure, as far as is possible, that qualification standards are maintained and the distribution of grades follows a similar profile to that in previous years". On 31 March, he issued a ministerial direction under the Apprenticeships, Skills, Children and Learning Act 2009. Then, in August, 82% of A-level grades were computed using an algorithm devised by Ofqual. More than 4.6 million GCSEs in England – about 97% of the total – were assigned solely by the algorithm. Teacher rankings were taken into consideration, but not the teacher-assessed grades submitted by schools and colleges. On 25 August, Collier, who oversaw the development of the algorithm, resigned from the post of chief regulator of Ofqual following mounting pressure. History: Vocational qualifications: The algorithm was not applied to vocational and technical qualifications (VTQs), such as BTECs, which are assessed on coursework or as short modules are completed, and in some cases adapted assessments were held. Nevertheless, because of the high level of grade inflation resulting from Ofqual's decision not to apply the algorithm to A levels and GCSEs, Pearson Edexcel, the BTEC examiner, decided to cancel the release of BTEC results on 19 August, the day before they were due to be released, to allow them to be re-moderated in line with the inflated A-level and GCSE grades. The algorithm: Ofqual's Direct Centre Performance model is based on the record of each centre (school or college) in the subject being assessed. Details of the algorithm were not released until after the results of its first use in August 2020, and then only in part. The algorithm: Schools were asked to make a fair and objective judgement of the grade they believed a student would have achieved, and in addition to rank the students within each grade. This was because the statistical standardisation process required more granular information than the grade alone (a toy sketch of such rank-based standardisation appears below). Some examining boards issued guidance on the process of forming the judgement to be used within centres where several teachers taught a subject. This was to be submitted by 29 May 2020. The algorithm: For A-level students, their school had already included a predicted grade as part of the UCAS university application reference. This was submitted by 15 January (15 October 2019 for Oxbridge and medicine) and had been shared with the students. This UCAS predicted grade is not the same as the Ofqual predicted grade. The normal way to test a predictive algorithm is to run it against the previous year's data: this was not possible here, as the teacher rank order was not collected in previous years. Instead, tests used the rank order that had emerged from the 2019 final results. Effects of the algorithm: The A-level grades were announced in England, Wales and Northern Ireland on 13 August 2020. Nearly 36% were lower than the teachers' assessments (the CAGs) and 3% were down two grades.
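The full model was only partly published, but the role of the teacher rank order can be illustrated with a toy sketch: map each centre's ranked students onto that centre's historical grade distribution. Everything below (function name, distribution, student labels) is invented for illustration and is not Ofqual's actual model:

```python
# Toy illustration only -- not Ofqual's actual model, which was never
# fully published. It maps a centre's teacher-ranked students onto the
# centre's historical grade distribution.
historical_distribution = {"A": 0.2, "B": 0.3, "C": 0.4, "D": 0.1}  # invented

def standardise(ranked_students, distribution):
    """Assign grades (best-ranked first) so that the centre's grade
    proportions roughly match its historical distribution."""
    n = len(ranked_students)
    result, idx, cum = {}, 0, 0.0
    for grade, share in distribution.items():
        cum += share
        while idx < n and idx + 1 <= round(cum * n):
            result[ranked_students[idx]] = grade
            idx += 1
    for s in ranked_students[idx:]:        # rounding leftovers
        result[s] = list(distribution)[-1]
    return result

print(standardise(["s1", "s2", "s3", "s4", "s5"], historical_distribution))
# {'s1': 'A', 's2': 'B', 's3': 'C', 's4': 'C', 's5': 'D'}
```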
Side-effects of the algorithm: Students at small schools or taking minority subjects, such as those offered at small private schools, could see their grades come out higher than their teacher predictions. Such students traditionally have a narrower range of marks, the weaker students having been invited to leave. Students at large state schools, sixth-form colleges and FE colleges, which have open-access policies and have historically educated BAME or vulnerable students, saw their results plummet in order to fit the historic distribution curve. Students found the system unfair, and pressure was applied on Williamson to explain the results and to reverse his decision to use the algorithm that he had commissioned and Ofqual had implemented. On 12 August Williamson announced a 'triple lock' that let students appeal the result using a valid mock result, though 'valid' was left undefined. But on 15 August, the advice was published with eight conditions attached, which differed from the minister's statement. Hours after the announcement, Ofqual suspended the system. On 17 August, Ofqual accepted that students should be awarded the CAG grade instead of the grade predicted by the algorithm. UCAS said on 19 August that 15,000 pupils had been rejected by their first-choice university on the algorithm-generated grades. After the Ofqual decision to use unmoderated teacher predictions, many affected students had the grades to meet their offers, and reapplied. 90% of them said they aimed to study at top-tier universities; the effect was that top-tier universities appeared to have a capacity problem. The Royal Statistical Society said they had offered to help with the construction of the algorithm, but withdrew that offer when they saw the nature of the non-disclosure agreement they would have been required to sign. Ofqual was not prepared to discuss it and delayed replying for 55 days. Legal opinion: Lord Falconer, a former attorney general, opined that three laws had been broken, and gave an example where Ofqual had ignored a direct instruction of the Secretary of State for Education. Falconer said the formula for standardising grades was in breach of the overarching objectives under which Ofqual was established by the Apprenticeships, Skills, Children and Learning Act 2009. The objectives require that the grading system gives a reliable indication of the knowledge, skills and understanding of the student, and that it allows reliable comparisons to be made with students taking exams graded by other boards and with students who took comparable exams in previous years. The Labour Party suggested that the process was unlawful in that the students were given no appeal mechanism, stating: "There will be a mass of discriminatory impacts by operating the process on the basis of reflecting the previous years' results from their institutions", and "It is bound to disadvantage a whole range of groups with protected characteristics, in breach of a range of anti-discrimination legislation."
**Xaa-Pro aminopeptidase** Xaa-Pro aminopeptidase: Xaa-Pro aminopeptidase (EC 3.4.11.9, X-Pro aminopeptidase, proline aminopeptidase, aminopeptidase P, aminoacylproline aminopeptidase) is an enzyme. This enzyme catalyses the following chemical reaction: release of any N-terminal amino acid, including proline, that is linked to proline, even from a dipeptide or tripeptide. This enzyme is Mn2+-dependent.
**Lysidine (nucleoside)** Lysidine (nucleoside): Lysidine is an uncommon nucleoside, rarely seen outside of tRNA. It is a derivative of cytidine in which the carbonyl is replaced by the amino acid lysine. The third position in the anticodon of the isoleucine-specific tRNA is typically changed from a cytidine, which would pair with guanosine, to a lysidine, which base-pairs with adenosine. Uridine could not be used at this position, even though it is the conventional partner for adenosine, since it would also "wobble"-pair with guanosine. Lysidine thus allows better translation fidelity. Lysidine is denoted as L or k2C (lysine bound to the C2 atom of cytidine).
**TetR** TetR: Tet Repressor proteins (otherwise known as TetR) are proteins that play an important role in conferring antibiotic resistance on large categories of bacterial species. Tetracycline (Tc) is a broad family of antibiotics to which bacteria have evolved resistance. Tc normally kills bacteria by binding to the bacterial ribosome and halting protein synthesis. The expression of Tc resistance genes is regulated by the repressor TetR. TetR represses the expression of TetA, a membrane protein that pumps substances toxic to the bacteria (like Tc) out of the cell; TetR does so by binding the tetA operator. In Tc-resistant bacteria, TetA will pump out Tc before it can bind to the ribosome, because the repressive action of TetR on TetA is halted by the binding of Tc to TetR. TetR may therefore have an important role in helping scientists better understand mechanisms of antibiotic resistance and how to treat antibiotic-resistant bacteria. TetR is one of many proteins in the TetR protein family, which is so named because TetR is its most well-characterized member. TetR is used in artificially engineered gene regulatory networks because of its capacity for fine regulation of promoters. In the absence of Tc or analogs like ATc, basal expression of TetR-regulated promoters is low, but expression rises sharply in the presence of even a minute quantity of Tc. The tetA gene is also present in the widely used E. coli cloning vector pBR322, where it is often referred to by the name of its tetracycline-resistance phenotype (TetR, with a superscript R), not to be confused with the repressor protein TetR. Structure & Function: TetR functions as a homodimer. Each monomer consists of ten alpha helices connected by loops and turns. The overall structure of TetR can be broken down into two DNA-binding domains (one per monomer) and a regulatory core, which is responsible for tetracycline recognition and dimerization. TetR dimerizes by making hydrophobic contacts within the regulatory core. There is a binding cavity for tetracycline in the outer helices of the regulatory domain. When tetracycline binds this cavity, it causes a conformational change that affects the DNA-binding domain so that TetR is no longer able to bind DNA. As a result, TetA and TetR are expressed. There is still some debate in the field about whether tetracycline derivatives alone can cause this conformational change or whether tetracycline must be in complex with magnesium to bind TetR. (TetR typically binds tetracycline–Mg2+ complexes inside bacteria, but TetR binding to tetracycline alone has been observed in vitro.) The DNA-binding domains of TetR recognize a 15 base pair palindromic sequence of the TetA operator. These domains mainly consist of a helix-turn-helix (HTH) motif that is common in TetR protein family members (see below). However, the N-terminal residues preceding this motif have also been shown to be important for DNA binding. Although these residues do not directly contact the DNA, they pack against the HTH, and this packing is essential for binding. The HTH motifs make mostly hydrophobic interactions with major grooves of the target DNA. Binding of TetR to its target DNA sequence causes changes in both the DNA and TetR. TetR causes widening of the major grooves as well as kinking of the DNA; one helix of the HTH motif of TetR adopts a 3₁₀ helical turn as the result of complex DNA interactions. TetR Protein Family: As of June 2005, this family of proteins had about 2,353 members that are transcriptional regulators. (Transcriptional regulators control gene expression.)
These proteins contain a helix-turn-helix (HTH) motif that is the DNA-binding domain. The second helix is considered to be the most important for DNA sequence specificity and often recognizes nucleic acids within the major groove of the double helix. In the majority of the family members, this motif is at the N-terminal end of the protein and is highly conserved. The high conservation of the HTH motif is not observed for the other domains of the protein. The differences observed in these other regulatory domains are likely due to differences in the molecules that each family member senses. TetR protein family members are mostly transcriptional repressors, meaning that they prevent the expression of certain genes at the DNA level. These proteins can act on genes with various functions, including antibiotic resistance, biosynthesis and metabolism, bacterial pathogenesis, and response to cell stress.
**Lumer–Phillips theorem** Lumer–Phillips theorem: In mathematics, the Lumer–Phillips theorem, named after Günter Lumer and Ralph Phillips, is a result in the theory of strongly continuous semigroups that gives a necessary and sufficient condition for a linear operator in a Banach space to generate a contraction semigroup. Statement of the theorem: Let A be a linear operator defined on a linear subspace D(A) of the Banach space X. Then A generates a contraction semigroup if and only if D(A) is dense in X, A is dissipative, and A − λ₀I is surjective for some λ₀ > 0, where I denotes the identity operator. An operator satisfying the last two conditions is called maximally dissipative. Variants of the theorem: Reflexive spaces: Let A be a linear operator defined on a linear subspace D(A) of the reflexive Banach space X. Then A generates a contraction semigroup if and only if A is dissipative and A − λ₀I is surjective for some λ₀ > 0, where I denotes the identity operator. Note that the conditions that D(A) is dense and that A is closed are dropped in comparison to the non-reflexive case, because in the reflexive case they follow from the other two conditions. Variants of the theorem: Dissipativity of the adjoint: Let A be a linear operator defined on a dense linear subspace D(A) of the reflexive Banach space X. Then A generates a contraction semigroup if and only if A is closed and both A and its adjoint operator A∗ are dissipative. In case X is not reflexive, this condition for A to generate a contraction semigroup is still sufficient, but not necessary. Variants of the theorem: Quasicontraction semigroups: Let A be a linear operator defined on a linear subspace D(A) of the Banach space X. Then A generates a quasicontraction semigroup if and only if D(A) is dense in X, A is closed, A is quasidissipative, i.e. there exists an ω ≥ 0 such that A − ωI is dissipative, and A − λ₀I is surjective for some λ₀ > ω, where I denotes the identity operator. Examples: Consider H = L²([0, 1]; R) with its usual inner product, and let Au = u′ with domain D(A) equal to those functions u in the Sobolev space H¹([0, 1]; R) with u(1) = 0. D(A) is dense. Moreover, for every u in D(A), $\langle u, Au \rangle = \int_0^1 u(x)\,u'(x)\,dx = -\tfrac{1}{2}u(0)^2 \le 0$, so that A is dissipative. The ordinary differential equation u′ − λu = f, u(1) = 0 has a unique solution u in H¹([0, 1]; R) for any f in L²([0, 1]; R), namely $u(x) = e^{\lambda x} \int_1^x e^{-\lambda t} f(t)\,dt$, so that the surjectivity condition is satisfied. Hence, by the reflexive version of the Lumer–Phillips theorem, A generates a contraction semigroup. There are many more examples where a direct application of the Lumer–Phillips theorem gives the desired result. Examples: In conjunction with translation, scaling and perturbation theory, the Lumer–Phillips theorem is the main tool for showing that certain operators generate strongly continuous semigroups. The following is an example in point: a normal operator (an operator that commutes with its adjoint) on a Hilbert space generates a strongly continuous semigroup if and only if its spectrum is bounded from above.
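The dissipativity computation in the example is a one-line integration; written out step by step (a routine verification added here, not part of the original text):

```latex
% Verifying dissipativity of Au = u' on D(A) = {u in H^1([0,1]) : u(1) = 0}:
\[
  \langle u, Au \rangle
  = \int_0^1 u(x)\,u'(x)\,dx
  = \tfrac12 \bigl[u(x)^2\bigr]_0^1
  = \tfrac12 \bigl(u(1)^2 - u(0)^2\bigr)
  = -\tfrac12\,u(0)^2 \;\le\; 0 .
\]
```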
**Potter's wheel** Potter's wheel: In pottery, a potter's wheel is a machine used in the shaping (known as throwing) of clay into round ceramic ware. The wheel may also be used during the process of trimming excess clay from leather-hard dried ware that is stiff but malleable, and for applying incised decoration or rings of colour. Use of the potter's wheel became widespread throughout the Old World but was unknown in the Pre-Columbian New World, where pottery was handmade by methods that included coiling and beating. Potter's wheel: A potter's wheel may occasionally be referred to as a "potter's lathe". However, that term is better used for another kind of machine that is used for a different shaping process, turning, similar to that used for shaping metal and wooden articles. The potter's wheel is an important tool for creating arts and crafts products. The techniques of jiggering and jolleying can be seen as extensions of the potter's wheel: in jiggering, a shaped tool is slowly brought down onto the plastic clay body that has been placed on top of the rotating plaster mould. The jigger tool shapes one face, the mould the other. The term is specific to the shaping of flat ware, such as plates, whilst a similar technique, jolleying, refers to the production of hollow ware, such as cups. History: Most early ceramic ware was hand-built using a simple coiling technique in which clay was rolled into long threads that were then pinched and smoothed together to form the body of a vessel. In the coiling method of construction, all the energy required to form the main part of a piece is supplied indirectly by the hands of the potter. Early ceramics built by coiling were often placed on mats or large leaves to allow them to be worked more conveniently. The evidence for this lies in the mat or leaf impressions left in the clay of the base of the pot. This arrangement allowed the potter to rotate the vessel during construction, rather than walk around it to add coils of clay. History: The oldest forms of the potter's wheel (called tournettes or slow wheels) were probably developed as an extension of this procedure. Tournettes, in use around 4500 BC in the Near East, were turned slowly by hand or by foot while coiling a pot. Only a small range of vessels were fashioned on the tournette, suggesting that it was used by a limited number of potters. The introduction of the slow wheel increased the efficiency of hand-powered pottery production. History: In the mid to late 3rd millennium BC the fast wheel was developed, which operated on the flywheel principle. It utilised energy stored in the rotating mass of the heavy stone wheel itself to speed the process. This wheel was wound up and charged with energy by kicking it, or by pushing it around with a stick, providing angular momentum. The fast wheel enabled a new process of pottery-making to develop, called throwing, in which a lump of clay was placed centrally on the wheel and then squeezed, lifted and shaped as the wheel turned. The process tends to leave rings on the inside of the pot, and can be used to create thinner-walled pieces and a wider variety of shapes, including stemmed vessels, so wheel-thrown pottery can be distinguished from handmade ware. Potters could now produce many more pots per hour, a first step towards industrialization. Many modern scholars suggest that the potter's wheel was first developed by the ancient Sumerians in Mesopotamia.
A stone potter's wheel found at the Sumerian city of Ur in modern-day Iraq has been dated to about 3129 BC, but fragments of wheel-thrown pottery of an even earlier date have been recovered in the same area. However, southeastern Europe and China have also been claimed as possible places of origin. Furthermore, the wheel was also in popular use by potters starting around 3500 BC in major cities of the Indus Valley civilization in South Asia, namely Harappa and Mohenjo-daro (Kenoyer, 2005). Others consider Egypt as "being the place of origin of the potter's wheel. It was here that the turntable shaft was lengthened about 3000 BC and a flywheel added. The flywheel was kicked and later was moved by pulling the edge with the left hand while forming the clay with the right. This led to the counterclockwise motion for the potter's wheel which is almost universal." Hence the exact origin of the wheel is not yet wholly clear. History: In the Iron Age, the potter's wheel in common use had a turning platform about one metre (3 feet) above the floor, connected by a long axle to a heavy flywheel at ground level. This arrangement allowed the potter to keep the turning wheel rotating by kicking the flywheel with the foot, leaving both hands free for manipulating the vessel under construction. However, from an ergonomic standpoint, sweeping the foot from side to side against the spinning hub is rather awkward. At some point, an alternative solution was invented that involved a crankshaft with a lever that converted up-and-down motion into rotary motion. History: The use of the motor-driven wheel has become common in modern times, particularly with craft potters and educational institutions, although human-powered ones are still in use and are preferred by some studio potters. Techniques of throwing: A skilled potter can quickly throw a vessel from up to 15 kg (33 lb) of clay. Alternatively, by throwing and adding coils of clay then throwing again, pots up to four feet high may be made, the heat of a blowlamp being used to firm each thrown section before adding the next coil. In Chinese manufacturing, very large pots are made by two throwers working simultaneously. The potter's wheel in myth and legend: In Ancient Egyptian mythology, the deity Khnum was said to have formed the first humans on a potter's wheel.
**E-mahashabdkosh** E-mahashabdkosh: e-mahashabdkosh (Hindi: ई-महाशब्दकोश) is an online dictionary website which is hosted and maintained by the Department of Official Language, India. The website is intended for general public use. About the site: e-mahashabdkosh is an online bilingual, bidirectional Hindi–English pronunciation dictionary. The dictionary includes basic meanings, synonyms, word usage, and the usage of words in special domains. It supports searching for both Hindi and English words. The purpose of the dictionary is to provide a complete, correct, compact meaning and definition of a word. Development: During the year 2011–12, development work for e-mahashabdkosh was completed for 12 work areas; work was also completed in four additional areas: education, sports, culture and railways.
**Saccheri–Legendre theorem** Saccheri–Legendre theorem: In absolute geometry, the Saccheri–Legendre theorem states that the sum of the angles in a triangle is at most 180°. Absolute geometry is the geometry obtained from assuming all the axioms that lead to Euclidean geometry with the exception of the axiom that is equivalent to the parallel postulate of Euclid. The theorem is named after Giovanni Girolamo Saccheri and Adrien-Marie Legendre. Saccheri–Legendre theorem: The existence of at least one triangle with an angle sum of 180° in absolute geometry implies Euclid's parallel postulate. Similarly, the existence of at least one triangle with an angle sum of less than 180° implies the characteristic postulate of hyperbolic geometry. Max Dehn gave an example of a non-Legendrian geometry where the angle sum of a triangle is greater than 180°, and a semi-Euclidean geometry where there is a triangle with an angle sum of 180° but Euclid's parallel postulate fails. In Dehn's geometries the Archimedean axiom does not hold.
**Spam Bully** Spam Bully: Spam Bully is anti-spam software made by Axaware, LLC. SpamBully uses Bayesian filtering to separate good emails from spam emails. Spam Bully 3 included a feature which performed automated clicks on spam mail, similar to some other software, such as the later AdNauseam browser extension. Its features include the ability to report spammers to their providers and the FTC, and the option of displaying the SpamBully toolbar in a variety of languages, including Spanish, German, Italian and Russian.
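For readers unfamiliar with Bayesian filtering, the general technique the article attributes to SpamBully works roughly as follows; the token probabilities and function below are toy assumptions for illustration, not anything from the actual product:

```python
import math

# Minimal naive-Bayes spam scoring sketch. The per-word spam
# probabilities are invented toy values, not SpamBully's.
spam_prob = {"free": 0.90, "winner": 0.95, "meeting": 0.10, "report": 0.15}

def spam_score(words, prior_spam=0.5):
    """Combine per-word spam probabilities via Bayes' rule in log-odds."""
    log_odds = math.log(prior_spam / (1 - prior_spam))
    for w in words:
        p = spam_prob.get(w)
        if p is not None:
            log_odds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))    # back to a probability

print(spam_score("free winner".split()))     # high -> likely spam
print(spam_score("meeting report".split()))  # low  -> likely ham
```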
**6-meter band** 6-meter band: The 6-meter band is the lowest portion of the very high frequency (VHF) radio spectrum internationally allocated to amateur radio use. The term refers to the average signal wavelength of 6 meters. Although located in the lower portion of the VHF band, it nonetheless occasionally displays propagation mechanisms characteristic of the high frequency (HF) bands. This normally occurs close to sunspot maximum, when solar activity increases ionization levels in the upper atmosphere. Worldwide 6 meter propagation occurred during the sunspot maximum of 2005, making 6 meter communications as good as or, in some cases and locations, better than on the HF frequencies. The prevalence of HF characteristics on this VHF band has inspired amateur operators to dub it the "magic band". 6-meter band: In the northern hemisphere, activity peaks from May through early August, when regular sporadic E propagation enables long-distance contacts spanning up to 2,500 kilometres (1,600 mi) for single-hop propagation. Multiple-hop sporadic E propagation allows intercontinental communications at distances of up to 10,000 kilometres (6,200 mi). In the southern hemisphere, sporadic E propagation is most common from November through early February. 6-meter band: The 6-meter band shares many characteristics with the neighboring 8-meter band, but is somewhat higher in frequency. History: On October 10, 1924, the 5-meter band (56–64 MHz) was first made available to amateurs in the United States by the Third National Radio Conference. On October 4, 1927, the band was allocated on a worldwide basis by the International Radiotelegraph Conference in Washington, D.C.; 56–60 MHz was allocated for amateur and experimental use. There was no change to this allocation at the 1932 International Radiotelegraph Conference in Madrid. At the 1938 International Radiocommunication Conference in Cairo, television broadcasting was given priority in a portion of the 5- and 6-meter band in Europe. Television and low-power stations, meaning those with less than 1 kW power, were allocated 56–58.5 MHz, and amateurs, experimenters and low-power stations were allocated 58.5–60 MHz in the European region. The conference maintained the 56–60 MHz allocation for other regions and allowed administrations in Europe latitude to let amateurs continue using 56–58.5 MHz. Starting in 1938, the FCC created 6 MHz wide television channel allocations working around the 5-meter amateur band, with channel 2 occupying 50–56 MHz. In 1940, television channel 2 was reallocated to 60 MHz and TV channel 1 was moved to 50–56 MHz, maintaining a gap for the 5-meter amateur band. When the US entered World War II, transmissions by amateur radio stations were suspended for the duration of the war. After the war, the 5-meter band was briefly reopened to amateurs from 56–60 MHz until March 1, 1946. At that time the FCC moved television channel 2 down to 54–60 MHz and reallocated channel 1 down to 44–50 MHz, opening a gap that would become the amateur radio 6-meter band in the United States. FCC Order 130-C went into effect at 3 am Eastern Standard Time on March 1, 1946, and created the 6-meter band allocation for the amateur service as 50–54 MHz. Emission types A1, A2, A3 and A4 were allowed for the entire band, and special emission for frequency modulation telephony was allowed from 52.5 to 54 MHz. At the 1947 International Radio Conference in Atlantic City, New Jersey, the amateur service was allocated 50–54 MHz in ITU Regions 2 and 3.
Broadcasting was allocated from 41 to 68 MHz in ITU Region 1, but exclusive amateur use of the 6-meter band (50–54 MHz) was allowed in a portion of southern Africa. Amateurs in the United Kingdom remained in the 5-meter band (58.5–60 MHz) for a period of time following World War II, but lost the band to UK analogue television channel 4. They gained a 4-meter band in 1956 and eventually gained the 6-meter band from 50–52 MHz, when it was decided to terminate analogue television broadcasts on channel 2. Amateur radio: The Radio Regulations of the International Telecommunication Union allow amateur radio operations in the frequency range from 50.000–54.000 MHz in ITU Regions 2 and 3. At the ITU level, Region 1 is allocated to broadcasting. However, in practice a large number of ITU Region 1 countries allow amateur use of at least some of the 6 meter band. Over the years, portions have been vacated by VHF television broadcasting channels for one reason or another. In November 2015 the ITU World Radio Conference (WRC-15) agreed that for the next conference in 2019, Agenda Item 1.1 would study a future allocation of 50–54 MHz to amateur radio in Region 1. Amateur radio: Frequency allocations: 6 meter frequency allocations for amateur radio are not universal worldwide. In the United States and Canada, the band ranges from 50 MHz to 54 MHz. In some other countries, the band is restricted to military communications. Further, in some nations the frequency range is used for television transmissions, although most countries have (re)assigned those television channels to higher frequencies (see TV channel 1). Amateur radio: Although the International Telecommunication Union does not allocate 6 meter frequencies to amateurs in Europe, the decline of VHF television broadcasts and commercial pressure on the lower VHF spectrum have allowed most European countries to provide a 6 meter amateur allocation. Amateur radio: In the United Kingdom, it is legal to use the 6 meter band between 50 and 52 MHz, with some limitations at some frequencies. In the UK, amateur use of 50–51 MHz is primary, and the rest is secondary, with power limitations. A detailed bandplan can be obtained from the Radio Society of Great Britain (RSGB) website. Many organizations promote regular competitions in this frequency range to promote its use and to familiarize operators with its quirks. For example, the RSGB VHF Contest Committee runs a large number of contests on 6 meters every year. Because of the band's peculiarities, there are a number of 6 meter band operator groups. These groups monitor the status of the band over different paths and promote 6 meter band operations. Amateur radio: For a full list of countries using 6 meters, refer to the bandplan of the International Amateur Radio Union. Television interference: Because the 6 meter band is just below the frequencies formerly allocated to the old VHF television channel 2 in North America (54–60 MHz), television interference (TVI) to neighbors' sets was a common problem for amateurs operating in this band prior to June 2009, when analog television transmissions ended in the U.S. Equipment: Beginning around the turn of the millennium, the availability of transceivers that include the 6 meter band has increased greatly. Many commercial HF transceivers now include the 6 meter band along with shortwave, as do a few handheld VHF/UHF transceivers. There are also a number of stand-alone 6 meter band transceivers, although commercial production of these has been relatively rare in recent years.
Despite support in more available radios, however, the 6 meter band does not share the popularity of amateur radio's 2-meter band. This is due, in large part, to the larger size of 6 meter antennas, power limitations in some countries outside the United States, and the 6 meter band's greater susceptibility to local electrical interference. Equipment: As transceivers have become more available for the 6 meter band, it has quickly gained popularity. In many countries, including the United States, access is granted to entry-level license holders. Those without access to international HF frequencies often gain their first experience with truly long-distance communications on the 6 meter band. Many of these operators develop a real affection for the challenge of the band, and often continue to devote much time to it even after they gain access to the HF frequencies by upgrading their licenses. Equipment: For antennas, horizontal polarization is used for 6 meter weak-signal SSB communications using tropospheric propagation, sporadic E, and multi-hop sporadic E, and for other propagation modes where polarization does not matter as much. Vertical polarization is customarily used for local FM communications, repeaters, and radio control. Common uses: AM simplex (direct, radio-to-radio communications), FM simplex, FM repeater operation, Earth–Moon–Earth communication, sporadic E propagation, Aurora Borealis reflection, WSJT digital modes, packet radio, SSB voice operation, Morse code (CW) operation, DX, and radio control. Radio control hobby use: In North America, especially in the United States and Canada, the 6-meter band may be used by licensed amateurs for the safe operation of radio-controlled (RC) aircraft and other types of radio control hobby miniatures. By general agreement among the amateur radio community, 200 kHz of the 6 meter band is reserved for the telecommand of models by licensed amateurs using amateur frequencies. The sub-band reserved for this use is 50.79–50.99 MHz, with ten "specified" frequencies, numbered "00" through "09", spaced 20 kHz apart from 50.800–50.980 MHz (see the sketch after this section). The upper end of the band, starting at 53.0 MHz and going upwards in 100 kHz steps to 53.8 MHz, used to be similarly reserved for RC modelers, but with the rise of amateur repeater stations operating above 53 MHz in the United States, and very few 53 MHz RC units in Canada, the move to the lower end of the 6 meter spectrum for radio-controlled model flying by amateur radio operators was undertaken in North America starting in the early 1980s and was more-or-less completed by 1991. It is still completely legal for ground-level RC model operation (cars, boats, etc.) to be conducted on any frequency within the band above 50.1 MHz by any licensed amateur operator in the United States; however, an indiscriminate choice of frequencies for RC operations is discouraged by the amateur radio community via its self-imposed band plan for 6 meters. Common uses: In the United States, the Federal Communications Commission's (FCC) Part 97.215 rules regulate telecommand of model craft in the amateur service within the United States. The rule allows a maximum radiated RF power output of one watt for RC model operations of any type. In Canada, Industry Canada's RBR-4, Standards for the Operation of Radio Stations in the Amateur Radio Service, limits radio control of craft, for those models intended for use on any amateur radio-allocated frequency, to amateur service frequencies above 30 MHz.
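The ten RC channel frequencies follow directly from the 20 kHz spacing described above; a minimal Python sketch (the channel naming mirrors the "00"–"09" convention in the text) generates the list:

```python
# Generate the ten North American 6 m radio-control channels described
# above: "00" through "09", 20 kHz apart, from 50.800 to 50.980 MHz.
channels = {f"{k:02d}": 50.800 + 0.020 * k for k in range(10)}
for name, freq_mhz in channels.items():
    print(f"channel {name}: {freq_mhz:.3f} MHz")
```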
**Raita algorithm** Raita algorithm: In computer science, the Raita algorithm is a string searching algorithm which improves the performance of the Boyer–Moore–Horspool algorithm. Like the Boyer–Moore string-search algorithm, it preprocesses the pattern; the order in which the characters of the pattern are compared against the window, however, differs from the Boyer–Moore–Horspool algorithm. The algorithm was published by Timo Raita in 1991. Description: The Raita algorithm searches for a pattern "P" in a given text "T" by comparing characters of the pattern against the text. The window over the text "T" has the same length as "P". First, the last character of the pattern is compared with the rightmost character of the window. If they match, the first character of the pattern is compared with the leftmost character of the window. Description: If they match again, the middle character of the pattern is compared with the middle character of the window. If everything in this pre-check succeeds, the original comparison runs from the second character up to the last but one. If there is a mismatch at any stage of the algorithm, it performs the bad-character shift function computed in the pre-processing phase. The bad-character shift function is identical to the one proposed in the Boyer–Moore–Horspool algorithm. A modern formulation of a similar pre-check is found in std::string::find, a linear/quadratic string-matcher, in libc++ and libstdc++. Assuming a well-optimized version of memcmp, not skipping characters in the "original comparison" tends to be more efficient, as the pattern is likely to be aligned.
Example:
Pattern: abddb
Text: abbaabaabddbabadbb
Pre-processing stage (bad-character shift table): bmBc[a] = 4, bmBc[b] = 3, bmBc[d] = 1.
Attempt 1:
abbaabaabddbabadbb
....b
The last character of the pattern is compared with the rightmost character of the window. It is a mismatch, so the window shifts by 4 (bmBc[a]).
Attempt 2:
abbaabaabddbabadbb
    A.d.B
The last and first characters of the pattern match, but the middle character does not, so the window shifts by 3 (bmBc[b]).
Attempt 3:
abbaabaabddbabadbb
       ABDDB
An exact match is found here, but the algorithm continues until the window can no longer advance; the window shifts by 3 (bmBc[b]).
Attempt 4:
abbaabaABDDBabadbb
          ....b
The rightmost character of the window mismatches, and the required shift of 4 (bmBc[a]) would move the window past the end of the text, so the algorithm terminates. Capital letters mark the exact match of the pattern in the text.
Complexity: The pre-processing stage takes O(m) time, where "m" is the length of the pattern "P". The searching stage takes O(mn) time in the worst case, where "n" is the length of the text "T".
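Here is a minimal, runnable Python sketch of the procedure described above; the function name and structure are our own, but the pre-checks and bad-character shifts follow the description and reproduce the worked example:

```python
def raita_search(text, pattern):
    """Find all occurrences of `pattern` in `text` with the Raita scheme:
    pre-check the last, first, and middle characters of the window, then
    verify the rest, advancing via the Horspool bad-character shift."""
    n, m = len(text), len(pattern)
    if m == 0 or n < m:
        return []

    # Bad-character shift table over pattern[:-1], as in Boyer-Moore-Horspool;
    # characters absent from the pattern shift the window by its full length.
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}

    first, middle, last = pattern[0], pattern[m // 2], pattern[-1]
    matches = []
    pos = 0
    while pos <= n - m:
        window = text[pos:pos + m]
        # Pre-checks: last, then first, then middle character,
        # then the remaining characters.
        if (window[-1] == last and window[0] == first
                and window[m // 2] == middle
                and window[1:m - 1] == pattern[1:m - 1]):
            matches.append(pos)
        # Shift by the rightmost character of the current window.
        pos += shift.get(window[-1], m)
    return matches

# The worked example above: one match, at index 7.
print(raita_search("abbaabaabddbabadbb", "abddb"))  # [7]
```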
**Nanomaterials** Nanomaterials: Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 100 nm (the usual definition of nanoscale). Nanomaterials research takes a materials science-based approach to nanotechnology, leveraging advances in materials metrology and synthesis which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, thermo-physical or mechanical properties. Nanomaterials are slowly becoming commercialized and beginning to emerge as commodities. Definition: In ISO/TS 80004, nanomaterial is defined as the "material with any external dimension in the nanoscale or having internal structure or surface structure in the nanoscale", with nanoscale defined as the "length range approximately from 1 nm to 100 nm". This includes both nano-objects, which are discrete pieces of material, and nanostructured materials, which have internal or surface structure on the nanoscale; a nanomaterial may be a member of both these categories. Definition: On 18 October 2011, the European Commission adopted the following definition of a nanomaterial: a natural, incidental or manufactured material containing particles, in an unbound state or as an aggregate or as an agglomerate, where, for 50% or more of the particles in the number size distribution, one or more external dimensions is in the size range 1 nm – 100 nm. In specific cases, and where warranted by concerns for the environment, health, safety or competitiveness, the number size distribution threshold of 50% may be replaced by a threshold between 1% and 50%. Sources: Engineered: Engineered nanomaterials have been deliberately engineered and manufactured by humans to have certain required properties. Legacy nanomaterials are those that were in commercial production prior to the development of nanotechnology, as incremental advancements over other colloidal or particulate materials. They include carbon black and titanium dioxide nanoparticles. Sources: Incidental: Nanomaterials may be unintentionally produced as a byproduct of mechanical or industrial processes, through combustion and vaporization. Sources of incidental nanoparticles include vehicle engine exhausts, smelting, welding fumes, and combustion processes from domestic solid fuel heating and cooking. For instance, the class of nanomaterials called fullerenes are generated by burning gas, biomass, and candles. Incidental nanomaterials can also arise as wear and corrosion products. Incidental atmospheric nanoparticles are often referred to as ultrafine particles, which are unintentionally produced during an intentional operation, and can contribute to air pollution. Sources: Natural: Biological systems often feature natural, functional nanomaterials. The structure of foraminifera (mainly chalk) and viruses (protein, capsid), the wax crystals covering a lotus or nasturtium leaf, spider and spider-mite silk, the blue hue of tarantulas, the "spatulae" on the bottom of gecko feet, some butterfly wing scales, natural colloids (milk, blood), horny materials (skin, claws, beaks, feathers, horns, hair), paper, cotton, nacre, corals, and even our own bone matrix are all natural organic nanomaterials. Sources: Natural inorganic nanomaterials occur through crystal growth in the diverse chemical conditions of the Earth's crust.
For example, clays display complex nanostructures due to the anisotropy of their underlying crystal structure, and volcanic activity can give rise to opals, which are an instance of naturally occurring photonic crystals due to their nanoscale structure. Fires represent particularly complex reactions and can produce pigments, cement, fumed silica, etc. Sources: Natural sources of nanoparticles include combustion products from forest fires, volcanic ash, ocean spray, and the radioactive decay of radon gas. Natural nanomaterials can also be formed through weathering processes of metal- or anion-containing rocks, as well as at acid mine drainage sites. Types: Nano-objects are often categorized as to how many of their dimensions fall in the nanoscale. A nanoparticle is defined as a nano-object with all three external dimensions in the nanoscale, whose longest and shortest axes do not differ significantly. A nanofiber has two external dimensions in the nanoscale, with nanotubes being hollow nanofibers and nanorods being solid nanofibers. A nanoplate/nanosheet has one external dimension in the nanoscale, and if the two larger dimensions are significantly different it is called a nanoribbon. For nanofibers and nanoplates, the other dimensions may or may not be in the nanoscale, but must be significantly larger. In all of these cases, a significant difference is noted to typically be at least a factor of 3. Nanostructured materials are often categorized by what phases of matter they contain. A nanocomposite is a solid containing at least one physically or chemically distinct region, or collection of regions, having at least one dimension in the nanoscale. A nanofoam has a liquid or solid matrix, filled with a gaseous phase, where one of the two phases has dimensions on the nanoscale. A nanoporous material is a solid containing nanopores, voids in the form of open or closed pores of sub-micron length scales. A nanocrystalline material has a significant fraction of crystal grains in the nanoscale. Types: Nanoporous materials: The term nanoporous materials covers subsets of microporous and mesoporous materials. Microporous materials are porous materials with a mean pore size smaller than 2 nm, while mesoporous materials are those with pore sizes in the range 2–50 nm. Microporous materials exhibit pore sizes with a length scale comparable to small molecules. For this reason, such materials may serve valuable applications, including separation membranes. Mesoporous materials are interesting for applications that require high specific surface areas while enabling penetration by molecules that may be too large to enter the pores of a microporous material. In some sources, nanoporous materials and nanofoams are considered nanostructures but not nanomaterials, because only the voids and not the materials themselves are nanoscale. Although the ISO definition only considers round nano-objects to be nanoparticles, other sources use the term nanoparticle for all shapes. Types: Nanoparticles: Nanoparticles have all three dimensions on the nanoscale. Nanoparticles can also be embedded in a bulk solid to form a nanocomposite. Fullerenes: The fullerenes are a class of allotropes of carbon which conceptually are graphene sheets rolled into tubes or spheres. These include the carbon nanotubes (or silicon nanotubes), which are of interest both because of their mechanical strength and also because of their electrical properties.
Types: The first fullerene molecule to be discovered, and the family's namesake, buckminsterfullerene (C60), was prepared in 1985 by Richard Smalley, Robert Curl, James Heath, Sean O'Brien, and Harold Kroto at Rice University. The name was a homage to Buckminster Fuller, whose geodesic domes it resembles. Fullerenes have since been found to occur in nature. More recently, fullerenes have been detected in outer space. For the past decade, the chemical and physical properties of fullerenes have been a hot topic in the field of research and development, and are likely to continue to be for a long time. In April 2003, fullerenes were under study for potential medicinal use: binding specific antibiotics to the structure of resistant bacteria and even targeting certain types of cancer cells such as melanoma. The October 2005 issue of Chemistry and Biology contains an article describing the use of fullerenes as light-activated antimicrobial agents. In the field of nanotechnology, heat resistance and superconductivity are among the properties attracting intense research. Types: A common method used to produce fullerenes is to send a large current between two nearby graphite electrodes in an inert atmosphere. The resulting carbon plasma arc between the electrodes cools into a sooty residue from which many fullerenes can be isolated. Many calculations have been done using ab initio quantum methods applied to fullerenes. By DFT and TDDFT methods one can obtain IR, Raman, and UV spectra. Results of such calculations can be compared with experimental results. Types: Metal-based nanoparticles: Inorganic nanomaterials (e.g. quantum dots, nanowires, and nanorods), because of their interesting optical and electrical properties, could be used in optoelectronics. Furthermore, the optical and electronic properties of nanomaterials, which depend on their size and shape, can be tuned via synthetic techniques. There is the possibility of using those materials in organic-material-based optoelectronic devices such as organic solar cells, OLEDs, etc. The operating principles of such devices are governed by photoinduced processes like electron transfer and energy transfer. The performance of the devices depends on the efficiency of the photoinduced process responsible for their functioning. Therefore, a better understanding of those photoinduced processes in organic/inorganic nanomaterial composite systems is necessary in order to use them in optoelectronic devices. Types: Nanoparticles or nanocrystals made of metals, semiconductors, or oxides are of particular interest for their mechanical, electrical, magnetic, optical, chemical and other properties. Nanoparticles have been used as quantum dots and as chemical catalysts such as nanomaterial-based catalysts. Recently, a range of nanoparticles have been extensively investigated for biomedical applications, including tissue engineering, drug delivery, and biosensors. Nanoparticles are of great scientific interest as they are effectively a bridge between bulk materials and atomic or molecular structures. A bulk material should have constant physical properties regardless of its size, but at the nano-scale this is often not the case. Size-dependent properties are observed, such as quantum confinement in semiconductor particles, surface plasmon resonance in some metal particles, and superparamagnetism in magnetic materials. Types: Nanoparticles exhibit a number of special properties relative to bulk material. For example, the bending of bulk copper (wire, ribbon, etc.)
occurs with movement of copper atoms/clusters at about the 50 nm scale. Copper nanoparticles smaller than 50 nm are considered super-hard materials that do not exhibit the same malleability and ductility as bulk copper. The change in properties is not always desirable. Ferroelectric materials smaller than 10 nm can switch their polarization direction using room-temperature thermal energy, making them useless for memory storage. Suspensions of nanoparticles are possible because the interaction of the particle surface with the solvent is strong enough to overcome differences in density, which usually result in a material either sinking or floating in a liquid. Nanoparticles often have unexpected visual properties because they are small enough to confine their electrons and produce quantum effects. For example, gold nanoparticles appear deep red to black in solution. Types: The often very high surface-area-to-volume ratio of nanoparticles (illustrated numerically below) provides a tremendous driving force for diffusion, especially at elevated temperatures. Sintering is possible at lower temperatures and over shorter durations than for larger particles. This theoretically does not affect the density of the final product, though flow difficulties and the tendency of nanoparticles to agglomerate do complicate matters. The surface effects of nanoparticles also reduce the incipient melting temperature. Types: One-dimensional nanostructures: The smallest possible crystalline wires, with cross-sections as small as a single atom, can be engineered in cylindrical confinement. Carbon nanotubes, a natural semi-1D nanostructure, can be used as a template for synthesis. Confinement provides mechanical stabilization and prevents linear atomic chains from disintegration; other structures of 1D nanowires are predicted to be mechanically stable even upon isolation from the templates. Types: Two-dimensional nanostructures: 2D materials are crystalline materials consisting of a two-dimensional single layer of atoms. The most important representative, graphene, was discovered in 2004. Thin films with nanoscale thicknesses are considered nanostructures, but are sometimes not considered nanomaterials because they do not exist separately from the substrate. Types: Bulk nanostructured materials: Some bulk materials contain features on the nanoscale, including nanocomposites, nanocrystalline materials, nanostructured films, and nanotextured surfaces. Box-shaped graphene (BSG) nanostructure is an example of a 3D nanomaterial. The BSG nanostructure appeared after mechanical cleavage of pyrolytic graphite. This nanostructure is a multilayer system of parallel hollow nanochannels located along the surface and having a quadrangular cross-section. The thickness of the channel walls is approximately equal to 1 nm; the typical width of channel facets is about 25 nm. Applications: Nanomaterials are used in a variety of manufacturing processes, products and healthcare applications, including paints, filters, insulation and lubricant additives. In healthcare, nanozymes are nanomaterials with enzyme-like characteristics. They are an emerging type of artificial enzyme, used widely in applications such as biosensing, bioimaging, tumor diagnosis, antibiofouling and more. High-quality filters may be produced using nanostructures; these filters are capable of removing particulate matter as small as a virus, as seen in a water filter created by Seldon Technologies.
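The surface-area-to-volume claim above is simple geometry: for a sphere of diameter d, A/V = 6/d, so the ratio grows as particles shrink. A minimal Python sketch (the diameters are chosen arbitrarily for illustration):

```python
def surface_to_volume_ratio(diameter_nm):
    """A/V for a sphere of diameter d: A = pi*d**2, V = (pi/6)*d**3,
    so A/V = 6/d (here in 1/nm)."""
    return 6.0 / diameter_nm

# Ratio from a 10 um grain down to a 1 nm particle (illustrative sizes):
for d in (10_000, 1_000, 100, 10, 1):
    print(f"d = {d:>6} nm  ->  A/V = {surface_to_volume_ratio(d):.4f} nm^-1")
```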
Nanomaterials membrane bioreactors (NMs-MBR), the next generation of conventional MBR, have recently been proposed for the advanced treatment of wastewater. In the air purification field, nanotechnology was used to combat the spread of MERS in Saudi Arabian hospitals in 2012. Nanomaterials are being used in modern, human-safe insulation technologies; in the past they were found in asbestos-based insulation. As a lubricant additive, nanomaterials can reduce friction in moving parts. Worn and corroded parts can also be repaired with self-assembling anisotropic nanoparticles called TriboTEX. Nanomaterials have also been applied in a range of industries and consumer products. Mineral nanoparticles such as titanium oxide have been used to improve UV protection in sunscreen. In the sports industry, lighter bats have been produced with carbon nanotubes to improve performance. Another application is in the military, where mobile pigment nanoparticles have been used to create more effective camouflage. Nanomaterials can also be used in three-way catalyst (TWC) applications. TWC converters have the advantage of controlling the emission of nitrogen oxides (NOx), which are precursors to acid rain and smog. In the core-shell structure, nanomaterials form the shell that serves as the catalyst support, protecting noble metals such as palladium and rhodium. The primary function of the supports is to carry the catalysts' active components, keeping them highly dispersed, reducing the use of noble metals, enhancing catalyst activity, and improving mechanical strength. Synthesis: The goal of any synthetic method for nanomaterials is to yield a material that exhibits properties that are a result of its characteristic length scale being in the nanometer range (1–100 nm). Accordingly, the synthetic method should exhibit control of size in this range so that one property or another can be attained. Often the methods are divided into two main types, "bottom up" and "top down". Synthesis: Bottom-up methods Bottom-up methods involve the assembly of atoms or molecules into nanostructured arrays. In these methods the raw material sources can be in the form of gases, liquids, or solids. The latter require some sort of disassembly prior to their incorporation onto a nanostructure. Bottom-up methods generally fall into two categories: chaotic and controlled. Synthesis: Chaotic processes involve elevating the constituent atoms or molecules to a chaotic state and then suddenly changing the conditions so as to make that state unstable. Through the clever manipulation of any number of parameters, products form largely as a result of the ensuing kinetics. The collapse from the chaotic state can be difficult or impossible to control, and so ensemble statistics often govern the resulting size distribution and average size. Accordingly, nanoparticle formation is controlled through manipulation of the end state of the products. Examples of chaotic processes are laser ablation, exploding wire, arc, flame pyrolysis, combustion, and precipitation synthesis techniques. Synthesis: Controlled processes involve the controlled delivery of the constituent atoms or molecules to the site(s) of nanoparticle formation such that the nanoparticle can grow to a prescribed size in a controlled manner. Generally the state of the constituent atoms or molecules is never far from that needed for nanoparticle formation. Accordingly, nanoparticle formation is controlled through control of the state of the reactants. 
Examples of controlled processes are self-limiting growth solution, self-limited chemical vapor deposition, shaped-pulse femtosecond laser techniques, plant and microbial approaches, and molecular beam epitaxy. Synthesis: Top-down methods Top-down methods apply some 'force' (e.g., mechanical force, a laser) to break bulk materials into nanoparticles. A popular method that mechanically breaks bulk materials apart into nanomaterials is ball milling. Nanoparticles can also be made by laser ablation, which applies short-pulse lasers (e.g., femtosecond lasers) to ablate a solid target. Characterization: Novel effects can occur in materials when structures are formed with sizes comparable to any one of many possible length scales, such as the de Broglie wavelength of electrons, or the optical wavelengths of high energy photons. In these cases quantum mechanical effects can dominate material properties. One example is quantum confinement, where the electronic properties of solids are altered with great reductions in particle size. The optical properties of nanoparticles, e.g. fluorescence, also become a function of the particle diameter. This effect does not come into play in going from macroscopic to micrometer dimensions, but becomes pronounced when the nanometer scale is reached. Characterization: In addition to optical and electronic properties, the novel mechanical properties of many nanomaterials are the subject of nanomechanics research. When added to a bulk material, nanoparticles can strongly influence the mechanical properties of the material, such as the stiffness or elasticity. For example, traditional polymers can be reinforced by nanoparticles (such as carbon nanotubes), resulting in novel materials which can be used as lightweight replacements for metals. Such composite materials may enable a weight reduction accompanied by an increase in stability and improved functionality. Finally, nanostructured materials with small particle size, such as zeolites and asbestos, are used as catalysts in a wide range of critical industrial chemical reactions. The further development of such catalysts can form the basis of more efficient, environmentally friendly chemical processes. Characterization: The first observations and size measurements of nano-particles were made during the first decade of the 20th century. Zsigmondy made detailed studies of gold sols and other nanomaterials with sizes down to 10 nm and less. He published a book in 1914. He used an ultramicroscope, which employs a dark-field method, to see particles with sizes much smaller than the wavelength of light. Characterization: There are traditional techniques developed during the 20th century in interface and colloid science for characterizing nanomaterials. These are widely used for first-generation passive nanomaterials specified in the next section. Characterization: These methods include several different techniques for characterizing particle size distribution. This characterization is imperative because many materials that are expected to be nano-sized are actually aggregated in solutions. Some of these methods are based on light scattering. Others apply ultrasound, such as ultrasound attenuation spectroscopy for testing concentrated nano-dispersions and microemulsions. There is also a group of traditional techniques for characterizing surface charge or zeta potential of nano-particles in solutions. This information is required for proper system stabilization, preventing its aggregation or flocculation. 
These methods include microelectrophoresis, electrophoretic light scattering, and electroacoustics. The last, for instance the colloid vibration current method, is suitable for characterizing concentrated systems. Mechanical Properties: Ongoing research has shown that mechanical properties can vary significantly in nanomaterials compared to bulk materials. Nanomaterials have substantial mechanical properties due to the volume, surface, and quantum effects of nanoparticles. This is observed when nanoparticles are added to a common bulk material: the nanomaterial refines the grain and forms intergranular and intragranular structures that improve the grain boundaries and therefore the mechanical properties of the material. Grain boundary refinement provides strengthening by increasing the stress required to cause intergranular or transgranular fractures. A common example where this can be observed is the addition of nano-silica to cement, which improves the tensile strength, compressive strength, and bending strength by the mechanisms just mentioned. The understanding of these properties will enhance the use of nanoparticles in novel applications in various fields such as surface engineering, tribology, nanomanufacturing, and nanofabrication. Mechanical Properties: Techniques used: Steinitz in 1943 used the micro-indentation technique to test the hardness of microparticles, and now nanoindentation has been employed to measure the elastic properties of particles at about the 5-micron level. These protocols are frequently used to calculate the mechanical characteristics of nanoparticles via atomic force microscopy (AFM) techniques. To measure the elastic modulus, indentation data are obtained by converting AFM force-displacement curves into force-indentation curves. Hooke's law is used to determine the cantilever deformation and the depth of the tip, and the applied force can be written as P = k(δc − δc0), where δc is the cantilever deflection and δc0 is the deflection offset. AFM allows us to obtain a high-resolution image of multiple types of surfaces, while the tip of the cantilever can be used to obtain information about mechanical properties. Computer simulations are also being progressively used to test theories and complement experimental studies. The most used computer method is molecular dynamics simulation, which uses Newton's equations of motion for the atoms or molecules in the system. Other techniques, such as the direct probe method, are used to determine the adhesive properties of nanomaterials. Both the techniques and the simulations are coupled with transmission electron microscopy (TEM) and AFM techniques to provide results. Mechanical Properties: Mechanical properties of common nanomaterials classes: Crystalline metal nanomaterials: Dislocations are one of the major contributors to the elastic properties of nanomaterials, as in bulk crystalline materials. Despite the traditional view that nanomaterials contain no dislocations, experimental work (e.g., by Ramos) has shown that the hardness of gold nanoparticles is much higher than that of their bulk counterparts, as stacking faults and dislocations form that activate multiple strengthening mechanisms in the material. Further research using nanoindentation techniques has shown that material strength (compressive stress) increases under compression with decreasing particle size, because of nucleating dislocations. These dislocations have been observed using TEM techniques coupled with nanoindentation. 
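As an illustrative sketch (not from the source) of the force-indentation conversion described under "Techniques used" above, the following Python snippet applies Hooke's law to synthetic AFM data; the spring constant, the offset, and the deflection response are assumed placeholder values, not measurements.

```python
import numpy as np

# Hypothetical cantilever parameters (placeholders, not measured values)
k = 0.5        # cantilever spring constant, N/m
dc0 = 2.0e-9   # deflection offset delta_c0, m

# Synthetic force-displacement data: z is the piezo displacement,
# dc is the measured cantilever deflection (toy linear response)
z = np.linspace(0.0, 50e-9, 200)   # m
dc = dc0 + 0.3 * z                 # m

# Hooke's law: force on the sample, P = k * (delta_c - delta_c0)
force = k * (dc - dc0)             # N

# Tip indentation depth = piezo displacement - cantilever deflection
indentation = z - (dc - dc0)       # m

# The (indentation, force) pairs form the force-indentation curve from
# which an elastic modulus would be extracted (e.g., by a Hertz-model fit).
print(f"max force: {force.max():.2e} N, max indentation: {indentation.max():.2e} m")
```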
The strength and hardness of silicon nanoparticles are about four times greater than those of the bulk material. The resistance to applied pressure can be attributed to the line defects inside the particles, as dislocations provide strengthening of the mechanical properties of the nanomaterial. Furthermore, the addition of nanoparticles strengthens a matrix because the pinning of particles inhibits grain growth. This refines the grain and hence improves the mechanical properties. However, not all additions of nanomaterials lead to an increase in properties (for example, nano-Cu); this is attributed to the inherent properties of the added material being weaker than those of the matrix. Mechanical Properties: Nonmetallic nanoparticles and nanomaterials: Size-dependent behavior of mechanical properties is still not clear in the case of polymer nanomaterials; however, in one study Lahouij found that the compressive moduli of polystyrene nanoparticles were less than those of the bulk counterpart. This can be associated with the functional groups being hydrated. Furthermore, nonmetallic nanomaterials, such as added CNTs, can form agglomerates inside the matrix to which they are added and hence decrease the mechanical properties by leading to fracture under even low mechanical loads. The agglomerates will act as slip planes as well as planes in which cracks can easily propagate (9). However, most organic nanomaterials are flexible, and mechanical properties such as hardness are not dominant for them. Nanowires and nanotubes: The elastic moduli of some nanowires, namely lead and silver, decrease with increasing diameter. This has been associated with surface stress, the oxidation layer, and surface roughness. However, the elastic behavior of ZnO nanowires is not affected by surface effects, although their fracture properties are; so the behavior is generally dependent on the material and its bonding as well. The reason why the mechanical properties of nanomaterials are still a hot topic for research is that measuring the mechanical properties of individual nanoparticles is a complicated process involving multiple control factors. Nonetheless, atomic force microscopy has been widely used to measure the mechanical properties of nanomaterials. Mechanical Properties: Adhesion and friction of nanoparticles When considering the application of a material, adhesion and friction play a critical role in determining the outcome. Therefore, it is critical to see how these properties are also affected by the size of a material. Again, AFM is the technique most used to measure these properties and to determine the adhesive strength of nanoparticles to any solid surface, along with the colloidal probe technique and related methods. Furthermore, the forces that give nanomaterials their adhesive properties include electrostatic forces, van der Waals (VdW) forces, capillary forces, solvation forces, structural forces, etc. It has been found that the addition of nanomaterials to bulk materials substantially increases their adhesive capabilities by increasing their strength through various bonding mechanisms. As a nanomaterial's dimensions approach zero, the fraction of the particle's atoms that lie at its surface increases. Mechanical Properties: Along with surface effects, the movement of nanoparticles also plays a role in dictating their mechanical properties, such as shearing capabilities. The movement of particles can be observed under TEM. 
For example, the movement behavior of MoS2 nanoparticles in dynamic contact was directly observed in situ, which led to the conclusion that fullerenes can shear via rolling or sliding. However, observing these properties is again a very complicated process due to multiple contributing factors. Mechanical Properties: Applications specific to Mechanical Properties: lubrication, nano-manufacturing, and coatings. Uniformity: The chemical processing and synthesis of high-performance technological components for the private, industrial and military sectors requires the use of high-purity ceramics, polymers, glass-ceramics, and composite materials. In condensed bodies formed from fine powders, the irregular sizes and shapes of nanoparticles in a typical powder often lead to non-uniform packing morphologies that result in packing density variations in the powder compact. Uniformity: Uncontrolled agglomeration of powders due to attractive van der Waals forces can also give rise to microstructural inhomogeneities. Differential stresses that develop as a result of non-uniform drying shrinkage are directly related to the rate at which the solvent can be removed, and thus highly dependent upon the distribution of porosity. Such stresses have been associated with a plastic-to-brittle transition in consolidated bodies, and can lead to crack propagation in the unfired body if not relieved. In addition, any fluctuations in packing density in the compact as it is prepared for the kiln are often amplified during the sintering process, yielding inhomogeneous densification. Some pores and other structural defects associated with density variations have been shown to play a detrimental role in the sintering process by growing and thus limiting end-point densities. Differential stresses arising from inhomogeneous densification have also been shown to result in the propagation of internal cracks, thus becoming the strength-controlling flaws. It would therefore appear desirable to process a material in such a way that it is physically uniform with regard to the distribution of components and porosity, rather than using particle size distributions which will maximize the green density. The containment of a uniformly dispersed assembly of strongly interacting particles in suspension requires total control over particle-particle interactions. A number of dispersants, such as ammonium citrate (aqueous) and imidazoline or oleyl alcohol (nonaqueous), are promising solutions as possible additives for enhanced dispersion and deagglomeration. Monodisperse nanoparticles and colloids provide this potential. Monodisperse powders of colloidal silica, for example, may therefore be stabilized sufficiently to ensure a high degree of order in the colloidal crystal or polycrystalline colloidal solid which results from aggregation. The degree of order appears to be limited by the time and space allowed for longer-range correlations to be established. Such defective polycrystalline colloidal structures would appear to be the basic elements of sub-micrometer colloidal materials science and, therefore, provide the first step in developing a more rigorous understanding of the mechanisms involved in microstructural evolution in high performance materials and components. 
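Many of the effects discussed above (the sintering driving force, agglomeration, and the size dependence of mechanical properties) trace back to the growing share of surface atoms in small particles. The following back-of-the-envelope Python estimate is illustrative only: the atomic diameter is an assumed nominal value, and the outer-shell model is a rough approximation.

```python
# Estimate the fraction of atoms lying within one atomic layer of the
# surface of a spherical particle.
ATOM_D_NM = 0.25  # assumed nominal atomic diameter, nm

def surface_atom_fraction(particle_d_nm: float) -> float:
    """Volume fraction of a sphere occupied by its outermost atomic shell."""
    r = particle_d_nm / 2.0
    core = max(r - ATOM_D_NM, 0.0)   # radius of the sub-surface core
    return 1.0 - (core / r) ** 3

for d in (100, 50, 10, 5, 2):        # particle diameters, nm
    print(f"{d:>4} nm -> ~{surface_atom_fraction(d):.0%} of atoms at the surface")
```

Under these assumptions the surface fraction rises from roughly 1% at 100 nm to over half the atoms at 2 nm, which is why surface effects come to dominate at the nanoscale.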
Nanomaterials in articles, patents, and products: The quantitative analysis of nanomaterials showed that nanoparticles, nanotubes, nanocrystalline materials, nanocomposites, and graphene had been mentioned in 400,000, 181,000, 144,000, 140,000, and 119,000 ISI-indexed articles, respectively, by September 2018. As far as patents are concerned, nanoparticles, nanotubes, nanocomposites, graphene, and nanowires have played a role in 45,600, 32,100, 12,700, 12,500, and 11,800 patents, respectively. Monitoring approximately 7,000 commercial nano-based products available on global markets revealed that the properties of around 2,330 products had been enabled or enhanced by nanoparticles. Liposomes, nanofibers, nanocolloids, and aerogels were also among the most common nanomaterials in consumer products. The European Union Observatory for Nanomaterials (EUON) has produced a database (NanoData) that provides information on specific patents, products, and research publications on nanomaterials. Health and safety: World Health Organization guidelines The World Health Organization (WHO) published a guideline on protecting workers from the potential risks of manufactured nanomaterials at the end of 2017. WHO used a precautionary approach as one of its guiding principles. This means that exposure has to be reduced, despite uncertainty about the adverse health effects, when there are reasonable indications to do so. This is highlighted by recent scientific studies that demonstrate a capability of nanoparticles to cross cell barriers and interact with cellular structures. In addition, the hierarchy of controls was an important guiding principle. This means that when there is a choice between control measures, those measures that are closer to the root of the problem should always be preferred over measures that put a greater burden on workers, such as the use of personal protective equipment (PPE). WHO commissioned systematic reviews for all important issues to assess the current state of the science and to inform the recommendations according to the process set out in the WHO Handbook for guideline development. The recommendations were rated as "strong" or "conditional" depending on the quality of the scientific evidence, values and preferences, and costs related to the recommendation. Health and safety: The WHO guidelines contain the following recommendations for safe handling of manufactured nanomaterials (MNMs). A. Assess health hazards of MNMs WHO recommends assigning hazard classes to all MNMs according to the Globally Harmonized System (GHS) of Classification and Labelling of Chemicals for use in safety data sheets. For a limited number of MNMs this information is made available in the guidelines (strong recommendation, moderate-quality evidence). Health and safety: WHO recommends updating safety data sheets with MNM-specific hazard information or indicating which toxicological end-points did not have adequate testing available (strong recommendation, moderate-quality evidence). Health and safety: For the respirable fibres and granular biopersistent particles groups, the GDG suggests using the available classification of MNMs for provisional classification of nanomaterials of the same group (conditional recommendation, low-quality evidence). B. Assess exposure to MNMs WHO suggests assessing workers' exposure in workplaces with methods similar to those used for the proposed specific occupational exposure limit (OEL) value of the MNM (conditional recommendation, low-quality evidence). 
Health and safety: Because there are no specific regulatory OEL values for MNMs in workplaces, WHO suggests assessing whether workplace exposure exceeds a proposed OEL value for the MNM. A list of proposed OEL values is provided in an annex of the guidelines. The chosen OEL should be at least as protective as a legally mandated OEL for the bulk form of the material (conditional recommendation, low-quality evidence). Health and safety: If specific OELs for MNMs are not available in workplaces, WHO suggests a step-wise approach for inhalation exposure: first, an assessment of the potential for exposure; second, a basic exposure assessment; and third, a comprehensive exposure assessment such as those proposed by the Organisation for Economic Cooperation and Development (OECD) or the Comité Européen de Normalisation (the European Committee for Standardization, CEN) (conditional recommendation, moderate-quality evidence). Health and safety: For dermal exposure assessment, WHO found that there was insufficient evidence to recommend one method of dermal exposure assessment over another. C. Control exposure to MNMs Based on a precautionary approach, WHO recommends focusing control of exposure on preventing inhalation exposure with the aim of reducing it as much as possible (strong recommendation, moderate-quality evidence). Health and safety: WHO recommends reduction of exposure to a range of MNMs that have been consistently measured in workplaces, especially during cleaning and maintenance, collecting material from reaction vessels, and feeding MNMs into the production process. In the absence of toxicological information, WHO recommends implementing the highest level of controls to prevent workers from any exposure. When more information is available, WHO recommends taking a more tailored approach (strong recommendation, moderate-quality evidence). Health and safety: WHO recommends taking control measures based on the principle of the hierarchy of controls, meaning that the first control measure should be to eliminate the source of exposure before implementing control measures that are more dependent on worker involvement, with PPE being used only as a last resort. According to this principle, engineering controls should be used when there is a high level of inhalation exposure or when there is no, or very little, toxicological information available. In the absence of appropriate engineering controls, PPE should be used, especially respiratory protection, as part of a respiratory protection programme that includes fit-testing (strong recommendation, moderate-quality evidence). Health and safety: WHO suggests preventing dermal exposure by occupational hygiene measures such as surface cleaning and the use of appropriate gloves (conditional recommendation, low-quality evidence). Health and safety: When assessment and measurement by a workplace safety expert are not available, WHO suggests using control banding for nanomaterials to select exposure control measures in the workplace. Owing to a lack of studies, WHO cannot recommend one method of control banding over another (conditional recommendation, very low-quality evidence). For health surveillance, WHO could not make a recommendation for targeted MNM-specific health surveillance programmes over existing health surveillance programmes that are already in use, owing to the lack of evidence. 
WHO considers training of workers and worker involvement in health and safety issues to be best practice but could not recommend one form of training of workers over another, or one form of worker involvement over another, owing to the lack of studies available. It is expected that there will be considerable progress in validated measurement methods and risk assessment, and WHO expects to update these guidelines in five years' time, in 2022. Health and safety: Other guidance Because nanotechnology is a recent development, the health and safety effects of exposures to nanomaterials, and what levels of exposure may be acceptable, are subjects of ongoing research. Of the possible hazards, inhalation exposure appears to present the most concern. Animal studies indicate that carbon nanotubes and carbon nanofibers can cause pulmonary effects including inflammation, granulomas, and pulmonary fibrosis, which were of similar or greater potency when compared with other known fibrogenic materials such as silica, asbestos, and ultrafine carbon black. Acute inhalation exposure of healthy animals to biodegradable inorganic nanomaterials has not demonstrated significant toxicity effects. Although the extent to which animal data may predict clinically significant lung effects in workers is not known, the toxicity seen in the short-term animal studies indicates a need for protective action for workers exposed to these nanomaterials, although no reports of actual adverse health effects in workers using or producing these nanomaterials were known as of 2013. Additional concerns include skin contact and ingestion exposure, and dust explosion hazards. Elimination and substitution are the most desirable approaches to hazard control. While the nanomaterials themselves often cannot be eliminated or substituted with conventional materials, it may be possible to choose properties of the nanoparticle such as size, shape, functionalization, surface charge, solubility, agglomeration, and aggregation state to improve their toxicological properties while retaining the desired functionality. Handling procedures can also be improved; for example, using a nanomaterial slurry or suspension in a liquid solvent instead of a dry powder will reduce dust exposure. Engineering controls are physical changes to the workplace that isolate workers from hazards, mainly ventilation systems such as fume hoods, gloveboxes, biosafety cabinets, and vented balance enclosures. Administrative controls are changes to workers' behavior to mitigate a hazard, including training on best practices for safe handling, storage, and disposal of nanomaterials, proper awareness of hazards through labeling and warning signage, and encouraging a general safety culture. Personal protective equipment must be worn on the worker's body and is the least desirable option for controlling hazards. Personal protective equipment normally used for typical chemicals is also appropriate for nanomaterials, including long pants, long-sleeve shirts, and closed-toed shoes, and the use of safety gloves, goggles, and impervious laboratory coats. In some circumstances respirators may be used. Exposure assessment is a set of methods used to monitor contaminant release and exposures to workers. These methods include personal sampling, where samplers are located in the personal breathing zone of the worker, often attached to a shirt collar to be as close to the nose and mouth as possible; and area/background sampling, where they are placed at static locations. 
The assessment should use both particle counters, which monitor the real-time quantity of nanomaterials and other background particles, and filter-based samples, which can be used to identify the nanomaterial, usually using electron microscopy and elemental analysis. As of 2016, quantitative occupational exposure limits had not been determined for most nanomaterials. The U.S. National Institute for Occupational Safety and Health has determined non-regulatory recommended exposure limits for carbon nanotubes, carbon nanofibers, and ultrafine titanium dioxide. Agencies and organizations from other countries, including the British Standards Institute and the Institute for Occupational Safety and Health in Germany, have established OELs for some nanomaterials, and some companies have supplied OELs for their products. Nanoscale diagnostics Nanotechnology has been making headlines in the medical field, notably in biomedical imaging. The unique optical, magnetic and chemical properties of materials on the nanoscale have allowed the development of imaging probes with multi-functionality, such as better contrast enhancement, better spatial information, controlled biodistribution, and multi-modal imaging across various scanning devices. These developments have brought advantages such as the ability to detect the location of tumors and inflammation, accurate assessment of disease progression, and personalized medicine. Health and safety: Silica nanoparticles - Silica nanoparticles can be classified into solid (non-porous) and mesoporous types. They have a large surface area, a hydrophilic surface, and chemical and physical stability. Silica nanoparticles are made using the Stöber process, which is the hydrolysis of silyl ethers such as tetraethyl silicate into silanols (Si-OH) using ammonia in a mixture of water and alcohol, followed by the condensation of the silanols into 50–2000 nm silica particles. The size of the particles can be controlled by varying the concentrations of silyl ether and alcohol, or by the microemulsion method. Mesoporous silica nanoparticles are synthesized by the sol-gel process. They have pores that range in diameter from 2 nm to 50 nm. They are synthesized in a water-based solution in the presence of a base catalyst and a pore-forming agent known as a surfactant. Surfactants are molecules that have a hydrophobic tail (an alkyl chain) and a hydrophilic head (a charged group, such as a quaternary amine, for example). As these surfactants are added to a water-based solution, they coordinate to form micelles with increasing concentration in order to stabilize the hydrophobic tails. Varying the pH of the solution and the composition of the solvents, and adding certain swelling agents, can control the pore size. Their hydrophilic surface is what makes silica nanoparticles so important and allows them to carry out functions such as drug and gene delivery, bioimaging, and therapy. In order for these applications to be successful, assorted surface functional groups are necessary and can be added either by the co-condensation process during preparation or by subsequent surface modification. The high surface area of silica nanoparticles allows them to carry much larger amounts of the desired drug than conventional carriers like polymers and liposomes. It allows for site-specific targeting, especially in the treatment of cancer. 
Once the particles have reached their destination, they can act as a reporter, release a compound, or be remotely heated to damage biological structures in close proximity. Targeting is typically accomplished by modifying the surface of the nanoparticle with a chemical or biological compound. The particles accumulate at tumor sites through enhanced permeability and retention (EPR), whereby the tumor vessels accelerate the delivery of the nanoparticles directly into the tumor. The porous shell of the silica allows control over the rate at which the drug diffuses out of the nanoparticle. The shell can be modified to have an affinity for the drug, or even to be triggered by pH, heat, light, salts, or other signaling molecules. Silica nanoparticles are also used in bioimaging because they can accommodate fluorescent/MRI/PET/SPECT contrast agents and drug/DNA molecules on their adaptable surface and in their pores. This is made possible by using the silica nanoparticle as a vector for the expression of fluorescent proteins. Several different types of fluorescent probes, such as cyanine dyes, methyl viologen, or semiconductor quantum dots, can be conjugated to silica nanoparticles and delivered into specific cells or injected in vivo. The carrier molecule RGD peptide has been very useful for targeted in vivo imaging. Health and safety: Topically applied surface-enhanced resonance Raman ratiometric spectroscopy (TAS3RS) - TAS3RS is another technique that is starting to make advances in the medical field. It is an imaging technique that uses folate receptors (FR) to detect tumor lesions as small as 370 micrometers. Folate receptors are membrane-bound surface proteins that bind folates and folate conjugates with high affinity. FR is frequently overexpressed in a number of human malignancies, including cancer of the ovary, lung, kidney, breast, bladder, brain, and endometrium. Raman imaging is a type of spectroscopy that is used in chemistry to provide a structural fingerprint by which molecules can be identified. It relies upon inelastic scattering of photons, which results in ultra-high sensitivity. In one study, two different surface-enhanced resonance Raman scattering (SERRS) probes were synthesized: a "targeted nanoprobe functionalized with an anti-folate-receptor antibody (αFR-Ab) via a PEG-maleimide-succinimide and using the infrared dye IR780 as the Raman reporter, henceforth referred to as αFR-NP, and a nontargeted probe (nt-NP) coated with PEG5000-maleimide and featuring the IR140 infrared dye as the Raman reporter." These two mixtures were injected into tumor-bearing mice and healthy control mice. The mice were imaged using the bioluminescence imaging (BLI) signal, which produces light energy within an organism's body. They were also scanned with the Raman microscope in order to see the correlation between the TAS3RS and the BLI map. TAS3RS did not show anything in the healthy mice, but it located the tumor lesions in the tumor-bearing mice and also produced a TAS3RS map that could be used as guidance during surgery. TAS3RS shows promise for combating ovarian and peritoneal cancer, as it allows early detection with high accuracy. This technique can be administered locally, which is an advantage because it does not have to enter the bloodstream, thereby bypassing the toxicity concerns of circulating nanoprobes. 
This technique is also more photostable than fluorochromes; because SERRS nanoparticles cannot form from biomolecules, there are none of the false positives in TAS3RS that occur in fluorescence imaging.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tyrosine—tRNA ligase** Tyrosine—tRNA ligase: Tyrosine—tRNA ligase (EC 6.1.1.1), also known as tyrosyl-tRNA synthetase, is an enzyme that is encoded by the gene YARS. Tyrosine—tRNA ligase catalyzes the chemical reaction ATP + L-tyrosine + tRNA(Tyr) ⇌ AMP + diphosphate + L-tyrosyl-tRNA(Tyr). The three substrates of this enzyme are ATP, L-tyrosine, and a tyrosine-specific transfer RNA [tRNA(Tyr) or tRNATyr], whereas its three products are AMP, diphosphate, and L-tyrosyl-tRNA(Tyr). Tyrosine—tRNA ligase: This enzyme belongs to the family of ligases, specifically those forming carbon-oxygen bonds in tRNA and related compounds. More specifically, it belongs to the family of the aminoacyl-tRNA synthetases. These latter enzymes link amino acids to their cognate transfer RNAs (tRNA) in aminoacylation reactions that establish the connection between a specific amino acid and a nucleotide triplet anticodon embedded in the tRNA. Therefore, they are the enzymes that translate the genetic code in vivo. The 20 enzymes, corresponding to the 20 natural amino acids, are divided into two classes of 10 enzymes each. This division is defined by the unique architectures associated with the catalytic domains and by signature sequences specific to each class. Structural studies: As of late 2007, 34 structures had been solved for this class of enzymes, with PDB accession codes. The tyrosyl-tRNA synthetases (YARS) are either homodimers or monomers with a pseudo-dimeric structure. Each subunit or pseudo-subunit comprises an N-terminal domain which has: (i) about 230 amino acid residues; (ii) the mononucleotide binding fold (also known as the Rossmann fold) of the class I aminoacyl-tRNA synthetases; (iii) an idiosyncratic insertion between the two halves of the fold (known as Connective Peptide 1 or CP1); (iv) the two signature sequences HIGH and KMSKS of the class I aminoacyl-tRNA synthetases. The N-terminal domain contains the catalytic site of the enzyme. The C-terminal moiety of the YARSs varies in sequence, length and organization and is involved in the recognition of the tRNA anticodon. Structural studies: Eubacteria Tyrosyl-tRNA synthetase from Bacillus stearothermophilus was the first synthetase whose crystal structure was solved at high resolution (2.3 Å), alone or in complex with tyrosine, tyrosyl-adenylate or tyrosinyl-adenylate (P. Brick 1989). The structures of the Staphylococcus aureus YARS and of a truncated version of Escherichia coli YARS have also been solved. A structural model of the complex between B. stearothermophilus YARS and tRNA(Tyr) was constructed using extensive mutagenesis data on both YARS and tRNA(Tyr), and was found to be consistent with the crystal structure of the complex between YARS and tRNA(Tyr) from Thermus thermophilus, which was subsequently solved at 2.9 Å resolution. The C-terminal moiety of the eubacterial YARSs comprises two domains: (i) a proximal α-helical domain (known as the Anticodon Binding Domain or α-ACB) of about 100 amino acids; (ii) a distal domain (known as S4-like) that shares high homology with the C-terminal domain of ribosomal protein S4. The S4-like domain was disordered in the crystal structure of B. stearothermophilus YARS. However, biochemical and NMR experiments have shown that the S4-like domain is folded in solution, and that its structure is similar to that in the crystal structure of the T. thermophilus YARS. 
Mutagenesis experiments have shown that the flexibility of the peptide that links the α-ACB and S4-like domains is responsible for the disorder of the latter in the structure, and that elements of sequence in this linker peptide are essential for the binding of tRNA(Tyr) by YARS and its aminoacylation with tyrosine. TyrRSs from eubacterial species are divided into two subgroups according to variation in their C-terminal moiety. Structural studies: Archaea and lower eukaryotes The crystal structures of several archaeal tyrosyl-tRNA synthetases are available. The crystal structure of the complex between YARS from Methanococcus jannaschii, tRNA(Tyr) and L-tyrosine has been solved at 1.95 Å resolution. The crystal structures of the YARSs from Archaeoglobus fulgidus, Pyrococcus horikoshii and Aeropyrum pernix have also been solved at high resolution (M. Kuratani 2006). The C-terminal moieties of the archaeal YARSs contain only one domain. This domain is different from the α-ACB domain of eubacteria; it shares strong homology with the C-terminal domain of the tryptophanyl-tRNA synthetases and was therefore named the C-W/Y domain. It is present in all eukarya. The structure of the complex between YARS from Saccharomyces cerevisiae, tRNA(Tyr) and an analog of tyrosyl-adenylate has been solved at 2.4 Å resolution. The YARS from this lower eukaryote has an organization which is similar to that of the archaeal YARSs. Structural studies: Homo sapiens cytoplasm The human YARS has a C-terminal moiety that includes a proximal C-W/Y domain and a distal domain which is not found in the YARSs of lower eukaryotes, archaea or eubacteria, and is a homolog of endothelial monocyte-activating polypeptide II (EMAP II, a mammalian cytokine). Although full-length, native YARS has no cell-signaling activity, the enzyme is secreted during apoptosis in cell culture and can be cleaved by an extracellular enzyme such as leukocyte elastase. The two released fragments, an N-terminal mini-YARS and a C-terminal EMAP II-like domain, are active cytokines. The structure of mini-YARS has been solved at 1.18 Å resolution. It has an N-terminal Rossmann-fold domain and a C-terminal C-W/Y domain, similar to those of other YARSs. Structural studies: Homo sapiens mitochondria The mitochondrial tyrosyl-tRNA synthetases (mt-YARSs), and in particular the H. sapiens mt-YARS, likely originate from a YARS of eubacterial origin. Their C-terminal moiety includes both α-ACB and S4-like domains, like the eubacterial YARSs, and shares a low sequence identity with their cytosolic relatives. The crystal structure of a complex between a recombinant H. sapiens mt-YARS, devoid of the S4-like domain, and an analog of tyrosyl-adenylate has been solved at 2.2 Å resolution. Structural studies: Neurospora crassa mitochondria The mitochondrial (mt) tyrosyl-tRNA synthetase of Neurospora crassa, which is encoded by the nuclear gene cyt-18, is a bifunctional enzyme that catalyzes the aminoacylation of mt-tRNA(Tyr) and promotes the splicing of the mitochondrial group I introns. The crystal structure of a C-terminally truncated N. crassa mt-YARS that functions in splicing group I introns has been determined at 1.95 Å resolution. Its Rossmann-fold domain and intermediate α-ACB domain superimpose on those of eubacterial YARSs, except for an additional N-terminal extension and three small insertions. The structure of the complex between a group I intron ribozyme and the splicing-active, carboxy-terminally truncated mt-YARS has been solved at 4.5 Å resolution. 
The structure shows that the group I intron binds across the two subunits of the homodimeric protein with a newly evolved RNA-binding surface distinct from that which binds tRNA(Tyr). This RNA-binding surface provides an extended scaffold for the phosphodiester backbone of the conserved catalytic core of the intron RNA, allowing the protein to promote the splicing of a wide variety of group I introns. The group I intron-binding surface includes three small insertions and additional structural adaptations relative to non-splicing eubacterial YARSs, indicating a multistep adaptation for splicing function. Structural studies: Plasmodium falciparum The structure of the complex between Plasmodium falciparum tyrosyl-tRNA synthetase (Pf-YARS) and tyrosyl-adenylate, at 2.2 Å resolution, shows that the overall fold of Pf-YARS is typical of class I synthetases. It comprises an N-terminal catalytic domain (residues 18–260) and an anticodon-binding domain (residues 261–370). The polypeptide loop that includes the KMSKS motif is highly ordered and close to the bound substrate at the active site. Pf-YARS contains the ELR motif, which is present in H. sapiens mini-YARS and chemokines. Pf-YARS is expressed in all asexual parasite stages (rings, trophozoites and schizonts) and is exported to the host erythrocyte cytosol, from where it is released into blood plasma on iRBC rupture. Using its ELR peptide motif, Pf-YARS specifically binds to and internalizes into host macrophages, leading to enhanced secretion of the pro-inflammatory cytokines TNF-α and IL-6. The interaction between Pf-YARS and macrophages augments expression of the adherence-linked host endothelial receptors ICAM-1 and VCAM-1. Structural studies: Mimivirus Acanthamoeba polyphaga mimivirus is the largest known DNA virus. Its genome encodes four aminoacyl-tRNA synthetases: RARS, CARS, MARS, and YARS. The crystal structure of the mimivirus tyrosyl-tRNA synthetase in complex with tyrosinol has been solved at 2.2 Å resolution. The mimiviral YARS exhibits the typical fold and active-site organization of archaeal-type YARSs, with an N-terminal Rossmann-fold catalytic domain, an anticodon-binding domain, and no extra C-terminal domain. It presents a unique dimeric conformation and significant differences in its anticodon-binding site when compared with the YARSs from other organisms. Structural studies: Leishmania major The single YARS gene that is present in the genomes of trypanosomatids codes for a protein that has twice the length of the tyrosyl-tRNA synthetase from other organisms. Each half of the double-length YARS contains a catalytic domain and an anticodon-binding domain; however, the two halves retain only 17% sequence identity to each other. Crystal structures of Leishmania major YARS at 3.0 Å resolution show that the two halves of a single molecule form a pseudo-dimer that resembles the canonical YARS dimer. The C-terminal copy of the catalytic domain has lost the catalytically important HIGH and KMSKS motifs, characteristic of class I aminoacyl-tRNA synthetases. Thus, the pseudo-dimer contains only one functional active site (contributed by the N-terminal half) and only one functional anticodon recognition site (contributed by the C-terminal half), making the L. major YARS pseudo-dimer inherently asymmetric. 
Roles of the subunits and domains: The N-terminal domain of tyrosyl-tRNA synthetase provides the chemical groups necessary for converting the substrates tyrosine and ATP into a reactive intermediate, tyrosyl-adenylate (the first step of the aminoacylation reaction), and for transferring the amino-acid moiety from tyrosyl-adenylate to the 3'OH-CCA terminus of the cognate tRNA(Tyr) (the second step of the aminoacylation reaction). The other domains are responsible (i) for the recognition of the anticodon bases of the cognate tRNA(Tyr); (ii) for the binding of the long variable arm of tRNA(Tyr) in eubacteria; and (iii) for unrelated functions such as cytokine activity. Roles of the subunits and domains: Recognition of tRNA(Tyr) The tRNA(Tyr) molecule has an L-shaped structure. Its recognition involves both subunits of the tyrosyl-tRNA synthetase dimer. The acceptor arm of tRNA(Tyr) interacts with the catalytic domain of one YARS monomer, whereas the anticodon arm interacts with the C-terminal moiety of the other monomer. In most YARS structures, the monomers are related to each other by a twofold rotational symmetry. Moreover, all available crystal structures of complexes between YARS and tRNA(Tyr) are also planar, with symmetrical conformations of the two monomers in the dimer and with two tRNA(Tyr) molecules simultaneously interacting with one YARS dimer. However, kinetic studies of tyrosine activation and tRNA(Tyr) charging have revealed an anticooperative behavior of the TyrRS dimer in solution: each TyrRS dimer binds and tyrosylates only one tRNA(Tyr) molecule at a time. Thus, only one of the two sites is active at any given time. The presence of base pair Gua1:Cyt72 in the acceptor stem of tRNA(Tyr) from eubacteria and of base pair Cyt1:Gua72 in tRNA(Tyr) from archaea and eukaryotes results in a species-specific recognition of tRNA(Tyr) by tyrosyl-tRNA synthetase. This characteristic of the recognition between YARS and tRNA(Tyr) has been used to obtain aminoacyl-tRNA synthetases that can specifically charge nonsense suppressor derivatives of tRNA(Tyr) with unnatural amino acids in vivo without interfering with the normal process of translation in the cell. Both tyrosyl-tRNA synthetases and tryptophanyl-tRNA synthetases belong to class I of the aminoacyl-tRNA synthetases, both are dimers, and both have a class II mode of tRNA recognition, i.e. they interact with their cognate tRNAs from the variable loop and major groove side of the acceptor stem. This is in strong contrast to the other class I enzymes, which are monomeric and approach their cognate tRNA from the minor groove side of the acceptor stem. Roles of the subunits and domains: Folding and stability The unfolding reaction and stability of tyrosyl-tRNA synthetase from Bacillus stearothermophilus have been studied under equilibrium conditions. This homodimeric enzyme is highly stable, with a variation of free energy upon unfolding equal to 41 ± 1 kcal/mol. It unfolds through a compact monomeric intermediate. About one-third of the global energy of stabilization comes from the association between the two subunits, and one-third comes from the secondary and tertiary interactions stabilizing each of the two molecules of the monomeric intermediate. Both mutations within the dimer interface and mutations distal to the interface can destabilize the association between the subunits. These experiments have shown in particular that the monomer of YARS is enzymatically inactive.
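To put the 41 kcal/mol unfolding free energy in perspective, a minimal Python sketch follows; it treats unfolding as a simple two-state equilibrium, ignores the concentration dependence of dimer dissociation, and assumes a temperature of 298 K, so it is illustrative rather than a rigorous treatment.

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.0      # assumed temperature, K
dG = 41.0      # free energy of unfolding from the text, kcal/mol

# Two-state approximation: K_unfold = exp(-dG / (R*T))
K_unfold = math.exp(-dG / (R * T))
print(f"K_unfold ~ {K_unfold:.1e}")  # ~1e-30: essentially no unfolded enzyme at equilibrium
```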
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pressure** Pressure: Pressure (symbol: p or P) is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure (also spelled gage pressure) is the pressure relative to the ambient pressure. Pressure: Various units are used to express pressure. Some of these derive from a unit of force divided by a unit of area; the SI unit of pressure, the pascal (Pa), for example, is one newton per square metre (N/m2); similarly, the pound-force per square inch (psi, symbol lbf/in2) is the traditional unit of pressure in the imperial and US customary systems. Pressure may also be expressed in terms of standard atmospheric pressure; the atmosphere (atm) is equal to this pressure, and the torr is defined as 1⁄760 of this. Manometric units such as the centimetre of water, millimetre of mercury, and inch of mercury are used to express pressures in terms of the height of a column of a particular fluid in a manometer. Definition: Pressure is the amount of force applied perpendicular to the surface of an object per unit area. The symbol for it is "p" or P. The IUPAC recommendation for pressure is a lower-case p. However, upper-case P is widely used. The usage of P vs p depends upon the field in which one is working, on the nearby presence of other symbols for quantities such as power and momentum, and on writing style. Definition: Formula Mathematically: p = F/A, where p is the pressure, F is the magnitude of the normal force, and A is the area of the surface in contact. Pressure is a scalar quantity. It relates the vector area element (a vector normal to the surface) to the normal force acting on it. The pressure is the scalar proportionality constant that relates the two normal vectors: dFn = −p dA = −p n dA. Definition: The minus sign comes from the convention that the force is considered towards the surface element, while the normal vector points outward. The equation has meaning in that, for any surface S in contact with the fluid, the total force exerted by the fluid on that surface is the surface integral over S of the right-hand side of the above equation. Definition: It is incorrect (although rather usual) to say "the pressure is directed in such or such direction". The pressure, as a scalar, has no direction. The force given by the previous relationship to the quantity has a direction, but the pressure does not. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same. Pressure is distributed to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume. Definition: Units The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N/m2, or kg·m−1·s−2). This name for the unit was added in 1971; before that, pressure in SI was expressed simply in newtons per square metre. Definition: Other units of pressure, such as pounds per square inch (lbf/in2) and bar, are also in common use. The CGS unit of pressure is the barye (Ba), equal to 1 dyn·cm−2, or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre (g/cm2 or kg/cm2) and the like without properly identifying the force units. But using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as units of force is expressly forbidden in SI. 
The technical atmosphere (symbol: at) is 1 kgf/cm2 (98.0665 kPa, or 14.223 psi). Definition: Pressure is related to energy density and may be expressed in units such as joules per cubic metre (J/m3, which is equal to Pa). Mathematically: p = F/A = (F × distance)/(A × distance) = Work/Volume = Energy (J)/Volume (m3). Definition: Some meteorologists prefer the hectopascal (hPa) for atmospheric air pressure, which is equivalent to the older unit millibar (mbar). Similar pressures are given in kilopascals (kPa) in most other fields, except aviation, where the hecto- prefix is commonly used. The inch of mercury is still used in the United States. Oceanographers usually measure underwater pressure in decibars (dbar) because pressure in the ocean increases by approximately one decibar per metre depth. Definition: The standard atmosphere (atm) is an established constant. It is approximately equal to typical air pressure at Earth mean sea level and is defined as 101325 Pa. Definition: Because pressure is commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., centimetres of water, millimetres of mercury or inches of mercury). The most common choices are mercury (Hg) and water; water is nontoxic and readily available, while mercury's high density allows a shorter column (and so a smaller manometer) to be used to measure a given pressure. The pressure exerted by a column of liquid of height h and density ρ is given by the hydrostatic pressure equation p = ρgh, where g is the gravitational acceleration. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. When millimetres of mercury (or inches of mercury) are quoted today, these units are not based on a physical column of mercury; rather, they have been given precise definitions that can be expressed in terms of SI units. One millimetre of mercury is approximately equal to one torr. The water-based units still depend on the density of water, a measured, rather than defined, quantity. These manometric units are still encountered in many fields. Blood pressure is measured in millimetres (or centimetres) of mercury in most of the world, and lung pressures in centimetres of water are still common. Underwater divers use the metre sea water (msw or MSW) and foot sea water (fsw or FSW) units of pressure, and these are the units for pressure gauges used to measure pressure exposure in diving chambers and personal decompression computers. A msw is defined as 0.1 bar (= 10000 Pa) and is not the same as a linear metre of depth; 33.066 fsw = 1 atm (so 1 fsw = 101325 Pa / 33.066 ≈ 3064.3 Pa). The pressure conversion from msw to fsw is different from the length conversion: 10 msw = 32.6336 fsw, while 10 m = 32.8083 ft. Gauge pressure is often given in units with "g" appended, e.g. "kPag", "barg" or "psig", and units for measurements of absolute pressure are sometimes given a suffix of "a", to avoid confusion, for example "kPaa", "psia". However, the US National Institute of Standards and Technology recommends that, to avoid confusion, any modifiers be instead applied to the quantity being measured rather than the unit of measure. For example, "pg = 100 psi" rather than "p = 100 psig". Definition: Differential pressure is expressed in units with "d" appended; this type of measurement is useful when considering sealing performance or whether a valve will open or close. 
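As a worked illustration of the hydrostatic relation p = ρgh underlying these manometric units (nominal densities and standard gravity assumed; values are illustrative, not from the source), the following Python snippet shows why mercury columns are so much shorter than water columns:

```python
# Column heights of mercury and water that balance one standard atmosphere,
# via p = rho * g * h, rearranged as h = p / (rho * g).
g = 9.80665          # standard gravity, m/s^2
rho_hg = 13595.1     # density of mercury, kg/m^3 (nominal value near 0 C)
rho_h2o = 1000.0     # density of water, kg/m^3 (nominal)
p_atm = 101325.0     # standard atmosphere, Pa

h_hg = p_atm / (rho_hg * g)     # height of mercury column, m
h_h2o = p_atm / (rho_h2o * g)   # height of water column, m

print(f"1 atm = {h_hg * 1000:.0f} mm Hg = {h_h2o:.2f} m H2O")
# -> about 760 mm Hg versus about 10.3 m of water, which is why mercury's
#    high density allows a far more compact manometer.
```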
Definition: Presently or formerly popular pressure units include the following: the atmosphere (atm); manometric units: the centimetre, inch, millimetre (torr) and micrometre (mTorr, micron) of mercury, and the height of an equivalent column of water, including the millimetre (mm H2O), centimetre (cm H2O), metre, inch, and foot of water; imperial and customary units: the kip, short ton-force, long ton-force, pound-force, ounce-force, and poundal per square inch, the short ton-force and long ton-force per square inch, and the fsw (feet sea water) used in underwater diving, particularly in connection with diving pressure exposure and decompression; non-SI metric units: the bar, decibar, and millibar, the msw (metres sea water) used in underwater diving, particularly in connection with diving pressure exposure and decompression, the kilogram-force, or kilopond, per square centimetre (technical atmosphere), the gram-force and tonne-force (metric ton-force) per square centimetre, the barye (dyne per square centimetre), the kilogram-force and tonne-force per square metre, and the sthene per square metre (pieze). Definition: Examples As an example of varying pressures, a finger can be pressed against a wall without making any lasting impression; however, the same finger pushing a thumbtack can easily damage the wall. Although the force applied to the surface is the same, the thumbtack applies more pressure because the point concentrates that force into a smaller area. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. Unlike stress, pressure is defined as a scalar quantity. The negative gradient of pressure is called the force density. Another example is a knife. If the flat edge is used, force is distributed over a larger surface area, resulting in less pressure, and it will not cut; using the sharp edge, which has less surface area, results in greater pressure, and so the knife cuts smoothly. This is one example of a practical application of pressure. For gases, pressure is sometimes measured not as an absolute pressure, but relative to atmospheric pressure; such measurements are called gauge pressure. An example of this is the air pressure in an automobile tire, which might be said to be "220 kPa (32 psi)", but is actually 220 kPa (32 psi) above atmospheric pressure. Since atmospheric pressure at sea level is about 100 kPa (14.7 psi), the absolute pressure in the tire is therefore about 320 kPa (46 psi). In technical work, this is written "a gauge pressure of 220 kPa (32 psi)". Where space is limited, such as on pressure gauges, name plates, graph labels, and table headings, the use of a modifier in parentheses, such as "kPa (gauge)" or "kPa (absolute)", is permitted. In non-SI technical work, a gauge pressure of 32 psi (220 kPa) is sometimes written as "32 psig", and an absolute pressure as "32 psia", though the other methods explained above that avoid attaching characters to the unit of pressure are preferred. Gauge pressure is the relevant measure of pressure wherever one is interested in the stress on storage vessels and the plumbing components of fluidics systems. However, whenever equation-of-state properties, such as densities or changes in densities, must be calculated, pressures must be expressed in terms of their absolute values. For instance, if the atmospheric pressure is 100 kPa (15 psi), a gas (such as helium) at 200 kPa (29 psi) (gauge) (300 kPa or 44 psi [absolute]) is 50% denser than the same gas at 100 kPa (15 psi) (gauge) (200 kPa or 29 psi [absolute]). 
Focusing on gauge values, one might erroneously conclude the first sample had twice the density of the second one. Definition: Scalar nature In a static gas, the gas as a whole does not appear to move. The individual molecules of the gas, however, are in constant random motion. Because there are an extremely large number of molecules and because the motion of the individual molecules is random in every direction, no motion is detected. When the gas is at least partially confined (that is, not free to expand rapidly), the gas will exhibit a static pressure. This confinement can be achieved with either a physical container of some sort, or in a gravitational well such as a planet, otherwise known as atmospheric pressure. In a physical container, the pressure of the gas originates from the molecules colliding with the walls of the container. The walls of the container can be anywhere inside the gas, and the force per unit area (the pressure) is the same. If the "container" is shrunk down to a very small point (becoming less true as the atomic scale is approached), the pressure will still have a single value at that point. Therefore, pressure is a scalar quantity, not a vector quantity. It has magnitude but no direction associated with it. Pressure force acts in all directions at a point inside a gas. At the surface of a gas, the pressure force acts perpendicular (at right angles) to the surface. A closely related quantity is the stress tensor σ, which relates the vector force F to the vector area A via the linear relation F = σA. This tensor may be expressed as the viscous stress tensor minus the hydrostatic pressure term. The negative of the stress tensor is sometimes called the pressure tensor, but in the following, the term "pressure" will refer only to the scalar pressure. According to the theory of general relativity, pressure increases the strength of a gravitational field (see stress–energy tensor) and so adds to the mass-energy cause of gravity. This effect is unnoticeable at everyday pressures but is significant in neutron stars, although it has not been experimentally tested. Types: Fluid pressure Fluid pressure is most often the compressive stress at some point within a fluid. (The term fluid refers to both liquids and gases – for more information specifically about liquid pressure, see the section below.) Fluid pressure occurs in one of two situations: an open condition, called "open channel flow", e.g. the ocean, a swimming pool, or the atmosphere; or a closed condition, called a "closed conduit", e.g. a water line or gas line. Pressure in open conditions usually can be approximated as the pressure in "static" or non-moving conditions (even in the ocean, where there are waves and currents), because the motions create only negligible changes in the pressure. Such conditions conform with principles of fluid statics. The pressure at any given point of a non-moving (static) fluid is called the hydrostatic pressure. Types: Closed bodies of fluid are either "static", when the fluid is not moving, or "dynamic", when the fluid can move as in either a pipe or by compressing an air gap in a closed container. The pressure in closed conditions conforms with the principles of fluid dynamics. Types: The concepts of fluid pressure are predominantly attributed to the discoveries of Blaise Pascal and Daniel Bernoulli. Bernoulli's equation can be used in almost any situation to determine the pressure at any point in a fluid.
The equation makes some assumptions about the fluid, such as the fluid being ideal and incompressible. An ideal fluid is a fluid in which there is no friction; it is inviscid (has zero viscosity). The equation for all points of a system filled with a constant-density fluid is p/γ + v²/(2g) + z = const, where: p is the pressure of the fluid; γ = ρg (density × acceleration of gravity) is the (volume-)specific weight of the fluid; v is the velocity of the fluid; g is the acceleration of gravity; z is the elevation; p/γ is the pressure head; and v²/(2g) is the velocity head. Types: Applications include hydraulic brakes, artesian wells, blood pressure, hydraulic head, plant cell turgidity, the Pythagorean cup, and pressure washing. Explosion or deflagration pressures are the result of the ignition of explosive gases, mists, or dust/air suspensions, in unconfined and confined spaces. Types: Negative pressures While pressures are, in general, positive, there are several situations in which negative pressures may be encountered: When dealing in relative (gauge) pressures: for instance, an absolute pressure of 80 kPa may be described as a gauge pressure of −21 kPa (i.e., 21 kPa below an atmospheric pressure of 101 kPa). For example, abdominal decompression is an obstetric procedure during which negative gauge pressure is applied intermittently to a pregnant woman's abdomen. Types: Negative absolute pressures are possible. They are effectively tension, and both bulk solids and bulk liquids can be put under negative absolute pressure by pulling on them. Microscopically, the molecules in solids and liquids have attractive interactions that overpower the thermal kinetic energy, so some tension can be sustained. Thermodynamically, however, a bulk material under negative pressure is in a metastable state, and it is especially fragile in the case of liquids, where the negative pressure state is similar to superheating and is easily susceptible to cavitation. In certain situations, the cavitation can be avoided and negative pressures sustained indefinitely; for example, liquid mercury has been observed to sustain up to −425 atm in clean glass containers. Negative liquid pressures are thought to be involved in the ascent of sap in plants taller than 10 m (the atmospheric pressure head of water). Types: The Casimir effect can create a small attractive force due to interactions with vacuum energy; this force is sometimes termed "vacuum pressure" (not to be confused with the negative gauge pressure of a vacuum). For non-isotropic stresses in rigid bodies, depending on how the orientation of a surface is chosen, the same distribution of forces may have a component of positive pressure along one surface normal, with a component of negative pressure acting along another surface normal. The stresses in an electromagnetic field are generally non-isotropic, with the pressure normal to one surface element (the normal stress) being negative, and positive for surface elements perpendicular to this. In cosmology, dark energy creates a very small yet cosmically significant amount of negative pressure, which accelerates the expansion of the universe. Types: Stagnation pressure Stagnation pressure is the pressure a fluid exerts when it is forced to stop moving. Consequently, although a fluid moving at higher speed will have a lower static pressure, it may have a higher stagnation pressure when forced to a standstill.
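Before making the stagnation relation precise, a minimal Python sketch (the numerical values are invented for illustration) shows how the constant-density Bernoulli relation above can be rearranged to solve for the pressure at a second point in the flow; bringing the fluid to rest at the same elevation yields exactly the stagnation pressure discussed next:

```python
# Bernoulli relation for a constant-density, inviscid flow:
#   p/γ + v²/(2g) + z = const, with γ = ρ·g.
# Multiplying through by γ gives p + ½ρv² + ρgz = const.
RHO = 1000.0  # water density, kg/m³ (illustrative)
G = 9.81      # acceleration of gravity, m/s²

def p2_from_bernoulli(p1, v1, z1, v2, z2, rho=RHO, g=G):
    """Solve Bernoulli's equation for the static pressure at point 2 (SI units)."""
    return p1 + 0.5 * rho * (v1**2 - v2**2) + rho * g * (z1 - z2)

# Water at 150 kPa moving at 3 m/s, brought to rest at the same elevation:
print(p2_from_bernoulli(150_000.0, 3.0, 0.0, 0.0, 0.0))  # 154500.0 = p + ½ρv²
```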
Static pressure and stagnation pressure are related by p0 = (1/2)ρv² + p, where p0 is the stagnation pressure, ρ is the density, v is the flow velocity, and p is the static pressure. The pressure of a moving fluid can be measured using a Pitot tube, or one of its variations such as a Kiel probe or Cobra probe, connected to a manometer. Depending on where the inlet holes are located on the probe, it can measure static pressures or stagnation pressures. Types: Surface pressure and surface tension There is a two-dimensional analog of pressure – the lateral force per unit length applied on a line perpendicular to the force. Surface pressure is denoted by π and defined as π = F/l; it shares many similar properties with three-dimensional pressure. Properties of surface chemicals can be investigated by measuring pressure/area isotherms, as the two-dimensional analog of Boyle's law, πA = k, at constant temperature. Surface tension is another example of surface pressure, but with a reversed sign, because "tension" is the opposite of "pressure". Types: Pressure of an ideal gas In an ideal gas, molecules have no volume and do not interact. According to the ideal gas law, pressure varies linearly with temperature and quantity, and inversely with volume: p = nRT/V, where: p is the absolute pressure of the gas, n is the amount of substance, T is the absolute temperature, V is the volume, and R is the ideal gas constant. Real gases exhibit a more complex dependence on the variables of state. Types: Vapour pressure Vapour pressure is the pressure of a vapour in thermodynamic equilibrium with its condensed phases in a closed system. All liquids and solids have a tendency to evaporate into a gaseous form, and all gases have a tendency to condense back to their liquid or solid form. Types: The atmospheric pressure boiling point of a liquid (also known as the normal boiling point) is the temperature at which the vapour pressure equals the ambient atmospheric pressure. With any incremental increase in that temperature, the vapour pressure becomes sufficient to overcome atmospheric pressure and lift the liquid to form vapour bubbles inside the bulk of the substance. Bubble formation deeper in the liquid requires a higher pressure, and therefore higher temperature, because the fluid pressure increases above the atmospheric pressure as the depth increases. Types: The vapour pressure that a single component in a mixture contributes to the total pressure in the system is called partial vapour pressure. Types: Liquid pressure When a person swims under the water, water pressure is felt acting on the person's eardrums. The deeper that person swims, the greater the pressure. The pressure felt is due to the weight of the water above the person. As someone swims deeper, there is more water above the person and therefore greater pressure. The pressure a liquid exerts depends on its depth. Types: Liquid pressure also depends on the density of the liquid. If someone was submerged in a liquid denser than water, the pressure would be correspondingly greater. Thus, liquid pressure is directly proportional to both depth and density. The pressure due to a liquid in liquid columns of constant density or at a depth within a substance is represented by the formula p = ρgh, where: p is liquid pressure, g is gravity at the surface of the overlaying material, ρ is the density of the liquid, and h is the height of the liquid column or depth within a substance. Another way of stating the same formula is: pressure = weight density × depth.
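A small Python sketch (fresh water is assumed, and the depth is chosen for illustration) evaluates the liquid-pressure formula p = ρgh directly:

```python
# Hydrostatic (liquid) pressure: p = ρ·g·h
RHO_WATER = 1000.0  # density of fresh water, kg/m³
G = 9.81            # gravitational acceleration, m/s²

def liquid_pressure(depth_m: float, rho: float = RHO_WATER) -> float:
    """Pressure (Pa) due to the liquid alone at a given depth (m)."""
    return rho * G * depth_m

# At 10 m depth the water alone contributes ~98 kPa, roughly one extra atmosphere.
print(liquid_pressure(10.0))  # 98100.0 Pa
```

Doubling the depth or the density doubles the result, which is the proportionality described in the following paragraph.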
Types: The pressure a liquid exerts against the sides and bottom of a container depends on the density and the depth of the liquid. If atmospheric pressure is neglected, liquid pressure against the bottom is twice as great at twice the depth; at three times the depth, the liquid pressure is threefold; etc. Or, if the liquid is two or three times as dense, the liquid pressure is correspondingly two or three times as great for any given depth. Liquids are practically incompressible – that is, their volume can hardly be changed by pressure (water volume decreases by only 50 millionths of its original volume for each atmosphere increase in pressure). Thus, except for small changes produced by temperature, the density of a particular liquid is practically the same at all depths. Types: Atmospheric pressure pressing on the surface of a liquid must be taken into account when trying to discover the total pressure acting on a liquid. The total pressure of a liquid, then, is ρgh plus the pressure of the atmosphere. When this distinction is important, the term total pressure is used. Otherwise, discussions of liquid pressure refer to pressure without regard to the normally ever-present atmospheric pressure. Types: The pressure does not depend on the amount of liquid present. Volume is not the important factor – depth is. The average water pressure acting against a dam depends on the average depth of the water and not on the volume of water held back. For example, a wide but shallow lake with a depth of 3 m (10 ft) exerts only half the average pressure that a small 6 m (20 ft) deep pond does. (The total force applied to the longer dam will be greater, due to the greater total surface area for the pressure to act upon. But for a given 5-foot (1.5 m)-wide section of each dam, the 10 ft (3.0 m) deep water will apply one quarter the force of 20 ft (6.1 m) deep water.) A person will feel the same pressure whether their head is dunked a metre beneath the surface of the water in a small pool or to the same depth in the middle of a large lake. If four interconnected vases contain different amounts of water but are all filled to equal depths, then a fish with its head dunked a few centimetres under the surface will be acted on by water pressure that is the same in any of the vases. If the fish swims a few centimetres deeper, the pressure on the fish will increase with depth and be the same no matter which vase the fish is in. If the fish swims to the bottom, the pressure will be greater, but it makes no difference which vase it is in. All vases are filled to equal depths, so the water pressure is the same at the bottom of each vase, regardless of its shape or volume. If water pressure at the bottom of a vase were greater than water pressure at the bottom of a neighboring vase, the greater pressure would force water sideways and then up the narrower vase to a higher level until the pressures at the bottom were equalized. Pressure is depth dependent, not volume dependent, which is why water seeks its own level. Types: Restating this as an energy equation, the energy per unit volume in an ideal, incompressible liquid is constant throughout its vessel. At the surface, gravitational potential energy is large but liquid pressure energy is low. At the bottom of the vessel, all the gravitational potential energy is converted to pressure energy.
The sum of pressure energy and gravitational potential energy per unit volume is constant throughout the volume of the fluid, and the two energy components change linearly with the depth. Mathematically, it is described by Bernoulli's equation, where velocity head is zero and comparisons per unit volume in the vessel are p/γ + z = const. Types: Terms have the same meaning as in the section Fluid pressure. Types: Direction of liquid pressure An experimentally determined fact about liquid pressure is that it is exerted equally in all directions. If someone is submerged in water, no matter which way that person tilts their head, the person will feel the same amount of water pressure on their ears. Because a liquid can flow, this pressure is not only downward. Pressure is seen acting sideways when water spurts sideways from a leak in the side of an upright can. Pressure also acts upward, as demonstrated when someone tries to push a beach ball beneath the surface of the water. The bottom of a boat is pushed upward by water pressure (buoyancy). Types: When a liquid presses against a surface, there is a net force that is perpendicular to the surface. Although pressure does not have a specific direction, force does. A submerged triangular block has water forced against each point from many directions, but components of the force that are not perpendicular to the surface cancel each other out, leaving only a net perpendicular force. This is why water spurting from a hole in a bucket initially exits the bucket in a direction at right angles to the surface of the bucket in which the hole is located. Then it curves downward due to gravity. If there are three holes in a bucket (top, bottom, and middle), then the force vectors perpendicular to the inner container surface will increase with increasing depth – that is, a greater pressure at the bottom makes it so that the bottom hole will shoot water out the farthest. The force exerted by a fluid on a smooth surface is always at right angles to the surface. The speed of liquid out of the hole is √(2gh), where h is the depth below the free surface. This is the same speed the water (or anything else) would have if freely falling the same vertical distance h. Types: Kinematic pressure P = p/ρ0 is the kinematic pressure, where p is the pressure and ρ0 is a constant mass density. The SI unit of P is m²/s². Kinematic pressure is used in the same manner as kinematic viscosity ν in order to write the Navier–Stokes equation without explicitly showing the density ρ0. The Navier–Stokes equation with kinematic quantities is ∂u/∂t + (u · ∇)u = −∇P + ν∇²u.
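To make the efflux-speed claim concrete, a minimal Python sketch (the hole depth is chosen for illustration) evaluates v = √(2gh), the speed of an object freely falling the same vertical distance:

```python
import math

# Torricelli's result: liquid leaves a hole at v = sqrt(2·g·h),
# matching the speed of free fall through the same height h.
G = 9.81  # m/s²

def efflux_speed(depth_m: float) -> float:
    """Speed (m/s) of liquid leaving a hole depth_m below the free surface."""
    return math.sqrt(2 * G * depth_m)

print(efflux_speed(1.0))  # ~4.43 m/s for a hole 1 m below the surface
```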
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lambda Arietis** Lambda Arietis: Lambda Arietis (λ Ari, λ Arietis) is the Bayer designation for a double star in the northern constellation of Aries. Based upon an annual parallax shift of 25.32 milliarcseconds, this system is approximately 129 light-years (40 parsecs) distant from Earth. The pair have a combined apparent visual magnitude of 4.79, which is bright enough to be viewed with the naked eye. Because the yellow secondary is nearly three magnitudes fainter than the white primary, they are a challenge to split with quality 7× binoculars and are readily resolvable at 10×. The brighter component is an F-type main sequence star with a visual magnitude of 4.95 and a stellar classification of F0 V. At an angular separation of 37.4 arcseconds lies the fainter, magnitude 7.75 companion, a G-type main sequence star with a classification of G1 V.
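The distance figure follows directly from the parallax via the standard relation d(pc) = 1/p(arcsec); a short Python sketch reproduces it:

```python
# Distance from annual parallax: d [parsecs] = 1 / p [arcseconds]
LY_PER_PARSEC = 3.2616  # light-years in one parsec

parallax_mas = 25.32                  # parallax of Lambda Arietis, in milliarcseconds
parallax_arcsec = parallax_mas / 1000.0

distance_pc = 1.0 / parallax_arcsec   # ~39.5 pc
distance_ly = distance_pc * LY_PER_PARSEC

print(round(distance_pc, 1), round(distance_ly))  # 39.5 pc, ~129 ly
```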
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bumper brim** Bumper brim: A bumper brim is a millinery feature in which the hat brim is tubular in design, making it a prominent feature of the hat. In order to achieve this effect, the brim may be rolled, stiffened or padded. A bumper brim can be added to a variety of hat designs, from small to large. History of the design: The bumper brim was popular during the 1930s, when it was added to small hats, which were usually tilted well forward on the face. It could be incorporated into hats made of a variety of materials; a 1937 article in The Times describes a new trend in London for small summertime bumper brim hats, designed for street rather than beach wear, made of straw, grosgrain or felt. In the same year, a Virginia Gardner article in the Chicago Tribune reported on key trends from Chicago designers and highlighted the bumper brim as the major innovation of the season. "'The new muffin hat', a buyer explained. 'It is exceeded in importance only by the new bumper brim'." Bumper-brimmed designs also featured in the 1940s, when they were often worn well back on the head – often in the style of a halo hat – in order to frame the face. Millinery editor of Women's Wear Daily Maud G. Moody attended a 1946 fashion show in New York held by representatives of the French millinery industry – including Elsa Schiaparelli and Rose Descat – and described the most notable designs as including beret-type hats with bumper brims. She also highlighted a wide-brimmed padre hat, combining a red crown with a navy-blue bumper brim. In the 1950s, hats with bumper brims were often worn square on, creating a wider profile. History of the design: Notable bumper brim hats Hillary Clinton wore a blue velour rolled-brim hat at Bill Clinton's 1993 presidential inauguration. The design, which was by Connecticut milliner Darcy Creech, attracted criticism. An article in The New York Times reported it was considered unflattering by fashion critics, and some commentators considered it inappropriate to wear a hat once her jacket had been removed. An article originally published in The Times ahead of the 2009 inauguration of Barack Obama provided a run-through of previous fashion hits and misses among first ladies and noted that Hillary Clinton's headgear had become known as the "Oh-God-What-is-That? Hat". Princess Beatrix of the Netherlands favours a bumper-brim style, wearing a blue version during a 2013 visit to Amsterdam with President Putin. She also wore a distinctive multiple-rimmed bumper design in black straw for the memorial service for Richard von Weizsäcker in February 2015.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shantanu Sengupta** Shantanu Sengupta: Shantanu Sengupta is an Indian cell biologist and a professor at the Institute of Genomics and Integrative Biology (IGIB) of the Council of Scientific and Industrial Research. At IGIB, he coordinates the activities of the National Facility for Biochemical and Genomic Resources (NFBGR) and the Proteomics and Structural Biology Unit of the institute. He is a member of the executive council of the Proteomic Society, India, and is known for his studies of cardiovascular diseases from a genetic perspective, as well as of homocysteine with regard to its toxicity and its role in epigenetic modifications. His studies have been documented in a number of articles, and ResearchGate, an online repository of scientific articles, has listed 149 of them. The Department of Biotechnology of the Government of India awarded him the National Bioscience Award for Career Development, one of the highest Indian science awards, for his contributions to biosciences in 2011. Selected bibliography: Ghose, Subhoshree; Bhaskar, AkashKumar; Sharma, Anju; Sengupta, Shantanu (1 September 2016). "Mendelian randomization: A biologist's perspective". Journal of the Practice of Cardiovascular Sciences. 2 (3): 151. doi:10.4103/jpcs.jpcs_62_16. Sengupta, Shantanu; Ghose, Subhoshree; Sharma, Anju; Agarwal, Aishwarya (1 September 2015). "Human-induced pluripotent stem cells in modeling inherited cardiomyopathies". Journal of the Practice of Cardiovascular Sciences. 1 (3): 241. doi:10.4103/2395-5414.177232. Sengupta, Shantanu; Varshney, Swati; Bhardwaj, Nitin; Basak, Trayambak (1 January 2015). "Identification of differentially expressed proteins in vitamin B12". Journal of the Practice of Cardiovascular Sciences. 1 (1): 45. doi:10.4103/2395-5414.157568.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dehydroepiandrosterone** Dehydroepiandrosterone: Dehydroepiandrosterone (DHEA), also known as androstenolone, is an endogenous steroid hormone precursor. It is one of the most abundant circulating steroids in humans. DHEA is produced in the adrenal glands, the gonads, and the brain. It functions as a metabolic intermediate in the biosynthesis of the androgen and estrogen sex steroids both in the gonads and in various other tissues. However, DHEA also has a variety of potential biological effects in its own right, binding to an array of nuclear and cell surface receptors, and acting as a neurosteroid and modulator of neurotrophic factor receptors. In the United States, DHEA is sold as an over-the-counter supplement and as a medication called prasterone. Biological function: As an androgen DHEA and other adrenal androgens such as androstenedione, although relatively weak androgens, are responsible for the androgenic effects of adrenarche, such as early pubic and axillary hair growth, adult-type body odor, increased oiliness of hair and skin, and mild acne. DHEA is potentiated locally via conversion into testosterone and dihydrotestosterone (DHT) in the skin and hair follicles. Women with complete androgen insensitivity syndrome (CAIS), who have a non-functional androgen receptor (AR) and are immune to the androgenic effects of DHEA and other androgens, have absent or only sparse/scanty pubic and axillary hair and body hair in general, demonstrating the role of DHEA and other androgens in body hair development at both adrenarche and pubarche. Biological function: As an estrogen DHEA is a weak estrogen. In addition, it is transformed into potent estrogens such as estradiol in certain tissues such as the vagina, and thereby produces estrogenic effects in such tissues. As a neurosteroid As a neurosteroid and neurotrophin, DHEA has important effects in the central nervous system. Biological activity: Hormonal activity Androgen receptor Although it functions as an endogenous precursor to more potent androgens such as testosterone and DHT, DHEA has been found to possess some degree of androgenic activity in its own right, acting as a low-affinity (Ki = 1 μM), weak partial agonist of the androgen receptor (AR). However, its intrinsic activity at the receptor is quite weak, and on account of that, due to competition for binding with full agonists like testosterone, it can actually behave more like an antagonist depending on circulating testosterone and dihydrotestosterone (DHT) levels, and hence like an antiandrogen. However, its affinity for the receptor is very low, and for that reason it is unlikely to be of much significance under normal circumstances. Biological activity: Estrogen receptors In addition to its affinity for the androgen receptor, DHEA has also been found to bind to (and activate) the ERα and ERβ estrogen receptors with Ki values of 1.1 μM and 0.5 μM, respectively, and EC50 values of >1 μM and 200 nM, respectively. Though it was found to be a partial agonist of the ERα with a maximal efficacy of 30–70%, the concentrations required for this degree of activation make it unlikely that the activity of DHEA at this receptor is physiologically meaningful.
Remarkably, however, DHEA acts as a full agonist of the ERβ with a maximal response similar to or actually slightly greater than that of estradiol, and its levels in circulation and local tissues in the human body are high enough to activate the receptor to the same degree as that seen with circulating estradiol levels at somewhat higher than their maximal, non-ovulatory concentrations; indeed, when combined with estradiol with both at levels equivalent to their physiological concentrations, overall activation of the ERβ was doubled. Biological activity: Other nuclear receptors DHEA does not bind to or activate the progesterone, glucocorticoid, or mineralocorticoid receptors. Other nuclear receptor targets of DHEA besides the androgen and estrogen receptors include the PPARα, PXR, and CAR. However, whereas DHEA is a ligand of the PPARα and PXR in rodents, it is not in humans. In addition to direct interactions, DHEA is thought to regulate a handful of other proteins via indirect, genomic mechanisms, including the enzymes CYP2C11 and 11β-HSD1 – the latter of which is essential for the biosynthesis of the glucocorticoids such as cortisol and has been suggested to be involved in the antiglucocorticoid effects of DHEA – and the carrier protein IGFBP1. Biological activity: Neurosteroid activity Neurotransmitter receptors DHEA has been found to directly act on several neurotransmitter receptors, including acting as a positive allosteric modulator of the NMDA receptor, as a negative allosteric modulator of the GABAA receptor, and as an agonist of the σ1 receptor. Biological activity: Neurotrophin receptors In 2011, the surprising discovery was made that DHEA, as well as its sulfate ester, DHEA-S, directly bind to and activate TrkA and p75NTR, receptors of neurotrophins like nerve growth factor (NGF) and brain-derived neurotrophic factor (BDNF), with high affinity. DHEA was subsequently also found to bind to TrkB and TrkC with high affinity, though it activated only TrkC, not TrkB. DHEA and DHEA-S bound to these receptors with affinities in the low nanomolar range (around 5 nM), which were nonetheless approximately two orders of magnitude weaker than those of highly potent polypeptide neurotrophins like NGF (0.01–0.1 nM). In any case, DHEA and DHEA-S both circulate at the requisite concentrations to activate these receptors and were thus identified as important endogenous neurotrophic factors. They have since been labeled "steroidal microneurotrophins", due to their small-molecule and steroidal nature relative to their polypeptide neurotrophin counterparts. Subsequent research has suggested that DHEA and/or DHEA-S may in fact be phylogenetically ancient "ancestral" ligands of the neurotrophin receptors from early on in the evolution of the nervous system. The findings that DHEA binds to and potently activates neurotrophin receptors may explain the positive association between decreased circulating DHEA levels with age and age-related neurodegenerative diseases. Biological activity: Microtubule-associated protein 2 Similarly to pregnenolone, its synthetic derivative 3β-methoxypregnenolone (MAP-4343), and progesterone, DHEA has been found to bind to microtubule-associated protein 2 (MAP2), specifically the MAP2C subtype (Kd = 27 μM). However, it is unclear whether DHEA increases binding of MAP2 to tubulin as pregnenolone does.
ADHD Some research has shown that DHEA levels are low in people with ADHD, and that treatment with methylphenidate or bupropion (stimulant-type medications) normalizes DHEA levels. Biological activity: Other activity G6PDH inhibitor DHEA is an uncompetitive inhibitor of G6PDH (Ki = 17 μM; IC50 = 18.7 μM) and is able to lower NADPH levels and reduce NADPH-dependent free radical production. It is thought that this action may be responsible for much of the antiinflammatory, antihyperplastic, chemopreventative, antihyperlipidemic, antidiabetic, and antiobesity, as well as certain immunomodulating, activities of DHEA (with some experimental evidence to support this notion available). However, it has also been said that inhibition of G6PDH activity by DHEA in vivo has not been observed and that the concentrations required for DHEA to inhibit G6PDH in vitro are very high, thus making the possible contribution of G6PDH inhibition to the effects of DHEA uncertain. Biological activity: Cancer DHEA supplements have been promoted as chemopreventative, for their claimed cancer prevention properties. There is no scientific evidence to support these claims. Miscellaneous DHEA has been found to competitively inhibit TRPV1. Biochemistry: Biosynthesis DHEA is produced in the zona reticularis of the adrenal cortex under the control of adrenocorticotropic hormone (ACTH) and by the gonads under the control of gonadotropin-releasing hormone (GnRH). It is also produced in the brain. DHEA is synthesized from cholesterol via the enzymes cholesterol side-chain cleavage enzyme (CYP11A1; P450scc) and 17α-hydroxylase/17,20-lyase (CYP17A1), with pregnenolone and 17α-hydroxypregnenolone as intermediates. It is derived mostly from the adrenal cortex, with only about 10% being secreted from the gonads. Approximately 50 to 70% of circulating DHEA originates from desulfation of DHEA-S in peripheral tissues. DHEA-S itself originates almost exclusively from the adrenal cortex, with 95 to 100% being secreted from the adrenal cortex in women. Biochemistry: Increasing endogenous production Regular exercise is known to increase DHEA production in the body. Calorie restriction has also been shown to increase DHEA in primates. Some theorize that the increase in endogenous DHEA brought about by calorie restriction is partially responsible for the longer life expectancy known to be associated with calorie restriction. Distribution In the circulation, DHEA is mainly bound to albumin, with a small amount bound to sex hormone-binding globulin (SHBG). The small remainder of DHEA not associated with albumin or SHBG is unbound and free in the circulation. DHEA easily crosses the blood–brain barrier into the central nervous system. Biochemistry: Metabolism DHEA is transformed into DHEA-S by sulfation at the C3β position via the sulfotransferase enzymes SULT2A1 and, to a lesser extent, SULT1E1. This occurs naturally in the adrenal cortex and during first-pass metabolism in the liver and intestines when exogenous DHEA is administered orally. Levels of DHEA-S in circulation are approximately 250 to 300 times those of DHEA. DHEA-S in turn can be converted back into DHEA in peripheral tissues via steroid sulfatase (STS). The terminal half-life of DHEA is short at only 15 to 30 minutes. In contrast, the terminal half-life of DHEA-S is far longer, at 7 to 10 hours.
As DHEA-S can be converted back into DHEA, it serves as a circulating reservoir for DHEA, thereby extending the duration of DHEA's action. Metabolites of DHEA include DHEA-S, 7α-hydroxy-DHEA, 7β-hydroxy-DHEA, 7-keto-DHEA, 7α-hydroxyepiandrosterone, and 7β-hydroxyepiandrosterone, as well as androstenediol and androstenedione. Biochemistry: Pregnancy During pregnancy, DHEA-S is metabolized into the sulfates of 16α-hydroxy-DHEA and 15α-hydroxy-DHEA in the fetal liver as intermediates in the production of the estrogens estriol and estetrol, respectively. Biochemistry: Levels Prior to puberty in humans, DHEA and DHEA-S levels rise upon differentiation of the zona reticularis of the adrenal cortex. Peak levels of DHEA and DHEA-S are observed around age 20, which is followed by an age-dependent decline throughout life, eventually back to prepubertal concentrations. Plasma levels of DHEA in adult men are 10 to 25 nM, in premenopausal women are 5 to 30 nM, and in postmenopausal women are 2 to 20 nM. Conversely, DHEA-S levels are roughly two orders of magnitude higher, at 1–10 μM. Levels of DHEA and DHEA-S decline to the lower nanomolar and micromolar ranges in men and women aged 60 to 80 years. DHEA levels are as follows: Adult men: 180–1250 ng/dL; Adult women: 130–980 ng/dL; Pregnant women: 135–810 ng/dL; Prepubertal children (<1 year): 26–585 ng/dL; Prepubertal children (1–5 years): 9–68 ng/dL; Prepubertal children (6–12 years): 11–186 ng/dL; Adolescent boys (Tanner II–III): 25–300 ng/dL; Adolescent girls (Tanner II–III): 69–605 ng/dL; Adolescent boys (Tanner IV–V): 100–400 ng/dL; Adolescent girls (Tanner IV–V): 165–690 ng/dL. Measurement As almost all DHEA is derived from the adrenal glands, blood measurements of DHEA-S/DHEA are useful to detect excess adrenal activity as seen in adrenal cancer or hyperplasia, including certain forms of congenital adrenal hyperplasia. Women with polycystic ovary syndrome tend to have elevated levels of DHEA-S. Chemistry: DHEA, also known as androst-5-en-3β-ol-17-one, is a naturally occurring androstane steroid and a 17-ketosteroid. It is closely related structurally to androstenediol (androst-5-ene-3β,17β-diol), androstenedione (androst-4-ene-3,17-dione), and testosterone (androst-4-en-17β-ol-3-one). DHEA is the 5-dehydro analogue of epiandrosterone (5α-androstan-3β-ol-17-one) and is also known as 5-dehydroepiandrosterone or as δ5-epiandrosterone. Chemistry: Isomers The term "dehydroepiandrosterone" is chemically ambiguous because it does not include the specific positions within epiandrosterone at which hydrogen atoms are missing. DHEA itself is 5,6-didehydroepiandrosterone or 5-dehydroepiandrosterone. A number of naturally occurring isomers also exist and may have similar activities. Some isomers of DHEA are 1-dehydroepiandrosterone (1-androsterone) and 4-dehydroepiandrosterone. These isomers are also technically "DHEA", since they are dehydroepiandrosterones in which hydrogens are removed from the epiandrosterone skeleton. Chemistry: Dehydroandrosterone (DHA) is the 3α-epimer of DHEA and is also an endogenous androgen. History: DHEA was first isolated from human urine in 1934 by Adolf Butenandt and Kurt Tscherning.
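As a worked illustration of the level ranges above, a short Python sketch converts the mass-based laboratory units (ng/dL) into molar concentrations (nM), assuming DHEA's molar mass of approximately 288.4 g/mol (for C19H28O2; this figure is not stated in the text above):

```python
# Convert DHEA serum levels from ng/dL to nmol/L (nM).
# Assumed molar mass of DHEA (C19H28O2): ~288.4 g/mol.
MOLAR_MASS = 288.4  # g/mol

def ng_per_dl_to_nm(ng_per_dl: float) -> float:
    """ng/dL -> nmol/L: multiply by 10 (dL -> L), then divide by g/mol."""
    return ng_per_dl * 10.0 / MOLAR_MASS

# The adult-male range quoted above, 180–1250 ng/dL:
print(ng_per_dl_to_nm(180.0))   # ~6.2 nM
print(ng_per_dl_to_nm(1250.0))  # ~43.3 nM
```

The computed range (~6–43 nM) brackets the 10–25 nM plasma figure quoted above; since the quoted ranges come from different sources, exact agreement is not expected.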
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Direct Agency / Rep Electronic Connection** Direct Agency / Rep Electronic Connection: Direct Agency / Rep Electronic Connection (DARE), or Direct Agency Rep Exchange, is an exchange protocol used by advertising agencies and television station sales representatives to transact and manage spot TV orders, offers, revisions, and confirmations electronically. It was developed by Mediaocean and is supported by Imagine Communications.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zygomasseteric system** Zygomasseteric system: The zygomasseteric system (or zygomasseteric structure) in rodents is the anatomical arrangement of the masseter muscle of the jaw and the zygomatic arch of the skull. The anteroposterior or propalinal (front-to-back) motion of the rodent jaw is enabled by an extension of the zygomatic arch and the division of the masseter into a superficial, lateral and medial muscle. The four main types are described as protrogomorphous, sciuromorphous, hystricomorphous, and myomorphous. Protrogomorphy: The members of this grade include nearly all of the pre-Oligocene rodents of North America and Asia and some of those of Europe. Several lineages survive into the Oligocene or early Miocene, with only one species still alive today, the mountain beaver (Aplodontia rufa). The molerats (family Bathyergidae) are considered secondarily protrogomorphous, since their zygomatic condition is clearly derived from a hystricomorphous ancestor. The rostrum of protrogomorph rodents is unmodified and the infraorbital foramen is small. The superficial masseter originates on the lateral surface of the anterior maxilla and inserts along the ventral margin of the angular process of the mandible. The lateral masseter inserts here as well and originates from the lateral portion of the zygomatic arch. The small medial masseter originates along the medial surface of the zygomatic arch and inserts along the dorsal portion of the mandible at the end of the tooth row. Sciuromorphy: This condition is found in most members of the family Sciuridae (suborder Sciuromorpha), and also in members of the Castoridae, the Eomyidae, and the Geomyoidea. Sciuromorphy: Relative to the primitive protrogomorphous condition, the superficial masseter remains unchanged. The lateral masseter has shifted forward and upward, behind and medial to the superficial masseter. Here it originates from a wide zygomatic plate developed on the anterior (maxillary) root of the zygomatic arch. This shift of origin changed the direction of pull of the anterior part of the lateral masseter from 30 to 60 degrees, greatly strengthening the forward component of the masseter contraction. Hystricomorphy: This condition is found throughout the suborders Hystricomorpha and Anomaluromorpha. In the suborder Myomorpha, it is found in the superfamily Dipodoidea and some fossil Muroidea (such as Pappocricetodon). Hystricomorphy is also found in the African dormouse Graphiurus, which is a member of the suborder Sciuromorpha. In hystricomorphs the medial masseter is enlarged and originates on the side of the rostrum (in extreme cases as far forward as the premaxilla), where it then passes through a greatly enlarged infraorbital foramen to insert on the mandible. This gives an almost horizontal resultant to the muscle contraction. Myomorphy: This condition is found in the Muroidea (Myomorpha) and most Gliridae (Sciuromorpha; in the latter it is often referred to as pseudomyomorphy). Some researchers suggest that the infraorbital foramen of the extinct sciurid subfamily Cedromurinae may have allowed for the passage of the masseter muscle. If true, this subfamily would represent an additional example of myomorphy in the rodent suborder Sciuromorpha. Myomorphs combine characteristics found in both the sciuromorphous and hystricomorphous rodents. Both the lateral and medial masseter muscles have migrated, and both a large zygomatic plate as well as a large infraorbital foramen are present.
This type gives the greatest anteroposterior component of any rodent zygomasseteric system, which might explain the success of the cosmopolitan Muroidea.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BusKill** BusKill: BusKill is an open-source hardware and software project that designs computer kill cords to protect the confidentiality of the system's data from physical theft. The hardware designs are licensed CC BY-SA and the software is licensed GPLv3. BusKill cables are available commercially from the official website or through authorized distributors. The name BusKill is an amalgamation of "Bus" from USB and "Kill" from kill cord. History: The first computer kill cord was built by Michael Altfield in 2017. The term "BusKill" was coined by Altfield in January 2020 when publishing the first BusKill build and udev usage instructions (Linux-only), and it was ported by cyberkryption from Linux to Windows a couple of weeks later. The project's official website launched the following month. The first OS X version of the BusKill app was released in May 2020 by Steven Johnson. History: A cross-platform rewrite of the software based on Kivy was released in August 2020 with support for Linux, OS X, and Windows. In December 2021, Alt Shift International OÜ ran a crowdfunding campaign to manufacture BusKill cables on Crowd Supply. The campaign raised $18,507 by January 2022. Hardware: The BusKill cable is a kill cord that physically tethers a user to their computer with a USB cable. One end of the cable plugs into a computer. The other end of the cable is a carabiner that attaches to the user. In the middle of the cable is a magnetic breakaway coupler, to allow the cable to be safely separated at any angle without physically damaging the computer or the user. A 3D-printable hardware BusKill cable is currently under development. Software: The BusKill project maintains a cross-platform GUI app that can either lock the screen or shut down the computer when the cable's connection to the computer is severed and the app is in the "armed" state. Use: If the computer is separated from the user, then a magnetic breakaway in the cable causes a USB hotplug removal event to execute a trigger in the app. The trigger executed by the BusKill cable's removal can lock the screen, shut down the computer, or securely erase the LUKS header and master encryption keys within a few seconds of the cable's separation. If combined with full disk encryption, these triggers can be used to ensure the confidentiality of data or serve as a counter-forensics measure.
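The trigger concept can be illustrated with a minimal, hypothetical Python sketch. This is not the project's actual code: the device path and the lock command are placeholders, and a real deployment would use event-driven hotplug notification (such as the udev rules the project documents for Linux) rather than polling:

```python
import os
import subprocess
import time

# Illustrative poll-based watcher: lock the session when a specific USB
# device disappears. Real BusKill reacts to USB hotplug removal events
# (e.g. via udev); the path and command below are placeholders.
DEVICE = "/dev/disk/by-id/usb-EXAMPLE-DEVICE"  # placeholder device node
LOCK_CMD = ["loginctl", "lock-sessions"]       # systemd screen-lock command

def armed_loop(poll_seconds: float = 0.5) -> None:
    """While 'armed', lock the session as soon as the device goes away."""
    while True:
        if not os.path.exists(DEVICE):
            subprocess.run(LOCK_CMD)  # cable pulled -> lock immediately
            break
        time.sleep(poll_seconds)

if __name__ == "__main__":
    armed_loop()
```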
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Approval proofer** Approval proofer: Within the printing industry, the Approval proofer, also known as the Approval Digital Imaging System or Kodak Approval System, was designed for use in prepress proofing, especially for the highest quality contract proofs. The Approval is a laminate-based system in which up to 6 color donors can transfer images to a receiver sheet by a high-powered laser. Once imaging is complete, the image can be transferred to a wide variety of substrates including papers, boards, shrink wrap, plastics, metals, etc. Approval proofer: The system comes in two sizes: the 3-page model images a proof 13.3 × 20.9 in. (338 × 530 mm); the 4-page model images a proof 26.6 × 20.9 in. (676 × 530 mm). The Approval is similar to the Fuji Finalproof product. Another similar lamination device is the Creo Spectrum (now supported by Kodak), which is unique in that proofs are created on the actual plate-setting device (the Creo/Kodak Trendsetter). History: The Approval was introduced to the market by Kodak in 1991, and continues to be sold and used in printing shops as of 2010. The Approval Classic (original) version quickly became the market standard for contract proofs. That was quickly followed in 1995 by Approval PS. In 1998 there was a major redesign which is the basis for the contemporary product. The Approval NX, released in 2004, decreased the printing time of spot colors, giving users significant productivity improvements. There is continued research and development aimed at improving the quality and usefulness of Approval output. Prepress Applications: The Approval was designed to mimic the quality of printing presses using high-resolution imaging (2,400 or 2,540 DPI, similar to the printing plate) and halftone screening to accurately reflect what would be seen on press. Stochastic screening (or FM screening) can also be used to proof print runs with this screening technique. Being able to simulate screening effects with high fidelity makes it possible to detect undesirable screening artifacts (e.g., moiré patterns) before going to press, consequently saving customers time and money. Prepress Applications: The Approval system allows control over screen angles, screen ruling, density control per color, dot gain adjustment and dot shapes. Prepress Applications: The wide range of color donors makes it possible to simulate accurately process, corporate, brand, spot and special colors. Process donors include cyan, magenta, yellow and black. Additional donors (orange, green and blue) extend the color gamut. There are 2 opaque donors: white and metallic. The metallic donor combined with the other color donors allows for the creation of a wide range of metallic colors such as gold, copper, bronze, etc. This produces special effects not possible via inkjet printers but commonly used in today's packaging. Prepress Applications: The Approval is especially useful in packaging applications because it is possible to transfer the images to so many of the different substrates used in the packaging industry. The white donor is a critical tool in replicating packaging printing that will be applied to clear packaging. The adjustable laydown order allows exact representation of a prepress shop's most difficult print jobs, such as package labels and lottery cards, where white or silver is required on the top and bottom. Often customers want three-dimensional mock-ups of the actual package. This could be cardboard, metal (e.g., an aluminum pop can), glass, plastic, shrink wrap, etc.
Approval proofs are highly effective for these applications. Prepress Applications: As of 2010 the Approval supports several certified workflows: Kodak Proofing Software (KPS), Prinergy, Kodak (HQ-1), Brisque, EskoArtwork FlexRIP and Nexus, and Rampage RIPS / workflows with direct connections through the Open Front End (OFE) interface. Nexus, MetaDimensions, and Screen Trueflow all interface through the Approval Interface Toolkit software (AIT).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Affective forecasting** Affective forecasting: Affective forecasting (also known as hedonic forecasting, or the hedonic forecasting mechanism) is the prediction of one's affect (emotional state) in the future. As a process that influences preferences, decisions, and behavior, affective forecasting is studied by both psychologists and economists, with broad applications. History: In The Theory of Moral Sentiments (1759), Adam Smith observed the personal challenges, and social benefits, of hedonic forecasting errors: [Consider t]he poor man's son, whom heaven in its anger has visited with ambition, when he begins to look around him, admires the condition of the rich …. and, in order to arrive at it, he devotes himself for ever to the pursuit of wealth and greatness…. Through the whole of his life he pursues the idea of a certain artificial and elegant repose which he may never arrive at, for which he sacrifices a real tranquillity that is at all times in his power, and which, if in the extremity of old age he should at last attain…, he will find to be in no respect preferable to that humble security and contentment which he had abandoned for it. It is then, in the last dregs of life, his body wasted with toil and diseases, his mind galled and ruffled by the memory of a thousand injuries and disappointments..., that he begins at last to find that wealth and greatness are mere trinkets of frivolous utility…. History: [Yet] it is well that nature imposes upon us in this manner. It is this deception which rouses and keeps in continual motion the industry of mankind. History: In the early 1990s, Kahneman and Snell began research on hedonic forecasts, examining its impact on decision making. The term "affective forecasting" was later coined by psychologists Timothy Wilson and Daniel Gilbert. Early research tended to focus solely on measuring emotional forecasts, while subsequent studies began to examine the accuracy of forecasts, revealing that people are surprisingly poor judges of their future emotional states. For example, in predicting how events like winning the lottery might affect their happiness, people are likely to overestimate future positive feelings, ignoring the numerous other factors that might contribute to their emotional state outside of the single lottery event. Some of the cognitive biases related to systematic errors in affective forecasts are focalism, hot-cold empathy gap, and impact bias. Applications: While affective forecasting has traditionally drawn the most attention from economists and psychologists, their findings have in turn generated interest from a variety of other fields, including happiness research, law, and health care. Its effect on decision-making and well-being is of particular concern to policy-makers and analysts in these fields, although it also has applications in ethics. For example, one's tendency to underestimate one's ability to adapt to life-changing events has led to legal theorists questioning the assumptions behind tort damage compensation. Behavioral economists have incorporated discrepancies between forecasts and actual emotional outcomes into their models of different types of utility and welfare. This discrepancy also concerns healthcare analysts, in that many important health decisions depend upon patients' perceptions of their future quality of life. Overview: Affective forecasting can be divided into four components: predictions about valence (i.e. 
positive or negative), the specific emotions experienced, their duration, and their intensity. While errors may occur in all four components, research overwhelmingly indicates that the two areas most prone to bias, usually in the form of overestimation, are duration and intensity. Immune neglect is a form of impact bias in response to negative events, in which people fail to predict how much their recovery will be hastened by their psychological immune system. The psychological immune system is a metaphor "for that system of defenses that helps you feel better when bad things happen", according to Gilbert. On average, people are fairly accurate about predicting which emotions they will feel in response to future events. However, some studies indicate that predicting specific emotions in response to more complex social events leads to greater inaccuracy. For example, one study found that while many women who imagine encountering gender harassment predict feelings of anger, in reality, a much higher proportion report feelings of fear. Other research suggests that accuracy in affective forecasting is greater for positive affect than negative affect, suggesting an overall tendency to overreact to perceived negative events. Gilbert and Wilson posit that this is a result of the psychological immune system. While affective forecasts take place in the present moment, researchers also investigate their future outcomes. That is, they analyze forecasting as a two-step process, encompassing a current prediction as well as a future event. Breaking down the present and future stages allows researchers to measure accuracy, as well as tease out how errors occur. Gilbert and Wilson, for example, categorize errors based on which component they affect and when they enter the forecasting process. In the present phase of affective forecasting, forecasters bring to mind a mental representation of the future event and predict how they will respond emotionally to it. The future phase includes the initial emotional response to the onset of the event, as well as subsequent emotional outcomes, for example, the fading of the initial feeling. When errors occur throughout the forecasting process, people are vulnerable to biases. These biases prevent people from accurately predicting their future emotions. Errors may arise due to extrinsic factors, such as framing effects, or intrinsic ones, such as cognitive biases or expectation effects. Because accuracy is often measured as the discrepancy between a forecaster's present prediction and the eventual outcome, researchers also study how time affects affective forecasting. For example, the tendency for people to represent distant events differently from close events is captured in construal level theory. The finding that people are generally inaccurate affective forecasters has been most obviously incorporated into conceptualizations of happiness and its successful pursuit, as well as decision making across disciplines. Findings in affective forecasting have stimulated philosophical and ethical debates, for example, on how to define welfare. On an applied level, findings have informed various approaches to healthcare policy, tort law, consumer decision making, and measuring utility (see below sections on economics, law, and health). Overview: Newer and conflicting evidence suggests that intensity bias in affective forecasting may not be as strong as previous research indicates.
Five studies, including a meta-analysis, found evidence that overestimation in affective forecasting is partly due to the methodology of past research. Their results indicate that some participants misinterpreted specific questions in affective forecasting testing. For example, one study found that undergraduate students tended to overestimate experienced happiness levels when participants were asked how they were feeling in general, with and without reference to the election, compared to when participants were asked how they were feeling specifically in reference to the election. Findings indicated that 75%–81% of participants who were asked general questions misinterpreted them. After clarification of the tasks, participants were able to more accurately predict the intensity of their emotions. Major sources of errors: Because forecasting errors are commonly framed in terms of the literature on cognitive processes, many affective forecasting errors derive from and are often presented as cognitive biases, some of which are closely related or overlapping constructs (e.g. projection bias and empathy gap). Below is a list of commonly cited cognitive processes that contribute to forecasting errors. Major sources of errors: Major sources of error in emotion Impact bias One of the most common sources of error in affective forecasting across various populations and situations is impact bias, the tendency to overestimate the emotional impact of a future event, whether in terms of intensity or duration. The tendencies to overestimate intensity and duration are both robust and reliable errors found in affective forecasting. One study documenting impact bias examined college students participating in a housing lottery. These students predicted how happy or unhappy they would be one year after being assigned to either a desirable or an undesirable dormitory. These college students predicted that the lottery outcomes would lead to meaningful differences in their own level of happiness, but follow-up questionnaires revealed that students assigned to desirable or undesirable dormitories reported nearly the same levels of happiness. Thus, differences in forecasts overestimated the impact of the housing assignment on future happiness. Major sources of errors: Some studies specifically address "durability bias," the tendency to overestimate the length of time future emotional responses will last. Even if people accurately estimate the intensity of their future emotions, they may not be able to estimate their duration. Durability bias is generally stronger in reaction to negative events. This is important because people tend to work toward events they believe will cause lasting happiness, and according to durability bias, people might be working toward the wrong things. Similar to impact bias, durability bias causes a person to overemphasize where the root cause of their happiness lies. Impact bias is a broad term and covers a multitude of more specific errors. Proposed causes of impact bias include mechanisms like immune neglect, focalism, and misconstruals. The pervasiveness of impact bias in affective forecasts is of particular concern to healthcare specialists, in that it affects both patients' expectations of future medical events as well as patient-provider relationships. (See health.) Expectation effects Previously formed expectations can alter emotional responses to the event itself, motivating forecasters to confirm or debunk their initial forecasts.
In this way, the self-fulfilling prophecy can lead to the perception that forecasters have made accurate predictions. Inaccurate forecasts can also become amplified by expectation effects. For example, a forecaster who expects a movie to be enjoyable will, upon finding it dull, like it significantly less than a forecaster who had no expectations. Major sources of errors: Sense-making processes Major life events can have a huge impact on people's emotions for a very long time, but the intensity of that emotion tends to decrease with time, a phenomenon known as emotional evanescence. When making forecasts, forecasters often overlook this phenomenon. Psychologists have suggested that emotion does not decay over time predictably like radioactive isotopes but that the mediating factors are more complex. People have psychological processes that help dampen emotions. Psychologists have proposed that surprising, unexpected, or unlikely events cause more intense emotional reactions. Research suggests that people are unhappy with randomness and chaos and that they automatically think of ways to make sense of an event when it is surprising or unexpected. This sense-making helps individuals recover from negative events more quickly than they would have expected. This is related to immune neglect in that when these unwanted acts of randomness occur, people become upset and try to find meaning or ways to cope with the event. The way that people try to make sense of the situation can be considered a coping strategy made by the body. This idea differs from immune neglect in that sense-making is more of a momentary process, whereas immune neglect begins coping with an event before it even happens. Major sources of errors: One study documents how sense-making processes decrease emotional reactions. The study found that a small gift produced greater emotional reactions when it was not accompanied by a reason than when it was, arguably because the reason facilitated the sense-making process, dulling the emotional impact of the gift. Researchers have summarized that pleasant feelings are prolonged after a positive situation if people are uncertain about the situation. People fail to anticipate that they will make sense of events in a way that will diminish the intensity of the emotional reaction. This error is known as ordinization neglect. For example, an employee might believe, "I will be ecstatic for many years if my boss agrees to give me a raise", especially if the employee believes that a raise is unlikely. Immediately after having the request approved, the employee may be thrilled, but with time the employee makes sense of the situation (e.g., "I am a very hard worker and my boss must have noticed this"), thus dampening the emotional reaction. Major sources of errors: Immune neglect Gilbert et al. originally coined the term immune neglect (or immune bias) to describe a function of the psychological immune system, which is the set of processes that restore positive emotions after the experience of negative emotions. Immune neglect is people's unawareness of their tendency to adapt to and cope with negative events. Unconsciously, the body will identify a stressful event and try to cope with the event or try to avoid it. Bolger & Zuckerman found that coping strategies vary between individuals and are influenced by their personalities.
They assumed that, since people generally do not take their coping strategies into account when they predict future events, people with better coping strategies should show a bigger impact bias, that is, a greater difference between their predicted and actual outcomes. For example, asking someone who is afraid of clowns how going to a circus would feel may result in an overestimation of fear, because the anticipation of such fear causes the person to begin coping with the negative event. Hoerger et al. examined this further by studying college students' emotions toward football games. They found that students who generally coped with their emotions instead of avoiding them showed a greater impact bias when predicting how they would feel if their team lost a game, and that those with better coping strategies recovered more quickly. Since the participants did not think about their coping strategies when making predictions, those who actually coped had a greater impact bias, whereas those who avoided their emotions felt very close to what they had predicted. In other words, students who were able to deal with their emotions were able to recover from their feelings. The students were unaware that coping with the stress was what made them feel better than not dealing with it would have. Hoerger ran another study on immune neglect after this, which examined both daters' and non-daters' forecasts about Valentine's Day and how they would feel in the days that followed. Hoerger found that different coping strategies caused people to have different emotions in the days following Valentine's Day, but participants' predicted emotions were all similar. This shows that most people do not realize the impact that coping can have on their feelings following an emotional event. He also found that immune neglect created a bias not only for negative events but also for positive ones. This shows that people continually make inaccurate forecasts because they do not take into account their ability to cope with and overcome emotional events. Hoerger proposed that coping styles and cognitive processes are associated with actual emotional reactions to life events. A variant of immune neglect also proposed by Gilbert and Wilson is the region-beta paradox, in which recovery from more intense suffering is faster than recovery from less intense experiences because of the engagement of coping systems. This complicates forecasting, leading to errors. Conversely, accurate affective forecasting can also promote the region-beta paradox. For example, Cameron and Payne conducted a series of studies to investigate the relationship between affective forecasting and the collapse-of-compassion phenomenon, which refers to the tendency for people's compassion to decrease as the number of people in need of help increases. Participants in their experiments read about either 1 child or a group of 8 children from Darfur. These researchers found that people who are skilled at regulating their emotions tended to experience less compassion in response to stories about 8 children from Darfur compared to stories about only 1 child. These participants appeared to collapse their compassion by correctly forecasting their future affective states and proactively avoiding the increased negative emotions resulting from the story.
To further establish the causal role of proactive emotion regulation in this phenomenon, participants in another study read the same materials and were encouraged either to reduce or to experience their emotions. Participants instructed to reduce their emotions reported feeling less upset about 8 children than about 1, presumably because of the increased emotional burden and effort required for the former (an example of the region-beta paradox). These studies suggest that in some cases accurate affective forecasting can actually promote unwanted outcomes, such as the collapse of compassion, by way of the region-beta paradox. Major sources of errors: Positive vs negative affect Research suggests that the accuracy of affective forecasting for positive and negative emotions depends on the temporal distance of the forecast. Finkenauer, Gallucci, van Dijk, and Pollman discovered that people show greater forecasting accuracy for positive than for negative affect when the event or trigger being forecast is more distant in time. Conversely, people exhibit greater affective forecasting accuracy for negative affect when the event or trigger is closer in time. The accuracy of an affective forecast is also related to how well a person predicts the intensity of his or her emotions. In regard to forecasting both positive and negative emotions, Levine, Kaplan, Lench, and Safer have recently shown that people can in fact predict the intensity of their feelings about events with a high degree of accuracy. This finding is contrary to much of the published affective forecasting literature, which the authors suggest is due to a procedural artifact in how those studies were conducted. Major sources of errors: Another important affective forecasting bias is fading affect bias, in which the emotions associated with unpleasant memories fade more quickly than the emotions associated with positive events. Major sources of errors: Major sources of error in cognition Focalism Focalism (or the "focusing illusion") occurs when people focus too much on certain details of an event, ignoring other factors. Research suggests that people have a tendency to exaggerate aspects of life when focusing their attention on them. A well-known example originates from a paper by Kahneman and Schkade, who coined the term "focusing illusion" in 1998. They found that although people tended to believe that someone from the Midwest would be more satisfied if they lived in California, results showed equal levels of life satisfaction among residents of both regions. In this case, the easily observed difference in weather bore more weight in predicting satisfaction than other factors. Many other factors could have contributed to judgments of satisfaction, but the focal point of participants' predictions was the weather. Various studies have attempted to "defocus" participants, that is, to make them consider other factors or look at the situation through a different lens rather than concentrating on a single factor. Results were mixed, depending on the methods used. One successful study asked people to imagine how happy a winner of the lottery and a recently diagnosed HIV patient would be.
The researchers were able to reduce focalism by exposing participants to detailed and mundane descriptions of each person's life: the more information the participants had about the lottery winner and the HIV patient, the less able they were to focus on only a few factors, and these participants subsequently estimated similar levels of happiness for the HIV patient and the lottery winner. The control participants, by contrast, made unrealistically disparate predictions of happiness. This could be due to the fact that the more information that is available, the less likely it is that one will be able to ignore contributory factors. Major sources of errors: Time discounting Time discounting (or time preference) is the tendency to weigh present events over future events. Immediate gratification is preferred to delayed gratification, especially over longer periods of time and with younger children or adolescents. For example, a child may prefer one piece of candy now to five pieces of candy in four months: even though the delayed reward is five times larger, its value is steeply discounted over the roughly 120-day delay (a worked numerical sketch appears at the end of this entry). This pattern is sometimes referred to as hyperbolic discounting or "present bias" because people's judgments are biased toward present events. Economists often cite time discounting as a source of mispredictions of future utility. Major sources of errors: Memory Affective forecasters often rely on memories of past events. When people report memories of past events they may leave out important details, change things that occurred, and even add things that have not happened. This suggests the mind constructs memories based on what actually happened as well as other factors, including the person's knowledge, experiences, and existing schemas. Using highly available but unrepresentative memories increases the impact bias. Baseball fans, for example, tend to use the best game they can remember as the basis for their affective forecast of the game they are about to see, and commuters are similarly likely to base their forecasts of how unpleasant it would feel to miss a train on their memory of the worst time they missed one. Various studies indicate that retrospective assessments of past experiences are prone to various errors, such as duration neglect or decay bias. People tend to overemphasize the peaks and ends of their experiences when assessing them (peak/end bias), instead of analyzing the event as a whole. For example, in recalling painful experiences, people place greater emphasis on the most discomforting moments as well as the end of the event, as opposed to taking into account the overall duration. Retrospective reports often conflict with present-moment reports of events, further pointing to contradictions between the emotions actually experienced during an event and the memory of them. In addition to producing errors in forecasts about the future, this discrepancy has prompted economists to redefine different types of utility and happiness (see the section on economics). Major sources of errors: Another problem that can arise with affective forecasting is that people tend to remember their past predictions inaccurately. Meyvis, Ratner, and Levav predicted that people forget how they expected an experience to feel and instead assume that their predictions matched their actual emotions.
Because of this, people do not realize that they made a mistake in their predictions and will continue to inaccurately forecast similar situations in the future. Meyvis et al. ran five studies to test whether this is true. In all of their studies, they found that when people were asked to recall their previous predictions, they instead reported how they currently felt about the situation. This shows that they do not remember how they thought they would feel, which makes it difficult for them to learn from the event for future experiences. Major sources of errors: Misconstruals When predicting future emotional states, people must first construct a good representation of the event. If people have a lot of experience with the event, they can easily picture it. When people do not have much experience with the event, they need to create a representation of what the event likely contains. For example, if people were asked how they would feel if they lost one hundred dollars in a bet, gamblers would be more likely to easily construct an accurate representation of the event. Construal level theory holds that distant events are conceptualized more abstractly than immediate ones. Thus, psychologists suggest that a lack of concrete details prompts forecasters to rely on more general or idealized representations of events, which subsequently leads to simplistic and inaccurate predictions. For example, when asked to imagine what a "good day" would be like for them in the near future, people often describe both positive and negative events. When asked to imagine what a "good day" would be like for them in a year, however, people resort to more uniformly positive descriptions. Gilbert and Wilson call bringing to mind a flawed representation of a forecasted event the misconstrual problem. Framing effects, environmental context, and heuristics (such as schemas) can all affect how a forecaster conceptualizes a future event. For example, the way options are framed affects how they are represented: when asked to forecast future levels of happiness based on pictures of the dorms they may be assigned to, college students use physical features of the buildings to predict their emotions. In this case, the framing of options highlighted visual aspects of future outcomes, which overshadowed factors more relevant to happiness, such as having a friendly roommate. Major sources of errors: Projection bias Overview Projection bias is the tendency to falsely project current preferences onto a future event. When people try to estimate their emotional state in the future, they attempt to give an unbiased estimate, but their assessments are contaminated by their current emotional state, which can make it difficult to predict their future emotional state, an occurrence known as mental contamination. For example, if a college student was currently in a negative mood because he had just found out he failed a test, and he forecast how much he would enjoy a party two weeks later, his current negative mood might influence his forecast.
To make an accurate forecast, the student would need to be aware that his forecast is biased due to mental contamination, be motivated to correct the bias, and be able to correct the bias in the right direction and magnitude. Projection bias can arise from empathy gaps (or hot/cold empathy gaps), which occur when the present and future phases of affective forecasting are characterized by different states of physiological arousal that the forecaster fails to take into account. For example, forecasters in a state of hunger are likely to overestimate how much they will want to eat later, overlooking the effect of their hunger on future preferences. As with projection bias, economists use the visceral motivations that produce empathy gaps to help explain impulsive or self-destructive behaviors, such as smoking. An important affective forecasting bias related to projection bias is personality neglect, a person's tendency to overlook their own personality when making decisions about their future emotions. In a study conducted by Quoidbach and Dunn, students' predictions of their feelings about future exam scores were used to measure affective forecasting errors related to personality. They found that college students who predicted their future emotions about their exam scores were unable to relate these emotions to their own dispositional happiness. To further investigate personality neglect, Quoidbach and Dunn studied happiness in relation to neuroticism. People predicted their future feelings about the outcome of the 2008 US presidential election between Barack Obama and John McCain. Neuroticism was correlated with impact bias, the overestimation of the length and intensity of emotions. People who rated themselves higher in neuroticism overestimated their happiness in response to the election of their preferred candidate, suggesting that they failed to relate their dispositional happiness to their future emotional state. The term "projection bias" was first introduced in the 2003 paper "Projection Bias in Predicting Future Utility" by Loewenstein, O'Donoghue and Rabin. Major sources of errors: Market applications of projection bias The novelty of new products often overexcites consumers and results in the negative consumption externality of impulse buying. To counteract this, George Loewenstein recommends offering "cooling off" periods to consumers, during which they would have a few days to reflect on a purchase and develop a longer-term understanding of the utility they receive from it. This cooling-off period could also benefit the production side by diminishing the need for a salesperson to "hype" certain products. Transparency between consumers and producers would increase, as "sellers will have an incentive to put buyers in a long-run average mood rather than an overenthusiastic state". By implementing Loewenstein's recommendation, firms that understand projection bias should minimize information asymmetry; this would diminish the negative consumer externality that comes from purchasing an undesirable good and relieve sellers of the extraneous costs required to exaggerate the utility of their products. Major sources of errors: Life-cycle consumption Projection bias influences the life cycle of consumption. The immediate utility obtained from consuming particular goods exceeds the utility of future consumption.
Consequently, projection bias causes "a person to (plan to) consume too much early in life and too little late in life relative to what would be optimal". Graph 1 displays decreasing expenditures as a percentage of total income from ages 20 to 54. The period following, where income begins to decline, can be explained by retirement. According to Loewenstein's recommendation, a more optimal expenditure and income distribution is displayed in Graph 2. Here, income is left the same as in Graph 1, but expenditures are recalculated by taking the average ratio of expenditures to income from ages 25 to 54 (77.7%) and multiplying it by income to arrive at a theoretical expenditure. The calculation is applied only to this age group because of unpredictable income before 25 and after 54 due to school and retirement. Major sources of errors: Food waste When buying food, people often wrongly project what they will want to eat in the future, which results in food waste. Major sources of errors: Major sources of error in motivation Motivated reasoning Generally, affect is a potent source of motivation: people are more likely to pursue experiences and achievements that they expect to bring them pleasure. In some cases, affective forecasting errors appear to be due to forecasters' strategic use of their forecasts as a means of motivating themselves to obtain or avoid the forecasted experience. Students, for example, might predict that they would be devastated if they failed a test as a way of motivating themselves to study harder for it. The role of motivated reasoning in affective forecasting has been demonstrated in studies by Morewedge and Buechel (2013). Research participants were more likely to overestimate how happy they would be if they won a prize or achieved a goal if they made an affective forecast while they could still influence whether they achieved it than if they made the forecast after the outcome had been determined (but before they knew whether they had won the prize or achieved the goal). In economics: Economists share psychologists' interest in affective forecasting insomuch as it affects the closely related concepts of utility, decision making, and happiness. In economics: Utility Research in affective forecasting errors complicates conventional interpretations of utility maximization, which presuppose that to make rational decisions, people must be able to make accurate forecasts about future experiences or utility. Whereas economics formerly focused largely on utility in terms of a person's preferences (decision utility), the realization that forecasts are often inaccurate suggests that measuring preferences at the time of choice may be an incomplete concept of utility. Thus, economists such as Daniel Kahneman have incorporated differences between affective forecasts and later outcomes into corresponding types of utility. Whereas a current forecast reflects expected or predicted utility, the actual outcome of the event reflects experienced utility. Predicted utility is the "weighted average of all possible outcomes under certain circumstances." Experienced utility refers to the perceptions of pleasure and pain associated with an outcome. Kahneman and Thaler provide the example of "the hungry shopper," who takes pleasure in the purchase of food due to their current state of hunger.
The usefulness of such a purchase is based on the shopper's current experience and their anticipated pleasure in fulfilling their hunger. In economics: Decision making Affective forecasting is an important component of studying human decision making. Research in affective forecasts and economic decision making includes investigations of durability bias in consumers and predictions of public transit satisfaction. Regarding durability bias in consumers, a study conducted by Wood and Bettman showed that people make decisions about the consumption of goods based on the predicted pleasure, and the duration of that pleasure, that the goods will bring them. Overestimation of such pleasure, and of its duration, increases the likelihood that the good will be consumed. Knowledge of this effect can aid in forming marketing strategies for consumer goods. Studies of predictions of public transit satisfaction reveal the same bias, but with a negative impact on consumption: due to their lack of experience with public transportation, car users predict that they will receive less satisfaction from using public transportation than they actually experience. This can lead them to refrain from using such services because of inaccurate forecasting. Broadly, the tendencies people have to make biased forecasts deviate from rational models of decision making, which presume an absence of bias in favor of comparisons based on all relevant and available information. Affective forecasting may cause consumers to rely on the feelings associated with consumption rather than the utility of the good itself. One application of affective forecasting research is in economic policy. The knowledge that forecasts, and therefore decisions, are affected by biases as well as other factors (such as framing effects) can be used to design policies that maximize the utility of people's choices. This approach is not without its critics, however, as it can also be seen to justify economic paternalism. Prospect theory describes how people make decisions. It differs from expected utility theory in that it takes into account the relativity of how people view utility and incorporates loss aversion, the tendency to react more strongly to losses than to gains. Some researchers suggest that loss aversion is itself an affective forecasting error, since people often overestimate the impact of future losses. In economics: Happiness and well-being Economic definitions of happiness are tied to concepts of welfare and utility, and researchers are often interested in how to increase levels of happiness in the population. The economy has a major influence on the aid provided through welfare programs because it provides funding for such programs. Many welfare programs focus on providing assistance with the attainment of basic necessities such as food and shelter, perhaps because happiness and well-being are best derived from personal perceptions of one's ability to provide these necessities. This is supported by research showing that after basic needs have been met, income has less of an impact on perceptions of happiness. Additionally, the availability of such welfare programs can enable those who are less fortunate to have additional discretionary income.
Discretionary income can be dedicated to enjoyable experiences, such as family outings, and in turn provides an additional dimension to people's feelings and experience of happiness. Affective forecasting poses a unique challenge to the question of how best to increase levels of happiness, and economists are split between offering more choices to maximize happiness and offering experiences that contain more objective or experienced utility. Experienced utility refers to how useful an experience is in its contribution to feelings of happiness and well-being, and can apply to both material and experiential purchases. Studies show that experiential purchases, such as a bag of chips, result in forecasts of higher levels of happiness than material purchases, such as the purchase of a pen. This prediction of happiness resulting from a purchase experience exemplifies affective forecasting. It is possible that an increase in the choices, or means, of achieving desired levels of happiness will predict increased levels of happiness. For example, if people are happy with their ability to provide themselves with both a choice of necessities and a choice of enjoyable experiences, they are more likely to predict that they will be happier than if they were forced to choose between one or the other. Also, when people are able to reference multiple experiences that contribute to their feelings of happiness, more opportunities for comparison will lead to a forecast of more happiness. Under these circumstances, both the number of choices and the quantity of experienced utility have the same effect on affective forecasting, which makes it difficult to choose a side in the debate over which method is most effective in maximizing happiness. In economics: Applying findings from affective forecasting research to happiness also raises methodological issues: should happiness measure the outcome of an experience, or the satisfaction experienced as a result of the choice made based upon a forecast? For example, although professors may forecast that getting tenure would significantly increase their happiness, research suggests that in reality the difference in happiness between professors who are and are not awarded tenure is insignificant. In this case, happiness is measured in terms of the outcome of an experience. Affective forecasting conflicts such as this one have also influenced theories of hedonic adaptation, which compares happiness to a treadmill, in that it remains relatively stable despite forecasts. In law: Similar to how some economists have drawn attention to the ways affective forecasting violates assumptions of rationality, legal theorists point out that inaccuracies in these forecasts, and applications of them, have implications in law that have remained overlooked. The application of affective forecasting and its related research to legal theory reflects a wider effort to address how emotions affect the legal system. In addition to influencing legal discourse on emotions and welfare, Jeremy Blumenthal cites additional implications of affective forecasting for tort damages, capital sentencing, and sexual harassment. In law: Tort damages Jury awards for tort damages are based on compensating victims for pain, suffering, and loss of quality of life. However, findings on affective forecasting errors have prompted some to suggest that juries overcompensate victims, since their forecasts overestimate the negative impact of damages on the victims' lives.
Some scholars suggest implementing jury education to attenuate potentially inaccurate predictions, drawing upon research that investigates how to reduce inaccurate affective forecasts. In law: Capital sentencing During the process of capital sentencing, juries are allowed to hear victim impact statements (VIS) from the victim's family. These statements involve affective forecasting in that their purpose is to present how the victim's family has been affected emotionally and how they expect to be affected in the future. Such statements can cause juries to overestimate the emotional harm, leading to overly harsh sentencing, or to underestimate the harm, resulting in inadequate sentencing. The time frame in which these statements are presented also influences affective forecasting. As the time gap between the crime itself and sentencing (the time at which victim impact statements are given) increases, forecasts are more likely to be influenced by the error of immune neglect (see Immune neglect). Immune neglect is likely to lead to underestimation of future emotional harm, and therefore to inadequate sentencing. As with tort damages, jury education is a proposed method for alleviating the negative effects of forecasting error. In law: Sexual harassment In cases involving sexual harassment, judgments are more likely to blame the victim for failing to react in a timely fashion or failing to make use of the services that were available to them. This is because, prior to the actual experience of harassment, people tend to overestimate both their affective reactions and their proactive reactions to sexual harassment. This exemplifies the focalism error (see Focalism), in which forecasters ignore alternative factors that may influence one's reaction, or failure to react. For example, Woodzicka and LaFrance studied women's predictions of how they would react to sexual harassment during an interview. Forecasters overestimated their affective reactions of anger while underestimating the level of fear they would experience. They also overestimated their proactive reactions: in Study 1, participants reported that they would refuse to answer questions of a sexual nature and/or report the question to the interviewer's supervisor, yet in Study 2, none of those who had actually experienced sexual harassment during an interview displayed either proactive reaction. If juries are able to recognize such errors in forecasting, they may be able to adjust for them. Additionally, if juries are educated about other factors that may influence the reactions of victims of sexual harassment, such as intimidation, they are more likely to make accurate forecasts and less likely to blame victims for their own victimization. In health: Affective forecasting has implications for health decision making and for medical ethics and policy. Research in health-related affective forecasting suggests that nonpatients consistently underestimate the quality of life associated with chronic health conditions and disability. The so-called "disability paradox" describes the discrepancy between self-reported levels of happiness among chronically ill people and the predictions of their happiness levels by healthy people. The implications of this forecasting error for medical decision making can be severe, because judgments about future quality of life often inform health decisions.
Inaccurate forecasts can lead patients, or more commonly their health care agents, to refuse life-saving treatment in cases where the treatment would involve a drastic change in lifestyle, such as the amputation of a leg. A patient or health care agent who falls victim to focalism would fail to take into account all the aspects of life that would remain the same after losing a limb. Although Halpern and Arnold suggest interventions to foster awareness of forecasting errors and improve medical decision making among patients, the lack of direct research on the impact of biases in medical decisions presents a significant challenge. Research also indicates that affective forecasts about future quality of life are influenced by the forecaster's current state of health. Whereas healthy individuals associate future low health with low quality of life, less healthy individuals do not necessarily forecast low quality of life when imagining having poorer health. Thus, patients' forecasts and preferences about their own quality of life may conflict with public notions. Because a primary goal of healthcare is maximizing quality of life, knowledge about patients' forecasts can potentially inform policy on how resources are allocated. Some doctors suggest that research findings on affective forecasting errors merit medical paternalism. Others argue that although biases exist and should support changes in doctor-patient communication, they do not unilaterally diminish decision-making capacity and should not be used to endorse paternalistic policies. This debate captures the tension between medicine's emphasis on protecting the autonomy of the patient and an approach that favors intervention in order to correct biases. Improving forecasts: Individuals who have recently experienced an emotionally charged life event will display the impact bias: they predict that they will feel happier about the event than they actually do. Another factor that influences overestimation is focalism, which causes individuals to concentrate on the current event and fail to realize that other events will also influence how they feel. Lam et al. (2005) found that the perspective individuals take influences their susceptibility to biases when making predictions about their feelings. A perspective that counteracts impact bias is mindfulness, a skill that individuals can learn to help them avoid overestimating their feelings. Being mindful helps the individual understand that they may currently feel negative emotions, but that the feelings are not permanent. The Five Facet Mindfulness Questionnaire (FFMQ) can be used to measure an individual's mindfulness. The five facets of mindfulness are observing, describing, acting with awareness, non-judging of inner experience, and non-reactivity to inner experience. The two most important facets for improving forecasts are observing and acting with awareness. The observing facet assesses how often an individual attends to their sensations, emotions, and outside environment. The ability to observe allows the individual to avoid focusing on one single event and to be aware that other experiences will influence their current emotions. Acting with awareness involves attending to current activities with careful consideration and concentration.
Emanuel, Updegraff, Kalmbach, and Ciesla (2010) stated that the ability to act with awareness reduces the impact bias because the individual is more aware that other events co-occur with the present event. Being able to observe the current event can help individuals focus on pursuing future events that provide long-term satisfaction and fulfillment.
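The candy example under Time discounting above can be made concrete with a short numerical sketch of the one-parameter hyperbolic model, V = A/(1 + kD). This sketch is an added illustration, not part of the original article; the discount rate k and the 120-day delay are assumed values chosen only to show the present-bias pattern.

```python
# Illustrative sketch of hyperbolic (time) discounting.
# The discount parameter k (per day) and the ~4-month delay are
# assumed values for illustration, not figures from the article.

def hyperbolic_value(amount, delay_days, k):
    """Present value under hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay_days)

now_candy = hyperbolic_value(1, 0, k=0.05)      # 1 candy, no delay -> 1.0
later_candy = hyperbolic_value(5, 120, k=0.05)  # 5 candies in ~4 months

print(f"1 candy now is worth      {now_candy:.2f}")
print(f"5 candies later are worth {later_candy:.2f}")
# With a steep discount rate (k = 0.05 per day), 5/(1 + 0.05*120) = 0.71,
# so the single immediate candy is preferred -- the present-bias pattern.
```

With a shallower assumed rate (e.g. k = 0.005 per day), the delayed five candies would be worth about 3.1 immediate candies and would dominate, which is why steep discounting of delayed rewards is read as present bias.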
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Core–mantle boundary** Core–mantle boundary: The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of 2,891 km (1,796 mi) below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth, due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle, while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB, possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone, which appear to be dominated by the African and Pacific Large Low-Shear-Velocity Provinces (LLSVPs). The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. Variations in the thermal properties of the core–mantle boundary may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field. The D″ region: The approximately 200 km thick layer of the lower mantle directly above the boundary is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from mathematician Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of the model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. The upper part of the D layer, about 1800 km thick, was renamed D′ (D prime) and the lower part (the bottom 200 km) was named D″. Later it was found that D″ is non-spherical. In 1993, Czechowski found that inhomogeneities in D″ form structures analogous to continents (i.e. core-continents). They move over time and determine some properties of hotspots and mantle convection. Later research supported this hypothesis. Seismic discontinuity: A seismic discontinuity occurs within Earth's interior at a depth of about 2,900 km (1,800 mi) below the surface, where there is an abrupt change in the speed of seismic waves (generated by earthquakes or explosions) that travel through Earth. At this depth, primary seismic waves (P waves) decrease in velocity while secondary seismic waves (S waves) disappear completely. S waves shear material and cannot transmit through liquids, so it is thought that the unit above the discontinuity is solid, while the unit below is in a liquid or molten form (a numerical sketch of this relation appears at the end of this entry). Seismic discontinuity: The discontinuity was discovered by Beno Gutenberg (1889-1960), a seismologist who made several important contributions to the study and understanding of the Earth's interior. The CMB has also been referred to as the Gutenberg discontinuity, the Oldham-Gutenberg discontinuity, or the Wiechert-Gutenberg discontinuity.
In modern times, however, the term Gutenberg discontinuity or the "G" is most commonly used in reference to a decrease in seismic velocity with depth that is sometimes observed at about 100 km below the Earth's oceans.
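The velocity behavior described above follows from the standard elastic wave relations v_p = √((K + 4μ/3)/ρ) and v_s = √(μ/ρ): a fluid has zero shear modulus μ, so S-waves cannot propagate in the outer core. A minimal added sketch follows; the moduli and densities are rough, PREM-like values assumed purely for illustration, not figures from this article.

```python
import math

# Why S-waves vanish at the core-mantle boundary: v_s = sqrt(mu / rho),
# and a fluid has shear modulus mu = 0. The moduli and densities below
# are rough, PREM-like illustrative values, not authoritative figures.

def vp(K, mu, rho):
    """P-wave speed from bulk modulus K, shear modulus mu, density rho."""
    return math.sqrt((K + 4.0 * mu / 3.0) / rho)

def vs(mu, rho):
    """S-wave speed; zero whenever the shear modulus is zero (a fluid)."""
    return math.sqrt(mu / rho)

# Lowermost mantle (solid): K ~ 6.6e11 Pa, mu ~ 2.9e11 Pa, rho ~ 5570 kg/m^3
print("mantle: vp = %.1f km/s, vs = %.1f km/s" %
      (vp(6.6e11, 2.9e11, 5570) / 1e3, vs(2.9e11, 5570) / 1e3))

# Top of outer core (liquid): K ~ 6.4e11 Pa, mu = 0, rho ~ 9900 kg/m^3
print("core:   vp = %.1f km/s, vs = %.1f km/s" %
      (vp(6.4e11, 0.0, 9900) / 1e3, vs(0.0, 9900) / 1e3))
# Output shows both the P-wave slowdown and the complete loss of S-waves.
```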
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quantum geometry** Quantum geometry: In theoretical physics, quantum geometry is the set of mathematical concepts generalizing the concepts of geometry whose understanding is necessary to describe the physical phenomena at distance scales comparable to the Planck length. At these distances, quantum mechanics has a profound effect on physical phenomena. Quantum gravity: Each theory of quantum gravity uses the term "quantum geometry" in a slightly different fashion. String theory, a leading candidate for a quantum theory of gravity, uses the term quantum geometry to describe exotic phenomena such as T-duality and other geometric dualities, mirror symmetry, topology-changing transitions, a minimal possible distance scale, and other effects that challenge intuition. More technically, quantum geometry refers to the shape of a spacetime manifold as experienced by D-branes, which includes quantum corrections to the metric tensor, such as the worldsheet instantons. For example, the quantum volume of a cycle is computed from the mass of a brane wrapped on this cycle. Quantum gravity: In an alternative approach to quantum gravity called loop quantum gravity (LQG), the phrase "quantum geometry" usually refers to the formalism within LQG where the observables that capture the information about the geometry are well-defined operators on a Hilbert space. In particular, certain physical observables, such as the area, have a discrete spectrum. It has also been shown that the loop quantum geometry is non-commutative. It is possible (but considered unlikely) that this strictly quantized understanding of geometry will be consistent with the quantum picture of geometry arising from string theory. Quantum gravity: Another, quite successful, approach, which tries to reconstruct the geometry of space-time from "first principles", is discrete Lorentzian quantum gravity. Quantum states as differential forms: Differential forms are used to express quantum states, using the wedge product: |ψ⟩ = ∫ ψ(x, t) |x, t⟩ d³x, where the position vector is x = (x¹, x², x³), the differential volume element is d³x = dx¹∧dx²∧dx³, and x¹, x², x³ are an arbitrary set of coordinates; the upper indices indicate contravariance, lower indices indicate covariance, so explicitly the quantum state in differential form is: |ψ⟩ = ∫ ψ(x¹, x², x³, t) |x¹, x², x³, t⟩ dx¹∧dx²∧dx³. The overlap integral is given by: ⟨χ|ψ⟩ = ∫ χ∗ψ d³x, which in differential form is ⟨χ|ψ⟩ = ∫ χ∗ψ dx¹∧dx²∧dx³. The probability of finding the particle in some region of space R is given by the integral over that region: ⟨ψ|ψ⟩ = ∫_R ψ∗ψ dx¹∧dx²∧dx³, provided the wave function is normalized. When R is all of 3d position space, the integral must be 1 if the particle exists. Quantum states as differential forms: Differential forms are an approach for describing the geometry of curves and surfaces in a coordinate-independent way. In quantum mechanics, idealized situations occur in rectangular Cartesian coordinates, such as the potential well, particle in a box, and quantum harmonic oscillator, while more realistic approximations, such as electrons in atoms and molecules, occur in spherical polar coordinates. For generality, a formalism which can be used in any coordinate system is useful.
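As a brief worked illustration of the coordinate independence stressed in the last paragraph (an added sketch, relying only on the standard change-of-variables result, not on any one source): under a change to spherical polar coordinates (r, θ, φ), the same volume 3-form acquires the Jacobian factor r² sin θ, so the normalization condition can be written equivalently as

```latex
d^3x = dx^1 \wedge dx^2 \wedge dx^3
     = r^2 \sin\theta \, dr \wedge d\theta \wedge d\varphi ,
\qquad
\langle\psi|\psi\rangle
     = \int_0^{\infty}\!\int_0^{\pi}\!\int_0^{2\pi}
       \psi^{*}\psi \, r^{2}\sin\theta \, d\varphi \, d\theta \, dr = 1 .
```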
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Friction stud welding** Friction stud welding: Friction stud welding is a solid phase welding technique involving a stud or appurtenance being rotated at high speed while being forced against a substrate, generating heat by friction. The metal surfaces reach a temperature at which they flow plastically under pressure, surface impurities are expelled and a forged weld is formed. This technique is rather more costly than arc stud welding and is therefore used for special applications where arc welding may present problems, such as: welding underwater; welding on live subsea pipelines to attach anodes; welding in explosive environments and zoned areas; welding materials that are difficult to join by fusion welding processes; and friction plug welding. Portable equipment for friction stud welding is available for use on construction work sites, offshore, underwater and in workshops. These portable units are much lighter and smaller than the large static friction welding machines which are used, for example, in factories to weld engine components such as drive shafts. Principle of operation: A portable friction stud welding tool consists of a motor to rotate the stud at high speed and a piston to apply the necessary force to the stud. The equipment may be air or hydraulically powered. A clamping system is also required to hold the tool onto the work piece and to provide reaction to the force on the stud. The clamps used are typically magnetic or vacuum clamps for flat surfaces, chain or claw clamps for pipes and various mechanical clamps for welding onto I beams or other shapes. Principle of operation: The weld is made by rotating the stud at high speed and forcing it onto the substrate, causing friction which heats the stud tip and substrate surface (a rough numerical sketch of this frictional heating appears at the end of this entry). Metal at the interface between the stud and the substrate flows plastically under pressure, removing impurities from the metal surfaces, and a solid phase weld is formed. The rotation of the stud is then stopped but the force on the stud is maintained for a few seconds. The maximum temperatures reached during welding are much lower than the melting point of the metals. Advantages and disadvantages: Some notable advantages of the process are: the relatively low temperature at which the weld is formed means that the process can be adapted for applications such as welding on live pipelines and in explosive environments; the absence of an electric arc and a liquid phase in the metal avoids some of the potential problems encountered with arc welding, such as contamination of the weld with hydrogen, nitrogen and oxygen; and the rapid weld cycle time (typically 5 to 10 seconds) and the method of weld formation result in a fine grain structure. In the "as welded" condition the residual stresses are compressive, which tends to result in good fatigue life. Studs can also be welded through epoxy paint or rubber coatings. Advantages and disadvantages: The main disadvantages of the process are: it can only be used to weld relatively small components (such as studs or plugs), which can be rotated at high speed, onto a work piece; and the systems used are limited to studs up to typically 25 mm diameter and plugs for filling holes up to typically 25 mm diameter (plug welding). Advantages and disadvantages: The system requires a rigid clamp to hold the welding tool on the work piece and withstand the force applied to the stud during welding.
Although these clamps can be moved from one weld location to the next quite rapidly, they are generally larger and more cumbersome than those used with arc stud welding systems. Applications: For the type of applications listed here it is especially important that the welding and operating procedures are fully tested and certified for both weld integrity and operational safety prior to use in production. Operators must be thoroughly trained and systems must be in place to ensure that the procedures are properly applied and risks properly assessed. Applications: Welding underwater When this process is used underwater, a shroud is fitted around the stud which prevents the weld from being cooled too rapidly by the surrounding water. The air powered systems can operate underwater to a depth of approximately 20 m and are relatively simple for divers to use. The hydraulically powered systems can also be used by divers and have been used to weld at depths in excess of 300 m from a Remotely Operated Vehicle (ROV). Current friction stud welding systems are designed to operate to a depth of approximately 1,000 m. Applications: Welding on live subsea pipelines to attach anodes Friction stud welding has been used to retrofit sacrificial anodes to subsea pipelines while the pipeline is "live" (that is, it continues to transport hydrocarbons at pressure). In some cases the anodes are placed on the sea bed next to the pipeline and a lug on a cable from the anode is connected to the stud welded on the pipeline. Another option is a tripartite weld, where the lug on the anode cable is made of steel with a tapered hole in it. The tapered end of the stud welds through the hole onto the pipeline, welding to both the lug and the pipe and providing a fully welded connection between the anode cable and the pipeline. The advantage of this method is that there is no significant increase in the electrical resistance of the connection due to corrosion during the lifetime of the pipeline. Many subsea pipelines have a concrete weight coating on them, and a small area of this can be removed with a water jet to permit welding. Applications: Welding in explosive environments and zoned areas Friction stud welding has been used to attach grating to offshore oil platforms in areas where arc welding is not permitted because of the risk of causing a fire or explosion. A shroud similar to the one used for welding underwater acts as a barrier between the weld and the surrounding atmosphere. A water screen can also be used as an additional barrier. Applications: Welding materials that are difficult to join by fusion welding processes Friction stud welding is a solid phase welding process in which the metals do not liquefy. This permits metal combinations such as welding aluminium studs to steel, which would be problematic with arc welding because of the formation of brittle inter-metallic compounds. Applications: Friction plug welding In friction plug welding a tapered plug is friction welded into a tapered hole in the substrate. This welding method can be used to repair defects in castings. It has also been used to fill the holes that occur on completion of a friction stir welding pass, when the stirring probe is withdrawn from the weld. Applications: Specific recent applications of friction stud welding include: retrofitting anodes in an FPSO oil storage tank in a Zone 1 area; retrofitting equipment in Zone 1 areas on offshore platforms; and attachment of anodes inside seawater discharge pipelines in a gas processing plant.
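As a rough, added back-of-envelope estimate (not from the original text): the heat generated at the stud tip can be approximated from the friction torque of a flat circular contact under uniform pressure, T = (2/3)·μ·F·R, multiplied by the angular speed. Every parameter value below (friction coefficient, forge force, stud size, spindle speed) is assumed purely for illustration; real welding parameters are set by qualified procedures.

```python
import math

# Rough, illustrative estimate of frictional heating at the stud tip.
# All numbers below are assumptions for the sketch, not process data.

mu = 0.3        # assumed sliding friction coefficient (steel on steel)
force = 15e3    # assumed axial forge force, N
radius = 0.006  # stud tip radius, m (a 12 mm diameter stud)
rpm = 5000      # assumed spindle speed

omega = rpm * 2 * math.pi / 60          # angular speed, rad/s
# Friction torque for a flat circular contact under uniform pressure:
# T = (2/3) * mu * F * R  (standard uniform-pressure disc/clutch model)
torque = (2.0 / 3.0) * mu * force * radius
power = torque * omega                  # heat generated at the interface, W

print(f"friction torque ~ {torque:.1f} N*m, heating power ~ {power/1e3:.1f} kW")
# Roughly 9 kW concentrated on a ~1 cm^2 tip is consistent with the
# interface reaching forging temperature within the 5 to 10 s weld
# cycle quoted above.
```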
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Finger pillory** Finger pillory: A finger pillory is a style of restraint where the fingers are held in a wooden block, using an L-shaped hole to keep the knuckle bent inside the block. The name is taken from the pillory, a much larger device used to secure the head and hands. Finger stocks were also used in churches for minor offences, like not paying attention during a sermon. An example still survives in St Helen's Church, Ashby-de-la-Zouch, Leicestershire, England.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Plantation** Plantation: Plantations are farms specializing in cash crops, usually planting a single main crop, with perhaps ancillary areas for vegetables and other foodstuffs. Plantations, centered on a plantation house, grow crops including cotton, cannabis, coffee, tea, cocoa, sugar cane, opium, sisal, oil seeds, oil palms, fruits, rubber trees and forest trees. Protectionist policies and natural comparative advantage have sometimes contributed to determining where plantations are located. Plantation: In modern use, the term usually refers only to large-scale estates. Nevertheless, before about 1800, it was the usual term for a farm of any size in the southern parts of British North America, with, as Noah Webster noted, "farm" becoming the usual term from about Maryland northward. It was used in most British colonies but very rarely in the United Kingdom itself in this sense. There, as also in America, it was used mainly for tree plantations, areas artificially planted with trees, whether purely for commercial forestry or partly for ornamental effect in gardens and parks, when it might also cover plantings of garden shrubs. Among the earliest examples of plantations were the latifundia of the Roman Empire, which produced large quantities of grain, wine, and olive oil for export. Plantation agriculture proliferated with the increase in international trade and the development of a worldwide economy that followed the expansion of European colonialism. Tree plantations: Tree plantations, in the United States often called tree farms, are established for the commercial production of timber or tree products such as palm oil, coffee, or rubber. Teak and bamboo plantations in India have given good results and an alternative crop solution to farmers of central India, where conventional farming was widespread. Due to the rising input costs of agriculture, many farmers have turned to teak and bamboo plantations, which require very little water (only during the first two years). Teak and bamboo have legal protection from theft. Bamboo, once planted, gives output for 50 years, until flowering occurs. Teak requires 20 years to grow to full maturity before it fetches returns. Tree plantations: Plantations may also be established for watershed or soil protection. They are established for erosion control, landslide stabilization, and windbreaks. Such plantations are established to foster native species and promote forest regeneration on degraded lands as a tool of environmental restoration. Tree plantations: Ecological impact Probably the most critical factor in a plantation's effect on the local environment is the site where the plantation is established. In Brazil, coffee plantations would use slash-and-burn agriculture, tearing down rainforests and planting coffee trees that depleted the nutrients in the soil. Once the soil had been sapped, growers would move on to another place. If a natural forest is cleared for a planted forest, then a reduction in biodiversity and loss of habitat will likely result. In some cases, their establishment may involve draining wetlands to replace the mixed hardwoods that formerly predominated with pine species. Tree plantations: If a plantation is established on abandoned agricultural land or highly degraded land, it can increase both habitat and biodiversity. A planted forest can be profitably established on lands that will not support agriculture or that suffer from a lack of natural regeneration. The tree species used in a plantation are also an important factor.
Where non-native varieties or species are grown, few native fauna are adapted to exploit them, and further biodiversity loss occurs. However, even non-native tree species may serve as corridors for wildlife and act as a buffer for native forests, reducing edge effects. Tree plantations: Once a plantation is established, managing it becomes an important environmental factor. The most critical aspect of management is the rotation period. Plantations harvested on longer rotation periods (30 years or more) can provide benefits similar to those of a naturally regenerated forest managed for wood production on a similar rotation. This is especially true if native species are used. In the case of exotic species, the habitat can be improved significantly if the impact is mitigated by measures such as leaving blocks of native species in the plantation or retaining corridors of natural forest. In Brazil, similar measures are required by government regulation. Tree plantations: Sugar Sugar plantations were highly valued in the Caribbean by the British and French colonists in the 17th and 18th centuries, and the use of sugar in Europe rose during this period. Sugarcane is still an important crop in Cuba. Sugar plantations also arose in countries such as Barbados and Cuba because of the natural endowments those countries had. These natural endowments included soil conducive to growing sugar and a high marginal product of labor realized through the increasing number of enslaved people. Tree plantations: Rubber Plantings of the Pará rubber tree (Hevea brasiliensis) are usually called plantations. Oil palm Oil palm agriculture is rapidly expanding across wet tropical regions and is usually developed at a plantation scale. Orchards Fruit orchards are sometimes considered to be plantations. Arable crops These include tobacco, sugarcane, pineapple, bell pepper, and cotton, especially in historical usage. Before the rise of cotton in the American South, indigo and rice were also sometimes called plantation crops. Fishing When Newfoundland was colonized by England in 1610, the original colonists were called "planters" and their fishing rooms were known as "fishing plantations". These terms were used well into the 20th century. The following three plantations are maintained by the Government of Newfoundland and Labrador as provincial heritage sites: Sea-Forest Plantation was a 17th-century fishing plantation established at Cuper's Cove (present-day Cupids) under a royal charter issued by King James I. Mockbeggar Plantation is an 18th-century fishing plantation at Bonavista. Pool Plantation was a 17th-century fishing plantation maintained by Sir David Kirke and his heirs at Ferryland; the plantation was destroyed by French invaders in 1696. Other fishing plantations: Bristol's Hope Plantation, a 17th-century fishing plantation established at Harbour Grace, created by the Bristol Society of Merchant-Adventurers. Benger Plantation, an 18th-century fishing plantation maintained by James Benger and his heirs at Ferryland. It was built on the site of a Georgia plantation. Piggeon's Plantation, an 18th-century fishing plantation maintained by Ellias Piggeon at Ferryland. Plantation slave economy: Plantation owners extensively used enslaved Africans to work on early plantations (such as tobacco, rice, cotton, hemp, and sugar plantations) in the American colonies and the United States, throughout the Caribbean, the Americas, and in European-occupied areas of Africa.
In modern times, the low wages typically paid to plantation workers are the basis of plantation profitability in some areas. Plantation slave economy: In more recent times, overt slavery has been replaced by para-slavery or slavery-in-kind, including the sharecropping system, and even that has been severely reduced. At its most extreme, workers are in "debt bondage": they must work to pay off a debt at such punitive interest rates that it may never be paid off. Others work unreasonably long hours and are paid subsistence wages that (in practice) may only be spent in the company store. Plantation slave economy: In Brazil, a sugarcane plantation was termed an engenho ("engine"), and the 17th-century English usage for organized colonial production was "factory." Such colonial social and economic structures are discussed at Plantation economy. Sugar workers on plantations in Cuba and elsewhere in the Caribbean lived in company towns known as bateyes. American South
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lovespoon** Lovespoon: A lovespoon is a decoratively carved wooden spoon that was traditionally presented as a gift of romantic intent. The spoon is normally decorated with symbols of love, and was intended to reflect the skill of the carver. Due to the intricate designs, lovespoons are no longer used as functioning spoons and are now decorative craft items. Origins: The lovespoon is a traditional craft that dates back to the seventeenth century. Over generations, decorative carvings were added to the spoon and it lost its original practical use, becoming a treasured decorative item to be hung on a wall. The earliest known dated lovespoon from Wales, displayed in the St Fagans National History Museum near Cardiff, is from 1667, although the tradition is believed to date back long before that. The earliest dated lovespoon worldwide originates from Germany and is dated 1664. Symbols: The lovespoon was given to a young woman by her suitor. It was important for the girl's father to see that the young man was capable of providing for the family and of woodworking. Sailors would often carve lovespoons during their long journeys, which is why anchors would often be incorporated into the carvings. Symbols: Certain symbols came to have specific meanings: a horseshoe for luck, a cross for faith, bells for marriage, hearts for love, a wheel for supporting a loved one, and a lock for security, among others. Caged balls indicated the number of children hoped for. Other difficult carvings, such as chains, were as much a demonstration of the carver's skill as a symbolic meaning. Although the Welsh lovespoon is the most famous, there are also traditions of lovespoons in Scandinavia and some parts of Eastern Europe, which have their own unique styles and techniques. Symbols: Today lovespoons are given as wedding and anniversary gifts, as well as birthday, baby, Christmas or Valentine's Day gifts. They are now mostly seen as a folk craft. Wedding spoons: In old times, newly married couples in Norway ate with linked spoons to symbolize the linkage of their marriage. Often the spoons and chain were made from a single piece of wood, emphasizing wood-carving craftsmanship. Similar linked spoons can be found in some ethnographic museums. A design for making linked spoons was published in Popular Science in 1967.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**JMJD6** JMJD6: Bifunctional arginine demethylase and lysyl-hydroxylase JMJD6 is an enzyme that in humans is encoded by the JMJD6 gene. Function: This gene encodes a nuclear protein with a JmjC domain. JmjC domain-containing proteins belong to the alpha-ketoglutarate-dependent hydroxylase superfamily. They are predicted to function as protein hydroxylases or histone demethylases. This protein was first identified as a putative phosphatidylserine receptor involved in phagocytosis of apoptotic cells. Subsequent studies suggest that the protein may cross-react with a monoclonal antibody that recognizes the phosphatidylserine receptor and does not directly function in the clearance of apoptotic cells. Multiple transcript variants encoding different isoforms have been found for this gene. On a physiological level, JMJD6 has a role in angiogenesis, the process of vessel formation; it has also been implicated in pathophysiological processes such as mammary tumorigenesis, where elevated JMJD6 levels have been found in breast cancer and are associated with aggressiveness and metastasis in mice.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Melody** Melody: A melody (from Greek μελῳδία, melōidía, "singing, chanting"), also tune, voice or line, is a linear succession of musical tones that the listener perceives as a single entity. In its most literal sense, a melody is a combination of pitch and rhythm, while more figuratively, the term can include other musical elements such as tonal color. It is the foreground to the background accompaniment. A line or part need not be a foreground melody. Melody: Melodies often consist of one or more musical phrases or motifs, and are usually repeated throughout a composition in various forms. Melodies may also be described by their melodic motion or the pitches or the intervals between pitches (predominantly conjunct or disjunct or with further restrictions), pitch range, tension and release, continuity and coherence, cadence, and shape. Function and elements: Johann Philipp Kirnberger argued: The true goal of music—its proper enterprise—is melody. All the parts of harmony have as their ultimate purpose only beautiful melody. Therefore, the question of which is the more significant, melody or harmony, is futile. Beyond doubt, the means is subordinate to the end. Function and elements: The Norwegian composer Marcus Paus has argued: Melody is to music what a scent is to the senses: it jogs our memory. It gives face to form, and identity and character to the process and proceedings. It is not only a musical subject, but a manifestation of the musically subjective. It carries and radiates personality with as much clarity and poignancy as harmony and rhythm combined. As such a powerful tool of communication, melody serves not only as protagonist in its own drama, but as messenger from the author to the audience. Function and elements: Given the many and varied elements and styles of melody, "many extant explanations [of melody] confine us to specific stylistic models, and they are too exclusive." Paul Narveson claimed in 1984 that more than three-quarters of melodic topics had not been explored thoroughly. The melodies existing in most European music written before the 20th century, and popular music throughout the 20th century, featured "fixed and easily discernible frequency patterns", recurring "events, often periodic, at all structural levels" and "recurrence of durations and patterns of durations". Melodies in the 20th century "utilized a greater variety of pitch resources than ha[d] been the custom in any other historical period of Western music." While the diatonic scale was still used, the chromatic scale became "widely employed." Composers also allotted a structural role to "the qualitative dimensions" that previously had been "almost exclusively reserved for pitch and rhythm". Kliewer states, "The essential elements of any melody are duration, pitch, and quality (timbre), texture, and loudness. Though the same melody may be recognizable when played with a wide variety of timbres and dynamics, the latter may still be an 'element of linear ordering.'"
Indian classical music relies heavily on melody and rhythm, and not so much on harmony, as the music contains no chord changes. Balinese gamelan music often uses complicated variations and alterations of a single melody played simultaneously, called heterophony. Examples: In Western classical music, composers often introduce an initial melody, or theme, and then create variations. Classical music often has several melodic layers, called polyphony, such as those in a fugue, a type of counterpoint. Often, melodies are constructed from motifs or short melodic fragments, such as the opening of Beethoven's Fifth Symphony. Richard Wagner popularized the concept of a leitmotif: a motif or melody associated with a certain idea, person or place. Examples: While in most popular music and in the classical music of the common practice period pitch and duration are of primary importance in melodies, in the contemporary music of the 20th and 21st centuries pitch and duration have lessened in importance and quality has gained importance, often becoming primary. Examples include musique concrète, klangfarbenmelodie, Elliott Carter's Eight Etudes and a Fantasy (which contains a movement with only one note), the third movement of Ruth Crawford-Seeger's String Quartet 1931 (later re-orchestrated as Andante for string orchestra), which creates the melody from an unchanging set of pitches through "dissonant dynamics" alone, and György Ligeti's Aventures, in which recurring phonetics create the linear form.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pigmented hairy epidermal nevus syndrome** Pigmented hairy epidermal nevus syndrome: Pigmented hairy epidermal nevus syndrome is a cutaneous condition characterized by a Becker nevus, ipsilateral hypoplasia of the breast, and skeletal defects such as scoliosis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Needlegun scaler** Needlegun scaler: A needlegun scaler, needle scaler, or needle-gun is a tool used to remove rust, mill scale, and old paint from metal surfaces. The tool is used in metalwork applications as diverse as home repair, automotive repair, and shipboard preservation. Operation and use: A needle gun has a set of very fine chisels known as needles. The tool forces these needles against a work surface at variable speeds up to around 5,000 times per minute. Different models offer choices of number of needles, operating speed, and power levels. Many models use compressed air, although electric needle-guns do exist. In a pneumatic unit, compressed air forces a piston forwards and backwards. This movement causes the needles to move back and forth against the work surface. The needle gun has advantages over other scaling tools. Its main advantage is that the needles automatically adjust themselves to contours, making the tool a good choice for cleaning irregular surfaces. A needle gun can clean an area to bare metal in seconds, and compares well to other scaling tools in terms of accuracy and precision. It is recommended that before needlegunning, a surface should be prepared by removing oil, grease, dirt, chemicals and water-soluble contaminants. This can be done with solvents or with a combination of detergent and fresh water. Operation and use: Then, the needle gun is used to remove rust, loose scale, and paint, leaving bare metal. It is used most effectively by holding it at a 45° angle to the work surface. It is recommended that an area no larger than 6 to 8 inches (150 to 200 mm) be cleared at once. Two to three passes over an area are generally sufficient to clean it. The process is then repeated until the desired area is completed. Prior to painting, it is desirable to feather any edges between metal and old paint. It is also important to check the surface for oil deposited during chipping, and if necessary, clean the area with solvents. Since bare metal surfaces will flash rust soon after exposure to the atmosphere, paint should be applied as soon as possible after chipping. If flash rusting occurs prior to coating, further chipping, cleaning and sanding may be necessary. Personal protective equipment (PPE): Because the power tool is noisy and can produce flying chips of debris as well as fine dust, PPE to protect vision, hearing, and breathing is recommended by safety regulators and tool manufacturers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Moondial** Moondial: Moondials are timepieces similar to a sundial. The most basic moondial, which is identical to a sundial, is only accurate on the night of the full moon. Every night after it becomes an additional (on average) 48 minutes slow, while every night preceding the full moon it is (again on average) 48 minutes fast, assuming there is even enough light to take a reading by. Thus, one week to either side of the full moon the moondial will read 5 hours and 36 minutes before or after the proper time (seven nights at 48 minutes per night). More advanced moondials can include charts showing the exact calculations to get the correct time, as well as dials designed with latitude and longitude in mind. Moondials are very closely associated with lunar gardening (night-blooming plants) and some comprehensive gardening books may mention them.
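The arithmetic behind these figures can be made explicit with a short illustrative sketch (not part of the original article; the function name and sign convention are assumptions for illustration), applying the average drift of 48 minutes per night:

```python
# Illustrative sketch: correcting a basic moondial reading using the
# average drift of 48 minutes per night quoted above.

def moondial_correction_minutes(nights_from_full_moon: int) -> int:
    """Correction in minutes for a reading taken a given number of
    nights after (positive) or before (negative) the full moon.
    A positive result means the dial reads slow by that amount."""
    return 48 * nights_from_full_moon

# One week after the full moon: 7 * 48 = 336 minutes,
# i.e. the 5 hours 36 minutes stated in the text.
minutes = moondial_correction_minutes(7)
print(f"{minutes // 60} h {minutes % 60} min")  # -> 5 h 36 min
```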
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Guitar bracing** Guitar bracing: Guitar bracing refers to the system of wooden struts which internally support and reinforce the soundboard and back of acoustic guitars. Guitar bracing: Soundboard or top bracing transmits the forces exerted by the strings from the bridge to the rim. The luthier faces the challenge of bracing the instrument to withstand the stress applied by the strings with minimal distortion, while permitting the top to respond as fully as possible to the tones generated by the strings. Brace design contributes significantly to the type of sound a guitar will produce. According to luthiers W. Cumpiano and J. Natelson, "By varying brace design, each builder has sought to produce a sound that conformed to his concept of the ideal." The back of the instrument is braced to help distribute the force exerted by the neck on the body, and to maintain the tonal responsiveness and structural integrity of the sound box. Materials: Braces may be made from top woods (spruce or cedar), balsa wood or, in high-end instruments, carbon fiber composites. Nylon string guitar bracing: Fan bracing This is the standard bracing pattern on the classical guitar, dating to the work of Antonio Torres Jurado in the 19th century. Although the originator of this bracing style has not been reliably established, the earliest known use is by Spanish luthier Francisco Sanguino in the mid to late 18th century. Kasha Bracing In the 1970s, scientist Michael Kasha radically overhauled every aspect of guitar design to incorporate principles such as mechanical impedance matching. Lattice bracing The Australian guitarmaker Greg Smallman introduced guitars with an extremely thin soundboard, which is supported by bracing in the shape of a lattice. Smallman combines this with heavier, laminated back and sides with a frame. Smallman's guitars are used by John Williams. Nylon string guitar bracing: Smallman's design was inspired by research by Torres, who made a guitar with a papier mâché back and sides to show that the soundboard was the most important factor in guitar sound projection. Smallman also uses two 45 degree pole supports in the frame running from the bottom of the guitar to the waist to prevent the string tension from distorting the body and sound board. Steel string flat-top guitar bracing: In all steel-string instruments, the ends of the top braces taper at the edge of the soundboard. In most factory-built guitars the brace tops are given a round profile, but are otherwise left unshaped. This produces a stronger top and may reduce the number of warranty claims arising from damage; however, over-built tops are less responsive. Braces are usually made from Sitka Spruce (Picea sitchensis). Some luthiers use Adirondack Spruce, also known as "Red Spruce" (Picea rubens), in high-end instruments. Steel string flat-top guitar bracing: X-Bracing The tops of most steel string acoustic guitars are braced using the X-brace system, or a variation of the X-brace system, generally attributed to Christian Frederick Martin between 1840 and 1845 for use in gut string guitars. Steel string flat-top guitar bracing: The system consists of two braces forming an "X" shape across the soundboard below the top of the sound hole. The lower arms of the "X" straddle and support the ends of the bridge. Under the bridge is a hardwood bridge plate which prevents the ball end of the strings from damaging the underside of the soundboard. Below the bridge patch are one or more tone bars which support the bottom of the soundboard.
These abut one of the X braces and usually slant down towards the bottom edge of the soundboard. The top tone bar butts against a portion of the bridge patch in most instruments. Above the sound hole a large transverse brace spans the width of the upper bout of the soundboard. Around the lower bout, small finger braces support the area between the X-braces and the edge of the soundboard. Steel string flat-top guitar bracing: Double X-bracing In this system, two overlapping X shapes form a diamond surrounding the underside of the bridge plate. Some luthiers prefer it where additional strength is required, for instance for twelve string guitars. This bracing does not allow the top to move or vibrate as much as it normally would, but offers more strength and prevents bellying around the bridge area. Steel string flat-top guitar bracing: A-bracing Several bracing styles are designated as A-bracing. Mottola's Cyclopedic Dictionary of Lutherie Terms lists two. The first, typical of instruments built by Tacoma, uses two long longitudinal struts that diverge from near the neck block to near the tail end of the guitar. This bracing style is used on instruments that feature a soundhole that is not centrally located. The second style listed is that used by some models of Ovation guitars, also called Adamas bracing. There is also a variation on X-bracing called A-bracing. The X-shaped structure under the bridge is retained, but the transverse strut between the fingerboard and soundhole is replaced by two diagonal braces which splay outward going toward the soundhole. It is used by Lowden Guitars. Steel string flat-top guitar bracing: V-Class Bracing V-Class bracing is a bracing style developed by Andy Powers for Taylor Guitars. It is similar to A-bracing; however, the main braces diverge across both sides of the sound hole towards the neck side of the top, and converge at the endblock to form the shape of a V. A lateral brace positioned between the soundhole and bridge plate spans the width of the top, and there are two sets of tone/finger braces between the bridge plate and endblock that are roughly perpendicular to the longitudinal V-braces and run from the V-brace towards the edge of the top. It is purported to allow the top to be both stiff and flexible in order to produce more volume and sustain. Taylor also claims that the design improves harmonic intonation. Steel string flat-top guitar bracing: Falcate bracing Falcate (sickle-shaped) bracing is a symmetric bracing style designed by luthier and engineer Trevor Gore, and used on his steel-string and classical nylon-string guitars. It is claimed to give a more even sound over the frequency spectrum, and more responsiveness and volume without being too delicate. Steel string flat-top guitar bracing: Brace shape and 'voicing' or 'tap tuning' Luthiers building higher quality instruments adjust the stiffness of the top and shape the braces to maximize the response of the top while maintaining structural integrity. Tone bars and bottom halves of the X-braces may be either scalloped or parabolic in shape. Above the X-brace joint, braces usually have a parabolic shape. Experienced luthiers 'voice' or 'tap-tune' the tops and backs of high end guitars to produce optimum tone and responsiveness in the hands of the player. Steel string flat-top guitar bracing: Scalloped vs. parabolic bracing Bracing style and shape will affect the tone of the instrument.
According to luthiers Bob Connor and David Mainwaring, "scalloped braces will produce a warmer sounding bass response in the guitar with smooth mids and crisp highs. Parabolic braces will yield a quick response with a more pronounced mid range and a more focused bottom end." Ladder bracing This simple system, where braces are arranged parallel to each other and perpendicular to the direction of the strings, is employed on most guitar backs. The earliest steel string guitars very often had ladder-braced tops, a practice which survives in the Maccaferri guitar. It is considered more suitable for parlor guitars and lightly strung instruments. Archtop bracing: Archtop guitars originally had two near-horizontal braces or "tone bars" on either side from bridge to neck, a system known as parallel bracing. The braces roughly run under the feet of the archtop guitar's bridge. X-bracing, similar to that of flat-top guitars, was later introduced. Archtop guitars' tops are inherently stronger than flat tops, so less bracing may be required. "Trestle" bracing was a system used on some Gretsch archtops.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Playlist markup language** Playlist markup language: A playlist markup language is a markup language that specifies the contents and playback of a digital multimedia playlist; this includes streams of music, slideshows or even animations. List of playlist markup languages: .asx, an XML-style playlist format containing more information about the items on the playlist. .smil, an XML recommendation of the World Wide Web Consortium that includes playlist features. Kalliope PlayList (.kpl), a kind of XML playlist format developed to speed up loading and managing playlists. .pla, a binary format (apparently associated with Samsung) that Winamp can handle. XSPF, an XML format designed to enable playlist sharing. WPL, an XML format used in Microsoft Windows Media Player versions 9–11.
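As a concrete illustration of one of these formats, the following minimal sketch (an added example, not from the source; the output file name and track URL are placeholders) builds a bare-bones XSPF playlist with Python's standard library:

```python
# Minimal sketch: writing a one-track XSPF playlist, one of the XML
# playlist formats listed above, using only the standard library.
import xml.etree.ElementTree as ET

NS = "http://xspf.org/ns/0/"  # XSPF namespace
ET.register_namespace("", NS)  # serialize as the default namespace

playlist = ET.Element(f"{{{NS}}}playlist", {"version": "1"})
track_list = ET.SubElement(playlist, f"{{{NS}}}trackList")
track = ET.SubElement(track_list, f"{{{NS}}}track")
location = ET.SubElement(track, f"{{{NS}}}location")
location.text = "file:///music/example.ogg"  # placeholder track URL

# Produces: <playlist version="1" xmlns="http://xspf.org/ns/0/">...
ET.ElementTree(playlist).write(
    "example.xspf", encoding="UTF-8", xml_declaration=True
)
```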
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Prospidium chloride** Prospidium chloride: Prospidium chloride (prospidine) is a drug with cytostatic (alkylating) and anti-inflammatory properties. It has been studied for the treatment of rheumatoid arthritis. Chemically, it is a spiro compound.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alcoholic polyneuropathy** Alcoholic polyneuropathy: Alcoholic polyneuropathy is a neurological disorder in which peripheral nerves throughout the body malfunction simultaneously. It is defined by axonal degeneration in neurons of both the sensory and motor systems and initially occurs at the distal ends of the longest axons in the body. This nerve damage causes an individual to experience pain and motor weakness, first in the feet and hands and then progressing centrally. Alcoholic polyneuropathy is caused primarily by chronic alcoholism; however, vitamin deficiencies are also known to contribute to its development. This disease typically occurs in chronic alcoholics who have some sort of nutritional deficiency. Treatment may involve nutritional supplementation, pain management, and abstaining from alcohol. Signs and symptoms: An early warning sign (prodrome) of the possibility of developing alcoholic polyneuropathy, especially in a chronic alcoholic, would be weight loss, because this usually signifies a nutritional deficiency that can lead to the development of the disease. Alcoholic polyneuropathy usually has a gradual onset over months or even years, although axonal degeneration often begins before an individual experiences any symptoms. The disease typically involves sensory issues and motor loss, as well as painful physical perceptions, though all sensory modalities may be involved. Signs and symptoms: Symptoms that affect the sensory and motor systems seem to develop symmetrically. For example, if the right foot is affected, the left foot is affected simultaneously or soon becomes affected. In most cases, the legs are affected first, followed by the arms. The hands usually become involved when the symptoms reach above the ankle. This is called a stocking-and-glove pattern of sensory disturbances. Signs and symptoms: Sensory Common manifestations of sensory issues include numbness or painful sensations in the arms and legs, abnormal sensations like "pins and needles," and heat intolerance. Pain experienced by individuals depends on the severity of the polyneuropathy. It may be dull and constant in some individuals while being sharp and lancinating in others. In many subjects, tenderness is seen upon palpation of the muscles in the feet and legs. Certain people may also feel cramping sensations in the muscles affected and others say there is a burning sensation in their feet and calves. Signs and symptoms: Motor Sensory symptoms are gradually followed by motor symptoms. Motor symptoms may include muscle cramps and weakness, erectile dysfunction in men, problems urinating, constipation, and diarrhea. Individuals also may experience muscle wasting and decreased or absent deep tendon reflexes. Some people may experience frequent falls and gait unsteadiness due to ataxia. This ataxia may be caused by cerebellar degeneration, sensory ataxia, or distal muscle weakness. Over time, alcoholic polyneuropathy may also cause difficulty swallowing (dysphagia), speech impairment (dysarthria), muscle spasms, and muscle atrophy. In addition to alcoholic polyneuropathy, the individual may also show other related disorders such as Wernicke–Korsakoff syndrome and cerebellar degeneration that result from alcoholism-related nutritional disorders. Signs and symptoms: Severity Polyneuropathy spans a large range of severity. Some cases are seemingly asymptomatic and may only be recognized on careful examination. The most severe cases may cause profound physical disability.
Causes: The general cause of this disease appears to be prolonged and heavy consumption of alcohol accompanied by a nutritional deficiency. However, there is ongoing debate over the active mechanisms, including whether the main cause is the direct toxic effect of alcohol itself or whether the disease is a result of alcoholism-related malnutrition. A 2019 metastudy found that the relationship between ethanol toxicity and neuropathy remained unproven. Causes: Effects due to nutritional deficiency Frequently alcoholics have disrupted social links in their lives and have an irregular lifestyle. This may cause an alcoholic to change their eating habits, including more missed meals and a poor dietary balance. Alcoholism may also result in loss of appetite, alcoholic gastritis, and vomiting, which decrease food intake. Alcohol abuse damages the lining of the gastrointestinal system and reduces absorption of nutrients that are taken in. The combination of these factors may result in a nutritional deficiency that is linked to the development of alcoholic polyneuropathy. There is evidence that providing individuals with adequate vitamins improves symptoms despite continued alcohol intake, indicating that vitamin deficiency may be a major factor in the development and progression of alcoholic polyneuropathy. In experimental models of alcoholic polyneuropathy utilizing rats and monkeys, no convincing evidence was found that proper nutritional intake along with alcohol results in polyneuropathy. Causes: In most cases, individuals with alcoholic polyneuropathy have some degree of nutritional deficiency. Alcohol increases the metabolic demand for thiamine (vitamin B1) because of thiamine's role in the metabolism of glucose. Thiamine levels are usually low in alcoholics due to their decreased nutritional intake. In addition, alcohol interferes with intestinal absorption of thiamine, thereby further decreasing thiamine levels in the body. Thiamine is important in three reactions in the metabolism of glucose: the decarboxylation of pyruvic acid, the decarboxylation of α-ketoglutaric acid, and the transketolase reaction. A lack of thiamine in the cells may therefore prevent neurons from maintaining necessary adenosine triphosphate (ATP) levels as a result of impaired glycolysis. Thiamine deficiency alone could explain the impaired nerve conduction in those with alcoholic polyneuropathy, but other factors likely play a part. The malnutrition many alcoholics experience deprives them of important cofactors for the oxidative metabolism of glucose. Neural tissues depend on this process for energy, and disruption of the cycle would impair cell growth and function. Schwann cells produce myelin that wraps around the sensory and motor nerve axons to enhance action potential conduction in the periphery. An energy deficiency in Schwann cells would account for the disappearance of myelin on peripheral nerves, which may result in damage to axons or loss of nerve function altogether. In peripheral nerves, oxidative enzyme activity is most concentrated around the nodes of Ranvier, making these locations most vulnerable to cofactor deprivation. Lacking essential cofactors reduces myelin impedance, increases current leakage, and slows signal transmission. Disruptions in conductance first affect the peripheral ends of the longest and largest peripheral nerve fibers because they suffer most from decreased action potential propagation.
Thus, neural deterioration occurs in an accelerating cycle: myelin damage reduces conductance, and reduced conductance contributes to myelin degradation. The slowed conduction of action potentials in axons causes segmental demyelination extending proximally; this is also known as retrograde degeneration. Many of the studies conducted that observe alcoholic polyneuropathy in patients are often criticized for the criteria used to assess nutritional deficiency in the subjects, because they may not have completely ruled out the possibility of a nutritional deficiency in the genesis of the polyneuropathy. Many researchers favor the nutritional origin of this disease, but the possibility of alcohol having a toxic effect on the peripheral nerves has not been completely ruled out. Causes: Effects due to alcohol ingestion The consumption of alcohol may lead to the buildup of certain toxins in the body. For example, in the process of breaking down alcohol, the body produces acetaldehyde, which can accumulate to toxic levels in alcoholics. This suggests that there is a possibility ethanol (or its metabolites) may cause alcoholic polyneuropathy. There is evidence that polyneuropathy is also prevalent in well-nourished alcoholics, supporting the idea that there is a direct toxic effect of alcohol. The metabolic effects of liver damage associated with alcoholism may also contribute to the development of alcoholic polyneuropathy. Normal products of the liver, such as lipoic acid, may be deficient in alcoholics. This deficiency would also disrupt glycolysis and alter metabolism, transport, storage, and activation of essential nutrients. Acetaldehyde is toxic to peripheral nerves. There are increased levels of acetaldehyde produced during ethanol metabolism. If the acetaldehyde is not metabolized quickly, the nerves may be affected by the accumulation of acetaldehyde to toxic levels. Pathophysiology: The pathophysiology of alcoholic polyneuropathy is unclear. Diagnosis: Alcoholic polyneuropathy is very similar to other axonal degenerative polyneuropathies and therefore can be difficult to diagnose. When alcoholics have sensorimotor polyneuropathy as well as a nutritional deficiency, a diagnosis of alcoholic polyneuropathy is often reached. To confirm the diagnosis, a physician must rule out other causes of similar clinical syndromes. Other neuropathies can be differentiated on the basis of typical clinical or laboratory features. Differential diagnoses to alcoholic polyneuropathy include amyotrophic lateral sclerosis, beriberi, Charcot-Marie-Tooth disease, diabetic lumbosacral plexopathy, Guillain–Barré syndrome, diabetic neuropathy, mononeuritis multiplex and post-polio syndrome. To clarify the diagnosis, medical workup most commonly involves laboratory tests, though, in some cases, imaging, nerve conduction studies, electromyography, and vibrometer testing may also be used. A number of tests may be used to rule out other causes of peripheral neuropathy. One of the first presenting symptoms of diabetes mellitus may be peripheral neuropathy, and hemoglobin A1C can be used to estimate average blood glucose levels. Elevated blood creatinine levels may indicate chronic kidney disease and may also be a cause of peripheral neuropathy. A heavy metal toxicity screen should also be used to exclude lead toxicity as a cause of neuropathy. Alcoholism is normally associated with nutritional deficiencies, which may contribute to the development of alcoholic polyneuropathy.
Thiamine, vitamin B-12, and folic acid are vitamins that play an essential role in the peripheral and central nervous system and should be among the first analyzed in laboratory tests. It has been difficult to assess thiamine status in individuals due to difficulties in developing a method to directly assay thiamine in the blood and urine. A liver function test may also be ordered, as alcohol consumption may cause an increase in liver enzyme levels. Management: Although there is no known cure for alcoholic polyneuropathy, there are a number of treatments that can control symptoms and promote independence. Physical therapy is beneficial for strength training of weakened muscles, as well as for gait and balance training. Management: Nutrition To best manage symptoms, refraining from consuming alcohol is essential. Abstinence from alcohol encourages proper diet and helps prevent progression or recurrence of the neuropathy. Once an individual stops consuming alcohol, it is important to make sure they understand that substantial recovery usually is not seen for a few months. Some subjective improvement may appear right away, but this is usually due to the overall benefits of alcohol detoxification. If alcohol consumption continues, vitamin supplementation alone is not enough to improve the symptoms of most individuals. Nutritional therapy with parenteral multivitamins is beneficial to implement until the person can maintain adequate nutritional intake. Treatments also include vitamin supplementation (especially thiamine). In more severe cases of nutritional deficiency, 320 mg/day of benfotiamine for 4 weeks followed by 120 mg/day for 4 more weeks may be prescribed in an effort to return thiamine levels to normal. Management: Pain Painful dysesthesias caused by alcoholic polyneuropathy can be treated by using gabapentin or amitriptyline in combination with over-the-counter pain medications, such as aspirin, ibuprofen, or acetaminophen. Tricyclic antidepressants such as amitriptyline, or carbamazepine, may help stabbing pains and have central and peripheral anticholinergic and sedative effects. These agents have central effects on pain transmission and block the active reuptake of norepinephrine and serotonin. Anticonvulsant drugs like gabapentin or pregabalin also have properties that relieve neuropathic pain. However, these medications take a few weeks to become effective and are rarely used in the treatment of acute pain. Topical analgesics like capsaicin may also relieve minor aches and pains of muscles and joints. Prognosis: Alcoholic polyneuropathy is not life-threatening but may significantly affect one's quality of life. Effects of the disease range from mild discomfort to severe disability. It is difficult to assess the prognosis of a patient because it is hard to convince chronic alcoholics to abstain from drinking alcohol completely. It has been shown that a good prognosis may be given for mild neuropathy if the alcoholic has abstained from drinking for 3–5 years. Prognosis: Early stage During the early stages of the disease the damage appears reversible when people take adequate amounts of vitamins, such as thiamine. If the polyneuropathy is mild, the individual normally experiences a significant improvement, and symptoms may be eliminated within weeks to months after proper nutrition is established.
When those people diagnosed with alcoholic polyneuropathy experience a recovery, it is presumed to result from regeneration and collateral sprouting of the damaged axons. Prognosis: Progressed disease As the disease progresses, the damage may become permanent. In severe cases of thiamine deficiency, a few of the positive symptoms (including neuropathic pain) may persist indefinitely. Even after the restoration of a balanced nutritional intake, those patients with severe or chronic polyneuropathy may experience lifelong residual symptoms. Epidemiology: In 2020 the NIH quoted an estimate that in the United States 25% to 66% of chronic alcohol users experience some form of neuropathy. The rate of incidence of alcoholic polyneuropathy involving sensory and motor polyneuropathy has been stated as from 10% to 50% of alcoholics, depending on the subject selection and diagnostic criteria. If electrodiagnostic criteria are used, alcoholic polyneuropathy may be found in up to 90% of individuals being assessed. The distribution and severity of the disease depend on regional dietary habits, individual drinking habits, as well as an individual's genetics. Large studies have been conducted and show that alcoholic polyneuropathy severity and incidence correlate best with the total lifetime consumption of alcohol. Factors such as nutritional intake, age, or other medical conditions correlate to lesser degrees. For unknown reasons, alcoholic polyneuropathy has a high incidence in women. Certain alcoholic beverages can also contain congeners that may be bioactive; therefore, the consumption of varying alcohol beverages may result in different health consequences. An individual's nutritional intake also plays a role in the development of this disease. Depending on the specific dietary habits, they may have a deficiency of one or more of the following: thiamine (vitamin B1), pyridoxine (vitamin B6), pantothenic acid and biotin, vitamin B12, folic acid, niacin (vitamin B3), and vitamin A. Epidemiology: Acetaldehyde It is also thought there is perhaps a genetic predisposition for some alcoholics that results in increased frequency of alcoholic polyneuropathy in certain ethnic groups. During the body's processing of alcohol, ethanol is oxidized to acetaldehyde mainly by alcohol dehydrogenase; acetaldehyde is then oxidized to acetate mainly by aldehyde dehydrogenase (ALDH). ALDH2 is an isozyme of ALDH, and ALDH2 has a polymorphism (ALDH2*2, Glu487Lys) that makes ALDH2 inactive; this allele is more prevalent among Southeast and East Asians and results in a failure to quickly metabolize acetaldehyde. The neurotoxicity resulting from the accumulation of acetaldehyde may play a role in the pathogenesis of alcoholic polyneuropathy. History: The first description of symptoms associated with alcoholic polyneuropathy was recorded by John C. Lettsom in 1787, when he noted hyperesthesia and paralysis in the legs more than the arms of patients. Jackson has also been credited with describing polyneuropathy in chronic alcoholics in 1822. The clinical title of alcoholic polyneuropathy was widely recognized by the late nineteenth century. It was thought that the polyneuropathy was a direct result of the toxic effect alcohol had on peripheral nerves when used excessively. In 1928, George C. Shattuck argued that the polyneuropathy resulted from a vitamin B deficiency commonly found in alcoholics, and he claimed that alcoholic polyneuropathy should be related to beriberi.
The debate continues today over what exactly causes this disease: some argue it is just the alcohol toxicity, others claim the vitamin deficiencies are to blame, and still others say it is some combination of the two. Research directions: In 2001 research directions included the effect that an alcoholic's consumption and choice of alcoholic beverage might have on their development of alcoholic polyneuropathy. Some beverages may include more nutrients than others (such as thiamine), but the effects of this with regard to helping with a nutritional deficiency in alcoholics are as yet unknown. Research also continued on reasons for the development of alcoholic polyneuropathy. Some argue it is a direct result of alcohol's toxic effect on the nerves, but others say factors such as a nutritional deficiency or chronic liver disease may play a role in the development as well. Multiple mechanisms may be present.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Coherent topos** Coherent topos: In mathematics, a coherent topos is a topos generated by a collection of quasi-compact quasi-separated objects closed under finite products.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Population cycle** Population cycle: A population cycle in zoology is a phenomenon where populations rise and fall over a predictable period of time. There are some species where population numbers have reasonably predictable patterns of change, although the full reasons for population cycles are one of the major unsolved ecological problems. There are a number of factors which influence population change, such as availability of food, predators, diseases and climate. Occurrence in mammal populations: Olaus Magnus, the Archbishop of Uppsala in central Sweden, identified that species of northern rodents had periodic peaks in population and published two reports on the subject in the middle of the 16th century. In North America, the phenomenon was identified in populations of the snowshoe hare. In 1865, trappers with the Hudson's Bay Company were catching plenty of animals. By 1870, they were catching very few. It was finally identified that the cycle of high and low catches ran over approximately a ten-year period. The best-known example of a creature which has a population cycle is the lemming. The biologist Charles Sutherland Elton first identified in 1924 that the lemming had regular cycles of population growth and decline. When their population outgrows the resources of their habitat, lemmings migrate, although, contrary to popular myth, they do not jump into the sea. Mouse plagues in Australia happen at intervals of about four years. Other species: While the phenomenon is often associated with rodents, it does occur in other species such as the ruffed grouse. There are other species which have irregular population explosions, such as grasshoppers, where overpopulation results in locust swarms in Africa and Australia. Relationships between predators and prey: There is also an interaction between prey with periodic cycles and predators. As the prey population expands, there is more food available for predators. As it contracts, there is less food available for predators, putting pressure on their population numbers. Length: Each population cycle tends to last as long as a species' life expectancy (e.g. lemmings, rabbits and locusts).
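The predator-prey feedback described above is often illustrated with the Lotka-Volterra equations, a standard textbook model that this article does not itself name; the sketch below, using arbitrary illustrative parameters, reproduces the characteristic rise-and-fall cycles:

```python
# Illustrative sketch: simulating predator-prey population cycles with
# the classic Lotka-Volterra model. Model choice and all parameter
# values are textbook illustrations, not figures from this article.

def lotka_volterra(prey, pred, steps=5000, dt=0.01,
                   a=1.0, b=0.1, c=1.5, d=0.075):
    """Euler integration of dP/dt = a*P - b*P*N and
    dN/dt = d*P*N - c*N, where P is prey and N is predators."""
    history = []
    for _ in range(steps):
        dprey = (a * prey - b * prey * pred) * dt
        dpred = (d * prey * pred - c * pred) * dt
        prey += dprey
        pred += dpred
        history.append((prey, pred))
    return history

# Both populations oscillate: prey peaks are followed by predator
# peaks, which drive prey back down -- the feedback described above.
trajectory = lotka_volterra(prey=10.0, pred=5.0)
print(trajectory[::1000])  # sample the oscillation at intervals
```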
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Everest (Framework)** Everest (Framework): The Everest Framework is an open-source framework used to assist software developers in the digital health sector to create HL7v3 messages in Pan-Canadian or Universal formats. Everest (Framework): The framework was developed by Mohawk College for HL7 version 3 messaging and CDA document processing. This framework is available for Java and .NET and comes with extensive examples and documentation on how to use HL7v3 messaging. Support is also available via the CodePlex project page. This framework was developed through grant funding provided by the Natural Sciences and Engineering Research Council of Canada (NSERC) and Canada Health Infoway. Everest (Framework): Everest is used in the following products: MEDIC Client Registry, OpenIZ, and MEDIC Service Core Framework. The Everest Developer's Guide can be found on Lulu.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**S-specific spore photoproduct lyase** S-specific spore photoproduct lyase: S-specific spore photoproduct lyase (EC 4.1.99.15, SAM, SP lyase, SPL, SplB, SplG) is an enzyme with systematic name S-specific spore photoproduct pyrimidine-lyase. This enzyme catalyses the following chemical reaction: (5S)-5,6-dihydro-5-(thymidin-7-yl)thymidine (in DNA) + S-adenosyl-L-methionine ⇌ thymidylyl-(3'->5')-thymidylate (in DNA) + 5'-deoxyadenosine + L-methionine. This enzyme is an iron-sulfur protein.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pimento loaf** Pimento loaf: Pimiento loaf, more commonly pimento loaf, also called pickle and pimiento loaf, pickle and pimento loaf, or P&P loaf, is a loaf-type luncheon meat containing finely chopped beef and pork, as well as chopped pickles and pimientos. After being formed into a loaf and cooked, the loaf is kept whole so it can be sliced and served cold as deli meat. Pimento loaf is closely related to olive loaf (the primary difference being pimentos and pickles replacing pimento-stuffed olives) and spiced luncheon loaf. It is distantly related to ham and cheese loaf. Pimento loaf: Unlike bologna and salami, which are sausages, pimento loaf is baked like a meatloaf in a loaf pan. Inexpensive pimento loaf is made with chicken and other ingredients common to inexpensive bologna. Also, less expensive pimento loaves are baked in sleeves instead of pans to give the cold cuts a round appearance, leading to the misconception that pimento loaf is related to bologna. Since pickles are typically less expensive than olives, pimento loaf is far more common as an inexpensive deli meat.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Finite measure** Finite measure: In measure theory, a branch of mathematics, a finite measure or totally finite measure is a special measure that always takes on finite values. Among finite measures are probability measures. The finite measures are often easier to handle than more general measures and show a variety of different properties depending on the sets they are defined on. Definition: A measure μ on a measurable space (X,A) is called a finite measure if it satisfies μ(X)<∞. By the monotonicity of measures, this implies μ(A)<∞ for all A∈A. If μ is a finite measure, the measure space (X,A,μ) is called a finite measure space or a totally finite measure space. Properties: General case For any measurable space, the finite measures form a convex cone in the Banach space of signed measures with the total variation norm. Important subsets of the finite measures are the sub-probability measures, which form a convex subset, and the probability measures, which are the intersection of the unit sphere in the normed space of signed measures and the finite measures. Properties: Topological spaces If X is a Hausdorff space and A contains the Borel σ-algebra, then every finite measure is also a locally finite Borel measure. Properties: Metric spaces If X is a metric space and A is again the Borel σ-algebra, the weak convergence of measures can be defined. The corresponding topology is called the weak topology and is the initial topology with respect to all bounded continuous functions on X. The weak topology corresponds to the weak* topology in functional analysis. If X is also separable, the weak convergence is metrized by the Lévy–Prokhorov metric. Properties: Polish spaces If X is a Polish space and A is the Borel σ-algebra, then every finite measure is a regular measure and therefore a Radon measure. If X is Polish, then the set of all finite measures with the weak topology is Polish too.
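As a small worked example (an illustration added here, not part of the original text): any nonzero finite measure can be rescaled to a probability measure, which is one reason finite measures are convenient to work with.

```latex
% Illustration (not from the source): normalizing a finite measure.
% Let $\mu$ be a finite measure on $(X, \mathcal{A})$ with $\mu(X) > 0$,
% and define
\[
  \nu(A) = \frac{\mu(A)}{\mu(X)}, \qquad A \in \mathcal{A}.
\]
% Then $\nu$ inherits countable additivity from $\mu$ and satisfies
% $\nu(X) = 1$, so $\nu$ is a probability measure: every nonzero finite
% measure is a positive multiple of a probability measure.
```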
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Conduit metaphor** Conduit metaphor: In linguistics, the conduit metaphor is a dominant class of figurative expressions used when discussing communication itself (metalanguage). It operates whenever people speak or write as if they "insert" their mental contents (feelings, meanings, thoughts, concepts, etc.) into "containers" (words, phrases, sentences, etc.) whose contents are then "extracted" by listeners and readers. Thus, language is viewed as a "conduit" conveying mental content between people. Conduit metaphor: The conduit metaphor was defined and described by linguist Michael J. Reddy, PhD, whose proposal of this conceptual metaphor refocused debate within and outside the linguistic community on the importance of metaphorical language. Fellow linguist George Lakoff stated that "The contemporary theory that metaphor is primarily conceptual, conventional, and part of the ordinary system of thought and language can be traced to Michael Reddy's now classic essay... With a single, thoroughly analyzed example, he allowed us to see, albeit in a restricted domain, that ordinary everyday English is largely metaphorical, dispelling once and for all the traditional view that metaphor is primarily in the realm of poetic or 'figurative' language. Reddy showed, for a single, very significant case, that the locus of metaphor is thought, not language, that metaphor is a major and indispensable part of our ordinary, conventional way of conceptualizing the world, and that our everyday behavior reflects our metaphorical understanding of experience. Though other theorists had noticed some of these characteristics of metaphor, Reddy was the first to demonstrate them by rigorous linguistic analysis, stating generalizations over voluminous examples." Background: The genesis of Reddy's paper drew inspiration from work done by others in several disciplines, as well as linguistics. Research on information theory had led Norbert Wiener to publish the seminal book on cybernetics, in which he had stated, "Society can only be understood through a study of the messages and communications facilities which belong to it." Social-systems theorist Donald Schön examined the effects of metaphorical speech in matters of public policy; he suggested that people's conflicting frames of reference were often to blame for communication breakdown. Schön's frame-restructuring solution was similar in some ways to Thomas Kuhn's groundbreaking views on the shifting of scientific paradigms through what he called the "translation" process. Research within linguistics (including the controversial Sapir-Whorf hypothesis and Max Black's arguments against it), coupled with Uriel Weinreich's assertion that "Language is its own metalanguage," prompted Reddy to approach the conduit metaphor's exposition and its possible impact on language and thought with caution. Summary of Reddy's paper: The way English speakers discuss communication depends on the semantics of the language itself; English has a default conceptual framework for communicating (the conduit metaphor); the conduit metaphor has a self-reinforcing bias; a contrasting, more accurate, seldom-used non-metaphorical framework exists (the toolmakers paradigm); and the resulting frame conflict may negatively impact solutions to social and cultural problems. Research into core expressions Reddy collected and studied examples of how English speakers talk about success or failure in communication.
The overwhelming majority of what he calls core expressions involved dead metaphors selected from speakers' internal thoughts and feelings. Speakers then "put these thoughts into words" and listeners "take them out of the words." Since words are actually marks or sounds and do not literally have "insides," people talk about language largely in terms of metaphors. Summary of Reddy's paper: Most English core expressions used in talking about communication assert that actual thoughts and feelings pass back and forth between people through the conduit of words. These core expressions and the few that do not qualify as conduit metaphors are listed in the paper's extensive appendix, which itself has been cited by Andrew Ortony as "a major piece of work, providing linguistics with an unusual corpus, as well as substantiating Reddy's claims about the pervasiveness of the root metaphor." Major framework There are two distinct but similar frameworks in which the conduit metaphor appears. Four types of core expressions constitute the major framework. (In the following example sentences, the operative core expressions are italicized.) Language is a conduit These commonplace examples—(1) You can't get your concept across to the class that way; (2) His feelings came through to her only vaguely; (3) They never give us any idea of what they expect—are understood metaphorically. In 1., people do not actually "get across" concepts by talking; in 2., feelings do not really "come through to" people; and in 3., people do not in fact "give" to others their ideas, which are mental states. Listeners assemble from their own mental states a partial replica of the speakers'. These core expressions assert figuratively that language literally transfers people's mental contents to others. Summary of Reddy's paper: Speakers insert thoughts into words These examples—(1) Practice capturing your feelings in complete sentences; (2) I need to put each idea into phrases with care; (3) Insert that thought further down in the paragraph; (4) She forced her meanings into the wrong lyrics; (5) Please pack more sensation into fewer stanzas; (6) He loads an argument with more viewpoints than it can withstand—show that in 1., the speaker might be inexperienced in ensnaring meaning; in 2., be clumsy when putting it in; in 3., put it in the wrong place; in 4., compel words to accommodate meanings for which there is not enough room; in 5., fail to put in enough; or in 6., put in too much. These core expressions assert that speakers "insert" mental content into the "containers" represented by words with varying degrees of success. Summary of Reddy's paper: Words contain thoughts These examples indicate that sounds and marks can be "containers" for mental content: (1) The sense of loneliness is in just about every sentence; (2) His story was pregnant with meaning; (3) The entire paragraph was full of emotion; (4) These lines indeed rhyme, but they are devoid of feeling; (5) Your words are hollow—you don't mean them. These core expressions assert that words contain or do not contain mental content, depending on the success or failure of the insertion process. Summary of Reddy's paper: Listeners extract thoughts from words These examples—(1) I couldn't actually extract coherent ideas from that prose; (2) You found some challenging concepts in the essay; (3) They wouldn't really get any hatred out of those statements; (4) Her remark is truly impenetrable; (5) The author's intentions are going to be locked up in that dense chapter forever; (6) Hiding the meaning in his sentences is just his style;
Summary of Reddy's paper: (7) They're reading things into the poem—together, these examples indicate that speakers and writers are responsible to a large extent for the mental content conveyed by language, and that listeners and readers play a more passive role. However, in 7., a reader can add something to the container that was not originally there. Overall, these core expressions assert that listeners must "extract" mental content from words. Summary of Reddy's paper: Minor framework Instead of words, an "idea space" between people's heads can be the container for mental content. The conduit is no longer a sealed pipeline between people, but an open pipe allowing mental content to escape into, or enter from, this space. Three types of core expressions constitute the minor framework of the conduit metaphor. Speakers eject thoughts into idea space These examples—She poured out the sorrow she'd been holding back; He finally got those ideas out there—show that speakers and writers can eject mental content into an external idea space outside people. Idea space contains thoughts These examples—That theory has been floating around for a century; His crazy notions made their way immediately into cyberspace; Those opinions are on the streets of Brooklyn, not in a classroom—indicate that mental content has a material existence in an idea space, existing outside people. Listeners extract thoughts from idea space The following examples—I had to absorb Einstein's ideas gradually; His deepest emotions went right over her head; We couldn't get all that stuff into our brains in one afternoon—demonstrate that mental content from an idea space may or may not re-enter people. Logical apparatus The italicized words in the above examples are interchangeable with a wide array of terms that label mental content, the containers in which the content may be placed, and the ways in which these containers may be transferred in the conduit-metaphor paradigm.
People use it to exchange crude blueprints (signals) for making tools, shelters, foods, etc., but they have no other contact whatsoever, and know of others' existence by inferences based on these blueprints. Living in a forested sector, Alex builds a wooden rake, draws three identical blueprints, and drops them in the slots for Bob, Curt and Don. Bob finds a piece of wood for the handle, but because he lives in a rocky sector, starts making a stone rake head. (Alex had not considered wood to be unavailable or wrong for the rake head, so it was not specified.) When halfway done, Bob connects his stone head to the handle, realizes it will be heavy, and decides it must be a device for digging up rocks when clearing a field for planting. (He infers that Alex must either be very strong or have only small rocks in his sector.) Bob decides two large prongs will make his tool lighter, thus finishing with a two-bladed pickax. He makes three identical blueprints for his pickax and drops them in the slots for Alex, Curt and Don.
("Communicate your feelings using simpler words," for example, avoids the conduit metaphor, whereas, "Communicate your feelings in simpler words," does not.) Many of these expressions have etymological roots arising directly from the conduit-metaphor framework ("express," "disclose," etc.) Unavoidable Speaking carefully and attentively, it is possible to avoid conduit-metaphor expressions. For example, "Did you get anything out of that article?" might be replaced by, "Were you able to construct anything of interest on the basis of the assigned text?" Eschewing obvious conduit-metaphor expressions when communication is the topic is difficult. "Try to communicate more effectively" differs in impact from "You've got to learn how to put your thoughts into words." Reddy proceeds to show that even if avoidance were possible, it does not necessarily free people from the framework. Summary of Reddy's paper: Semantic pathology via metonymy A semantic pathology arises "whenever two or more incompatible senses capable of figuring meaningfully in the same context develop around the same name." "I'm sorry" is an example of two contextually relevant meanings in collision. A person may expect an apology when the other wishes only to sympathize, or anticipate sympathy but hear an apology instead. Summary of Reddy's paper: Pathology in linguistic theory Many other terms are ambiguous between mental content and the words "containing" it. For instance the word "poem" denotes a particular grouping of the sounds or marks (signals) exchanged between people. However, its use in sentences reveals that it can refer to thoughts or feelings (repertoire members). In this example— The poem has four lines and forty words—"poem" refers to a text. The word-sense can be labeled POEM1. However, in this example— Eliot's poem is so utterly depressing—"poem" refers to the mental content assembled in its reading. The word-sense in this case can be labeled POEM2. Moreover, this example— Her poem is so sloppy!—can be understood as either POEM1 or POEM2 (polysemy). Summary of Reddy's paper: The ambiguity of "poem" is intimately related to the conduit metaphor. If words contain ideas, then POEM1 contains POEM2. When two entities are commonly found together, one of their names—usually the more concrete—will develop a new sense referring to the other (the process of metonymy). Just as ROSE1 (the blossom) developed ROSE2 (a shade of pinkish red) by metonymy, so POEM1 gave rise to POEM2.In the toolmakers paradigm, words do not contain ideas, so POEM1 cannot contain POEM2; therefore, a distinction between them must be preserved. Although there is only one POEM1 in most cases, the differences in mental content among people (and the difficult task of assembling it based on instructions in the text) mean that there are as many POEM2s as there are people. These internal POEM2s will only come to resemble one another after people expend effort comparing their mental content. Summary of Reddy's paper: If language had been operating historically under the toolmakers paradigm, these two different concepts would not currently be accessed by the same word: talking about mental content and signals as if they were the same would have led to insoluble confusion. The ambiguity of "poem" would thus have been an incurable semantic pathology. 
However, the conduit metaphor can completely ignore it. "Poem" is a paradigm case for the entire class of English words denoting signals ("word," "phrase," "sentence," "essay," "novel," "speech," "text," etc.), demonstrating that semantic structures can be completely normal in one view of reality and pathological in another. This lends support to the theory that language and views about reality develop together.

Summary of Reddy's paper: Pathology in mathematical information theory. Evidence of the biasing power of the conduit metaphor can be found in fields outside of linguistics. Information theory, with its concept-free algorithms and computers as models, would seem to be immune from effects arising from semantic pathology, because its framework shares many attributes with the toolmakers paradigm. Nevertheless, there is evidence that use of the conduit metaphor has hampered investigators' attempts to develop the theory. In information theory, communication is the transfer of information (a selection from a set of alternatives). This set, together with a language (code) relating the alternatives to physical signals, is established in advance, and a copy of each (an "a priori shared context") is placed with the sender and the receiver. A sequence of the alternatives (the message) is selected for communication to the receiver, but the message itself is not sent. The selected alternatives are related by the code to energy patterns (the signal) that travel quickly and unmodified. Mathematics is used to measure quantitatively how much the received signal narrows down the possible selections from the stored alternatives.

Summary of Reddy's paper: The similarity between the frameworks of information theory and the toolmakers paradigm is that: the shared context corresponds to the repertoire members; the signal does not contain the message; the signal carries neither the alternatives nor a replica of the message; the signal in the former is the blueprint in the latter; the receiver uses the signal to duplicate the sender's selection process and recreate the message; and if a signal is received but the shared context is damaged or missing, the proper selection cannot be made. This analogy notwithstanding, while information theory has been useful in simple, technical applications, attempts to apply it in biology, the social sciences, and human language and behavior have historically been less successful. These attempts foundered by misunderstanding the conceptual framework of the theory rather than its mathematics, and reliance on ordinary language has made information theory's insights less clear. The negative impact of ordinary language on information theory's use in other fields can be traced to terms the founders themselves used to label parts of their paradigm, which was telegraphy. The set of alternatives (repertoire members) was called the "alphabet." While apt for telegraphy, Claude Shannon and Warren Weaver used the term as a nomenclature referring to any set of alternative states, behaviors, etc. The "alphabet" blurs the distinction between signals and repertoire members in human communication. Despite Weaver's particular interest in applying the theory to language, this fact went unrecognized.

Summary of Reddy's paper: Shannon and Weaver were also unaware that the choice of the term "message" to represent the selection of alternatives from the repertoire shared the same semantic pathology as "poem."
The ambiguity shows in examples such as "I got your message (MESSAGE1), but had no time to read it" and "Okay, John, I get the message (MESSAGE2); let's leave him alone." Because MESSAGE1 is the signal while MESSAGE2 is the repertoire members, reasoning that conflates the two is faulty. The ambiguity is trivial in the conduit-metaphor framework, but fatal for information theory, which is based on the idea that MESSAGE2 cannot be transmitted. Although Shannon and Weaver noted the distinction between "received" and "transmitted" signals based on possible distortion and noise, they wrote the word "message" on the receiving end of their paradigm. Weaver employed many conduit-metaphor expressions; for example, "How precisely do the transmitted symbols convey the desired meaning?" [italics Reddy's]. He also contrasted two "messages, one of which is heavily loaded with meaning and the other of which is pure nonsense." Weaver wrote as if repertoire members are sent, adding that the sender "changes the message into the signal" [italics Weaver's]. But a code specifies how two systems relate, without changing anything; it preserves in the receiver the organizational pattern of the sender. Marks and sounds do not change into electrons, just as thoughts do not change into words.

Summary of Reddy's paper: Shannon correctly wrote, "The receiver ordinarily performs the inverse operation of that done by the transmitter, reconstructing the message from the signal." But conduit metaphors continue to appear in the form of "encode" and "decode," defined as putting the repertoire members into code and taking them out, respectively. In addition, because the theory conceives of information as the ability to copy an organization via nonrandom selections, the term "information content" is itself a misnomer: signals do something but cannot contain anything. The conduit metaphor has thus influenced the thinking of information theorists in a counterproductive way. Confusion between the message and the signal persisted for two decades as theorists in other fields of inquiry drew on the insights of information theory. Kenneth K. Sereno and C. David Mortensen wrote that "investigators have yet to establish a completely acceptable definition of communication".

Summary of Reddy's paper: "Those models based upon a mathematical conception describe communication as analogous to the operations of an information processing machine: an event occurs in which a source or sender transmits a signal or message through a channel to some destination or receiver." [italics Sereno & Mortensen's] Additionally, when they state, "The theory was concerned with the problem of defining the quantity of information contained in a message to be transmitted...," they assert that information is contained in a transmitted "message". If "message" here refers to MESSAGE1, this is the conduit metaphor asserting that information is contained in the signals. If it is MESSAGE2, then it is the repertoire members that are sent in signals, which contain measurable information. Either way, the insights of information theory have been obscured by use of the conduit metaphor instead of the more accurate toolmakers paradigm, upon which its premises were initially based.
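The selection-and-code model just described is easy to make concrete. The following Python sketch is purely illustrative (the alternatives, the code, and the function names are this article's assumptions, not Shannon's or Reddy's): sender and receiver hold an a priori shared context, only the signal travels between them, and the receiver reconstructs the message by the inverse operation.

```python
# A minimal sketch of the information-theoretic view described above
# (illustrative only; names and values are hypothetical).

# The "a priori shared context": a set of alternatives and a code
# relating each alternative to a physical signal pattern.
ALTERNATIVES = ["north", "south", "east", "west"]
CODE = {alt: format(i, "02b") for i, alt in enumerate(ALTERNATIVES)}  # "north" -> "00"
INVERSE_CODE = {bits: alt for alt, bits in CODE.items()}

def transmit(message):
    """Sender side: the message is a selection of alternatives.
    Only the signal (a bit string) leaves the sender."""
    return "".join(CODE[alt] for alt in message)

def receive(signal):
    """Receiver side: duplicate the sender's selection process by
    performing the inverse operation on the signal (cf. Shannon)."""
    chunks = [signal[i:i + 2] for i in range(0, len(signal), 2)]
    return [INVERSE_CODE[c] for c in chunks]

message = ["north", "east", "north"]   # MESSAGE2: a selection, never sent
signal = transmit(message)             # MESSAGE1: the only thing that travels
print(signal)                          # '000100'
print(receive(signal))                 # ['north', 'east', 'north']
# If the shared context (CODE) were damaged or missing at the receiver,
# the proper selection could not be reconstructed from the signal.
```

Nothing in the bit string "contains" the message; it only lets a receiver who already holds the shared context narrow down the selections, which is exactly the point of the analogy with the toolmakers paradigm's blueprints.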
Summary of Reddy's paper: Opposition of conflicting paradigms. The conduit-metaphor paradigm states that communication failure needs explanation, because success should be automatic: materials are naturally gathered, but misguided people expend energy scattering them. Conversely, the toolmakers paradigm states that partial miscommunication is inherent and can only be remedied by continuous effort and extensive verbal interaction: materials are gathered using energy, or they will be naturally scattered.

Summary of Reddy's paper: Reddy explores some of the potential social and psychological effects of believing that communication is a "success without effort" system when it is in fact an "energy must be expended" system. The conduit metaphor objectifies meaning and influences people to talk and think about mental content as if it possessed an external, inter-subjective reality.

Summary of Reddy's paper: Cultural and social implications. Having discussed the conduit metaphor's impact on theorists within and outside of linguistics, Reddy speculates about its distorting potential in culture and society. He points out that "You'll find better ideas than that in the library" is a conduit metaphor asserting that ideas are in words, which are on pages, which are in books, which are in libraries—with the result that "ideas are in libraries." The implication of this minor-framework core expression is that libraries full of books, tapes, photographs, videos and electronic media contain culture.

Summary of Reddy's paper: In the toolmakers-paradigm perspective, there are no ideas in the words; therefore, there are none in libraries. Instead, there are patterns of marks, bumps or magnetized particles capable of creating patterns of noise and light. Using these patterns as instructions, people can reconstruct mental content resembling that of those long gone. But since people in the past experienced a different world and used slightly different language instructions, a person unschooled in the language and lacking a full reservoir of mental content from which to draw is unlikely to reconstruct a cultural heritage.

Summary of Reddy's paper: Because culture does not exist in books or libraries, it must be continually reconstructed in people's brains. Libraries preserve the opportunity to perform this reconstruction, but if language skills and the habit of reconstruction are not preserved, there will be no culture. Thus, Reddy asserts that the only way to preserve culture is to train people to "regrow" it in others.

Summary of Reddy's paper: He stresses that the difference of viewpoint between the conduit metaphor and the toolmakers paradigm is profound. Humanists—those traditionally charged with reconstructing culture and teaching others to reconstruct it—are increasingly rare. Reddy proposes that, despite a sophisticated system for mass communication, there is actually less communication; and moreover, that people are following a flawed manual. The conduit-metaphor-influenced view is that the more signals created and preserved, the more ideas "transferred" and "stored." Society is thus often neglecting the human ability to reconstruct thought patterns on the basis of signals. This ability atrophies when "extraction" is seen as a trivial process not requiring instruction past a rudimentary level.

Summary of Reddy's paper: Reddy concludes that the conduit metaphor may continue to have negative technological and social consequences: mass communications systems that largely ignore the internal, human systems responsible for the majority of the work in communicating.
Because the logical framework of the conduit metaphor indicates people think in terms of "capturing ideas in words"—despite there being no ideas "within" the ever-increasing stream of words—a burgeoning public may be less culturally informed than expected.

Post-publication research by others: Since the publication of Reddy's paper in 1979, it has garnered a large number of citations in linguistics, as well as in a wide spectrum of other fields of inquiry. In 2007, a search at Web of Science revealed 354 citations, broken down roughly as follows: 137 in linguistics; 45 in information science; 43 in psychology; 38 in education; 17 in sociology; 15 in anthropology; 10 in law; 9 in business/economics; 8 in neurology; 7 in medicine; 5 in political science; 4 each in the arts, biology, environmental science, and mathematics; and 1 each in architecture, geography, parapsychology and robotics.

Further online reading:
Managerial and organizational communication in terms of the conduit metaphor (Stephen R. Axley examines "the theoretical and empirical bases of Reddy's provocative thesis")
The contemporary theory of metaphor (George Lakoff, University of California, San Diego)
Metaphors we live by, excerpt (George Lakoff and Mark Johnson examine metaphor and provide a synopsis of the conduit metaphor)
Programming with agents (Michael Travers compares the conduit-metaphor and toolmakers paradigms)
Metonymic motivation of the conduit metaphor (Celia Martín de León examines the role of metonymy in the conduit metaphor)
The "conduit metaphor" revisited: A reassessment of metaphors for communication (Joe Grady of the University of California, Berkeley, criticizes existing analyses of the conduit metaphor)
Exculpation of the conduit metaphor (Tomasz P. Krzeszowski examines the conduit metaphor in Language History and Linguistic Modelling: A Festschrift for Jacek Fisiak on His 60th Birthday, Trends in Linguistics. Studies and Monographs, 101, Vol. 1)
The poetics of mind: figurative thought, language, and understanding (Raymond W. Gibbs examines the influence of the conduit metaphor in the context of poetics)
Constructions: a construction grammar approach to argument structure (Adele E. Goldberg discusses the conduit metaphor in the context of ditransitive argument structure)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nucleoside phosphotransferase** Nucleoside phosphotransferase: In enzymology, a nucleoside phosphotransferase (EC 2.7.1.77) is an enzyme that catalyzes the chemical reaction

a nucleotide + a 2'-deoxynucleoside ⇌ a nucleoside + a 2'-deoxynucleoside 5'-phosphate

Thus, the two substrates of this enzyme are a nucleotide and a 2'-deoxynucleoside, whereas its two products are a nucleoside and a 2'-deoxynucleoside 5'-phosphate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is nucleotide:nucleoside 5'-phosphotransferase. Other names in common use include nonspecific nucleoside phosphotransferase and nucleotide:3'-deoxynucleoside 5'-phosphotransferase.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Environmental velocity** Environmental velocity: In strategic management and organizational theory, environmental velocity is the rate and direction of change of the notional space in which organizations exist. This "space" consists of the political, technological, economic and competitive environment that influences an organization. Organizations that can adjust and entrain their activities to suit their environmental velocity will have a competitive advantage over those that cannot.

Decision making: Eisenhardt & Bourgeois (1988) proposed the concept of environmental velocity when studying strategic decision making in the micro-computer industry. They argued that this particular industry could be characterized as having a high-velocity environment, because it exhibited rapid and discontinuous change in demand, competition, technology and regulations. In a number of subsequent studies, it has been determined that success in high-velocity environments is related to fast, formal strategic decision-making processes and the use of heuristic reasoning processes.

Innovation and organizational change: In line with contingency theory, an organization's environmental velocity dictates the rate at which high-performing organizations should adapt. In a study that examined the link between product innovation and organizational change, Eisenhardt and Tabrizi (1995) show that rapid product development facilitates fast organizational change and thus gives firms the capability to keep pace with fast-changing environments. Similarly, it has been found that the management of multiple-product innovation projects by firms induces improvisation and experimentation behaviors within these firms. These behaviors help firms to consistently succeed in high-velocity environments.

Cognition: In the context of environmental velocity, research has examined the link between firm collective cognition and perceived environmental velocity. That is, how do the collective beliefs and associated practices of a firm shape how members of the firm perceive the velocity conditions of the environment? In a study of firms in the aircraft and semiconductor industries, it was found that environmental velocity is not simply an external and objective condition to which firms react; rather, firms collectively construct their environmental velocity through their social networks, collective assumptions, and environmental scanning approaches (Nadkarni and Narayanan, 2007). Furthermore, firms should employ adaptive scanning and sensemaking approaches to effectively understand and deal with the dynamism in high-velocity environments.

Velocity regimes: In a key review of some of the major studies in this area, McCarthy et al. (2010) found that researchers and managers often focus on the rate or speed of change only, treating velocity as a single, latent aspect of the environment, characterized simply as being “high” or “low”. However, the original definition of environmental velocity (Eisenhardt and Bourgeois 1988) defines and describes it as a vector quality, composed of both rate and direction of change across multiple dimensions (e.g., regulations, demand, product, technology, and competition). Velocity regimes: McCarthy et al. (2010) developed a framework that describes the relationships between these multiple velocity dimensions, noting that they may each have a distinct and often different velocity.
They define “velocity homology” as the degree to which velocity dimensions have similar rates and directions of change, and “velocity coupling” as the degree to which the velocities of different dimensions affect one another. This multidimensional treatment of environmental velocity results in four “velocity regimes” - simple, divergent, conflicted and integrated - based on the patterns of velocity homology and velocity coupling. A key implication of the framework is that firms should not necessarily focus on being uniformly fast (or slow) to suit industry conditions. Each of the four velocity regimes requires firms to maintain different forms of temporal fit (i.e., the entrainment of multiple organizational paces) and temporal coordination (i.e., managing the interdependences between organizational paces). Temporal orientations: McCarthy et al. (2010) explain that each of the velocity regimes they propose is suited to a different temporal orientation, which they define as “how individuals and teams conceive of time". Specifically, they argue that when velocity dimensions are tightly coupled to each other (i.e., “the relationship between the velocities of different dimensions involve significant immediate, direct causal effects”), an organization's capabilities would benefit from a polychronic orientation. In contrast, when velocity regimes are loosely coupled (i.e., “changes in the velocity of one dimension have relatively little immediate, direct impact on the velocities of other dimensions”), an organization's capabilities would benefit from a monochronic orientation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aesthetic interpretation** Aesthetic interpretation: In the philosophy of art, an interpretation is an explanation of the meaning of a work of art. An aesthetic interpretation expresses a particular emotional or experiential understanding, most often used in reference to a poem or piece of literature, and may also apply to a work of visual art or performance.

Aims of interpretation: Readers may approach reading a text from different starting points. A student assigned to interpret a poem for class comes at reading differently from someone on the beach reading a novel for escapist pleasure. "Interpretation" implies the conscious task of making sense out of a piece of writing that may not be clear at first glance, or that may reward deeper reading even if it at first appears perfectly clear. The beach reader will probably not need to interpret what she or he reads, but the student will. Professor Louise Rosenblatt, a specialist in the study of the reading process, distinguished between two reading stances that occupy opposite ends of a spectrum. Aesthetic reading differs from efferent reading in that the former describes a reader coming to the text expecting to devote attention to the words themselves, to take pleasure in their sounds, images, connotations, etc. Efferent reading, on the other hand, describes someone "reading for knowledge, for information, or for the conclusion to an argument, or maybe for directions as to action, as in a recipe..., reading for what [one is] going to carry away afterwards. I term this efferent reading." (L. Rosenblatt) That is what "efferent" means: leading or conducting away from something, in this case information from a text. On this view, poems and stories do not offer the reader a message to carry away, but a pleasure in which to participate actively—aesthetically.

One or many: There are many different theories of interpretation. On the one hand, there may be innumerable interpretations for any given piece of art, any one of which may be considered valid. However, it may also be claimed that there is really only one valid interpretation for any given piece of art. The aesthetic theory that says people may approach art with different but equally valid aims is called "pluralism." But some interpretations aim to make claims that are true or false. One or many: A "relativistic" kind of claim - between "All readings are equally good" and "Only one reading is correct" - holds that readings that tie together more details of the text, and that gain the approval of practiced readers, are better than ones that do not. One kind of relativistic interpretation is called "formal," referring to the "form" or shape of patterns in the words of a text, especially a poem or song. The rhymes at the ends of lines form an objective set of resemblances in a poem. A reader of Edgar Allan Poe's "The Raven" cannot help but hear the repetition of "nevermore" as a formal element of Poe's poem.
Less obvious and a bit subjective would be an interpreter's pointing to the resemblance tying together all the mentions of weariness, napping, dreaming, and the drug nepenthe. In the early 20th century, the German philosopher Martin Heidegger explored questions of formal philosophical analysis versus personal interpretations of aesthetic experience, privileging the direct subjective experience of a work of art as essential to an individual's aesthetic interpretation. A contemporary theory, informed by the awareness that an ever-expanding exposure to ideas made possible by the internet has changed both the act of creation and the experience of perception, is known as Multi Factorial Apperception (MFA). This approach seeks to integrate a wide range of cultural variables to expand the contextual frame for viewing the creation, and perception, of aesthetic experience. Emphasis is on a dynamic, multi-layered cultural framing of the act of creation at a particular moment in time, and the theory admits that the meaning of a particular work will be in flux from that moment onward.

Intended interpretation: Some students of the reading process advocate that a reader should attempt to identify what the artist is trying to accomplish and interpret the art in terms of whether or not the artist has succeeded. Professor E. D. Hirsch wrote two books arguing that "the author's intention must be the ultimate determiner of meaning." (E. D. Hirsch) In this controversial view, there is a single correct interpretation consistent with the artist's intention for any given art work.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Samsung SGH-T639** Samsung SGH-T639: The Samsung SGH-T639 is a clamshell mobile phone manufactured by Samsung and offered by T-Mobile. It has four external changeable faceplates, each with a different color. All four colors (blue, red, olive, and navy) come with each phone. The external screen is longer and more slender than on many similar phones, measuring 1.375 inches (3.49 cm) in length by 0.4375 inches (1.111 cm) in width. Users can choose the color of the text displayed on the external screen immediately after the phone is closed. The larger internal screen, revealed by opening the phone in portrait mode, measures 1.75 by 1.375 inches (4.45 by 3.49 cm). Samsung SGH-T639: The SGH-T639 has a 1.3-megapixel camera/camcorder with a small rectangular self-portrait mirror. It has 30 megabytes (MB) of phone storage and a microSD memory card slot. It operates on T-Mobile, a GSM network, and has a SIM card, which can be accessed by removing the battery case. It has a numerical keypad and an OK button surrounded by four navigational arrow keys (which double as shortcuts to call records, voice note recording, the contact list, and new text messages, respectively), and two soft keys, along with T-Zones (T-Mobile's web browser), call, end/power, clear, and shortcut keys. The customizable shortcut key with the square icon can open any feature, application, or menu on the phone. On the left side the phone has two up/down volume rocker keys and the combination charger/headphone jack; on the right it has a key that accesses camera mode, as well as the microSD memory card slot.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**NeuN** NeuN: NeuN (Fox-3, Rbfox3, or Hexaribonucleotide Binding Protein-3), a protein which is a homologue to the protein product of a sex-determining gene in Caenorhabditis elegans, is a neuronal nuclear antigen that is commonly used as a biomarker for neurons.

History: NeuN was first described in 1992 by Mullen et al., who raised a series of monoclonal antibodies to mouse antigens with the original intent of finding mouse species-specific immunological markers for use in transplantation experiments. In the event, they isolated a hybridoma line which produced a monoclonal antibody called mAb A60, which proved to bind an antigen expressed only in neuronal nuclei (and to a lesser extent the cytoplasm of neuronal cells), and which appeared to work on all vertebrates. This antigen was therefore known as NeuN, for "Neuronal Nuclei", though what the A60 antibody was binding remained unknown for the next 17 years. In 2009, Kim et al. used proteomic methods to show that NeuN corresponds to a protein known as Fox-3, also known as Rbfox3, a mammalian homologue of Fox-1, a protein originally identified from genetic studies of the nematode worm C. elegans.

Structure: Western blotting shows that mAb A60 binds to two bands of apparent molecular weight ~46 kDa and ~48 kDa on SDS-PAGE. These two bands are generated from a single Fox-3 gene by alternative splicing. There are in fact four protein products from the Fox-3 gene as a result of the presence or absence of two amino acid sequences coded by two exons. The inclusion or absence of 47 amino acids from exon 12 results in the ~46 kDa and ~48 kDa bands seen on SDS-PAGE gels, while the inclusion or absence of 14 amino acids from exon 15 produces two forms which are too similar in molecular size to be discerned on typical SDS-PAGE gels. Interestingly, the segment coded by exon 15 adds a C-terminal PY-type nuclear localization sequence, which presumably explains why NeuN/Fox-3 protein can be both nuclear and, in some cell types, also cytoplasmic. All forms are expressed only in neurons, so the mAb A60 antibody and other similar antibodies to NeuN/Fox-3 have become very widely used as robust markers of neurons.

Uses as a Neuronal Biomarker: NeuN antibodies are widely used to label neurons, despite some shortcomings, and a January 2023 PubMed search using the keyword "NeuN" produced 4200 hits. A few neuronal cell types are not recognized by NeuN antibodies, such as Purkinje cells, stellate and Golgi cells of the cerebellum, olfactory mitral cells, retinal photoreceptors and spinal cord gamma motor neurons. However, the vast majority of neurons are strongly NeuN-positive, and NeuN immunoreactivity has been widely used to identify neurons in tissue culture and in sections, and to measure the neuron/glia ratio in brain regions. NeuN immunoreactivity becomes obvious as neurons mature, typically after they have downregulated expression of Doublecortin, a marker seen in the earliest stages of neuronal development.

Feminizing Locus on X Homologue: Fox-3 is one of a family of mammalian homologues of the Fox-1 protein, originally discovered in the nematode worm C. elegans as the protein product of a gene involved in sex determination. Fox is, in fact, an acronym of "Feminizing locus on X". The mammalian genome contains three genes homologous to C. elegans Fox-1, called Fox-1, Fox-2 and Fox-3. The Fox proteins are all about 46 kDa in size, and each includes a central, highly conserved ~70 amino acid RRM, or RNA recognition motif.
RRM domains are among the most common in the human genome and are found in numerous proteins which bind RNA molecules. NeuN/Fox-3 and the other Fox proteins function in the regulation of mRNA splicing and bind specific RNA sequences. An alternate name for Fox-3 is hexaribonucleotide binding protein 3, since Fox-3, like Fox-2 and Fox-1, binds the hexaribonucleotide UGCAUG, this binding being involved in the regulation of mRNA splicing.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quad Quandary** Quad Quandary: Quad Quandary was the game for the 2007-2008 FIRST Tech Challenge robot competition, the first challenge theme after the competition replaced the former FIRST Vex Challenge, with similar general rules regarding the specifications of the robot and the game play. Unlike the previous challenge, Hangin'-A-Round, Quad Quandary makes use of small rings and movable goal posts.

Robot rules: The largest acceptable size for the robot is 18"x18"x18". Teams may not introduce a new robot at any time during a match, and no two identical robots are allowed in a match. Upon entering the competition, all robots must pass inspection, and if substantially modified, must be reinspected. The robots must contain only Vex parts (though not all Vex parts are competition legal) and must not be potentially damaging to the playing field, other robots, or the players.

The Challenge: The main characteristic of the Quad Quandary challenge is defined by its field, which is a 12' by 12' square divided into four equal quadrants. Each alliance (red and blue) is given two quadrants, of its color, on opposite sides of the field. The field is split using two diagonal lines. The challenge uses rings (placed on the opposite color's quadrants and mixed on the quadrant division line) and two different kinds of posts — two 18 inches (457 mm) high posts (atop the single goals), and two 24 inches (610 mm) posts connected by a 60 inches (1,524 mm) bar, which rest atop the paired goals. The bar rests 9 inches (229 mm) off the ground but, during the match, can be raised up to 15 inches (381 mm). The rings may also be placed on a 3.5 inches (89 mm) high base, known as a goal (single or paired), that holds the posts. The goals are on casters and thus can be moved. There are also four 20 square inches (129 cm2) low goals centered along the edges. The rings used have a 3 inches (76 mm) inner diameter and are 1 inch (25 mm) thick. There are 50 rings total: 25 red and 25 blue. There are 44 total rings on the field, and three available to each alliance to load on their robot before the match starts.

Scoring: One ring in the low (ground) goal is worth one point. One ring on a single or paired goal is worth two points (the ring must be placed completely inside the goal's outer edges). One ring "rung" on an 18 inches (457 mm) post is worth three points. Scoring: One ring "rung" on a 24 inches (610 mm) post is worth five points. (All rings scored award points to their corresponding colored alliance.) A single or paired goal that is in an alliance's quadrant at the end of the match awards that alliance 7 points (determined not by the area of the goal in the quadrant but by where the goal's post lies). Scoring: The winner of the autonomous period is given a 10-point bonus.
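As a worked example of these scoring rules, here is a small Python sketch; the helper function is hypothetical (not official FIRST scoring software) and simply tallies one alliance's points from the counts above:

```python
# Hypothetical helper applying the Quad Quandary scoring rules above;
# not official FIRST scoring software.

def alliance_score(low_goal_rings, goal_rings, rings_on_18in_posts,
                   rings_on_24in_posts, goals_in_own_quadrants,
                   won_autonomous):
    score = low_goal_rings * 1           # rings in the low (ground) goals
    score += goal_rings * 2              # rings resting on single/paired goals
    score += rings_on_18in_posts * 3     # rings rung on 18-inch posts
    score += rings_on_24in_posts * 5     # rings rung on 24-inch posts
    score += goals_in_own_quadrants * 7  # goals whose posts lie in the alliance's quadrants
    if won_autonomous:
        score += 10                      # autonomous-period bonus
    return score

# Example: 4 low-goal rings, 6 rings on goals, 3 rung on 18" posts,
# 2 rung on 24" posts, both movable goals captured, autonomous won.
print(alliance_score(4, 6, 3, 2, 2, True))  # 4 + 12 + 9 + 10 + 14 + 10 = 59
```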
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SmartDO** SmartDO: SmartDO is a multidisciplinary design optimization software package, based on the Direct Global Search technology developed and marketed by FEA-Opt Technology. SmartDO specializes in CAE-based optimization, such as CAE (computer-aided engineering), FEA (finite element analysis), CAD (computer-aided design), CFD (computational fluid dynamics) and automatic control, with application to various physical phenomena. It is driven by both a GUI and scripting, allowing it to be integrated with almost any kind of CAD/CAE or in-house code. SmartDO: SmartDO focuses on direct global optimization solvers, which require little parametric study or tweaking of solver parameters. Because of this, SmartDO has frequently been customized as a push-button expert system.

History: SmartDO originated in 1995 with its founder (Dr. Shen-Yeh Chen) during his Ph.D. study in the Civil Engineering Department of Arizona State University. From 1998 to 2004, SmartDO was continuously developed and applied as an in-house code in the aerospace industry and in CAE consulting. In 2005, Dr. Chen established FEA-Opt Technology as a CAE consulting firm and software vendor. The first commercialized version, 1.0, was published in 2006 by FEA-Opt Technology. In 2012, FEA-Opt Technology signed partner agreements with both ANSYS and MSC Software based on SmartDO.

Process integration: SmartDO uses both GUI- and scripting-based interfaces to integrate with third-party software. The GUI covers general operation of SmartDO and includes a package-specific linking interface called SmartLink. SmartLink can link with third-party CAE software, such as ANSYS Workbench. The user can cross-link any parameters in ANSYS Workbench to any design parameters in SmartDO, such as design variables, the objective function, and constraints, and SmartDO will usually solve the problem well with the default settings. Process integration: The scripting interface in SmartDO is based on the Tcl/Tk shell. This makes SmartDO able to link with almost any kind of third-party software and in-house code. SmartDO comes with the SmartScripting GUI for generating Tcl/Tk scripts automatically: the user answers questions in the SmartScripting GUI, and SmartScripting generates the Tcl/Tk scripts. The flexible scripting interface allows SmartDO to be customized as a push-button automatic design system.

Design optimization: SmartDO uses a Direct Global Search methodology to achieve global optimization, including both gradient-based nonlinear programming and genetic-algorithm-based stochastic programming. These two approaches can also be combined or mixed to solve specific problems. For all the solvers in SmartDO, there is no theoretical or coding restriction on the number of design variables or constraints. SmartDO can start from an infeasible design point, pushing the design into the feasible domain first, and then proceed with optimization. Design optimization: Gradient-Based Nonlinear Programming. SmartDO uses the Generalized Reduced Gradient Method and the Method of Feasible Directions as its foundation for solving constrained nonlinear programming problems. To achieve global search capability, SmartDO also uses tunneling and hill climbing to escape from local minima. This also enables SmartDO to eliminate the numerical noise caused by meshing, discretization, and other phenomena during numerical analysis. Other unique technologies include automatic recognition of active constraints.
Design optimization: They also include Smart Dynamic Search, which automatically adjusts the search direction and step size. Genetic Algorithm. The genetic algorithm in SmartDO was part of the founder's Ph.D. dissertation and is called the Robust Genetic Algorithms. It includes some special approaches to achieve stability and efficiency, for example: an adaptive penalty function, automatic schema representation, automatic population and generation number calculation, adaptive and automatic cross-over probability calculation, and absolute descent. Because various types of design variables are available in the Robust Genetic Algorithms, users can perform concurrent sizing, shaping and topology optimization with SmartDO.

Applications: SmartDO has been widely applied to industrial design and control since 1995. The disciplines and physical phenomena include structures, CFD, heat flow, heat transfer, crashworthiness, coupled structural/thermal/electronic problems, and automatic control. Applications include: life prolonging of semiconductor components; keratotomy surgeries; civil structure and residential roof optimization (sizing, shaping and topology); life prolonging and weight reduction of gas turbine engine components; performance enhancement of fluid power systems; weight reduction and strength increase of a nuclear heavy-duty lifting hook; performance optimization of shock-absorbing mechanisms; weight reduction of an air cargo deck; performance optimization of thermoelectric generators; weight reduction of the lower A-arm of an armored tank; performance curve optimization for keyboard rubber domes; performance curve optimization for connectors; composite structure optimization; strength optimization of a circulation water pump in a power plant; structural optimization of a wave energy converter; performance optimization of a jet nozzle; optimization of the O-ring sealing for a steel charger; performance enhancement of a golf club head; crashworthiness optimization of a crash box; and structural optimization of ceramic gas turbine engine rotor disks.
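To illustrate the general idea, mentioned under Design optimization above, of starting from an infeasible design and pushing it into the feasible domain, here is a generic quadratic-penalty sketch in Python. It is not SmartDO code, and this simple fixed-schedule penalty is only a toy stand-in for the adaptive penalty function named above:

```python
# Generic quadratic-penalty optimization sketch (not SmartDO code).
# Minimize f(x) subject to g(x) <= 0, starting from an infeasible point:
# the growing penalty term pushes the design into the feasible domain.

def f(x):                      # objective: distance from the origin
    return x[0] ** 2 + x[1] ** 2

def g(x):                      # constraint g(x) <= 0 means x0 + x1 >= 1
    return 1.0 - (x[0] + x[1])

def penalized(x, r):
    return f(x) + r * max(0.0, g(x)) ** 2

def num_grad(func, x, h=1e-6):
    # Central-difference numerical gradient in two dimensions.
    return [(func([x[0] + h, x[1]]) - func([x[0] - h, x[1]])) / (2 * h),
            (func([x[0], x[1] + h]) - func([x[0], x[1] - h])) / (2 * h)]

x = [0.0, 0.0]                 # infeasible start: g(x) = 1 > 0
r = 1.0
for outer in range(6):
    step = 0.4 / (2.0 + 4.0 * r)   # keep gradient descent stable as r grows
    for _ in range(2000):          # simple descent on the penalized objective
        grad = num_grad(lambda y: penalized(y, r), x)
        x = [x[0] - step * grad[0], x[1] - step * grad[1]]
    r *= 10.0

print(x)  # approaches the constrained optimum (0.5, 0.5)
```

Each time the penalty weight r grows, violating the constraint becomes more expensive, so the iterate is driven from the infeasible start (0, 0) toward the constrained optimum near (0.5, 0.5).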
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ż** Ż: Ż, ż (Z with overdot) is a letter consisting of the letter Z of the ISO basic Latin alphabet and an overdot. Usage: Polish. In the Polish language, ż is the final, 32nd letter of the alphabet. It typically represents the voiced retroflex fricative ([ʐ]), somewhat similar to the pronunciation of ⟨g⟩ in "mirage"; however, in word-final position or when followed by a voiceless obstruent, it is devoiced to the voiceless retroflex fricative ([ʂ]). Usage: Its pronunciation is the same as that of the digraph ⟨rz⟩, except that ⟨rz⟩ (unlike ⟨ż⟩) also undergoes devoicing when preceded by a voiceless obstruent. The difference in spelling comes from their historical pronunciations: ż originates from a palatalized /ɡ/ or /z/, while ⟨rz⟩ evolved from a palatalized ⟨r⟩. The letter was originally introduced in 1513 by Stanisław Zaborowski in his book Ortographia. Occasionally, the letter Ƶ ƶ (Z with a horizontal stroke) is used instead of Ż ż for aesthetic purposes, especially in all-caps text and handwriting. Usage: Kashubian. Kashubian ż is a voiced fricative as in Polish, but it is postalveolar ([ʒ]) rather than retroflex. Maltese. In Maltese, ż represents the voiced alveolar sibilant [z], pronounced like "z" in English "maze". This contrasts with the letter ⟨z⟩, which represents the voiceless alveolar sibilant affricate [ts], as in the word "pizza".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**5,N,N-TMT** 5,N,N-TMT: 5,N,N-trimethyltryptamine (5,N,N-TMT; 5-TMT) is a tryptamine derivative that is a psychedelic drug. It was first made in 1958 by E. H. Young. In animal experiments it was found to be between DMT and 5-MeO-DMT in potency, which would suggest an active dosage for humans in the 20–60 mg range. Human psychoactivity for this compound has been claimed in reports on websites such as Erowid but has not been independently confirmed. Legal Status: In the United States, 5,N,N-TMT is not scheduled at the federal level, but it could be considered an analog of 5-MeO-DMT, in which case sales or possession intended for human consumption could be prosecuted under the Federal Analog Act.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Siemens SL10** Siemens SL10: The Siemens SL10 is a sliding mobile phone with a four-color screen (red, green, blue, and white). It was the second mobile phone with a multicolor screen after the Siemens S10 and the first sliding mobile phone.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Biotechnology Advances** Biotechnology Advances: Biotechnology Advances is a peer-reviewed scientific journal which focuses on biotechnology principles and the industrial applications of research in agriculture, medicine, and the environment. Abstracting and indexing: The journal is abstracted and indexed in BIOSIS Previews, CAB Abstracts, Chemical Abstracts, Current Contents/Agriculture, Biology & Environmental Sciences, EMBASE, Science Citation Index, and Scopus. According to the Journal Citation Reports, the journal has a 2020 impact factor of 14.227.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**EPharmaSolutions** EPharmaSolutions: ePharmaSolutions (ePS) is an American clinical research provider and contract research organization. Its solutions include proprietary software applications and global clinical services that focus on the major areas of clinical study delays. EPharmaSolutions: ePS was founded in 2001 by Lance Converse and is headquartered in Plymouth Meeting, Pennsylvania, employing 60+ clinical researchers and technologists. ePharmaSolutions' Clinical Trial Portal technology won the 2009 and 2013 Bio-IT World awards for "Best in Class Clinical Trial Technology" and the 2013 Microsoft Lifesciences Innovation Award, making it the most widely used Clinical Trial Portal and Electronic Trial Master File solution in the pharmaceutical industry, with over 350,000 users in 130 countries.

Clinical trial software:
Clinical Trial Portal - single-sign-on application used by more than 300,000 clinical researchers in 130 countries
User Management Application & Vendor Integration Manager - single sign-on and user provisioning applications for clinical trial technologies
Investigator Database - over 300,000 investigators in 120 countries
Site Feasibility Application - secure online site feasibility and ranking application
Secure Document Exchange - online completion of study documents and contracts with digital signature
Safety Letter Distribution - safety letter distribution and tracking system
Learning Management System - delivery and tracking of online and offline training and certification programs
Patient Recruitment Manager - develops and tracks global patient recruitment and retention programs
Electronic Monitor Visit Report Application - tracks all monitor visits using online or offline capabilities (with electronic signature)
Electronic Trial Master File - tracks the completion and filing of clinical trial documents via configurable workflow and reference models

Global clinical research services:
Site Feasibility and Selection - access to over 300,000 investigators profiled by experience and patient populations
Site Activation - secure document exchange technology with site support services
Study/Site/Systems Training - training of clinical research professionals
Inter-rater Reliability Services and Data Monitoring - custom-tailored solutions for almost every scale, delivered in over 50 countries
Patient Recruitment and Retention - full service solutions supporting programs in over 60 countries
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**FDIC Enterprise Architecture Framework** FDIC Enterprise Architecture Framework: The FDIC Enterprise Architecture Framework was the enterprise architecture framework of the United States Federal Deposit Insurance Corporation (FDIC). Much of this article describes the framework as developed around 2005, which was out of date by 2011.

Overview: The FDIC's framework for implementing its Enterprise Architecture was based on Federal and industry best practices, including the Chief Information Officer (CIO) Council's Federal Enterprise Architecture Framework (FEAF) and the Zachman Framework for Enterprise Architecture. FDIC's framework was tailored to emphasize security. The historic FDIC EA framework complies with the FEAF and highlights the importance of security to all other components of the architecture. The FDIC EA framework included five components. The first component, the Business Architecture, focused on FDIC's business needs. The next three components, the Data Architecture, Applications Architecture, and Technical Infrastructure Architecture, focused on the technological capabilities that support the business and information needs. The final component, the Security Architecture, focused on specific aspects of interest to the Corporation that span the enterprise and must be integral parts of all other architectures.

History: Historically, Federal agencies managed IT investments autonomously. Until the new millennium, there was little incentive for agencies to partner to effectively reuse IT investments, share IT knowledge, and explore joint solutions. Starting in the second half of the 1990s, a collective, government-wide effort, supported by the Federal CIO Council and utilizing the Federal Enterprise Architecture (FEA), was undertaken in an effort to yield significant improvements in the management and reuse of IT investments, while improving services to citizens and facilitating business relationships internally and externally. The Federal Deposit Insurance Corporation (FDIC) first realized the value of Enterprise Architecture in 1997, when two business executives had to reconcile data that had come from different systems for a high-profile report to the banking industry. The FDIC's first EA blueprint was published in December 2002. In 2004, the FDIC received a 2004 Enterprise Architecture Excellence Award from the Zachman Institute for Framework Advancement (ZIFA) for its initiative to manage corporate data collaboratively.

EA framework topics: Historical FDIC EA framework. The FDIC EA framework from 2005 included five components. EA framework topics: Business Architecture: The Business Architecture described the activities and processes performed by the corporation to achieve its mission and to realize its vision and goals. Developing the Business Architecture was the first step in creating an Enterprise Architecture (EA) that linked the corporation's business needs to its Information Technology (IT) environment. Maximizing IT support for these requirements was intended to optimize Corporate performance. EA framework topics: Data Architecture: The Data Architecture described the activities required to obtain and maintain data that support the information needed by the corporation's major business areas. Data and information are different. Data is the foundation of information. Data is the raw material that is processed and refined to generate information.
Information consists of a collection of related data that has been processed into a form that is meaningful to the recipient. EA framework topics: Applications Architecture: The Applications Architecture described the major types of applications that manage data to produce the information needed to support the activities of the corporation. The Applications Architecture provided a framework that enabled the migration from the applications catalog and software development environment in use at the time to the target integrated applications, development and engineering environments. The target architecture promoted the use of commercial and government off-the-shelf products, consolidating applications where applicable, and the use of emerging technologies where appropriate. EA framework topics: Technical Infrastructure Architecture: The IT infrastructure provided access to application systems and office automation tools used in performance of the business processes. The Corporation placed high priority on maintaining a consistent, available, and reliable technical infrastructure. The Technical Architecture described the underlying technology for the corporation's business, data, and application processing. It included the technologies used for communications, data storage, application processing, and computing platforms. EA framework topics: Security Architecture: The Security Architecture established a framework for integrating safeguards into all layers of the FDIC's Enterprise Architecture. The security architecture used a risk management and information assurance strategy that provided access control, confidentiality, integrity, and non-repudiation for the corporation's information and systems.

EA framework topics: Self-Funding Model for Reinvestment in IT: The banking business model of 2008 had become more complex, giving rise to financial instruments such as collateralized debt obligations (CDOs) and structured investment vehicles (SIVs) to manage risk. These instruments created greater dependencies between the domestic and international financial markets. Financial institutions of that time should therefore have struck a balance between regulatory, legislative and banker concerns while appropriately managing risk. Notionally, as cost savings are realized from a simplified IT environment and more efficient processes, the savings can be reinvested for IT improvements or accrue to the corporation.

EA framework topics: 2008-2013 technology roadmap: The technology roadmap outlined the major initiatives for standardizing the IT environment and increasing IT's efficiency and effectiveness over five years. The initiatives were determined from various sources, including business-side IT roadmaps, executive management planning meetings, client planning sessions, and client year-end reviews. The three major initiatives identified were enterprise architecture, security and privacy programs, and fiscal discipline. EA framework topics: The enterprise architecture initiative focused on simplifying the environment to ensure stable and economical performance for mission-critical applications. Simplifying the environment to decrease costs included activities such as decreasing the number of application systems and migrating applications off the mainframe. Efficiencies were also hoped to be gained by expanding capabilities for manipulating large data sets and storing traditional paper-based files electronically.
The SOA service center was intended to manage code (or services) for all development teams to discover and use, which was expected to save time and costs in application development, testing and deployment. The organization planned to continue to enhance IT security and privacy programs to address new and evolving risks by improving controls over sensitive data. In some cases, technology, such as scanning outgoing e-mail for sensitive information and encrypting removable storage devices, could mitigate potential risks. The other cornerstone of mitigating risk was educating employees about emerging security and privacy issues. Lastly, in order to continue sound fiscal discipline and responsibility, the organization planned to establish IT baselines and metrics, study steady-state costs, manage service level agreements, and more judiciously choose new development projects.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kalman filter** Kalman filter: In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, who was one of the primary developers of its theory.

Kalman filter: This digital filter is sometimes termed the Stratonovich–Kalman–Bucy filter because it is a special case of a more general, nonlinear filter developed somewhat earlier by the Soviet mathematician Ruslan Stratonovich. In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before summer 1961, when Kalman met with Stratonovich during a conference in Moscow. Kalman filtering has numerous technological applications. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft, and dynamically positioned ships. Furthermore, Kalman filtering is widely applied in time series analysis, for topics such as signal processing and econometrics. Kalman filtering is also one of the main topics of robotic motion planning and control, and can be used for trajectory optimization. Kalman filtering also works for modeling the central nervous system's control of movement. Due to the time delay between issuing motor commands and receiving sensory feedback, the use of Kalman filters provides a realistic model for making estimates of the current state of a motor system and issuing updated commands. The algorithm works by a two-phase process. In the prediction phase, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with greater certainty. The algorithm is recursive. It can operate in real time, using only the present input measurements and the state calculated previously, together with its uncertainty matrix; no additional past information is required.

Kalman filter: Optimality of Kalman filtering assumes that errors have a normal (Gaussian) distribution. In the words of Rudolf E. Kálmán: "In summary, the following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear." However, regardless of Gaussianity, if the process and measurement covariances are known, the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense.

Kalman filter: It is a common misconception (perpetuated in the literature) that the Kalman filter cannot be rigorously applied unless all noise processes are assumed to be Gaussian. Extensions and generalizations of the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter, which work on nonlinear systems. The underlying model is a hidden Markov model in which the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions.
Kalman filtering has been used successfully in multi-sensor fusion and distributed sensor networks to develop distributed or consensus Kalman filtering. History: The filtering method is named for Hungarian émigré Rudolf E. Kálmán, although Thorvald Nicolai Thiele and Peter Swerling developed a similar algorithm earlier. Richard S. Bucy of the Johns Hopkins Applied Physics Laboratory contributed to the theory, and it is sometimes known as Kalman–Bucy filtering. History: Stanley F. Schmidt is generally credited with developing the first implementation of a Kalman filter. He realized that the filter could be divided into two distinct parts, with one part for time periods between sensor outputs and another part for incorporating measurements. It was during a visit by Kálmán to the NASA Ames Research Center that Schmidt saw the applicability of Kálmán's ideas to the nonlinear problem of trajectory estimation for the Apollo program, resulting in its incorporation in the Apollo navigation computer. The Kalman filter was first described and partially developed in technical papers by Swerling (1958), Kalman (1960) and Kalman and Bucy (1961). History: The Apollo computer used 2k of magnetic core RAM and 36k wire rope [...]. The CPU was built from ICs [...]. Clock speed was under 100 kHz [...]. The fact that the MIT engineers were able to pack such good software (one of the very first applications of the Kalman filter) into such a tiny computer is truly remarkable. History: Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. They are also used in the guidance and navigation systems of reusable launch vehicles and the attitude control and navigation systems of spacecraft which dock at the International Space Station. Overview of the calculation: Kalman filtering uses a system's dynamic model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate of the system's varying quantities (its state) that is better than the estimate obtained by using only one measurement alone. As such, it is a common sensor fusion and data fusion algorithm. Overview of the calculation: Noisy sensor data, approximations in the equations that describe the system evolution, and external factors that are not accounted for all limit how well it is possible to determine the system's state. The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors. The Kalman filter produces an estimate of the state of the system as an average of the system's predicted state and of the new measurement using a weighted average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration.
This means that the Kalman filter works recursively and requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state. Overview of the calculation: The relative certainty of the measurements and of the current state estimate is an important consideration. It is common to discuss the filter's response in terms of the Kalman filter's gain. The Kalman gain is the weight given to the measurements relative to the current state estimate, and can be "tuned" to achieve a particular performance. With a high gain, the filter places more weight on the most recent measurements, and thus conforms to them more responsively. With a low gain, the filter conforms to the model predictions more closely. At the extremes, a high gain close to one will result in a more jumpy estimated trajectory, while a low gain close to zero will smooth out noise but decrease the responsiveness. Overview of the calculation: When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices because of the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances. Example application: As an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with a GPS unit that provides an estimate of the position within a few meters. The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though remaining within a few meters of the real position. In addition, since the truck is expected to follow the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a technique known as dead reckoning. Typically, dead reckoning will provide a very smooth estimate of the truck's position, but it will drift over time as small errors accumulate. Example application: For this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physical laws of motion (the dynamic or "state transition" model). Not only will a new position estimate be calculated, but a new covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning position estimate at high speeds but very certain about the position estimate at low speeds. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, as the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back toward the real position but not disturb it to the point of becoming noisy and rapidly jumping. Technical description and context: The Kalman filter is an efficient recursive filter estimating the internal state of a linear dynamic system from a series of noisy measurements.
It is used in a wide range of engineering and econometric applications from radar and computer vision to estimation of structural macroeconomic models, and is an important topic in control theory and control systems engineering. Together with the linear-quadratic regulator (LQR), the Kalman filter solves the linear–quadratic–Gaussian control problem (LQG). The Kalman filter, the linear-quadratic regulator, and the linear–quadratic–Gaussian controller are solutions to what arguably are the most fundamental problems of control theory. Technical description and context: In most applications, the internal state is much larger (has more degrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state. In the Dempster–Shafer theory, each state equation or observation is considered a special case of a linear belief function, and Kalman filtering is a special case of combining linear belief functions on a join-tree or Markov tree. Additional methods include belief filtering, which uses Bayes or evidential updates to the state equations. Technical description and context: A wide variety of Kalman filters now exists, including Kalman's original formulation (now termed the "simple" Kalman filter), the Kalman–Bucy filter, Schmidt's "extended" filter, the information filter, and a variety of "square-root" filters developed by Bierman, Thornton, and many others. Perhaps the most commonly used type of very simple Kalman filter is the phase-locked loop, which is now ubiquitous in radios, especially frequency modulation (FM) radios, television sets, satellite communications receivers, outer space communications systems, and nearly any other electronic communications equipment. Underlying dynamic system model: Kalman filtering is based on linear dynamic systems discretized in the time domain. They are modeled on a Markov chain built on linear operators perturbed by errors that may include Gaussian noise. The state of the target system refers to the ground-truth (yet hidden) system configuration of interest, which is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the measurable outputs (i.e., observation) from the true ("hidden") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the difference that the hidden state variables have values in a continuous space as opposed to a discrete state space as for the hidden Markov model. There is a strong analogy between the equations of a Kalman filter and those of the hidden Markov model. A review of this and other models is given in Roweis and Ghahramani (1999) and Hamilton (1994), Chapter 13. In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the following framework.
This means specifying, for each time-step k, the following matrices: Fk, the state-transition model; Hk, the observation model; Qk, the covariance of the process noise; Rk, the covariance of the observation noise; and sometimes Bk, the control-input model as described below; if Bk is included, then there is also uk, the control vector, representing the controlling input into the control-input model. The Kalman filter model assumes the true state at time k is evolved from the state at (k − 1) according to xk=Fkxk−1+Bkuk+wk where Fk is the state transition model which is applied to the previous state xk−1; Bk is the control-input model which is applied to the control vector uk; wk is the process noise, which is assumed to be drawn from a zero-mean multivariate normal distribution, N, with covariance Qk: wk∼N(0,Qk). At time k an observation (or measurement) zk of the true state xk is made according to zk=Hkxk+vk where Hk is the observation model, which maps the true state space into the observed space, and vk is the observation noise, which is assumed to be zero-mean Gaussian white noise with covariance Rk: vk∼N(0,Rk). The initial state and the noise vectors at each step {x0, w1, ..., wk, v1, ..., vk} are all assumed to be mutually independent. Underlying dynamic system model: Many real-time dynamic systems do not exactly conform to this model. In fact, unmodeled dynamics can seriously degrade the filter performance, even when it was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodeled dynamics depends on the input, and, therefore, can bring the estimation algorithm to instability (it diverges). On the other hand, independent white noise signals will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodeled dynamics is a difficult one and is treated as a problem of control theory using robust control. Details: The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation x^n∣m represents the estimate of x at time n given observations up to and including at time m ≤ n. Details: The state of the filter is represented by two variables: x^k∣k , the a posteriori state estimate mean at time k given observations up to and including at time k; Pk∣k , the a posteriori estimate covariance matrix (a measure of the estimated accuracy of the state estimate). The algorithm structure of the Kalman filter resembles that of the alpha beta filter. The Kalman filter can be written as a single equation; however, it is most often conceptualized as two distinct phases: "Predict" and "Update". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the a priori state estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, the innovation (the pre-fit residual), i.e. the difference between the current a priori prediction and the current observation information, is multiplied by the optimal Kalman gain and combined with the previous state estimate to refine the state estimate.
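To make the two phases concrete, a minimal Python sketch of one predict–update cycle follows (assuming NumPy; the helper names kf_predict and kf_update are hypothetical, but the formulas are the ones stated in this section):

```python
import numpy as np

def kf_predict(x, P, F, Q, B=None, u=None):
    """Prediction phase: propagate the state estimate and covariance."""
    x = F @ x + (B @ u if B is not None else 0)
    P = F @ P @ F.T + Q
    return x, P                        # a priori x_{k|k-1}, P_{k|k-1}

def kf_update(x, P, z, H, R):
    """Update phase: fold measurement z into the a priori estimate."""
    y = z - H @ x                      # innovation (pre-fit residual)
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # optimal Kalman gain
    x = x + K @ y                      # a posteriori state estimate
    P = (np.eye(len(x)) - K @ H) @ P   # a posteriori covariance
    return x, P
```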
The improved estimate obtained in the update phase, incorporating the current observation, is termed the a posteriori state estimate. Details: Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation. However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction procedures performed. Likewise, if multiple independent observations are available at the same time, multiple update procedures may be performed (typically with different observation matrices Hk). Details: Predict: the predicted (a priori) state estimate is x^k∣k−1=Fkx^k−1∣k−1+Bkuk and the predicted (a priori) estimate covariance is Pk∣k−1=FkPk−1∣k−1FkT+Qk. Update: the innovation (pre-fit residual) is y~k=zk−Hkx^k∣k−1, the innovation covariance is Sk=HkPk∣k−1HkT+Rk, the optimal Kalman gain is Kk=Pk∣k−1HkTSk−1, the updated (a posteriori) state estimate is x^k∣k=x^k∣k−1+Kky~k, and the updated (a posteriori) estimate covariance is Pk∣k=(I−KkHk)Pk∣k−1. The formula for the updated (a posteriori) estimate covariance above is valid for the optimal gain Kk that minimizes the residual error, in which form it is most widely used in applications. Proof of the formulae is found in the derivations section, where the formula valid for any Kk is also shown. A more intuitive way to express the updated state estimate ( x^k∣k ) is: x^k∣k=(I−KkHk)x^k∣k−1+Kkzk. This expression is reminiscent of a linear interpolation, x=(1−t)a+tb for t in [0,1]. In our case: t is the Kalman gain ( Kk ), a matrix that takes values from 0 (high error in the sensor) to I (low error); a is the value estimated from the model; b is the value from the measurement. This expression also resembles the alpha beta filter update step. Invariants: If the model is accurate, and the values for x^0∣0 and P0∣0 accurately reflect the distribution of the initial state values, then the following invariants are preserved: E[xk−x^k∣k]=E[xk−x^k∣k−1]=0 and E[y~k]=0, where E[ξ] is the expected value of ξ. That is, all estimates have a mean error of zero. Also: cov(xk−x^k∣k)=Pk∣k, cov(xk−x^k∣k−1)=Pk∣k−1, and cov(y~k)=Sk, so the covariance matrices accurately reflect the covariance of the estimates. Details: Estimation of the noise covariances Qk and Rk: Practical implementation of a Kalman filter is often difficult due to the difficulty of getting a good estimate of the noise covariance matrices Qk and Rk. Extensive research has been done to estimate these covariances from data. One practical method of doing this is the autocovariance least-squares (ALS) technique that uses the time-lagged autocovariances of routine operating data to estimate the covariances. The GNU Octave and Matlab code used to calculate the noise covariance matrices using the ALS technique is available online under the GNU General Public License. The Field Kalman Filter (FKF), a Bayesian algorithm that allows simultaneous estimation of the state, parameters, and noise covariance, has been proposed. The FKF algorithm has a recursive formulation, good observed convergence, and relatively low complexity, suggesting that it may be a worthwhile alternative to the autocovariance least-squares methods. Details: Optimality and performance: It follows from theory that the Kalman filter provides an optimal state estimation in cases where a) the model matches the real system perfectly, b) the entering noise is "white" (uncorrelated) and c) the covariances of the noise are known exactly. Correlated noise can also be treated using Kalman filters. Several methods for the noise covariance estimation have been proposed during past decades, including ALS, mentioned in the section above. After the covariances are estimated, it is useful to evaluate the performance of the filter; i.e., whether it is possible to improve the state estimation quality.
If the Kalman filter works optimally, the innovation sequence (the output prediction error) is white noise; the whiteness property of the innovations therefore measures filter performance. Several different methods can be used for this purpose. If the noise terms are distributed in a non-Gaussian manner, methods for assessing performance of the filter estimate, which use probability inequalities or large-sample theory, are known in the literature. Example application, technical: Consider a truck on frictionless, straight rails. Initially, the truck is stationary at position 0, but it is buffeted this way and that by random uncontrolled forces. We measure the position of the truck every Δt seconds, but these measurements are imprecise; we want to maintain a model of the truck's position and velocity. We show here how we derive the model from which we create our Kalman filter. Example application, technical: Since F, H, R, Q are constant, their time indices are dropped. The position and velocity of the truck are described by the linear state space xk=[x, x˙]T where x˙ is the velocity, that is, the derivative of position with respect to time. Example application, technical: We assume that between the (k − 1) and k timestep, uncontrolled forces cause a constant acceleration of ak that is normally distributed with mean 0 and standard deviation σa. From Newton's laws of motion we conclude that xk=Fxk−1+Gak (there is no Bu term since there are no known control inputs; instead, ak is the effect of an unknown input and G applies that effect to the state vector), where F=[[1, Δt], [0, 1]] (rows listed in order) and G=[Δt2/2, Δt]T, so that xk=Fxk−1+wk where wk∼N(0,Q) and Q=GGTσa2=[[Δt4/4, Δt3/2], [Δt3/2, Δt2]]σa2. Example application, technical: The matrix Q is not full rank (it is of rank one if Δt≠0). Hence, the distribution N(0,Q) is not absolutely continuous and has no probability density function. Another way to express this, avoiding explicit degenerate distributions, is given by wk∼G⋅N(0,σa2). At each time step, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise vk is also distributed normally, with mean 0 and standard deviation σz. Example application, technical: zk=Hxk+vk where H=[1, 0] and R=E[vkvkT]=[σz2]. We know the initial state of the truck with perfect precision, so we initialize x^0∣0=[0, 0]T and, to tell the filter that we know the exact position and velocity, we give it a zero covariance matrix: P0∣0=[[0, 0], [0, 0]]. If the initial position and velocity are not known perfectly, the covariance matrix should be initialized with suitable variances on its diagonal: P0∣0=[[σx2, 0], [0, σx˙2]]. The filter will then prefer the information from the first measurements over the information already in the model. Asymptotic form: For simplicity, assume that the control input uk=0. Then the Kalman filter may be written: x^k∣k=Fkx^k−1∣k−1+Kk[zk−HkFkx^k−1∣k−1]. A similar equation holds if we include a non-zero control input. Gain matrices Kk evolve independently of the measurements zk. From above, the four equations needed for updating the Kalman gain are as follows: Pk∣k−1=FkPk−1∣k−1FkT+Qk, Sk=HkPk∣k−1HkT+Rk, Kk=Pk∣k−1HkTSk−1, Pk|k=(I−KkHk)Pk|k−1. Asymptotic form: Since the gain matrices depend only on the model, and not on the measurements, they may be computed offline. Convergence of the gain matrices Kk to an asymptotic matrix K∞ holds under conditions established in Walrand and Dimakis. Simulations establish the number of steps to convergence. For the moving truck example described above, with Δt=1
and σa2=σz2=σx2=σx˙2=1, simulation shows convergence in 10 iterations. Asymptotic form: Using the asymptotic gain, and assuming Hk and Fk are independent of k, the Kalman filter becomes a linear time-invariant filter: x^k=Fx^k−1+K∞[zk−HFx^k−1]. The asymptotic gain K∞, if it exists, can be computed by first solving the following discrete Riccati equation for the asymptotic state covariance P∞: P∞=F(P∞−P∞HT(HP∞HT+R)−1HP∞)FT+Q. The asymptotic gain is then computed as before: K∞=P∞HT(R+HP∞HT)−1. Additionally, a form of the asymptotic Kalman filter more commonly used in control theory is given by x^k+1=Fx^k+Buk+K¯∞[zk−Hx^k], where K¯∞=FP∞HT(R+HP∞HT)−1. This leads to an estimator of the form x^k+1=(F−K¯∞H)x^k+Buk+K¯∞zk. Derivations: The Kalman filter can be derived as a generalized least squares method operating on previous data. Derivations: Deriving the a posteriori estimate covariance matrix: Starting with our invariant on the error covariance Pk∣k as above, Pk∣k=cov(xk−x^k∣k); substituting in the definition of x^k∣k gives Pk∣k=cov[xk−(x^k∣k−1+Kky~k)]; substituting y~k gives Pk∣k=cov(xk−[x^k∣k−1+Kk(zk−Hkx^k∣k−1)]); and substituting zk gives Pk∣k=cov(xk−[x^k∣k−1+Kk(Hkxk+vk−Hkx^k∣k−1)]). By collecting the error vectors we get Pk∣k=cov[(I−KkHk)(xk−x^k∣k−1)−Kkvk]. Since the measurement error vk is uncorrelated with the other terms, this becomes Pk∣k=cov[(I−KkHk)(xk−x^k∣k−1)]+cov[Kkvk]; by the properties of vector covariance this becomes Pk∣k=(I−KkHk)cov(xk−x^k∣k−1)(I−KkHk)T+Kkcov(vk)KkT, which, using our invariant on Pk∣k−1 and the definition of Rk, becomes Pk∣k=(I−KkHk)Pk∣k−1(I−KkHk)T+KkRkKkT. This formula (sometimes known as the Joseph form of the covariance update equation) is valid for any value of Kk. It turns out that if Kk is the optimal Kalman gain, this can be simplified further as shown below. Derivations: Kalman gain derivation: The Kalman filter is a minimum mean-square error estimator. The error in the a posteriori state estimation is xk−x^k∣k. We seek to minimize the expected value of the square of the magnitude of this vector, E[‖xk−x^k|k‖2]. This is equivalent to minimizing the trace of the a posteriori estimate covariance matrix Pk|k. By expanding out the terms in the equation above and collecting, we get: Pk∣k=Pk∣k−1−KkHkPk∣k−1−Pk∣k−1HkTKkT+Kk(HkPk∣k−1HkT+Rk)KkT=Pk∣k−1−KkHkPk∣k−1−Pk∣k−1HkTKkT+KkSkKkT. The trace is minimized when its matrix derivative with respect to the gain matrix is zero. Using the gradient matrix rules and the symmetry of the matrices involved, we find that ∂tr(Pk∣k)/∂Kk=−2(HkPk∣k−1)T+2KkSk=0. Derivations: Solving this for Kk yields the Kalman gain: KkSk=(HkPk∣k−1)T=Pk∣k−1HkT ⇒ Kk=Pk∣k−1HkTSk−1. This gain, which is known as the optimal Kalman gain, is the one that yields MMSE estimates when used. Derivations: Simplification of the a posteriori error covariance formula: The formula used to calculate the a posteriori error covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the right by SkKkT, it follows that KkSkKkT=Pk∣k−1HkTKkT. Referring back to our expanded formula for the a posteriori error covariance, Pk∣k=Pk∣k−1−KkHkPk∣k−1−Pk∣k−1HkTKkT+KkSkKkT, we find the last two terms cancel out, giving Pk∣k=Pk∣k−1−KkHkPk∣k−1=(I−KkHk)Pk∣k−1. This formula is computationally cheaper and thus nearly always used in practice, but is only correct for the optimal gain. If arithmetic precision is unusually low, causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the a posteriori error covariance formula as derived above (Joseph form) must be used.
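As an illustration of this point, here is a minimal Python sketch contrasting the Joseph-form update with the simplified one (assuming NumPy; the function names are hypothetical):

```python
import numpy as np

def covariance_update_joseph(P_prior, K, H, R):
    """Joseph form: valid for ANY gain K, and numerically safer because
    the symmetric 'sandwich' structure keeps P symmetric and non-negative."""
    A = np.eye(P_prior.shape[0]) - K @ H
    return A @ P_prior @ A.T + K @ R @ K.T

def covariance_update_simple(P_prior, K, H):
    """Simplified form: cheaper, but correct only for the optimal gain."""
    return (np.eye(P_prior.shape[0]) - K @ H) @ P_prior
```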
Sensitivity analysis: The Kalman filtering equations provide an estimate of the state x^k∣k and its error covariance Pk∣k recursively. The estimate and its quality depend on the system parameters and the noise statistics fed as inputs to the estimator. This section analyzes the effect of uncertainties in the statistical inputs to the filter. In the absence of reliable statistics or the true values of the noise covariance matrices Qk and Rk, the expression Pk∣k=(I−KkHk)Pk∣k−1(I−KkHk)T+KkRkKkT no longer provides the actual error covariance. In other words, Pk∣k≠E[(xk−x^k∣k)(xk−x^k∣k)T]. In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual (true) noise covariance matrices. This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances as well as the system matrices Fk and Hk that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs. Sensitivity analysis: This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted by Qka and Rka respectively, whereas the design values used in the estimator are Qk and Rk respectively. The actual error covariance is denoted by Pk∣ka, and Pk∣k as computed by the Kalman filter is referred to as the Riccati variable. When Qk≡Qka and Rk≡Rka, this means that Pk∣k=Pk∣ka. Computing the actual error covariance using Pk∣ka=E[(xk−x^k∣k)(xk−x^k∣k)T], substituting for x^k∣k, and using the facts that E[wkwkT]=Qka and E[vkvkT]=Rka, results in the following recursive equations for Pk∣ka: Pk∣k−1a=FkPk−1∣k−1aFkT+Qka and Pk∣ka=(I−KkHk)Pk∣k−1a(I−KkHk)T+KkRkaKkT. While computing Pk∣k, the filter by design implicitly assumes that E[wkwkT]=Qk and E[vkvkT]=Rk. The recursive expressions for Pk∣ka and Pk∣k are identical except for the presence of Qka and Rka in place of the design values Qk and Rk respectively. Research has been done to analyze the robustness of Kalman filter systems. Square root form: One problem with the Kalman filter is its numerical stability. If the process noise covariance Qk is small, round-off error often causes a small positive eigenvalue of the state covariance matrix P to be computed as a negative number. This renders the numerical representation of P indefinite, while its true form is positive-definite. Square root form: Positive definite matrices have the property that they have a triangular matrix square root P = S·ST. This can be computed efficiently using the Cholesky factorization algorithm, but more importantly, if the covariance is kept in this form, it can never have a negative diagonal or become asymmetric. An equivalent form, which avoids many of the square root operations required by the matrix square root yet preserves the desirable numerical properties, is the U-D decomposition form, P = U·D·UT, where U is a unit triangular matrix (with unit diagonal), and D is a diagonal matrix. Square root form: Between the two, the U-D factorization uses the same amount of storage, and somewhat less computation, and is the most commonly used square root form. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions, while on 21st-century computers they are only slightly more expensive.)
Efficient algorithms for the Kalman prediction and update steps in the square root form were developed by G. J. Bierman and C. L. Thornton. The L·D·LT decomposition of the innovation covariance matrix Sk is the basis for another type of numerically efficient and robust square root filter. The algorithm starts with the LU decomposition as implemented in the Linear Algebra PACKage (LAPACK). These results are further factored into the L·D·LT structure with methods given by Golub and Van Loan (algorithm 4.1.2) for a symmetric nonsingular matrix. Any singular covariance matrix is pivoted so that the first diagonal partition is nonsingular and well-conditioned. The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state-variables Hk·xk|k-1 that are associated with auxiliary observations in yk. The L·D·LT square-root filter requires orthogonalization of the observation vector. This may be done with the inverse square-root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263). Parallel form: The Kalman filter is efficient for sequential data processing on central processing units (CPUs), but in its original form it is inefficient on parallel architectures such as graphics processing units (GPUs). It is, however, possible to express the filter-update routine in terms of an associative operator using the formulation in Särkkä (2021). The filter solution can then be retrieved by the use of a prefix sum algorithm, which can be efficiently implemented on GPU. This reduces the computational complexity from O(N) in the number of time steps to O(log N). Relationship to recursive Bayesian estimation: The Kalman filter can be presented as one of the simplest dynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model. Similarly, recursive Bayesian estimation calculates estimates of an unknown probability density function (PDF) recursively over time using incoming measurements and a mathematical process model. In recursive Bayesian estimation, the true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model (HMM). Relationship to recursive Bayesian estimation: Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state: p(xk∣x0,…,xk−1)=p(xk∣xk−1). Similarly, the measurement at the k-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state: p(zk∣x0,…,xk)=p(zk∣xk). Using these assumptions, the probability distribution over all states of the hidden Markov model can be written simply as: p(x0,…,xk,z1,…,zk)=p(x0)∏i=1kp(zi∣xi)p(xi∣xi−1). However, when a Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set. Relationship to recursive Bayesian estimation: This results in the predict and update phases of the Kalman filter written probabilistically.
The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (k − 1)-th timestep to the k-th and the probability distribution associated with the previous state, over all possible xk−1: p(xk∣Zk−1)=∫p(xk∣xk−1)p(xk−1∣Zk−1)dxk−1. The measurement set up to time t is Zt={z1,…,zt}. The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state distribution. Relationship to recursive Bayesian estimation: p(xk∣Zk)=p(zk∣xk)p(xk∣Zk−1)/p(zk∣Zk−1). The denominator p(zk∣Zk−1)=∫p(zk∣xk)p(xk∣Zk−1)dxk is a normalization term. The remaining probability density functions are p(xk∣xk−1)=N(Fkxk−1,Qk), p(zk∣xk)=N(Hkxk,Rk), and p(xk−1∣Zk−1)=N(x^k−1,Pk−1). The PDF at the previous timestep is assumed inductively to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements, therefore the PDF for xk given the measurements Zk is the Kalman filter estimate. Marginal likelihood: Related to the recursive Bayesian interpretation described above, the Kalman filter can be viewed as a generative model, i.e., a process for generating a stream of random observations z = (z0, z1, z2, ...). Specifically, the process is as follows: first, sample a hidden state x0 from the Gaussian prior distribution p(x0)=N(x^0∣0,P0∣0) and sample an observation z0 from the observation model p(z0∣x0)=N(H0x0,R0); then, for k=1,2,3,…, sample the next hidden state xk from the transition model p(xk∣xk−1)=N(Fkxk−1+Bkuk,Qk) and sample an observation zk from the observation model p(zk∣xk)=N(Hkxk,Rk). Marginal likelihood: This process has identical structure to the hidden Markov model, except that the discrete state and observations are replaced with continuous variables sampled from Gaussian distributions. Marginal likelihood: In some applications, it is useful to compute the probability that a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as the marginal likelihood because it integrates over ("marginalizes out") the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models using Bayesian model comparison. Marginal likelihood: It is straightforward to compute the marginal likelihood as a side effect of the recursive filtering computation. By the chain rule, the likelihood can be factored as the product of the probability of each observation given previous observations, p(z)=∏k=0Tp(zk∣zk−1,…,z0), and because the Kalman filter describes a Markov process, all relevant information from previous observations is contained in the current state estimate x^k∣k−1,Pk∣k−1. Marginal likelihood: Thus the marginal likelihood is given by p(z)=∏k=0T∫p(zk∣xk)p(xk∣zk−1,…,z0)dxk=∏k=0T∫N(zk;Hkxk,Rk)N(xk;x^k∣k−1,Pk∣k−1)dxk=∏k=0TN(zk;Hkx^k∣k−1,Rk+HkPk∣k−1HkT)=∏k=0TN(zk;Hkx^k∣k−1,Sk), i.e., a product of Gaussian densities, each corresponding to the density of one observation zk under the current filtering distribution Hkx^k∣k−1,Sk. This can easily be computed as a simple recursive update; however, to avoid numeric underflow, in a practical implementation it is usually desirable to compute the log marginal likelihood log p(z) instead.
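Before turning to that computation, note that the generative process enumerated above translates directly into code; a minimal sketch (assuming NumPy and, for brevity, time-invariant F, H, Q, R with no control input, which are hypothetical simplifications, not requirements of the model):

```python
import numpy as np

def sample_lgssm(F, H, Q, R, x0_mean, P0, T, rng=np.random.default_rng()):
    """Draw one state trajectory (x_0..x_T) and observation stream
    (z_0..z_T) from the linear-Gaussian state-space model above."""
    xs, zs = [], []
    x = rng.multivariate_normal(x0_mean, P0)   # x0 ~ N(x^0|0, P0|0)
    for _ in range(T + 1):
        z = rng.multivariate_normal(H @ x, R)  # z_k ~ N(H x_k, R)
        xs.append(x)
        zs.append(z)
        x = rng.multivariate_normal(F @ x, Q)  # x_{k+1} ~ N(F x_k, Q)
    return np.array(xs), np.array(zs)
```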
Adopting the convention ℓ(−1)=0, the log marginal likelihood ℓ(k) of the observations up to step k can be computed via the recursive update rule ℓ(k)=ℓ(k−1)−(1/2)(y~kTSk−1y~k+log|Sk|+dy log 2π), where dy is the dimension of the measurement vector. An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking. For example, consider an object tracking scenario where a stream of observations is the input; however, it is unknown how many objects are in the scene (or, the number of objects is known but is greater than one). In such a scenario, it can be unknown a priori which observations/measurements were generated by which object. A multiple hypothesis tracker (MHT) will typically form different track association hypotheses, where each hypothesis can be considered as a Kalman filter (in the linear Gaussian case) with a specific set of parameters associated with the hypothesized object. Thus, it is important to compute the likelihood of the observations for the different hypotheses under consideration, such that the most likely one can be found. Information filter: In cases where the dimension of the observation vector y is larger than the dimension of the state space vector x, the information filter can avoid the inversion of a bigger matrix in the Kalman gain calculation at the price of inverting a smaller matrix in the prediction step, thus saving computing time. In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by the information matrix and information vector respectively. These are defined as: Yk∣k=(Pk∣k)−1 and y^k∣k=(Pk∣k)−1x^k∣k. Similarly the predicted covariance and state have equivalent information forms, defined as: Yk∣k−1=(Pk∣k−1)−1 and y^k∣k−1=(Pk∣k−1)−1x^k∣k−1, as have the measurement covariance and measurement vector, which are defined as: Ik=HkTRk−1Hk and ik=HkTRk−1zk. The information update now becomes a trivial sum. Information filter: Yk∣k=Yk∣k−1+Ik and y^k∣k=y^k∣k−1+ik. The main advantage of the information filter is that N measurements can be filtered at each time step simply by summing their information matrices and vectors: Yk∣k=Yk∣k−1+∑j=1NIk,j and y^k∣k=y^k∣k−1+∑j=1Nik,j. To predict, the information filter's information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used: Mk=[Fk−1]TYk−1∣k−1Fk−1, Ck=Mk[Mk+Qk−1]−1, Lk=I−Ck, Yk∣k−1=LkMk+CkQk−1CkT, y^k∣k−1=Lk[Fk−1]Ty^k−1∣k−1. Fixed-lag smoother: The optimal fixed-lag smoother provides the optimal estimate of x^k−N∣k for a given fixed lag N using the measurements from z1 to zk.
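Before turning to the smoother, note that the log-likelihood recursion above folds directly into the filter loop; a minimal Python sketch (assuming NumPy; y is the innovation and S its covariance from the update step, in the notation above):

```python
import numpy as np

def loglik_increment(y, S):
    """One term of the recursive log marginal likelihood update:
    -0.5 * (y^T S^{-1} y + log|S| + d_y * log(2*pi))."""
    d_y = y.shape[0]
    _, logdet = np.linalg.slogdet(S)  # numerically stable log-determinant
    quad = y @ np.linalg.solve(S, y)  # y^T S^{-1} y without inverting S
    return -0.5 * (quad + logdet + d_y * np.log(2 * np.pi))

# Accumulate over the filter pass: ell += loglik_increment(y_k, S_k)
```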
The fixed-lag smoother can be derived using the previous theory via an augmented state, and the main equation of the filter is the following: $$\begin{bmatrix} \hat{x}_{t|t} \\ \hat{x}_{t-1|t} \\ \vdots \\ \hat{x}_{t-N+1|t} \end{bmatrix} = \begin{bmatrix} I \\ 0 \\ \vdots \\ 0 \end{bmatrix} \hat{x}_{t|t-1} + \begin{bmatrix} 0 & \cdots & 0 \\ I & 0 & \vdots \\ \vdots & \ddots & \vdots \\ 0 & \cdots & I \end{bmatrix} \begin{bmatrix} \hat{x}_{t-1|t-1} \\ \hat{x}_{t-2|t-1} \\ \vdots \\ \hat{x}_{t-N+1|t-1} \end{bmatrix} + \begin{bmatrix} K^{(0)} \\ K^{(1)} \\ \vdots \\ K^{(N-1)} \end{bmatrix} y_{t|t-1}$$ where: x^t∣t−1 is estimated via a standard Kalman filter; yt∣t−1=zt−Hx^t∣t−1 is the innovation produced considering the estimate of the standard Kalman filter; the various x^t−i∣t with i=1,…,N−1 are new variables, i.e., they do not appear in the standard Kalman filter; and the gains are computed via the following scheme: K(i+1)=P(i)HT[HPHT+R]−1 and P(i)=P[(F−KH)T]i, where P and K are the prediction error covariance and the gains of the standard Kalman filter (i.e., Pt∣t−1 ). If the estimation error covariance is defined so that Pi:=E[(xt−i−x^t−i∣t)∗(xt−i−x^t−i∣t)∣z1…zt], then we have that the improvement on the estimation of xt−i is given by: P−Pi=∑j=0i[P(j)HT(HPHT+R)−1H(P(j))T]. Fixed-interval smoothers: The optimal fixed-interval smoother provides the optimal estimate of x^k∣n (k<n) using the measurements from a fixed interval z1 to zn. This is also called "Kalman smoothing". There are several smoothing algorithms in common use. Rauch–Tung–Striebel: The Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed-interval smoothing. The forward pass is the same as the regular Kalman filter algorithm. The filtered a priori and a posteriori state estimates x^k∣k−1, x^k∣k and covariances Pk∣k−1, Pk∣k are saved for use in the backward pass (for retrodiction). In the backward pass, we compute the smoothed state estimates x^k∣n and covariances Pk∣n. We start at the last time step and proceed backward in time using the following recursive equations: x^k∣n=x^k∣k+Ck(x^k+1∣n−x^k+1∣k) and Pk∣n=Pk∣k+Ck(Pk+1∣n−Pk+1∣k)CkT, where Ck=Pk∣kFk+1TPk+1∣k−1. Here x^k∣k is the a posteriori state estimate of timestep k and x^k+1∣k is the a priori state estimate of timestep k+1. The same notation applies to the covariance. Fixed-interval smoothers: Modified Bryson–Frazier smoother: An alternative to the RTS algorithm is the modified Bryson–Frazier (MBF) fixed-interval smoother developed by Bierman. This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive computation of data which are used at each observation time to compute the smoothed state and covariance. Fixed-interval smoothers: The recursive equations are Λ~k=HkTSk−1Hk+C^kTΛ^kC^k, Λ^k−1=FkTΛ~kFk, Λ^n=0, λ~k=−HkTSk−1yk+C^kTλ^k, λ^k−1=FkTλ~k, λ^n=0, where Sk is the residual covariance and C^k=I−KkHk. The smoothed state and covariance can then be found by substitution in the equations Pk∣n=Pk∣k−Pk∣kΛ^kPk∣k and xk∣n=xk∣k−Pk∣kλ^k, or Pk∣n=Pk∣k−1−Pk∣k−1Λ~kPk∣k−1 and xk∣n=xk∣k−1−Pk∣k−1λ~k. An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix. Minimum-variance smoother: The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear, their parameters and the noise statistics are known precisely. This smoother is a time-varying state-space generalization of the optimal non-causal Wiener filter. Fixed-interval smoothers: The smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by x^k+1∣k=(Fk−KkHk)x^k∣k−1+Kkzk and αk=−Sk−1/2Hkx^k∣k−1+Sk−1/2zk. The above system is known as the inverse Wiener–Hopf factor. The backward recursion is the adjoint of the above forward system.
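Stepping back to the RTS recursions given above, the backward pass is a short loop; a minimal Python sketch (assuming NumPy, a time-invariant F, and forward-pass arrays saved as indicated above; the names are hypothetical):

```python
import numpy as np

def rts_backward(x_filt, P_filt, x_pred, P_pred, F):
    """RTS backward pass. x_filt[k] = x^_{k|k}, P_filt[k] = P_{k|k},
    x_pred[k] = x^_{k|k-1}, P_pred[k] = P_{k|k-1} from the forward pass."""
    n = len(x_filt)
    x_smooth, P_smooth = x_filt.copy(), P_filt.copy()
    for k in range(n - 2, -1, -1):
        C = P_filt[k] @ F.T @ np.linalg.inv(P_pred[k + 1])  # smoother gain
        x_smooth[k] = x_filt[k] + C @ (x_smooth[k + 1] - x_pred[k + 1])
        P_smooth[k] = P_filt[k] + C @ (P_smooth[k + 1] - P_pred[k + 1]) @ C.T
    return x_smooth, P_smooth
```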
Returning to the minimum-variance smoother: the result of the backward pass βk may be calculated by operating the forward equations on the time-reversed αk and time reversing the result. In the case of output estimation, the smoothed estimate is given by y^k∣N=zk−Rkβk. Taking the causal part of this minimum-variance smoother yields y^k∣k=zk−RkSk−1/2αk, which is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly. Fixed-interval smoothers: A continuous-time version of the above smoother is described in the literature. Expectation–maximization algorithms may be employed to calculate approximate maximum likelihood estimates of unknown state-space parameters within minimum-variance filters and smoothers. Often uncertainties remain within problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive definite term to the Riccati equation. In cases where the models are nonlinear, step-wise linearizations may be used within the minimum-variance filter and smoother recursions (extended Kalman filtering). Frequency-weighted Kalman filters: Pioneering research on the perception of sounds at different frequencies was conducted by Fletcher and Munson in the 1930s. Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss. Frequency weightings have since been used within filter and controller designs to manage performance within bands of interest. Frequency-weighted Kalman filters: Typically, a frequency shaping function is used to weight the average power of the error spectral density in a specified frequency band. Let y−y^ denote the output estimation error exhibited by a conventional Kalman filter. Also, let W denote a causal frequency weighting transfer function. The optimum solution which minimizes the variance of W(y−y^) arises by simply constructing W−1y^. The design of W remains an open question. One way of proceeding is to identify a system which generates the estimation error and set W equal to the inverse of that system. This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order. The same technique can be applied to smoothers. Nonlinear filters: The basic Kalman filter is limited to a linear assumption. More complex systems, however, can be nonlinear. The nonlinearity can be associated either with the process model or with the observation model or with both. The most common variants of Kalman filters for nonlinear systems are the extended Kalman filter and the unscented Kalman filter. Which filter to use depends on the non-linearity indices of the process and observation models. Extended Kalman filter: In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be nonlinear functions. These functions must be differentiable: xk=f(xk−1,uk)+wk and zk=h(xk)+vk. The function f can be used to compute the predicted state from the previous estimate, and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the Jacobian) is computed.
At each timestep the Jacobian is evaluated with the current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate. Nonlinear filters: Unscented Kalman filter: When the state transition and observation models (that is, the predict and update functions f and h) are highly nonlinear, the extended Kalman filter can give particularly poor performance. This is because the covariance is propagated through linearization of the underlying nonlinear model. The unscented Kalman filter (UKF) uses a deterministic sampling technique known as the unscented transformation (UT) to pick a minimal set of sample points (called sigma points) around the mean. The sigma points are then propagated through the nonlinear functions, from which a new mean and covariance estimate are then formed. The resulting filter depends on how the transformed statistics of the UT are calculated and which set of sigma points is used. It should be remarked that it is always possible to construct new UKFs in a consistent way. For certain systems, the resulting UKF more accurately estimates the true mean and covariance. This can be verified with Monte Carlo sampling or Taylor series expansion of the posterior statistics. In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically), if not impossible (if those functions are not differentiable). Nonlinear filters: Sigma points: For a random vector x=(x1,…,xL), sigma points are any set of vectors {s0,…,sN}={(s0,1 s0,2 … s0,L),…,(sN,1 sN,2 … sN,L)} attributed with first-order weights W0a,…,WNa that fulfill ∑j=0NWja=1 and, for all i=1,…,L, E[xi]=∑j=0NWjasj,i; and second-order weights W0c,…,WNc that fulfill ∑j=0NWjc=1 and, for all pairs (i,l)∈{1,…,L}2, E[xixl]=∑j=0NWjcsj,isj,l. A simple choice of sigma points and weights for xk−1∣k−1 in the UKF algorithm is s0=x^k−1∣k−1 with −1<W0a=W0c<1; sj=x^k−1∣k−1+√(L/(1−W0))Aj for j=1,…,L; sL+j=x^k−1∣k−1−√(L/(1−W0))Aj for j=1,…,L; and Wja=Wjc=(1−W0)/(2L) for j=1,…,2L, where x^k−1∣k−1 is the mean estimate of xk−1∣k−1. The vector Aj is the jth column of A, where Pk−1∣k−1=AAT. Typically, A is obtained via Cholesky decomposition of Pk−1∣k−1. With some care the filter equations can be expressed in such a way that A is evaluated directly without intermediate calculations of Pk−1∣k−1. This is referred to as the square-root unscented Kalman filter. The weight of the mean value, W0, can be chosen arbitrarily. Nonlinear filters: Another popular parameterization (which generalizes the above) is s0=x^k−1∣k−1 with W0a=(α2κ−L)/(α2κ) and W0c=W0a+1−α2+β; sj=x^k−1∣k−1+α√κ Aj for j=1,…,L; sL+j=x^k−1∣k−1−α√κ Aj for j=1,…,L; and Wja=Wjc=1/(2α2κ) for j=1,…,2L. Nonlinear filters: α and κ control the spread of the sigma points, and β is related to the distribution of x. Appropriate values depend on the problem at hand, but a typical recommendation is α=10−3, κ=1, and β=2. However, a larger value of α (e.g., α=1) may be beneficial in order to better capture the spread of the distribution and possible nonlinearities. If the true distribution of x is Gaussian, β=2 is optimal. Nonlinear filters: Predict: As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa. Given estimates of the mean and covariance, x^k−1∣k−1 and Pk−1∣k−1, one obtains N=2L+1 sigma points as described in the section above.
The sigma points are propagated through the transition function f: xj=f(sj) for j=0,…,2L. The propagated sigma points are weighted to produce the predicted mean and covariance: x^k∣k−1=∑j=02LWjaxj and Pk∣k−1=∑j=02LWjc(xj−x^k∣k−1)(xj−x^k∣k−1)T+Qk, where Wja are the first-order weights of the original sigma points, and Wjc are the second-order weights. The matrix Qk is the covariance of the transition noise, wk. Update: Given prediction estimates x^k∣k−1 and Pk∣k−1, a new set of N=2L+1 sigma points s0,…,s2L with corresponding first-order weights W0a,…,W2La and second-order weights W0c,…,W2Lc is calculated. These sigma points are transformed through the measurement function h: zj=h(sj) for j=0,1,…,2L. Then the empirical mean and covariance of the transformed points are calculated. Nonlinear filters: z^=∑j=02LWjazj and S^k=∑j=02LWjc(zj−z^)(zj−z^)T+Rk, where Rk is the covariance matrix of the observation noise, vk. Additionally, the cross-covariance matrix is also needed: Cxz=∑j=02LWjc(xj−x^k|k−1)(zj−z^)T. The Kalman gain is Kk=CxzS^k−1. The updated mean and covariance estimates are x^k∣k=x^k|k−1+Kk(zk−z^) and Pk∣k=Pk∣k−1−KkS^kKkT. Discriminative Kalman filter: When the observation model p(zk∣xk) is highly non-linear and/or non-Gaussian, it may prove advantageous to apply Bayes' rule and estimate p(zk∣xk)≈p(xk∣zk)/p(xk), where p(xk∣zk)≈N(g(zk),Q(zk)) for nonlinear functions g, Q. This replaces the generative specification of the standard Kalman filter with a discriminative model for the latent states given observations. Under a stationary state model p(x1)=N(0,T), p(xk∣xk−1)=N(Fxk−1,C), where T=F T F⊺+C, if p(xk∣z1:k)≈N(x^k|k−1,Pk|k−1), then given a new observation zk, it follows that p(xk+1∣z1:k+1)≈N(x^k+1|k,Pk+1|k), where Mk+1=FPk|k−1F⊺+C, Pk+1|k=(Mk+1−1+Q(zk)−1−T−1)−1, and x^k+1|k=Pk+1|k(Mk+1−1Fx^k|k−1+Pk+1|k−1g(zk)). Note that this approximation requires Q(zk)−1−T−1 to be positive-definite; in the case that it is not, Pk+1|k=(Mk+1−1+Q(zk)−1)−1 is used instead. Such an approach proves particularly useful when the dimensionality of the observations is much greater than that of the latent states and can be used to build filters that are particularly robust to nonstationarities in the observation model. Adaptive Kalman filter: Adaptive Kalman filters allow adaptation to process dynamics which are not modeled in the process model F(t), which happens, for example, in the context of a maneuvering target when a constant-velocity (reduced order) Kalman filter is employed for tracking. Kalman–Bucy filter: Kalman–Bucy filtering (named for Richard Snowden Bucy) is a continuous-time version of Kalman filtering. It is based on the state space model d/dt x(t)=F(t)x(t)+B(t)u(t)+w(t), z(t)=H(t)x(t)+v(t), where Q(t) and R(t) represent the intensities (or, more accurately, the power spectral density (PSD) matrices) of the two white noise terms w(t) and v(t), respectively. Kalman–Bucy filter: The filter consists of two differential equations, one for the state estimate and one for the covariance: d/dt x^(t)=F(t)x^(t)+B(t)u(t)+K(t)(z(t)−H(t)x^(t)) and d/dt P(t)=F(t)P(t)+P(t)FT(t)+Q(t)−K(t)R(t)KT(t), where the Kalman gain is given by K(t)=P(t)HT(t)R−1(t). Note that in this expression for K(t) the covariance of the observation noise R(t) represents at the same time the covariance of the prediction error (or innovation) y~(t)=z(t)−H(t)x^(t); these covariances are equal only in the case of continuous time. The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time.
Kalman–Bucy filter: The second differential equation, for the covariance, is an example of a Riccati equation. Nonlinear generalizations to Kalman–Bucy filters include the continuous-time extended Kalman filter. Hybrid Kalman filter: Most physical systems are represented as continuous-time models, while discrete-time measurements are frequently taken for state estimation via a digital processor. Therefore, the system model and measurement model are given by x˙(t)=F(t)x(t)+B(t)u(t)+w(t), w(t)∼N(0,Q(t)), and zk=Hkxk+vk, vk∼N(0,Rk), where xk=x(tk). Initialize: x^0∣0=E[x(t0)], P0∣0=Var[x(t0)]. Predict: solve d/dt x^(t)=F(t)x^(t)+B(t)u(t) with x^(tk−1)=x^k−1∣k−1 to obtain x^k∣k−1=x^(tk), and solve d/dt P(t)=F(t)P(t)+P(t)FT(t)+Q(t) with P(tk−1)=Pk−1∣k−1 to obtain Pk∣k−1=P(tk). The prediction equations are derived from those of the continuous-time Kalman filter without update from measurements, i.e., K(t)=0. The predicted state and covariance are calculated respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step. Hybrid Kalman filter: For the case of linear time-invariant systems, the continuous-time dynamics can be exactly discretized into a discrete-time system using matrix exponentials. Update: Kk=Pk∣k−1HkT(HkPk∣k−1HkT+Rk)−1, x^k∣k=x^k∣k−1+Kk(zk−Hkx^k∣k−1), Pk∣k=(I−KkHk)Pk∣k−1. The update equations are identical to those of the discrete-time Kalman filter. Variants for the recovery of sparse signals: The traditional Kalman filter has also been employed for the recovery of sparse, possibly dynamic, signals from noisy observations. Recent works utilize notions from the theory of compressed sensing/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems. Relation to Gaussian processes: Since linear Gaussian state-space models lead to Gaussian processes, Kalman filters can be viewed as sequential solvers for Gaussian process regression.
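To illustrate the exact discretization mentioned above, here is a minimal Python sketch using the matrix exponential (assuming NumPy and SciPy; the Van Loan block construction for the discretized process noise is a standard technique and an assumption here, not something stated in the source):

```python
import numpy as np
from scipy.linalg import expm

def discretize_lti(F, Q, dt):
    """Exactly discretize continuous LTI dynamics xdot = F x + w,
    w ~ N(0, Q), over a step dt, via Van Loan's matrix-exponential method."""
    n = F.shape[0]
    # Exponentiate the block matrix [[-F, Q], [0, F^T]] * dt.
    M = np.block([[-F, Q], [np.zeros((n, n)), F.T]]) * dt
    E = expm(M)
    F_d = E[n:, n:].T      # discrete state-transition matrix e^{F dt}
    Q_d = F_d @ E[:n, n:]  # discrete process-noise covariance
    return F_d, Q_d
```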
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Focus mitt** Focus mitt: A focus mitt is a padded target attached to a glove, usually used in training boxers and other combat athletes. Focus mitt: The use of focus mitts is said to have come about as Muay Thai and Far Eastern martial arts made their way to the United States in the late 1700s. The concept first began with using foot tongs or slippers on the hands to absorb the impact from kicks and strikes. Modern-day punch mitts came into more widespread use in the mid-1960s, when Bruce Lee was seen using them in his training routines. Although they have been around for decades, they were never a central part of coaching until the late '70s and early '80s. Now they've become an almost irreplaceable part of a fighter's routine. Focus mitt: The person holding the focus mitts will typically call out combinations and "feed" the puncher good counter-force while maneuvering and working specific skills. Focus mitts are often used as an augment to sparring, with more explicit focus on the puncher than the feeder, especially to develop good punch combinations and defensive maneuvers such as "slipping," "bobbing" and "weaving." When wearing focus mitts it is important not merely to hold them but to actively "feed" them into the punches, to balance their force and prevent injury to both parties. Focus mitt: Similar to a focus mitt but designed for different purposes are the heavier Thai pads used in muay Thai boxing and MMA, as well as kicking shields, body shields and uppercut shields used in a variety of martial arts to help gauge distance and practice techniques with kicks, knees, elbows and uppercuts. Working with focus mitts: It is often said that holding focus mitts can be as taxing as striking them. Typically, the person wearing the focus mitts will yell a number that represents a combination. For example, yelling "one, two, three!" might signify that the striker should throw a jab, followed by a cross, followed by a hook in rapid succession. Defensive maneuvers are often incorporated into the combinations as well. Working with focus mitts: Improving punching technique relies upon the person wearing the focus mitts knowing where to set his/her hands, as well as knowing how to time the movement of the focus mitts. Typically the holder will comment on how the striker can improve his/her technique between combinations.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**The New Wittgenstein** The New Wittgenstein: The New Wittgenstein (2000) is a book containing a family of interpretations of the work of philosopher Ludwig Wittgenstein. In particular, those associated with this interpretation, such as Cora Diamond, Alice Crary, and James F. Conant, understand Wittgenstein to have avoided putting forth a "positive" metaphysical program, and understand him to be advocating philosophy as a form of "therapy." Under this interpretation, Wittgenstein's program is dominated by the idea that philosophical problems are symptoms of illusions or "bewitchments by language," and that attempts at a "narrow" solution to philosophical problems, which do not take into account larger questions of how the questioner conducts her life, interacts with other people, and uses language generally, are doomed to failure. Overview: According to the introduction to the anthology The New Wittgenstein (ISBN 0-415-17319-1): Wittgenstein's primary aim in philosophy is – to use a word he himself employs in characterizing his later philosophical procedures – a therapeutic one. These papers have in common an understanding of Wittgenstein as aspiring, not to advance metaphysical theories, but rather to help us work ourselves out of confusions we become entangled in when philosophizing. Overview: While many philosophers have suggested variants of such ideas in readings of the work of the later Wittgenstein, namely the author of the Philosophical Investigations, a notable aspect of the New Wittgenstein interpretation is the view that the work of the early Wittgenstein, exemplified by the Tractatus Logico-Philosophicus, and the Investigations are actually more deeply connected, and less opposed to each other, than usually understood. This view is in direct conflict with the long-standing, if somewhat old-fashioned, interpretation of the Tractatus Logico-Philosophicus advocated by the logical positivists associated with the Vienna Circle. Overview: The therapeutic approach of the New Wittgenstein scholars is not without critics: Hans-Johann Glock argues that the "plain nonsense" reading of the Tractatus "is at odds with the external evidence, writings and conversations in which Wittgenstein states that the Tractatus is committed to the idea of ineffable insight". There is no unitary "New Wittgenstein" interpretation, and proponents differ deeply amongst themselves. Philosophers often associated with the interpretation include a number of influential philosophers, mostly associated with (although sometimes antagonistic to) the traditions of analytic philosophy, including Stanley Cavell, James F. Conant, John McDowell, Matthew B. Ostrow, Thomas Ricketts, Warren Goldfarb, Hilary Putnam, Stephen Mulhall, Alice Crary, and Cora Diamond. Explicit critics of the "New Wittgenstein" interpretation include P. M. S. Hacker, Ian Proops and Genia Schönbaumsfeld.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Remainder** Remainder: In mathematics, the remainder is the amount "left over" after performing some computation. In arithmetic, the remainder is the integer "left over" after dividing one integer by another to produce an integer quotient (integer division). In the algebra of polynomials, the remainder is the polynomial "left over" after dividing one polynomial by another. The modulo operation is the operation that produces such a remainder when given a dividend and divisor. Remainder: Alternatively, a remainder is also what is left after subtracting one number from another, although this is more precisely called the difference. This usage can be found in some elementary textbooks; colloquially it is replaced by the expression "the rest", as in "Give me two dollars back and keep the rest." However, the term "remainder" is still used in this sense when a function is approximated by a series expansion, where the error expression ("the rest") is referred to as the remainder term. Integer division: Given an integer a and a non-zero integer d, it can be shown that there exist unique integers q and r such that a = qd + r and 0 ≤ r < |d|. The number q is called the quotient, while r is called the remainder. Integer division: (For a proof of this result, see Euclidean division. For algorithms describing how to calculate the remainder, see division algorithm.) The remainder, as defined above, is called the least positive remainder or simply the remainder. The integer a is either a multiple of d, or lies in the interval between consecutive multiples of d, namely q⋅d and (q + 1)d (for positive q). Integer division: On some occasions, it is convenient to carry out the division so that a is as close to an integral multiple of d as possible; that is, we can write a = k⋅d + s, with |s| ≤ |d/2|, for some integer k. In this case, s is called the least absolute remainder. As with the quotient and remainder, k and s are uniquely determined, except in the case where d = 2n and s = ±n. For this exception, we have a = k⋅d + n = (k + 1)d − n. A unique remainder can be obtained in this case by some convention, such as always taking the positive value of s. Integer division: Examples: In the division of 43 by 5, we have 43 = 8 × 5 + 3, so 3 is the least positive remainder. We also have 43 = 9 × 5 − 2, and −2 is the least absolute remainder. These definitions are also valid if d is negative; for example, in the division of 43 by −5, 43 = (−8) × (−5) + 3, and 3 is the least positive remainder, while 43 = (−9) × (−5) + (−2), and −2 is the least absolute remainder. In the division of 42 by 5, we have 42 = 8 × 5 + 2, and since 2 < 5/2, 2 is both the least positive remainder and the least absolute remainder. Integer division: In these examples, the (negative) least absolute remainder is obtained from the least positive remainder by subtracting 5, which is d. This holds in general: when dividing by d, either both remainders are positive and therefore equal, or they have opposite signs. If the positive remainder is r1 and the negative one is r2, then r1 = r2 + d. For floating-point numbers: When a and d are floating-point numbers, with d non-zero, a can be divided by d without remainder, with the quotient being another floating-point number. If the quotient is constrained to being an integer, however, the concept of remainder is still necessary. It can be proved that there exists a unique integer quotient q and a unique floating-point remainder r such that a = qd + r with 0 ≤ r < |d|.
For floating-point numbers: Extending the definition of remainder for floating-point numbers, as described above, is not of theoretical importance in mathematics; however, many programming languages implement this definition (see modulo operation). In programming languages: While there are no difficulties inherent in the definitions, there are implementation issues that arise when negative numbers are involved in calculating remainders. Different programming languages have adopted different conventions. For example: Pascal chooses the result of the mod operation to be positive, but does not allow d to be negative or zero (so, a = (a div d) × d + a mod d is not always valid). In programming languages: C99 chooses the remainder with the same sign as the dividend a. (Before C99, the C language allowed other choices.) Perl and Python (only modern versions) choose the remainder with the same sign as the divisor d. Haskell and Scheme offer two functions, remainder and modulo; Ada, Common Lisp and PL/I have mod and rem, while Fortran has mod and modulo; in each case, the former agrees in sign with the dividend, and the latter with the divisor. Polynomial division: Euclidean division of polynomials is very similar to Euclidean division of integers and leads to polynomial remainders. Its existence is based on the following theorem: Given two univariate polynomials a(x) and b(x) (where b(x) is a non-zero polynomial) defined over a field (in particular, the reals or complex numbers), there exist two polynomials q(x) (the quotient) and r(x) (the remainder) which satisfy a(x) = b(x)q(x) + r(x), where deg(r(x)) < deg(b(x)), and "deg(...)" denotes the degree of the polynomial (the degree of the constant polynomial whose value is always 0 can be defined to be negative, so that this degree condition is always valid when this is the remainder). Moreover, q(x) and r(x) are uniquely determined by these relations. Polynomial division: This differs from the Euclidean division of integers in that, for the integers, the degree condition is replaced by bounds on the remainder r (non-negative and less than the divisor), which ensures that r is unique. The similarity between Euclidean division for integers and that for polynomials motivates the search for the most general algebraic setting in which Euclidean division is valid. The rings for which such a theorem exists are called Euclidean domains, but in this generality uniqueness of the quotient and remainder is not guaranteed. Polynomial division leads to a result known as the polynomial remainder theorem: if a polynomial f(x) is divided by x − k, the remainder is the constant r = f(k).
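The differing conventions above are easy to check empirically. The following minimal sketch (in Python; the helper names are illustrative, not standard library functions) computes the least positive and least absolute remainders defined earlier, contrasts them with Python's built-in % operator (which follows the sign of the divisor, unlike C99's), and checks the polynomial remainder theorem on a sample cubic:

```python
def least_positive_remainder(a, d):
    """The unique r with a = q*d + r and 0 <= r < |d| (Euclidean division)."""
    # Python's % with a positive divisor already yields 0 <= r < |d|.
    return a % abs(d)

def least_absolute_remainder(a, d):
    """An s with a = k*d + s and |s| <= |d|/2, taking the positive s on ties."""
    r = least_positive_remainder(a, d)
    if r > abs(d) / 2:      # closer to the next multiple of d: shift down by |d|
        r -= abs(d)
    return r

# The worked examples from the text:
print(least_positive_remainder(43, 5))    # 3, since 43 = 8 * 5 + 3
print(least_absolute_remainder(43, 5))    # -2, since 43 = 9 * 5 - 2
print(least_positive_remainder(43, -5))   # 3, since 43 = (-8) * (-5) + 3
print(least_absolute_remainder(42, 5))    # 2: since 2 < 5/2 it is already least absolute

# Sign conventions: Python's % takes the sign of the divisor d,
# whereas C99's % takes the sign of the dividend a.
print(-7 % 3)   # 2 in Python; -7 % 3 evaluates to -1 in C99
print(7 % -3)   # -2 in Python; 7 % -3 evaluates to 1 in C99

# Polynomial remainder theorem: dividing f(x) by (x - k) leaves remainder f(k).
f = lambda x: x**3 - 2*x + 4
print(f(2))     # 8, the remainder of (x^3 - 2x + 4) / (x - 2)
```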
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Subsea valves** Subsea valves: Subsea valves are used to isolate or control the flow of material through an undersea pipeline (submarine pipeline) or other apparatus. Most commonly used in the transport of oil and gas, they are designed to function in a sub-marine environment, withstanding the effects of raised external pressure, salt-water corrosion, and bubbles or debris in the material carried. Subsea valves undergo stringent testing to ensure high reliability. Usage: Subsea valves are used in sub-marine environments, which can range in depth from shallow water (usually down to a depth of 75 meters) to deep water (down to 3500 meters). Various industries use subsea valves, with the oil and gas sector accounting for the majority, where there is a need to move material from, to, or below the seabed. Hazards to subsea valves: External environmental factors to be considered specifically for subsea valves include waterproofing, increased ambient pressure, and long-term corrosion from the high salt content of seawater. Internal factors to consider for subsea valves are related to the type of flow material (what passes through the valve apparatus). Typically in subsea environments the flows are either liquid- or gas-based, but depending on the location of the operation, the flow can contain a significant amount of sand and debris. This can present internal structural challenges. Hazards to subsea valves: One of the most challenging aspects of subsea valve deployment is cavitation. This occurs when liquid being pumped through various pieces of machinery, including the subsea valve, contains bubbles (or cavities). When the bubbles move through the system into areas of higher pressure they will collapse, and on moving into areas of lower pressure they will expand. This can have several negative effects, including an increase in noise and, more importantly, vibration, which may damage a number of machinery components, including the subsea valve, and in extreme cases cause total pump failure. Hazards to subsea valves: The pump may also undergo a reduction in capacity; pressure may not be maintained, potentially causing fracturing within the pump; and overall pump efficiency drops. Because the subsea valve is not easily accessible, it is of particular importance that it can function without hindrance, as replacement may be extremely costly. Subsea valve testing: To overcome the problems associated with sub-marine environments, subsea valves are required to pass a number of stringent tests. These may include (as applicable): gas testing according to API 6DSS / API 17D or API 6A (PSL 2, PSL 3, PSL 3G or PSL 4); performance verification testing according to API 6A PR2; hyperbaric testing; endurance testing according to API 6A; bending calculation and testing; and seismic testing.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Symmetrical inflation target** Symmetrical inflation target: A symmetrical inflation target is a requirement placed on a central bank to respond when inflation is too low as well as when inflation is too high. Symmetrical inflation target: For example, the Bank of England and the Bank of Canada have symmetrical inflation targets. Following the strategy review led by the new president Christine Lagarde and finalised in July 2021, the European Central Bank also adopted a symmetric inflation target of two per cent over the medium term, officially abandoning the asymmetric "below but close to two per cent" definition.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mixing (physics)** Mixing (physics): In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if lim_{k→∞} μ(T^{−k}A ∩ B) = μ(A)⋅μ(B) whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing. Mixing (physics): The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in the same proportion as elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal, description of mixing can be found in the article on mixing (mathematics). Mixing (physics): Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing. Physical mixing: The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense. Physical mixing: Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
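The defining limit can also be illustrated numerically. The sketch below is a minimal Monte Carlo check, assuming the doubling map T(x) = 2x mod 1 on [0, 1) with Lebesgue measure as the example system (a standard strong-mixing example chosen here for illustration; the sets A and B and the sample count are likewise arbitrary choices):

```python
import random

# Monte Carlo illustration of strong mixing for the doubling map
# T(x) = 2x mod 1 on [0, 1) with Lebesgue measure. We estimate
# mu(T^-k A ∩ B), which should approach mu(A)*mu(B) = 0.25 * 0.25 = 0.0625.

def T_k(x, k):
    """Apply the doubling map k times."""
    for _ in range(k):
        x = (2.0 * x) % 1.0
    return x

A = (0.00, 0.25)   # mu(A) = 0.25
B = (0.50, 0.75)   # mu(B) = 0.25
N = 200_000        # number of uniform samples

for k in (0, 1, 2, 5, 10):
    # x lies in T^-k A ∩ B  iff  x is in B and T^k(x) is in A.
    hits = 0
    for _ in range(N):
        x = random.random()          # a sample from Lebesgue measure on [0, 1)
        if B[0] <= x < B[1] and A[0] <= T_k(x, k) < A[1]:
            hits += 1
    print(f"k={k:2d}: mu(T^-k A ∩ B) ≈ {hits / N:.4f}")

# At k=0 the sets are disjoint, so the estimate starts at 0;
# as k grows it settles at mu(A)*mu(B) = 0.0625, as the definition requires.
```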
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Strophanthin** Strophanthin: Strophanthins are cardiac glycosides in plants of the genus Strophanthus. The singular may refer to g-Strophanthin (also known as ouabain) or k-Strophanthin.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rear-view mirror** Rear-view mirror: A rear-view mirror (or rearview mirror) is a mirror, usually flat, in automobiles and other vehicles, designed to allow the driver to see rearward through the vehicle's rear window (rear windshield). In cars, the rear-view mirror is usually affixed to the top of the windshield on a double-swivel mount, allowing it to be adjusted to suit the height and viewing angle of any driver and to swing harmlessly out of the way if impacted by a vehicle occupant in a collision. The rear-view mirror is augmented by one or more side-view mirrors, which serve as the only rear-vision mirrors on trucks, motorcycles and bicycles. History: Among the rear-view mirror's early uses is a mention by Dorothy Levitt in her 1909 book The Woman and the Car, which noted that women should "carry a little hand-mirror in a convenient place when driving" so they may "hold the mirror aloft from time to time in order to see behind while driving in traffic". However, earlier use is described in 1906 in a trade magazine, which noted that mirrors for showing what is coming behind were then popular on closed-bodied automobiles and likely to be widely adopted in a short time. The same year, a Mr. Henri Cain from France patented a "Warning mirror for automobiles". The Argus Dash Mirror, adjustable to any position to see the road behind, appeared in 1908. The earliest known rear-view mirror mounted on a racing vehicle appeared on Ray Harroun's Marmon race car at the inaugural Indianapolis 500 race in 1911. Harroun himself claimed he got the idea from seeing a mirror used for a similar purpose on a horse-drawn vehicle in 1904. Harroun also claimed that the mirror vibrated constantly due to the rough brick surface, and that it was rendered largely useless. Elmer Berger is usually credited with inventing the rear-view mirror, though in fact he was the first to patent it (1921) and to develop it for incorporation into production street-going automobiles through his Berger and Company. Augmentations and alternatives: Recently, rear-view video cameras have been built into many new-model cars, partially in response to the rear-view mirror's inability to show the road directly behind the car, as the rear deck or trunk can obscure as much as 3–5 meters (10–15 feet) of road behind the car. As many as 50 small children are killed by SUVs every year in the USA because the driver cannot see them in the rear-view mirrors. Camera systems are usually mounted to the rear bumper or lower parts of the car, allowing for better rear visibility. Augmentations and alternatives: Aftermarket secondary rear-view mirrors are available. They attach to the main rear-view mirror and are independently adjustable to view the back seat. This is useful to enable adults to monitor children in the back seat. Anti-glare: A prismatic rear-view mirror, sometimes called a "day/night mirror", can be tilted to reduce the brightness and glare of lights, mostly the high-beam headlights of vehicles behind, which would otherwise be reflected directly into the driver's eyes at night. This type of mirror is made of a piece of glass that is wedge-shaped in cross-section; its front and rear surfaces are not parallel. Anti-glare: On manual tilt versions, a tab is used to adjust the mirror between "day" and "night" positions. In the day view position, the front surface is tilted and the reflective back side gives a strong reflection. When the mirror is moved to the night view position, its reflecting rear surface is tilted out of line with the driver's view.
This view is actually a reflection off the low-reflection front surface; only a much-reduced amount of light is reflected into the driver's eyes. Anti-glare: "Manual tilt" day/night mirrors first began appearing in the 1930s and became standard equipment on most passenger cars and trucks by the early 1970s. Automatic dimming: In the 1940s, American inventor Jacob Rabinow developed a light-sensitive automatic mechanism for the wedge-type day/night mirror. Several Chrysler Corporation cars offered these automatic mirrors as optional equipment as early as 1959, but few customers ordered them for their cars and the item was soon withdrawn from the option lists. Several automakers began offering rear-view mirrors with automatic dimming again in 1983, and it was in the late 1980s that they began to catch on in popularity. Current systems usually use photosensors mounted in the rear-view mirror to detect light and dim the mirror by means of electrochromism. This electrochromic feature has also been incorporated into side-view mirrors, allowing them to dim and reduce glare as well. Suspending objects: Objects are sometimes hung from the rear-view mirror, including cross necklaces, prayer beads, good luck charms, decorations like fuzzy dice, and air fresheners like Little Trees. In some jurisdictions such hanging is illegal on the basis that it impairs the driver's forward view and so compromises safety. Black Lives Matter protesters have cited this as an example of the minor violations used as grounds for traffic stops disproportionately targeting black drivers. Trucks and buses: On trucks and buses, the load often blocks rearward vision out the backlight (rear window). In the U.S. virtually all trucks and buses have a side-view mirror on each side, often mounted on the doors and viewed out the side windows, which are used for rear vision. These mirrors leave a large unviewable ("blind") area behind the vehicle, which tapers down as the distance increases. This is a safety issue which the driver must compensate for, often with a person guiding the truck back in congested areas, or by backing in a curve. "Spot mirrors", convex mirrors which provide a distorted image of the entire side of the vehicle, are commonly mounted on at least the right side of a vehicle. In the U.S., mirrors are considered "safety equipment" and are not included in width restrictions. Motorcycles: Depending on the type of motorcycle, the motorcycle may or may not have rear-view mirrors. Street-legal motorcycles are generally required to have rear-view mirrors. Motorcycles for off-road use only normally do not have rear-view mirrors. Rear-view mirrors come in various shapes and designs, with various methods of mounting the mirrors to the motorcycle, most commonly to the handlebars. Rear-view mirrors can also be attached to the rider's motorcycle helmet. Bicycles: Some bicycles are equipped with a rear-view mirror mounted on a handlebar. Rear-view mirrors may also be fitted to the bicycle frame, to a helmet, to the arm, or to the frame of a pair of eyeglasses. This allows what is behind to be checked continuously without turning round. Rear-view mirrors almost never come with a new bicycle and require an additional purchase. Aircraft: In 1956, the Civil Aeronautics Administration proposed a rear-view mirror mounted right above the pilot to keep watch while private aircraft are landing or taxiing on the runway, to prevent collisions.
Fighter aircraft usually have one or more rear-view mirrors mounted on the front canopy frame to watch for chasing aircraft. Computer monitors: Some computer monitors are fitted with rear-view mirrors so the user can see whether anyone positioned behind them can view sensitive information, such as names and passwords, being keyed in or shown on the screen. These are used especially on automated teller machines and similar devices.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vagrant predicate** Vagrant predicate: Vagrant predicates are logical constructions that exhibit an inherent limit to conceptual knowledge. Such predicates can be used in general descriptions but are self-contradictory when applied to particulars. For instance, there are numbers which have never been mentioned, but no example can be given, as giving one would contradict the definition. Vagrant predicates have been proposed and studied by Nicholas Rescher. F is a vagrant predicate iff (∃u)Fu is true while nevertheless Fu0 is false for each and every specifically identified u0. When infinity is thought of as a number greater than any given number, a similar idea is conceived. However, vagrancy need not be monotonous, and it occurs also within bounds. Rescher has used vagrant predicates to address the vagueness problem.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mesylate** Mesylate: In organosulfur chemistry, a mesylate is any salt or ester of methanesulfonic acid (CH3SO3H). In salts, the mesylate is present as the CH3SO3− anion. When modifying the international nonproprietary name of a pharmaceutical substance containing the group or anion, the spelling used is sometimes mesilate (as in imatinib mesilate, the mesylate salt of imatinib). Mesylate esters are a group of organic compounds that share a common functional group with the general structure CH3SO2O−R, abbreviated MsO−R, where R is an organic substituent. Mesylate is considered a good leaving group in nucleophilic substitution reactions. Preparation: Mesylates are generally prepared by treating an alcohol with methanesulfonyl chloride in the presence of a base, such as triethylamine (shown schematically at the end of this article). Mesyl: Related to mesylate is the mesyl (Ms) or methanesulfonyl (CH3SO2) functional group. Methanesulfonyl chloride is often referred to as mesyl chloride. Whereas mesylates are often hydrolytically labile, mesyl groups, when attached to nitrogen, are resistant to hydrolysis. This functional group appears in a variety of medications, particularly cardiac (antiarrhythmic) drugs, as a sulfonamide moiety. Examples include sotalol, ibutilide, sematilide, dronedarone, dofetilide, E-4031, and bitopertin. Natural occurrence: Ice core samples from a single spot in Antarctica were found to contain tiny inclusions of magnesium methanesulfonate dodecahydrate. This natural phase is recognized as the mineral ernstburkeite. It is extremely rare.
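Schematically, the preparation described above can be summarized by a textbook-style equation (consistent with the text but not taken from it; Et3N is triethylamine and Et3N·HCl its hydrochloride by-product):

$$\mathrm{CH_3SO_2Cl} \;+\; \mathrm{ROH} \;\xrightarrow{\ \mathrm{Et_3N}\ }\; \mathrm{CH_3SO_2OR} \;+\; \mathrm{Et_3N{\cdot}HCl}$$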
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Infant swimming** Infant swimming: Infant swimming is the phenomenon of human babies and toddlers reflexively moving themselves through water and changing their rate of respiration and heart rate in response to being submerged. The slowing of heart rate and breathing is called the bradycardic response. It is not true that babies are born with the ability to swim, though they have primitive reflexes that make it look like they are. Newborns are not old enough to hold their breath intentionally or strong enough to keep their head above water, and cannot swim unassisted. Infant swimming: Most infants, though not all, will reflexively hold their breath when submerged to protect their airway and are able to survive immersion in water for short periods of time. Infants can also be taken to swimming lessons. Although this may be done to reduce their risk of drowning, the effects on drowning risk are not reliable. Babies can imitate swimming motions and reflexes, but are not yet physically capable of swimming. Infant swimming: A submersion of the head may last only a few seconds. A German physician pointed out the health risks of infant diving and its sometimes serious consequences as early as 1986, writing that since the introduction of baby swimming in Germany, several hundred infants had died from brain complications as a result of sinusitis and otitis that occurred after diving. Pediatricians also reported cases of cardiac arrest or respiratory failure. Infant swimming or diving reflex: Most human babies demonstrate an innate swimming or diving reflex from birth until the age of approximately six months, which is part of a wider range of primitive reflexes found in infants and babies, but not in children, adolescents and adults. Other mammals also demonstrate this phenomenon (see mammalian diving reflex). This reflex involves apnea (loss of drive to breathe), slowed heart rate (reflex bradycardia), and reduced blood circulation to the extremities such as fingers and toes (peripheral vasoconstriction). During the diving reflex, the infant's heart rate decreases by an average of 20%. The glottis is spontaneously sealed off, and water entering the upper respiratory tract is diverted down the esophagus into the stomach. Infant swimming or diving reflex: The diving response has been shown to have an oxygen-conserving effect, both during movement and at rest. Oxygen is saved for the heart and the brain, slowing the onset of serious hypoxic damage. The diving response can therefore be regarded as an important defence mechanism for the body. Drowning risk: Drowning is a leading cause of unintentional injury and death worldwide, and the highest rates are among children. Overall, drowning is the most common fatal injury among children aged 1–4 years in the USA, and is the second highest cause of death altogether in that age range, after congenital defects. A Centers for Disease Control and Prevention study in 2012 of United States data from 2005–2009 indicated that each year an average of 513 children aged 0–4 years were victims of fatal drowning, and a further 3,057 in that age range were treated in U.S. hospital emergency departments for non-fatal drowning. Of all the age groups, children aged 0–4 years had the highest death rate and also the highest non-fatal injury rate. In 2013, among children 1 to 4 years old who died from an unintentional injury, almost 30% died from drowning. These children most commonly drowned in swimming pools, often at their own homes.
Swimming lessons for infants: Traditionally, swimming lessons started at age four years or later, as children under four were not considered developmentally ready. However, swimming lessons for infants have become more common. The Australian Swimming Coaches and Teachers Association recommends that infants can start a formal program of swimming lessons at four months of age, and many accredited swimming schools offer classes for very young children, especially towards the beginning of the swimming season in October. In the US, the YMCA and American Red Cross offer swim classes. A baby has to be able to hold his or her head up (usually at 3 to 4 months) to be ready for swimming lessons. Children can be taught, through a series of "prompts and procedures," to float on their backs to breathe, and then to flip over and swim toward a wall or other safe area. Children are essentially taught to swim, flip over and float, then flip over and swim again. Thus, the method is called "swim, float, swim." Pros and cons of infant swimming lessons: In a 2009 retrospective case-control study that involved significant potential sources of bias, participation in formal swimming lessons was associated with an 88% reduction in the risk of drowning in 1- to 4-year-old children, although the authors of the study found the conclusion imprecise. Another study showed that infant swimming lessons may improve motor skills, but the number of study subjects was too low to be conclusive. There may be a link between infant swimming and rhinovirus-induced wheezing illnesses. Others have indicated concerns that the lessons might be traumatic, that the parents will have a false sense of security and not supervise young children adequately around pools, or that the infant could experience hypothermia, suffer from water intoxication after swallowing water, or develop gastrointestinal or skin infections. Professional positions: In 2010, the American Academy of Pediatrics reversed its previous position, in which it had disapproved of lessons before age 4, indicating that the evidence no longer supported an advisory against early swimming lessons. However, the AAP stated that it found the evidence at that time insufficient to support a recommendation that all 1- to 4-year-old children receive swimming lessons. The AAP further stated that, in spite of the popularity of swimming lessons for infants under 12 months of age and anecdotal evidence of infants having saved themselves, no scientific study had clearly demonstrated the safety and efficacy of training programs for infants that young. The AAP indicated its position that the possible benefit of early swimming instruction must be weighed against the potential risks (e.g., hypothermia, hyponatremia, infectious illness, and lung damage from pool chemicals). The U.S. Centers for Disease Control and Prevention recommends swimming lessons for children aged 1–4, along with other precautionary measures to prevent drowning. The Canadian Pediatric Society takes a middle-of-the-road approach. While it does not advise against swimming lessons for infants and toddlers, it advises that they cannot be considered reliable prevention against drowning, and that lessons for children less than 4 years should focus on building confidence in the water and teaching parents and children water safety skills.
It also recommends constant arm's-length supervision for all children less than 4 years near any body of water (including bathtubs), and that infants be held at all times.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Long ball** Long ball: In association football, a long ball is an attempt to move the ball a long distance down the field via one long aerial kick from either a goalkeeper or a defender directly to an attacking player, with the ball generally bypassing the midfield. Rather than have the ball arrive at the feet of the receiving attacking player, the attacker is expected to challenge the opposing defence in the air, with other attacking players and midfielders arriving to try to take possession of the ball if it breaks loose. It is a technique that can be especially effective for a team with either fast or tall strikers. The long ball technique is also a through pass from distance in an effort to get the ball past the defensive line and create a foot race between striker and defender. While often derided as either boring or primitive, it can prove effective where players or weather conditions suit this style; in particular, it is an effective counter-attacking style of play in which some defenders can be caught off-guard. Not all lengthy passes are considered long ball play, and long but precise passes towards a particular teammate may not fit the description. Long-ball play is generally characterised by the relatively aimless nature of the kick upfield, with the ball simply being 'hoofed' high in the air towards the general location of the forwards, who, given the length of time the ball is in the air, will have time to arrive at the position where the ball will drop. Statistical basis: The 'long ball theory' was first discussed by Charles Reep, a retired RAF Wing Commander, in the 1950s in England. Reep was an amateur statistician and analysed not only the number of passes that led to a goal, but also the field positions where those passes originated. Reep documented his findings in various publications including match day programmes. Reep developed a number of concepts describing effective long ball play. 'Gulleys' refer to the optimum position between the corner flag and six-yard box from which to make the final pass into the penalty box; the '3-pass optimisation rule' emerges from the fact that a higher percentage of goals are scored in moves involving only three passes prior to the shot; the '9 shots per goal' maxim states that on average only one goal is scored for every nine shots; and the 'twelve point three yard' position is the mean distance from goal at which goals are scored. The long-ball game is also advocated in such books as The Winning Formula: The Football Association Soccer Skills and Tactics, by Charles Hughes, which demonstrates with statistics that a majority of goals are scored within five passes of the ball. Jonathan Wilson criticises Reep's statistical analysis as heavily flawed. The 'three pass optimum', for example, comes in for particular criticism. Wilson notes that while Reep's statistics showed that a higher percentage of goals were scored in moves involving three passes, they also show that three-pass moves account for a higher percentage of all shots. Indeed, the percentage of shots for which three-pass or fewer moves account is higher than the percentage of goals for which they account, implying that moves involving more passes have a higher ratio of success (a worked numerical sketch of this point appears at the end of this article). Furthermore, Reep's own statistics show that this trend becomes stronger at higher levels of football, indicating that moves with a greater number of passes become more effective amongst higher-quality teams.
Reep also fails to distinguish statistically between three-pass moves that emerge from long balls and those that emerge from other sources such as attacking free kicks or successful tackles in the opponent's half. Effectiveness: The long ball strategy has often been criticised as a method that has held back the England national football team. Hughes became the head of coaching at the FA in the 1990s, and used this position to promote his theory of long ball, which followed on from the work of Reep. Hughes and those who defend the tactic claim that, time and time again, teams playing direct play have more success. At the 1994 FIFA World Cup, for example, the winning Brazil team scored the most goals from three or fewer passes, while the team to score from the move involving the most passes, the Republic of Ireland, was eliminated in the second round. While multi-pass moves such as those by Brazil against Italy in the 1970 FIFA World Cup Final or Argentina versus Serbia and Montenegro at the 2006 FIFA World Cup are widely lauded as brilliant examples of football, it is partially the rareness of success for such long moves that results in their appreciation. The long ball is, however, used by teams desperate to score a goal before the end of a match, though this is probably as much due to the lack of time for a gradual build-up as to its perceived effectiveness. The long ball technique is also effective in lower-level football matches, since players lack the skill to work as a team and pass the ball accurately up the field. A long ball is a quick counterattacking move and, with a fast striker, may produce multiple goals. Examples: The long ball is sometimes criticised as being used by weaker teams with less tactical skill. In the hands of mediocre teams, or in the lower youth leagues, this might be so. Analysis of its implementation at world-class levels, however, shows that effective use of long-ball techniques can be found in numerous competitive World Cup or championship club teams. It can be used as a counterattacking style, or as a daring through pass when opportunities open up during a game. The long ball requires top-level skill to implement correctly. Mere passing is not the only variable: intelligent running into space, good dribbling and crisp finishing are also required. One of the best uses of the long ball was Netherlands striker Dennis Bergkamp's goal against Argentina in the 1998 FIFA World Cup. Dutch defender Frank de Boer initiated the move from near the middle of the field, with a long pass that curled over seven opposing players. Bergkamp controlled the difficult ball, spun past a defender and smashed it home. The example illustrates the power of the long-ball style, but also that it is more than simply pumping the ball upfield. Only Bergkamp's excellent skills were able to take advantage of de Boer's outstanding and daring pass. As such, it emphasises that football is a game requiring not only a comprehensive package of individual skills, but imagination and creativity as well. Both are present in the long-ball style. Examples: Contemporary teams like Norway and Sweden have also demonstrated the viability of the long-ball approach when executed with skill, precision and creativity by top players. Norway played a characteristic 4-5-1 formation in the 1990s and early 21st century. The left back would often hit long crosses to Jostein Flo, who in turn would head the ball to either one of the central midfielders or to the striker.
This was known as the Flo Pass, and the Norwegian national team garnered much criticism for its perceived long-ball approach. Egil Olsen did, however, take the national team to two World Cups, and the long ball style of play is considered to have played an important role in accomplishing this. One of the greatest of the Norwegian goals scored in this style was by the striker Tore André Flo during the 1998 World Cup. Similar to the Bergkamp goal, but played to an advanced man on the wing, it began with an extremely long pass from Stig Inge Bjørnebye. Flo was alone when he received the ball. He ran on and cut inside to beat his defending opponent, then slotted the ball past the goalkeeper Cláudio Taffarel. The Norwegians went on to upset the mighty Brazilian team in this match. However, Brazil had already won the group before this game took place, while Norway needed to win. Examples: Accurate passes aimed at a specific player are examples of individual long balls, but do not represent the spirit of a team playing a long-ball game. In that situation, the team would be pumping long balls up repeatedly into an area, rather than to a specific player, hoping the striker would get to some of them and the percentages would pay off in the long run. Examples: The long ball can be very effective as a switch in game plan in pressure situations. In Chelsea's quarter-final victory over PSG in the 2013–14 UEFA Champions League, PSG needed to defend their 3–2 lead on aggregate for 10 more minutes when Fernando Torres entered the game as a substitute for Oscar. Chelsea's rehearsed game plan for this scenario was to go direct from anywhere in the field, and PSG's defensive line fell very deep and very compressed. All secondary balls from either Chelsea or PSG players fell into spaces occupied solely by Chelsea players, leading to multiple goal-scoring opportunities, one of which was eventually taken by Demba Ba.
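Wilson's percentage argument from the Statistical basis section can be made concrete with a small worked example. The figures in the following sketch are hypothetical, chosen purely for illustration (they are not Reep's data):

```python
# Hypothetical season totals (illustrative only, not Reep's actual figures):
# "short" moves have three passes or fewer; "long" moves have four or more.
shots_short, goals_short = 800, 80    # short moves: many shots, many goals
shots_long,  goals_long  = 200, 30    # long moves: fewer shots, fewer goals

total_goals = goals_short + goals_long
total_shots = shots_short + shots_long

# Reep's observation: most goals come from short moves...
print(f"share of goals from short moves: {goals_short / total_goals:.0%}")   # 73%
# ...but short moves also produce an even larger share of the shots:
print(f"share of shots from short moves: {shots_short / total_shots:.0%}")   # 80%

# Wilson's point: the conversion rate (goals per shot) is what matters,
# and here it is higher for the longer moves.
print(f"conversion, short moves: {goals_short / shots_short:.1%}")   # 10.0%
print(f"conversion, long moves:  {goals_long / shots_long:.1%}")     # 15.0%
```

Because the short moves' share of goals (73%) is lower than their share of shots (80%), the longer moves convert at the better rate, which is exactly the inversion of Reep's conclusion that Wilson describes.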
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**UserLAnd Technologies** UserLAnd Technologies: UserLAnd Technologies is a free and open-source compatibility-layer mobile app that allows Linux distributions, computer programs, computer games and numerical computing programs to run on mobile devices without requiring a root account. UserLAnd also provides a program library of popular free and open-source Linux-based programs, to which additional programs and different versions of programs can be added. The name "UserLAnd" is a reference to the concept of userland in modern computer operating systems. Overview: Unlike other Linux compatibility-layer mobile apps, UserLAnd does not require a root account. Because UserLAnd can function without root access (obtained through a process known as "rooting"), it avoids the risk of "bricking", that is, rendering the mobile device non-functional, a process which may also void the device's warranty. Furthermore, the requirement by programs other than UserLAnd that the mobile device be rooted has proven a formidable challenge for inexperienced Linux users. A prior application, GNURoot Debian, attempted to similarly run Linux programs on mobile devices, but it has ceased to be maintained and, therefore, is no longer operational. UserLAnd allows those with a mobile device to run Linux programs, many of which aren't available as mobile apps. Even for those Linux applications, e.g. Firefox, which have mobile versions available, people often find that their user experience with these mobile versions pales in comparison with the desktop versions. UserLAnd allows its users to recreate that desktop experience on their mobile device. Overview: UserLAnd currently only operates on Android mobile devices. UserLAnd is available for download on Google Play and F-Droid. Operation: To use UserLAnd, one must first download the application, typically from F-Droid or the Google Play Store, and then install it. Once it is installed, a user selects an app to open. When a program is selected, the user is prompted to enter login information and select a connection type. Following this, the user gains access to their selected program. Program library: UserLAnd is pre-loaded with the distributions Alpine, Arch, Debian, Kali, and Ubuntu; the web browser Firefox; the desktop environments LXDE and Xfce; the deployment environments Git and IDLE; the text-based games Colossal Cave Adventure and Zork; the numerical computing programs gnuplot, GNU Octave and R; the office suite LibreOffice; and the graphics editors GIMP and Inkscape. Further Linux programs and different versions of programs may be added to this program library. Reception: A review on Slant.co listed UserLAnd's "Pros": support for VNC X sessions, no "rooting" required, easy setup, and that it is free and open-source; and its "Cons": a lack of support for Lollipop and difficulty of use for non-technical users. By contrast, OS Journal found that the lack of a need to "root" one's mobile device made using UserLAnd considerably easier than other Linux compatibility-layer applications, a position shared by SlashGear's review of UserLAnd. OS Journal went on to state that with UserLAnd one could do "almost anything" and that "you’re (only) limited by your insanity" with respect to what can be done with the application. Linux Journal stated that "UserLAnd offers a quick and easy way to run an entire Linux distribution, or even just a Linux application or game, from your pocket."
SlashGear stated that UserLAnd is "absolutely super simple to use and requires little to no technical knowledge to get off the ground running."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pirated movie release types** Pirated movie release types: Pirated movie release types are the different types of pirated movies and television series that end up on the Internet. They vary wildly in rarity and quality due to the different sources and methods used for acquiring the video content, in addition to the encoding formats. Pirated movie releases may be derived from cams, which have distinctly low quality; screener and workprint discs or digital distribution copies (DDC); telecine copies from analog reels; video on demand (VOD) or TV recordings; and DVD and Blu-ray rips. They are seen on P2P networks and pirate websites, and rarely on video-sharing websites such as YouTube and Dailymotion due to their strict copyright rules. History: Pirated movies are usually released in many formats and different versions as better sources become available. The versions are usually encoded in the formats popular at the time of encoding. The sources for pirated copies have often changed with time in response to technology or anti-piracy measures. History: Cams: Cam releases were the early attempts at movie piracy, implemented by recording the on-screen projection of a movie in a cinema. This enabled groups to pirate movies which were in their theatrical period (not yet released for personal entertainment). Alternative methods were sought, as these releases often suffered distinctly low quality and required undetected videotaping in movie theaters. History: Pre-release: Beginning in 1998, feature films began to be released on the internet by warez groups prior to their theatrical release. These pirated versions usually came in the form of VCD or SVCD. A prime example was the release of American Pie. This is notable for three reasons: It was released in an uncensored workprint format. The later theatrical release was cut down by several minutes and had scenes reworked to avoid nudity to pass MPAA guidelines. History: It was released nearly two months prior to its release in theaters (CNN Headline News reported on its early release). It was listed by the movie company as one of the reasons it released an unrated DVD edition. History: DVD and VOD ripping: DivX: In October 1999, DeCSS was released. This program allowed anyone to remove the CSS encryption on a DVD. Although its authors only intended the software to be used for playback purposes, it also meant that one could decode the content perfectly for ripping; combined with the DivX 3.11 Alpha codec released shortly after, the new codec increased video quality from near VHS to almost DVD quality when encoding from a DVD source. History: Xvid: The early DivX releases were mostly internal for group use, but once the codec spread, it became accepted as a standard and quickly became the most widely used format for the scene. With help from associates who worked for a movie theater, movie production company, or video rental company, groups were supplied with massive amounts of material, and new releases began appearing at a very fast pace. When version 4.0 of DivX was released, the codec went commercial, and the need for a free codec led to the creation of Xvid (then called "XviD", "DivX" backwards). Later, Xvid replaced DivX entirely. Although the DivX codec has since evolved from version 4 to 10.6, it is banned in the warez scene due to its commercial nature.
History: x264: In February 2012, a consortium of popular piracy groups officially announced x264, the free H.264 codec, as the new standard for releases, replacing the previous format, Xvid wrapped in an AVI container. The move to H.264 also obsoleted AVI in favor of MP4 and Matroska, the latter most commonly using the .mkv file name extension. x265 (HEVC): With the increasing popularity of online movie-streaming sites like Netflix, some movies are now being ripped from such websites and encoded in HEVC wrapped in Matroska containers. This codec allows a high-quality movie to be stored in a relatively smaller file size. AV1: AV1 is a free modern video format developed by the Alliance for Open Media (AOM). It delivers high-quality video at lower bitrates than H.264 or even H.265/HEVC. Unlike HEVC, it can be streamed in common web browsers. It is being adopted by YouTube and Netflix, amongst others. As of 2023, a few encoders use AV1. Release formats: Below are the pirated movie release types along with their respective sources, ranging from the lowest quality to the highest. Scene rules define in which format and way each release type is to be packaged and distributed. Cam/CamRip: A Cam is a copy made in a cinema using a camcorder or mobile phone. The sound source is the camera microphone. Cam rips can quickly appear online after the first preview or premiere of the film. The quality ranges from subpar to adequate, depending on the people performing the recording and the resolution of the camera used. The main disadvantage of this is the sound quality. The microphone records not only the sound from the movie, but also background sound in the cinema. The camera can also record movements and audio of the audience in the theater, for instance when someone stands up in front of the screen, or when the audience laughs at a funny moment in the film. Telesync: A telesync (TS) is a bootleg recording of a film recorded in a movie theater, sometimes filmed using a professional camera on a tripod in the projection booth. The main difference between a CAM and a TS copy is that the audio of a TS is captured with a direct connection to the sound source (often an FM microbroadcast provided for the hearing-impaired, or from a drive-in theater). Often, a cam is mislabeled as a telesync. HDTS is used to label a high-definition video recording. Workprint: A workprint is a copy made from an unfinished version of a film produced by the studio. Typically a workprint has missing effects and overlays, and differs from its theatrical release. Some workprints have a time index marker running in a corner or on the top edge; some may also include a watermark. A workprint might be an uncut version, missing some material that would appear in the final movie (or including scenes later cut). Telecine: A telecine is a copy captured from a film print using a machine that transfers the movie from its analog reel to digital format. These were rare, because telecine machines for making these prints were very costly and very large. However, they have recently become much more common. Telecine has basically the same quality as DVD, since the technique is the same as digitizing the actual film to DVD. However, the result is inferior, since the source material is usually a lower-quality copy reel. Telecine machines usually cause a slight left-right jitter in the picture and have inferior color levels compared to DVD.
HDTC is used to label a high-definition video recording. PPV Rip: PPVRips come from Pay-Per-View sources. All PPVRip releases are brand-new movies which have not yet been released to Screener or DVD, but are available for viewing by customers with high-end TV package deals. Screener: Screeners are early DVD or BD releases of the theatrical version of a film, typically sent to movie reviewers, Academy members, and executives for review purposes. A screener normally has a message overlaid on its picture, with wording similar to: "The film you are watching is a promotional copy. If you purchased this film at a retail store, please contact 1-800-NO-COPYS to report it." or, more commonly if released for awards consideration, simply "FOR YOUR CONSIDERATION." Apart from this, some movie studios release their screeners with a number of scenes of varying duration shown in black-and-white. Aside from this message and the occasional B&W scenes, screeners are normally of only slightly lower quality than a retail DVD-Rip, due to the smaller investment in DVD mastering for the limited run. Some screener rips with the overlay message get cropped to remove the message and are released mislabeled as DVD-Rips. Note: Screeners make a small exception here, since the content may differ from a retail version, and so a screener can be considered lower quality than a DVD-Rip (even if the screener in question was sourced from a DVD). DDC: A digital distribution copy (DDC) is basically the same as a Screener, but sent digitally (FTP, HTTP, etc.) to companies instead of via the postal system. This makes distribution cheaper. Its quality is lower than that of an R5, but higher than a Cam or Telesync. In the warez scene, DDC refers to Downloadable/Direct Digital Content which is not freely available. R5: What is known as an R5 is a studio-produced unmastered telecine put out quickly and cheaply to compete against telecine piracy in Russia. The R5 tag refers to DVD region 5, which consists of Russia, the Indian subcontinent, most of Africa, North Korea, and Mongolia. R5 releases differ from normal releases in that they are a direct telecine transfer of the film without any of the image processing. If the DVD does not contain an English-language audio track, the R5 video is synced to a previously released English audio track; a LiNE tag is then added. This means that the sound is often not as good as on DVD-Rips. To account for the lesser audio quality typically present in R5 releases, some release groups take the high-quality Russian or Ukrainian 5.1-channel audio track included with the R5 DVD and modify it with audio editing software. They remove the non-English spoken portion of the audio and sync the remaining portion, which contains high-quality sound effects and music, with a previously recorded source of English vocals, usually taken from a LiNE-tagged release. The result of this process is an almost retail-DVD-quality surround sound audio track, which is included in the movie release. Releases of this type are normally tagged AC3.5.1.HQ, and details about what was done to the audio track as well as the video are present in the release notes accompanying the pirated movie. DVD Rip: A DVD-Rip is a final retail version of a film, typically released before it is available outside its originating region. Often, after one group of pirates releases a high-quality DVD-Rip, the "race" to release that film will stop.
The release is an AVI file and uses the XviD codec (some use DivX) for video, and commonly MP3 or AC3 for audio. Because of their high quality, DVD-Rips generally replace any earlier copies that may already have been circulating. Widescreen DVDs used to be indicated as WS.DVDRip. DVDMux releases differ from DVDRips in that they tend to use the x264 codec for video and the AAC or AC3 codec for audio, multiplexed into a .mp4/.mkv file. DVD-R: DVD-R refers to a final retail version of a film in DVD format, generally a complete copy from the original DVD. If the original DVD is released in the DVD-9 format, however, extras might be removed and/or the video reencoded to make the image fit the DVD-5 format, which is less expensive to burn and quicker to download. DVD-R releases often accompany DVD-Rips. DVD-R rips are larger in size, generally filling up the 4.37 or 7.95 GiB provided by DVD-5 and DVD-9 respectively. Untouched or lossless rips, in the strictest sense, are 1:1 rips of the source with nothing removed or changed, though the definition is often loosened to include DVDs which have not been transcoded and from which no features were removed from the user's perspective, removing only restrictions and possible nuisances such as copyright warnings and movie previews. TV Rip: TVRip is a capture source from an analog capture card (coaxial/composite/s-video connection). A digital satellite rip (DSR, also called SATRip or DTH) is a rip captured from a non-standard-definition digital source like satellite. HDTV stands for a source captured from HD television, while PDTV (Pure Digital TV) stands for any SDTV rip captured using solely digital methods from the original transport stream, not from HDMI or other outputs of a decoder; it can also refer to any standard-definition content broadcast on an HD channel. DVB rips often come from free-to-air transmissions (such as digital terrestrial television). With an HDTV source, the quality can sometimes even surpass DVD. Movies in this format are growing in popularity. Network logos can be seen, and some advertisements and commercial banners can be observed in some releases during playback. Analog, DSR, and PDTV sources were often reencoded to 512×384 if fullscreen; currently they are reencoded to 640×480 if fullscreen and 720×404 if widescreen. HDTV sources are reencoded to multiple resolutions, such as 720×404 (360p), 960×540 (540p), 1280×720 (720p), and 1920×1080 (1080p), at various file sizes for pirated releases. They may or may not be captured in progressive scan (480i digital transmission or 1080i broadcast for HD caps). VOD Rip: VODRip stands for Video-On-Demand Rip. This can be done by recording or capturing a video/movie from an on-demand service such as a cable or satellite TV service. Most services state that ripping or capturing films is a breach of their use policy, but it is becoming more and more popular, as it requires little technology or setup. There are many online on-demand services that do not require one to connect a TV to a computer. It can be done by using software to identify the video source address and downloading it as a video file, which is often the method that yields the best-quality end result.
However, some people have used screen-capture software, which effectively records, like a video camera, what is on a certain part of the computer screen, but does so internally, making the quality not HD, but nevertheless significantly better than a Cam or Telesync version filmed from a cinema, TV or computer screen. HC HD Rip: In an HC HDRip, HC stands for hard-coded subtitles. This format is released shortly after the movie leaves theaters. It is usually sourced from Korean VOD services like Naver. The quality of this release is lower than a WEB, as it is screen-recorded, and it is a less preferred option because the subtitles are baked into the video and cannot be removed, hence the HC tag. P2P groups have released blurred copies, which have the subtitles blurred or blocked. Web Capture: A WEBCap is a rip created by capturing video from a DRM-enabled streaming service, such as Amazon Prime Video or Netflix. Quality can range from mediocre (comparable with low-quality XviD encodes) to excellent (comparable with high-quality BD encodes). Essentially, the quality of the image obtained depends on the internet connection speed and the specifications of the recording machine. WEBCaps are nowadays labeled as WEBRips, making this tag rare. HDRip: HDRips are typically transcoded versions of HDTV or WEB-DL source files, but may be any type of HD transcode. Web Rip: In a WEBRip (P2P), the file is often extracted using the HLS or RTMP/E protocols and remuxed from a TS, MP4 or FLV container to MKV. This tag was used to indicate releases from streaming services with weak or no DRM, in order to differentiate them from iTunes's WEB-DL; however, it is now generally used to tag captured (and re-encoded) releases, much like WEBCap. Web Download: WEB-DL (P2P) refers to a file losslessly ripped from a streaming service, such as Netflix, Amazon Video, Hulu, Crunchyroll, Discovery GO, BBC iPlayer, etc., or downloaded via an online distribution website such as iTunes. The quality is relatively good, since these are not re-encoded ("untouched" releases). The video (H.264 or H.265) and audio (AC3/AAC) streams are usually extracted from iTunes or Amazon Video and remuxed into an MKV container without sacrificing quality. An advantage of these releases is that, like BD/DVDRips, they usually have no onscreen network logos, unlike TV rips. A disadvantage is that where there would normally be subtitles for scenes in other languages, these often aren't found in such releases. Some releases are still mislabeled as WEBRip. BDRip / Blu-ray: Blu-ray or Bluray rips (once known as BDRip) are encoded directly from a Blu-ray disc source to 2160p, 1080p or 720p (depending on the source) and use the x264 or x265 codec. They can be ripped from a BD25 or BD50 disc (or UHD Blu-ray at higher resolutions or bitrates), and even from Remuxes. BDRip now refers to a Blu-ray source that has been encoded to a lower resolution (e.g. 1080p down to 720p/576p/480p). BDRips can go from 2160p to 1080p, etc., as long as they go downward in resolution from the source disc. BRRips, which are often mistaken for BDRips, are an already-encoded video at HD resolution that is then transcoded to another resolution (usually SD). BDRips are not a transcode, but BRRips are, which changes their quality. BD/BRRips in DVDRip resolutions can vary between XviD/x264/x265 codecs (commonly 700 MB and 1.5 GB in size, as well as larger DVD5 or DVD9 sizes: 4.5 GB or 8.4 GB).
Size fluctuates depending on the length and quality of releases, but the larger the size, the more likely they use the x264/x265 codecs. A BD/BRRip at a lower resolution nevertheless looks better, because the encode is from a higher-quality source. BDRips have followed the above guideline since Blu-ray replaced the BDRip title structure in scene releases. Release formats: Full BD25/BD50 data rips also exist, and are similar to their counterpart DVD5/DVD9 full data releases. They are AVCHD-compatible using the BD folder structure (sometimes called Bluray RAW/m2ts/iso), and are usually intended to be burnt back to disc for playback in AVCHD-compatible Blu-ray players. BD25/BD50 data rips may or may not be remuxed and are never transcoded. UHD data rips also exist. In scene releases, a full copy of the Blu-ray disc is called "COMPLETE.BLURAY" or "BDISO" when in .iso file format, while a full copy of an Ultra HD Blu-ray disc is called "COMPLETE.UHD.BLURAY". Release formats: BD and BRRips come in various (now possibly outdated) versions: m-720p (or mini 720p), a compressed version of a 720p rip usually sized at around 2–3 GB, currently uncommon. Movie piracy sites such as RARBG and YTS have their own compressed versions of the movies released on those sites, tagged as 1080p. 720p is usually around 4–7 GB and is the most downloaded form of BDRip. m-1080p (or mini 1080p) is usually a little larger than 720p. 1080p can be anywhere from 8 GB to as large as 40–60 GB. mHD (or mini HD) releases are encoded at the same resolution but at a lower bitrate and are smaller in size. µHD (or micro HD) is fine-tuned AVC+AC3 encoding in an MP4 container aimed at 1 to 3 GB per feature movie, keeping 1920 pixels of horizontal resolution with a bitrate of 2 to 2.5 Mbit/s.
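Because these source, resolution, and codec tags appear directly in release file names, they can be extracted mechanically. Below is a minimal, illustrative Python sketch of such tag extraction; the tag lists and the `classify` function are invented for the example and cover only a small assumed subset of real scene-naming conventions.

```python
import re

# Toy classifier: pull source, resolution, and codec tags out of a
# dotted release name. Tag lists are a small assumed subset only.
SOURCE_TAGS = ["WEB-DL", "WEBRip", "BDRip", "BRRip", "HDTV", "PDTV",
               "DVDRip", "DVDMux", "DSR", "TVRip", "VODRip"]
RESOLUTIONS = ["2160p", "1080p", "720p", "576p", "540p", "480p"]
CODECS = ["x265", "x264", "XviD", "DivX"]

def classify(release_name: str) -> dict:
    """Return the first source, resolution, and codec tag found in the name."""
    def first_match(tags):
        for tag in tags:
            # Escape punctuation in the tag and match case-insensitively.
            if re.search(re.escape(tag), release_name, re.IGNORECASE):
                return tag
        return None
    return {
        "source": first_match(SOURCE_TAGS),
        "resolution": first_match(RESOLUTIONS),
        "codec": first_match(CODECS),
    }

print(classify("Example.Movie.2019.1080p.WEB-DL.x264"))
# -> {'source': 'WEB-DL', 'resolution': '1080p', 'codec': 'x264'}
```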
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Equine chorionic gonadotropin** Equine chorionic gonadotropin: Equine chorionic gonadotropin (abbreviated eCG, but not to be confused with ECG) is a gonadotropic hormone produced in the chorion of pregnant mares. Previously referred to as pregnant mare's serum gonadotropin (PMSG), the hormone is commonly used in concert with progestogen to induce ovulation in livestock prior to artificial insemination. Equine chorionic gonadotropin: Pregnant mares secrete the hormone from their endometrial cups between 40 and 130 days into their gestation, and once collected, it has been used to artificially induce estrus in female sheep, goats, cattle, and swine. Despite being less pure than pituitary extracts from sheep, goats, or swine, PMSG tends to be used because of its longer circulatory half-life. In equids PMSG has only LH-like activity, but in other species it has activity like both follicle-stimulating hormone (FSH) and luteinizing hormone (LH). Equine chorionic gonadotropin: Equine CG, like all glycoprotein hormones, is composed of two dissimilar subunits named alpha and beta. The alpha subunit is common to all glycoprotein hormones (LH, FSH, TSH, CG). The beta subunits are hormone-specific and are responsible for receptor-binding specificity, although CG binds to the same luteinizing hormone/choriogonadotropin receptor as LH. In the equids (horses, donkeys, zebras), the placental CGs and pituitary LH are expressed from the same gene and thus have the same protein sequence, differing only by their carbohydrate side-chains (particularly in their respective beta subunits). Criticism: The Swiss-based Animal Welfare Foundation has criticized how eCG is obtained from horse blood collected by inhumane practices on Uruguayan, Argentinian, and Icelandic horse farms, as documented in undercover videos shot there. A documentary about the hormone was released in 2023.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nose cancer in cats and dogs** Nose cancer in cats and dogs: The most common types of cancer affecting the animal's nose are carcinomas and sarcomas, both of which are locally invasive. The most common sites for metastasis are the lymph nodes and the lungs, but metastasis can also involve other organs. Signs and symptoms: Signs vary but may include bleeding from the nose, nasal discharge, facial deformity from bone erosion and tumor growth, sneezing, or difficulty breathing. Diagnosis: Standard X-rays are still acceptable and readily accessible imaging tools, but their resolution and level of anatomical detail are not as good as those of a computed tomography (CT) scan. In order to definitively confirm cancer in the nasal cavity, a tissue biopsy should be obtained. Treatment: Radiation therapy has become the preferred treatment. Its advantage is that it treats the entire nasal cavity together with the affected bone, and it has shown the greatest improvement in survival. Radiation therapy is typically delivered in 10–18 treatment sessions over the course of 2–4 weeks. It has a multitude of accompanying side effects and should be recommended on a case-by-case basis. Dogs in which nosebleeds are observed have an average life expectancy of 88 days. In instances where nosebleeds are not seen, the prognosis is slightly less grim: on average, a dog with nasal cancer has a life expectancy of 95 days.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Geek Code** Geek Code: The Geek Code, developed in 1993, is a series of letters and symbols used by self-described "geeks" to inform fellow geeks about their personality, appearance, interests, skills, and opinions. The idea is that everything that makes a geek individual can be encoded in a compact format which only other geeks can read. This is deemed to be efficient in some sufficiently geeky manner. It was once common practice to use a geek code as one's email or Usenet signature, but the last official version of the code was produced in 1996, and it has now largely fallen out of use. History: The Geek Code was invented by Robert A. Hayden in 1993 and was defined at geekcode.com. It was inspired by a similar code for the bear subculture, which in turn was inspired by the Yerkes spectral classification system for describing stars. After a number of updates, the last revision of the code was v3.12, in 1996. Some alternative encodings have also been proposed. For example, the 1997 Acorn Code was a version specific to users of Acorn's RISC OS computers. Format: Geek codes can be written in two formats: either as a simple string, or as a "Geek Code Block", a parody of the output produced by the encryption program PGP. The latter format includes a line specifying the version of Geek Code being used. Encoding: Occupation The code starts with the letter G (for Geek) followed by the geek's occupation(s): GMU for a geek of music, GCS for a geek of computer science, etc. There are 28 occupations that can be represented, but GAT is for geeks that can do anything and everything, and "usually precludes the use of other vocational descriptors". Categories The Geek Code website contains the complete list of categories, along with all of the special syntax options. Decoding: There have been several "decoders" produced to transform a specific geek code into English. Bradley M. Kuhn, in late 1998, made Williams' program available as a web service, and Joe Reiss made a similar page available in October 1999.
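For illustration, the two formats might look like the following. The category string here is an invented example rather than Hayden's actual code, though the BEGIN/END framing parodying PGP and the version line follow the format described above.

```
GCS d- s+:+ a- C++ U+ P+ L+ E- W++ N+ w O- M+ V- Y+ t+ 5 R tv b+ DI+ G e++ h r y+

-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GCS d- s+:+ a- C++ U+ P+ L+ E- W++ N+ w O- M+ V- Y+
t+ 5 R tv b+ DI+ G e++ h r y+
-----END GEEK CODE BLOCK-----
```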
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nickel sulfide** Nickel sulfide: Nickel sulfide is any inorganic compound with the formula NiSx. These compounds range in color from bronze (Ni3S2) to black (NiS2). The nickel sulfide with the simplest stoichiometry is NiS, also known as the mineral millerite. From the economic perspective, Ni9S8, the mineral pentlandite, is the chief source of mined nickel. Other minerals include heazlewoodite (Ni3S2), polydymite (Ni3S4), and vaesite (NiS2). Some nickel sulfides are used commercially as catalysts. Structure: Like many related materials, nickel sulfide adopts the nickel arsenide motif. In this structure, nickel is octahedral and the sulfide centers are in trigonal prismatic sites. NiS has two polymorphs. The α-phase has a hexagonal unit cell, while the β-phase has a rhombohedral cell. The α-phase is stable at temperatures above 379 °C (714 °F), and converts into the β-phase at lower temperatures. That phase transition causes an increase in volume by 2–4%. Synthesis and reactions: The precipitation of solid black nickel sulfide is a mainstay of traditional qualitative inorganic analysis schemes, which begin with the separation of metals on the basis of the solubility of their sulfides. Such reactions are written: Ni2+ + H2S → NiS + 2 H+. Many other, more controlled methods have been developed, including solid-state metathesis reactions (from NiCl2 and Na2S) and high-temperature reactions of the elements. The most commonly practiced reaction of nickel sulfides involves conversion to nickel oxides. This conversion involves heating the sulfide ores in air: NiS + 1.5 O2 → NiO + SO2. Occurrence: Natural The mineral millerite is also a nickel sulfide with the molecular formula NiS, although its structure differs from synthetic stoichiometric NiS due to the conditions under which it forms. It occurs naturally in low-temperature hydrothermal systems, in cavities of carbonate rocks, and as a byproduct of other nickel minerals. Occurrence: In glass manufacturing Float glass contains a small amount of nickel sulfide, formed from the sulfur in the fining agent Na2SO4 and the nickel contained in metallic alloy contaminants. Nickel sulfide inclusions are a problem for tempered glass applications. After the tempering process, nickel sulfide inclusions are in the metastable alpha phase. The inclusions eventually convert to the beta phase (stable at low temperature), increasing in volume and causing cracks in the glass. In the middle of tempered glass, the material is under tension, which causes the cracks to propagate and leads to spontaneous glass fracture. That spontaneous fracture occurs years or decades after glass manufacturing.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shape grammar** Shape grammar: Shape grammars in computation are a specific class of production systems that generate geometric shapes. Typically, shapes are 2- or 3-dimensional, thus shape grammars are a way to study 2- and 3-dimensional languages. Shape grammars were first introduced in a seminal article by George Stiny and James Gips in 1971. The mathematical and algorithmic foundations of shape grammars (in particular, for linear elements in two dimensions) were developed in "Pictorial and Formal Aspects of Shapes and Shape Grammars" (Birkhäuser Basel, 1975) by George Stiny. Applications of shape grammars were first considered in "Shape Grammars and their Uses" (Birkhäuser Basel, 1975) by James Gips. These publications also contain two independent, though equivalent, constructions showing that shape grammars can simulate Turing machines. Definition: A shape grammar consists of shape rules and a generation engine that selects and processes rules. A shape rule defines how an existing (part of a) shape can be transformed. A shape rule consists of two parts separated by an arrow pointing from left to right. The part left of the arrow is termed the Left-Hand Side (LHS). It depicts a condition in terms of a shape and a marker. The part right of the arrow is termed the Right-Hand Side (RHS). It depicts how the LHS shape should be transformed and where the marker is positioned. The marker helps to locate and orient the new shape. Definition: A shape grammar minimally consists of three shape rules: a start rule, at least one transformation rule, and a termination rule. The start rule is necessary to start the shape generation process. The termination rule is necessary to make the shape generation process stop. The simplest way to stop the process is a shape rule that removes the marker. Shape grammars differ from Chomsky grammars in a major respect: the production rules may be applied serially (as with Chomsky grammars) or in parallel (not allowed in Chomsky grammars), similar to the way "productions" are done in L-systems. Definition: A shape grammar system additionally has a working area where the created geometry is displayed. The generation engine checks the existing geometry, often referred to as the Current Working Shape (CWS), for conditions that match the LHS of the shape rules. Shape rules with a matching LHS are eligible for use. If more than one rule applies, the generation engine has to choose which rule to apply. In the alternative scenario, the engine first chooses one of the grammar rules and then tries to find all matches of the LHS of this rule in the CWS. If there are several matches, the engine can (depending on its configuration/implementation) apply the rule to all matches in parallel, apply the rule to all matches serially (which might lead to inconsistencies), or choose one of the detected matches and apply the rule to only this match. Shape grammars are most useful when confined to a small, well-defined generation problem such as housing layouts and structure refinement. Because shape rules are typically defined on small shapes, a shape grammar can quickly contain a lot of rules. The Palladian villas shape grammar presented by William Mitchell, for example, contains 69 rules applied throughout eight stages. Definition: Parametric shape grammars are an extension of shape grammars. The new shape in the RHS of the shape rule is defined by parameters so that it can take into account more of the context of the already existing shapes.
This typically affects the internal proportions of the new shape so that a greater variety of forms can be created. In this way, attempts are made to make shape grammars respond to structural conditions, for example the width of beams in roof structures, which depends on span (a minimal rule-cycle sketch follows the literature list below). Definition: Despite their popularity and applicability in academic circles, shape grammars have not found widespread use in generic computer-aided design applications. Applications: Shape grammars were originally presented for painting and sculpture but have been studied in particular in architecture (computer-aided architectural design), as they provide a formalism to create new designs. Other important domains in which shape grammars have been applied are the decorative arts, industrial design, and engineering. Software Prototypes: This is a list of software prototypes that are available on the web (several of them are, strictly speaking, rather set grammar systems): Grammar Environment, GRAPE, SD2, Shape Grammar Interpreter, Shaper2D, spapper, SubShapeDetector, Yingzao fashi building generator, SortalGI. Literature: Stiny, G. & Gips, J. (1972). Shape grammars and the generative specification of painting and sculpture. In Information Processing 71, 1460–1465. North-Holland Publishing Company. Stiny, G. (1975). Pictorial and Formal Aspects of Shape and Shape Grammars. Birkhäuser Basel. Stiny, G. (1980). Introduction to shape and shape grammars. Environment and Planning B: Planning and Design 7(3), 343–351. Literature: Knight, T.W. (1994). Transformations in Design: A Formal Approach to Stylistic Change and Innovation in the Visual Arts. Cambridge University Press. Stiny, G. (2006). Shape: Talking about Seeing and Doing. MIT Press, Cambridge, MA.
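To make the rule cycle concrete, here is a minimal Python sketch of a serial generation engine in the spirit described above: a start rule creates a marked square, a parametric transformation rule appends a half-size square to the right of the marked one, and a termination rule removes the marker. The square representation and all names are invented for this illustration; none of the prototypes listed above works this way specifically.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Square:
    x: float
    y: float
    size: float

def start_rule():
    """Start rule: create the initial shape and place the marker on it."""
    s = Square(0.0, 0.0, 16.0)
    return [s], s  # (current working shape, marker)

def transform_rule(cws, marker):
    """Parametric transformation rule: attach a half-size square to the
    right of the marked square and move the marker onto the new square."""
    new = Square(marker.x + marker.size, marker.y, marker.size / 2)
    return cws + [new], new

def terminate_rule(cws, marker):
    """Termination rule: remove the marker so no further rule can match."""
    return cws, None

def generate(steps=4):
    cws, marker = start_rule()
    for _ in range(steps):  # generation engine: apply rules serially
        cws, marker = transform_rule(cws, marker)
    cws, marker = terminate_rule(cws, marker)
    return cws

for sq in generate():
    print(sq)  # a row of squares whose sides halve at each step
```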
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Medical emergency** Medical emergency: A medical emergency is an acute injury or illness that poses an immediate risk to a person's life or long-term health, sometimes referred to as a situation risking "life or limb". These emergencies may require assistance from another, qualified person, as some of them, such as cardiovascular (heart), respiratory, and gastrointestinal emergencies, cannot be dealt with by the victim themselves. Depending on the severity of the emergency and the quality of any treatment given, it may require the involvement of multiple levels of care, from first aiders through emergency medical technicians, paramedics, and emergency physicians to anesthesiologists. Medical emergency: Any response to an emergency medical situation will depend strongly on the situation, the patient involved, and the availability of resources to help them. It will also vary depending on whether the emergency occurs whilst in hospital under medical care, or outside medical care (for instance, in the street or alone at home). Response: Summoning emergency services For emergencies starting outside medical care, a key component of providing proper care is to summon the emergency medical services (usually an ambulance) by calling for help using the appropriate local emergency telephone number, such as 999, 911, 111, 112 or 000. After determining that the incident is a medical emergency (as opposed to, for example, a police call), the emergency dispatchers will generally run through a questioning system such as AMPDS in order to assess the priority level of the call, and will take the caller's name and location. Response: First aid and assisting emergency services Those who are trained to perform first aid can act within the bounds of the knowledge they have, whilst awaiting the next level of definitive care. Response: Those who are not able to perform first aid can also assist by remaining calm and staying with the injured or ill person. A common complaint of emergency service personnel is the propensity of people to crowd around the scene of a victim, as it is generally unhelpful, making the patient more stressed and obstructing the smooth working of the emergency services. If possible, first responders should designate a specific person to ensure that the emergency services are called. Another bystander should be sent to wait for their arrival and direct them to the proper location. Additional bystanders can be helpful in ensuring that crowds are moved away from the ill or injured patient, allowing the responder adequate space to work. Response: Legal protections for responders To prevent the delay of life-saving aid from bystanders, many states of the USA have "Good Samaritan laws" which protect civilian responders who choose to assist in an emergency. In many situations, the general public may delay giving care due to fear of liability should they accidentally cause harm. Good Samaritan laws often protect responders who act within the scope of their knowledge and training, as a "reasonable person" in the same situation would act. Response: The concept of implied consent can protect first responders in emergency situations. A first responder may not legally touch a patient without the patient's consent. However, consent may be either expressed or implied: if a patient is able to make decisions, they must give expressed, informed consent before aid is given.
Response: However, if a patient is too injured or ill to make decisions – for example, if they are unconscious, have an altered mental status, or cannot communicate – implied consent applies. Implied consent means that treatment can be given, because it is assumed that the patient would want that care. Usually, once care has begun, a first responder or first aid provider may not leave the patient or terminate care until a responder of equal or higher training (such as an emergency medical technician) assumes care; doing so can constitute abandonment of the patient and may subject the responder to legal liability. Care must be continued until the patient is transferred to a higher level of care, the situation becomes too unsafe to continue, or the responder is physically unable to continue due to exhaustion or hazards. Response: Unless the situation is particularly hazardous and likely to further endanger the patient, evacuating an injured victim requires special skills and should be left to the professionals of the emergency medical and fire services. The chain of survival The principles of the chain of survival apply to medical emergencies where the patient is not breathing and has no pulse. This involves four stages: early access; early cardiopulmonary resuscitation (CPR); early defibrillation; and early advanced life support (ALS). Clinical response: Within hospital settings, an adequate staff is generally present to deal with the average emergency situation. Emergency medicine physicians and anaesthesiologists are trained to deal with most medical emergencies, and maintain CPR and Advanced Cardiac Life Support (ACLS) certifications. In disasters or complex emergencies, most hospitals have protocols to summon on-site and off-site staff rapidly. Both emergency department and inpatient medical emergencies follow the basic protocol of Advanced Cardiac Life Support. Irrespective of the nature of the emergency, adequate blood pressure and oxygenation are required before the cause of the emergency can be eliminated. Possible exceptions include the clamping of arteries in severe hemorrhage. Non-trauma emergencies: While the golden hour is a trauma treatment concept, two emergency medical conditions have well-documented time-critical treatment considerations: stroke and myocardial infarction (heart attack). In the case of stroke, there is a window of three hours within which the benefit of thrombolytic drugs outweighs the risk of major bleeding. In the case of a heart attack, rapid stabilization of fatal arrhythmias can prevent sudden cardiac arrest. In addition, there is a direct relationship between time-to-treatment and the success of reperfusion (restoration of blood flow to the heart), including a time-dependent reduction in mortality and morbidity.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Radical 156** Radical 156: Radical 156 or radical run (走部), meaning "run", is one of the 20 Kangxi radicals (of 214 in total) that are composed of 7 strokes. In the Kangxi Dictionary, there are 285 characters (out of 49,030) to be found under this radical. 走 is also the 150th indexing component in the Table of Indexing Chinese Character Components predominantly adopted by Simplified Chinese dictionaries published in mainland China. Sinogram: As an independent sinogram, 走 is one of the kyōiku kanji, or kanji taught in elementary school in Japan; it is a second-grade kanji.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Exopheromone** Exopheromone: Exopheromone is a term coined by Terence McKenna, proposed in his book Food of the Gods for the controversial idea of chemical signals between members of different classes of living things, as opposed to among conspecifics. He suggested that certain chemicals produced in abundance in various hallucinogenic plants and fungi, such as dimethyltryptamine and psilocybin, may act as pheromones produced by one kingdom (the vegetal) and awaiting absorption by various others (for example, early primates or hominids). In this way a kind of ecological pheromonal system may be at work among biological kingdoms and ecosystems that have coevolved closely for long stretches of time. The term is not scientifically accepted.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bike bus** Bike bus: A bike bus, also known as a bike train, cycle train, or cycle bus, is a group of people who cycle together on a set route following a set timetable, other than for sporting purposes. Cyclists may join or leave the bike bus at various points along the route. Most bike buses are a form of collective bicycle commuting (usually parents cycling their children together). Bike bus: A bike bus is often seen as a cyclist's version of a walking bus, although walking buses tend to be seen as exclusively for children travelling to school. Bike bus: Bike buses may have social, environmental, or political aims. One of the founders of the Aire Valley Bike Bus said "The Aire Valley Bike Bus was set up ... to encourage people to take up cycling and make the journey to work a more interesting and sociable experience." The stated aim of the Central Florida Bike Bus is "bringing together cyclists who want to commute by bike using the same roads as every other vehicle". The aim of the D12BikeBus in Dublin 12, Ireland is to make cycling to school safer and easier while lobbying for safe cycling infrastructure. Examples of bike buses: Aire Valley Bike Bus The Aire Valley Bike Bus first started in August 2008. It runs once a week, on a Wednesday, between Keighley and Bradford in West Yorkshire, UK. In November 2009 it was featured as Cycling England's Scheme of the Month. Examples of bike buses: Bike Bus (AU) Bike Bus is run by members of Bicycle New South Wales, Australia on a voluntary basis and has received funding from The Department of Environment and Water Resources, through the Low Emissions Technology and Abatement – Strategic Abatement program. It is a "community-led service to introduce groups of people to ride a set route together, to introduce beginners to a route, or enjoy each other's company". Examples of bike buses: Bike Bus (FR) Bike Bus is a French company based in the Dordogne delivering rental bikes to visitors all over France. Bike Bus (FR) began in 2002 and is currently the oldest bike rental company in the Dordogne. Biketrain PDX Biketrain PDX was started by Kiel Johnson in 2010. It co-ordinates bike trains, a form of bike bus, for school children in Portland, Oregon, with the aim of having a bike train running at every single school. Bus Cyclistes Perhaps the first bike bus initiative, based in Toulouse, France; it now shows bike buses across France. Central Florida Bike Bus First run on 23 August 2010, this bike bus runs to and from the University of Central Florida. It has a real-time Bike Bus Tracker, so that riders, with appropriate technology, can see where the bike bus is at any time. CicloExpresso (PT) CicloExpresso is a bike-to-school project that started in 2015 in Lisbon and has since spread to Aveiro. Examples of bike buses: Kingston upon Thames This bike bus ran every Monday in March 2011 from Sutton to Kingston upon Thames, UK, a total distance of roughly 7.5 miles. The organisers claimed they would "complete the journey in about an hour – that's less time than the 213 bus takes during peak times!" The service was led by cycle instructors and was provided by Sutton Council and Kingston Council.
Examples of bike buses: Massa Marmocchi Milano (IT) See https://www.massamarmocchi.it/ and https://www.youtube.com/watch?v=uKnZtjff5iU&t=2690s. Nether Edge, Sheffield Started in June 2013, following the Bus Cyclistes model, to provide a regular Saturday "bus service" that introduces local residents to safe routes in the corridor between Nether Edge and Sheffield city centre. D12BikeBus, Crumlin to Greenhills, Dublin 12, Ireland (RoI) Arguably one of the longest continually running bike buses (2019–present), with a greatest distance travelled of over 5 kilometres, the D12BikeBus is parents and children banding together to cycle safely to school while lobbying councils to make their way safer and easier. Student transport: As walking buses tend to focus on student transport to and from school, bike buses can also operate to and from schools, escorted and supervised by adults, including for other human-powered transport such as scooters. Student transport: A riding school bus is a group of schoolchildren supervised by one or more adults, riding bicycles along a set route, picking children up at their homes along the way until they all arrive at school. Riding school buses are similar to walking bus or bike bus programs. Like a traditional bus, riding school buses (also known as RSBs) have a fixed route with designated "bus stops" and "pick up times" at which they pick up children. Ideally the riding school bus will include at least one adult 'driver', who leads the group, and an adult 'conductor', who supervises from the end of the group. Student transport: Riding school bus programs have been developed in a number of local councils in Victoria, Australia, including the City of Merri-bek and the Shire of East Gippsland. Riding school bus programs deliver a number of benefits: improvement in child fitness and health; environmental benefits (a reduction in car trips and associated air pollution); development of traffic skills and confidence in children; socialization and increased community engagement for children; and reduction of traffic congestion around schools. Riding school buses are also known as pedal pods or cycling school buses.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Test cross** Test cross: Under the law of dominance in genetics, an individual expressing a dominant phenotype could contain either two copies of the dominant allele (homozygous dominant) or one copy each of the dominant and recessive alleles (heterozygous). By performing a test cross, one can determine whether the individual is heterozygous or homozygous dominant. In a test cross, the individual in question is bred with another individual that is homozygous for the recessive trait, and the offspring of the test cross are examined. Since the homozygous recessive individual can only pass on recessive alleles, the allele the individual in question passes on determines the phenotype of the offspring. Thus, the test yields two possible outcomes: if any of the offspring produced express the recessive trait, the individual in question is heterozygous for the dominant allele. Test cross: If all of the offspring produced express the dominant trait, the individual in question is homozygous for the dominant allele. History: The first uses of test crosses were in Gregor Mendel's experiments in plant hybridization. While studying the inheritance of dominant and recessive traits in pea plants, he explained that the "signification" (now termed zygosity) of an individual for a dominant trait is determined by the expression patterns of the following generation. Rediscovery of Mendel's work in the early 1900s led to an explosion of experiments employing the principles of test crosses. From 1908 to 1911, Thomas Hunt Morgan conducted test crosses while determining the inheritance pattern of a white eye-colour mutation in Drosophila. These test cross experiments became hallmarks in the discovery of sex-linked traits. Applications in model organisms: Test crosses have a variety of applications. Common animal organisms, called model organisms, where test crosses are often used include Caenorhabditis elegans and Drosophila melanogaster. Basic procedures for performing test crosses in these organisms are provided below: C. elegans To perform a test cross with C. elegans, place worms with a known recessive genotype with worms of an unknown genotype on an agar plate. Allow the male and hermaphrodite worms time to mate and produce offspring. Using a microscope, the ratio of recessive versus dominant phenotypes will elucidate the genotype of the dominant parent. Applications in model organisms: D. melanogaster To perform a test cross with D. melanogaster, select a trait with a known dominant and recessive phenotype. Red eye colour is dominant and white is recessive. Obtain virgin females with white eyes and young males with red eyes, and put them into a single tube. Once offspring begin to appear as larvae, remove the parental lines and observe the phenotype of the adult offspring. Limitations: There are many limitations to test crosses. A test cross can be a time-consuming process, as some organisms require a long growing time in each generation to show the necessary phenotype. A large number of offspring are also required for statistically reliable data. Test crosses are only useful if dominance is complete. Incomplete dominance is when the dominant allele and recessive allele come together to form a blend of the two phenotypes in the offspring. Variable expressivity is when a single allele produces a range of phenotypes, which is also not accounted for in a test cross. Limitations: As more advanced techniques to determine genotype emerge, the test cross is becoming less prevalent in genetics.
Genetic testing and genome mapping are modern advances which allow genotypes to be determined more efficiently and in more detail. Test crosses, however, are still used to this day and have created an excellent foundation for the development of more sophisticated techniques.
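The inference logic of a test cross is simple enough to simulate. Below is a small, illustrative Python sketch (all names invented for the example): it crosses an unknown parent against a homozygous recessive aa tester and infers zygosity from the offspring phenotypes, subject to the sample-size caveat noted under Limitations.

```python
import random

def cross(parent1: str, parent2: str) -> str:
    """Each parent passes one randomly chosen allele to the offspring."""
    return random.choice(parent1) + random.choice(parent2)

def test_cross(unknown: str, n_offspring: int = 50) -> str:
    """Cross against an 'aa' tester; any recessive offspring proves
    the unknown parent carries a recessive allele."""
    offspring = [cross(unknown, "aa") for _ in range(n_offspring)]
    if any(child == "aa" for child in offspring):
        return "heterozygous (Aa)"
    return "likely homozygous dominant (AA)"  # subject to sample size

random.seed(1)
print(test_cross("Aa"))  # expect ~50% recessive offspring -> heterozygous
print(test_cross("AA"))  # no recessive offspring possible
```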
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ambiguity** Ambiguity: Ambiguity is the type of meaning in which a phrase, statement, or resolution is not explicitly defined, making several interpretations plausible. A common aspect of ambiguity is uncertainty. It is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved according to a rule or process with a finite number of steps. (The prefix ambi- reflects the idea of "two", as in "two meanings".) The concept of ambiguity is generally contrasted with vagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with vague information it is difficult to form any interpretation at the desired level of specificity. Linguistic forms: Lexical ambiguity is contrasted with semantic ambiguity. The former represents a choice between a finite number of known and meaningful context-dependent interpretations. The latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness. Linguistic forms: Ambiguity in human language is argued to reflect principles of efficient communication. Languages that communicate efficiently will avoid sending information that is redundant with information provided in the context. This can be shown mathematically to result in a system which is ambiguous when context is neglected. In this way, ambiguity is viewed as a generally useful feature of a linguistic system. Linguistic ambiguity can be a problem in law, because the interpretation of written documents and oral agreements is often of paramount importance. Linguistic forms: Lexical ambiguity The lexical ambiguity of a word or phrase applies to its having more than one meaning in the language to which the word belongs. "Meaning" here refers to whatever should be represented by a good dictionary. For instance, the word "bank" has several distinct lexical definitions, including "financial institution" and "edge of a river". Or consider "apothecary". One could say "I bought herbs from the apothecary". This could mean one actually spoke to the apothecary (pharmacist) or went to the apothecary (pharmacy). Linguistic forms: The context in which an ambiguous word is used often makes it clearer which of the meanings is intended. If, for instance, someone says "I buried $100 in the bank", most people would not think someone used a shovel to dig in the mud. However, some linguistic contexts do not provide sufficient information to disambiguate a used word. Lexical ambiguity can be addressed by algorithmic methods that automatically associate the appropriate meaning with a word in context, a task referred to as word-sense disambiguation. Linguistic forms: The use of multi-defined words requires the author or speaker to clarify their context, and sometimes to elaborate on their specific intended meaning (in which case, a less ambiguous term should have been used). The goal of clear, concise communication is that the receiver(s) have no misunderstanding about what was meant to be conveyed. An exception to this could be a politician whose "weasel words" and obfuscation are necessary to gain support from multiple constituents with mutually exclusive, conflicting desires for their candidate of choice. Ambiguity is a powerful tool of political science. Linguistic forms: More problematic are words whose multiple meanings express closely related concepts.
"Good", for example, can mean "useful" or "functional" (That's a good hammer), "exemplary" (She's a good student), "pleasing" (This is good soup), "moral" (a good person versus the lesson to be learned from a story), "righteous", etc. "I have a good daughter" is not clear about which sense is intended. The various ways to apply prefixes and suffixes can also create ambiguity ("unlockable" can mean "capable of being opened” or "impossible to lock"). Linguistic forms: Semantic and syntactic ambiguity Semantic ambiguity occurs when a word, phrase or sentence, taken out of context, has more than one interpretation. In "We saw her duck" (example due to Richard Nordquist), the words "her duck" can refer either to the person's bird (the noun "duck", modified by the possessive pronoun "her"), or to a motion she made (the verb "duck", the subject of which is the objective pronoun "her", object of the verb "saw").Syntactic ambiguity arises when a sentence can have two (or more) different meanings because of the structure of the sentence—its syntax. This is often due to a modifying expression, such as a prepositional phrase, the application of which is unclear. "He ate the cookies on the couch", for example, could mean that he ate those cookies that were on the couch (as opposed to those that were on the table), or it could mean that he was sitting on the couch when he ate the cookies. "To get in, you will need an entrance fee of $10 or your voucher and your drivers' license." This could mean that you need EITHER ten dollars OR BOTH your voucher and your license. Or it could mean that you need your license AND you need EITHER ten dollars OR a voucher. Only rewriting the sentence, or placing appropriate punctuation can resolve a syntactic ambiguity. Linguistic forms: For the notion of, and theoretic results about, syntactic ambiguity in artificial, formal languages (such as computer programming languages), see Ambiguous grammar. Linguistic forms: Usually, semantic and syntactic ambiguity go hand in hand. The sentence "We saw her duck" is also syntactically ambiguous. Conversely, a sentence like "He ate the cookies on the couch" is also semantically ambiguous. Rarely, but occasionally, the different parsings of a syntactically ambiguous phrase result in the same meaning. For example, the command "Cook, cook!" can be parsed as "Cook (noun used as vocative), cook (imperative verb form)!", but also as "Cook (imperative verb form), cook (noun used as vocative)!". It is more common that a syntactically unambiguous phrase has a semantic ambiguity; for example, the lexical ambiguity in "Your boss is a funny man" is purely semantic, leading to the response "Funny ha-ha or funny peculiar?" Spoken language can contain many more types of ambiguities which are called phonological ambiguities, where there is more than one way to compose a set of sounds into words. For example, "ice cream" and "I scream". Such ambiguity is generally resolved according to the context. A mishearing of such, based on incorrectly resolved ambiguity, is called a mondegreen. Philosophy: Philosophers (and other users of logic) spend a lot of time and effort searching for and removing (or intentionally adding) ambiguity in arguments because it can lead to incorrect conclusions and can be used to deliberately conceal bad arguments. For example, a politician might say, "I oppose taxes which hinder economic growth", an example of a glittering generality. Some will think they oppose taxes in general because they hinder economic growth. 
Others may think they oppose only those taxes that they believe will hinder economic growth. In writing, the sentence can be rewritten to reduce possible misinterpretation, either by adding a comma after "taxes" (to convey the first sense) or by changing "which" to "that" (to convey the second sense) or by rewriting it in other ways. The devious politician hopes that each constituent will interpret the statement in the most desirable way, and think the politician supports everyone's opinion. However, the opposite can also be true—an opponent can turn a positive statement into a bad one if the speaker uses ambiguity (intentionally or not). The logical fallacies of amphiboly and equivocation rely heavily on the use of ambiguous words and phrases. Philosophy: In continental philosophy (particularly phenomenology and existentialism), there is much greater tolerance of ambiguity, as it is generally seen as an integral part of the human condition. Martin Heidegger argued that the relation between the subject and object is ambiguous, as is the relation of mind and body, and part and whole. In Heidegger's phenomenology, Dasein is always in a meaningful world, but there is always an underlying background for every instance of signification. Thus, although some things may be certain, they have little to do with Dasein's sense of care and existential anxiety, e.g., in the face of death. In calling his work Being and Nothingness an "essay in phenomenological ontology" Jean-Paul Sartre follows Heidegger in defining the human essence as ambiguous, or relating fundamentally to such ambiguity. Simone de Beauvoir tries to base an ethics on Heidegger's and Sartre's writings (The Ethics of Ambiguity), where she highlights the need to grapple with ambiguity: "as long as there have been philosophers and they have thought, most of them have tried to mask it ... And the ethics which they have proposed to their disciples has always pursued the same goal. It has been a matter of eliminating the ambiguity by making oneself pure inwardness or pure externality, by escaping from the sensible world or being engulfed by it, by yielding to eternity or enclosing oneself in the pure moment." Ethics cannot be based on the authoritative certainty given by mathematics and logic, or prescribed directly from the empirical findings of science. She states: "Since we do not succeed in fleeing it, let us, therefore, try to look the truth in the face. Let us try to assume our fundamental ambiguity. It is in the knowledge of the genuine conditions of our life that we must draw our strength to live and our reason for acting". Other continental philosophers suggest that concepts such as life, nature, and sex are ambiguous. Corey Anton has argued that we cannot be certain what is separate from or unified with something else: language, he asserts, divides what is not, in fact, separate. Following Ernest Becker, he argues that the desire to 'authoritatively disambiguate' the world and existence has led to numerous ideologies and historical events such as genocide. On this basis, he argues that ethics must focus on 'dialectically integrating opposites' and balancing tension, rather than seeking a priori validation or certainty. Like the existentialists and phenomenologists, he sees the ambiguity of life as the basis of creativity. Literature and rhetoric: In literature and rhetoric, ambiguity can be a useful tool. 
Groucho Marx's classic joke depends on a grammatical ambiguity for its humor, for example: "Last night I shot an elephant in my pajamas. How he got in my pajamas, I'll never know". Songs and poetry often rely on ambiguous words for artistic effect, as in the song title "Don't It Make My Brown Eyes Blue" (where "blue" can refer to the color, or to sadness). Literature and rhetoric: In narrative, ambiguity can be introduced in several ways: motive, plot, character. F. Scott Fitzgerald uses the latter type of ambiguity with notable effect in his novel The Great Gatsby. Mathematical notation: Mathematical notation is a helpful tool that eliminates a lot of misunderstandings associated with natural language in physics and other sciences. Nonetheless, there are still some inherent ambiguities, due to lexical, syntactic, and semantic reasons, that persist in mathematical notation. Mathematical notation: Names of functions The ambiguity in the style of writing a function should not be confused with a multivalued function, which can (and should) be defined in a deterministic and unambiguous way. Several special functions still do not have established notations. Usually, conversion to another notation requires scaling the argument or the resulting value; sometimes the same name of the function is used, causing confusion. Examples of such under-established functions: the sinc function; the elliptic integral of the third kind (when translating an elliptic integral from MAPLE to Mathematica, one should replace the second argument by its square, see Talk:Elliptic integral#List of notations; dealing with complex values, this may cause problems); the exponential integral; the Hermite polynomial. Expressions Ambiguous expressions often appear in physical and mathematical texts. Mathematical notation: It is common practice to omit multiplication signs in mathematical expressions. Also, it is common to give the same name to a variable and a function, for example, f = f(x). Then, if one sees f = f(y+1), there is no way to distinguish whether it means f = f(x) multiplied by (y+1), or the function f evaluated at an argument equal to (y+1). In each case of use of such notations, the reader is supposed to be able to perform the deduction and reveal the true meaning. Mathematical notation: Creators of algorithmic languages try to avoid ambiguities. Many algorithmic languages (C++ and Fortran) require the character * as the symbol of multiplication. The Wolfram Language used in Mathematica allows the user to omit the multiplication symbol, but requires square brackets to indicate the argument of a function; square brackets are not allowed for grouping of expressions. Fortran, in addition, does not allow use of the same name (identifier) for different objects, for example, function and variable; in particular, the expression f=f(x) is qualified as an error. Mathematical notation: The order of operations may depend on the context. In most programming languages, the operations of division and multiplication have equal priority and are executed from left to right. Until the last century, many editorials assumed that multiplication is performed first, for example, a/bc is interpreted as a/(bc); in this case, the insertion of parentheses is required when translating the formulas to an algorithmic language. In addition, it is common to write an argument of a function without parentheses, which also may lead to ambiguity.
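The left-to-right rule for equal-priority operators is easy to check directly. Here is a small Python illustration of the two readings of the informal expression a/bc discussed above:

```python
# Division and multiplication share precedence and evaluate left to
# right, so a / b * c is (a / b) * c, not a / (b * c); the textbook
# reading of "a/bc" must be parenthesized explicitly.
a, b, c = 12.0, 3.0, 2.0
print(a / b * c)    # 8.0 -> (a / b) * c
print(a / (b * c))  # 2.0 -> the a/(bc) interpretation
```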
Mathematical notation: In the scientific journal style, one uses roman letters to denote elementary functions, whereas variables are written using italics. For example, in mathematical journals the expression sin does not denote the sine function, but the product of the three variables s, i, n, although in the informal notation of a slide presentation it may stand for the sine function. Commas in multi-component subscripts and superscripts are sometimes omitted; this is also potentially ambiguous notation. For example, in the notation Tmnk, the reader can only infer from the context whether it means a single-index object, taken with the subscript equal to the product of the variables m, n and k, or a trivalent (three-index) tensor. Mathematical notation: Examples of potentially confusing ambiguous mathematical expressions An expression such as sin²α/2 can be understood to mean either (sin(α/2))² or (sin α)²/2. Often the author's intention can be understood from the context, in cases where only one of the two makes sense, but an ambiguity like this should be avoided, for example by writing sin²(α/2) or (1/2)sin²α. The expression sin⁻¹α means arcsin(α) in several texts, though it might be thought to mean (sin α)⁻¹, since sinⁿα commonly means (sin α)ⁿ. Conversely, sin²α might seem to mean sin(sin α), as this exponentiation notation usually denotes function iteration: in general, f²(x) means f(f(x)). However, for trigonometric and hyperbolic functions, this notation conventionally means exponentiation of the result of function application. Mathematical notation: The expression a/2b can be interpreted as meaning (a/2)b; however, it is more commonly understood to mean a/(2b). Notations in quantum optics and quantum mechanics It is common to define the coherent states in quantum optics with |α⟩ and states with a fixed number of photons with |n⟩. Then there is an "unwritten rule": the state is coherent if there are more Greek characters than Latin characters in the argument, and an n-photon state if the Latin characters dominate. The ambiguity becomes even worse if |x⟩ is used for the states with a certain value of the coordinate, and |p⟩ means the state with a certain value of the momentum, as may be done in books on quantum mechanics. Such ambiguities easily lead to confusion, especially when normalized dimensionless variables are used. The expression |1⟩ may mean a state with a single photon, or the coherent state with mean amplitude equal to 1, or a state with momentum equal to unity, and so on. The reader is supposed to guess from the context.
Mathematical notation: It may mean that the gain of the laser medium should be doubled, for example, by doubling the population of the upper laser level in a quasi-two-level system (assuming negligible absorption of the ground state). The term intensity is ambiguous when applied to light. The term can refer to any of irradiance, luminous intensity, radiant intensity, or radiance, depending on the background of the person using the term. Mathematical notation: Also, confusion may be related to the use of atomic percent as a measure of the concentration of a dopant, or of resolution of an imaging system as a measure of the size of the smallest detail which can still be resolved against the background of statistical noise. See also Accuracy and precision. The Berry paradox arises as a result of systematic ambiguity in the meaning of terms such as "definable" or "nameable". Terms of this kind give rise to vicious-circle fallacies. Other terms with this type of ambiguity are: satisfiable, true, false, function, property, class, relation, cardinal, and ordinal. Mathematical interpretation of ambiguity: In mathematics and logic, ambiguity can be considered to be an instance of the logical concept of underdetermination—for example, X = Y leaves open what the value of X is—while its opposite is self-contradiction, also called inconsistency, paradoxicalness, or oxymoron, or in mathematics an inconsistent system—such as X = 2, X = 3, which has no solution. Logical ambiguity and self-contradiction are analogous to visual ambiguity and impossible objects, such as the Necker cube and impossible cube, or many of the drawings of M. C. Escher. Constructed language: Some languages have been created with the intention of avoiding ambiguity, especially lexical ambiguity. Lojban and Loglan are two related languages which have been created for this purpose, addressing syntactic ambiguity as well. The languages can be both spoken and written. These languages are intended to provide greater technical precision than large natural languages, although historically such attempts at language improvement have been criticized. Languages composed from many diverse sources contain much ambiguity and inconsistency. The many exceptions to syntax and semantic rules are time-consuming and difficult to learn. Biology: In structural biology, ambiguity has been recognized as a problem for studying protein conformations. The analysis of a protein's three-dimensional structure consists in dividing the macromolecule into subunits called domains. The difficulty of this task arises from the fact that different definitions of what a domain is can be used (e.g. folding autonomy, function, thermodynamic stability, or domain motions), which sometimes results in a single protein having different—yet equally valid—domain assignments. Christianity and Judaism: Christianity and Judaism employ the concept of paradox synonymously with "ambiguity". Many Christians and Jews endorse Rudolf Otto's description of the sacred as 'mysterium tremendum et fascinans', the awe-inspiring mystery which fascinates humans. The apocryphal Book of Judith is noted for the "ingenious ambiguity" expressed by its heroine; for example, she says to the villain of the story, Holofernes, "my lord will not fail to achieve his purposes", without specifying whether my lord refers to the villain or to God. The orthodox Catholic writer G. K.
Chesterton regularly employed paradox to tease out the meanings in common concepts which he found ambiguous, or to reveal meaning often overlooked or forgotten in common phrases: the title of one of his most famous books, Orthodoxy (1908), itself employed such a paradox. Music: In music, pieces or sections which confound expectations and may be or are interpreted simultaneously in different ways are ambiguous, such as some polytonality, polymeter, other ambiguous meters or rhythms, ambiguous phrasing, or (Stein 2005, p. 79) any aspect of music. The music of Africa is often purposely ambiguous. To quote Sir Donald Francis Tovey (1935, p. 195), "Theorists are apt to vex themselves with vain efforts to remove uncertainty just where it has a high aesthetic value." Visual art: In visual art, certain images are visually ambiguous, such as the Necker cube, which can be interpreted in two ways. Perceptions of such objects remain stable for a time, then may flip, a phenomenon called multistable perception. The opposite of such ambiguous images are impossible objects. Pictures or photographs may also be ambiguous at the semantic level: the visual image is unambiguous, but the meaning and narrative may be ambiguous: is a certain facial expression one of excitement or fear, for instance? Social psychology and the bystander effect: In social psychology, ambiguity is a factor used in determining peoples' responses to various situations. High levels of ambiguity in an emergency (e.g. an unconscious man lying on a park bench) make witnesses less likely to offer any sort of assistance, due to the fear that they may have misinterpreted the situation and acted unnecessarily. Alternately, non-ambiguous emergencies (e.g. an injured person verbally asking for help) elicit more consistent intervention and assistance. With regard to the bystander effect, studies have shown that emergencies deemed ambiguous trigger the appearance of the classic bystander effect (wherein more witnesses decrease the likelihood of any of them helping) far more than non-ambiguous emergencies. Computer science: In computer science, the SI prefixes kilo-, mega- and giga- were historically used in certain contexts to mean the first three powers of 1024 (1024, 1024² and 1024³), contrary to the metric system, in which these prefixes unambiguously mean one thousand, one million, and one billion. This usage is particularly prevalent with electronic memory devices (e.g. DRAM) addressed directly by a binary machine register, where a decimal interpretation makes no practical sense. Computer science: Subsequently, the Ki, Mi, and Gi prefixes were introduced so that binary prefixes could be written explicitly, also rendering k, M, and G unambiguous in texts conforming to the new standard—this led to a new ambiguity in engineering documents lacking outward trace of the binary prefixes (which would necessarily indicate the new style) as to whether the usage of k, M, and G remains ambiguous (old style) or not (new style). 1 M (where M is ambiguously 1,000,000 or 1,048,576) is less uncertain than the engineering value 1.0e6 (defined to designate the interval 950,000 to 1,050,000). As non-volatile storage devices began to exceed 1 GB in capacity (where the ambiguity begins to routinely impact the second significant digit), GB and TB almost always mean 10⁹ and 10¹² bytes.
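The decimal/binary prefix ambiguity is easy to quantify. The following Python sketch (values invented for the example) shows how the same "GB" label yields two different byte counts depending on convention:

```python
# Decimal (SI) vs binary (IEC) prefixes: the same "1 GB" label can
# denote two different byte counts depending on convention.
GB  = 10**9   # SI gigabyte: 1,000,000,000 bytes
GiB = 2**30   # IEC gibibyte: 1,073,741,824 bytes

capacity_bytes = 500 * GB    # e.g. a drive marketed as "500 GB"
print(capacity_bytes / GiB)  # ~465.66 -- the apparent "missing" space
                             # when an OS reports the size in GiB
```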
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Window period** Window period: In medicine, the window period for a test designed to detect a specific disease (particularly an infectious disease) is the time between first infection and when the test can reliably detect that infection. In antibody-based testing, the window period depends on the time taken for seroconversion. The window period is important to epidemiology and safe sex strategies, and in blood and organ donation, because during this time an infected person or animal cannot be detected as infected but may still be able to infect others. For this reason, the most effective disease-prevention strategies combine testing with a waiting period longer than the test's window period. Examples: HIV The window period for HIV may be up to three months, depending on the test method and other factors. RNA-based HIV tests have the shortest window period. Modern and accurate testing can cut this period to 25 days, 16 days, or even as low as 12 days, again depending on the type of test and the quality of its administration and interpretation. Examples: Hepatitis B Two periods may be referred to as the window period in hepatitis B infection: (1) the period that elapses during HBsAg to HBsAb seroconversion, i.e. between the disappearance of surface antigen (HBsAg) from serum and the appearance of HBsAb (anti-HBs), and (2) the period between infection and the appearance of HBsAg. During the window of HBsAg to HBsAb seroconversion, IgM anti-core (HBc-IgM) is the only detectable antibody. HBV DNA may be positive as well. This window period does not occur in persons who develop chronic hepatitis B, i.e. who continue to have detectable HBV DNA for greater than 6 months (HBsAg remains positive), or in people who develop isolated HBcAb positivity, i.e. who lose HBsAg but do not develop HBsAb (HBV DNA may or may not remain positive).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**WiSA** WiSA: WiSA is a hardware and software standard for wirelessly transmitting digital audio from an audio source to wireless speakers. The standard is promoted by the Wireless Speaker and Audio Association (WiSA Association), which comprises consumer electronics manufacturers, retailers, and technology companies. The standard is based on technology from the WiSA Technologies corporation. WiSA removes the need to run speaker wires or RCA cables between an AV receiver (or similar device) and speakers in a home theater or home audio setup. However, cabling isn't completely eliminated; powered speakers are required, so the speakers still need to be connected to electrical outlets. Technical specifications: Maximum channels: 8. Uncompressed audio. Bit depth: up to 24-bit. Sample rate: up to 96 kHz. Latency: 2.6 ms at 96 kHz, or 5.2 ms at 48 kHz. Synchronization between speakers: ±2 µs. Maximum supported room size: 30 ft x 30 ft; not designed to span multiple rooms. Transmission band: U-NII 5 GHz spectrum. WiSA doesn't mandate support for audio compression codecs. WiSA permits, but doesn't require, support for object-based surround sound schemes such as DTS:X or Dolby Atmos.
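Incidentally, the two quoted latency figures are mutually consistent: each corresponds to roughly 250 samples of buffering, which suggests a fixed-size sample buffer (an inference from the spec numbers above, not a figure published by WiSA). A quick Python check:

```python
# Deriving the implied buffer size from the two published latency figures.
# (The "samples" interpretation is an inference for illustration, not a
# published WiSA specification.)
for latency_ms, rate_hz in [(2.6, 96_000), (5.2, 48_000)]:
    samples = latency_ms / 1000 * rate_hz
    print(f"{latency_ms} ms at {rate_hz:,} Hz ≈ {samples:.0f} samples")
# Both cases come to ~250 samples: latency halves when the sample rate doubles.
```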
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Polymer-fullerene bulk heterojunction solar cell** Polymer-fullerene bulk heterojunction solar cell: Polymer-fullerene bulk heterojunction solar cells are a type of solar cell researched in academic laboratories. Photovoltaic cells featuring a polymeric blend of organics have shown promise in a field largely dominated by inorganic (e.g. silicon) solar cells. Specifically, fullerene derivatives act as electron acceptors for donor materials like P3HT (poly-3-hexylthiophene-2,5-diyl), creating a polymer-fullerene based photovoltaic cell. Some of the improvements that organic solar cells have over inorganic solar cells are that they are flexible and therefore can be applied to a larger range of surfaces. They can also be produced much more easily via inkjet printing or spray deposition, and therefore are vastly cheaper to manufacture. A downside is that, because they are not crystalline (like silicon), but instead are produced in a purposely disordered blend of electron-acceptor and -donor materials (hence the name bulk heterojunction), they have a limited efficiency of charge transport. However, the efficiencies of these new types of photovoltaic cells have risen from 2.5% in 2001, to 5% in 2006, to greater than 10% in 2011. This is because improved methods for solution processing of acceptor and donor materials led to more efficient blending of the two materials. Further research can lead to polymer-fullerene based photovoltaic cells that approach the efficiency of current inorganic photovoltaic cells. Structure: Materials used in polymer-based photovoltaic cells are characterized by their total electron affinities and absorption power. The electron-rich, donor materials tend to be conjugated polymers with relatively high absorption power, whereas the acceptor in this case is a highly symmetric fullerene molecule with a strong affinity for electrons, ensuring sufficient electron mobility between the two. The arrangement of materials essentially determines the overall efficiency of the heterojunction solar cell. There are three donor-acceptor bulk morphologies: (a) the bilayer, (b) the bulk heterojunction, and (c) the "comb" structure. Typically, a polymer-fullerene bulk heterojunction solar cell has a layered structure. Functions/Applications: The primary function of a solar cell is the conversion of light energy into electrical energy by means of the photovoltaic effect. In particular, polymer-fullerene bulk heterojunction solar cells are promising because of their potential for low processing costs and mechanical flexibility in comparison to conventional inorganic solar cells. Solution processing potentially allows reductions in manufacturing costs through screen printing, doctor blading, inkjet printing, and spray deposition at low temperatures. To overcome the narrow spectral overlap of organic polymer absorption bands, experiments have blended conjugated polymer donors with high electron affinity fullerene derivatives as acceptors to extend the spectral sensitivity. Ternary solar cells are a promising approach to increasing the efficiency and light harvesting properties of organic photovoltaic cells (OPV).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dynamic Multipoint Virtual Private Network** Dynamic Multipoint Virtual Private Network: Dynamic Multipoint Virtual Private Network (DMVPN) is a dynamic tunneling form of a virtual private network (VPN) supported on Cisco IOS-based routers, Huawei AR G3 routers, and Unix-like operating systems. Benefits: DMVPN provides the capability for creating a dynamic-mesh VPN network without having to statically pre-configure all possible tunnel end-point peers, including IPsec (Internet Protocol Security) and ISAKMP (Internet Security Association and Key Management Protocol) peers. DMVPN is initially configured to build out a hub-and-spoke network by statically configuring the hubs (VPN headends) on the spokes; no change in the configuration on the hub is required to accept new spokes. Using this initial hub-and-spoke network, tunnels between spokes can be dynamically built on demand (dynamic mesh) without additional configuration on the hubs or spokes. This dynamic-mesh capability relieves the hub of the load of routing data between the spoke networks. Technologies: Next Hop Resolution Protocol (NHRP), RFC 2332. Generic Routing Encapsulation (GRE), RFC 1701, or multipoint GRE if spoke-to-spoke tunnels are desired. An IP-based routing protocol: EIGRP, OSPF, RIPv2, BGP or ODR (DMVPN hub-and-spoke only). Technologies: IPsec (Internet Protocol Security) using an IPsec profile, which is associated with a virtual tunnel interface in IOS software; all traffic sent via the tunnel is encrypted per the configured policy (IPsec transform set). Internal routing: routing protocols such as OSPF, EIGRP v1 or v2, or BGP are generally run between the hub and spoke to allow for growth and scalability; both EIGRP and BGP allow a higher number of supported spokes per hub. Technologies: Encryption: as with GRE tunnels, DMVPN allows for several encryption schemes (including none) for the encryption of data traversing the tunnels; for security reasons Cisco recommends that customers use AES. Phases: DMVPN has three phases that route data differently. Phase 1: all traffic flows from spokes to and through the hub. Phase 2: starts with Phase 1, then allows spoke-to-spoke tunnels based on demand and triggers. Phase 3: starts with Phase 1 and improves on the scalability of Phase 2, with fewer restrictions.
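The core mechanism enabling dynamic spoke-to-spoke tunnels is NHRP resolution: each spoke registers its (possibly dynamic) public address with the hub, and peers are looked up on demand. Below is a toy Python sketch of this registration/resolution idea, not actual router configuration; the class and method names are invented for illustration.

```python
# Toy model of NHRP-style registration and resolution (illustrative Python,
# not Cisco configuration; class/method names are invented for this sketch).
class Hub:
    def __init__(self) -> None:
        self.registry: dict[str, str] = {}  # overlay (tunnel) IP -> public NBMA IP

    def register(self, tunnel_ip: str, public_ip: str) -> None:
        # A spoke registers its current public address when it comes online;
        # the hub needs no per-spoke pre-configuration.
        self.registry[tunnel_ip] = public_ip

    def resolve(self, tunnel_ip: str) -> str | None:
        # A spoke asks the hub where a peer lives, to build a direct tunnel.
        return self.registry.get(tunnel_ip)

hub = Hub()
hub.register("10.0.0.2", "198.51.100.7")   # spoke A (documentation addresses)
hub.register("10.0.0.3", "203.0.113.20")   # spoke B
print(hub.resolve("10.0.0.3"))             # spoke A learns B's public address
```

The registry is what lets spoke A tunnel directly to spoke B after resolution, so the hub carries control-plane lookups rather than the spoke-to-spoke data itself.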
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**31 (number)** 31 (number): 31 (thirty-one) is the natural number following 30 and preceding 32. It is a prime number. In mathematics: 31 is the 11th prime number. It is a superprime and a self prime (after 3, 5, and 7), as no integer plus the sum of its own base-10 digits equals 31. It is the third Mersenne prime, of the form 2ⁿ − 1, and the eighth Mersenne prime exponent, in turn yielding the maximum positive value for a 32-bit signed binary integer in computing: 2,147,483,647. After 3, it is the second Mersenne prime not to be a double Mersenne prime, while the 31st prime number (127) is the second double Mersenne prime, following 7. On the other hand, the thirty-first triangular number is the perfect number 496, of the form 2⁵⁻¹(2⁵ − 1) by the Euclid-Euler theorem. 31 is also a primorial prime like its twin prime (29), as well as both a lucky prime and a happy number like its dual permutable prime in decimal (13). In mathematics: 31 is the 11th and final consecutive supersingular prime. After 31, the only supersingular primes are 41, 47, 59, and 71. In mathematics: 31 is the first prime centered pentagonal number, the fifth centered triangular number, and a centered decagonal number. For the Steiner tree problem, 31 is the number of possible Steiner topologies for Steiner trees with 4 terminals. At 31, the Mertens function sets a new low of −4, a value which is not gone below until 110. 31 is a repdigit in base 2 (11111) and in base 5 (111). In mathematics: The cube root of 31 is the value of π correct to four significant figures: 3.14138065… The first five numbers of the form p₁ × p₂ × p₃ × ⋯ × pₙ + 1 (with pₙ the nth prime) are prime: 3 = 2 + 1, 7 = 2 × 3 + 1, 31 = 2 × 3 × 5 + 1, 211 = 2 × 3 × 5 × 7 + 1, and 2311 = 2 × 3 × 5 × 7 × 11 + 1. The following term, 30031 = 59 × 509 = 2 × 3 × 5 × 7 × 11 × 13 + 1, is composite. The next prime number of this form has a largest prime p of 31: 2 × 3 × 5 × 7 × 11 × 13 × ⋯ × 31 + 1 ≈ 8.2 × 10³³. While 13 and 31 are the first pair of two-digit permutable primes and emirps with distinct digits in base ten, 11 is the only two-digit permutable prime that is its own permutable prime. Meanwhile, 13₁₀ in ternary is 111₃ and 31₁₀ in quinary is 111₅, with 13₁₀ in quaternary represented as 31₄ and 31₁₀ as 133₄ (their mirror permutations 331₄ and 13₄, equivalent to 61 and 7 in decimal, respectively, are also prime). (11, 13) are the third twin prime pair, formed by the fifth and sixth prime numbers, whose indices add to 11, itself the prime index of 31. The numbers 31, 331, 3331, 33331, 333331, 3333331, and 33333331 are all prime. For a time it was thought that every number of the form 3w1 (w threes followed by a 1) would be prime. However, the next nine numbers of the sequence are composite; their factorisations are: 333333331 = 17 × 19607843, 3333333331 = 673 × 4952947, 33333333331 = 307 × 108577633, 333333333331 = 19 × 83 × 211371803, 3333333333331 = 523 × 3049 × 2090353, 33333333333331 = 607 × 1511 × 1997 × 18199, 333333333333331 = 181 × 1841620626151, 3333333333333331 = 199 × 16750418760469, and 33333333333333331 = 31 × 1499 × 717324094199. The next term, 3₁₇1 (seventeen 3s followed by a 1), is prime, and the recurrence of the factor 31 in the last composite member of the sequence above can be used to prove that no sequence of the type RwE or ERw can consist only of primes, because every prime in the sequence will periodically divide further numbers. 31 is the maximum number of areas inside a circle created from the edges and diagonals of an inscribed six-sided polygon, per Moser's circle problem. It is also equal to the sum of the maximum numbers of areas generated by the first five inscribed polygons (1, 2, 4, 8, 16); as such, 31 is the first member of this sequence that falls short of doubling its predecessor, by 1.
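Two of the claims above lend themselves to direct verification: Moser's circle problem has the closed form 1 + C(n,2) + C(n,4), and the self-number property can be checked by exhaustion. A small Python sketch (function names are illustrative):

```python
from math import comb

def circle_regions(n: int) -> int:
    # Closed form for Moser's circle problem: the maximum number of regions
    # produced by all chords among n points on a circle.
    return 1 + comb(n, 2) + comb(n, 4)

def is_self_number(x: int) -> bool:
    # x is a self number if no generator m satisfies m + digit_sum(m) == x.
    return all(m + sum(int(d) for d in str(m)) != x for m in range(1, x))

print([circle_regions(n) for n in range(1, 7)])  # [1, 2, 4, 8, 16, 31]
print(is_self_number(31))                        # True: 31 has no generator
```

For n = 6 the formula gives 1 + 15 + 15 = 31, confirming both the circle-regions value and the break from the doubling pattern.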
In mathematics: 31 is the number of regular polygons with an odd number of sides that are known to be constructible with compass and straightedge, from combinations of the known Fermat primes of the form 2^(2ⁿ) + 1 (they are 3, 5, 17, 257 and 65537). In science: 31 is the atomic number of gallium. Astronomy: Messier object M31, a magnitude 4.5 galaxy in the constellation Andromeda; it is also known as the Andromeda Galaxy, and is readily visible to the naked eye in a modestly dark sky. The New General Catalogue object NGC 31, a spiral galaxy in the constellation Phoenix. In sports: Ice hockey goaltenders often wear the number 31. In other fields: Thirty-one is also: The number of days in each of the months January, March, May, July, August, October and December. The number of the date on which Halloween and New Year's Eve are celebrated. The code for international direct-dial phone calls to the Netherlands. Thirty-one, a card game. The number of kings defeated by the incoming Israelite settlers in Canaan according to Joshua 12:24: "all the kings, one and thirty" (Wycliffe Bible translation). A type of game played on a backgammon board. The number of flavors of Baskin-Robbins ice cream; the shops are called 31 Ice Cream in Japan. ISO 31 is the ISO's standard for quantities and units. In the title of the anime Ulysses 31. In the title of Nick Hornby's book 31 Songs. A women's honorary at The University of Alabama (XXXI). The number of the French department Haute-Garonne. In music, 31-tone equal temperament is a historically significant tuning system (31 equal temperament), first theorized by Christiaan Huygens and promulgated in the 20th century by Adriaan Fokker. The number of letters in the Macedonian alphabet. The number of letters in the Ottoman alphabet. The number of years approximately equal to 1 billion seconds. A slang term for masturbation in Turkish.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glycol chiller** Glycol chiller: Glycol chillers are specialized refrigeration systems that often involve the use of antifreeze. A popular application is in beverage production, wherein the food-grade chemical propylene glycol is used. Cooling in Brewing and Other Applications: Glycol chillers are a specific kind of refrigeration system, often used to cool a variety of liquids, including alcohol and other beverages. Using a chiller allows producers to lower the temperature of the product dramatically over a short period of time, depending on production needs. Propylene glycol plays a significant role in the application of a glycol chiller. In brewing, there are a few processes where decreasing or maintaining temperature is important, such as crash-cooling a beer after fermentation, keeping a steady temperature during fermentation (which generates heat), or cooling the wort after the initial boil. Glycol chillers in operation: A chiller is essentially a refrigerator comprising a compressor, evaporator, condenser and a metering device. An additional buffer tank is used with the chilling unit to provide extra system capacity, preventing excessive cycling, unexpected temperature fluctuations, and erratic system operation. Propylene glycol, a food-grade antifreeze, is typically used when consumable products are involved. Before using glycol in the brewing process, check that the propylene glycol is of USP grade to ensure it is suitable for food use.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Postvaccinal encephalitis** Postvaccinal encephalitis: Postvaccinal encephalitis (PVE) is a postvaccinal complication that was associated with vaccination with vaccinia virus during the worldwide smallpox eradication campaign. With mortality ranging between 25–30% and lifelong consequences in 16–30% of cases, it was one of the most severe adverse events associated with this vaccination. The mechanism of the underlying condition is unknown. Symptoms and signs: PVE symptoms start to appear between the 8th and 14th day after vaccination. Among the first are fever, headache, confusion and nausea. With passing time, lethargy, seizures, short- and long-term memory dysfunction, localized paralysis, hemiplegia, polyneuritis and convulsions may follow. In extreme cases PVE can lead to coma and death. Among the several forms of viral brain inflammation are rabies, polio, and two types transmitted by the mosquito: equine encephalitis in its various forms and St. Louis encephalitis. The latter two have appeared in epidemic form in the United States and are characterized by high fever, prolonged coma (which is responsible for the disease being known as a "sleeping sickness"), and convulsions sometimes followed by death. Encephalitis that results as a complication of another systemic infection is known as parainfectious encephalitis and can follow such diseases as measles (rubeola), influenza, and scarlet fever. The AIDS virus also infects the brain and produces dementia in a predictably progressive pattern. Although no specific treatment can destroy the virus once the disease has become established, many types of encephalitis can be prevented by immunization. Histology: Inflammatory extra-adventitial lesions are found not only in the brain but in the spinal cord as well. Lesions may be uniform in the acute phase or disseminated in the subacute phase. Unlike in cases of encephalitis lethargica, the main damage is found in the white matter of the brain. The meninges are infiltrated with T cells, plasma cells and phagocytic cells. Polymorphonuclear cells are found only in severe lesions. Apart from the cellular infiltrate in the perivascular space, there is tissue rarefaction in spaces close to damaged blood vessels. Accumulated small nuclei are found in places of such rarefaction. Strong demyelination with rapid clearance of degraded myelin is also observed in cases of PVE. Tissue damage ultimately leads to necrosis. Treatment: Vaccinia immunoglobulin was given to patients with PVE, but significant effects of this treatment were observed only if it was given before PVE developed. For this reason, only supportive treatment was given to patients with PVE to attenuate symptoms. Incidence: Vaccination with vaccinia virus was accompanied by a spectrum of adverse events, some of them lethal. The generally accepted number of deaths after vaccination with the live vaccine is one per one million vaccinations. However, during the eradication campaign more than one vaccine strain was used, and these strains differed significantly in the adverse events they caused. Incidence: The incidence of PVE ranged from 44.9 cases per one million vaccinations with the Bern strain used in western Europe to 2.9 cases per one million vaccinations with the NYCBH strain used in the US. The number of deaths directly connected to PVE also differed from strain to strain, from 11 deaths per one million vaccinations with the Bern strain to 1.2 deaths per one million vaccinations with the NYCBH strain. PVE incidence also depended on the age of the vaccinated person.
That is why children up to one year of age in the US, and up to three years of age in Europe, were excluded from vaccination. History: Complications involving the central nervous system after smallpox vaccination were observed from the very beginning of vaccination. The first diagnosed case of PVE was in 1905. At the time of the smallpox eradication campaign, when PVE was a serious problem, no tools were available for identifying the immune mechanism behind PVE. Given that modern smallpox vaccines are much safer and only selected personnel are vaccinated, PVE is no longer at the centre of attention. Nevertheless, because of its similarity to acute disseminated encephalomyelitis (ADEM), which is also a postvaccinal adverse reaction (observed, for example, after vaccination against hepatitis A or B virus), PVE is considered to be of an autoimmune nature. There is no definitive proof that PVE is caused directly by vaccine virus replication in neural tissues.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Myelitis** Myelitis: Myelitis is inflammation of the spinal cord which can disrupt the normal responses from the brain to the rest of the body, and from the rest of the body to the brain. Inflammation in the spinal cord can damage the myelin and axons, resulting in symptoms such as paralysis and sensory loss. Myelitis is classified into several categories depending on the area or the cause of the lesion; however, any inflammatory attack on the spinal cord is often referred to as transverse myelitis. Types of myelitis: Myelitis lesions usually occur in a narrow region but can spread and affect many areas. Acute flaccid myelitis: a polio-like syndrome that causes muscle weakness and paralysis. Types of myelitis: Poliomyelitis: disease caused by viral infection in the gray matter, with symptoms of muscle paralysis or weakness. Transverse myelitis: caused by axonal demyelination encompassing both sides of the spinal cord. Leukomyelitis: lesions in the white matter. Meningococcal myelitis (or meningomyelitis): lesions occurring in the region of the meninges and the spinal cord. Osteomyelitis of the vertebral bone surrounding the spinal cord (that is, vertebral osteomyelitis) is a separate condition, although some infections (for example, Staphylococcus aureus infection) can occasionally cause both at once. The similarity of the words reflects the fact that the combining form myel(o)- has multiple (homonymous) senses referring to bone marrow or the spinal cord. Symptoms: Depending on the cause of the disease, such clinical conditions progress at different speeds, with symptoms developing over a matter of hours to days. Most myelitis manifests as fast-progressing muscle weakness or paralysis, starting with the legs and then the arms, with varying degrees of severity. Sometimes the dysfunction of the arms or legs causes instability of posture and difficulty in walking or any movement. Symptoms generally also include paresthesia, a sensation of tickling, tingling, burning, pricking, or numbness of a person's skin with no apparent long-term physical effect. Adult patients often report pain in the back, extremities, or abdomen. Patients also present with increased urinary urgency, and bowel or bladder dysfunctions such as bladder incontinence, difficulty or inability to void, and incomplete evacuation of the bowel or constipation. Others also report fever, respiratory problems and intractable vomiting. Symptoms: Diseases associated with myelitis Conditions associated with myelitis include: Acute disseminated encephalomyelitis: autoimmune demyelination of the brain causing severe neurological signs and symptoms. Multiple sclerosis: demyelination of the brain and spinal cord. Neuromyelitis optica or Devic's disease: immune attack on the optic nerve and spinal cord. Sjögren's syndrome: destruction of the exocrine system of the body. Systemic lupus erythematosus: a systemic autoimmune disease featuring a wide variety of neurological signs and symptoms. Sarcoidosis: chronic inflammatory cells forming nodules in multiple organs. Atopy: an immune disorder of children manifesting as eczema or other allergic conditions; it can include atopic myelitis, which causes weakness. Symptoms: Immune-mediated myelopathies, a heterogeneous group of inflammatory spinal cord disorders including autoimmune disorders with known antibodies. Cause: Myelitis occurs due to various causes, such as infection.
Direct infection by viruses, bacteria, molds, or parasites, such as human immunodeficiency virus (HIV), human T-lymphotropic virus types I and II (HTLV-I/II), syphilis, Lyme disease, and tuberculosis, can cause myelitis, but it can also arise via non-infectious, inflammatory pathways. Myelitis often follows infection or vaccination. These phenomena can be explained by a theory of autoimmune attack, which states that autoimmune antibodies attack the spinal cord in the course of an immune reaction. Cause: Mechanism of myelitis The theory of autoimmune attack claims that a person with a neuroimmunologic disorder has a genetic predisposition to autoimmune disorder, and that environmental factors trigger the disease. The specific genetics of myelitis is not completely understood. It is believed that the immune system response could be to viral, bacterial, fungal, or parasitic infection; however, it is not known why the immune system attacks itself. In particular, for the immune system to cause an inflammatory response anywhere in the central nervous system, cells from the immune system must pass through the blood-brain barrier. In the case of myelitis, not only is the immune system dysfunctional, but the dysfunction also crosses this protective blood-brain barrier to affect the spinal cord. Cause: Infectious myelitis Viral myelitis: Most viral myelitis is acute, but the retroviruses (such as HIV and HTLV) can cause chronic myelitis. Poliomyelitis, or gray matter myelitis, is usually caused by infection of the anterior horn of the spinal cord by the enteroviruses (polioviruses, enteroviruses (EV) 70 and 71, echoviruses, coxsackieviruses A and B) and the flaviviruses (West Nile, Japanese encephalitis, tick-borne encephalitis). On the other hand, transverse myelitis or leukomyelitis (white matter myelitis) is often caused by the herpesviruses and influenza virus. It can be due to direct viral invasion or to immune-mediated mechanisms. Cause: Bacterial myelitis: Bacterial causes of myelitis include Mycoplasma pneumoniae, a common agent of the respiratory tract; studies have shown respiratory tract infections within 4–39 days prior to the onset of transverse myelitis. Tuberculosis, syphilis, and brucellosis are also known to cause myelitis in immune-compromised individuals. Myelitis is a rare manifestation of bacterial infection. Cause: Fungal myelitis: Fungi have been reported to cause spinal cord disease either by forming abscesses inside the bone or by forming granulomas. In general, there are two groups of fungi that may infect the CNS and cause myelitis: primary and secondary pathogens. Primary pathogens include Cryptococcus neoformans, Coccidioides immitis, Blastomyces dermatitidis, and Histoplasma capsulatum. Secondary pathogens are opportunistic agents that primarily infect immunocompromised hosts, such as Candida species, Aspergillus species, and zygomycetes. Cause: Parasitic myelitis: Parasitic species infect human hosts through larvae that penetrate the skin. They then enter the lymphatic and circulatory systems and migrate to the liver and lungs; some reach the spinal cord. Parasitic infections have been reported with Schistosoma species, Toxocara canis, Echinococcus species, Taenia solium, Trichinella spiralis, and Plasmodium species. Autoimmune myelitis: In 2016, an autoimmune form of myelitis was identified at the Mayo Clinic, due to the presence of anti-GFAP autoantibodies.
Immunoglobulins directed against the α-isoform of glial fibrillary acidic protein (GFAP-IgG) predicted a particular meningoencephalomyelitis termed autoimmune GFAP astrocytopathy, which was later found to be able to present as a myelitis as well. Diagnosis: Myelitis has an extensive differential diagnosis. The type of onset (acute versus subacute/chronic), along with associated symptoms such as the presence of pain or constitutional symptoms encompassing fever, malaise, weight loss or a cutaneous rash, may help identify the cause of myelitis. In order to establish a diagnosis of myelitis, one has to localize the spinal cord level and exclude cerebral and neuromuscular diseases. A detailed medical history, a careful neurologic examination, and imaging studies using magnetic resonance imaging (MRI) are also needed. With respect to the cause of the process, further work-up helps identify the cause and guide treatment. Full spine MRI is warranted, especially with acute-onset myelitis, to evaluate for structural lesions that may require surgical intervention, or for disseminated disease. Adding gadolinium further increases diagnostic sensitivity. A brain MRI may be needed to identify the extent of central nervous system (CNS) involvement. Lumbar puncture is important for the diagnosis of acute myelitis when a tumoral process or an inflammatory or infectious cause is suspected, or when the MRI is normal or non-specific. Complementary blood tests are also of value in establishing a firm diagnosis. Rarely, a biopsy of a mass lesion may become necessary when the cause is uncertain. However, in 15–30% of people with subacute or chronic myelitis, a clear cause is never uncovered. Treatment: Since each case is different, the following are possible treatments that patients might receive in the management of myelitis. Intravenous steroids: High-dose intravenous methylprednisolone for 3–5 days is considered the standard of care for patients suspected to have acute myelitis, unless there are compelling reasons otherwise. The decision to offer continued steroids or add a new treatment is often based on the clinical course and MRI appearance at the end of five days of steroids. Treatment: Plasma exchange (PLEX): Patients with moderate to aggressive forms of disease who do not show much improvement after being treated with intravenous and oral steroids are treated with PLEX. Retrospective studies of patients with TM treated with IV steroids followed by PLEX showed a positive outcome. PLEX has also been shown to be effective for other autoimmune or inflammatory central nervous system disorders. Particular benefit has been shown in patients who are in the acute or subacute stage of myelitis and show active inflammation on MRI. However, because of the risks implied by the lumbar puncture procedure, this intervention is determined by the treating physician on a case-by-case basis. Treatment: Immunosuppressants/immunomodulatory agents: Myelitis with no definite cause seldom recurs, but for others, myelitis may be a manifestation of the other diseases mentioned above. In these cases, ongoing treatment with medications that modulate or suppress the immune system may be necessary. Sometimes there is no specific treatment. Either way, aggressive rehabilitation and long-term symptom management are an integral part of the healthcare plan. Treatment: Prospective research directions Central nervous system nerve regeneration would be able to repair or regenerate the damage caused to the spinal cord.
It would restore functions lost due to the disease. Treatment: Engineering endogenous repair: There currently exists a hydrogel-based scaffold which acts as a channel to deliver nerve growth-enhancing substrates while providing structural support. These factors would promote nerve repair in the target area. Hydrogels' macroporous properties enable attachment of cells and enhance ion and nutrient exchange. In addition, hydrogels' biodegradability or bioresolvability prevents the need for surgical removal of the hydrogel after drug delivery: it is dissolved naturally by the body's enzymatic reactions. Treatment: Biochemical repair: Neurotrophic factor therapy and gene therapy. Neurotrophic growth factors regulate the growth, survival, and plasticity of the axon. They benefit nerve regeneration after injury to the nervous system. They are a potent initiator of sensory axon growth and are up-regulated at the lesion site. Continuous delivery of neurotrophic growth factor (NGF) would increase nerve regeneration in the spinal cord. However, excessive dosing of NGF often leads to undesired plasticity and sprouting of uninjured sensory nerves. Gene therapy could increase NGF efficacy through controlled and sustained delivery in a site-specific manner. Stem cell-based therapies: The possibility of nerve regeneration after injury to the spinal cord was long considered limited because of the absence of major neurogenesis. However, Joseph Altman showed that cell division does occur in the brain, which opened the potential for stem cell therapy for nerve regeneration. Stem cell-based therapies are used to replace cells lost and injured due to inflammation, to modulate the immune system, and to enhance regeneration and remyelination of axons. Neural stem cells (NSCs) have the potential to integrate with the spinal cord, because recent investigations have demonstrated their potential for differentiation into multiple cell types that are crucial to the spinal cord. Studies show that NSCs transplanted into a demyelinating spinal cord lesion regenerated oligodendrocytes and Schwann cells and completely remyelinated axons.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pht01** Pht01: pHT01 is a plasmid used as a cloning vector for expressing proteins in Bacillus subtilis. It is 7,956 base pairs in length. pHT01 carries Pgrac, an artificial, strong, IPTG-inducible promoter consisting of the Bacillus subtilis groE promoter, a lac operator, and the gsiB ribosome binding site; Pgrac was first found on plasmid pNDH33. The plasmid also carries replication regions from pMTLBs72, as well as genes conferring resistance to ampicillin and chloramphenicol. Pht01: Plasmid pHT01 is generally stable in both B. subtilis and Escherichia coli, and can be used for protein expression in these host strains. pNDH33/pHT01 have been used to produce up to 16% of total protein output in B. subtilis. Pgrac100 is an improved version of Pgrac, which can produce up to 30% of total cellular protein in B. subtilis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Swedish berries** Swedish berries: Swedish berries are red berry-shaped soft chewy candies which are manufactured by the candy company Maynards. The name Swedish Berries is trademarked by Vanderlei Candy, a division of Cadbury Canada. Their ingredients include sugar, glucose syrup, modified corn starch, citric acid, artificial flavours, mineral oil, carnauba wax, colour, and concentrated pear juice. Swedish Berries are similar in taste and consistency to Swedish Fish, another Maynards product.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Non-lock concurrency control** Non-lock concurrency control: In computer science, in the field of databases, non-lock concurrency control is a concurrency control method used in relational databases that does not rely on locking. There are several non-lock concurrency control methods, which involve the use of timestamps on transactions to determine transaction priority: optimistic concurrency control, timestamp-based concurrency control, and multiversion concurrency control. The optimistic variant is sketched below.
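As a concrete illustration, here is a minimal Python sketch of optimistic concurrency control using per-key version counters (the class and method names are invented for the example): a writer validates at commit time that the version it read is still current, and aborts rather than blocks on conflict.

```python
# Minimal optimistic concurrency control: values carry a version number, and
# a commit succeeds only if the version read is still current (no locks held).
class OCCStore:
    def __init__(self) -> None:
        self.data: dict[str, tuple[object, int]] = {}  # key -> (value, version)

    def read(self, key: str) -> tuple[object, int]:
        return self.data.get(key, (None, 0))

    def commit(self, key: str, new_value: object, read_version: int) -> bool:
        _, current = self.data.get(key, (None, 0))
        if current != read_version:
            return False                    # conflict detected: abort, not block
        self.data[key] = (new_value, current + 1)
        return True

store = OCCStore()
_, v = store.read("x")
print(store.commit("x", 42, v))   # True: no concurrent writer intervened
print(store.commit("x", 99, v))   # False: stale read, transaction must retry
```

The design choice that distinguishes this from lock-based control is that conflicts are detected after the fact and resolved by retrying, which avoids lock waits and deadlocks at the cost of wasted work under high contention.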
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded