**Cmin** Cmin: Cmin is a term used in pharmacokinetics for the minimum blood plasma concentration reached by a drug during a dosing interval, which is the time interval between administration of two doses. This definition is slightly different from Ctrough, the concentration immediately prior to administration of the next dose. Cmin is the opposite of Cmax, the maximum concentration that the drug reaches. Cmin must be above certain thresholds, such as the minimum inhibitory concentration (MIC), to achieve a therapeutic effect. In most cases Cmin is directly measurable. At steady state the minimum plasma concentration can also be calculated using the following equation: $C_{min} = \frac{S F D k_a}{V_d (k_a - k_e)} \times \left\{ \frac{e^{-k_e \tau}}{1 - e^{-k_e \tau}} - \frac{e^{-k_a \tau}}{1 - e^{-k_a \tau}} \right\}$ where $S$ = salt factor, $F$ = bioavailability, $D$ = dose, $k_e$ = elimination rate constant, $k_a$ = absorption rate constant, $V_d$ = volume of distribution, and $\tau$ = dosing interval. Cmin is also an important parameter in bioavailability and bioequivalence studies; it is part of the pharmacokinetic information recommended for submission of investigational new drug applications.
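As a minimal illustration of the steady-state formula above, the following Python sketch evaluates Cmin for a hypothetical oral drug; all parameter values are invented for demonstration and do not describe any real drug.

```python
import math

def cmin_steady_state(S, F, D, ka, ke, Vd, tau):
    """Steady-state Cmin for first-order absorption and elimination
    (the equation above). Units must be mutually consistent, e.g.
    D in mg, Vd in L, rate constants in 1/h, tau in h -> Cmin in mg/L."""
    accumulation = (math.exp(-ke * tau) / (1 - math.exp(-ke * tau))
                    - math.exp(-ka * tau) / (1 - math.exp(-ka * tau)))
    return (S * F * D * ka) / (Vd * (ka - ke)) * accumulation

# Hypothetical parameters, for illustration only.
print(cmin_steady_state(S=1.0, F=0.9, D=500, ka=1.2, ke=0.15, Vd=40, tau=12))
```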
**Order operator** Order operator: In quantum field theory, an order operator or an order field is a quantum field version of Landau's order parameter whose expectation value characterizes phase transitions. There exists a dual version of it, the disorder operator or disorder field, whose expectation value characterizes a phase transition by indicating the prolific presence of defect or vortex lines in an ordered phase. Order operator: The disorder operator is an operator that creates a discontinuity of the ordinary order operators or a monodromy for their values. For example, a 't Hooft operator is a disorder operator. So is the Jordan–Wigner transformation. The concept of a disorder observable was first introduced in the context of 2D Ising spin lattices, where a phase transition between spin-aligned (magnetized) and disordered phases happens at some temperature.
**Batter (walls)** Batter (walls): In architecture, batter is a receding slope of a wall, structure, or earthwork. A wall sloping in the opposite direction is said to overhang. When used in fortifications it may be called a talus. Batter (walls): The term is used with buildings and non-building structures to identify when a wall or element is intentionally built with an inward slope. A battered corner is an architectural feature using batters. A batter is sometimes used in foundations, retaining walls, dry stone walls, dams, lighthouses, and fortifications. Other terms that may be used to describe battered walls are "tapered" and "flared". Typically in a battered wall, the taper provides a wide base to carry the weight of the wall above, with the wall gradually thinning toward the top so as to reduce the load placed on the masonry below. The batter angle is typically described as a ratio of the offset and height, or as a degree angle, and depends on the building materials and application. For example, typical dry-stone construction of retaining walls utilizes a 1:6 ratio; that is, for every 1 inch the wall steps back, it rises 6 inches in height (see the conversion sketch at the end of this entry). Historical uses: Walls may be battered to provide structural strength or for decorative reasons. In military architecture, they made walls harder to undermine or tunnel under, and provided some defense against artillery, especially early siege engine projectiles and cannon, where the energy of the projectile might be largely deflected, on the same principle as modern sloped armor. Siege towers could not be pushed next to the top of a strongly battered wall. Types of fortification using batters included the talus and glacis. Regional examples: Asia Architectural styles that often include battered walls as a stylistic feature include Indo-Islamic architecture, where it was used in many tombs and some mosques, as well as many forts in India. Tughlaqabad Fort in Delhi is a good example, built by Ghiyath al-Din Tughluq, whose tomb opposite the fort also has a strong batter. In Hindu temple architecture, the walls of the large Gopurams of South India are usually battered, often with a slight concave curve. Regional examples: In the Himalayan region, battered walls are one of the typifying characteristics of traditional Tibetan architecture. With minimal foreign influence over the centuries, the region's use of battered walls is considered to be an indigenous creation and part of Tibet's vernacular architecture. This style of batter wall architecture was the preferred style of construction for much of Inner Asia, and has been used from Nepal to Siberia. The 13-story Potala Palace in Lhasa is one of the best-known examples of this style and was named a UNESCO World Heritage Site in 1994. Regional examples: Middle East Battered walls are a common architectural feature found in Ancient Egyptian architecture. Residential structures were usually constructed from mud brick, while limestone, sandstone, or granite was used mainly in the construction of temples and tombs. In terms of monumental architecture, the Giza pyramid complex in Cairo utilized different grades of battered walls to achieve great heights with relative stability. The Pyramid of Djoser, an archaeological remain in the Saqqara necropolis northwest of the city of Memphis, is a quintessential example of battered walls used in sequence to produce a step pyramid.
Regional examples: New World In the Americas, battered walls are seen as a fairly common aspect of Mission style architecture, where Spanish design was hybridized with Native American adobe building techniques. As exemplified by the San Estevan del Rey Mission Church in Acoma, New Mexico, c.1629-42, the heights desired by Spanish Catholic Mission design was achieved through battering adobe bricks to achieve structural stability.
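To make the ratio convention concrete, the short Python sketch below converts a batter ratio (offset : rise) into a degree angle of lean; the 1:6 dry-stone value is taken from the text, and the function name is illustrative.

```python
import math

def batter_angle_degrees(offset, rise):
    """Angle of lean from vertical for a wall that steps back
    `offset` units for every `rise` units of height."""
    return math.degrees(math.atan(offset / rise))

# Typical dry-stone retaining wall from the text: 1 inch back per 6 inches of rise.
print(f"{batter_angle_degrees(1, 6):.1f} degrees from vertical")  # ~9.5
```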
**Migraine surgery** Migraine surgery: Migraine surgery is a surgical operation undertaken with the goal of reducing or preventing migraines. Migraine surgery most often refers to surgical decompression of one or several nerves in the head and neck which have been shown to trigger migraine symptoms in many migraine sufferers. Following the development of nerve decompression techniques for the relief of migraine pain in the year 2000, these procedures have been extensively studied and shown to be effective in appropriate candidates. The nerves that are most often addressed in migraine surgery are found outside of the skull, in the face and neck, and include the supra-orbital and supra-trochlear nerves in the forehead, the zygomaticotemporal nerve and auriculotemporal nerves in the temple region, and the greater occipital, lesser occipital, and third occipital nerves in the back of the neck. Nerve impingement in the nasal cavity has additionally been shown to be a trigger of migraine symptoms. Indications and patient selection: Migraine surgery is usually reserved for migraine patients who fail more conservative therapy or who cannot tolerate the side effects of drugs used to treat their migraines. Appropriate patients are screened using injections of local anesthesia to provide a temporary nerve block. In some cases, Botox may be used to provide temporary decompression of the nerve. Patients who respond to nerve blocks often see an immediate though temporary reduction in their pain by "shutting off" the nerve that is triggering the migraine, while pain relief following Botox injections is provided by relaxation of nearby muscle tissue that may be compressing the nerve. Patients who respond well to these screening procedures are felt to be excellent candidates for migraine surgery. Surgical procedures: Migraine surgery is an outpatient procedure which addresses peripheral nerves through limited incisions. Depending on the symptoms of the patient and the screening results following nerve blocks or Botox, different areas of the head and neck may be addressed to treat the nerves found to be the migraine trigger in a given patient. Migraine surgery is always individualized to each patient's symptoms and anatomy. Surgical procedures: Anterior nerves Nerves found in the forehead (supra-orbital and supra-trochlear nerves) are either addressed using endoscopic surgery or by using an incision in the crease of the upper eyelid. Structures that are found pressing on the nerves here are released and may include bone at the upper orbit, fascia, blood vessels, or muscle tissue. The supra-orbital and supra-trochlear nerves travel through the corrugator supercilii muscle which enables frowning of the brow. These nerves are released from these muscles so they may lie free of pressure from these muscle structures. Small blood vessels which travel with these nerves may be divided to prevent pressure as well. In the bony notch where these nerves exit the eye socket, small pieces of bone or connective tissue may be removed so undue pressure is not placed on the nerves in this region. Surgical procedures: Nerves of the temple region The zygomaticotemporal nerve and auriculotemporal nerves are found in areas between the top of the ear and the lateral portion of the eye, in different areas of the temple. These nerves can also be addressed by endoscopic techniques, or well hidden small incisions. 
Blood vessels next to or crossing these nerves are often found to be the source of compression, and these blood vessels may be divided to prevent irritation of the nerves. Associated temporal muscle release in the region of these nerves may also be indicated. Because these nerves are very small and provide feeling to small regions of the scalp, they are often cut or avulsed, allowing the ends to retract into muscle tissue to prevent neuroma formation. Surgical procedures: Posterior nerves Chronic irritation of the occipital nerves is called occipital neuralgia and is frequently the cause of migraine symptoms. The greater occipital and third occipital nerves are addressed through an incision at the base of the scalp in the upper neck, by either a vertical or transverse incision. Incisions are usually placed within the hairline. The greater occipital nerve travels through several muscle layers (including the trapezius muscle and splenius capitis muscle) where it is often compressed, and therefore surgery for this nerve involves releasing it from tight muscle and fascia in the upper neck. Blood vessels found crossing the nerve, such as the occipital artery, may be divided in order to avoid chronic pressure and irritation of the greater occipital nerve. The third occipital nerve is a small nerve that travels near the greater occipital nerve and may be treated similarly in order to alleviate chronic irritation. The lesser occipital nerve is a small nerve that has additionally been found to be associated with migraine pain. This nerve is found near the sternocleidomastoid muscle and may be decompressed or divided here through a small incision. As this small nerve provides feeling for a small region of the scalp, the minimal numbness resulting from its division often goes unnoticed. Surgical procedures: Nerves of the nose The nerves of the nasal lining may be impinged by structures in the nose such as the nasal septum and turbinates. Nasal surgery to decompress these regions may include septoplasty, turbinectomy, or other rhinoplasty procedures. Surgical outcomes: Though migraine surgery was initially considered experimental, its benefits have now been well documented. Follow-up data have shown that 88% of migraine surgery patients experienced a positive response to the procedure after 5 years. 29% of patients have been shown to achieve complete elimination of their migraine disease, while an additional 59% of patients reported a significant decrease in their pain and symptoms 5 years following their migraine surgery. 12% of patients undergoing migraine surgery reported no change in their symptoms after 5 years. Migraine surgery has additionally been studied in a socioeconomic context and has been shown to reduce both direct and indirect costs associated with migraine disease. Such costs after migraine surgery have been shown to be reduced by a median of $3,949 per patient per year.
**Fruit salad** Fruit salad: Fruit salad is a dish consisting of various kinds of fruit, sometimes served in a liquid, either their juices or a syrup. In different forms, fruit salad can be served as an appetizer or as a side salad. When served as an appetizer, a fruit salad is sometimes known as a fruit cocktail (often connoting a canned product) or fruit cup (when served in a small container). Fruit salad: There are many types of fruit salad, ranging from the basic (no nuts, marshmallows, or dressing) to the moderately sweet (Waldorf salad) to the sweet (ambrosia salad). Another "salad" containing fruit is a jello salad, with its many variations. A fruit cocktail is well defined in the US to mean a well-distributed mixture of small diced pieces of (from highest percentage to lowest) peaches, pears, pineapple, grapes, and cherry halves. Fruit salad may also be canned (with larger pieces of fruit than a cocktail). Description: There are several home recipes for fruit salads that contain different kinds of fruit, or that use a sauce other than the fruit's juice or syrup. Common ingredients used in fruit salads include strawberries, pineapple, honeydew, watermelon, grapes, and kiwifruit. Various recipes may call for the addition of nuts, fruit juices, certain vegetables, yogurt, or other ingredients. Description: One variation is a Waldorf-style fruit salad, which uses a mayonnaise-based sauce. Other recipes use sour cream (such as in ambrosia), yogurt, or even custard as the primary sauce ingredient. A variation on fruit salad uses whipped cream mixed in with many varieties of fruit (usually a mixture of berries), and also often includes miniature marshmallows. Rojak, a Malaysian fruit salad, uses a spicy sauce with peanuts and shrimp paste. In the Philippines, fruit salads are popular party and holiday fare, usually made with buko, or young coconut, and condensed milk in addition to other canned or fresh fruit. Sicilian orange salad is a typical dish of Sicily (Italy) and Spain in which orange slices are dressed with olive oil, salt, and black pepper. Description: Mexico has a popular variation of the fruit salad called Bionico, which consists of various fruits drenched in a condensed milk and sour cream mix. Guacamole may also be considered a fruit salad, consisting predominantly of various fruits and fruit juices such as avocados, lemon and/or lime juice, tomatoes, chili peppers, and black peppercorns. There is also an extended variety of fruit salads in Moroccan cuisine, often as part of a kemia, a selection of appetizers or small dishes analogous to Spanish tapas or eastern Mediterranean mezze. A fruit salad ice cream is also commonly manufactured, with small pieces of real fruit embedded, flavored either with juices from concentrate, fruit extracts, or artificial flavorings. Fruit cocktail: Fruit cocktail is often sold canned and is a staple of cafeterias, but can also be made fresh. The use of the word "cocktail" in the name does not mean that it contains alcohol, but refers to the secondary definition: "an appetizer made by combining pieces of food, such as fruit or seafood". In the United States, the USDA stipulates that canned "fruit cocktail" must contain a certain percentage distribution of pears, grapes, cherries, peaches, and pineapples to be marketed as fruit cocktail.
It must contain fruits in the following ranges of percentages:
- 30% to 50% diced peaches, any yellow variety
- 25% to 45% diced pears, any variety
- 6% to 16% diced pineapple, any variety
- 6% to 20% whole grapes, any seedless variety
- 2% to 6% cherry halves, any light sweet or artificially colored red variety (like maraschino cherries)
Both William Vere Cruess of the University of California, Berkeley and Herbert Gray of the Barron-Gray Packing Company of San Jose, California have been credited with the invention of fruit cocktail. Barron–Gray was the first company to sell fruit cocktail commercially, beginning in 1930, and California Packing Corporation began selling it under its Del Monte brand a few years later. Canned fruit cocktail and canned fruit salad are similar, but fruit salad contains larger fruit pieces while fruit cocktail is diced. Commercially, the fruit used was healthy but cosmetically damaged, such as a peach or pear that was bruised on one side. The bruised parts would be cut away and discarded, and the rest would be diced into small pieces. In popular culture: "Fruit Salad" (also known as "Fruit Salad Yummy Yummy") is the name of a song by Australian children's band the Wiggles. "Fruit salad" is also a slang term for the medals on a soldier's uniform, e.g. "Look at the fruit salad on that colonel." The term refers to the bright colors of a high percentage of the ribbons that usually go with medals. "Fruit salad" is an alternative name for the party game Fruit Basket Turnover.
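As a concrete rendering of those USDA ranges, here is a small Python sketch that checks whether a given fruit mix falls within them; the range table is transcribed from the text, while the function name and example mix are invented for illustration.

```python
# USDA fruit-cocktail ranges from the text, as (min %, max %) per fruit.
RANGES = {
    "peaches":   (30, 50),
    "pears":     (25, 45),
    "pineapple": (6, 16),
    "grapes":    (6, 20),
    "cherries":  (2, 6),
}

def is_fruit_cocktail(mix):
    """mix: dict mapping fruit name to its percentage of the blend."""
    return all(lo <= mix.get(fruit, 0) <= hi
               for fruit, (lo, hi) in RANGES.items())

print(is_fruit_cocktail({"peaches": 40, "pears": 30, "pineapple": 12,
                         "grapes": 14, "cherries": 4}))   # True
```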
**Extreme Loading for Structures** Extreme Loading for Structures: Extreme Loading for Structures (ELS) is commercial structural-analysis software based on the applied element method (AEM) for the automatic tracking and propagation of cracks, separation of elements, element collision, and collapse of structures under extreme loads. AEM combines features of finite element method (FEM) and discrete element method (DEM) simulation with its own solver capabilities to deliver PC-based structural analysis. History:
- 2003 – Research and development related to the software begins with the formation of Applied Science International. The first release of ELS offers 2D analysis: structures can be modeled, loading scenarios applied, and results viewed.
- 2008 – Version 2.0 allows users to perform 3D analysis, though modeling is largely limited to 2D with restricted 3D functionality. The United States Department of Homeland Security assigns ELS Designation Status for anti-terrorism under the SAFETY Act.
- 2009 – ELS version 3.0 is released with complete 3D functionality.
Academic institutions: More than 20 universities and academic institutions are currently involved in research and development projects resulting in the creation of publications on topics related to the applied element method and Extreme Loading for Structures. Academic institutions working with ELS include:
**Meta** Meta: Meta (from the Greek μετά, meta, meaning "after" or "beyond") is a prefix meaning "more comprehensive" or "transcending". In modern nomenclature, meta- can also serve as a prefix meaning self-referential, as a field of study or endeavor (metatheory: theory about a theory; metamathematics: mathematical theories about mathematics; meta-axiomatics or meta-axiomaticity: axioms about axiomatic systems; metahumor: joking about the ways humor is expressed; etc.). Original Greek meaning: In Greek, the prefix meta- is generally less esoteric than in English; Greek meta- is equivalent to the Latin words post- or ad-. The use of the prefix in this sense occurs occasionally in scientific English terms derived from Greek. For example, the term Metatheria (the name for the clade of marsupial mammals) uses the prefix meta- in the sense that the Metatheria occur on the tree of life adjacent to the Theria (the placental mammals). Epistemology: In epistemology, and often in common use, the prefix meta- is used to mean about (its own category). For example, metadata is data about data (who has produced them, when, what format the data are in and so on). In a database, metadata is also data about data stored in a data dictionary, describing information (data) about database tables such as the table name, table owner, details about columns, etc. – essentially describing the table. In psychology, metamemory refers to an individual's knowledge about whether or not they would remember something if they concentrated on recalling it. The modern sense of "an X about X" has given rise to concepts like "meta-cognition" (cognition about cognition), "meta-emotion" (emotion about emotion), "meta-discussion" (discussion about discussion), "meta-joke" (joke about jokes), and "metaprogramming" (writing programs that write programs). In a rule-based system, a metarule is a rule governing the application of other rules. "Metagaming", accordingly, refers to games about games. However, it has a different meaning depending on the context. In role-playing games, this means that someone with a higher level of knowledge is playing; that is, that the player incorporates factors that are outside the actual framework of the game – the player has knowledge that was not acquired through experiencing the game, but through external sources. This type of metagaming is often frowned upon in many role-playing game communities because it impairs game balance and equality of opportunity. Metagaming can also refer to a game that is used to create or change the rules while playing a game. One can play this type of metagame and choose which rules apply during the game itself, potentially changing the level of difficulty. Such metagames include campaign role-playing games like Halo 3. Complex card or board games, e.g. poker or chess, are also often referred to as metagames. According to Nigel Howard, this type of metagame is defined as a decision-making process that is derived from the analysis of possible outcomes in relation to external variables that change a problem. Abstraction and self-reference: Any subject can be said to have a metatheory, a theoretical consideration of its properties – such as its foundations, methods, form, and utility – on a higher level of abstraction. In linguistics, grammar is considered to be a metalanguage: a language operating on a higher level to describe properties of the plain language, and not itself.
Etymology: The prefix comes from the Greek preposition and prefix meta- (μετα-), from μετά, which means "after", "beside", "with", "among" (with respect to the preposition, some of these meanings were distinguished by case marking). Other meanings include "beyond", "adjacent" and "self", and it is also used in the forms μετ- before vowels and μεθ- "meth-" before aspirated vowels. Etymology: The earliest form of the word "meta" is the Mycenaean Greek me-ta, written in Linear B syllabic script. The Greek preposition is cognate with the Old English preposition mid "with", still found as a prefix in midwife. Its use in English is the result of back-formation from the word "metaphysics". In origin Metaphysics was just the title of one of the principal works of Aristotle; it was so named (by Andronicus of Rhodes) because in the customary ordering of the works of Aristotle it was the book following Physics; it thus meant nothing more than "[the book that comes] after [the book entitled] Physics". However, even Latin writers misinterpreted this as meaning that metaphysics constituted "the science of what is beyond the physical". Nonetheless, Aristotle's Metaphysics enunciates considerations of a nature above physical reality, which one can examine through philosophical inquiry – for example, the existence of God. The use of the prefix was later extended to other contexts, based on the understanding of metaphysics as meaning "the science of what is beyond the physical". Early use in English: The Oxford English Dictionary cites uses of the meta- prefix as "beyond, about" (such as meta-economics and meta-philosophy) going back to 1917. However, these formations are parallel to the original "metaphysics" and "metaphysical", that is, as a prefix to general nouns (fields of study) or adjectives. Going by the OED citations, it began being used with specific nouns in connection with mathematical logic sometime before 1929. (In 1920 David Hilbert proposed a research project in what was called "metamathematics.") A notable early citation is W. V. O. Quine's 1937 use of the word "metatheorem", where meta- has the modern meaning of "an X about X". Douglas Hofstadter, in his 1979 book Gödel, Escher, Bach (and in the 1985 sequel, Metamagical Themas), popularized this meaning of the term. The book, which deals with self-reference and strange loops, and touches on Quine and his work, was influential in many computer-related subcultures and may be responsible for the popularity of the prefix, for its use as a solo term, and for the many recent coinages which use it. Hofstadter uses meta as a stand-alone word, as an adjective, and as a directional preposition ("going meta," a term he coins for the old rhetorical trick of taking a debate or analysis to another level of abstraction, as when somebody says "This debate isn't going anywhere"). This book may also be responsible for the association of "meta" with strange loops, as opposed to just abstraction. According to Hofstadter, it is about self-reference, which means a sentence, idea or formula refers to itself. The Merriam-Webster Dictionary describes it as "showing or suggesting an explicit awareness of itself or oneself as a member of its category: cleverly self-referential". The sentence "This sentence contains thirty-six letters," and the sentence which embeds it, are examples of "metasentences" referencing themselves in this way.
As maintained in the book Gödel, Escher, Bach, a strange loop arises when different logical statements or theories are put together in contradiction, distorting meaning and generating logical paradoxes. One example is the liar paradox, a paradox in philosophy or logic that arises when a sentence claims its own falsehood (or untruth); for instance: "This sentence is not true." Until the beginning of the 20th century, this kind of paradox was a considerable problem for a philosophical theory of truth. Alfred Tarski resolved this difficulty by proving that such paradoxes do not arise under a consistent separation of object language and metalanguage. "For every formalized language, a formally correct and factually applicable definition of the true statement can be constructed in the metalanguage with the sole help of expressions of a general-logical character, expressions of the language itself and of terms from the morphology of the language, but on the condition that the metalanguage is of a higher order than the language that is the subject of the investigation." Meta in gaming: Metagaming is a general term for playing a game in a way that exploits or subverts its rules.
**Oxide dispersion-strengthened alloy** Oxide dispersion-strengthened alloy: Oxide dispersion-strengthened (ODS) alloys consist of a metal matrix with small oxide particles dispersed within it. They have high heat resistance, strength, and ductility. Nickel-based alloys are the most common, but the class also includes iron-aluminum alloys. Applications include high-temperature turbine blades and heat-exchanger tubing, while ODS steels are used in nuclear applications. ODS materials are used on spacecraft to protect the vehicle, especially during re-entry. Noble-metal ODS alloys, for example platinum-based alloys, are used in glass production. Oxide dispersion-strengthened alloy: During re-entry at hypersonic speeds, the properties of gases change dramatically. Shock waves are created that can cause serious damage to any structure. At these speeds and temperatures, oxygen becomes aggressive. Mechanism: Oxide dispersion strengthening is based on the incoherency of the oxide particles within the lattice of the material. Coherent particles have a continuous lattice plane from the matrix to the particles, whereas incoherent particles do not have this continuity, and therefore both lattice planes end at the interface. This mismatch in interfaces results in a high interfacial energy, which impedes dislocation motion. The oxide particles are stable in the matrix, which helps prevent creep. Particle stability implies little dimensional change or embrittlement, few effects on properties, stable particle spacing, and general resistance to change at high temperatures. Since the oxide particles are incoherent, dislocations can only overcome the particles by climb. If the particles were instead semi-coherent or coherent with the lattice, dislocations could simply cut through them by a more favourable, lower-energy process called dislocation glide, or bow between them by Orowan bowing, both of which are athermal mechanisms. Dislocation climb is a diffusional process, which is less energetically favourable and mostly occurs at higher temperatures that provide enough energy to advance via the addition and removal of atoms. Because the particles are incoherent, glide mechanisms alone are not enough, and the more energetically demanding climb process is dominant, meaning that dislocations are stopped more effectively. Climb can occur either at the particle–dislocation interface (local climb) or by overcoming multiple particles at once (general climb). In local climb, the part of the dislocation that is between two particles stays in the glide plane while the rest of the dislocation climbs along the surface of the particle. In general climb, the dislocation comes entirely out of the glide plane. General climb requires less energy because the mechanism decreases the dislocation line length, which reduces the elastic strain energy, and it is therefore the common climb mechanism. For γ' volume fractions of 0.4 to 0.6 in nickel-based alloys, the threshold stress for local climb is only about 1.25 to 1.40 times higher than that for general climb. Dislocations are not limited to either all-local or all-general climb, as the path that requires less energy is taken. Cooperative climb is an example of a more nuanced mechanism, where a dislocation travels around a group of particles rather than climbing past each particle individually.
McLean stated that the dislocation is most relaxed when climbing over multiple particles because it skips some of the abrupt transitions between segments lying in the glide plane and segments travelling along the particle surface. The presence of incoherent particles introduces a threshold stress ($\sigma_t$), since an additional stress must be applied for the dislocations to move past the oxides by climb. After overcoming a particle by climb, dislocations can remain pinned at the particle–matrix interface by an attractive phenomenon called interfacial pinning; an additional threshold stress is then required to free a dislocation from this pinning, and it must be overcome for plastic deformation to occur. This detachment phenomenon is a result of the interaction between the particle and the dislocation, in which the total elastic strain energy is reduced. Schroder and Arzt explain that the additional stress required is due to the relaxation caused by the reduction in the stress field as the dislocation climbs and accommodates the shear traction. The following equations represent the strain rate and the threshold stress that result from the introduction of oxides. Mechanism: Strain rate: $\dot{\epsilon} = A' \left( \frac{\sigma - \sigma_t}{\mu} \right)^n$ Threshold shear stress: $\tau_{th} = \alpha_2 \frac{G b}{l}$ Synthesis: Ball-milling The creep properties of ODS steels are dependent on the characteristics of the oxide particles in the metal matrix, specifically their ability to prevent dislocation motion as well as the size and distribution of the particles. Hoelzer and coworkers showed that an alloy containing a homogeneous dispersion of 1-5 nm Y2Ti2O7 nanoclusters has superior creep properties to an alloy with a heterogeneous dispersion of 5-20 nm nanoclusters of the same composition. Synthesis: ODS steels are commonly produced by ball-milling an oxide of interest (e.g. Y2O3, Al2O3) with pre-alloyed metal powders, followed by compression and sintering. It is believed that the oxides enter into solid solution with the metal during ball-milling and subsequently precipitate during the thermal treatment. This process seems simple, but many parameters need to be carefully controlled to produce a successful alloy. Leseigneur and coworkers carefully controlled some of these parameters and achieved more consistent and better microstructures. In this two-step method the oxide is ball-milled for longer periods to ensure a homogeneous solid solution of the oxide. The powder is annealed at higher temperatures to begin a controlled nucleation of the oxide clusters. Finally the powder is again compressed and sintered to yield the final material. Synthesis: Additive manufacturing NASA used additive manufacturing to synthesize an alloy they termed GRX-810, which survived temperatures over 1,090 °C (1,990 °F). The alloy also featured improved strength, malleability, and durability. The printer dispersed oxide particles uniformly throughout the metal matrix. The alloy was identified using 30 simulations of thermodynamic modeling. Advantages and disadvantages: Advantages:
- Can be machined, brazed, formed, and cut with available processes.
- Develops a protective oxide layer that is self-healing; this layer is stable and has a high emission coefficient.
- Allows the design of thin-walled (sandwich) structures.
- Resistant to harsh weather conditions in the troposphere.
- Low maintenance cost.
- Low material cost.
Disadvantages:
- Higher expansion coefficient than other materials, causing higher thermal stresses.
- Higher density.
- Lower maximum allowable temperature.
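To show how the threshold stress enters the creep law above, here is a small Python sketch evaluating the strain-rate expression; the symbol interpretations (A' a material constant, μ the shear modulus, n the stress exponent) and all numerical values are illustrative assumptions, not data from the text.

```python
def creep_rate(sigma, sigma_t, mu, A_prime, n):
    """Strain rate from the threshold-stress creep law above.
    Below the threshold stress sigma_t, dislocations cannot climb
    past the dispersoids, so the creep rate is taken as zero."""
    if sigma <= sigma_t:
        return 0.0
    return A_prime * ((sigma - sigma_t) / mu) ** n

# Illustrative numbers only (MPa for stresses, arbitrary A').
for sigma in (40, 80, 160):
    print(sigma, creep_rate(sigma, sigma_t=50, mu=80_000, A_prime=1e12, n=5))
```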
**Erbium tetraboride** Erbium tetraboride: Erbium tetraboride is a boride of the lanthanide metal erbium. It is hard and has a high melting point. Industrial applications of erbium boride include use in semiconductors, the blades of gas turbines, and the nozzles of rocket engines.
**AppKit** AppKit: AppKit (formally Application Kit) is a graphical user interface toolkit. It initially served as the UI framework for NeXTSTEP. Along with Foundation and Display PostScript, it became one of the core parts of the OpenStep specification of APIs. Later, AppKit and Foundation became part of Cocoa, the Objective-C API framework of macOS. GNUstep, GNU's implementation of the OpenStep/Cocoa API, also contains an implementation of the AppKit API. AppKit: AppKit comprises a collection of Objective-C classes and protocols that can be used to build an application in OpenStep/Cocoa. These classes can also be used in Swift through its Objective-C bridge. Xcode has built-in functionality for developing a Cocoa application using AppKit, including the ability to visually design user interfaces with Interface Builder. AppKit relies heavily on patterns like reference types, delegation, notifications, target–action, and model–view–controller. As a sign of the NeXTSTEP heritage, AppKit's classes and protocols still use the "NS" prefix. AppKit: Most of the applications bundled with macOS—for example, the Finder, TextEdit, Calendar, and Preview—use AppKit to provide their user interface. macOS, iOS, iPadOS, and tvOS also support other UI frameworks, including UIKit, which is derived from AppKit and uses many similar structures, and SwiftUI, a Swift-only declarative UI framework. Prior to macOS Catalina, macOS also supported Carbon, a UI framework derived from the Macintosh Toolbox. Classes: Of the more than 170 classes included in the Application Kit, the following form the core:
- NSApplication: a singleton object that represents the application as a whole and tracks its windows and other global state
- NSWindow: an object representing a window on screen; it holds a hierarchy of views
- NSView: an object representing a rectangular region; it may draw UI content of its own (using drawing engines like Quartz, Core Animation, and Metal), and it may also hold a subtree of other views
- NSResponder: an object that can respond to events during the application's lifetime; NSApplication, NSWindow, and NSView are all subclasses of NSResponder
- NSDocument: an object representing a document saved on disk that manages its display in a window
- NSController: an abstract class implementing some functionality for a controller, mediating between views and model objects
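AppKit itself is an Objective-C/Swift framework, but the patterns it relies on can be illustrated generically. The Python sketch below mimics the target–action and delegation patterns described above; all class and method names are invented for illustration and are not AppKit APIs.

```python
# Generic illustration of target-action and delegation, two of the
# patterns AppKit builds on. Names here are invented, not AppKit APIs.

class Button:
    """A control that forwards a user event to a target via a named action."""
    def __init__(self, target, action):
        self.target, self.action = target, action

    def click(self):                       # simulate a user click
        getattr(self.target, self.action)(self)

class Controller:
    def save_document(self, sender):
        print("controller received action from", type(sender).__name__)

class WindowDelegate:
    """A delegate is consulted before the object acts on its own."""
    def window_should_close(self, window):
        return True                        # veto point for the close

class Window:
    def __init__(self, delegate):
        self.delegate = delegate

    def close(self):
        if self.delegate.window_should_close(self):
            print("window closed")

Button(target=Controller(), action="save_document").click()
Window(WindowDelegate()).close()
```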
**Virtual metrology** Virtual metrology: In semiconductor manufacturing, virtual metrology refers to methods for predicting the properties of a wafer from machine parameters and sensor data of the production equipment, without performing the (costly) physical measurement of the wafer properties. Statistical methods such as classification and regression are used to perform this task. Depending on the accuracy of this virtual data, it can be used in modelling for other purposes, such as yield prediction, preventive analysis, etc. Virtual data is helpful for modelling techniques that are adversely affected by missing data. Another option for handling missing data is to use imputation techniques on the dataset, but virtual metrology can, in many cases, be a more accurate method. Virtual metrology: Examples of virtual metrology include: the prediction of the silicon nitride (Si3N4) layer thickness in the chemical vapor deposition (CVD) process, using multivariate regression methods; the prediction of critical dimension in photolithography, using multi-level and regularization approaches; and the prediction of layer width in etching.
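As a minimal sketch of the statistical side, the following Python example fits a regression model that predicts a layer thickness from equipment sensor readings; the synthetic data, feature interpretation, and model choice are all illustrative assumptions, not an actual fab dataset.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for equipment data: each row is one wafer run,
# columns are sensor readings (e.g. temperature, pressure, RF power).
X = rng.normal(size=(500, 3))
true_coef = np.array([5.0, -2.0, 1.0])
y = 100 + X @ true_coef + rng.normal(scale=0.5, size=500)  # thickness (nm)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)

# "Virtual" measurements: predicted thickness without a metrology tool.
print("R^2 on held-out runs:", round(model.score(X_test, y_test), 3))
```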
**Milk borne diseases** Milk borne diseases: Milk borne diseases are any diseases caused by consumption of milk or dairy products infected or contaminated by pathogens. Milk borne diseases are one of the recurrent foodborne illnesses—between 1993 and 2012 over 120 outbreaks related to raw milk were recorded in the US, with approximately 1,900 illnesses and 140 hospitalisations. With rich nutrients essential for growth and development, such as proteins, lipids, carbohydrates, and vitamins, milk nourishes pathogenic microorganisms well, and they are capable of rapid cell division and extensive population growth in this favourable environment. Common pathogens include bacteria, viruses, fungi, and parasites, and among them bacterial infection is the leading cause of milk borne diseases. To refine product quality, pasteurisation was invented in the 19th century to kill pathogens. Despite the popularisation of pasteurisation in modern times, the risk of contamination cannot be eliminated. Infection can turn milk into an optimal vehicle of disease transmission through contamination in dairy farms, cross-contamination in milk processing plants, and post-pasteurisation recontamination. Symptoms of milk borne diseases depend on the amount of pathogen ingested, the pathogen's incubation time, and individual variation such as the patient's susceptibility, age, and pre-existing medical conditions. Generally, milk borne diseases are not life-threatening, and taking medications like antibiotics and over-the-counter drugs helps relieve symptoms. Typical clinical signs are fever and mild gastrointestinal disturbance including diarrhoea, nausea, vomiting and abdominal pain. Nevertheless, severe complications can be fatal and are often observed in young children, aged individuals and immunocompromised patients. Common routes of infection and contamination: There are three major routes of infection and contamination of milk: Contamination in dairy farms – Milk-producing livestock can be infected by ingesting contaminated water and fodder, and bacteria and/or viruses are then excreted by the mammary glands into the milk. Poor hygiene in dairy farms can result in either contaminating raw milk during milking or contaminating the bulk tank milk during storage. Without pasteurisation, pathogens are retained in milk, posing a high infectious risk. In some dairy farms, in particular family farms, farm owners and farmers have the tradition of consuming raw milk instead of pasteurised milk. However, after the popularisation of pasteurisation, most dairy products available in the market are pasteurised to minimise the risk of contamination. Common routes of infection and contamination: Cross-contamination in milk processing plants – To derive a diversified variety of dairy products from milk, raw milk is sent to factories, and those with poor standards of hygiene pose a danger to the safety of dairy products. Pathogens from machines, packaging, and any materials found in the manufacturing site can enter and contaminate milk. Common routes of infection and contamination: Post-pasteurisation recontamination – Some thermophilic bacteria, or bacteria with high resistance to high temperatures, are capable of surviving pasteurisation and hence can recontaminate the pasteurised milk or dairy products. For instance, L. monocytogenes can survive high temperatures and grow extensively in the post-pasteurisation process. Moreover, certain bacterial species can secrete toxins with high thermostability which are harmful to the human body.
Common bacterial pathogens: Salmonella Salmonella can survive between 5.5 °C and 45 °C, are highly sensitive to acid, and are more commonly found in unprocessed milk. Owing to this sensitivity to pH, Salmonella have different survival rates in different dairy products, like cheese, under different storage temperatures. In ripening Cheddar cheese they can survive for several months at 13 °C, but most fail to survive for more than 36 days in Domiati cheese. Most Salmonella strains are pathogenic, especially S. enterica subsp. enterica, which accounts for 99% of human infections and can bring about Salmonellosis. Salmonellosis is induced by infection with Salmonella, with a swift onset of disease 12 to 36 hours after consumption of contaminants, and can be clinically classified into three types, namely enteric fever (also typhoid fever), gastroenteritis and septicemia. Enteric fever usually has 7 to 14 days of incubation with mild symptoms like malaise and headache. In rare cases, the body temperature of patients can surge up to 40 °C, rendering them delirious. Gastroenteritis has a much shorter incubation period than enteric fever (usually 3 to 72 hours) and shows common gastrointestinal disturbance symptoms, characterised by watery faeces with an unpleasant and strong odour as well as blood and mucus. Septicemia can lead to serious complications in various organs, in particular arthritis in joints. A recent large-scale Salmonellosis outbreak was reported in Iwamizawa, Japan in 2011, caused by contamination in school meal processing facilities and affecting over 1,000 students and school staff at nine local Japanese schools. The majority of affected individuals had acute diarrhoea, and 13 of them were hospitalised. Common bacterial pathogens: Campylobacter The preponderance of reported milk borne diseases arises from Campylobacter, most notably the strains C. jejuni and C. coli. Campylobacter was implicated in more than 80% of reported American disease outbreaks related to raw milk from 2007 to 2012. Aside from the US, the UK also recorded around 59,000 confirmed cases of Campylobacteriosis triggered by raw milk consumption in 2016. As thermophilic strains, C. jejuni and C. coli can grow between 37 °C and 42 °C, and they have a high biological activity rate inside host animals. C. jejuni, the predominant pathogenic strain, is found to have noteworthy genetic variation that allows it to develop diversified phenotypes, for example high resistance to temperature fluctuations during pasteurisation and to anti-bacterial agents in animal hosts, improving its adaptability to changing environments in dairy products. Common bacterial pathogens: Campylobacteriosis has a relatively slow onset of 2 to 5 days subsequent to infection, with a duration of symptoms of 3 to 6 days. Prevalent symptoms of Campylobacteriosis are fever and gastric intolerance with bloody stool. Vulnerable patients may suffer from autoimmune complications and sequelae with more far-reaching influences on their health conditions. Research found that Campylobacteriosis can activate immune cells and spur autoimmune responses against the patients' own nerve cells to induce Guillain–Barré syndrome (GBS); affected patients experience muscle weakness, pain in the limbs and even paralysis. Similar to Salmonellosis, Campylobacteriosis can also overstimulate the immune system and prompt reactive arthritis, leading to inflammation in joints.
Therefore, patients with an impaired immune system or with immune function suppressed by chemotherapy are more prone to the above lethal complications. Common bacterial pathogens: Escherichia coli (E. coli) Most E. coli barely pose health problems in the human body, and only certain strains of E. coli are pathogenic to humans. Pathogenic E. coli is highly prevalent among milk-producing domestic animals, including cattle and sheep, and the bacteria can be harboured in their faeces. Therefore, faecal contamination of udders is one of the risk factors allowing pathogens to enter the raw milk. These strains of E. coli are human-pathogenic verotoxigenic E. coli (VTEC), also noted as Shiga-toxin-producing E. coli (STEC), which are the most commonly encountered pathogens in raw-milk-related outbreaks; the estimated frequency of outbreaks caused by VTEC infection is 33%. Most of the outbreaks were found to be caused by processed milk, indicating the potential risk of post-pasteurisation contamination and the underlying shortcoming of pasteurisation in the elimination of pathogens. The common feature of VTEC is the ability to produce a wide range of toxins highly toxic to Vero cells, collectively known as Verocytotoxins (VT). The common clinical onset of VTEC infection is mild diarrhoea. VTEC infection can be life-threatening given its critical complications, including hemorrhagic colitis (HC), haemolytic-uremic syndrome (HUS), and thrombotic thrombocytopenic purpura (TTP), which can be complicated by kidney disease. In the worst cases, the above complications can be fatal. Fragmentation of red blood cells, termed schistocytes, is a common feature observed in HUS. Particularly, HUS is more common in infants, children, and the elderly, while TTP is frequently observed among adults. Notably, some patients who recover from HUS later develop strokes as well as chronic renal failure. Common bacterial pathogens: Listeria Listeria monocytogenes is a species of the genus Listeria; it is a food-borne pathogen and can cause a grave and often fatal illness termed listeriosis. Most of the listeriosis-related outbreaks in the West have been found to be associated with dairy food such as unprocessed milk. Many animal species can be infected with Listeria, but listeriosis is rarely observed clinically in animals. Listeria spp. can be shed in the excreta of carriers, and milk contamination is mainly due to faecal contamination during the milking process. Post-pasteurisation contamination involving the food processing environment is also possible, as L. monocytogenes can survive in diverse environments, leading to the formation of biofilms in areas difficult to access. This is why L. monocytogenes is usually difficult to eliminate. L. monocytogenes infection is implicated in both sporadic episodes and large outbreaks of human illness around the world. Deaths related to milk contamination are frequently caused by listeriosis, which has the highest fatality rate among all milk-borne diseases. The annual incidence of listeriosis in most countries within the European Union is approximately two to ten recorded cases per million population. Moreover, in terms of food-borne illnesses, L.
monocytogenes infections have the highest recorded hospitalisation rate (91%) among food-borne illnesses in the US. Children, pregnant women, the elderly, and immunocompromised individuals in the exposed population have a higher risk of suffering from listeriosis. Typical symptoms presented clinically are septicemia, meningitis, or meningoencephalitis. Particularly, the maternal–foetal interface in pregnant women (also called the decidua), which has natural localised immunosuppression, favours the growth of L. monocytogenes, and this increases the risk of abortion. In addition, febrile gastroenteritis was recognised as a milder form of listeriosis in the 1990s. Milk safety and prevention: Milk safety should be closely monitored. Nowadays, safety, quality, and production conditions are standardised by different legal regulations around the world. Also, launching hazard analysis and critical control point (HACCP) programs helps consolidate the foundation of many preventive measures to curb the incidence of milk-borne diseases. The concept of "hazard" stated by HACCP refers to "a biological, chemical or physical agent in food with the potential to cause an adverse health effect". With this concept, hazards can be systematically assessed during food production and distribution, and measures for hazard control are also defined. Milk safety and prevention: Hygiene in milk production Milk should be produced from physically healthy livestock in a standardised environment. Several points are required for hygienic milk production:
- Milking is carried out in a well-ventilated barn with adequate lighting.
- After use, milk vessels and equipment should be cleaned, sanitised, and dried in the sun on a drying rack.
- The milker should be healthy, and only healthy cows should be milked.
- Pasteurisation of the milk: milk is heated to a high temperature (72 °C for 15 seconds) to kill pathogens, followed by rapid cooling. The milk should then be tested to confirm that the number of pathogens is controlled to an acceptable level.
Hygiene in milk transportation, handling, and storage:
- Hygiene in milk transportation: an ice chest is needed for transporting fresh milk, to keep the milk temperature at 4 °C or lower.
- Hygiene in milk handling and storage: before filling the milk jars, hand washing is required to prevent contamination. No. 1 plastic (milk jugs) should be used to store and refrigerate milk. One inch of headspace should be left in the milk jugs to allow for expansion. At home, milk should be stored in the coldest area of the refrigerator; only bottles in current use should be stored on the door shelf of the fridge. Normally, milk can be stored for 7 to 14 days with care under a constant, optimal temperature between 35 °F (2 °C) and 37 °F (3 °C).
**Quantized state systems method** Quantized state systems method: The quantized state systems (QSS) methods are a family of numerical integration solvers based on the idea of state quantization, dual to the traditional idea of time discretization. Unlike traditional numerical solution methods, which approach the problem by discretizing time and solving for the next (real-valued) state at each successive time step, QSS methods keep time as a continuous entity and instead quantize the system's state, solving for the time at which the state deviates from its quantized value by one quantum. They can also offer several advantages over classical algorithms. Quantized state systems method: They inherently allow for modeling discontinuities in the system due to their discrete-event, asynchronous nature. They also allow for explicit root-finding and detection of zero-crossings using explicit algorithms, avoiding the need for iteration, a fact which is especially important for stiff systems, where traditional time-stepping methods incur a heavy computational penalty because they must solve implicitly for the next system state. Finally, QSS methods satisfy remarkable global stability and error bounds, described below, which are not satisfied by classical solution techniques. Quantized state systems method: By their nature, QSS methods are therefore neatly modeled by the DEVS formalism, a discrete-event model of computation, in contrast with traditional methods, which form discrete-time models of the continuous-time system. They have therefore been implemented in PowerDEVS, a simulation engine for such discrete-event systems. Theoretical properties: In 2001, Ernesto Kofman proved a remarkable property of the quantized-state system simulation method: namely, that when the technique is used to solve a stable linear time-invariant (LTI) system, the global error is bounded by a constant that is proportional to the quantum, but (crucially) independent of the duration of the simulation. More specifically, for a stable multidimensional LTI system with state-transition matrix $A$ and input matrix $B$, it was shown in [CK06] that the absolute error vector $\vec{e}(t)$ is bounded above by $|\vec{e}(t)| \leq |V| \, |\Re(\Lambda)^{-1} \Lambda| \, |V^{-1}| \, \Delta\vec{Q} + |V| \, |\Re(\Lambda)^{-1} V^{-1} B| \, \Delta\vec{u}$, where $\Delta\vec{Q}$ is the vector of state quanta, $\Delta\vec{u}$ is the vector of quanta adopted in the input signals, $V \Lambda V^{-1} = A$ is the eigendecomposition or Jordan canonical form of $A$, and $|\cdot|$ denotes the element-wise absolute value operator (not to be confused with the determinant or norm). Theoretical properties: It is worth noticing that this remarkable error bound comes at a price: the global error for a stable LTI system is also, in a sense, bounded below by the quantum itself, at least for the first-order QSS1 method. This is because, unless the approximation happens to coincide exactly with the correct value (an event which will almost surely not happen), it will simply continue oscillating around the equilibrium, as the state is always (by definition) guaranteed to change by exactly one quantum outside of the equilibrium. Avoiding this condition would require finding a reliable technique for dynamically lowering the quantum in a manner analogous to adaptive stepsize methods in traditional discrete-time simulation algorithms. First-order QSS method – QSS1: Let an initial value problem be specified as follows: $\dot{x}(t) = f(x(t), t), \quad x(t_0) = x_0$. The first-order QSS method, known as QSS1, approximates the above system by $\dot{x}(t) = f(q(t), t), \quad q(t_0) = x_0$,
First-order QSS method – QSS1: where $x$ and $q$ are related by a hysteretic quantization function: $q(t) = \begin{cases} x(t) & \text{if } |x(t) - q(t^-)| \geq \Delta Q, \\ q(t^-) & \text{otherwise,} \end{cases}$ where $\Delta Q$ is called a quantum. Notice that this quantization function is hysteretic because it has memory: not only is its output a function of the current state $x(t)$, but it also depends on its old value, $q(t^-)$. This formulation therefore approximates the state by a piecewise constant function, $q(t)$, that updates its value as soon as the state deviates from this approximation by one quantum. First-order QSS method – QSS1: The multidimensional formulation of this system is almost the same as the single-dimensional formulation above: the $k$-th quantized state $q_k(t)$ is a function of its corresponding state, $x_k(t)$, and the time derivative of the state vector $\vec{x}(t)$ is a function of the entire quantized state vector, $\vec{q}(t)$: $\dot{\vec{x}}(t) = f(\vec{q}(t), t)$. High-order QSS methods – QSS2 and QSS3: The second-order QSS method, QSS2, follows the same principle as QSS1, except that it defines $q(t)$ as a piecewise linear approximation of the trajectory $x(t)$ that updates its trajectory as soon as the two differ from each other by one quantum. The pattern continues for higher-order approximations, which define the quantized state $q(t)$ as successively higher-order polynomial approximations of the system's state. High-order QSS methods – QSS2 and QSS3: It is important to note that, while in principle a QSS method of arbitrary order can be used to model a continuous-time system, it is seldom desirable to use methods of order higher than four, as the Abel–Ruffini theorem implies that the time of the next quantization, $t$, cannot (in general) be explicitly solved for algebraically when the polynomial approximation is of degree greater than four, and hence must be approximated iteratively using a root-finding algorithm. In practice, QSS2 or QSS3 proves sufficient for many problems, and the use of higher-order methods results in little, if any, additional benefit. Software implementation: The QSS methods can be implemented as a discrete-event system and simulated in any DEVS simulator. QSS methods constitute the main numerical solver for the PowerDEVS [BK011] software. They have also been implemented as a stand-alone version.
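To make the event-driven character of QSS1 concrete, here is a minimal Python sketch that integrates a scalar autonomous system $\dot{x} = f(q)$ by stepping from one quantization event to the next; the function names and the test equation $\dot{x} = -q$ are illustrative assumptions, and hysteresis is handled trivially by resetting q to x at every event.

```python
def qss1(f, x0, quantum, t_end):
    """Integrate x' = f(q) with first-order state quantization (QSS1).
    Between events the slope f(q) is constant, so the time of the next
    event, when |x - q| reaches one quantum, is solved for explicitly."""
    t, x, q = 0.0, x0, x0              # quantized state q starts at x0
    times, states = [t], [x]
    while t < t_end:
        dx = f(q)                      # slope is constant until q changes
        if dx == 0:
            break                      # equilibrium: no further events
        dt = quantum / abs(dx)         # time until |x - q| = quantum
        t += dt
        x += dx * dt                   # advance the state linearly
        q = x                          # re-quantize at the new state value
        times.append(t)
        states.append(x)
    return times, states

# Illustrative run: exponential decay x' = -q, one event per 0.01 quantum.
times, states = qss1(lambda q: -q, x0=1.0, quantum=0.01, t_end=5.0)
print(f"{len(times)} events, final state ~ {states[-1]:.3f}")
```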
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Physalaemin** Physalaemin: Physalaemin is a tachykinin peptide obtained from the Physalaemus frog, closely related to substance P. Its structure was first elucidated in 1964. Like all tachykinins, physalaemin is a sialagogue (it increases salivation) and a potent vasodilator with hypotensive effects. Structure: Physalaemin (PHY) is known to take on both a linear and a helical three-dimensional structure. Grace et al. (2010) have shown that in aqueous environments PHY preferentially takes on the linear conformation, whereas in an environment that simulates a cellular membrane PHY takes on a helical conformation from the Pro4 residue to the C-terminus. This helical conformation is essential for the binding of PHY to neurokinin-1 (NK1) receptors. Consensus sequences between substance P (a mammalian tachykinin and agonist of NK1) and PHY have been used to confirm that the helical conformation is necessary for PHY to bind to NK1. Use in research: Not only is PHY closely related to substance P (SP), but it also has a higher affinity than SP for the mammalian neurokinin receptors that SP can bind to. Researchers can make use of this behavior of PHY to study smooth muscle, a tissue where NK1 can be found. Shiina et al. (2010) used PHY to show that tachykinins as a whole can cause longitudinal contraction of esophageal smooth muscle. Singh and Maji made use of PHY's similarity to SP along with its sequence similarity to amyloid β-peptide 25-35 [Aβ(25-35)]. Despite its sequence similarity to SP, Singh and Maji showed that PHY has distinct amyloid-forming capabilities. Under artificially elevated concentrations of 2,2,2-trifluoroethanol (TFE) and a short incubation time, PHY was able to form amyloid fibrils. Fibrils originating from tachykinins like PHY were also shown to reduce the neurotoxicity of other amyloid fibers associated with amyloid-induced diseases such as Alzheimer's disease.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**AC 20-115** AC 20-115: The Advisory Circular AC 20-115( ), Airborne Software Development Assurance Using EUROCAE ED-12( ) and RTCA DO-178( ) (previously Airborne Software Assurance), identifies the RTCA-published standard DO-178 as defining a suitable means for demonstrating compliance for the use of software within aircraft systems. The present revision D of the circular identifies ED-12/DO-178 revision C as the active revision of that standard and particularly acknowledges the synchronization of ED-12 and DO-178 at that revision. AC 20-115: This Advisory Circular calls attention to ED-12C/DO-178C as "an acceptable means, but not the only means," to secure FAA approval of software. The earliest revisions of the Advisory Circular were brief, serving little more than to call attention to the active DO-178 revision. Revisions C and D are considerably longer, giving guidance on modifying and re-using software previously approved using DO-178, DO-178A, or DO-178B (preceding revisions of the DO-178 standard). Additionally, the expanded AC provides guidance for Field Loadable Software and User Modifiable Software within aircraft software. The transition of legacy tool qualification from DO-178B to DO-330 is also discussed, comparing the ED-12B/DO-178B tool qualification types with the ED-12C/DO-178C and ED-215/DO-330 tool qualification levels.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DIFOT** DIFOT: DIFOT (delivery in full, on time) or OTIF (on-time and in-full [delivery]) is a measurement of logistics or delivery performance within a supply chain. Usually expressed as a percentage, it measures whether the supply chain was able to deliver: the expected product (reference and quality), in the quantity ordered by the customer, at the place agreed by the customer, and at the time expected by the customer (in many cases, with a tolerance defined in conjunction with the customer). Function: OTIF measures how often the customer gets what they want at the time they want it. Some consider it superior to other delivery performance indicators, such as shipped-on-time (SOT) and on-time performance (OTP), because it looks at deliveries from the point of view of the customer. This key performance indicator (KPI) has the advantage of measuring the performance of the whole logistics organization in meeting customer service expectations. To reach a high OTIF level, all the functions of the supply chain (order taking, procurement, suppliers, warehouses, transport, and so on) must work at their best level. Calculation: Generally OTIF is calculated by taking into account the number of deliveries: OTIF (%) = (number of OTIF deliveries ÷ total number of deliveries) × 100 (a short calculation sketch follows the article). But it can also, depending on the organization, be calculated from the number of orders or the number of order lines. Some organizations calculate OTIF as the percentage of the total order quantity that has been on time. This goes against the principle of OTIF, as the in-full component of OTIF has not been met. Calculation: Requirements for the OTIF measurement are: a delivery date (even an hour, for some organizations) stated on the customer order or specified by the customer; measurement of the date or hour of delivery, archived in the system; and a record of the reasons why an order was not OTIF. If the orders are split at the customer's request, then each delivery line is considered. Calculation: Companies which have set up a measure of OTIF are unanimous in recognising its value. They cite, among other positive aspects, the increase in operating profit due to the reduction of operating expenses (through reduced non-quality costs, better inventory control, better customer order taking, and higher reliability in storage and transport) and the increase of sales (due to better product availability). Calculation: Typically, leading practice in the UK retail sector identified by researchers Janet Godsell and Remko van Hoek would demand an OTIF in excess of 97 per cent measured at a stock keeping unit (SKU) level. However, Godsell's research also found that instances where an OTIF target was met when measured against the promised delivery date stated by the supplier to the customer (the "promise date") would often fail to meet the same performance measure when the delivery date requested by the customer (the "request date") was used as the target. The OTIF notion was extended to DIFOTAI (delivery in full, on time, and accurately invoiced), which also takes into account the quality of the invoicing.
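Below is a minimal sketch of the OTIF calculation described above, assuming one record per delivery; the field layout and the tolerance handling are illustrative, not a standard schema.

```python
# Toy OTIF computation: a delivery counts only if it is both on time
# (within an agreed tolerance) and in full (quantity not short).

from datetime import date

deliveries = [
    # (promised date, actual date, qty ordered, qty delivered)
    (date(2024, 3, 1), date(2024, 3, 1), 100, 100),  # on time, in full
    (date(2024, 3, 2), date(2024, 3, 4), 100, 100),  # late
    (date(2024, 3, 3), date(2024, 3, 3), 100,  90),  # short
]

def otif(rows, tolerance_days=0):
    """Percentage of deliveries that were both on time and in full."""
    ok = sum(1 for due, actual, ordered, delivered in rows
             if (actual - due).days <= tolerance_days
             and delivered >= ordered)
    return 100.0 * ok / len(rows)

print(f"OTIF = {otif(deliveries):.1f}%")  # 1 of 3 deliveries -> 33.3%
```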
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Common subexpression elimination** Common subexpression elimination: In compiler theory, common subexpression elimination (CSE) is a compiler optimization that searches for instances of identical expressions (i.e., they all evaluate to the same value), and analyzes whether it is worthwhile replacing them with a single variable holding the computed value. Example: In the following code: a = b * c + g; d = b * c * e; it may be worth transforming the code to: tmp = b * c; a = tmp + g; d = tmp * e; if the cost of storing and retrieving tmp is less than the cost of calculating b * c an extra time. Principle: The possibility of performing CSE is based on available expression analysis (a data flow analysis). An expression b*c is available at a point p in a program if: every path from the initial node to p evaluates b*c before reaching p, and there are no assignments to b or c after the evaluation but before p. The cost/benefit analysis performed by an optimizer will calculate whether the cost of the store to tmp is less than the cost of the multiplication; in practice other factors such as which values are held in which registers are also significant. Principle: Compiler writers distinguish two kinds of CSE: local common subexpression elimination works within a single basic block; global common subexpression elimination works on an entire procedure. Both kinds rely on data flow analysis of which expressions are available at which points in a program. Benefits: The benefits of performing CSE are great enough that it is a commonly used optimization. Benefits: In simple cases like in the example above, programmers may manually eliminate the duplicate expressions while writing the code. The greatest source of CSEs are intermediate code sequences generated by the compiler, such as for array indexing calculations, where it is not possible for the developer to manually intervene. In some cases language features may create many duplicate expressions. For instance, C macros, where macro expansions may result in common subexpressions not apparent in the original source code. Benefits: Compilers need to be judicious about the number of temporaries created to hold values. An excessive number of temporary values creates register pressure, possibly resulting in spilling registers to memory, which may take longer than simply recomputing an arithmetic result when it is needed.
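To make the availability bookkeeping concrete, here is a toy local CSE pass over a tiny three-address IR; this is a sketch only, and the `Instr` type and `local_cse` function are illustrative, ignoring many production concerns (commutativity, cost models, register pressure).

```python
# Toy local common subexpression elimination within one basic block.

from dataclasses import dataclass

@dataclass
class Instr:
    dest: str
    op: str        # e.g. '*', '+', or 'copy'
    left: str
    right: str

def local_cse(block):
    available = {}   # (op, left, right) -> name currently holding the value
    out = []
    for ins in block:
        key = (ins.op, ins.left, ins.right)
        prev = available.get(key)
        if prev is not None:
            # Value already computed: emit a copy instead of recomputing.
            out.append(Instr(ins.dest, 'copy', prev, ''))
        else:
            out.append(ins)
        # Redefining ins.dest invalidates expressions that read it and
        # any availability record stored in it.
        available = {k: v for k, v in available.items()
                     if ins.dest not in (k[1], k[2]) and v != ins.dest}
        if prev is None and ins.dest not in (ins.left, ins.right):
            available[key] = ins.dest
    return out

block = [Instr('t1', '*', 'b', 'c'),
         Instr('a',  '+', 't1', 'g'),
         Instr('t2', '*', 'b', 'c'),   # recomputes b*c: becomes a copy
         Instr('d',  '*', 't2', 'e')]
for ins in local_cse(block):
    print(ins)
```

A later copy-propagation pass would typically replace uses of `t2` with `t1` and drop the copy entirely, matching the transformed code in the example above.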
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TxTag** TxTag: TxTag, operated by the Texas Department of Transportation (TxDOT), is one of three interoperable electronic toll collection systems in Texas. The system is also interoperable with the K-TAG system used in Kansas and the Pikepass system used in Oklahoma. Current system status: The TxTag brand name is used on the following highways. Operated by TxDOT: the Central Texas Turnpike System, which encompasses SH 130, bypassing Austin to the east, SH 45, an east–west road roughly straddling the Austin-Round Rock boundary, and the northern portion of Loop 1 (Mopac) on Austin's north side; and SH 99, a.k.a. the Grand Parkway, the third loop around Houston (partial). Current system status: Operated by the Central Texas Regional Mobility Authority: 183A, a toll bypass of US 183 through Leander. CTRMA is planning several other toll road projects throughout the Austin metropolitan area, which are planned to accept TxTag. Interoperability: In 2003, all Greater Houston area toll roads operated by the Harris County Toll Road Authority and the Fort Bend County Toll Road Authority (EZ TAG), and all Dallas–Fort Worth metroplex area toll roads operated by the North Texas Tollway Authority (TollTag), became compatible with TxTag, with the exception of Dallas/Fort Worth International Airport and Dallas Love Field airport parking, where NTTA's TollTag is the only ETC system recognized. Interoperability: On May 17, 2017, the Kansas Turnpike Authority made TxTag compatible with the K-TAG system used on the Kansas Turnpike. On May 7, 2019, the Oklahoma Turnpike Authority made TxTag compatible with the PikePass system used on all of Oklahoma's turnpikes. NationalPass provides interoperability with systems outside Texas. TxTag transponders are currently not accepted at tolled border crossings with Mexico, although future interoperability is planned with the Laredo Trade Tag accepted at four crossings. TxTag is not compatible with transponders from the E-ZPass system, although the two systems' operators have been in talks with each other. Technology: TxTag uses at least two types of transponders manufactured by TransCore: legacy hard-case AT5100 transponders and newer eGo Plus flexible sticker-type transponders. The transponders are mounted on the inside of the vehicle at the top center of the windshield. The TxTag sticker can be used as a portable device, provided it is affixed to a small square of glass instead of a windshield. According to the patent for the device, the sticker was specifically designed such that if it is removed, among other things, capacitor 66 is decoupled from 64, preventing the use of the sticker if it is torn from the glass. It would also appear the sticker can be simply taped to the inside of the windshield for temporary use.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chloromethyl chloroformate** Chloromethyl chloroformate: Chloromethyl chloroformate (ClCO2CH2Cl), also known as palite, is a chemical compound that was deployed in gaseous form for chemical warfare during World War I. It is a tearing agent designed to cause temporary blindness. It is a colorless liquid with a penetrating, irritating odor. Industrially, chloromethyl chloroformate is used to manufacture other chemicals.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Madol Kurupawa** Madol Kurupawa: Madol Kurupawa (Sinhala: මඩොල් කුරුපාව) is a wooden king post or catch pin used to secure numerous wooden beams of a roof structure to a single point. It is a unique feature of Kandyan architecture and joinery. This distinctive structural arrangement occurs in medieval Sri Lankan buildings with four-pitched roofs. Rafters of the shorter sides are elbowed against the ridge plate and held fast at the pinnacle by a timber boss known as the madol kurupawa, which in turn is attached to the end of the wall plate. The pekada provides an intermediate means of connection between the pillars and beams, while the madol kurupawa provides a similar connection between the rafters and the ridge plate at the shorter side of the pitched roof. No mechanical joinery (nails, bolts, or glue) is used other than wooden pegs, and structural stability is achieved through compression alone. The most notable example can be found at Embekka Devalaya in Udunuwara (built during the reign of King Rajadhi Rajasingha), where the upper ends of twenty-six rafters are held together by a madol kurupawa at the hip end of the 'Digge' (dancing hall). Another example can be found at the National Museum of Kandy.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shuttle tanker** Shuttle tanker: A shuttle tanker is a ship designed for oil transport from an off-shore oil field, as an alternative to constructing oil pipelines. It is equipped with off-loading equipment compatible with the oil field in question. This normally consists of a taut hawser arrangement or dynamic positioning to maintain position relative to the field, an off-loading arrangement of pipes, and redundant safety systems to ensure that the potentially flammable crude oil is handled safely in a harsh environment. Shuttle tanker: Shuttle tankers initially started operating in the North Sea. They are now also in use in Brazil, and trials have been carried out in the Gulf of Mexico. There are plans to take up such operations in the Arctic Sea, north of western Russia.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Heliodon** Heliodon: A heliodon (HEE-leo-don) is a device for adjusting the angle between a flat surface and a beam of light to match the angle between a horizontal plane at a specific latitude and the solar beam. Heliodons are used primarily by architects and students of architecture. By placing a model building on the heliodon's flat surface and making adjustments to the light/surface angle, the investigator can see how the building would look in the three-dimensional solar beam at various dates and times of day. History: Shortly after World War II, in the 1950s, there was wide interest in producing building design techniques that respond to climate. At the Princeton Architectural Laboratory, the Thermoheliodon was invented by the Olgyays in the hope of creating physiological conditions of human comfort through architectural design. The Thermoheliodon was a domed, insulated evaluation bed for scaled architectural models under specific climatic conditions, measured to a high level of calculation and accuracy. The device was a covered simulated environment in which a scaled model's thermal performance could be evaluated under different temperatures. However, attaining precise evaluation was an issue with the Thermoheliodon due to the impact of scale on thermal performance. Although the Thermoheliodon failed to produce an accurately measured environment, the device led to further research on adaptive and efficient design orientation of buildings and laid the base of bioclimatic design principles. History: During the 1950s, the Building Research Station (BRS), a key institution in the UK, designed a heliodon as part of its work on tropical and bioclimatic architecture. The institution aimed to enhance housing conditions and the development of local resources for construction in colonial territories. The heliodon was designed to replicate the sun on architectural scale models through a point of light. The device can shift and tilt to obtain the accurate position of the sun on any given day, time, or location. In the 1960s, a heliodon was invented by Gershon Fruhling in Israel, recorded by the United States Patent Office. This heliodon consists of a platform created to hold a model of the building whose insolation is to be evaluated. The horizontal platform can sway on a rotatable vertical shaft which can turn on its axis. The rotation allows hourly and seasonal adjustments and swings the unit almost to its base. The tilting of the shaft enables adjustment to various geographical locations on the latitude scale. Any external light source, along with the sun, can be utilised, and the placement of this light source can be kept stationary throughout the observations. This heliodon is an accurate instrument that allows rapid and simple adjustments, and a precisely located light source is not required. In the 1990s, modern heliodons with quicker simulations and a greater level of accuracy were invented. The EPFL Solar Energy and Building Physics Laboratory (LESO-PB) in Lausanne designed a robotic heliodon to simulate direct light. This heliodon is combined with a sky scanning simulator (artificial sky) to predict the light distribution in a building over the entire year. The device can reproduce direct light at any location on earth. After the 2000s, Prof. Norbert Lechner, an architect, LEED AP, and an expert in energy-responsive architecture, invented a manual Sun Emulator Heliodon. His heliodons made daylight simulation much easier to evaluate than previous models.
The Sun Emulator Heliodon can precisely show all solar-responsive design principles and strategies. Although the device can hold only small architectural scale models, it is a great instrument for teaching solar geometry. This heliodon was manufactured by High Precision Devices; an alternative device is now the Orchard Heliodon produced by betanit.com, with the approval of the inventor of the Sun Emulator Heliodon. History: Since 2004, the Italian company betanit.com has been developing various heliodons designed by architect Giulio M. Podesta for use in daylighting laboratories of universities and architectural firms. The architect designed the Orchard Heliodon with features similar to those of the Sun Emulator Heliodon (developed by Norbert Lechner). For more precise simulations, the Orange Heliodon, an easy-to-use robotic heliodon with a fixed light source, was designed and launched in the market in 2007. Moreover, the Orange Heliodon was used at Politecnico di Milano in the architectural design laboratory of the BEST department, which used the computerized, automatic heliodon to reproduce sun and shade. Furthermore, the architect designed the Tulip Heliodon, a robotic heliodon with a fixed light source that is often merged with a full-dome artificial sky for collaborative design, presentation, and daylight studies. Kwok Pun Cheung, a professor and researcher at the Department of Architecture at Hong Kong University, developed various heliodons, including a simple tabletop heliodon and a multi-lamp heliodon for use in architectural schools. Moreover, a tabletop heliodon with a moving light source was developed for architects' offices. A patented, portable, light-duty universal heliodon for direct sunlight, set up on a camera tripod, was developed for evaluating the impacts of direct sunlight on small architectural models or building components. Scientific background: The Earth is a ball in space perpetually intercepting a cylinder of parallel energy rays from the sun. (Think of a tennis ball being held in the wind.) The angle of any earthly site to the solar beam is determined by: the site's latitude, which gives its position on the curve of the Earth between the Equator and one of the Poles; the time of day at the site, measured by its progress eastward around the Earth's axis from sunrise to sunset; and the date, which locates the Earth on its annual orbit of the sun. The change due to date is the most difficult to visualize. The Earth's axis is steady but tilted: the plane that includes the Earth's equator, which is perpendicular to the axis, is not parallel to the plane that includes the center of the sun and the center of the Earth, called the ecliptic. Think of the Earth as a car on a Ferris wheel. The car's axis always points "down", which changes its relation to the center of the wheel. A light at the center of the wheel would touch the bottom of the car at the top of the orbit and the top of the car at the bottom of the orbit. As the Earth orbits, the location of the centerline of the solar cylinder changes, sliding from the Tropic of Cancer (in June) to the Tropic of Capricorn (in December) and back again. This changes sun angles all over Earth according to the date. See more at analemma. Utility: Heliodons can mimic latitude, time of day, and date. They must also show a clear north-south direction on their surface in order to orient models. Some heliodons are very elaborate, using tracks in a high ceiling to carry a light across a large studio.
Others are very simple, using a sundial as a guide to the adjustments and the sun of the day as a light source. In general, the date adjustment causes the most difficulty for the heliodon designer, while the light source presents the most problems in use. The parallel rays of the sun are not easy to duplicate with an artificial light at a useful scale, while the real sun is no respecter of deadlines or class hours. Utility: All heliodons can benefit from including a moveable, tiltable device that can be set to match any surface on a model to show the angle of incidence. The angle-of-incidence device indicates the relative intensity of the direct beam on the surface. The device consists of a diagram of concentric rings around a shadow-casting pointer perpendicular to the diagram. Each ring represents a percentage of the direct solar beam incident on the surface. The percentage varies from 100%, where the ray runs straight down the pointer, perpendicular to the diagram, to zero, where the ray runs parallel to the diagram and misses the surface. The cosine of the angle of incidence gives the percentage. A cosine of 0.9, i.e. 90%, corresponds to an angle of incidence of about 25.84 degrees. The radius of the ring for a given angle is equal to its tangent times the height of the shadow-casting pointer. A 45-degree angle of incidence gives a cosine of about 0.7, i.e. 70%; since the tangent of 45 degrees is 1, the radius of the 70% ring is equal to the height of the shadow-casting pointer. (A short numeric sketch of this geometry is given below.) Types of Heliodon: Manual Tabletop Heliodon. Manual tabletop heliodons are used for sun-shading analysis at any given latitude and at any time. The model support platform is mounted on a conventional table or desk and can rotate and tilt a scaled architectural model. These heliodons are manually operated, without the use of computers, and provide good accuracy. The model stand, mounted on the table, is tilted for latitude and rotated to set the time of day. To replicate the time of year, the single light source uses a ribbon marked with the months of the year and attached to the edge of the door. The device can be used in interior spaces with lamps, and in exterior spaces with direct sunlight for better accuracy. When used outdoors, a sundial controls the tilt and rotation of the model stand. The main advantages are its affordability and small size. The heliodon is accurate when used by people who are already aware of solar geometry, but it is not well suited to learning solar geometry and the basic principles of solar-responsive design. Types of Heliodon: The Department of Architecture at the University of Hong Kong uses a tabletop heliodon for the solar design of architectural scale models, mainly for teaching and design purposes. Types of Heliodon: Manual Sun Emulator Heliodon. The manual heliodon consists of a flat table with a scaled model on top; the table lies stationary with only the sun lamps in motion. The heliodon consists of a horizontal platform and seven rings that represent the sun path for the 21st day of every month, which can be rotated to replicate the time of day. It acts as a teaching tool for architects, planners, and developers, and can be used to teach solar geometry and solar-responsive design principles in science museums. Without reliance on external sky conditions, it is simple to evaluate sun shading at any latitude. This type of heliodon is very intuitive to adjust and operate, and requires only limited training.
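As a small numeric check of the ring geometry described above (beam fraction = cosine of the angle of incidence; ring radius = tangent of the angle times the pointer height), here is an illustrative sketch; the function name and the unit pointer height are assumptions.

```python
# Ring radii for an angle-of-incidence diagram: each ring marks the
# fraction of the direct solar beam received by the surface.

import math

def ring_radius(fraction, pointer_height=1.0):
    """Radius of the ring for a given beam fraction (0 < fraction <= 1)."""
    angle = math.acos(fraction)              # angle of incidence
    return pointer_height * math.tan(angle)  # radius on the diagram

for p in (1.0, 0.9, 0.7, 0.5):
    angle_deg = math.degrees(math.acos(p))
    print(f"{p:4.0%} of beam -> angle {angle_deg:5.1f} deg, "
          f"ring radius {ring_radius(p):.3f} x pointer height")
```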
Types of Heliodon: Considering the characteristics described above, the manual sun emulator is also excellent for explaining solar dynamics and cardinal points to children in a functional, scientific, and fun way. The manual sun emulator heliodon is used at various universities, such as: Auburn University, Alabama – the university uses a 48" (1.2 m) diameter Formica-covered sun emulator heliodon known as the HPD Model 126 Heliodon. Texas Christian University, Texas – students use the sun emulator heliodon for daylight studies in lighting design projects. Southeastern Louisiana University, Louisiana – the university uses the heliodon as an interactive tool for teaching the design of solar-aware architecture. Durham School of Engineering and Construction, Nebraska – Prof. Norbert Lechner uses the sun emulator heliodon to explain sun-shading analysis of scale models. Types of Heliodon: CERES Center at Ball State University, Indiana; Judson College, Alabama. Robotic Heliodon with Fixed Light Source. This type of robotic heliodon is the most accurate sun simulator. It is used to evaluate scale models in a compact space with a fixed light source, supported by a robotic platform. It is an automatically operated heliodon in which a physical model is accurately positioned by computer around two axes. The robotic heliodon can run frequent tests and evaluations on bigger and heavier models than the manual ones, producing precise experimental results. Such heliodons are used for daylighting studies in universities, research facilities, and development laboratories for sustainable building design. Types of Heliodon: Some robotic heliodons use a mirror to fold the light path and allow installation in a small room. The room is normally kept dark, without windows, and the walls, ceilings, and floors are usually black. Types of Heliodon: The robotic heliodon is used in architectural schools, research laboratories, and large engineering firms, such as: the EPFL Solar Energy and Building Physics Laboratory (LESO-PB) in Lausanne, which uses a laboratory-made robotic heliodon for the simulation of direct light. This heliodon is combined with a sky scanning simulator (artificial sky) to predict the light distribution in a building over the entire year, and can replicate direct sunlight at any location on earth. This laboratory-made tool allows daylighting simulations inside scale models for various research and design purposes. The daylighting laboratory built the tool to limit energy consumption and increase user comfort through better utilisation of daylighting in buildings. The instrument allows architects, designers, and planners to understand the effect of their architectural concepts. Besides limiting energy consumption, the laboratory focuses on increasing the health and productivity of building occupants through efficient use of daylighting. Moreover, the robotic heliodon supports the laboratory's aim of achieving energy efficiency and renewable energy in buildings and cities. Types of Heliodon: The National Laboratory for Housing and Sustainable Communities at Sonora, Mexico, uses the Orange Heliodon produced by Betanit. The robotic heliodon is used for evaluating solar paths and their interaction with pre-existing or new constructions. Using a scale physical model under the robotic heliodon helps the project measure the comfort and energy efficiency of the building.
The heliodon is effective here because the project considers climatic conditions and the integration of energy sources. As the building is situated in a hot, dry climate, the automatic heliodon helped in providing design solutions that offer indirect natural lighting and ventilation while avoiding direct sunlight in the building's interior spaces. Owing to the accuracy of the robotic heliodon, several photographic studies were used to compare a built house with a solar-simulated scale model. The results revealed the efficiency of the appropriate design features of the building, which were derived from the automatic heliodon. Types of Heliodon: Arup (London), an engineering firm, uses the Orange Heliodon produced by Betanit in its lighting lab to help simulate the sun for experiments and buildings. The lighting simulation from the heliodon helps in quickly identifying daylight penetration in buildings. Arup uses the robotic heliodon to design sustainable, energy-efficient, and award-winning concepts in lighting. The engineers and experts analyse the daylighting of buildings by replicating the sun for their innovative projects. Furthermore, a thesis project supervised by Dr Francesco Anselmo also utilised the heliodon for experimental purposes. The heliodon is also used for curvature and annual reflectivity studies of "Leaf" at Arup in coordination with Betanit. Types of Heliodon: Robotic Heliodon with Fixed Model. This robotic heliodon is fully automated with a computer and has lights that travel around a fixed scale model placed horizontally on the table. This kind of robotic heliodon is used separately or integrated with a dome artificial sky for presentation, lighting design, and research purposes. When used with the artificial sky, the combined tool can replicate both the sun and the sky with great accuracy to obtain results of the daylight study. The fixed scale model can be bigger and heavier than in the other types, since the light source travels around the model for evaluation, presentation, and observation. The robotic heliodon allows people to move easily around and inside it for daylighting studies. Types of Heliodon: The automated robotic heliodon with a fixed model is used in research facilities, lighting companies, and university laboratories, such as: the University of Kansas Lighting Research Lab, Lawrence, United States, which uses a heliodon sunlight simulator for daylight studies and research purposes. The device was developed by Dr Hongyi Cai and custom-made in China by Quanzhou HuaTian Measurement Equipment LLC. The simulation tool is capable of 3D angular movements of the scale model of buildings around a fixed point of the model at a precision of 0.1°. The heliodon is installed in a darkroom for teaching, research, and daylighting studies. Types of Heliodon: HFT, Stuttgart University of Applied Sciences, Stuttgart, uses a robotic heliodon at its Daylight Planning Lab. The lab uses two elements in its daylight simulator: an artificial sky and an artificial sun. The artificial sun simulator consists of a halogen bulb with a parabolic reflector to replicate parallel sunlight. The heliodon is integrated with an artificial sky of 4.20 m diameter with 30 fluorescent lamps. The control panel allows the replication of any solar orbit on any day and at any global location. This integration facilitates the reproduction of the sky's brightness and the circumsolar radiation with great accuracy.
The architecture and design department uses the device for teaching purposes, daylight study, shadow study, and research. Types of Heliodon: Bartenbach, Tyrol – the lighting firm uses a heliodon (inside an artificial sky of 6.5 m diameter) with many small lamps for daylighting design with visualization models and calculations. The firm uses the heliodon for daylight simulation in research and development for complex building structures, and for architectural lighting design used in green building certification, reports, and consultations. Types of Heliodon: United Arab Emirates University (UAEU) uses a powerful robotic heliodon made by betanit.com inside a full-dome artificial sky to evaluate the daylighting of scale models with great accuracy. The heliodon is powered by a 1200 W HMI lamp with a custom-designed optical setup able to reproduce a range of 200,000 lx to 600,000 lx on the table supporting the scale model. The robotic heliodon with the artificial sky is used for research in sustainable building design and technology. Heliodon in Lighting Handbook: The Illuminating Engineering Society (IES) publishes a lighting handbook that features the heliodon as one of the tools used for the evaluation of daylighting design. The handbook is a globally well-known reference and guide that allows lighting professionals and practitioners to understand the impact of light on human health and to promote sustainability through efficient lighting study and design. The heliodon is featured in the handbook as a tool used to study the daylighting performance of physical scale models, generally by architects and engineers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Satellite geolocation** Satellite geolocation: Satellite geolocation is the process of locating the origin of a signal appearing on a satellite communication channel. Typically, this process is used to mitigate interference on communication satellites. Usually, these interference signals are caused by human error or equipment failure, but they can also be caused by deliberate jamming. Identifying the geographical location of an interfering signal informs the mitigation activity. How Satellite Geolocation Works: Many communication satellites share a given frequency band. As a signal is transmitted to a particular satellite, there is some amount of side lobe or spillover energy that is transmitted to adjacent satellites. At a receive station that has two antennas, one pointed at the primary satellite (the satellite the signal is intended for) and one at a secondary satellite (a satellite that is receiving side lobe energy), both paths of the signal are received and measured. From a comparison of those paths, two measurements can be made: Differential Time Offset (DTO) and Differential Frequency Offset (DFO). These measurements are often implemented through correlation processing. DTO represents the difference in the time it takes the signal to travel through the two satellites, while DFO represents the difference in frequency the received signals present through the two satellites. The frequency differences observed are due to different Doppler shifts resulting from relative satellite motion and to differences in the translation frequencies of the two satellite channels. Channel translation frequencies and downlink Doppler shift and delay can be calibrated out of the measurements by observing transmitters of known location simultaneously on the channels. This leaves the uplink DTO and DFO as the observables. See 'Reference Signals' below. How Satellite Geolocation Works: Lines of Position. Once a DTO is calculated, it can be combined with the known positions of the satellites and the receiving station. This combination provides a locus of positions on the Earth's surface for the source of the signal; from this result a line of position (LOP) can be derived. A similar line can be derived for the frequency differences. Where the two LOPs intersect is the signal transmission location. In addition to geolocation with a time LOP and a frequency LOP, a location can also be determined by finding the crossing point of two time LOPs. The second time LOP is an identical measurement using a different secondary satellite, or using the same secondary satellite but later in time. Similarly, two frequency LOPs can be used to determine a location. In general, a pair of LOPs is expected to intersect in two places. In many circumstances it is possible to discount one of the intersections, e.g. because it is not in the coverage area of one or both satellites. In some circumstances, the intersections cannot be distinguished from a single pair of LOPs, in which case additional LOPs need to be determined. How Satellite Geolocation Works: Reference Signals. While measuring the DTO and DFO gives an idea of the location of the signal source, that location will be inaccurate on its own. There are many biases within the measurement system that, if not accounted for properly, manifest themselves as time delays or frequency offsets. For example, while a satellite translation frequency is known to within a few kHz, accurate geolocation requires frequency measurement accuracies of single-digit millihertz.
How Satellite Geolocation Works: In order to determine the position of the signal source, a second set of measurements is required. Typically, this is done by making DTO and DFO measurements for a reference signal simultaneously with the target signal measurement. The measurement of the reference signal is purely passive and simply serves to remove the biases in the system. The same measurements that are made for the target signal, DTO and DFO, are made for the reference signal. The key to a reference signal is that its transmit location is known. By comparing the DTO of the reference signal and the DTO of the target signal, a result known as Time Difference of Arrival (TDOA) can be calculated. Likewise, from the DFO of the target and the DFO of the reference signal, a Frequency Difference of Arrival (FDOA) can be determined. The TDOA and FDOA results provide a finite number of locations on the Earth's surface, and therefore lines of position (LOPs) are determined from the TDOA and FDOA results. How Satellite Geolocation Works: A limitation on how accurately a location can be obtained is knowledge of the satellites' positions and velocities generated from the satellite ephemerides (orbit descriptors). A single reference geographically close to the target will give a high degree of cancellation of the location effects of ephemeris error. Measurements on signals from multiple reference sites can be used to improve the accuracy of the satellite ephemerides, thereby providing improved geolocation accuracy generally. Various Geolocation Methods: TDOA and FDOA results can be gathered and combined in various ways to produce geolocation results. Each method has its advantages and disadvantages in different measurement scenarios. Various Geolocation Methods: TDOA-TDOA Geolocation. TDOA-TDOA geolocation is performed, generally, by measuring DTO values using two secondary satellites, or three satellites in total. By doing this, two TDOA lines are generated, ideally with a crossing point. TDOA-TDOA geolocation is ideal for moving targets, since the movement of the target introduces varying and random frequency changes, rendering an FDOA result useless unless obtained from a highly inclined satellite. TDOA-TDOA geolocation will not work for unmodulated signals: due to the repetitive nature of the signal, no unique TDOA solution exists. One problem with using only TDOA lines of position is that they tend to be north-south oriented and close to parallel, so the "crossing point" of a TDOA-TDOA measurement can be error-prone and uncertain, as it is "hidden" in a long intersection of the lines. Care is also necessary in interpreting the results from moving targets if the two TDOA observations are not obtained simultaneously, since the target will have moved between observations. Various Geolocation Methods: FDOA-FDOA Geolocation. FDOA-FDOA geolocation is accomplished by using three satellites, or by using time-separated measurements on two satellites. The time separation can be as little as 5 minutes or as much as an hour or more. Again, the two FDOA lines are used to find a crossing point, or target location. FDOA-FDOA geolocation is necessary for CW signals. Geolocation involving highly inclined satellites, whether one or both are used in the measurement, will yield more accurate results with FDOA-FDOA geolocation. This is due to a large difference in relative motion, leading to a large difference in relative frequency between the two satellites.
A related point is that the error contributed to the FDOA-FDOA calculation by ephemeris uncertainty is relatively small. Moving targets are not likely to be located using FDOA methods, unless a highly inclined satellite is used. FDOA-FDOA geolocation has an interesting weakness in that, for some amount of time per day, the two satellites used have very little differential frequency; this is due to the cyclical movement of the satellites. During those periods, FDOA measurements are not ideal. In addition, the small frequency differences being measured are much harder to measure accurately than the time differences. Various Geolocation Methods: TDOA-FDOA Geolocation. TDOA-FDOA geolocation, in most scenarios, gives ideal results. By combining time lines, which are generally oriented north-south, with frequency lines, which are generally oriented east-west, one gets a nearly perpendicular crossing. A perpendicular crossing means less uncertainty in the calculated location. TDOA-FDOA geolocation also has an interesting limitation in that there are generally two times per day, separated by around 12 hours, when the FDOA becomes very small and hard to relate to an accurate LOP. These times can be calculated from known satellite ephemeris information and the approximate transmitter location, and can therefore be avoided when taking FDOA measurements. Various Geolocation Methods: The process of geolocating a signal requires some knowledge of the signal and of all the techniques in order to get an accurate location. Various Geolocation Methods: The geolocation of a CW signal is nearly impossible with TDOA-FDOA. Nevertheless, a nominally CW transmission can contain imperfections, especially if a station transmits near its maximum EIRP. Hence, it often has a phase noise component which may be recognized as a modulated signal and therefore used to make TDOA measurements. However, it is generally more accurate to locate a CW carrier using FDOA-FDOA geolocation, even for non-inclined satellites. Various Geolocation Methods: This approach is especially used today whenever high-power CW jamming of actual full-power multiplex transmissions occurs.
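To make the line-of-position idea concrete, here is a toy numeric sketch of a single TDOA LOP under strong simplifying assumptions: a spherical Earth, static satellites, and biases already calibrated out with a reference. All names, coordinates, and the grid tolerance are illustrative, not a real geolocation system; a second LOP (from another satellite pair, a later epoch, or an FDOA measurement) would be intersected with this one to obtain the location.

```python
# Trace the locus of ground positions consistent with a measured
# differential time offset (DTO) through two satellites.

import numpy as np

R_EARTH = 6371.0        # km, spherical-Earth radius
C = 299_792.458         # km/s, speed of light

def ecef(lat_deg, lon_deg, radius=R_EARTH):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return radius * np.array([np.cos(lat) * np.cos(lon),
                              np.cos(lat) * np.sin(lon),
                              np.sin(lat)])

# Primary and adjacent satellites on a geostationary-like arc.
sat_a = ecef(0.0, 10.0, radius=42164.0)
sat_b = ecef(0.0, 16.0, radius=42164.0)

def dto(ground):
    """Uplink path difference via the two satellites, in seconds."""
    return (np.linalg.norm(ground - sat_b)
            - np.linalg.norm(ground - sat_a)) / C

true_tx = ecef(48.0, 11.0)        # 'unknown' transmitter, for simulation
measured = dto(true_tx)

# Grid-search the Earth's surface for points matching the measurement;
# the matches trace a (roughly north-south) line of position.
lop = [(lat, lon)
       for lat in np.arange(-60, 60.5, 0.5)
       for lon in np.arange(-30, 50.5, 0.5)
       if abs(dto(ecef(lat, lon)) - measured) < 1e-5]  # ~10 microsecond band
print(f"{len(lop)} grid points lie on the TDOA line of position")
```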
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bromobimane** Bromobimane: Bromobimane or monobromobimane is a heterocyclic compound and bimane dye that is used as a reagent in biochemistry. While bromobimane itself is essentially nonfluorescent, it alkylates thiol groups, displacing the bromine and adding the fluorescent tag (λemission = 478 nm) to the thiol. Its alkylating properties are comparable to those of iodoacetamide. Synthesis: Bromobimane is prepared from 3,4-dimethyl-2-pyrazolin-5-one (a condensation product of ethyl 2-methylacetoacetate with hydrazine) by chlorination followed by basic treatment; with aqueous K2CO3 under heterogeneous conditions, the required syn-bimane, 2,3,5,6-tetramethyl-1H,7H-pyrazolo[1,2-a]pyrazole-1,7-dione, is the major product. It can then be selectively brominated with 1 equivalent of Br2 to the target bromobimane (or to dibromobimane, if 2 equivalents of Br2 are used). Bromobimanes are light-sensitive compounds and should be kept refrigerated and protected from light.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Protease** Protease: A protease (also called a peptidase, proteinase, or proteolytic enzyme) is an enzyme that catalyzes proteolysis, breaking down proteins into smaller polypeptides or single amino acids, and spurring the formation of new protein products. They do this by cleaving the peptide bonds within proteins by hydrolysis, a reaction where water breaks bonds. Proteases are involved in many biological functions, including digestion of ingested proteins, protein catabolism (breakdown of old proteins), and cell signaling. Protease: In the absence of enzymatic catalysis, proteolysis would be very slow, taking hundreds of years. Proteases can be found in all forms of life and in viruses. They have independently evolved multiple times, and different classes of protease can perform the same reaction by completely different catalytic mechanisms. Classification: Based on catalytic residue. Proteases can be classified into seven broad groups: serine proteases, using a serine alcohol; cysteine proteases, using a cysteine thiol; threonine proteases, using a threonine secondary alcohol; aspartic proteases, using an aspartate carboxylic acid; glutamic proteases, using a glutamate carboxylic acid; metalloproteases, using a metal, usually zinc; and asparagine peptide lyases, using an asparagine to perform an elimination reaction (not requiring water). Proteases were first grouped into 84 families according to their evolutionary relationship in 1993, and classified under four catalytic types: serine, cysteine, aspartic, and metalloproteases. The threonine and glutamic proteases were not described until 1995 and 2004, respectively. The mechanism used to cleave a peptide bond involves making either an amino acid residue (in the serine, cysteine, and threonine proteases) or a water molecule (in the aspartic, glutamic, and metalloproteases) nucleophilic so that it can attack the peptide carbonyl group. One way to make a nucleophile is by a catalytic triad, where a histidine residue is used to activate serine, cysteine, or threonine as a nucleophile. This is not an evolutionary grouping, however, as the nucleophile types have evolved convergently in different superfamilies, and some superfamilies show divergent evolution to multiple different nucleophiles. Classification: Peptide lyases. A seventh catalytic type of proteolytic enzymes, asparagine peptide lyase, was described in 2011. Its proteolytic mechanism is unusual since, rather than hydrolysis, it performs an elimination reaction. During this reaction, the catalytic asparagine forms a cyclic chemical structure that cleaves itself at asparagine residues in proteins under the right conditions. Given its fundamentally different mechanism, its inclusion as a peptidase may be debatable. Classification: Based on evolutionary phylogeny. An up-to-date classification of protease evolutionary superfamilies is found in the MEROPS database. In this database, proteases are classified firstly by 'clan' (superfamily) based on structure, mechanism, and catalytic residue order (e.g. the PA clan, where P indicates a mixture of nucleophile families). Within each clan, proteases are classified into families based on sequence similarity (e.g. the S1 and C3 families within the PA clan). Each family may contain many hundreds of related proteases (e.g. trypsin, elastase, thrombin, and streptogrisin within the S1 family). Classification: Currently more than 50 clans are known, each indicating an independent evolutionary origin of proteolysis.
Based on optimal pH: Alternatively, proteases may be classified by the optimal pH at which they are active: acid proteases; neutral proteases (involved in type 1 hypersensitivity, where they are released by mast cells and cause activation of complement and kinins; this group includes the calpains); and basic proteases (or alkaline proteases). Enzymatic function and mechanism: Proteases are involved in digesting long protein chains into shorter fragments by splitting the peptide bonds that link amino acid residues. Some detach the terminal amino acids from the protein chain (exopeptidases, such as aminopeptidases and carboxypeptidase A); others attack internal peptide bonds of a protein (endopeptidases, such as trypsin, chymotrypsin, pepsin, papain, and elastase). Catalysis: Catalysis is achieved by one of two mechanisms. Aspartic, glutamic, and metalloproteases activate a water molecule, which performs a nucleophilic attack on the peptide bond to hydrolyze it. Serine, threonine, and cysteine proteases use a nucleophilic residue (usually in a catalytic triad); that residue performs a nucleophilic attack to covalently link the protease to the substrate protein, releasing the first half of the product. This covalent acyl-enzyme intermediate is then hydrolyzed by activated water to complete catalysis, releasing the second half of the product and regenerating the free enzyme. Specificity: Proteolysis can be highly promiscuous, such that a wide range of protein substrates are hydrolyzed. This is the case for digestive enzymes such as trypsin, which have to be able to cleave the array of proteins ingested into smaller peptide fragments. Promiscuous proteases typically bind to a single amino acid on the substrate and so only have specificity for that residue. For example, trypsin is specific for the sequences ...K\... or ...R\... ('\' = cleavage site). Conversely, some proteases are highly specific and only cleave substrates with a certain sequence. Blood clotting (such as by thrombin) and viral polyprotein processing (such as by TEV protease) require this level of specificity in order to achieve precise cleavage events. This is achieved by proteases having a long binding cleft or tunnel with several pockets that bind the specified residues. For example, TEV protease is specific for the sequence ...ENLYFQ\S... ('\' = cleavage site); a toy sketch of such cleavage rules follows at the end of this article. Enzymatic function and mechanism: Degradation and autolysis. Proteases, being themselves proteins, are cleaved by other protease molecules, sometimes of the same variety. This acts as a method of regulation of protease activity. Some proteases are less active after autolysis (e.g. TEV protease) whilst others are more active (e.g. trypsinogen). Biodiversity of proteases: Proteases occur in all organisms, from prokaryotes to eukaryotes to viruses. These enzymes are involved in a multitude of physiological reactions, from simple digestion of food proteins to highly regulated cascades (e.g., the blood-clotting cascade, the complement system, apoptosis pathways, and the invertebrate prophenoloxidase-activating cascade). Proteases can either break specific peptide bonds (limited proteolysis), depending on the amino acid sequence of a protein, or completely break down a peptide to amino acids (unlimited proteolysis). The activity can be a destructive change (abolishing a protein's function or digesting it to its principal components), an activation of a function, or a signal in a signalling pathway.
Biodiversity of proteases: Plants. Plant genomes encode hundreds of proteases, largely of unknown function. Those with known function are largely involved in developmental regulation. Plant proteases also play a role in the regulation of photosynthesis. Biodiversity of proteases: Animals. Proteases are used throughout an organism for various metabolic processes. Acid proteases secreted into the stomach (such as pepsin) and serine proteases present in the duodenum (trypsin and chymotrypsin) enable us to digest the protein in food. Proteases present in blood serum (thrombin, plasmin, Hageman factor, etc.) play an important role in blood clotting, as well as lysis of the clots and the correct action of the immune system. Other proteases are present in leukocytes (elastase, cathepsin G) and play several different roles in metabolic control. Some snake venoms are also proteases, such as pit viper haemotoxin, and interfere with the victim's blood-clotting cascade. Proteases determine the lifetime of other proteins playing important physiological roles, like hormones, antibodies, or other enzymes. This is one of the fastest "switching on" and "switching off" regulatory mechanisms in the physiology of an organism. Biodiversity of proteases: By a complex cooperative action, proteases can catalyze cascade reactions, which result in rapid and efficient amplification of an organism's response to a physiological signal. Biodiversity of proteases: Bacteria. Bacteria secrete proteases to hydrolyse the peptide bonds in proteins and therefore break the proteins down into their constituent amino acids. Bacterial and fungal proteases are particularly important to the global carbon and nitrogen cycles in the recycling of proteins, and such activity tends to be regulated by nutritional signals in these organisms. The net impact of nutritional regulation of protease activity among the thousands of species present in soil can be observed at the overall microbial community level as proteins are broken down in response to carbon, nitrogen, or sulfur limitation. Bacteria also contain proteases responsible for general protein quality control (e.g. the AAA+ proteasome) by degrading unfolded or misfolded proteins. Biodiversity of proteases: A secreted bacterial protease may also act as an exotoxin, and be an example of a virulence factor in bacterial pathogenesis (for example, exfoliative toxin). Bacterial exotoxic proteases destroy extracellular structures. Viruses: The genomes of some viruses encode one massive polyprotein, which needs a protease to cleave it into functional units (e.g. the hepatitis C virus and the picornaviruses). These proteases (e.g. TEV protease) have high specificity and only cleave a very restricted set of substrate sequences. They are therefore a common target for protease inhibitors. Archaea: Archaea use proteases to regulate various cellular processes, from cell signaling to metabolism, secretion, and protein quality control. Only two ATP-dependent proteases are found in archaea: the membrane-associated LonB protease and a soluble 20S proteasome complex. Uses: The field of protease research is enormous. Since 2004, approximately 8000 papers related to this field have been published each year. Proteases are used in industry, in medicine, and as a basic biological research tool. Digestive proteases are part of many laundry detergents and are also used extensively in the bread industry in bread improver. A variety of proteases are used medically, both for their native function (e.g.
controlling blood clotting) and for completely artificial functions (e.g. the targeted degradation of pathogenic proteins). Highly specific proteases such as TEV protease and thrombin are commonly used to cleave fusion proteins and affinity tags in a controlled fashion. Uses: Protease-containing plant solutions called vegetarian rennet have been in use for hundreds of years in Europe and the Middle East for making kosher and halal cheeses. Vegetarian rennet from Withania coagulans has been in use for thousands of years as an Ayurvedic remedy for digestion and diabetes in the Indian subcontinent. It is also used to make paneer. Inhibitors: The activity of proteases is inhibited by protease inhibitors. One example of protease inhibitors is the serpin superfamily. It includes alpha 1-antitrypsin (which protects the body from the excessive effects of its own inflammatory proteases), alpha 1-antichymotrypsin (which does likewise), C1-inhibitor (which protects the body from excessive protease-triggered activation of its own complement system), antithrombin (which protects the body from excessive coagulation), plasminogen activator inhibitor-1 (which protects the body from inadequate coagulation by blocking protease-triggered fibrinolysis), and neuroserpin. Natural protease inhibitors include the family of lipocalin proteins, which play a role in cell regulation and differentiation. Lipophilic ligands attached to lipocalin proteins have been found to possess tumor protease inhibiting properties. The natural protease inhibitors are not to be confused with the protease inhibitors used in antiretroviral therapy. Some viruses, HIV among them, depend on proteases in their reproductive cycle. Thus, protease inhibitors are developed as antiviral therapeutic agents. Inhibitors: Other natural protease inhibitors are used as defense mechanisms. Common examples are the trypsin inhibitors found in the seeds of some plants, most notably for humans soybeans, a major food crop, where they act to discourage predators. Raw soybeans are toxic to many animals, including humans, until the protease inhibitors they contain have been denatured.
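As a toy illustration of the cleavage specificities mentioned above (trypsin-like cleavage after K or R, TEV-like cleavage within ENLYFQ\S), here is a sketch; it deliberately ignores known exceptions such as trypsin's reluctance to cleave before proline, and the example sequences are made up.

```python
# Toy in-silico digestion using the specificity rules from the text.

import re

def trypsin_fragments(seq):
    """Split after every K or R (ignoring the no-cleavage-before-P rule)."""
    return [f for f in re.split(r'(?<=[KR])', seq) if f]

def tev_fragments(seq):
    """Split between Q and S of the ENLYFQ/S recognition site."""
    return re.split(r'(?<=ENLYFQ)(?=S)', seq)

print(trypsin_fragments("MKWVTFISLLFLFSSAYSRGVFRR"))
print(tev_fragments("MHHHHHHENLYFQSGAMGSAKLV"))
```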
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CACNA1B** CACNA1B: The voltage-dependent N-type calcium channel subunit alpha-1B is a protein that in humans is encoded by the CACNA1B gene. The α1B protein, together with β and α2δ subunits, forms the N-type calcium channel (Cav2.2 channel) (PMID 26386135).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Protein catabolism** Protein catabolism: In molecular biology, protein catabolism is the breakdown of proteins into smaller peptides and ultimately into amino acids. Protein catabolism is a key part of the digestion process. It often begins with pepsin, which converts proteins into polypeptides. These polypeptides are then further degraded. In humans, the pancreatic proteases include trypsin, chymotrypsin, and other enzymes. In the intestine, the small peptides are broken down into amino acids that can be absorbed into the bloodstream. These absorbed amino acids can then undergo amino acid catabolism, where they are utilized as an energy source or as precursors to new proteins. The amino acids produced by catabolism may be directly recycled to form new proteins, converted into different amino acids, or can undergo amino acid catabolism to be converted to other compounds via the Krebs cycle. Interface with other cycles: Protein catabolism produces amino acids that are used to form bacterial proteins or are oxidized to meet the energy needs of the cell. The several degradative processes for amino acids include deamination (removal of an amino group), transamination (transfer of an amino group), decarboxylation (removal of a carboxyl group), and dehydrogenation (removal of hydrogen). The degraded amino acids can then be processed as fuel for the Krebs/citric acid (TCA) cycle. Protein degradation: Protein degradation differs from protein catabolism. Proteins are produced and destroyed routinely as part of the normal operations of the cell. Transcription factors, proteins that help regulate protein synthesis, are targets of such degradation. Their degradation is not a significant contributor to the energy needs of the cell. The addition of ubiquitin (ubiquitylation) marks a protein for degradation via the proteasome. Amino acid degradation: Oxidative deamination is the first step in breaking down amino acids so that they can be converted to sugars. The process begins by removing the amino group of the amino acid. The amino group is lost as ammonium, which undergoes the urea cycle in the liver to become urea. The urea is then released into the bloodstream and transferred to the kidneys, which excrete it in the urine. The remaining portion of the amino acid is oxidized, resulting in an alpha-keto acid. The alpha-keto acid then proceeds into the TCA cycle to produce energy. The acid can also enter glycolysis, where it will eventually be converted into pyruvate. The pyruvate is then converted into acetyl-CoA so that it can enter the TCA cycle, ultimately converting the original pyruvate molecules into ATP, usable energy for the organism. Transamination leads to the same result as deamination: the remaining acid will undergo either glycolysis or the TCA cycle to produce energy that the organism's body will use for various purposes. Rather than losing the amino group as ammonium, this process transfers it. The amino group is transferred to alpha-ketoglutarate, so that it can be converted to glutamate. Glutamate then transfers the amino group to oxaloacetate, allowing the oxaloacetate to be converted to aspartate or other amino acids.
Eventually, this product will also undergo oxidative deamination to once again produce alpha-ketoglutarate, an alpha-keto acid that will enter the TCA cycle, and ammonium, which will eventually undergo the urea cycle. Transaminases are the enzymes that catalyze transamination. They act at the point where the amino group is transferred from the original amino acid (for example, from glutamate to alpha-ketoglutarate), holding onto it so it can be transferred to another alpha-keto acid. Factors determining protein half-life: Some key factors that determine the overall rate include protein half-life, pH, and temperature. Factors determining protein half-life: Protein half-life helps determine the overall rate, as it designates the first step in protein catabolism; whether this step is fast or slow influences the rest of the metabolic process. One key determinant of protein half-life is the N-end rule, which states that the amino acid present at the N-terminus of a protein helps determine the protein's half-life.
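The N-end rule lends itself to a tiny lookup sketch. The classification below follows the classic yeast study (Bachmair, Finley & Varshavsky, 1986), in which N-terminal Met, Gly, Ala, Ser, Thr, Cys, Val, and Pro were stabilizing; treat the set as approximate and organism-dependent, and the function name as illustrative.

```python
# Approximate stabilizing N-terminal residues under the yeast N-end rule
# (one-letter codes; destabilizing residues give half-lives of minutes,
# stabilizing ones of many hours). Organism-dependent and approximate.
STABILIZING_N_TERMINAL = set("MGASTCVP")

def n_end_rule_class(protein_sequence: str) -> str:
    """Classify a mature protein as long- or short-lived from its
    N-terminal residue, per the N-end rule described above."""
    return ("long-lived" if protein_sequence[0] in STABILIZING_N_TERMINAL
            else "short-lived")

print(n_end_rule_class("MKTAYIAK"))  # Met at the N-terminus -> 'long-lived'
print(n_end_rule_class("RKTAYIAK"))  # Arg at the N-terminus -> 'short-lived'
```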
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ethane-1,2-dithiol** Ethane-1,2-dithiol: Ethane-1,2-dithiol, also known as EDT, is a colorless liquid with the formula C2H4(SH)2. It has a very characteristic odor, often compared to rotten cabbage. It is a common building block in organic synthesis and an excellent ligand for metal ions. Preparation: Ethane-1,2-dithiol is made commercially by the reaction of 1,2-dichloroethane with aqueous sodium bisulfide. In the laboratory, it can also be prepared by the action of 1,2-dibromoethane on thiourea followed by hydrolysis. Applications: As a 1,2-dithiol, this compound is widely used in organic chemistry because it reacts with aldehydes and ketones to give 1,3-dithiolanes, which are useful intermediates. C2H4(SH)2 + RR'CO → C2H4S2CRR' + H2O Other 1,2- and 1,3-dithiols undergo this reaction to give related 1,3-dithiolanes and 1,3-dithianes (six-membered rings). Diols such as ethylene glycol undergo analogous reactions to give 1,3-dioxolanes and 1,3-dioxanes. One distinguishing feature of the dithiolanes and dithianes derived from aldehydes is that the methine group can be deprotonated and the resulting carbanion alkylated. 1,2-Ethanedithiol is commonly used as a scavenger during the cleavage step of peptide synthesis.
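As a small worked example around the formula C2H4(SH)2 (i.e. C2H6S2), the molar mass can be checked from standard atomic weights; the helper below is a generic sketch, not tied to any chemistry library.

```python
# Standard atomic weights in g/mol (rounded).
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "S": 32.06, "O": 15.999}

def molar_mass(composition: dict[str, int]) -> float:
    """Molar mass of a compound from an element-count mapping."""
    return sum(ATOMIC_WEIGHTS[element] * count
               for element, count in composition.items())

# Ethane-1,2-dithiol, C2H4(SH)2 = C2H6S2:
print(round(molar_mass({"C": 2, "H": 6, "S": 2}), 2))  # ~94.19 g/mol
```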
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sprite Lemon+** Sprite Lemon+: Sprite Lemon+ is a range of primarily lemonade-flavoured soft drinks produced by The Coca-Cola Company in Australia and the Philippines under the Sprite brand. Sprite Lemon+ Zero Sugar is an artificially sweetened version. History: The predecessor to this brand, Lift was introduced in Australia in the 1990s as a replacement for the Mello Yello brand. Mello Yello had replaced Leed Lemon Soda Squash, which was a variant of the Leed Lemonade brand. The only flavour in the Lift range (besides limited time flavours) was Lemon. In 2015 it was rebranded as Fanta Lemon Lift and then back to the original Lift in 2016 with a flavour change, supposedly making the drink more sour. History: Lift was discontinued in the Australian market in September 2022. The Sprite brand was used to launch a new zesty lemon flavour variant with caffeine and marketed as Sprite Lemon+. This flavour was announced by Coca-Cola Europacific Partners on 13 September 2022, following months of rumours on social media.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fatwood** Fatwood: Fatwood, also known as "fat lighter", "lighter wood", "rich lighter", "pine knot", "lighter knot", "heart pine", "fat stick" or "lighter'd" [sic], is derived from the heartwood of pine trees. The stump (and tap root) that is left in the ground after a tree has fallen or has been cut is the primary source of fatwood, as the resin-impregnated heartwood becomes hard and rot-resistant after the tree has died. Wood from other locations can also be used, such as the joints where limbs intersect the trunk. Although most resinous pines can produce fatwood, in the southeastern United States the wood is commonly associated with longleaf pine (Pinus palustris), which historically was highly valued for its high pitch production. History: The commercial use of fatwood from stumps stemmed from the production of pitch and pine tar. In 1648, a company was formed in Sweden called Norrländska Tjärkompaniet (The Wood Tar Company of North Sweden), and was given exclusive export rights for pine tar by the King of Sweden. Composition: Coniferous tree sap is a viscous liquid that contains terpene, a volatile hydrocarbon. Over time the evaporation of the terpene changes the state of the sap; it slowly gets thicker until it hardens into resin. New fatwood leaks sticky sap, while in aged fatwood the sap has hardened and is no longer sticky. At every stage of the aging process, fatwood will burn readily, unless excessively damp. Wood kindling and tinder: Because of the flammability of terpene, fatwood is prized for use as kindling in starting fires. It lights quickly even when wet, is very wind resistant, and burns hot enough to light larger pieces of wood. A small piece of fatwood can be used many times to create tinder by shaving small curls and using them to light other larger tinder. The pitch-soaked wood produces an oily, sooty smoke, and it is recommended that one should not cook on a fire until all the fatwood has completely burned out. Distribution: There are between 105 and 125 species classified as resinous pine trees around the world. Species usable for fatwood are distributed across a wide range. In Eurasia, they range from the Canary Islands, Iberian Peninsula and Scotland east to the Russian Far East and the Philippines, north to Norway, Finland and Sweden (Scots Pine) and eastern Siberia (Siberian Dwarf Pine), and south to northernmost Africa, the Himalaya and Southeast Asia, with one species (Sumatran Pine) just crossing the Equator in Sumatra. In North America, they range from 66°N in Canada (Jack Pine) south through Central America to 12°N in Nicaragua (Caribbean Pine). The highest diversity in the genus occurs in Mexico and California. In the sub-tropics of the Southern Hemisphere, including Chile, Brazil, South Africa, Australia, Argentina and New Zealand, the trees are not indigenous but were introduced. Wherever there are pine trees or pine stumps, fatwood may be found on top of the ground, but it is more concentrated and better preserved in stumps. Distribution: The United States In the United States the pine tree Pinus palustris, known as the longleaf pine, once covered as much as 90,000,000 acres (360,000 km2), but clear cutting reduced its extent by between 95% and 97%. The trees grow very large (up to 150 feet), take 100 to 150 years to mature, and can live up to 500 years. The wood was prized, and cutting resulted in many hundreds of thousands of stumps that are very resinous, do not rot, and eventually become fatwood.
This ushered in a new industry that lasted for many years. There is still a market for the wood, but supplies are less abundant. Because of the longleaf pine's long growing time, Pinus taeda, also called the loblolly pine, replaced it for commercial replanting, reaching maturity in only 38 to 45 years. Industrial uses: Industrial uses for fatwood include the production of turpentine: when fatwood is cooked down in a fire kiln, the heavier resin product that results is pine tar, while the vapor driven off in the process is condensed into turpentine.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Differentially private analysis of graphs** Differentially private analysis of graphs: Differentially private analysis of graphs studies algorithms for computing accurate graph statistics while preserving differential privacy. Such algorithms are used for data represented in the form of a graph where nodes correspond to individuals and edges correspond to relationships between them. For example, edges could correspond to friendships, sexual relationships, or communication patterns. A party that collected sensitive graph data can process it using a differentially private algorithm and publish the output of the algorithm. The goal of differentially private analysis of graphs is to design algorithms that compute accurate global information about graphs while preserving privacy of individuals whose data is stored in the graph. Variants: Differential privacy imposes a restriction on the algorithm. Intuitively, it requires that the algorithm has roughly the same output distribution on neighboring inputs. If the input is a graph, there are two natural notions of neighboring inputs, edge neighbors and node neighbors, which yield two natural variants of differential privacy for graph data. Variants: Let ε be a positive real number and A be a randomized algorithm that takes a graph as input and returns an output from a set O. The algorithm A is ε-differentially private if, for all neighboring graphs G1 and G2 and all subsets S of O, Pr[A(G1) ∈ S] ≤ e^ε × Pr[A(G2) ∈ S], where the probability is taken over the randomness used by the algorithm. Variants: Edge differential privacy Two graphs are edge neighbors if they differ in one edge. An algorithm is ε-edge-differentially private if, in the definition above, the notion of edge neighbors is used. Intuitively, an edge differentially private algorithm has similar output distributions on any pair of graphs that differ in one edge, thus protecting changes to graph edges. Variants: Node differential privacy Two graphs are node neighbors if one can be obtained from the other by deleting a node and its adjacent edges. An algorithm is ε-node-differentially private if, in the definition above, the notion of node neighbors is used. Intuitively, a node differentially private algorithm has similar output distributions on any pair of graphs that differ in one node and the edges adjacent to it, thus protecting information pertaining to each individual. Node differential privacy gives stronger privacy protection than edge differential privacy. Research history: The first edge differentially private algorithm was designed by Nissim, Raskhodnikova, and Smith. The distinction between edge and node differential privacy was first discussed by Hay, Miklau, and Jensen. However, it took several years before the first node differentially private algorithms were published in Blocki et al., Kasiviswanathan et al., and Chen and Zhou. In all three papers, the algorithms are for releasing a single statistic, like a triangle count or counts of other subgraphs. Raskhodnikova and Smith gave the first node differentially private algorithm for releasing a vector, specifically, the degree count and the degree distribution.
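As a concrete illustration of the edge variant, the standard Laplace mechanism gives a simple ε-edge-differentially private release of the edge count: adding or removing one edge changes the count by at most 1, so noise of scale 1/ε satisfies the definition above. This is a minimal sketch; the function name is illustrative and not taken from the papers cited above.

```python
import numpy as np
import networkx as nx

def dp_edge_count(graph: "nx.Graph", epsilon: float) -> float:
    """Release the number of edges with epsilon-edge-differential privacy.

    Edge neighbors differ in exactly one edge, so the edge count has
    global sensitivity 1; Laplace noise with scale 1/epsilon therefore
    satisfies the definition given above.
    """
    true_count = graph.number_of_edges()
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Usage: a noisy edge count of a random graph, with epsilon = 0.5.
print(dp_edge_count(nx.gnp_random_graph(100, 0.1), epsilon=0.5))
```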
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Octagonal bipyramid** Octagonal bipyramid: The octagonal bipyramid is one of the infinite set of bipyramids, dual to the infinite prisms. If an octagonal bipyramid is to be face-transitive, all faces must be isosceles triangles. 16-sided dice are often octagonal bipyramids. Images: It can be drawn as a tiling on a sphere which also represents the fundamental domains of [4,2], *422 symmetry (images not reproduced here).
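For orientation, the solid's 10 vertices are easy to construct: eight vertices of a regular equatorial octagon plus one apex above and one below. This is a minimal sketch assuming an equatorial octagon of radius `radius` and apexes at heights `±height`; for face-transitive (isosceles-triangle) forms the two parameters are tuned together.

```python
import math

def octagonal_bipyramid_vertices(radius: float = 1.0, height: float = 1.0):
    """Return the 10 vertices of an octagonal bipyramid as (x, y, z):
    eight equatorial vertices on a regular octagon plus two apexes."""
    equator = [(radius * math.cos(2 * math.pi * k / 8),
                radius * math.sin(2 * math.pi * k / 8),
                0.0)
               for k in range(8)]
    apexes = [(0.0, 0.0, height), (0.0, 0.0, -height)]
    return equator + apexes

# 16 faces in total: each of the 8 octagon edges joins each of the 2 apexes.
print(len(octagonal_bipyramid_vertices()))  # 10
```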
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Semantometrics** Semantometrics: Semantometrics is a tool for evaluating research. It is functionally an extension of tools such as bibliometrics, webometrics, and altmetrics, but instead of just evaluating citations – which entails relying on outside evidence – it uses a semantic evaluation of the full text of the research paper being evaluated.
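As a sketch of the idea, one can score a publication by the semantic distance between the full texts it connects (the works it cites and the works that cite it), rather than by citation counts alone. The measure below is illustrative, not the exact formula from the semantometrics literature; full-text embedding vectors are assumed to be computed elsewhere.

```python
from itertools import product

def cosine_distance(u: list, v: list) -> float:
    """1 minus the cosine similarity of two full-text embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return 1.0 - dot / (norm_u * norm_v)

def contribution(cited_vecs: list, citing_vecs: list) -> float:
    """Average semantic distance between the publications a paper builds
    on and those that build on it; a larger value suggests the paper
    bridges more disparate areas of the literature."""
    pairs = list(product(cited_vecs, citing_vecs))
    return sum(cosine_distance(u, v) for u, v in pairs) / len(pairs)

# Toy 2-D 'embeddings': the paper links two quite different literatures.
print(contribution([[1.0, 0.0]], [[0.0, 1.0], [0.6, 0.8]]))  # 0.7
```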
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Code cave** Code cave: A code cave is a series of unused bytes in a process's memory. The code cave inside a process's memory is often a reference to a section that has capacity for injecting custom instructions. Common uses: The concept of a code cave is often employed by hackers and reverse engineers to execute arbitrary code in a compiled program. It can be a helpful method for making modifications to a compiled program, for example adding dialog boxes, modifying variables, or removing software key validation checks. Often using a call instruction commonly found on many CPU architectures, the code jumps to the new subroutine and pushes the next address onto the stack. After execution of the subroutine, a return instruction can be used to pop the previous location off of the stack into the program counter. This allows the existing program to jump to the newly added code without making significant changes to the program flow itself. Advantages: Easy and fast – the modification process is fast and easy. When modifying existing code with tools such as OllyDbg, the added functions can be assembled and tested without any external dependencies. No need for source – using code caves can be extremely efficient even if no source code is provided to the programmer. This allows the programmer to make adjustments such as adding or removing functions in the code without having to rewrite the entire program or link any external dependencies into an existing project. Disadvantages: Easy to break the program – in many cases the executable file is modified directly, and a code cave large enough for the desired injection may not exist in the binary. Careless replacement of existing code may lead to program failure or crashes. Disadvantages: Lack of versatility – injecting code into an existing binary means that the limited space available only allows for simple instruction modifications, and the language used is only assembly. This can be mitigated by the use of shared library injectors (DLL injection [Windows] or LD_PRELOAD [Linux]) such that the injected library contains already compiled code and existing instructions in the target binary are simply modified to use it. Tools: pycave: a simple tool to find code caves in Portable Executable (PE) files. OllyDbg: a debugger for code analysis. It traces and executes program calls, and displays interactions with the libraries and binaries. Code can be injected into or removed from the EXE file directly with this debugger. PE Explorer: allows a user to open and edit executable files known as PE (portable executable) files, including .EXE, .DLL, and other less common file types. Cheat Engine: a powerful tool that reads and writes process memory. This means any client-side data values can be changed and edited. It can also display changes in the values. TSearch: a powerful tool that reads and writes process memory. Like Cheat Engine, it can change client-side data values.
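In the spirit of tools like pycave mentioned above, finding candidate caves can be as simple as scanning a binary for long runs of padding bytes. This sketch scans raw bytes only; a real tool would also parse the PE section headers to report which section each cave falls in and whether that section is executable. The function name and defaults are illustrative.

```python
def find_code_caves(path: str, min_size: int = 64, fill_byte: int = 0x00):
    """Return (file_offset, length) pairs for runs of at least min_size
    identical padding bytes, which could hold injected instructions."""
    with open(path, "rb") as f:
        data = f.read()
    caves, run_start = [], None
    for offset, byte in enumerate(data):
        if byte == fill_byte:
            if run_start is None:
                run_start = offset
        else:
            if run_start is not None and offset - run_start >= min_size:
                caves.append((run_start, offset - run_start))
            run_start = None
    # Handle a run of padding that extends to the end of the file.
    if run_start is not None and len(data) - run_start >= min_size:
        caves.append((run_start, len(data) - run_start))
    return caves

# Usage (hypothetical path): list caves of 200 or more null bytes.
# print(find_code_caves("target.exe", min_size=200))
```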
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Abdominal exercise** Abdominal exercise: Abdominal exercises are a type of strength exercise that affect the abdominal muscles (colloquially known as the stomach muscles or "abs"). The human abdomen consists of four muscles: the rectus abdominis, internal oblique, external oblique, and transversus abdominis. When performing abdominal exercises, it is important to understand their effects, the functions of the muscles involved, the types of exercises available, and how to perform them safely. Effects: Abdominal exercises are useful for building the abdominal muscles. This is useful for improving performance in certain sports, managing back pain, and withstanding abdominal impacts (e.g., taking punches). According to a 2011 study, abdominal muscle exercises are known to increase the strength and endurance of the abdominal muscles. It has been highly disputed whether or not abdominal exercises have any reducing effect on abdominal fat. The aforementioned 2011 study found that abdominal exercise does not reduce abdominal fat; to achieve that, a caloric deficit (energy expenditure exceeding caloric intake) must be created—abdominal exercises alone are not enough to reduce abdominal fat and the girth of the abdomen. Early results from a 2006 study found that walking exercise (not abdominal exercise specifically) reduced the size of subcutaneous abdominal fat cells; cell size predicts type 2 diabetes, according to a lead author. Moderate exercise reduced cell size by about 18% in 45 obese women over 20 weeks; diet alone did not appear to affect cell size. Functions of abdominal muscles: Abdominal muscles have many important functions, including breathing, coughing, and sneezing, and maintaining posture and speech in a number of species. The abdominal muscles also assist "in the function of support, containment of viscera, and help in the process of expiration, defecation, urination, vomiting, and also at the time of childbirth." The anterior abdominal wall is made up of four muscles—the rectus abdominis muscle, the internal and external obliques, and the transversus abdominis. "The two internal muscles, the internal oblique, and the transverse abdominis, respond more to increases in chemical or volume-related drive than the two external muscles, the rectus abdominis and external oblique; the basis for this differential sensitivity is unknown". Core training: Not only can a one-sided preference for the abdominal muscles (a lack of exercise focused on other core muscles) create muscle imbalances, but the effectiveness of such exercise also falls far short of what could be achieved with balanced workout planning. Core training frequently utilizes balance exercises, such as training of the transversus abdominis and multifidus, training of the diaphragm, and training of the pelvic floor muscles. Core strength exercises are performed to improve core stability. The goal of core training is not to develop muscle hypertrophy but to improve the functional predispositions for physical activity. This particularly involves improving intermuscular coordination, or synchronization of the participating muscles. Engaging the core means more than just compressing the abdominal muscles in a crouching or seated position. The role of the core muscles is to stabilize the spine. Resisting expansion or rotation is as important as the ability to execute the movement. Abdominal exercises: There are many ways to work the abdominals; the following are various effective abdominal exercises.
One of the most popular exercises is the abdominal crunch. It activates all four abdominal muscles by flexing the spine: the exerciser lies down with feet on the ground and raises the upper body up and back down. Beginners can make the exercise easier by crossing their arms over the chest. Another effective exercise is the abdominal plank, which strengthens the trunk and the internal and external obliques of the core. It is performed face-down, with the legs straight and the elbows bent, holding the position with the body's weight on the forearms. Another exercise is to lie on the back with the feet held at a 45° angle and move the legs as if riding a bicycle. A person can also lie down with the hands at the sides of the body, place a book on the stomach, and raise the stomach up and down to work the core. Similarly, one can lie down with the feet at a 45° angle, lift the legs straight up, lower them back to 45°, and repeat. Standing exercises include standing straight with both arms open and straight, bending down to the left and then to the right one arm at a time; and standing with hands on the hips, rotating the body from right to left and vice versa while bending forward and backward. Another option is to kneel on a bed, sitting on the legs, bend the chest forward until it touches the bed, and return to the starting position. One can also sit on a bed with the legs straight, lie back, and sit back up without using the hands. Using a chair, a person can place the arms on the sides of the chair and, with the legs extended backward, lower the body until the abdomen touches the chair. Finally, one can lie down with the legs straight, raise them to a right angle, and lower them back down. All of these exercises are drawn from an abdominal exercise journal. Abdominal exercises: Momentaneous activity One way to estimate the effectiveness of any abdominal exercise is to measure its momentary activity by electromyography (EMG), with the activity generally compared to that of the traditional crunch. However, an exercise of lower activity performed for a long time can give at least as much exercise as a high-activity one; the main difference is that a prolonged duration results more in aerobic exercise than in strength training. Abdominal exercises: The tables ranking abdominal exercises from highest to lowest EMG activity, with values expressed relative to the traditional crunch (100%), are not reproduced here. Bicycle crunch The bicycle targets the rectus abdominis and the obliques. The rectus abdominis can also be worked with the basic crunch, the vertical crunch, the reverse crunch, and the full vertical crunch; at a low enough body fat percentage (10–12% for males, 15–18% for females) the individual parts of the muscle become visible, a separation many refer to as a six-pack. Exercising the internal and external obliques can flatten the stomach. The long arm crunch, in which the arms are straightened behind the head, adds a longer lever to the move and emphasizes the upper part of the abs.
The plank exercise strengthens not only the abs but also the back, and stabilizes the core muscles. Abdominal exercises: Gadgets Abdominal exercises can also be performed with the help of machines; the captain's chair is one of the most popular machines used in gyms and health clubs. Others are the Ab Roller, the Ab Rocket Twister, the chin-up bar in conjunction with Ab Straps, and the Torso Track. An exercise ball is also a tool that helps strengthen the abs. It may be more effective than crunches on the floor because the abs do more work when the legs are not involved in the exercise. With respect to the Ab-Slide, the study performed by Bird et al. showed greater muscle activation in the upper rectus abdominis, lower rectus abdominis, and external oblique when compared to the standard abdominal crunch. The Ab-Slide has proven to be an effective tool for strengthening the abdominal muscles from a concentric muscle action perspective. However, this research does not support replacing the traditional crunch with the Ab-Slide, because the device has not demonstrated effectiveness in eccentric loading of the abdominal muscles or in the greater postural control the crunch requires. Potentially the most effective equipment for abdominal strengthening is that which offers the least stability. Examples include the CoreFitnessRoller, bodyweight suspension training such as TRX, and stability balls with or without the Halo. Safety of abdominal exercises: Abdominal exercises put some degree of compressive force on the lumbar spine, placing unwanted stress on the lower back. In addition, excessive abdominal exercise can cause respiratory problems. A study of twelve exercises concluded that no single exercise covered all abdominal muscles with high intensity and low compression. The benefit of focused training of the "deep core" muscles such as the transversus abdominis has been disputed, with some experts advocating a more comprehensive training regimen.
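Since the activity rankings mentioned above are normalized to the traditional crunch (100%), the normalization itself is easy to sketch; the EMG values in the example are placeholders, not measurements from the studies cited.

```python
def emg_relative_to_crunch(mean_emg: dict, crunch_key: str = "traditional crunch") -> dict:
    """Express each exercise's mean EMG activity as a percentage of the
    traditional crunch, the baseline used in the rankings above."""
    baseline = mean_emg[crunch_key]
    return {name: round(100.0 * value / baseline, 1)
            for name, value in mean_emg.items()}

# Placeholder readings in arbitrary EMG units (not real study data):
print(emg_relative_to_crunch({
    "traditional crunch": 100.0,
    "bicycle crunch": 148.0,
    "plank": 65.0,
}))
# {'traditional crunch': 100.0, 'bicycle crunch': 148.0, 'plank': 65.0}
```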
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scottish Fold** Scottish Fold: The Scottish Fold is a breed of domestic cat with a natural dominant gene mutation that affects cartilage throughout the body, causing the ears to "fold", bending forward and down towards the front of the head, which gives the cat what is often described as an "owl-like" appearance. Originally called lop-eared or lops after the lop-eared rabbit, Scottish Fold became the breed's name in 1966. Depending on the registry, longhaired Scottish Folds are varyingly known as Highland Fold, Scottish Fold Longhair, Longhair Fold and Coupari. Scottish Fold: All Fold cats are affected by osteochondrodysplasia (OCD), a developmental abnormality that affects cartilage and bone development throughout the body. This condition causes the ear fold in the breed, and studies point to all Fold cats being affected by it. Fold cats therefore have malformed bone structures and can develop severe painful degenerative joint diseases at an early age. Due to these health conditions, breeding Fold cats is prohibited in several countries and some major cat registries do not recognise the breed. History: Origin The original Scottish Fold was a white barn cat named Susie, who was found at a farm near Coupar Angus in Perthshire, Scotland, in 1961. Susie's ears had an unusual fold in their middle, making her resemble an owl. When Susie had kittens, two of them were born with folded ears, and one was acquired by William Ross, a neighbouring farmer and cat-fancier. Ross registered the breed with the Governing Council of the Cat Fancy (GCCF) in the United Kingdom in 1966 and started to breed Scottish Fold kittens with the help of geneticist Pat Turner. The breeding programme produced 76 kittens in the first three years – 42 with folded ears and 34 with straight ears. The conclusion from this was that the ear mutation is due to a simple dominant gene. Susie's only reproducing offspring was a female Fold named Snooks, who was also white; a second kitten was neutered shortly after birth. Three months after Snooks' birth, Susie was killed by a car. All Scottish Fold cats share a common ancestry to Susie. History: Acceptance The breed was not accepted for showing in Europe, and the GCCF withdrew registrations in 1971 due to crippling deformity of the limbs and tail in some cats and concerns about genetic difficulties and ear problems such as infection, mites, and deafness. The Folds were nonetheless exported to America, and the breed continued to be established using crosses with British Shorthairs and American Shorthairs. Since the initial concerns were raised, the Fold breed has not had the mite and infection problems, though wax buildup in the ears may be greater than in other cats. The concerns about deformities may have been caused by osteochondrodysplasia, which causes abnormalities in bone and cartilage throughout the body. History: Popularity The rare distinctive physical traits of the breed, combined with their reputation as unusually loving companions, make Folds highly sought-after pets, with Fold kittens typically costing considerably more than kittens of more common breeds. Scottish Folds are also popular among celebrities, one of them being American singer Taylor Swift, who owns two Scottish Fold cats named Meredith Grey and Olivia Benson, after the titular character of the medical drama series Grey's Anatomy and the protagonist of the police drama series Law & Order: Special Victims Unit, respectively.
Breeding ban: In order to protect animal welfare, several countries and states have prohibited breeding with Scottish Folds, including the Netherlands in 2014, Austria in 2020, Flanders (Belgium) in 2021, Victoria (Australia), Norway in 2023, and even the birthplace of the breed, Scotland. Some countries have also banned the sale of Scottish Fold cats, or breeding with any cat that bears the gene mutation resulting in osteochondrodysplasia, which includes even some Scottish Straights. Potential parent cats can be tested for this osteochondrodysplasia mutation before breeding. Furthermore, some of the major cat registries, such as the GCCF and FIFé, do not recognise, nor allow the registration of, Scottish Folds due to their health issues. Characteristics: Ears Scottish Fold kittens that do not develop folded ears are known as Scottish Straights. The original cats had only one fold in their ears, but through selective breeding, breeders have increased the fold to a double or triple crease that causes the ear to lie totally flat against the head. Characteristics: The breed's distinctive folded ears are produced by an incompletely dominant gene that affects the cartilage of the ears, causing the ears to fold forward and downward, giving a cap-like appearance to the head. Smaller, tightly folded ears set in a cap-like fashion are preferred to a loose fold and larger ear. The large, round eyes and rounded head, cheeks, and whisker pads add to the overall rounded appearance. Despite the folded ears, Folds still use their aural appendages to express themselves—the ears swivel to listen, lie back in anger and prick up when the treat bag rustles. Characteristics: Body The Scottish Fold is a medium-to-large-sized cat, which can come in any colour, even calico. Males typically weigh 4–6 kg (8.8–13.2 lb), and females weigh 2.7–4 kg (6.0–8.8 lb). The Fold's entire body structure, especially the head and face, is generally rounded, and the eyes large and round. The nose is short with a gentle curve, and the cat's body is well-rounded with a padded look and medium-to-short legs. The head is domed at the top, and the neck very short. The broadly spaced eyes give the Scottish Fold a "sweet expression". The Scottish Fold's ears are folded, hence the name "Scottish Fold". Characteristics: Coat Scottish Folds can be either long- or short-haired, and they may have nearly any coat colour or combination of colours (including white). Shorthair Scottish Folds have thick and soft fur, with longhair Folds having longer and exceptionally dense fur around their upper thighs, toes, ears, and tail. Characteristics: Temperament Scottish Folds, whether with folded ears or with normal ears, are typically good-natured and placid and adjust to other animals within a household extremely well. They tend to become very attached to their human caregivers and are by nature quite affectionate. Folds also receive high marks for playfulness, grooming and intelligence. Scottish Folds like to be outdoors and enjoy outdoor games and activities. They strongly dislike being left alone. Characteristics: Habits Folds are known for sleeping on their backs. Scottish Folds typically have soft voices and display a complex repertoire of meows and purrs not found in better-known breeds. Folds are also known for sitting with their legs stretched out and their paws on their belly, a posture called the "Buddha Position". Genetics: An early study suggested that the fold is inherited as an autosomal dominant trait.
A later study suggested incomplete dominance. A cat with folded ears may have either one (heterozygous) or two copies (homozygous) of the dominant fold gene (Fd). A cat with normal ears should have two copies of the normal gene (fd). Mating a homozygous fold with any cat will produce all folds, but because homozygous folds are prone to severe health issues, breeding for them is generally considered unethical. A homozygous-to-normal mating will produce only heterozygous folds, but presumably in ethical breeding programs there will be no homozygous cats available to breed from. The only generally accepted mating, a heterozygous fold with a straight-eared cat, gives a 50% chance of producing heterozygous folds and a 50% chance of producing progeny with normal genes. Genetics: There is suspicion that some non-fold litters are genetically heterozygous folds that, because of very low expression of the gene, appear to be straight-eared. Such kittens may develop folded ears initially, which then straighten back out. Because of this, some breeders suggest avoiding mating Folds with straight-eared Scottish Folds and using only British Shorthairs (BSH) as an outcross. If Scottish Shorthairs are to be used, they should be test-mated to a BSH to make sure that they are not genetically folds. If such apparently straight-eared cats are mated with a fold, there is a 75% chance of folds (25% homozygous folds, 50% heterozygous folds) and a 25% chance of straight ears. Genetics: In 2016 the genetic mutation responsible for the folded ears and the osteochondrodysplasia (OCD) was identified. It was found in a gene encoding a calcium-permeable ion channel, transient receptor potential cation channel subfamily V member 4 (TRPV4). The mutation is a V342F substitution (c.1024G>T) in the fifth ankyrin repeat within the N-terminal cytoplasmic domain. It was also found in a human patient with metatropic dysplasia. Health: The typical lifespan of a Scottish Fold is 15 years. Scottish Folds are susceptible to polycystic kidney disease (PKD) and cardiomyopathy. Scottish Folds are also prone to degenerative joint disease (a type of arthritis), most commonly affecting the tail, ankles, and knees, which can result in reduced range of motion. Health: Osteochondrodysplasia Osteochondrodysplasia (OCD) is a developmental abnormality that affects cartilage and bone development throughout the body. This condition causes the ear fold in the breed and, in studies conducted so far, all Fold cats are affected by it. Homozygous Folds are affected by malformed bone structures and develop severe painful degenerative joint diseases at an early age. Some breeders claim that this condition also affects heterozygous Folds, but usually to a much lesser extent and at a later age, and that some will be asymptomatic; there is no scientific proof of this claim. Health: In a study by Rorden, four radiologists, blinded to the ear phenotype, assessed radiographs of 22 Scottish Fold/Straight cats. All cats were genotyped, showing the heterozygous mutation in all folded-ear cats but not in straight-eared cats. On average, each reviewer gave the folded-ear cats a worse "severity score"; however, the images showed much milder signs than previously published. The authors state that the severity of OCD in heterozygous cats is very variable and subtle. This could be due to other modifier genes or nurture (climate, diet, exercise). Indeed, the least affected folded-ear cat was given a score identical to or lower than that of the highest-rated straight-eared cat.
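The cross probabilities described under Genetics can be reproduced with a one-locus Punnett-square calculation; this is a minimal sketch with illustrative allele labels ("Fd" for the dominant fold allele, "fd" for the normal allele).

```python
from itertools import product
from collections import Counter

def cross(parent1: tuple, parent2: tuple) -> dict:
    """Offspring genotype distribution for a single-locus cross.
    Each parent is a pair of alleles, e.g. ('Fd', 'fd')."""
    counts = Counter(tuple(sorted(pair)) for pair in product(parent1, parent2))
    total = sum(counts.values())
    return {genotype: n / total for genotype, n in counts.items()}

# Accepted mating: heterozygous fold x straight-eared cat.
print(cross(("Fd", "fd"), ("fd", "fd")))
# {('Fd', 'fd'): 0.5, ('fd', 'fd'): 0.5}  -> 50% folds, 50% straights

# Hidden heterozygote x fold, as described above: 75% folds overall.
print(cross(("Fd", "fd"), ("Fd", "fd")))
# {('Fd', 'Fd'): 0.25, ('Fd', 'fd'): 0.5, ('fd', 'fd'): 0.25}
```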
Health: A case study by Takanosu describes two Scottish Fold mixed cats with severe exostosis of the hind leg. Interestingly, both cats were homozygous for the TrpV4 mutation, implying that both parental cats carried the c.1024G>T mutation in the TrpV4 gene. This reinforces the hypothesis that mostly homozygous Scottish Folds are severely affected. It is concerning, however, that Scottish Fold cats are still bred with each other; breeding with other cat breeds with skeletal abnormalities (Munchkin, American Curl) should also be avoided. Health: While ethical breeders breed Fold/non-Fold and not Fold/Fold (in the same way Munchkins are bred) to avoid producing homozygous Folds, because heterozygous Folds can also develop progressive arthritis of varying severity, some researchers recommend abandoning the breeding of Fold cats entirely. For this reason, the breed is not accepted by either the Governing Council of the Cat Fancy or the Fédération Internationale Féline (FIFé). Health: CFA breeders have stated that using only Fold to non-Fold breeding has eliminated problems with stiff tails, shortened tails and bone lesions. In the FIFé discussion, the representative for British breeders claimed that they were not seeing the problem in their cats, and that the study showing that all heterozygous cats also have the condition had a small sample size. An offer of free X-ray radiography was presented to 300 breeders to find a Fold cat with healthy hind legs, but it was never taken up. A similar offer was set up by the World Cat Federation together with researcher Leslie Lyons, but there was also no response. FIFé stated that it will not consider recognizing Scottish Folds if breeders will not allow their breed to be scrutinized. In a report on Scottish Folds, the Breed Standards Advisory Council (BSAC) for New Zealand Cat Fancy (NZCF) states that "Breeders may not have appreciated the strength of the evidence that heterozygous cats can and do develop [feline] OCD." While research shows that all heterozygous Folds develop OCD, and anecdotal evidence shows that heterozygous Folds can and do develop OCD, these findings do not show whether mildly affected parents are more likely to have mildly affected offspring. They also do not show what percentage of Folds are severely affected. The report states that there is not enough information to justify banning Scottish Fold matings, but enough to justify a level of concern. Recommended guidelines include: A requirement for periodic vet examination of breeding cats for any evidence of lameness, stiffness, or pain—breeding cats with signs to be desexed. Health: A requirement for periodic X-rays of breeding cats and comparison of X-ray evidence with clinical symptoms, possibly leading to a requirement that cats with a specified degree of skeletal change be desexed. Requesting the agreement of pet owners to be periodically contacted by the NZCF or by a researcher, to provide reports about the health of their cat. All information to be reported/submitted to the BSAC to allow information to be collated to give an overall picture of FOCD in Scottish Folds in NZ. Requirements to be in place for a minimum of 5 years to enable tracking of the health of Folds over time. The Cat Who Went to Paris: The short novel The Cat Who Went to Paris by Peter Gethers features "the most famous Scottish Fold" according to Grace Sutton of The Cat Fanciers' Association.
The book documents the life of Gethers and his Fold, Norton, from their first meeting to Norton's eventual death and Gethers' experiences after the loss.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Venture round** Venture round: A venture round is a type of funding round used for venture capital financing, by which startup companies obtain investment, generally from venture capitalists and other institutional investors. The availability of venture funding is among the primary stimuli for the development of new companies and technologies. Features: Parties Founders or stakeholders, who introduce companies to investors. Features: A lead investor, typically the best known or most aggressive venture capital firm that is participating in the investment, or the one contributing the largest amount of cash. The lead investor typically oversees most of the negotiation, legal work, due diligence, and other formalities of the investment. It may also introduce the company to other investors, generally in an informal unpaid capacity. Features: Co-investors, other major investors who contribute alongside the lead investor. Follow-on or piggyback investors, typically angel investors, high-net-worth individuals, family offices, institutional investors, and others who contribute money but take a passive role in the investment and company management. Law firms and accountants are typically retained by all parties to advise, negotiate, and document the transaction. Stages in a venture round Introduction. Investors and companies seek each other out through formal and informal business networks, personal connections, paid or unpaid finders, researchers and advisers, and the like. Because there are no public exchanges listing their securities, private companies meet venture capital firms and other private equity investors in several ways, including warm referrals from the investors' trusted sources and other business contacts; investor conferences and symposia; and summits where companies pitch directly to investor groups in face-to-face meetings, including a variant known as "Speed Venturing", which is akin to speed dating for capital, where the investor decides within 10 minutes whether s/he wants a follow-up meeting. Features: Offering. The company provides the investment firm a confidential business plan to secure initial interest. Private placement memorandum. A PPM/prospectus is generally not used in the Silicon Valley model. Negotiation of terms. Non-binding term sheets, letters of intent, and the like are exchanged back and forth as negotiation documents. Once the parties agree on terms, they sign the term sheet as an expression of commitment. Features: Signed term sheet. These are usually non-binding and commit the parties only to good faith attempts to complete the transaction on specified terms, but may also contain some procedural promises of limited (30- to 60-day) duration like confidentiality, exclusivity on the part of the company (i.e. the company will not seek funding from other sources), and stand-still provisions (e.g. the company will not undertake any major business changes or enter agreements that would make the transaction infeasible). Features: Definitive transaction documents. A drawn-out (usually 2–4 weeks) process of negotiating and drafting a series of contracts and other legal papers used to implement the transaction. In theory, these simply follow the terms of the term sheet. In practice they contain many important details that are beyond the scope of the major deal terms. Definitive transaction documents are not required in all situations, specifically where the parties have entered into a separate agreement that does not require the parties to execute all such documents.
Features: Definitive documents, the legal papers that document the final transaction. These generally include: Stock purchase agreements – the primary contract by which investors exchange money for newly minted shares of preferred stock. Buy-sell agreements, co-sale agreements, rights of first refusal, etc. – agreements by which company founders and other owners of common stock agree to limit their individual ability to sell their shares in favor of the new investors. Investor rights agreements – covenants the company makes to the new investors, generally including promises with respect to board seats, negative covenants not to obtain additional financing, sell the company, or make other specified business and financial decisions without the investors' approval, and positive covenants such as inspection rights and promises to provide ongoing financial disclosures. Amended and restated articles of incorporation – formalize issues like authorization and classes of shares and certain investor protections. Due diligence. Simultaneously with negotiating the definitive agreements, the investors examine the financial statements and books and records of the company, and all aspects of its operations. They may require that certain matters be corrected before agreeing to the transaction, e.g. new employment contracts or stock vesting schedules for key executives. At the end of the process the company offers representations and warranties to the investors concerning the accuracy and sufficiency of the company's disclosures, as well as the existence of certain conditions (subject to enumerated exceptions), as part of the stock purchase agreement. Features: Final agreement occurs when the parties execute all of the transaction documents. This is generally when the funding is announced and the deal considered complete, although there are often rumors and leaks. Features: Closing occurs when the investors provide the funding and the company provides stock certificates to the investors. Ideally this would be simultaneous, and contemporaneous with the final agreement. However, conventions in the venture community are fairly lax with respect to timing and formality of closing, and generally depend on the goodwill of the parties and their attorneys. To reduce cost and speed up transactions, formalities common in other industries such as escrow of funds, signed original documents, and notarization are rarely required. This creates some opportunity for incomplete and erroneous paperwork. Some transactions have "rolling closings" or multiple closing dates for different investors. Others are "tranched," meaning the investors only give part of the funds at a time, with the remainder disbursed over time subject to the company meeting specified milestones. Features: Post-closing. After the closing a few things may occur: Conversion of convertible notes. If there are outstanding notes, they may convert at or after closing. Features: Securities filings with relevant state and/or federal regulators. Filing of amended articles of incorporation. Preparation of a closing binder, containing documentation of the entire transaction. Rights and privileges Venture investors obtain special privileges that are not granted to holders of common stock. These are embodied in the various transaction documents.
Common rights include: Anti-dilution protection – if the company ever sells a significant amount of stock at a price lower than the investor paid, then to protect investors against stock dilution they are issued additional shares (usually by changing the "conversion ratio" used to calculate their liquidation preference). Features: Guaranteed board seats Positive and negative covenants by the company Registration right – the investors have special rights to demand registration of their stock on public exchanges, and to participate in an initial public offering and subsequent public offerings Representations and warranties as to the state of the company Liquidation preferences – in any liquidation event such as a merger or acquisition, the investors get their money back, often with interest and/or at a multiple, before common stock is paid any funds from liquidation. The preference may be "participating", in which case the investors get their preference and their proportionate share of the surplus, or "non-participating" in which case the preference is a floor. Features: Dividends – dividend amounts are usually stated but not mandatory on the part of the company, except that the investors will get their dividends before any dividends may be declared for common stock. Most venture-backed start-ups are initially unprofitable so dividends are rarely paid. Unpaid dividends are generally forgiven but they may be accumulated and are added to the liquidation preference. Round names: Venture capital financing rounds typically have names relating to the class of stock being sold: A pre-seed or angel round is the earliest infusion of capital by founders, supporters, high net worth individuals ("angel investors"), and sometimes a small amount of institutional capital to launch the company, build a prototype, and discover initial product-market fit. Round names: Seed round is generally the first formal equity round with an institutional lead. The series seed can be priced, meaning investors purchase preferred stock at a valuation set by the lead investor, or take the form of convertible note or simple agreement for future equity (SAFE) that can be converted at a discount to preferred shares at the first priced round. A Seed round is often used to demonstrate market traction in preparation for the Series A. Although in the past seed rounds were mainly reserved for pre-revenue companies, as of 2019 two-thirds of companies raising seed rounds already had revenues. Round names: Series A, Series B, Series C, etc. priced equity rounds. Generally, the progression and price of stock at these rounds is an indication that a company is progressing as expected. Investors may become concerned when a company has raised too much money in too many rounds, considering it a sign of delayed progress. Series A', B', and so on. Indicate small follow-on rounds that are integrated into the preceding round, generally on the same terms, to raise additional funds. Round names: Series AA, BB, etc. Once used to denote a new start after a crunchdown or downround, i.e. the company failed to meet its growth objectives and is essentially starting again under the umbrella of a new group of funders. 
Increasingly, however, Series AA preferred stock investment rounds are used, along with convertible note financings or other "lightweight" preferred stock financings such as "Series Seed" or "Series AA" preferred stock, to support less capital-intensive business growth, as their simplicity and generally lower legal costs can be attractive to early investors and founders. Round names: Mezzanine finance rounds, bridge loans, and other debt instruments are used to support a company between venture rounds or before its initial public offering.
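To make the liquidation preference mechanics described under "Rights and privileges" concrete, here is a minimal sketch of participating versus non-participating payouts. It assumes a single preferred class with no accrued dividends, interest, or participation cap; the function name and the numbers in the usage example are illustrative.

```python
def liquidation_payouts(exit_value: float, invested: float,
                        preference_multiple: float,
                        preferred_ownership: float,
                        participating: bool) -> tuple:
    """Split exit proceeds between preferred and common holders under a
    simplified liquidation preference (single class, no dividends/cap)."""
    preference = min(exit_value, invested * preference_multiple)
    remainder = exit_value - preference
    if participating:
        # Participating: the preference plus a pro-rata share of the surplus.
        preferred = preference + remainder * preferred_ownership
    else:
        # Non-participating: the preference acts as a floor; investors take
        # the better of the preference or converting and sharing pro rata.
        preferred = max(preference, exit_value * preferred_ownership)
    return preferred, exit_value - preferred

# Example: $10M invested at a 1x preference for 40% ownership; $50M exit.
print(liquidation_payouts(50e6, 10e6, 1.0, 0.40, participating=False))
# (20000000.0, 30000000.0) -> converting beats the $10M preference floor.
print(liquidation_payouts(50e6, 10e6, 1.0, 0.40, participating=True))
# (26000000.0, 24000000.0) -> $10M preference + 40% of the remaining $40M.
```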
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lightweighting** Lightweighting: Lightweighting is a concept in the auto industry of building cars and trucks that weigh less as a way to achieve better fuel efficiency and handling. Carmakers make parts from carbon fiber, windshields from plastic, and bumpers out of aluminum foam as ways to lessen vehicle load. Replacing car parts with lighter materials does not lessen overall safety for drivers, according to one view, since many plastics have a high strength-to-weight ratio. The search to replace car parts with lighter ones is not limited to any one type of part; according to a spokesman for Ford Motor Company, engineers strive for lightweighting "anywhere we can." Using lightweight materials such as plastics can mean less strain on the engine and better gas mileage as well as improved handling. One material sometimes used to reduce weight is carbon fiber. The auto industry has used the term for many years, as the effort to keep making cars lighter is ongoing. Another common material used for lightweighting is aluminum. The use of aluminum has grown continuously, not only to meet CAFE standards but also to improve automotive performance. A vehicle with lower weight has better acceleration, braking and handling. In addition, lighter vehicles can tow and haul larger loads because the engine is not carrying unnecessary weight. A lightweighting magazine finds: "Even though aluminum is light, it does not sacrifice strength. Aluminum body structure is equal in strength to steel and can absorb twice as much crash-induced energy." Many other materials are used to meet lightweighting goals.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**C Puppis** C Puppis: The Bayer designations c Puppis and C Puppis are distinct and refer to two different stars in the constellation Puppis: c Puppis (HD 63032) and C Puppis (HD 53704).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Instant coffee** Instant coffee: Instant coffee is a beverage derived from brewed coffee beans that enables people to quickly prepare hot coffee by adding hot water or milk to coffee solids in powdered or crystallized form and stirring. The product was invented in Invercargill, the largest city in Southland, New Zealand, in 1890. Instant coffee solids (also called soluble coffee, coffee crystals, coffee powder, or powdered coffee) refer to the dehydrated and packaged solids available at retail that are used to make instant coffee. Instant coffee solids are commercially prepared by either freeze-drying or spray drying, after which they can be rehydrated. Instant coffee in a concentrated liquid form, as a beverage, is also manufactured. Advantages of instant coffee include speed of preparation (instant coffee dissolves quickly in hot water), lower shipping weight and volume than beans or ground coffee (to prepare the same amount of beverage), and long shelf life—though instant coffee can spoil if not kept dry. Instant coffee also reduces cleanup since there are no coffee grounds, and at least one study has found that it has a lower environmental footprint than drip filter coffee and capsule espresso coffee, on a prepared-beverage basis, disregarding quality and appeal of the beverage produced. History: Instant or soluble coffee was invented and patented in 1890 by David Strang of Invercargill, New Zealand, under patent number 3518, and sold under the trading name Strang's Coffee, citing the patented "Dry Hot-Air" process. Some modern sources have credited French humorist and writer Alphonse Allais with the invention. The invention was previously attributed to Satori Kato, a Japanese scientist working in Chicago in 1901. Kato introduced the powdered substance in Buffalo, New York, at the Pan-American Exposition. George Constant Louis Washington developed his own instant coffee process shortly thereafter, and first marketed it commercially (1910). The Nescafé brand, which introduced a more advanced coffee refining process, was launched in 1938. History: High-vacuum freeze-dried coffee was developed shortly after World War II, as an indirect result of wartime research into other areas. The National Research Corporation (NRC) was formed in Massachusetts as a process-development company employing high-vacuum technology. It developed high-vacuum processes to produce penicillin, blood plasma, and streptomycin for US military use. As the war ended, NRC looked to adapt its processes for peacetime uses. It formed Florida Foods Corporation to produce concentrated orange juice powder, originally selling its product to the United States Army. That company later changed its name to Minute Maid. History: A concentrated coffee/milk/sugar mixture was produced for the Union army during the American Civil War under the name Essence of Coffee, a teaspoonful of which was mixed with a cup of hot water. It had the consistency of axle grease, and proved so unpopular with the troops that it was soon discontinued. The brand Camp Coffee, a coffee and chicory essence, was first produced in 1876 by Paterson & Sons Ltd in Scotland. Use: Close to 50% of the world's green coffee is used to produce instant coffee. As food Instant coffee is available in powder or granulated form contained in glass and plastic jars, sachets, or tins. The user controls the strength of the resulting product by adding less or more powder or granules to the water. Instant coffee is also convenient for preparing iced coffee like the Greek frappé.
Use: In some countries, such as Portugal, Spain, and India, instant coffee is commonly mixed with hot milk instead of boiling water. In other countries, such as South Korea, instant coffee commonly comes pre-mixed with non-dairy creamer and sugar and is called "coffee mix". Said to have been popularised in the UK by GIs during World War II, instant coffee still accounts for over 75 percent of coffee bought to drink in British homes, as opposed to well under 10 percent in the U.S. and France and one percent in Italy. In the United Kingdom, instant coffee granules are sometimes used to enhance the flavour of sauces used in preparing spaghetti Bolognese. Use: Non-food use: Instant coffee is one of the ingredients in Caffenol, a home-made, non-toxic black-and-white photographic developer. The other ingredients in the basic formula are ascorbic acid (vitamin C) and anhydrous sodium carbonate; some recipes also include potassium bromide as a fog-reducing agent. The active ingredient appears to be caffeic acid. Initial experiments on Caffenol were performed in 1995 at the Rochester Institute of Technology; the addition of ascorbic acid began around 2000, yielding the improved Caffenol-C, which is less likely to stain negatives than the original formulation. Experiments have shown that cheaper, less desirable brands of coffee work better for this application than more expensive brands. Production: As with regular coffee, the green coffee bean itself is first roasted to bring out flavour and aroma. Rotating cylinders containing the green beans and hot combustion gases are used in most roasting plants. Roasting begins when the bean temperature reaches 165 °C (329 °F) and takes about 8–15 minutes to complete. After cooling, the beans are ground finely, to pieces of 0.5 to 1.1 millimetres (0.020 to 0.043 in). Up to this point, the process is generally the same as for other types of coffee. Production: Extraction: To produce instant coffee, the soluble and volatile contents of the beans, which provide the coffee aroma and flavor, have to be extracted. This is done using pressurized water heated to around 175 °C (347 °F). The coffee concentration in the liquid is then increased by either evaporation or by freeze concentration. Freeze drying: The basic principle of freeze drying is the removal of water by sublimation. Since the mass production of instant coffee began in post-WWII America, freeze-drying has grown in popularity to become a common method. Although it is more expensive, it generally results in a higher-quality product. The coffee extract is rapidly frozen and broken into small granules. (Slower freezing would lead to larger ice crystals and a porous product; it can also affect the colour of the coffee granules.) The granules are sifted and sorted by size. The frozen coffee granules are placed in the drying chamber, often on metal trays, and a vacuum is created within the chamber. The strength of the vacuum is critical to the speed of drying and therefore to the quality of the product, so care must be taken to produce a vacuum of suitable strength. The drying chamber is warmed, most commonly by radiation, but conduction is used in some plants and convection has been proposed in some small pilot plants. A possible problem with convection is uneven drying rates within the chamber, which would give an inferior product. During sublimation, the previously frozen water in the coffee granules expands to ten times its previous volume.
The removal of this water vapor from the chamber is vitally important, making the condenser the most critical and expensive component in a freeze-drying plant. The freeze-dried granules are removed from the chamber and packaged. Spray drying: Spray drying is preferred to freeze-drying in some cases because it allows economical larger-scale production and shorter drying times, and because it produces fine, rounded particles. Production: The process produces spherical particles about 300 micrometres (0.012 in) in size with a density of 0.22 g/cm3. To achieve this, nozzle atomization is used. Various nozzle-atomization methods can be used, each with its own advantages and disadvantages. High-speed rotating wheels operating at about 20,000 rpm are able to process up to 6,000 pounds (2,700 kg) of solution per hour. The use of spray wheels requires that the drying towers have a wide radius to prevent the atomized droplets from collecting on the drying chamber walls. Production: Drying is completed in 5–30 seconds, depending on factors such as heat, particle size, and chamber diameter. Production: The moisture content falls from 75–85% at the inlet to 3–3.5% at the outlet, with air temperatures of 270 °C (518 °F) in and 110 °C (230 °F) out (see the worked mass-balance sketch at the end of this entry). One drawback with spray drying is that the particles it produces are too fine to be used effectively by the consumer; they must first be fused, either with steam in towers similar to spray dryers or by belt agglomeration, to produce particles of suitable size. Production: Decaffeination: In commercial processes, the decaffeination of instant coffee almost always happens before the critical roasting process that determines the coffee's flavour and aroma. Byproducts: The main byproduct of the instant coffee production process is spent coffee grounds. These grounds can be used as biomass, for example to produce heat used in the manufacturing process. Roughly twice as much spent coffee grounds is generated, by mass, as soluble coffee produced. Composition: The caffeine content of instant coffee is generally less than that of brewed coffee. One study comparing various home-prepared samples found that regular instant coffee (not decaffeinated) has a median caffeine content of 66 mg per cup (range 29–117 mg per cup), with a median cup size of 225 ml (range 170–285 ml) and a caffeine concentration of 328 µg/ml (range 102–559 µg/ml). In comparison, drip or filter coffee was estimated to have a median caffeine content of 112 mg, with a median concentration of 621 µg/ml for the same cup size. Regarding antioxidants, the polyphenol content of a 180 ml cup of instant coffee has been estimated at approximately 320 mg, compared to approximately 400 mg in a cup of brewed coffee of the same size. Malabsorption: Instant coffee decreases intestinal iron absorption more than drip coffee. One study estimated that, when a cup of instant coffee was ingested with a meal composed of semipurified ingredients, intestinal iron absorption was reduced from 5.88% to 0.97%, compared to an absorption of 1.64% with drip coffee. It was also estimated that, when the strength of the instant coffee was doubled, intestinal iron absorption fell to 0.53%. However, there is no decrease in iron absorption when instant coffee is consumed 1 hour before a meal, but the same degree of inhibition as with simultaneous ingestion occurs when instant coffee is taken 1 hour after a meal.
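The inlet and outlet moisture figures quoted for spray drying above imply that most of the feed's mass leaves the dryer as evaporated water. A minimal mass-balance sketch of that arithmetic follows; only the moisture percentages come from the article, while the 1000 kg feed quantity and the midpoint values are assumed round numbers for illustration:

```python
# Worked mass-balance sketch for the spray-drying figures quoted above
# (moisture roughly 75-85% in, 3-3.5% out, wet basis). The feed quantity
# is an assumed round number; only the percentages come from the article.

def water_removed(feed_kg: float, moisture_in: float, moisture_out: float) -> float:
    """Kilograms of water evaporated when drying feed_kg of coffee extract
    from moisture_in to moisture_out (wet-basis mass fractions)."""
    solids = feed_kg * (1.0 - moisture_in)   # dissolved coffee solids are conserved
    product = solids / (1.0 - moisture_out)  # product mass at the outlet moisture
    return feed_kg - product

feed = 1000.0  # assumed: 1000 kg of extract entering the dryer
evaporated = water_removed(feed, moisture_in=0.80, moisture_out=0.0325)
print(f"~{evaporated:.0f} kg of water evaporated, ~{feed - evaporated:.0f} kg of powder")
# -> ~793 kg of water evaporated, ~207 kg of powder
```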
Regulation: In the European Union, regulations cover the species of coffee bean, geographical origin, processing details, year of crop, solvents used in decaffeination, and caffeine level. Various institutions govern the coffee industry, help to achieve standardization, and release information to the public; they include the International Coffee Organization (London), the Codex Alimentarius Commission of the UN (Rome), and the National Coffee Association (New York).
**Nuclear Instruments and Methods in Physics Research** Nuclear Instruments and Methods in Physics Research: Nuclear Instruments and Methods in Physics Research (Nucl. Instrum. Methods Phys. Res.) is a peer-reviewed scientific journal published by Elsevier. It was established in 1957 as Nuclear Instruments. It focuses on descriptions of detectors and on data-analysis methods. History: The journal has been published under three successive titles: Nuclear Instruments (1957–1958), Nuclear Instruments and Methods (1959–1981), and Nuclear Instruments and Methods in Physics Research (1981–present). Since 1984 it has appeared in two sections: Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment (1984–present) and Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms (1984–present).
**Leg side** Leg side: The leg side, or on side, is one half of the field on which the sport of cricket is played. It is the side of the field that corresponds to the batsman's non-dominant hand, from their perspective. From the point of view of a right-handed batsman facing the bowler, it is the left-hand side of the cricket field (being to the bowler's right). With a left-handed batsman the on side is to the batsman's right (and to the bowler's left). Leg side: A cricket field is notionally divided into two halves by an imaginary line running down the long axis of the pitch. In normal batting stance, the striking batsman stands side on to the bowler. The leg side is the half of the field behind the batsman; the half of the field in front of him is called the off side. Leg side: Facing the bowler sideways on, the right-handed batsman has his legs on the leg side; if the ball goes down that side of the pitch it will be "on" the batsman's legs, hence the on side. Leg side: The definition is relative to the batsman. If the batsman were to directly face the bowler, the leg side would be on the left side for a right-handed batsman, but on the right side for a left-handed batsman. The leg side is usually less well defended with fielders than the off side, because of the typical line of attack of the bowlers, which is frequently on or outside off stump. This makes it more difficult to hit the ball to the leg side because it involves swinging the bat across the line of the ball, which can lead to mishits and catches. Leg side: While the terms "leg side" and "on side" can refer to an entire half of the field, each term is often used to denote only part of this half. When the batsman plays the ball into this half in front of the wicket, it is usually said that the ball has been played to the on side. However, when the ball is played into the region level with or behind the wicket, it is said that the ball has been played to the leg side. The names of fielding positions often include the words "leg" or "on", and they reflect this convention. For example, fine leg is located behind the wicket, whereas mid on is located in front of it. When the batsman steps backwards from his normal batting stance on the crease as the ball is bowled, he is said to be moving towards the leg side. Comparison with baseball: Since the leg side comprises the half of the field behind the batsman, with a right-handed batsman it is roughly analogous to the half of the baseball field that includes left field and third base. With a left-handed batsman, the leg side is analogous to the half that includes right field and first base. Thus hitting to the leg side is directly visually analogous to "pull" hitting in baseball (though since all fair territory in baseball is forward of the batter, "on" would more exactly match this area of the field). Conversely, off is analogous to baseball's "opposite-field" hitting.
**Secondary glaucoma** Secondary glaucoma: Secondary glaucoma is a collection of progressive optic nerve disorders associated with a rise in intraocular pressure (IOP) which results in the loss of vision. In clinical settings, it is defined as the occurrence of IOP above 21 mmHg requiring the prescription of IOP-managing drugs. It can be broadly divided into two subtypes, secondary open-angle glaucoma and secondary angle-closure glaucoma, depending on whether the angle between the cornea and the iris is open or closed. Principal causes of secondary glaucoma include optic nerve trauma or damage, eye disease, surgery, neovascularization, tumours and use of steroid and sulfa drugs. Risk factors for secondary glaucoma include uveitis, cataract surgery and also intraocular tumours. Common treatments are designed according to the type (open-angle or angle-closure) and the underlying causative condition, in addition to the consequent rise in IOP. These include drug therapy, the use of miotics, surgery or laser therapy. Pathophysiology: Secondary glaucoma has different forms based on the varying underlying ocular conditions. These conditions result in an increase in IOP that manifests as secondary glaucoma. Pathophysiology: Paediatric congenital cataract-associated glaucoma: Based on the onset of secondary glaucoma in paediatric patients, it can be classified into early-stage and late-stage glaucoma cases. Early-stage secondary glaucoma, observed as angle-closure glaucoma, results from blockage and inflammation of the peripheral anterior synechiae. However, early-stage secondary glaucoma rarely occurs now that anti-inflammatory medications are readily prescribed. On the other hand, late-stage glaucoma is commonly associated with open-angle glaucoma, but the mechanisms are currently unconfirmed; it is believed to be closely related to the onset of trabeculitis or vitreous toxicity. In paediatric congenital cataract patients under the age of two, cataract surgery is frequently employed as the primary treatment. There are two therapeutic approaches: primary and secondary intraocular lens (IOL) implantation. In primary IOL implantation, cataract surgery is performed alongside immediate implantation of the IOL. In secondary IOL implantation, the patient is prescribed aphakic glasses or contact lenses until the IOL is implanted after a variable period of months to years. Primary IOL implantation is observed to significantly reduce the occurrence of secondary glaucoma in paediatric patients under the age of two. Pathophysiology: Herpetic anterior uveitis-associated glaucoma: In patients diagnosed with herpetic anterior uveitis, elevated IOP and secondary glaucoma are often detected. This is due to two main factors: blockage of aqueous outflow resulting from inflammation of the trabecular meshwork structures, and sedimentation of inflamed cells. Specifically for viral anterior uveitis, patients with IOP levels above 30 mmHg often suffer from secondary glaucoma caused by cytomegalovirus. Pathophysiology: Other forms of secondary glaucoma include the following. Pigmentary glaucoma: In pigmentary glaucoma, obstruction of the trabecular meshwork caused by iris pigment release results in increased IOP.
This release of iris pigment occurs as a result of the interaction of a flaccid iris with the zonular fibres. Exfoliation syndrome: Exfoliation syndrome is a classic cause of secondary open-angle glaucoma; a common symptom is a cloudy layer on the anterior lens capsule. Aphakic and pseudophakic glaucoma: Aphakic glaucoma is a common side-effect of cataract surgery that causes an increase in IOP. Corticosteroid-induced glaucoma: Corticosteroid use is a risk factor for the development of secondary glaucoma, as increased IOP has been observed as a drug side-effect. Post-traumatic glaucoma: Trauma to the eye is often observed to cause secondary glaucoma. The incidence is notably higher in populations with increased levels of physical activity. Ghost-cell glaucoma: Ruptured red blood cells release haemoglobin in the form of Heinz bodies, which can markedly increase the IOP. Inflammatory glaucoma: The inflammatory reaction affects the drainage of aqueous humour in the eye, causing an increase in IOP. Glaucoma associated with ocular tumours: Although each tumour subtype has its own mechanism of causing secondary glaucoma, the general cause is restriction of the trabecular meshwork resulting in obstruction of aqueous humour flow. Increased episcleral venous pressure: According to the Goldmann equation (IOP = F/C + EVP, where F is the rate of aqueous humour formation and C the outflow facility), IOP is directly related to episcleral venous pressure (EVP); an increase in EVP therefore results in an increase in IOP. Pathophysiology: Neovascular glaucoma: As a consequence of neovascularisation, the formation of new blood vessels and supporting connective structures, there is blockage of the anterior chamber angle. This leads to elevation of IOP, causing neovascular glaucoma. Epidemiology: The overall prevalence of secondary glaucoma across China between 1990 and 2015 was reported to be 0.15%, lower than the overall estimate for East Asia (0.39%). Epidemiology: Varying forms of secondary glaucoma: Pigmentary glaucoma has a lower incidence in Black and Asian populations than in White populations, because their characteristically thicker irises make pigment release less likely. The incidence of secondary glaucoma caused by exfoliation syndrome is estimated at approximately 10% of the glaucoma patient population in the United States and over 20% of the patient population in Iceland and Finland. In populations above the age of 40, neovascular glaucoma has a worldwide prevalence of 0.4%. The incidence of pigmentary glaucoma decreases with age, while in exfoliation syndrome the incidence increases with age. However, given the derived nature of secondary glaucoma, there may be no significant association between age, ethnicity or gender and the prevalence of the condition. Secondary glaucoma after congenital cataract surgery is found in 6–24% of reported cases; it was observed in 9.5% of cases with primary IOL implantation and in 15.1% of patients with aphakia and secondary IOL implantation. In children with cataract in both eyes, the incidence risk with primary IOL implantation is lower than with secondary IOL implantation or aphakia; however, this difference is not observed in the general population or in populations with cataract in one eye. Due to the lack of concrete and specific epidemiological evidence, further research is required to accurately estimate the prevalence of secondary glaucoma and its subtypes.
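The Goldmann relation invoked under "Increased episcleral venous pressure" above can be made concrete with a small numerical sketch. The parameter values below are assumed, order-of-magnitude physiological numbers chosen purely for illustration; they are not figures from this article:

```python
# Illustrative use of the Goldmann equation: IOP = F/C + EVP.
# The parameter values are assumed, typical-order physiological numbers,
# not taken from the article.

def goldmann_iop(flow: float, facility: float, evp: float) -> float:
    """IOP in mmHg from aqueous flow F (uL/min), outflow facility C
    (uL/min/mmHg), and episcleral venous pressure EVP (mmHg)."""
    return flow / facility + evp

baseline = goldmann_iop(flow=2.5, facility=0.30, evp=8.0)
elevated = goldmann_iop(flow=2.5, facility=0.30, evp=14.0)  # EVP raised by 6 mmHg
print(f"baseline IOP ~{baseline:.1f} mmHg, elevated-EVP IOP ~{elevated:.1f} mmHg")
# -> baseline IOP ~16.3 mmHg, elevated-EVP IOP ~22.3 mmHg (IOP rises 1:1 with EVP)
```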
Risk factors: In general, elevated IOP is a major risk factor in the development of secondary glaucoma. However, there are several risk factors contributing to the fluctuation in IOP levels. Risk factors: Uveitis: Secondary glaucoma is commonly associated with uveitis, the inflammation of the uvea, a middle tissue layer of the eye consisting of the ciliary body, choroid and iris. Various causes have been identified as potential risk factors contributing to the occurrence of secondary glaucoma. These include viral anterior uveitis due to cytomegalovirus infection, and herpetic anterior uveitis caused by herpes simplex virus. The observed pathophysiology of secondary glaucoma in uveitis is linked to the increase and fluctuation of IOP. Inflammation of eye tissues contributes to blockage of the outflow of the aqueous humour produced in the ciliary body. This results in the accumulation of aqueous humour and thus elevated IOP, a common risk factor for the progression of secondary glaucoma. Risk factors: Paediatric congenital cataract surgery: Paediatric congenital cataract surgery is also identified as a risk factor for the progression of secondary glaucoma. Cataract is an ocular disease identified by the progressive clouding of the lens. Surgical procedures are often employed to replace the lens and allow for clear vision. However, there is an increased risk of secondary glaucoma development in children due to the secondary IOL implantation procedure. The increased inflammatory sensitivity in the anterior chamber angle may contribute to the risk of secondary glaucoma. Risk factors: Intraocular tumour: Intraocular tumours (uveal and retinal tumours) are also closely associated with the development of secondary glaucoma. The pathophysiology of secondary glaucoma in these cases is affected by the type of tumour, its location and other tumour-associated factors. Among the many subtypes of uveal tumours, secondary glaucoma is most prominent among patients with trabecular meshwork iris melanoma. The blockage of aqueous outflow due to inflammation of the trabecular meshwork structures, also observed in herpetic anterior uveitis patients, plays a role here as well. In addition, angle invasion is a mechanism observed to contribute greatly to the development of secondary glaucoma in patients with iris tapioca melanoma, iris lymphoma, choroidal melanoma, and medulloepithelioma. Treatment and management: Pharmacological interventions: Miotic drugs are a class of cholinergic drugs frequently employed in the treatment and management of all types of glaucoma. These drugs stimulate contraction of the pupil, causing the iris to pull away from the trabecular meshwork. Consequently, the normal drainage of the aqueous humour is restored, relieving IOP. In addition to their direct effect on IOP, these drugs are applied to reduce pigment release (from the iris pigment epithelium) in the treatment of pigmentary glaucoma. Despite the advantages, the widespread use of miotic drugs is limited by their associated side effects. There is an increased risk of development of posterior synechiae in glaucoma secondary to exfoliation syndrome and ocular trauma. Other side effects include an increased risk of miosis-induced headaches, blurred vision, retinal detachment and damage to the blood-aqueous barrier. Alternative drugs that reduce the synthesis of aqueous humour, called aqueous suppressants, or increase the drainage of aqueous humour have emerged as effective first-line treatments.
Aqueous suppressants include beta-blockers, alpha-agonists and carbonic anhydrase inhibitors. They are particularly effective in treating corticosteroid, uveitic, aphakic, pseudophakic, ghost-cell and post-traumatic glaucoma. Prostaglandin analogues increase aqueous drainage and are thus used in the reduction of IOP. There are contradictory findings regarding the occurrence of side effects mediated by prostaglandin analogues in the treatment of uveitic glaucoma. It was previously reported that the side effects comprise damage to the blood-aqueous barrier, cystoid macular oedema, risk of developing anterior uveitis and recurrence of keratitis caused by herpes simplex virus; however, current scientific evidence only supports the reactivation of herpes simplex keratitis among these side effects. In uveitic and inflammatory glaucoma, reduction in inflammation is a critical step during treatment and management. This is commonly done using corticosteroids coupled with immunosuppressants. Steroidal treatment is also used in the management of aphakic, pseudophakic, and post-traumatic glaucoma. Inflammatory glaucoma may further be treated using cycloplegics, a class of drugs that treat the pain, ciliary spasm, uveoscleral tract blockage and disrupted blood-aqueous barrier linked with this form of glaucoma. While some studies recommend the use of anti-vascular endothelial growth factor drugs to inhibit neovascularization in neovascular glaucoma, there is a lack of substantial evidence for the effectiveness of this treatment method. Treatment and management: Laser therapy: Among the different laser therapies, laser peripheral iridotomy and laser trabeculoplasty are the most common procedures for secondary glaucoma. Both methods involve creating new outlets through which the aqueous humour can flow, effectively reducing the IOP. In laser peripheral iridotomy, the opening is created in the iris tissue, while in trabeculoplasty this opening is made in the trabecular meshwork. Further, there are two types of laser trabeculoplasty: argon laser trabeculoplasty and selective laser trabeculoplasty. Laser peripheral iridotomy has high efficacy in the treatment of pigmentary glaucoma. Argon laser trabeculoplasty is effective in the management of corticosteroid and pigmentary glaucoma; however, it is often contraindicated due to high rates of failure in patients with uveitic glaucoma. For uveitic glaucoma, treatment with selective laser trabeculoplasty is associated with fewer adverse effects and risks of failure. Treatment and management: Surgical treatment: Surgical procedures are effective in cases where pharmacological management is not successful or suitable. Such methods work by facilitating aqueous outflow through modification of the obstructing trabecular meshwork using trabeculectomy, goniotomy, non-penetrating deep sclerectomy or canaloplasty. Alternatively, new drainage pathways may be introduced by the implantation of glaucoma shunts or glaucoma drainage devices. Trabeculectomy is held as the gold standard for surgical management of glaucoma. Studies indicate that treatment of uveitic glaucoma using trabeculectomy with administration of antimetabolites has a high success rate of 62%–81%. It is also commonly used in the treatment of pigmentary glaucoma.
Drainage tube implants are also used in the treatment of uveitic and inflammatory glaucoma. Minimally invasive glaucoma surgery is performed in order to overcome the risks and adverse effects associated with conventional surgical procedures; however, there are limited studies testing the efficacy of this type of surgery for the treatment of uveitic glaucoma. In addition to the direct reduction of IOP, surgical procedures are used to remove blood, viscoelastic fluid and debris in glaucoma caused by cataract extraction and ocular trauma. They may also be utilised to remove depot steroids in corticosteroid glaucoma and ghost cells from the vitreous humour in ghost-cell glaucoma through a procedure known as vitrectomy.
**Semagacestat** Semagacestat: Semagacestat (LY-450139) was a candidate drug for a causal therapy against Alzheimer's disease. It was originally developed by Eli Lilly and Elan, and clinical trials were conducted by Eli Lilly. Phase III trials included over 3000 patients, but in August 2010 a disappointing interim analysis, in which semagacestat performed worse than the placebo, led to the trials being stopped. Mechanism of action: β-Amyloid is a peptide of 39 to 43 amino acids. The isoforms with 40 and 42 amino acids (Aβ40/42) are the main constituents of amyloid plaques in the brains of Alzheimer's disease patients. β-Amyloid is formed by proteolysis of amyloid precursor protein (APP). Research on laboratory rats suggests that the soluble form of this peptide is a causative agent in the development of Alzheimer's. Mechanism of action: Semagacestat blocks the enzyme γ-secretase, which (along with β-secretase) is responsible for APP proteolysis. Clinical trials: Phase III double-blind clinical trials started in March 2008 with the IDENTITY study (Interrupting Alzheimer's dementia by evaluating treatment of amyloid pathology), including 1500 patients from 22 countries. This study was intended to run until May 2011. The successor trial with a further 1500 patients, IDENTITY-2, started in September 2008. The open-label trial IDENTITY-XT, which included patients who had completed one of the two studies, started in December 2009. On 17 August 2010, it was announced that the phase III trials had failed. Preliminary findings showed that not only did semagacestat fail to slow disease progression, but it was actually associated with “worsening of clinical measures of cognition and the ability to perform activities of daily living”. Furthermore, the incidence of skin cancer was significantly higher in the treatment group than in the placebo group. Issues: A number of issues were raised during the clinical trials: Phase I and II studies showed a decrease of Aβ40/42 concentration in the blood plasma about three hours after application of semagacestat, but an increase of 300% 15 hours after application. No reduction was shown in the cerebrospinal fluid. As a consequence, the phase III studies worked with much higher doses. Issues: γ-Secretase has other targets, for example the notch receptor. It is not known whether this could cause long-term side effects. Issues: In 2008, a histological analysis of post-mortem brains from deceased subjects who had previously been enrolled in a phase 1 study of an experimental vaccine (Elan AN1792) demonstrated that the drug appeared to have cleared patients of amyloid plaques but did not have any significant effect on their dementia, which in some people's minds cast doubt on the utility of approaches that lower β-amyloid levels. Issues: A notable feature of the results of the semagacestat phase III interim analysis is that subjects on treatment did significantly worse in cognitive assessment and activities of daily living than subjects in the placebo group. This contrasts with the results of the phase III trial of Myriad's γ-secretase modulator tarenflurbil, which found that the subjects in the treatment group tracked the placebo control group very closely. The implications of this finding for other companies developing molecules that target γ-secretase are not clear.
**PhpSQLiteAdmin** PhpSQLiteAdmin: phpSQLiteAdmin is the name of two independent web applications, written in PHP, for managing SQLite databases. phpSQLiteAdmin is a web-based client that leverages PHP scripting and the SQLite file-based database system to provide a simple way for users to create databases, create tables, and query their own data using SQLite's non-industry-standard SQL syntax.
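To make the workflow concrete, here is a minimal sketch of the three operations such a tool exposes through its web interface: creating a database, creating a table, and querying data. It is written with Python's standard-library sqlite3 module for illustration only (phpSQLiteAdmin itself is a PHP application); the file name and schema are hypothetical:

```python
import sqlite3

# Opening a connection to a new path creates the SQLite database file --
# the "create database" step in a file-based database system.
conn = sqlite3.connect("example.db")  # hypothetical file name

# Create a table and insert a row (schema chosen purely for illustration).
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()

# Query the data back.
for row in conn.execute("SELECT id, name FROM users"):
    print(row)  # -> (1, 'alice')

conn.close()
```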
**PCI hole** PCI hole: The PCI hole or PCI memory hole is a limitation of 32-bit hardware and 32-bit operating systems that causes a computer to appear to have less memory available than is physically installed. This memory addressing limitation, and the later workarounds necessary to overcome it, are functionally similar to the memory limits of the early 8088 IBM PC memory architecture (see Conventional memory). PCI hole: Similar situations have often arisen in the history of computing, when hardware expected to use up to a certain level of resources is designed to handle several times the maximum amount anticipated at the time, which eventually becomes a severe restriction as Moore's law makes greater resources economically available. The original IBM PC was typically supplied with 64 KB of memory or less; it was designed to take a maximum of 640 KB, far more than it was thought would ever be needed. This rapidly became a restriction that had to be handled by complex DOS memory management. Similar successive restrictions in size have been imposed and overcome on hard drives. Unavailable memory: The loss of usable memory caused by the PCI hole arises, when memory-mapped I/O is used, from using the same address space both for physical memory and for communication with hardware devices. Installed hardware devices need some of the address space in order to communicate with the processor and system software. As 32-bit hardware has a total of four gigabytes of addressable memory, some of the real physical memory of a 32-bit machine, when enough memory is installed, needs to be sacrificed by making it hidden so the devices have room to communicate. Which part of physical memory is replaced by the device communication space depends on the machine, but it is usually everything above roughly 2.5 to 3.5 GB. Unavailable memory: The amount of system memory that is hidden and unavailable varies widely with the actual mainboard and chipset, the BIOS, the amount of physical memory, the amount of video RAM installed on graphics cards, and the number and type of PCI cards installed in the system. More than a gigabyte of 32-bit system memory can be unavailable when four gigabytes of physical memory and multiple 3D cards with large amounts of video memory are installed; on some mainboards, the hole is always at least one gigabyte in size regardless of the installed expansion cards. Physical address extension: A workaround first developed in the Pentium Pro, known as Physical Address Extension (PAE), allows certain 32-bit operating systems to use 36-bit physical memory addresses, even though individual programs are still limited to operating within 32 bits of address space. Provided there is enough memory installed, each program can have its own four-gigabyte addressing space, together utilizing up to 64 gigabytes of memory across all programs. Physical address extension: But PAE alone is not enough to address the PCI hole issue, as memory addresses and PCI I/O addresses still overlap somewhere between the third and fourth gigabyte. A PAE-compatible operating system together with a PAE-compatible CPU cannot do better than accessing memory from the 1st to the 3rd gigabyte, then from the 5th to the 64th gigabyte. The PCI hole is still there. On a 4 GB host, in the absence of an additional workaround, PAE does nothing to recover the roughly 1 GB of memory overlapped by the PCI I/O space.
Physical address extension: PAE was fully supported in Windows XP up to the Service Pack 1 (SP1) release, but then withdrawn for SP2; the only 32-bit versions of Microsoft Windows to fully support it are certain high-end server versions of Windows Server 2003 and earlier. As of 2014, it is mainly in use by 32-bit Linux distributions; Ubuntu has made it mandatory for its 32-bit version since 2013. Microsoft disabled the support in Windows XP SP2 and later operating systems because there were many compatibility problems with graphics cards and other devices, which needed PAE-aware drivers, distinct from both standard 32-bit and later 64-bit drivers. Many versions of MS Windows can activate what is still called PAE for the purpose of using the NX bit, but this no longer extends the address space. Filling the memory hole: As stated earlier, in 32-bit PAE-enabled and even in 64-bit systems, memory below and above the "memory hole" is available, but the 512 MB to 1.5 GB of RAM around the third gigabyte is unavailable because its addresses are required for devices. With the decreasing cost of memory this may not be a serious issue, but there are ways to regain access to the missing memory. Filling the memory hole: Mapping devices to addresses above 4 GB: The limitations of the 32-bit PCI hole can affect purely 64-bit operating systems, as the system BIOS must cater for all operating systems supported by the hardware (16-, 32-, and 64-bit operating systems all run on the same hardware). The BIOS must be able to boot with all devices mapped below four gigabytes, although a 64-bit system does not require this. Many BIOSes can be configured by the user to fill the memory hole by mapping devices high up in the 64-bit address space, so long as the devices, their drivers, and the chipset all support this. A machine configured this way cannot boot into a 16- or 32-bit operating system; the BIOS setup must be temporarily changed in order to boot into such an operating system, e.g. from a bootable CD or USB storage device. Filling the memory hole: Mapping memory to addresses above 4 GB: Another way to remove the PCI hole, which is only useful for 64-bit operating systems and those 32-bit systems that support the Physical Address Extension method described above, is to "remap" some or all of the memory between the two- and four-gigabyte limits to addresses above four gigabytes. This needs to be supported by the chipset of the computer and can usually be activated in the BIOS setup. This remapping works at the level of physical addresses, unlike the higher-level remapping of virtual to physical addresses that happens inside the CPU core. Activating this for traditional 32-bit operating systems does more harm than good, as the remapped memory (often larger than the PCI hole itself) is unusable to such operating systems, even though e.g. Windows Vista will show such memory as physically present on the "System Properties" page.
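The address arithmetic behind the hole and the remapping workaround can be illustrated with a toy model. The 1 GiB aperture size below is an assumed round figure for illustration, not a real chipset map:

```python
# Toy model of the PCI hole: device apertures and DRAM cannot share the
# same 32-bit physical addresses, so the DRAM under the aperture is hidden.
# The 1 GiB aperture size is an assumed round number for illustration.
GiB = 1024 ** 3

address_space = 2 ** 32          # 4 GiB of 32-bit physical addresses
installed_ram = 4 * GiB
pci_aperture = 1 * GiB           # assumed: devices mapped just below 4 GiB

usable = min(installed_ram, address_space - pci_aperture)
print(f"usable RAM without remapping: {usable / GiB:.1f} GiB")  # -> 3.0 GiB

# With chipset remapping (plus PAE or 64-bit addressing), the hidden DRAM is
# re-addressed above 4 GiB, so the top physical address now exceeds 2**32.
top_of_ram_after_remap = address_space + pci_aperture
print(f"top of RAM after remapping: {top_of_ram_after_remap / GiB:.1f} GiB")  # -> 5.0 GiB
```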
**Optical solar reflector** Optical solar reflector: An optical solar reflector (OSR) is a component of a vehicle or machine designed to fly in outer space. The reflector consists of a top layer made of quartz over a reflecting layer made of metal. OSRs are used for radiators on spacecraft. Optical solar reflector: The quartz outer layer lets sunlight through to the metal layer, which reflects it back out; this results in a low absorption coefficient. The quartz layer is also a good IR emitter. The result of these properties is a strongly emitting, weakly absorbing material, and thus a cold one. OSRs are often used in geostationary orbits, where high radiation levels would cause other thermal surface coatings to degrade rapidly; this is because geostationary orbits lie within the Van Allen radiation belt. Optical solar reflectors are a type of second surface mirror.
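The "cold material" behaviour can be made quantitative with the standard radiative balance for a sunlit surface, where absorbed solar power equals emitted thermal power. A minimal sketch follows, with assumed illustrative absorptance and emittance values (not figures from this article), comparing an OSR-like surface to a black one:

```python
# Equilibrium temperature of a flat sunlit radiator from the balance
# alpha * S = epsilon * sigma * T**4. The coefficient values below are
# assumed, illustrative numbers, not taken from the article.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar irradiance near Earth, W m^-2

def equilibrium_temp(alpha: float, epsilon: float) -> float:
    """Equilibrium temperature in kelvin for solar absorptance alpha
    and infrared emittance epsilon."""
    return (alpha * S / (epsilon * SIGMA)) ** 0.25

print(f"OSR-like surface (a=0.08, e=0.80): {equilibrium_temp(0.08, 0.80):.0f} K")
print(f"black surface    (a=0.95, e=0.85): {equilibrium_temp(0.95, 0.85):.0f} K")
# -> roughly 221 K versus 405 K: low absorption with high emission runs cold
```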
**Explicit memory** Explicit memory: Explicit memory (or declarative memory) is one of the two main types of long-term human memory, the other of which is implicit memory. Explicit memory is the conscious, intentional recollection of factual information, previous experiences, and concepts. This type of memory is dependent upon three processes: acquisition, consolidation, and retrieval. Explicit memory can be divided into two categories: episodic memory, which stores specific personal experiences, and semantic memory, which stores factual information. Explicit memory requires gradual learning, with multiple presentations of a stimulus and response. Explicit memory: The type of knowledge stored in explicit memory is called declarative knowledge. Its counterpart, implicit memory, refers to memories acquired and used unconsciously, such as skills (e.g. knowing how to get dressed) or perception. Unlike explicit memory, implicit memory learns rapidly, even from a single stimulus, and it is influenced by other mental systems. Explicit memory: Sometimes a distinction is made between explicit memory and declarative memory. In such cases, explicit memory relates to any kind of conscious memory, and declarative memory relates to any kind of memory that can be described in words; however, if it is assumed that a memory cannot be described without being conscious and vice versa, then the two concepts are identical. Types: Episodic memory: Episodic memory consists of the storage and recollection of observational information attached to specific life events. These can be memories of things that happened to the subject directly or memories of events that happened around them. Episodic memory is what people generally think of when they talk about memory, and it allows for recalling various contextual and situational details of one's previous experiences. Types: Some examples of episodic memory include the memory of entering a specific classroom for the first time, the memory of storing one's carry-on baggage while boarding a plane headed to a specific destination on a specific day and time, the memory of being notified that one is being terminated from one's job, or the memory of notifying a subordinate that they are being terminated from their job. The retrieval of these episodic memories can be thought of as the action of mentally reliving in detail the past events that they concern. Episodic memory is believed to be the system that provides the basic support for semantic memory. Types: Semantic memory: Semantic memory refers to general world knowledge (facts, ideas, meaning and concepts) that can be articulated and is independent of personal experience. This includes world knowledge, object knowledge, language knowledge, and conceptual priming. Semantic memory is distinct from episodic memory, which is the memory of experiences and specific events that occur during people's lives and which they can recreate at any given point. For instance, semantic memory might contain information about what a cat is, whereas episodic memory might contain a specific memory of petting a particular cat. Humans can learn about new concepts by applying knowledge learned from things in the past. Other examples of semantic memory include types of food, capital cities of a geographic region, facts about people, dates, and the lexicon of flowers; a language, such as one's vocabulary or a person's final vocabulary, also exemplifies semantic memory.
Types: Hybrid types: Autobiographical memory is a memory system consisting of episodes recollected from an individual's life, based on a combination of episodic memory (personal experiences and specific objects, people and events experienced at a particular time and place) and semantic memory (general knowledge and facts about the world). Spatial memory is the part of memory responsible for recording information about one's environment and its spatial orientation. For example, a person's spatial memory is required in order to navigate around a familiar city, just as a rat's spatial memory is needed to learn the location of food at the end of a maze. It is often argued that in both humans and animals, spatial memories are summarized as a cognitive map. Spatial memory has representations within working, short-term and long-term memory. Research indicates that there are specific areas of the brain associated with spatial memory. Many methods are used for measuring spatial memory in children, adults, and animals. The model of language: Declarative and procedural memory fall into two categories of human language. The declarative memory system is used by the lexicon. Declarative memory stores all arbitrary, unique word-specific knowledge, including word meanings, word sounds, and abstract representations such as word category. In other words, declarative memory is where random bits and pieces of knowledge about language that are specific and unpredictable are stored. Declarative memory includes representations of simple words (e.g. cat), bound morphemes (morphemes that have to go together), irregular morphological forms, verb complements, and idioms (or non-compositional semantic units). Irregular morphological structures fall into the declarative system; the irregularities (such as went being the past form of go, or idioms) are what we have to memorize. The model of language: Declarative memory supports a superpositional associative memory, which allows for generalizations across representations. For example, the memorization of phonologically similar stem-irregular past tense pairs (e.g. spring-sprung, sing-sang) may allow for memory-based generalization to new irregularities, either from real words (bring-brought) or from novel ones (e.g. spling-splung). This ability to generalize could underlie some degree of productivity within the memory system. The model of language: While declarative memory deals with the irregularities of morphology, procedural memory uses regular phonology and regular morphology. The procedural memory system is used by grammar, where grammar is defined by the building of rule-governed structure. Language's ability to use grammar comes from procedural memory, making grammar like another procedure. It underlies the learning of new, and already learned, rule-based procedures that oversee the regularities of language, particularly those procedures related to combining items into complex structures that have precedence and hierarchical relations: precedence in the sense of left to right, and hierarchy in the sense of top to bottom.
Procedural memory builds rule-governed structures (by merging or sequencing) of forms and representations into complex structures such as phonology, inflectional and derivational morphology, compositional semantics (the meaning of the composition of words into complex structures), and syntax. Broca's and Wernicke's brain regions: Broca's area is important to procedural memory because "Broca's area is involved in the expressive aspects of spoken and written language (production of sentences constrained by the rules of grammar and syntax)". Broca's area corresponds to parts of the inferior frontal gyrus, presumably Brodmann areas 44 and 45. Procedural memory is affected by Broca's aphasia. Agrammatism is apparent in Broca's aphasia patients, in whom a lack of fluency and omission of morphology and function words occur. While those with Broca's aphasia are still able to understand or comprehend speech, they have difficulty producing it. Speech production becomes more difficult when sentences are complex; for example, the passive voice is a grammatically complex structure that is harder for those with Broca's aphasia to comprehend. Wernicke's area is crucial for language development, focusing on the comprehension of speech rather than speech production. Wernicke's aphasia affects declarative memory. In contrast to Broca's aphasia, paragrammatism is apparent, which causes normal or excessive fluency and the use of inappropriate words (neologisms). Those with Wernicke's aphasia struggle to understand the meaning of words and may not recognize their mistakes in speech. History: The study of human memory stretches back over the last 2000 years. An early attempt to understand memory can be found in Aristotle's major treatise, On the Soul, in which he compares the human mind to a blank slate. He theorized that all humans are born free of any knowledge and are the sum of their experiences. It was only in the late 1800s, however, that a young German philosopher by the name of Hermann Ebbinghaus developed the first scientific approach to studying memory. While some of his findings have endured and remain relevant to this day (the learning curve), his greatest contribution to the field of memory research was demonstrating that memory can be studied scientifically. In 1972, Endel Tulving proposed the distinction between episodic and semantic memory. This was quickly adopted and is now widely accepted. Following this, in 1985, Daniel Schacter proposed a more general distinction between explicit (declarative) and implicit (procedural) memory. With the recent advances in neuroimaging technology, there have been a multitude of findings linking specific brain areas to declarative memory. Despite these advances in cognitive psychology, there is still much to be discovered in terms of the operating mechanisms of declarative memory. It is unclear whether declarative memory is mediated by a particular memory system, or whether it is more accurately classified as a type of knowledge. It is also unknown how or why declarative memory evolved in the first place. Neuropsychology: Normal brain function: Hippocampus: Although many psychologists believe that the entire brain is involved with memory, the hippocampus and surrounding structures appear to be most important in declarative memory specifically. The ability to retain and recall episodic memories is highly dependent on the hippocampus, whereas the formation of new declarative memories relies on both the hippocampus and the parahippocampus.
Other studies have found that the parahippocampal cortices are related to superior recognition memory. The Three-Stage Model was developed by Eichenbaum et al. (2001) and proposes that the hippocampus does three things with episodic memory: it mediates the recording of episodic memories, identifies common features between episodes, and links these common episodes in a memory space. To support this model, a version of Piaget's transitive inference task was used to show that the hippocampus is in fact used as the memory space. When experiencing an event for the first time, a link is formed in the hippocampus allowing us to recall that event in the future. Separate links are also made for features related to that event. For example, when you meet someone new, a unique link is created for them. More links are then connected to that person's link so you can remember what colour their shirt was, what the weather was like when you met them, and so on. Specific episodes are made easier to remember and recall by repeatedly exposing oneself to them (which strengthens the links in the memory space), allowing for faster retrieval when remembering. Hippocampal cells (neurons) are activated depending on what information one is exposed to at that moment. Some cells are specific to spatial information, certain stimuli (smells, etc.), or behaviours, as has been shown in a radial maze task. It is therefore the hippocampus that allows us to recognize certain situations, environments, etc. as being either distinct from or similar to others. However, the Three-Stage Model does not incorporate the importance of other cortical structures in memory. Neuropsychology: The anatomy of the hippocampus is largely conserved across mammals, and the role of these areas in declarative memory is conserved across species as well. The organization and neural pathways of the hippocampus are very similar in humans and other mammal species. In humans and other mammals, a cross-section of the hippocampus shows the dentate gyrus as well as the dense cell layers of the CA fields. The intrinsic connectivity of these areas is also conserved. Results from an experiment by Davachi, Mitchell, and Wagner (2003) and subsequent research (Davachi, 2006) show that activation in the hippocampus during encoding is related to a subject's ability to recall prior events or later relational memories. These tests did not differentiate between individual test items later seen and those forgotten. Neuropsychology: Prefrontal cortex: The lateral prefrontal cortex (PFC) is essential for remembering the contextual details of an experience rather than for memory formation. The PFC is also more involved with episodic memory than semantic memory, although it does play a small role in semantics. Using PET studies and word stimuli, Endel Tulving found that remembering is an automatic process. It is also well documented that a hemispheric asymmetry occurs in the PFC: when encoding memories, the left dorsolateral PFC (LPFC) is activated, and when retrieving memories, activation is seen in the right dorsolateral PFC (RPFC). Studies have also shown that the PFC is extremely involved with autonoetic consciousness (see Tulving's theory). This is responsible for humans' recollective experiences and 'mental time travelling' abilities (characteristics of episodic memory). Neuropsychology: Amygdala: The amygdala is believed to be involved in the encoding and retrieval of emotionally charged memories. Much of the evidence for this has come from research on a phenomenon known as flashbulb memories.
These are instances in which memories of powerful emotional events are more highly detailed and enduring than regular memories (e.g. the September 11 attacks or the assassination of JFK). These memories have been linked to increased activation in the amygdala. Recent studies of patients with damage to the amygdala suggest that it is involved in memory for general knowledge, and not for specific information. Neuropsychology: Other structures involved: The regions of the diencephalon have shown brain activation when a remote memory is being recovered, and the occipital lobe, ventral temporal lobe, and fusiform gyrus all play a role in memory formation. Lesion studies: Lesion studies are commonly used in cognitive neuroscience research. Lesions can occur naturally through trauma or disease, or they can be surgically induced by researchers. In the study of declarative memory, the hippocampus and the amygdala are two structures frequently examined using this technique. Neuropsychology: Hippocampal lesion studies: The Morris water navigation task tests spatial learning in rats. In this test rats learn to escape from a pool by swimming toward a platform submerged just below the surface of the water. Visual cues that surround the pool (e.g. a chair or window) help the rat to locate the platform on subsequent trials. The rats' use of specific events, cues, and places are all forms of declarative memory. Two groups of rats are observed: a control group with no lesions and an experimental group with hippocampal lesions. In this task created by Morris, rats are placed in the pool at the same position for 12 trials. Each trial is timed and the path taken by the rats is recorded. Rats with hippocampal lesions successfully learn to find the platform. However, if the starting point is moved, the rats with hippocampal lesions typically fail to locate the platform, while the control rats are able to find it using the cues acquired during the learning trials. This demonstrates the involvement of the hippocampus in declarative memory. The Odor-Odor Recognition Task, devised by Bunsey and Eichenbaum, involves a social encounter between two rats (a subject and a demonstrator). The demonstrator, after eating a specific type of food, interacts with the subject rat, who then smells the food odor on the other's breath. The experimenters then present the subject rat with a decision between two food options: the food previously eaten by the demonstrator, and a novel food. The researchers found that when there was no time delay, both control rats and rats with lesions chose the familiar food. After 24 hours, however, the rats with hippocampal lesions were equally likely to eat either type of food, while control rats chose the familiar food. This can be attributed to an inability to form episodic memories due to lesions in the hippocampus. The effects observed in this study parallel those in humans with amnesia, indicating the role of the hippocampus in developing episodic memories that can be generalized to similar situations. Henry Molaison, previously known as H.M., had parts of both his left and right medial temporal lobes (hippocampi) removed, which resulted in the loss of the ability to form new memories. His long-term declarative memory was crucially affected when the structures of the medial temporal lobe were removed, including the ability to form new semantic knowledge and memories. The dissociation in Molaison between the acquisition of declarative memory and other kinds of learning was seen initially in motor learning.
Molaison's declarative memory was not functioning, as was seen when he completed a repetition priming task. Neuropsychology: His performance did improve over trials; however, his scores were inferior to those of control participants. In Molaison's case, the same results from this priming task were reflected in other basic memory functions such as remembering, recall and recognition. Lesions should not be interpreted as an all-or-nothing condition; in Molaison's case not all memory and recognition were lost. Although his declarative memory was severely damaged, he still had a sense of self and memories that were developed before the lesion occurred. Patient R.B. was another clinical case reinforcing the role of the hippocampus in declarative memory. After suffering an ischemic episode during a cardiac bypass operation, Patient R.B. awoke with a severe anterograde amnesic disorder. IQ and cognition were unaffected, but declarative memory deficits were observed (although not to the extent of those seen in Molaison). Upon his death, an autopsy revealed that Patient R.B. had bilateral lesions of the CA1 cell region along the whole length of the hippocampus. Neuropsychology: Amygdala lesion studies: Adolphs, Cahill and Schul completed a study showing that emotional arousal facilitates the encoding of material into long-term declarative memory. They selected two subjects with bilateral damage to the amygdala, as well as six control subjects and six subjects with brain damage. All subjects were shown a series of twelve slides accompanied by a narrative. The slides varied in the degree to which they evoked emotion: slides 1 through 4 and slides 9 through 12 contained non-emotional content, slides 5 through 8 contained emotional material, and the seventh slide contained the most emotionally arousing image and description (a picture of the surgically repaired legs of a car crash victim). The emotionally arousing slide (slide 7) was remembered no better by the participants with bilateral damage than any of the other slides. All other participants notably remembered the seventh slide best and in the most detail of all the slides. This shows that the amygdala is necessary to facilitate the encoding of declarative knowledge regarding emotionally arousing stimuli, but is not required for encoding knowledge of emotionally neutral stimuli. Affecting factors: Stress: Stress may have an effect on the recall of declarative memories. Lupien et al. completed a study that had three phases: phase 1 involved memorizing a series of words, phase 2 entailed either a stressful situation (public speaking) or a non-stressful situation (an attention task), and phase 3 required participants to recall the words they learned in phase 1. There were signs of decreased declarative memory performance in the participants who had to complete the stressful situation after learning the words. Recall performance after the stressful situation was found to be worse overall than after the non-stressful situation. It was also found that performance differed based on whether the participant responded to the stressful situation with an increase in measured levels of salivary cortisol. Affecting factors: Posttraumatic stress disorder (PTSD) emerges after exposure to a traumatic event eliciting fear, horror or helplessness that involves bodily injury, the threat of injury, or death to one's self or another person.
The chronic stress in PTSD contributes to an observed decrease in hippocampal volume and to declarative memory deficits. Stress can alter memory functions, reward, immune function, metabolism and susceptibility to different diseases. Disease risk is particularly pertinent to mental illnesses, whereby chronic or severe stress remains a common risk factor for several mental illnesses. One classification suggests there are five types of stress: acute time-limited stressors, brief naturalistic stressors, stressful event sequences, chronic stressors, and distant stressors. An acute time-limited stressor involves a short-term challenge, while a brief naturalistic stressor involves an event that is normal but nevertheless challenging. A stressful event sequence is a stressor that occurs, and then continues to yield stress into the immediate future. A chronic stressor involves exposure to a long-term stressor, and a distant stressor is a stressor that is not immediate. Affecting factors: Neurochemical factors of stress on the brain Cortisol is the primary glucocorticoid in the human body. In the brain, it modulates the ability of the hippocampus and prefrontal cortex to process memories. Although the exact molecular mechanism by which glucocorticoids influence memory formation is unknown, the presence of glucocorticoid receptors in the hippocampus and prefrontal cortex indicates that these structures are among its many targets. It has been demonstrated that cortisone, a glucocorticoid, impaired blood flow in the right parahippocampal gyrus, left visual cortex and cerebellum. A study by Damoiseaux et al. (2007) evaluated the effects of glucocorticoids on hippocampal and prefrontal cortex activation during declarative memory retrieval. They found that administration of hydrocortisone (the name given to cortisol when it is used as a medication) to participants one hour before retrieval of information impairs free recall of words, yet when administered before or after learning it had no effect on recall. They also found that hydrocortisone decreases brain activity in the above-mentioned areas during declarative memory retrieval. Therefore, naturally occurring elevations of cortisol during periods of stress lead to impairment of declarative memory. It is important to note that this study involved only male subjects, which may be significant as sex steroid hormones may have different effects in response to cortisol administration. Men and women also respond to emotional stimuli differently, and this may affect cortisol levels. This was also the first functional magnetic resonance imaging (fMRI) study utilising glucocorticoids, therefore more research is necessary to further substantiate these findings. Consolidation during sleep: It is believed that sleep plays an active role in the consolidation of declarative memory. Specifically, sleep has unique properties that enhance memory consolidation, such as the reactivation of newly learned memories during sleep. For example, it has been suggested that the central mechanism for consolidation of declarative memory during sleep is the reactivation of hippocampal memory representations. This reactivation transfers information to neocortical networks where it is integrated into long-term representations. Studies on rats involving maze learning found that hippocampal neuronal assemblies that are used in the encoding of spatial information are reactivated in the same temporal order.
Similarly, positron emission tomography (PET) has shown reactivation of the hippocampus in slow-wave sleep (SWS) after spatial learning. Together these studies show that newly learned memories are reactivated during sleep and that through this process new memory traces are consolidated. In addition, researchers have identified three components of sleep (SWS, sleep spindles and REM) during which declarative memory is consolidated. Consolidation during sleep: Slow-wave sleep, often referred to as deep sleep, plays the most important role in consolidation of declarative memory, and there is a large amount of evidence to support this claim. One study found that the first 3.5 hours of sleep offer the greatest performance enhancement on memory recall tasks because the first couple of hours are dominated by SWS. Additional hours of sleep do not add to the initial level of performance. Thus this study suggests that a full night's sleep may not be important for optimal memory performance. Another study showed that people who experienced SWS during the first half of their sleep cycle had better recall of information than subjects who did not. However, this was not the case for subjects tested during the second half of their sleep cycle, as they experienced less SWS. Another key piece of evidence regarding SWS's involvement in declarative memory consolidation is the finding that people with pathological conditions of sleep, such as insomnia, exhibit both a reduction in slow-wave sleep and impaired consolidation of declarative memory during sleep. Another study found that middle-aged people had worse retrieval of memories than a younger group. This in turn indicated that poorer declarative memory consolidation is associated with reduced SWS rather than with age itself. Some researchers suggest that the sleep spindle, a burst of brain activity occurring during stage 2 sleep, plays a role in boosting consolidation of declarative memories. Critics point out that spindle activity is positively correlated with intelligence. In contrast, Schabus and Gruber point out that sleep spindle activity only relates to performance on newly learned memories and not to absolute performance. This supports the hypothesis that the sleep spindle helps to consolidate recent memory traces but not memory performance in general. The relationship between sleep spindles and declarative memory consolidation is not yet fully understood. There is a relatively small body of evidence that supports the idea that REM sleep helps consolidate highly emotional declarative memories. For instance, Wagner et al. compared memory retention for emotional versus neutral text over two intervals: early sleep, which is dominated by SWS, and late sleep, which is dominated by the REM phase. This study found that sleep improved memory retention of emotional text only during the late sleep phase, which was primarily REM. Similarly, Hu, Stylos-Allen and colleagues performed a study with emotional versus neutral pictures and concluded that REM sleep facilitates the consolidation of emotional declarative memories. The view that sleep plays an active role in declarative memory consolidation is not shared by all researchers. For instance, Ellenbogen et al. argue that sleep actively protects declarative memory from associative interference. Furthermore, Wixted believes that the sole role of sleep in declarative memory consolidation is nothing more than creating ideal conditions for memory consolidation.
For example, when awake, people are bombarded with mental activity which interferes with effective consolidation. During sleep, by contrast, when interference is minimal, memories can be consolidated without associative interference. More research is needed before a definite statement can be made about whether sleep merely creates favourable conditions for consolidation or actively enhances declarative memory consolidation. Encoding and retrieval: The encoding of explicit memory depends on conceptually driven, top-down processing, in which a subject reorganizes the data to store it. The subject makes associations with previously related stimuli or experiences. This was termed deep encoding by Fergus Craik and Robert Lockhart. In this way a memory persists longer and will be remembered well. The later recall of information is thus greatly influenced by the way in which the information was originally processed. The depth-of-processing effect is the improvement in subsequent recall of an item whose meaning or shape a person has thought about. Simply put: to create explicit memories, you have to do something with your experiences: think about them, talk about them, write them down, study them, etc. The more you do, the better you will remember. Testing of information while learning has also been shown to improve encoding in explicit memory. If a student reads a textbook and then tests themselves afterward, their semantic memory of what was read is improved. This study–test method improves the encoding of information, a phenomenon referred to as the testing effect. Retrieval: Because a person has played an active role in processing explicit information, the internal cues that were used in processing it can also be used to initiate spontaneous recall. When someone talks about an experience, the words they use will help when they try to remember this experience at a later date. The conditions in which information is memorized can affect recall. If a person has the same surroundings or cues when the original information is presented, they are more likely to remember it. This is referred to as encoding specificity, and it also applies to explicit memory. In a study where subjects were asked to perform a cued recall task, participants with high working memory did better than participants with low working memory when the conditions were maintained. When the conditions were changed for recall, both groups' performance dropped; the subjects with higher working memory declined more. This is thought to happen because matching environments activates areas of the brain known as the left inferior frontal gyrus and the hippocampus. Neural structures involved: Several neural structures are proposed to be involved in explicit memory. Most are in the temporal lobe or closely related to it, such as the amygdala, the hippocampus, the rhinal cortex in the temporal lobe, and the prefrontal cortex. Nuclei in the thalamus also are included, because many connections between the prefrontal cortex and temporal cortex are made through the thalamus. The regions that make up the explicit memory circuit receive input from the neocortex and from brainstem systems, including the acetylcholine, serotonin, and noradrenaline systems. Traumatic brain injury: While the human brain is certainly regarded for its plasticity, there is some evidence that traumatic brain injury (TBI) in young children can have negative effects on explicit memory. Researchers have looked at children with TBI in early childhood (i.e. infancy) and late childhood.
Findings showed that children with severe TBI in late childhood experienced impaired explicit memory while still maintaining implicit memory formation. Researchers also found that children with severe TBI in early childhood had an increased chance of impairment in both explicit memory and implicit memory. While children with severe TBI are at risk for impaired explicit memory, the chances of impaired explicit memory in adults with severe TBI are much greater. Memory loss: Alzheimer's disease has a profound effect on explicit memory. Mild cognitive impairment is an early sign of Alzheimer's disease. People with memory conditions often receive cognitive training. When fMRI was used to view brain activity after training, it revealed increased activation in various neural systems that are involved with explicit memory. People with Alzheimer's have problems learning new tasks. However, if the task is presented repeatedly, they can learn and retain some new knowledge of the task. This effect is more apparent if the information is familiar. The person with Alzheimer's must also be guided through the task and prevented from making errors. Alzheimer's also has an effect on explicit spatial memory. This means that people with Alzheimer's have difficulty remembering where items are placed in unfamiliar environments. The hippocampus has been shown to become active in semantic and episodic memory. The effects of Alzheimer's disease are seen in the episodic part of explicit memory. This can lead to problems with communication. A study was conducted in which Alzheimer's patients were asked to name a variety of objects from different periods. The results showed that their ability to name an object depended on the frequency of the item's use and on when the item was first acquired. This effect on semantic memory also extends to music and tones. Alzheimer's patients have difficulty distinguishing between different melodies they have never heard before. People with Alzheimer's also have issues with picturing future events. This is due to a deficit in episodic future thinking. There are many other reasons why adults and others may begin to have memory loss. In popular culture: Amnesia is frequently portrayed in television and movies. Some of the better-known examples include: In the romantic comedy 50 First Dates (2004), Adam Sandler plays veterinarian Henry Roth, who falls for Lucy Whitmore, played by Drew Barrymore. Having lost her short-term memory in a car crash, Lucy can only remember the current day's events until she falls asleep. When she wakes up the next morning, she has no recollection of the previous day's experiences. Those experiences would normally be transferred into declarative knowledge, allowing them to be recalled in the future. The movie is not the most accurate representation of a true amnesic patient, but it is useful for informing viewers of the detrimental effects of amnesia. In popular culture: Memento (2000) is a film inspired by the case of Henry Molaison (H.M.). Guy Pearce plays Leonard, a former insurance investigator suffering from severe anterograde amnesia, which was caused by a head injury. Unlike most other amnesiacs, Leonard retains his identity and the memories of events that occurred before the injury but has lost all ability to form new memories. That loss of ability indicates that the head injury affected the medial temporal lobe of the brain, resulting in his inability to form declarative memories.
In popular culture: Finding Nemo features a reef fish named Dory with an inability to develop declarative memory. That prevents her from learning or retaining any new information such as names or directions. The exact origin of Dory's impairment is not mentioned in the film, but her memory loss accurately portrays the difficulties facing amnesiacs.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Polyvinylcarbazole** Polyvinylcarbazole: Polyvinylcarbazole (PVK) is a temperature-resistant thermoplastic polymer produced by radical polymerization from the monomer N-vinylcarbazole. It is a photoconductive polymer and thus the basis for photorefractive polymers and organic light-emitting diodes. History: Polyvinylcarbazole was discovered by the chemists Walter Reppe (1892-1969), Ernst Keyssner and Eugen Dorrer and patented by I.G. Farben in the USA in 1937. PVK was the first polymer known to be photoconductive. Starting in the 1960s, further polymers of this kind were sought. Production: Polyvinylcarbazole is obtained from N-vinylcarbazole by radical polymerization in various ways. It can be produced by suspension polymerization at 180 °C with sodium chloride and potassium chromate as catalysts. Alternatively, AIBN can be used as a radical initiator, or a Ziegler-Natta catalyst can be employed. Properties: Physical properties PVK can be used at temperatures of up to 160–170 °C and is therefore a temperature-resistant thermoplastic. Its electrical conductivity changes depending on the illumination. For this reason, PVK is classified as a semiconductor or photoconductor. The polymer is extremely brittle, but the brittleness can be reduced by copolymerization with a small amount of isoprene. Chemical properties Polyvinylcarbazole is soluble in aromatic hydrocarbons, halogenated hydrocarbons and ketones. It is resistant to acids, alkalis, polar solvents and aliphatic hydrocarbons. The addition of PVK to other plastics increases their temperature resistance. Use: Due to its high price and special properties, the use of PVK is limited to special areas. It is used in insulation technology, electrophotography (e.g. in copiers and laser printers), for the fabrication of polymer photonic crystals, and for organic light-emitting diodes and photovoltaic devices. In addition, PVK is a well-researched component of photorefractive polymers and therefore plays an important role in holography. Another application is the production of boil-resistant copolymers with styrene.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Biometrika** Biometrika: Biometrika is a peer-reviewed scientific journal published by Oxford University Press for the Biometrika Trust. The editor-in-chief is Paul Fearnhead (Lancaster University). The principal focus of this journal is theoretical statistics. It was established in 1901 and originally appeared quarterly. It changed to three issues per year in 1977 but returned to quarterly publication in 1992. History: Biometrika was established in 1901 by Francis Galton, Karl Pearson, and Raphael Weldon to promote the study of biometrics. The history of Biometrika is covered by Cox (2001). The name of the journal was chosen by Pearson, but Francis Edgeworth insisted that it be spelt with a "k" and not a "c". Since the 1930s, it has been a journal for statistical theory and methodology. Galton's role in the journal was essentially that of a patron, and the journal was run by Pearson and Weldon, and after Weldon's death in 1906 by Pearson alone, until he died in 1936. In the early days, the American biologists Charles Davenport and Raymond Pearl were nominally involved but they dropped out. On Pearson's death his son Egon Pearson became editor and remained in this position until 1966. David Cox was editor for the next 25 years. So, in its first 65 years Biometrika had effectively a total of just three editors, and in its first 90 years only four. Other people who were deeply involved in the journal included William Palin Elderton, an associate of Pearson's who published several articles in the early days and in 1935 became chairman of the Biometrika Trust. In the very first issue, the editors presented a clear statement of purpose: It is intended that Biometrika shall serve as a means not only of collecting or publishing under one title biological data of a kind not systematically collected or published elsewhere in any other periodical, but also of spreading a knowledge of such statistical theory as may be requisite for their scientific treatment. History: Its contents were to include: memoirs on variation, inheritance, and selection in animals and plants, based upon the examination of statistically large numbers of specimens; those developments of statistical theory which are applicable to biological problems; numerical tables and graphical solutions tending to reduce the labour of statistical arithmetic; abstracts of memoirs, dealing with these subjects, which are published elsewhere; and notes on current biometric work and unsolved problems. Early volumes contained many reports on biological topics, but over the twentieth century, Biometrika became a "journal of statistics in which emphasis is placed on papers containing original theoretical contributions of direct or potential value in applications." Thus, of the five types of contents envisaged by its founders, only the second and to a lesser extent the third remain, largely shorn of their biological roots. In his centenary tribute to Karl Pearson, J. B. S. Haldane likened him to Columbus, who "set out for China, and discovered America." The same might be said of Pearson's journal. Historical reference: To mark the centenary of "one of the world's leading academic journals in statistical theory and methodology", a commemorative volume was produced, containing articles that had appeared in a special issue of the journal and a selection of classic papers published in the journal in the years 1939–1971.
Editors: The editors of the journal have been: 1901–1905: Francis Galton, Karl Pearson, and Raphael Weldon 1906–1936: Karl Pearson 1936–1966: Egon Pearson 1966–1991: David Cox 1991–1992: David Hinkley 1992–1996: Philip Dawid 1996–2007: Michael Titterington 2008–2017: Anthony Davison 2018–present: Paul Fearnhead Abstracting and indexing: According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.445.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**FastPort** FastPort: The FastPort was a proprietary connection interface used on all Sony Ericsson cellphones between 2005 and 2010. Designed in response to Nokia's proprietary Pop-Port, FastPort provided data transfer, charging, headset and speaker connections through a common interface. It was discontinued in 2010 and replaced with micro-USB for charging and data, and a TRRS connection for audio (headphones). Functions: Transfer of data and files A USB FastPort cable enables file and data transfer between a computer and a Sony Ericsson cellphone. Most models could act as a USB storage device, modem, or phone, and could load new firmware either with the Sony Ericsson Update Service application or with third-party software. FastPort was the interface to the PC for realizing these functions. Functions: Charging the battery/powering the phone The port can charge the battery and power the phone while it is connected to, for example, a hands-free solution in a car. The FastPort became the only way to get external power to the phones. Chargers come in several varieties, from 12/24 volt DC for use in cars to 100–250 volt AC for use elsewhere. Some charger models can only charge the phone (the cable is attached at the middle); in others, all the connector pins pass through to the plug end, thus supporting data/signal transfer while the phone is being charged. Functions: Sound accessories and headsets The port also connects wired headsets, speakers, etc. Location: Originally, the FastPort was placed on the bottom edge of the phone (when viewed from the front), for a while on the top edge, and finally on the left edge. These changes caused some accessories, such as holders with charging options and docks, to become unusable. Layout: The connector has 12 pins for electrical connections (both power and data) and 2 double-sided "hooks" on the plug with matching holes in the phone's connector for keeping the plug safely in place. One hook contains a small polarity key to prevent the connector being inserted upside down. The dimensions of the connector on the phone are approximately 20 mm × 5 mm (0.79 in × 0.20 in). To help users identify the type of cable and see how to correctly insert the plug, a small symbol is placed on the side intended to face the front of the phone. Power plugs display a small lightning bolt, headset and hands-free plugs show an old-fashioned headset, data cables show a computer screen, and music accessories show a musical note.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glossary of category theory** Glossary of category theory: This is a glossary of properties and concepts in category theory in mathematics. (See also Outline of category theory.) Notes on foundations: In many expositions (e.g., Vistoli), the set-theoretic issues are ignored; this means, for instance, that one does not distinguish between small and large categories and that one can arbitrarily form a localization of a category. Like those expositions, this glossary also generally ignores the set-theoretic issues, except when they are relevant (e.g., the discussion on accessibility). Especially for higher categories, concepts from algebraic topology are also used in category theory. For that see also the glossary of algebraic topology. Glossary of category theory: The notations and the conventions used throughout the article are: [n] = {0, 1, 2, …, n}, which is viewed as a category (by writing i → j ⇔ i ≤ j). Cat, the category of (small) categories, where the objects are categories (which are small with respect to some universe) and the morphisms are functors. Fct(C, D), the functor category: the category of functors from a category C to a category D. Set, the category of (small) sets. sSet, the category of simplicial sets. "weak" instead of "strict" is given the default status; e.g., "n-category" means "weak n-category", not the strict one, by default. By an ∞-category, we mean a quasi-category, the most popular model, unless other models are being discussed. The number zero 0 is a natural number. A: abelian A category is abelian if it has a zero object, it has all pullbacks and pushouts, and all monomorphisms and epimorphisms are normal. accessible 1. Given a cardinal number κ, an object X in a category is κ-accessible (or κ-compact or κ-presentable) if Hom(X, −) commutes with κ-filtered colimits. 2. Given a regular cardinal κ, a category is κ-accessible if it has κ-filtered colimits and there exists a small set S of κ-compact objects that generates the category under colimits, meaning every object can be written as a colimit of diagrams of objects in S. additive A category is additive if it is preadditive (to be precise, has some pre-additive structure) and admits all finite coproducts. Although "preadditive" is an additional structure, one can show "additive" is a property of a category; i.e., one can ask whether a given category is additive or not. A: adjunction An adjunction (also called an adjoint pair) is a pair of functors F: C → D, G: D → C such that there is a "natural" bijection Hom_D(F(X), Y) ≅ Hom_C(X, G(Y)); F is said to be left adjoint to G and G right adjoint to F. Here, "natural" means there is a natural isomorphism Hom_D(F(−), −) ≅ Hom_C(−, G(−)) of bifunctors (which are contravariant in the first variable). algebra for a monad Given a monad T in a category X, an algebra for T or a T-algebra is an object in X with a monoid action of T ("algebra" is misleading and "T-object" is perhaps a better term). For example, given a group G that determines a monad T in Set in the standard way, a T-algebra is a set with an action of G. A: amnestic A functor is amnestic if it has the property: if k is an isomorphism and F(k) is an identity, then k is an identity. B: balanced A category is balanced if every bimorphism is an isomorphism. Beck's theorem Beck's theorem characterizes the category of algebras for a given monad. bicategory A bicategory is a model of a weak 2-category. bifunctor A bifunctor from a pair of categories C and D to a category E is a functor C × D → E.
For example, for any category C, Hom(−, −) is a bifunctor from C^op and C to Set. bimonoidal A bimonoidal category is a category with two monoidal structures, one distributing over the other. bimorphism A bimorphism is a morphism that is both an epimorphism and a monomorphism. Bousfield localization See Bousfield localization. C: calculus of functors The calculus of functors is a technique of studying functors in the manner similar to the way a function is studied via its Taylor series expansion; whence, the term "calculus". cartesian closed A category is cartesian closed if it has a terminal object and any two objects have a product and exponential. cartesian functor Given relative categories p: F → C, q: G → C over the same base category C, a functor f: F → G over C is cartesian if it sends cartesian morphisms to cartesian morphisms. C: cartesian morphism 1. Given a functor π: C → D (e.g., a prestack over schemes), a morphism f: x → y in C is π-cartesian if, for each object z in C, each morphism g: z → y in C and each morphism v: π(z) → π(x) in D such that π(g) = π(f) ∘ v, there exists a unique morphism u: z → x such that π(u) = v and g = f ∘ u. C: 2. Given a functor π: C → D (e.g., a prestack over rings), a morphism f: x → y in C is π-coCartesian if, for each object z in C, each morphism g: x → z in C and each morphism v: π(y) → π(z) in D such that π(g) = v ∘ π(f), there exists a unique morphism u: y → z such that π(u) = v and g = u ∘ f. (In short, f is the dual of a π-cartesian morphism.) Cartesian square A commutative diagram that is isomorphic to the diagram given as a fiber product. C: categorical logic Categorical logic is an approach to mathematical logic that uses category theory. categorification Categorification is a process of replacing sets and set-theoretic concepts with categories and category-theoretic concepts in some nontrivial way to capture categoric flavors. Decategorification is the reverse of categorification. C: category A category consists of the following data: A class of objects; For each pair of objects X, Y, a set Hom(X, Y), whose elements are called morphisms from X to Y; For each triple of objects X, Y, Z, a map (called composition) Hom(Y, Z) × Hom(X, Y) → Hom(X, Z), (g, f) ↦ g ∘ f; For each object X, an identity morphism id_X ∈ Hom(X, X); subject to the conditions: for any morphisms f: X → Y, g: Y → Z and h: Z → W, (h ∘ g) ∘ f = h ∘ (g ∘ f) and id_Y ∘ f = f ∘ id_X = f. For example, a partially ordered set can be viewed as a category: the objects are the elements of the set and for each pair of objects x, y, there is a unique morphism x → y if and only if x ≤ y; the associativity of composition means transitivity. C: category of categories The category of (small) categories, denoted by Cat, is a category where the objects are all the categories which are small with respect to some fixed universe and the morphisms are all the functors. classifying space The classifying space of a category C is the geometric realization of the nerve of C. co- Often used synonymous with op-; for example, a colimit refers to an op-limit in the sense that it is a limit in the opposite category. But there might be a distinction; for example, an op-fibration is not the same thing as a cofibration. C: coend The coend of a functor F: C^op × C → X is the dual of the end of F and is denoted by ∫^{c∈C} F(c, c). For example, if R is a ring, M a right R-module and N a left R-module, then the tensor product of M and N is M ⊗_R N = ∫^{R} M ⊗_Z N, where R is viewed as a category with one object whose morphisms are the elements of R.
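The adjunction entry above has a very concrete instance in the category of sets (or of Haskell types): the product functor − × Y is left adjoint to the exponential functor Z ↦ Z^Y, and the natural bijection is currying. A minimal sketch, with the names leftAdjunct and rightAdjunct chosen here purely for illustration:

```haskell
-- Sketch of the adjunction (- × Y) ⊣ (Y -> -) on Haskell types:
-- the natural bijection Hom(X × Y, Z) ≅ Hom(X, Y -> Z).

leftAdjunct :: ((x, y) -> z) -> (x -> (y -> z))
leftAdjunct f = \x y -> f (x, y)      -- this is curry

rightAdjunct :: (x -> (y -> z)) -> ((x, y) -> z)
rightAdjunct g = \(x, y) -> g x y     -- this is uncurry

-- The two maps are mutually inverse, which is exactly the "natural
-- bijection" required of an adjoint pair.
main :: IO ()
main = do
  let f (x, y) = x + y :: Int
  print (rightAdjunct (leftAdjunct f) (2, 3))  -- 5, same as f (2, 3)
```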
C: coequalizer The coequalizer of a pair of morphisms f, g: A → B is the colimit of the pair. It is the dual of an equalizer. coherence theorem A coherence theorem is a theorem of a form that states a weak structure is equivalent to a strict structure. coimage The coimage of a morphism f: X → Y is the coequalizer of X ×_Y X ⇉ X. colored operad Another term for multicategory, a generalized category where a morphism can have several domains. The notion of "colored operad" is more primitive than that of operad: in fact, an operad can be defined as a colored operad with a single object. comma Given functors f: C → B, g: D → B, the comma category (f ↓ g) is a category where (1) the objects are morphisms f(c) → g(d) and (2) a morphism from α: f(c) → g(d) to β: f(c′) → g(d′) consists of morphisms c → c′ and d → d′ such that f(c) → f(c′) → g(d′) (the second arrow being β) equals f(c) → g(d) → g(d′) (the first arrow being α). For example, if f is the identity functor and g is the constant functor with a value b, then it is the slice category of B over an object b. comonad A comonad in a category X is a comonoid in the monoidal category of endofunctors of X. compact Probably synonymous with #accessible. complete A category is complete if all small limits exist. composition 1. A composition of morphisms in a category is part of the datum defining the category. C: 2. If f: C → D, g: D → E are functors, then the composition g ∘ f or gf is the functor defined by: for an object x and a morphism u in C, (g ∘ f)(x) = g(f(x)), (g ∘ f)(u) = g(f(u)). 3. Natural transformations are composed pointwise: if φ: f → g, ψ: g → h are natural transformations, then ψ ∘ φ is the natural transformation given by (ψ ∘ φ)_x = ψ_x ∘ φ_x. concrete A concrete category C is a category such that there is a faithful functor from C to Set; e.g., Vec, Grp and Top. C: cone A cone is a way to express the universal property of a colimit (or dually a limit). One can show that the colimit lim→ is the left adjoint to the diagonal functor Δ: C → Fct(I, C), which sends an object X to the constant functor with value X; that is, for any X and any functor f: I → C, Hom(lim→ f, X) ≅ Hom(f, ΔX), provided the colimit in question exists. The right-hand side is then the set of cones with vertex X. C: connected A category is connected if, for each pair of objects x, y, there exists a finite sequence of objects z_i such that z_0 = x, z_n = y and either Hom(z_i, z_{i+1}) or Hom(z_{i+1}, z_i) is nonempty for any i. conservative functor A conservative functor is a functor that reflects isomorphisms. Many forgetful functors are conservative, but the forgetful functor from Top to Set is not conservative. constant A functor is constant if it maps every object in a category to the same object A and every morphism to the identity on A. Put in another way, a functor f: C → D is constant if it factors as C → {A} → D for some object A in D, where the second map is the inclusion i of the discrete category {A}. C: contravariant functor A contravariant functor F from a category C to a category D is a (covariant) functor from C^op to D. It is sometimes also called a presheaf, especially when D is Set or one of its variants. For example, for each set S, let P(S) be the power set of S and for each function f: S → T, define P(f): P(T) → P(S) by sending a subset A of T to the preimage f^{-1}(A). With this, P: Set → Set is a contravariant functor. C: coproduct The coproduct of a family of objects X_i in a category C indexed by a set I is the inductive limit lim→ of the functor I → C, i ↦ X_i, where I is viewed as a discrete category. It is the dual of the product of the family. For example, a coproduct in Grp is a free product.
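The power-set presheaf in the contravariant functor entry above can be mirrored directly in Haskell: a predicate on a plays the role of a "subset" of a, and precomposition plays the role of taking preimages. A hedged sketch; Pred and contramapPred are illustrative stand-ins for the Predicate type and contramap method found in Data.Functor.Contravariant:

```haskell
-- Sketch: a contravariant functor on Haskell types. contramapPred f
-- sends a "subset" of t to its preimage under f :: s -> t, reversing
-- the arrow just as P(f) : P(T) -> P(S) does above.

newtype Pred a = Pred { runPred :: a -> Bool }

contramapPred :: (s -> t) -> Pred t -> Pred s
contramapPred f (Pred p) = Pred (p . f)

main :: IO ()
main = do
  let evens        = Pred even                       -- "subset" of Int
      lengthIsEven = contramapPred length evens      -- preimage under length
  print (runPred lengthIsEven "abcd")                -- True: length 4 is even
```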
core The core of a category is the maximal groupoid contained in the category. D: Day convolution Given a group or monoid M, the Day convolution is the tensor product in Fct(M, Set). density theorem The density theorem states that every presheaf (a set-valued contravariant functor) is a colimit of representable presheaves. Yoneda's lemma embeds a category C into the category of presheaves on C. The density theorem then says the image is "dense", so to say. The name "density" is because of the analogy with the Jacobson density theorem (or other variants) in abstract algebra. D: diagonal functor Given categories I, C, the diagonal functor is the functor Δ: C → Fct(I, C), A ↦ Δ_A, that sends each object A to the constant functor Δ_A with value A and each morphism f: A → B to the natural transformation Δ_{f,i}: Δ_A(i) = A → Δ_B(i) = B that is f at each i. diagram Given a category C, a diagram in C is a functor f: I → C from a small category I. differential graded category A differential graded category is a category whose Hom sets are equipped with structures of differential graded modules. In particular, if the category has only one object, it is the same as a differential graded module. direct limit A direct limit is the colimit of a direct system. discrete A category is discrete if each morphism is an identity morphism (of some object). For example, a set can be viewed as a discrete category. distributor Another term for "profunctor". Dwyer–Kan equivalence A Dwyer–Kan equivalence is a generalization of an equivalence of categories to the simplicial context. E: Eilenberg–Moore category Another name for the category of algebras for a given monad. empty The empty category is a category with no object. It is the same thing as the empty set when the empty set is viewed as a discrete category. E: end The end of a functor F: C^op × C → X is the limit lim←(F^#: C^# → X), where C^# is the category (called the subdivision category of C) whose objects are symbols c^#, u^# for all objects c and all morphisms u in C and whose morphisms are b^# → u^# and u^# → c^# if u: b → c, and where F^# is induced by F so that c^# goes to F(c, c) and u^#, for u: b → c, goes to F(b, c). The end of F is denoted by ∫_{c∈C} F(c, c). For example, for functors F, G: C → X, the end ∫_{c∈C} Hom(F(c), G(c)) is the set of natural transformations from F to G. For more examples, see this mathoverflow thread. The dual of an end is a coend. E: endofunctor A functor between the same category. E: enriched category Given a monoidal category (C, ⊗, 1), a category enriched over C is, informally, a category whose Hom sets are in C. More precisely, a category D enriched over C is a data consisting of: A class of objects; For each pair of objects X, Y in D, an object Map_D(X, Y) in C, called the mapping object from X to Y; For each triple of objects X, Y, Z in D, a morphism in C, Map_D(Y, Z) ⊗ Map_D(X, Y) → Map_D(X, Z), called the composition; For each object X in D, a morphism 1 → Map_D(X, X) in C, called the unit morphism of X; subject to the conditions that (roughly) the compositions are associative and the unit morphisms act as the multiplicative identity. E: For example, a category enriched over sets is an ordinary category. epimorphism A morphism f is an epimorphism if g = h whenever g ∘ f = h ∘ f. In other words, f is the dual of a monomorphism. equalizer The equalizer of a pair of morphisms f, g: A → B is the limit of the pair. It is the dual of a coequalizer. equivalence 1. A functor is an equivalence if it is faithful, full and essentially surjective. 2. A morphism in an ∞-category C is an equivalence if it gives an isomorphism in the homotopy category of C.
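The end formula for natural transformations in the entry above has a familiar Haskell reading: a polymorphic function forall c. f c -> g c picks a component at every object, and parametricity enforces the wedge (compatibility) conditions. A small sketch; End here is an illustrative alias, not a standard library type:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Sketch: the end ∫_c Hom(F c, G c), read in Haskell. A value of this
-- type is exactly a natural transformation F => G.
type End f g = forall c. f c -> g c

safeHead :: End [] Maybe        -- i.e., forall c. [c] -> Maybe c
safeHead []      = Nothing
safeHead (x : _) = Just x

main :: IO ()
main = print (safeHead [1, 2, 3 :: Int])   -- Just 1
```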
equivalent A category is equivalent to another category if there is an equivalence between them. essentially surjective A functor F is called essentially surjective (or isomorphism-dense) if for every object B there exists an object A such that F(A) is isomorphic to B. evaluation Given categories C, D and an object A in C, the evaluation at A is the functor Fct(C, D) → D, F ↦ F(A). For example, the Eilenberg–Steenrod axioms give an instance when the functor is an equivalence. F: faithful A functor is faithful if it is injective when restricted to each hom-set. F: fundamental category The fundamental category functor τ_1: sSet → Cat is the left adjoint to the nerve functor N. For every category C, τ_1(N(C)) = C. fundamental groupoid The fundamental groupoid Π_1(X) of a Kan complex X is the category where an object is a 0-simplex (vertex) Δ^0 → X, a morphism is a homotopy class of a 1-simplex (path) Δ^1 → X, and a composition is determined by the Kan property. F: fibered category A functor π: C → D is said to exhibit C as a category fibered over D if, for each morphism g: x → π(y) in D, there exists a π-cartesian morphism f: x' → y in C such that π(f) = g. If D is the category of affine schemes (say of finite type over some field), then π is more commonly called a prestack. Note: π is often a forgetful functor and in fact the Grothendieck construction implies that every fibered category can be taken to be of that form (up to equivalences in a suitable sense). F: fiber product Given a category C and a set I, the fiber product over an object S of a family of objects X_i in C indexed by I is the product of the family in the slice category C/S of C over S (provided there are morphisms X_i → S). The fiber product of two objects X and Y over an object S is denoted by X ×_S Y and is also called a Cartesian square. F: filtered 1. A filtered category (also called a filtrant category) is a nonempty category with the properties (1) given objects i and j, there are an object k and morphisms i → k and j → k and (2) given morphisms u, v: i → j, there are an object k and a morphism w: j → k such that w ∘ u = w ∘ v. A category I is filtered if and only if, for each finite category J and functor f: J → I, the set lim←_{j∈J} Hom(f(j), i) is nonempty for some object i in I. F: 2. Given a cardinal number π, a category I is said to be π-filtrant if, for each category J whose set of morphisms has cardinal number strictly less than π and each functor f: J → I, the set lim←_{j∈J} Hom(f(j), i) is nonempty for some object i in I. finitary monad A finitary monad or an algebraic monad is a monad on Set whose underlying endofunctor commutes with filtered colimits. finite A category is finite if it has only finitely many morphisms. forgetful functor The forgetful functor is, roughly, a functor that loses some of the data of the objects; for example, the functor Grp → Set that sends a group to its underlying set and a group homomorphism to itself is a forgetful functor. free functor A free functor is a left adjoint to a forgetful functor. For example, for a ring R, the functor that sends a set X to the free R-module generated by X is a free functor (whence the name). Frobenius category A Frobenius category is an exact category that has enough injectives and enough projectives and such that the class of injective objects coincides with that of projective objects. Fukaya category See Fukaya category. full 1. A functor is full if it is surjective when restricted to each hom-set. 2. A category A is a full subcategory of a category B if the inclusion functor from A to B is full.
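As a hedged illustration of the free functor entry above: lists form the free monoid on a type, and the universal property says that any map of generators extends uniquely to a monoid homomorphism. In Haskell this extension is foldMap; extend below is a local equivalent, written out explicitly:

```haskell
-- Sketch of a free/forgetful adjunction for monoids: [a] is the free
-- monoid on a, and extend f is the unique monoid homomorphism [a] -> m
-- extending a generator map f :: a -> m.
import Data.Monoid (Sum (..))

extend :: Monoid m => (a -> m) -> ([a] -> m)
extend f = mconcat . map f

main :: IO ()
main =
  -- Extend the generator map Sum along the unit a -> [a]:
  print (getSum (extend Sum [1, 2, 3]))   -- 6
```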
F: functor Given categories C, D, a functor F from C to D is a structure-preserving map from C to D; i.e., it consists of an object F(x) in D for each object x in C and a morphism F(f) in D for each morphism f in C satisfying the conditions: (1) F(f ∘ g) = F(f) ∘ F(g) whenever f ∘ g is defined and (2) F(id_x) = id_{F(x)}. For example, P: Set → Set, S ↦ P(S), where P(S) is the power set of S, is a functor if we define: for each function f: S → T, P(f): P(S) → P(T) by P(f)(A) = f(A). functor category The functor category Fct(C, D) or D^C from a category C to a category D is the category where the objects are all the functors from C to D and the morphisms are all the natural transformations between the functors. G: Gabriel–Popescu theorem The Gabriel–Popescu theorem says an abelian category is a quotient of the category of modules. Galois category 1. In SGA 1, Exposé V (Definition 5.1.), a category is called a Galois category if it is equivalent to the category of finite G-sets for some profinite group G. 2. For technical reasons, some authors (e.g., the Stacks project) use slightly different definitions. generator In a category C, a family of objects G_i, i ∈ I, is a system of generators of C if the functor X ↦ ∏_{i∈I} Hom(G_i, X) is conservative. Its dual is called a system of cogenerators. Grothendieck's Galois theory A category-theoretic generalization of Galois theory; see Grothendieck's Galois theory. Grothendieck category A Grothendieck category is a certain well-behaved kind of an abelian category. G: Grothendieck construction Given a functor U: C → Cat, let D_U be the category where the objects are pairs (x, u) consisting of an object x in C and an object u in the category U(x) and a morphism from (x, u) to (y, v) is a pair consisting of a morphism f: x → y in C and a morphism U(f)(u) → v in U(y). The passage from U to D_U is then called the Grothendieck construction. G: Grothendieck fibration A fibered category. groupoid 1. A category is called a groupoid if every morphism in it is an isomorphism. 2. An ∞-category is called an ∞-groupoid if every morphism in it is an equivalence (or equivalently if it is a Kan complex). H: Hall algebra of a category See Ringel–Hall algebra. heart The heart of a t-structure (D^{≥0}, D^{≤0}) on a triangulated category is the intersection D^{≥0} ∩ D^{≤0}. It is an abelian category. Higher category theory Higher category theory is a subfield of category theory that concerns the study of n-categories and ∞-categories. H: homological dimension The homological dimension of an abelian category with enough injectives is the least non-negative integer n such that every object in the category admits an injective resolution of length at most n. The dimension is ∞ if no such integer exists. For example, the homological dimension of Mod_R with a principal ideal domain R is at most one. H: homotopy category See homotopy category. It is closely related to a localization of a category. homotopy hypothesis The homotopy hypothesis states an ∞-groupoid is a space (less equivocally, an n-groupoid can be used as a homotopy n-type). I: identity 1. The identity morphism f of an object A is a morphism from A to A such that for any morphisms g with domain A and h with codomain A, g ∘ f = g and f ∘ h = h. 2. The identity functor on a category C is a functor from C to C that sends objects and morphisms to themselves. 3. Given a functor F: C → D, the identity natural transformation from F to F is a natural transformation consisting of the identity morphisms of F(X) in D for the objects X in C.
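The two conditions in the functor entry above are precisely the laws of Haskell's Functor class. A minimal sketch with a binary tree (Tree is an illustrative type defined here, not taken from a library):

```haskell
-- Sketch: a Functor instance whose laws are the functor conditions above.
data Tree a = Leaf a | Node (Tree a) (Tree a) deriving Show

instance Functor Tree where
  fmap f (Leaf a)   = Leaf (f a)
  fmap f (Node l r) = Node (fmap f l) (fmap f r)

-- Laws (checked by equational reasoning, not by the compiler):
--   fmap id      == id                 -- preserves identities
--   fmap (g . f) == fmap g . fmap f    -- preserves composition

main :: IO ()
main = print (fmap (* 2) (Node (Leaf 1) (Leaf (3 :: Int))))
```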
image The image of a morphism f: X → Y is the equalizer of Y ⇉ Y ⊔_X Y. ind-limit A colimit (or inductive limit) in Fct(C^op, Set). inductive limit Another name for colimit. I: ∞-category An ∞-category C is a simplicial set satisfying the following condition: for each 0 < i < n, every map of simplicial sets f: Λ^n_i → C extends to an n-simplex f: Δ^n → C, where Δ^n is the standard n-simplex and Λ^n_i is obtained from Δ^n by removing the i-th face and the interior (see Kan fibration#Definitions). For example, the nerve of a category satisfies the condition and thus can be considered as an ∞-category. I: initial 1. An object A is initial if there is exactly one morphism from A to each object; e.g., the empty set in Set. 2. An object A in an ∞-category C is initial if Map_C(A, B) is contractible for each object B in C. injective 1. An object A in an abelian category is injective if the functor Hom(−, A) is exact. It is the dual of a projective object. 2. The term "injective limit" is another name for a direct limit. internal Hom Given a monoidal category (C, ⊗), the internal Hom is a functor [−, −]: C^op × C → C such that [Y, −] is the right adjoint to − ⊗ Y for each object Y in C. For example, the category of modules over a commutative ring R has the internal Hom given as [M, N] = Hom_R(M, N), the set of R-linear maps. I: inverse 1. A morphism f is an inverse to a morphism g if g ∘ f is defined and is equal to the identity morphism on the codomain of g, and f ∘ g is defined and equal to the identity morphism on the domain of g. The inverse of g is unique and is denoted by g^{-1}. f is a left inverse to g if f ∘ g is defined and is equal to the identity morphism on the domain of g, and similarly for a right inverse. I: 2. An inverse limit is the limit of an inverse system. isomorphic 1. An object is isomorphic to another object if there is an isomorphism between them. 2. A category is isomorphic to another category if there is an isomorphism between them. isomorphism A morphism f is an isomorphism if there exists an inverse of f. K: Kan complex A Kan complex is a fibrant object in the category of simplicial sets. K: Kan extension 1. Given a category C, the left Kan extension functor along a functor f: I → J is the left adjoint (if it exists) to f^*: Fct(J, C) → Fct(I, C) and is denoted by f_!. For any α: I → C, the functor f_! α: J → C is called the left Kan extension of α along f. One can show (f_! α)(j) = lim→_{f(i)→j} α(i), where the colimit runs over all objects f(i) → j in the comma category. K: 2. The right Kan extension functor is the right adjoint (if it exists) to f^*. Ken Brown's lemma Ken Brown's lemma is a lemma in the theory of model categories. Kleisli category Given a monad T, the Kleisli category of T is the full subcategory of the category of T-algebras (called the Eilenberg–Moore category) that consists of free T-algebras. L: lax A lax functor is a generalisation of a pseudo-functor, in which the structural transformations associated to composition and identities are not required to be invertible. length An object A in an abelian category is said to have finite length if it has a composition series. The maximum number of proper subobjects in any such composition series is called the length of A. limit 1. The limit (or projective limit) of a functor f: I^op → Set is lim←_{i} f(i) = { (x_i | i) ∈ ∏_i f(i) | f(s)(x_j) = x_i for any s: i → j }. 2. The limit lim←_{i∈I} f(i) of a functor f: I^op → C is an object, if any, in C that satisfies: for any object X in C, Hom(X, lim←_{i∈I} f(i)) = lim←_{i∈I} Hom(X, f(i)); i.e., it is an object representing the functor X ↦ lim←_{i} Hom(X, f(i)). L: 3.
The colimit (or inductive limit) lim→_{i∈I} f(i) is the dual of a limit; i.e., given a functor f: I → C, it satisfies: for any X, Hom(lim→ f(i), X) = lim← Hom(f(i), X). Explicitly, to give lim→ f(i) → X is to give a family of morphisms f(i) → X such that for any i → j, f(i) → X is f(i) → f(j) → X. Perhaps the simplest example of a colimit is a coequalizer. For another example, take f to be the identity functor on C and suppose the colimit L = lim→_{X∈C} f(X) exists; then the identity morphism on L corresponds to a compatible family of morphisms α_X: X → L such that α_L is the identity. If f: X → L is any morphism, then f = α_L ∘ f = α_X; i.e., L is a final object of C. L: localization of a category See localization of a category. M: Mittag-Leffler condition An inverse system ⋯ → X_2 → X_1 → X_0 is said to satisfy the Mittag-Leffler condition if for each integer n ≥ 0, there is an integer m ≥ n such that for each l ≥ m, the images of X_m → X_n and X_l → X_n are the same. M: monad A monad in a category X is a monoid object in the monoidal category of endofunctors of X with the monoidal structure given by composition. For example, given a group G, define an endofunctor T on Set by T(X) = G × X. Then define the multiplication μ on T as the natural transformation μ: T ∘ T → T given by μ_X: G × (G × X) → G × X, (g, (h, x)) ↦ (gh, x), and also define the identity map η in the analogous fashion. Then (T, μ, η) constitutes a monad in Set. More substantially, an adjunction between functors F: X ⇄ A : G determines a monad in X; namely, one takes T = G ∘ F, the identity map η on T to be a unit of the adjunction and also defines μ using the adjunction. M: monadic 1. An adjunction is said to be monadic if it comes from the monad that it determines by means of the Eilenberg–Moore category (the category of algebras for the monad). 2. A functor is said to be monadic if it is a constituent of a monadic adjunction. monoidal category A monoidal category, also called a tensor category, is a category C equipped with (1) a bifunctor ⊗: C × C → C, (2) an identity object and (3) natural isomorphisms that make ⊗ associative and the identity object an identity for ⊗, subject to certain coherence conditions. monoid object A monoid object in a monoidal category is an object together with the multiplication map and the identity map that satisfy the expected conditions like associativity. For example, a monoid object in Set is a usual monoid (unital semigroup) and a monoid object in R-mod is an associative algebra over a commutative ring R. monomorphism A morphism f is a monomorphism (also called monic) if g = h whenever f ∘ g = f ∘ h; e.g., an injection in Set. In other words, f is the dual of an epimorphism. multicategory A multicategory is a generalization of a category in which a morphism is allowed to have more than one domain. It is the same thing as a colored operad. N: n-category 1. A strict n-category is defined inductively: a strict 0-category is a set and a strict n-category is a category whose Hom sets are strict (n-1)-categories. Precisely, a strict n-category is a category enriched over strict (n-1)-categories. For example, a strict 1-category is an ordinary category. 2. The notion of a weak n-category is obtained from the strict one by weakening the conditions like associativity of composition to hold only up to coherent isomorphisms in the weak sense. 3. One can define an ∞-category as a kind of a colimit of n-categories. Conversely, if one has the notion of a (weak) ∞-category (say a quasi-category) in the beginning, then a weak n-category can be defined as a type of a truncated ∞-category. N: natural 1.
A natural transformation is, roughly, a map between functors. Precisely, given a pair of functors F, G from a category C to a category D, a natural transformation φ from F to G is a set of morphisms in D, {φ_x: F(x) → G(x) | x ∈ Ob(C)}, satisfying the condition: for each morphism f: x → y in C, φ_y ∘ F(f) = G(f) ∘ φ_x. For example, writing GL_n(R) for the group of invertible n-by-n matrices with coefficients in a commutative ring R, we can view GL_n as a functor from the category CRing of commutative rings to the category Grp of groups. Similarly, R ↦ R^* is a functor from CRing to Grp. Then the determinant det is a natural transformation from GL_n to (−)^*. N: 2. A natural isomorphism is a natural transformation that is an isomorphism (i.e., admits the inverse). N: nerve The nerve functor N is the functor from Cat to sSet given by N(C)_n = Hom_Cat([n], C). For example, if φ is a functor in N(C)_2 (called a 2-simplex), let x_i = φ(i), 0 ≤ i ≤ 2. Then φ(0 → 1) is a morphism f: x_0 → x_1 in C and also φ(1 → 2) = g: x_1 → x_2 for some g in C. Since 0 → 2 is 0 → 1 followed by 1 → 2 and since φ is a functor, φ(0 → 2) = g ∘ f. In other words, φ encodes f, g and their composition. N: normal A monomorphism is normal if it is the kernel of some morphism, and an epimorphism is conormal if it is the cokernel of some morphism. A category is normal if every monomorphism is normal. O: object 1. An object is part of the data defining a category. O: 2. An [adjective] object in a category C is a contravariant functor (or presheaf) from some fixed category corresponding to the "adjective" to C. For example, a simplicial object in C is a contravariant functor from the simplicial category to C and a Γ-object is a pointed contravariant functor from Γ (roughly the pointed category of pointed finite sets) to C provided C is pointed. O: op-fibration A functor π: C → D is an op-fibration if, for each object x in C and each morphism g: π(x) → y in D, there is at least one π-coCartesian morphism f: x → y' in C such that π(f) = g. In other words, π is the dual of a Grothendieck fibration. opposite The opposite category of a category is obtained by reversing the arrows. For example, if a partially ordered set is viewed as a category, taking its opposite amounts to reversing the ordering. P: perfect Sometimes synonymous with "compact". See perfect complex. pointed A category (or ∞-category) is called pointed if it has a zero object. polynomial A functor from the category of finite-dimensional vector spaces to itself is called a polynomial functor if, for each pair of vector spaces V, W, F: Hom(V, W) → Hom(F(V), F(W)) is a polynomial map between the vector spaces. A Schur functor is a basic example. preadditive A category is preadditive if it is enriched over the monoidal category of abelian groups. More generally, it is R-linear if it is enriched over the monoidal category of R-modules, for R a commutative ring. presentable Given a regular cardinal κ, a category is κ-presentable if it admits all small colimits and is κ-accessible. A category is presentable if it is κ-presentable for some regular cardinal κ (hence presentable for any larger cardinal). Note: Some authors call a presentable category a locally presentable category. presheaf Another term for a contravariant functor: a functor from a category C^op to Set is a presheaf of sets on C and a functor from C^op to sSet is a presheaf of simplicial sets or simplicial presheaf, etc. A topology on C, if any, tells which presheaf is a sheaf (with respect to that topology). product 1.
The product of a family of objects X_i in a category C indexed by a set I is the projective limit lim← of the functor I → C, i ↦ X_i, where I is viewed as a discrete category. It is denoted by ∏_i X_i and is the dual of the coproduct of the family. 2. The product of a family of categories C_i indexed by a set I is the category denoted by ∏_i C_i whose class of objects is the product of the classes of objects of the C_i and whose hom-sets are ∏_i Hom_{C_i}(X_i, Y_i); the morphisms are composed component-wise. It is the dual of the disjoint union. profunctor Given categories C and D, a profunctor (or a distributor) from C to D is a functor of the form D^op × C → Set. projective 1. An object A in an abelian category is projective if the functor Hom(A, −) is exact. It is the dual of an injective object. 2. The term "projective limit" is another name for an inverse limit. PROP A PROP is a symmetric strict monoidal category whose objects are natural numbers and whose tensor product is addition of natural numbers. pseudoalgebra A pseudoalgebra is a 2-category-version of an algebra for a monad (with a monad replaced by a 2-monad). Q: Q Q-category. Quillen Quillen's theorem A provides a criterion for a functor to be a weak equivalence. R: reflect 1. A functor is said to reflect identities if it has the property: if F(k) is an identity then k is an identity as well. 2. A functor is said to reflect isomorphisms if it has the property: if F(k) is an isomorphism then k is an isomorphism as well. representable A set-valued contravariant functor F on a category C is said to be representable if it belongs to the essential image of the Yoneda embedding C → Fct(C^op, Set); i.e., F ≅ Hom_C(−, Z) for some object Z. The object Z is said to be the representing object of F. retraction A morphism is a retraction if it has a right inverse. rig A rig category is a category with two monoidal structures, one distributing over the other. S: section A morphism is a section if it has a left inverse. For example, the axiom of choice says that any surjective function admits a section. Segal space Segal spaces are certain simplicial spaces, introduced as models for (∞, 1)-categories. semisimple An abelian category is semisimple if every short exact sequence splits. For example, a ring is semisimple if and only if the category of modules over it is semisimple. Serre functor Given a k-linear category C over a field k, a Serre functor f: C → C is an auto-equivalence such that Hom(A, B) ≅ Hom(B, f(A))^* for any objects A, B. simple object A simple object in an abelian category is an object A that is not isomorphic to the zero object and whose every subobject is isomorphic to zero or to A. For example, a simple module is precisely a simple object in the category of (say left) modules. simplex category The simplex category Δ is the category where an object is a set [n] = { 0, 1, …, n }, n ≥ 0, totally ordered in the standard way and a morphism is an order-preserving function. simplicial category A category enriched over simplicial sets. Simplicial localization Simplicial localization is a method of localizing a category. simplicial object A simplicial object in a category C is roughly a sequence of objects X_0, X_1, X_2, … in C that forms a simplicial set. In other words, it is a covariant or contravariant functor Δ → C. For example, a simplicial presheaf is a simplicial object in the category of presheaves.
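The monad entry above (with its example T(X) = G × X) can be sketched in Haskell: for a monoid g, the endofunctor T x = (g, x) carries exactly the unit and multiplication described there, the Writer monad in disguise. All names below (W, eta, mu, kleisli) are illustrative, and the group of the entry is relaxed to a monoid:

```haskell
-- Sketch: for a monoid g, T x = (g, x) is a monad with
-- η x = (mempty, x) and μ (g, (h, x)) = (g <> h, x).
newtype W g x = W (g, x) deriving Show

eta :: Monoid g => x -> W g x
eta x = W (mempty, x)                   -- the unit η

mu :: Monoid g => W g (W g x) -> W g x
mu (W (g, W (h, x))) = W (g <> h, x)    -- the multiplication μ

-- Kleisli composition, built from μ (cf. the Kleisli category entry):
kleisli :: Monoid g => (a -> W g b) -> (b -> W g c) -> (a -> W g c)
kleisli f k a = let W (g, b) = f a; W (h, c) = k b in W (g <> h, c)

main :: IO ()
main = do
  print (eta 'x' :: W String Char)        -- W ("",'x')
  print (mu (W ("ab", W ("cd", True))))   -- W ("abcd",True)
```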
S: simplicial set A simplicial set is a contravariant functor from Δ to Set, where Δ is the simplex category, a category whose objects are the sets [n] = { 0, 1, …, n } and whose morphisms are order-preserving functions. One writes Xn = X([n]) and an element of the set Xn is called an n-simplex. For example, HomΔ(−, [n]) is a simplicial set called the standard n-simplex, denoted Δn. By Yoneda's lemma, Nat(Δn, X) ≃ Xn. site A category equipped with a Grothendieck topology. S: skeletal 1. A category is skeletal if isomorphic objects are necessarily identical. 2. A (not unique) skeleton of a category is a full subcategory that is skeletal. S: slice Given a category C and an object A in it, the slice category C/A of C over A is the category whose objects are all the morphisms in C with codomain A, whose morphisms are morphisms in C such that if f is a morphism from pX: X → A to pY: Y → A, then pY ∘ f = pX in C, and whose composition is that of C. S: small 1. A small category is a category in which the class of all morphisms is a set (i.e., not a proper class); otherwise large. A category is locally small if the morphisms between every pair of objects A and B form a set. Some authors assume a foundation in which the collection of all classes forms a "conglomerate", in which case a quasicategory is a category whose objects and morphisms merely form a conglomerate. (NB: some authors use the term "quasicategory" with a different meaning.) 2. An object in a category is said to be small if it is κ-compact for some regular cardinal κ. The notion prominently appears in Quillen's small object argument (cf. https://ncatlab.org/nlab/show/small+object+argument). species A (combinatorial) species is an endofunctor on the groupoid of finite sets with bijections. It is categorically equivalent to a symmetric sequence. S: stable An ∞-category is stable if (1) it has a zero object, (2) every morphism in it admits a fiber and a cofiber and (3) a triangle in it is a fiber sequence if and only if it is a cofiber sequence. strict A morphism f in a category admitting finite limits and finite colimits is strict if the natural morphism Coim(f) → Im(f) is an isomorphism. strict n-category A strict 0-category is a set and for any integer n > 0, a strict n-category is a category enriched over strict (n−1)-categories. For example, a strict 1-category is an ordinary category. Note: the term "n-category" typically refers to a "weak n-category", not a strict one. subcanonical A topology on a category is subcanonical if every representable contravariant functor on C is a sheaf with respect to that topology. Generally speaking, some flat topology may fail to be subcanonical; but flat topologies appearing in practice tend to be subcanonical. subcategory A category A is a subcategory of a category B if there is an inclusion functor from A to B. subobject Given an object A in a category, a subobject of A is an equivalence class of monomorphisms to A; two monomorphisms f, g are considered equivalent if f factors through g and g factors through f. subquotient A subquotient is a quotient of a subobject. subterminal object A subterminal object is an object X such that every object has at most one morphism into X. symmetric monoidal category A symmetric monoidal category is a monoidal category (i.e., a category with ⊗) that has maximally symmetric braiding. symmetric sequence A symmetric sequence is a sequence of objects with actions of symmetric groups. It is categorically equivalent to a (combinatorial) species.
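To unpack the functorial definition of a simplicial set given above: giving a contravariant functor X from Δ to Set is equivalent to giving the sets Xn together with face and degeneracy maps, the images under X of the coface and codegeneracy maps generating Δ. A standard sketch in LaTeX:

\[
d_i\colon X_n \to X_{n-1}, \qquad s_i\colon X_n \to X_{n+1} \qquad (0 \le i \le n),
\]
\[
d_i\, d_j = d_{j-1}\, d_i \ \ (i<j), \qquad s_i\, s_j = s_{j+1}\, s_i \ \ (i \le j),
\]

together with the usual mixed identities relating the d_i and s_j.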
T: t-structure A t-structure is an additional structure on a triangulated category (more generally a stable ∞-category) that axiomatizes the notions of complexes whose cohomology is concentrated in non-negative degrees or non-positive degrees. T: Tannakian duality The Tannakian duality states that, in an appropriate setup, to give a morphism f: X → Y is to give a pullback functor f* along it. In other words, the hom-set Hom(X, Y) can be identified with the functor category Fct(D(Y), D(X)), perhaps in the derived sense, where D(X) is the category associated to X (e.g., the derived category). T: tensor category Usually synonymous with monoidal category (though some authors distinguish between the two concepts). tensor triangulated category A tensor triangulated category is a category that carries the structure of a symmetric monoidal category and that of a triangulated category in a compatible way. tensor product Given a monoidal category B, the tensor product of functors F: Cop → B and G: C → B is the coend: F ⊗C G = ∫c∈C F(c) ⊗ G(c). terminal 1. An object A is terminal (also called final) if there is exactly one morphism from each object to A; e.g., singletons in Set. It is the dual of an initial object. 2. An object A in an ∞-category C is terminal if MapC(B, A) is contractible for every object B in C. thick subcategory A full subcategory of an abelian category is thick if it is closed under extensions. thin A thin category is a category where there is at most one morphism between any pair of objects. triangulated category A triangulated category is a category where one can talk about distinguished triangles, generalizations of exact sequences. The derived category of an abelian category is a prototypical example of a triangulated category; it need not itself be an abelian category. U: universal 1. Given a functor f: C → D and an object X in D, a universal morphism from X to f is an initial object in the comma category (X ↓ f). (Its dual is also called a universal morphism.) For example, take f to be the forgetful functor Veck → Set and X a set. An initial object of (X ↓ f) is a function j: X → f(VX). That it is initial means that if k: X → f(W) is another morphism, then there is a unique morphism from j to k, which consists of a linear map VX → W that extends k via j; that is to say, VX is the free vector space generated by X. U: 2. Stated more explicitly, given f as above, a morphism X → f(uX) in D is universal if and only if the natural map HomC(uX, c) → HomD(X, f(c)), α ↦ f(α) ∘ (X → f(uX)), is bijective. In particular, if HomC(uX, −) ≃ HomD(X, f(−)), then taking c to be uX one gets a universal morphism by sending the identity morphism. In other words, having a universal morphism is equivalent to the representability of the functor HomD(X, f(−)). W: Waldhausen category A Waldhausen category is, roughly, a category with families of cofibrations and weak equivalences. wellpowered A category is wellpowered if for each object there is only a set of pairwise non-isomorphic subobjects. Y: Yoneda 1. The Yoneda lemma says: for each set-valued contravariant functor F on C and an object X in C, there is a natural bijection Nat(HomC(−, X), F) ≃ F(X), where Nat means the set of natural transformations. In particular, the functor C → Fct(Cop, Set), X ↦ HomC(−, X), is fully faithful and is called the Yoneda embedding. 2. If F: C → D is a functor and y is the Yoneda embedding of C, then the Yoneda extension of F is the left Kan extension of F along y. Z: zero A zero object is an object that is both initial and terminal, such as a trivial group in Grp.
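As a concrete instance of the naturality condition from the natural transformation entry earlier in the glossary, the determinant example can be checked directly. For a ring homomorphism f: R → S, naturality of det: GLn ⇒ (−)* asserts the commutativity of the square

\[
\begin{array}{ccc}
\operatorname{GL}_n(R) & \xrightarrow{\ \det\ } & R^{*}\\
{\scriptstyle \operatorname{GL}_n(f)}\big\downarrow & & \big\downarrow{\scriptstyle f^{*}}\\
\operatorname{GL}_n(S) & \xrightarrow{\ \det\ } & S^{*}
\end{array}
\]

which holds because the determinant is a polynomial with integer coefficients in the matrix entries, so f(det A) = det(f(A)) for every A in GLn(R).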
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Xylylene dibromide** Xylylene dibromide: Xylylene dibromide is an organic compound with the formula C6H4(CH2Br)2. It is an off-white solid that, like other benzyl halides, is a strong lachrymator. It is a useful reagent owing to the convenient reactivity of its two C-Br bonds. Two other isomers are known, para- and meta-xylylene dibromide. Synthesis: It is prepared by the photochemical reaction of ortho-xylene with bromine: C6H4(CH3)2 + 2 Br2 → C6H4(CH2Br)2 + 2 HBr Reactions: Further bromination gives the tetrabromide: C6H4(CH2Br)2 + 2 Br2 → C6H4(CHBr2)2 + 2 HBr. Upon reaction with thiourea followed by hydrolysis of the intermediate bisisothiouronium salts, xylylene dibromide can be converted to the dithiol C6H4(CH2SH)2. Xylylene dibromide is a precursor to the ephemeral molecule ortho-quinonedimethane, also known as xylylene. This species can be trapped when the dehalogenation is conducted in the presence of iron carbonyl. Coupling of xylylene dibromide by treatment with lithium metal gives dibenzocyclooctane, a precursor to dibenzocyclooctadiene. Related compounds: Xylylene dichloride, the dichloro analogue of the title compound. Benzyl bromide, the simplest benzylic bromide.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Popliteal vein** Popliteal vein: The popliteal vein is a vein of the lower limb. It is formed from the anterior tibial vein and the posterior tibial vein. It travels medial to the popliteal artery, and becomes the femoral vein. It drains blood from the leg. It can be assessed using medical ultrasound. It can be affected by popliteal vein entrapment. Structure: The popliteal vein is formed by the junction of the venae comitantes of the anterior tibial vein and the posterior tibial vein at the lower border of the popliteus muscle. It travels on the medial side of the popliteal artery. It is superficial to the popliteal artery. As it ascends through the fossa, it crosses behind the popliteal artery so that it comes to lie on its lateral side. It passes through the adductor hiatus (the opening in the adductor magnus muscle) to become the femoral vein. Structure: Tributaries The tributaries of the popliteal vein include: veins that correspond to branches given off by the popliteal artery (see popliteal artery); the small saphenous vein, which perforates the deep fascia and passes between the two heads of the gastrocnemius muscle to end in the popliteal vein; and the fibular veins. Variation The popliteal vein may be doubled in up to 35% of people. Function: The popliteal vein drains blood from the leg. Clinical significance: The popliteal vein is readily palpated in the popliteal fossa adjacent to the adductor magnus muscle. The popliteal vein can be visualised using medical ultrasound, including Doppler ultrasonography. It may be affected by a thrombus. Popliteal vein entrapment The popliteal vein may become trapped. This reduces the flow of blood out of the leg, causing oedema, pain, and venous ulcers. Entrapment is usually caused by the gastrocnemius muscle. Venography (using an x-ray) or magnetic resonance imaging can be used to investigate it. Surgery can be used to remove the tissue creating pressure.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pongee** Pongee: Pongee is a type of slub-woven fabric, created by weaving with yarns that have been spun by varying the tightness of the yarn's twist at various intervals. Pongee is typically made from silk, and has a textured, "slubbed" appearance; pongee silks range from appearing similar to satin to appearing matte and unreflective. Though pongee is typically made out of silk, it can be woven from a variety of fibres, such as cotton, linen and wool. Pongee: In the early 20th century, pongee was an important export from China to the United States. Pongee is still woven in silk by many mills across China, especially along the banks of the Yangtze River at mills in Sichuan, Anhui, Zhejiang and Jiangsu provinces. Pongee varies in weight from 36 to 50 grams per square metre (0.12 to 0.16 oz/sq ft); lighter variants are known as Paj. Pongee types: Pongee is created by weaving yarns that have been twisted unevenly at various points; the resulting fabric typically has horizontal "slubs" running along the weft, where yarns increase and decrease in thickness. Pongee fabrics vary in their weight, fibre types, weave and yarn types; though some types of pongee display large, visible slubs, others, such as tsumugi, may display only minimally varying yarn thickness, resulting in a still-textured, but far more uniform, pongee fabric.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sharpening jig** Sharpening jig: A sharpening jig is often used when sharpening woodworking tools. Many of the tools used in woodworking have steel blades which are sharpened to a fine edge. A cutting edge is created on the blade at the point at which two surfaces of the blade meet. To create this cutting edge a bevel is formed on the blade, usually by grinding. This bevel is subsequently refined by honing until a satisfactorily sharp edge is created. Sharpening jig: The purpose of the sharpening jig is to hold the blade or tool at the desired angle while grinding or honing the bevel. In some cases, the angle of the bevel is critical to the performance of the cutting edge—a jig allows for repeatability of this angle over a number of sharpening sessions. Sharpening jig: There are many styles of jig available commercially. Fundamentally, all jigs are similar in that they allow the user to clamp the blade or tool in some way. The jig then has some means of referencing the clamped blade to the grinding or honing apparatus so that the bevel angle is maintained. One of the more common approaches is to have the jig ride on a roller. These types of jigs are usually used with a sharpening stone or plate, such as a waterstone, oilstone or diamond abrasive surface. Other types of jigs are used to present the blade to the wheel of a grinder. There are generally two types of hand sharpening jigs, push jigs and side-to-side jigs. Push jigs run perpendicular to the length of the stone and a side-to-side jig runs with the blade parallel to the length of the stone. Sharpening jig: Many woodworkers prefer to learn the technique of sharpening by hand. This method does not require a jig, but requires a lot of practice to achieve satisfactory results – especially in situations where the bevel angle is critical.
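Since the bevel angle is set by simple geometry, a short calculation illustrates the repeatability a jig provides. The Python sketch below uses hypothetical dimensions; the extension formula assumes an idealized roller jig whose clamp sits a fixed height above the stone, an assumption made for illustration rather than the specification of any commercial jig.

```python
import math

# For a flat ground bevel: sin(bevel angle) = blade thickness / bevel length.
def bevel_angle_deg(thickness_mm: float, bevel_length_mm: float) -> float:
    return math.degrees(math.asin(thickness_mm / bevel_length_mm))

# Idealized roller jig: blade clamped h_mm above the stone, so the blade
# extension beyond the clamp sets the presented angle: extension = h / tan(angle).
def extension_for_angle_mm(h_mm: float, angle_deg: float) -> float:
    return h_mm / math.tan(math.radians(angle_deg))

# A 3 mm thick chisel ground with a 6 mm long bevel carries a 30 degree bevel:
print(round(bevel_angle_deg(3.0, 6.0), 1))           # 30.0
# With a (hypothetical) 12 mm clamp height, a 25 degree bevel needs ~25.7 mm extension:
print(round(extension_for_angle_mm(12.0, 25.0), 1))  # 25.7
```

Because the extension is then the only free variable, recording it (or using a stop block) reproduces the same bevel angle across sharpening sessions, which is the repeatability described above.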
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ablative brain surgery** Ablative brain surgery: Ablative brain surgery (also known as brain lesioning) is the surgical ablation by various methods of brain tissue to treat neurological or psychological disorders. The word "ablation" stems from the Latin word ablatus, meaning "carried away". In most cases, however, ablative brain surgery does not involve removing brain tissue, but rather destroying tissue and leaving it in place. The lesions it causes are irreversible. There are some target nuclei for ablative surgery and deep brain stimulation: the motor thalamus, the globus pallidus, and the subthalamic nucleus. Ablative brain surgery was first introduced by Pierre Flourens (1794–1867), a French physiologist. He removed different parts of the nervous system from animals and observed what effects were caused by the removal of certain parts. For example, if an animal could not move its arm after a certain part was removed, it was assumed that the region controlled arm movement. The method of removal of part of the brain was termed "experimental ablation". With the use of experimental ablation, Flourens claimed to find the area of the brain that controlled heart rate and breathing. Ablative brain surgery is also often used as a research tool in neurobiology. For example, by ablating specific brain regions and observing differences in animals subjected to behavioral tests, the functions of the removed areas may be inferred. Ablative brain surgery: Experimental ablation is used in research on animals. Such research is considered unethical on humans due to the irreversible effects and damage caused by the lesion and by the ablation of brain tissue. However, the effects of brain lesions (caused by accidents or diseases) on behavior can be observed to draw conclusions about the functions of different parts of the brain. Uses: Parkinson's disease Parkinson's disease (PD) is a progressive degenerative disease of the basal ganglia, characterized by the loss of dopaminergic cells of the substantia nigra, pars compacta (SNc). Surgical ablation has been used to treat Parkinson's disease. In the 1990s, the pallidum was a common surgical target. Unilateral pallidotomy improves tremor and dyskinesia on one side of the body (opposite the side of the brain surgery), but bilateral pallidotomy was found to cause irreversible deterioration in speech and cognition. Two other rapidly evolving or potential surgical approaches to Parkinson's disease are deep brain stimulation (DBS) and restorative therapies. Deep brain stimulation is a surgical treatment involving the implantation of a neurostimulator medical device, sometimes called a 'brain pacemaker', which sends electrical impulses to specific parts of the brain. Generally, deep brain stimulation surgery is considered preferable to ablation because it has the same effect and is adjustable and reversible. The advent of deep brain stimulation has been an important advance in the treatment of Parkinson's disease. DBS may be employed in the management of medication-refractory tremor or treatment-related motor complications, and may benefit between 4.5% and 20% of patients at some stage of their disease course. DBS at high frequency often has behavioral effects that are similar to those of lesioning. Uses: In Australia, patients with PD are reviewed by specialized DBS teams who assess the likely benefits and risks associated with DBS for each individual.
The aim of these guidelines is to assist neurologists and general physicians in identifying patients who may benefit from referral to a DBS team. Common indications for referral are motor fluctuations and/or dyskinesias that are not adequately controlled with optimised medical therapy, medication-refractory tremor, and intolerance to medical therapy. Early referral for consideration of DBS is recommended as soon as optimised medical therapy fails to offer satisfactory motor control. The thalamus is another potential target for treating tremor; in some countries, so is the subthalamic nucleus, although not in the United States due to its severe side effects. Stimulation or lesioning of portions of the thalamus has been used for various psychiatric and neurological conditions; when practiced for movement disorders, the target is in the motor nuclei of the thalamus. Thalamotomy is another surgical option in the treatment of Parkinson's disease. However, rigidity is not fully controlled after successful thalamotomy; it is replaced by hypotonia. Furthermore, significant complications can occur: for example, left ventral-lateral thalamotomy in a right-handed patient results in verbal deterioration, while right thalamotomy causes visual-spatial defects. However, for patients for whom DBS is not feasible, ablation of the subthalamic nucleus has been shown to be safe and effective. DBS is not suitable for certain patients; patients with immunodeficiencies are one example. A major practical barrier to DBS, however, is its cost, which puts it out of reach in less wealthy regions of the world. In such circumstances, a permanent lesion of the subthalamic nucleus (STN), typically made on the non-dominant side of the brain, is the more practical surgical procedure; a lesion may also be favoured to avoid repeated pacemaker replacements. Moreover, patients who gain relief from stimulation without side effects but who need a pacemaker replacement may instead have a lesion made at the same site. The stimulation parameters act as a guide for the preferred size of the lesion. To identify the part of the brain that is to be destroyed, new techniques such as microelectrode mapping have been developed. Uses: Cluster headaches Cluster headaches occur in cyclical patterns or clusters, which gives the condition its name. Cluster headache is one of the most painful types of headache. It is sometimes called the "alarm clock headache" because it commonly awakens sufferers in the middle of the night with intense pain in or around the eye on one side of the head. The bouts of frequent attacks may last from weeks to months. When drug treatment fails, an invasive nerve stimulation procedure shows promise. Cluster headaches have been treated by ablation of the trigeminal nerve, but this has not been very effective. Other surgical treatments for cluster headaches are under investigation. Uses: Psychiatric disorders Ablative psychosurgery continues to be used in a few centres in various countries. In the US there are a few centres, including Massachusetts General Hospital, that carry out ablative psychosurgical procedures. Belgium, the United Kingdom, and Venezuela are other examples of countries where the technique is still used.
Uses: In the People's Republic of China, surgical ablation was used to treat psychological and neurological disorders, particularly schizophrenia, but also including clinical depression and obsessive-compulsive disorder. The official Xinhua News Agency has since reported that China's Ministry of Health has banned the procedure for schizophrenia and severely restricted the practice for other conditions. In recent studies, deep brain stimulation (DBS) has begun to replace ablative brain surgery for severe psychiatric conditions that are generally treatment resistant, such as obsessive-compulsive disorder. Methods: Experimental ablation involves drilling holes in the skull of an animal and inserting an electrode or a small tube called a cannula into the brain using a stereotactic apparatus. A brain lesion can be created by conducting electricity through the electrode, which damages the targeted area of the brain. Likewise, chemicals can be injected through the cannula to damage the area of interest. By comparing the animal's behavior before and after the lesion, the researcher can infer the function of the damaged brain region. Recently, lasers have been shown to be effective in the ablation of both cerebral and cerebellar tissue. A laser technology called MRI-guided laser ablation, for example, allows great precision in the location and size of the lesion and causes little to no thermal damage to adjacent tissue. The Texas Children's Hospital is one of the first to use this MRI-guided method to destroy and treat brain lesions effectively and precisely. A prime example is a patient at this hospital who, following this treatment, no longer undergoes frequent seizures. MRI-guided laser ablation is also used for ablating brain, prostate and liver tumors. Heating and freezing are also alternative methods of ablation. Methods: Sham lesions A sham lesion is a way for researchers to give a placebo lesion to animals involved in experimental ablation. Whenever a cannula or electrode is placed into brain tissue, unintended additional damage is caused by the instrument itself. A sham lesion is simply the placement of the lesioning instrument into the same spot it would be placed for a regular lesion, only with no chemical or electrical process. This technique allows researchers to compare against an appropriate control group by controlling for the damage done separately from the intended lesion. Methods: Excitotoxic lesions An excitotoxic lesion is produced by injecting an excitatory amino acid into the brain through a cannula. The amino acid is used to kill neurons by essentially stimulating them to death. Kainic acid is an example of an excitatory amino acid used in this type of lesion. One crucial benefit of this lesion is its specificity: the chemicals are selective in that they do not damage the surrounding axons of nearby neurons, but only the target neurons. Methods: Radio frequency lesions Radio frequency (RF) lesions are produced by electrodes placed in the brain tissue. RF current is an alternating current of very high frequency. As the current passes through tissue, it produces heat that kills cells in the surrounding area. Unlike excitotoxic lesions, RF lesions destroy everything in the vicinity of the electrode tip. According to Dr. Charles O'Brien, the use of ablative brain surgery on the nucleus accumbens is the wrong method to treat addictions. Dr.
John Adler, however, believes ablation can provide valuable information about how the nucleus accumbens works.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Congenital chloride diarrhea** Congenital chloride diarrhea: Congenital chloride diarrhea (CCD, also congenital chloridorrhea or Darrow Gamble syndrome) is a genetic disorder due to an autosomal recessive mutation on chromosome 7. The mutation is in downregulated-in-adenoma (DRA), a gene that encodes a membrane protein of intestinal cells. The protein belongs to the solute carrier 26 family of membrane transport proteins. More than 20 mutations in the gene are known to date. A rare disease, CCD occurs in all parts of the world but is more common in some populations with genetic founder effects, most notably in Finland. Symptoms and signs: Chronic diarrhoea starts in the early neonatal period and is usually accompanied by failure to thrive. Pathophysiology: CCD causes persistent secretory diarrhea. In a fetus, it leads to polyhydramnios and premature birth. Immediately after birth, it leads to dehydration, hypoelectrolytemia, hyperbilirubinemia, abdominal distention, and failure to thrive. Diagnosis: CCD may be detectable on prenatal ultrasound. After birth, signs in affected babies typically are abdominal distension, visible peristalsis, and watery stools persistent from birth that show chloride loss of more than 90 mmol/L. An important diagnostic feature of this diarrhea is that it is the only type of diarrhea that causes metabolic alkalosis rather than metabolic acidosis. Treatment: Available treatments address the symptoms of CCD, not the underlying defect. Early diagnosis and aggressive salt replacement therapy result in normal growth and development, and generally good outcomes. Replacement of NaCl and KCl has been shown to be effective in children. History: Observations leading to the characterization of the SLC26 family were based on research on rare human diseases. Three rare recessive diseases in humans have been shown to be caused by genes of this family. Diastrophic dysplasia, congenital chloride diarrhea, and Pendred syndrome are caused by the highly related genes SLC26A2 (first called DTDST), SLC26A3 (first called CLD or DRA), and SLC26A4 (first called PDS), respectively. Two of these diseases, diastrophic dysplasia and congenital chloride diarrhea, are Finnish heritage diseases.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SHPRH** SHPRH: E3 ubiquitin-protein ligase SHPRH is an enzyme that in humans is encoded by the SHPRH gene. Function: SHPRH is a ubiquitously expressed protein that contains motifs characteristic of several DNA repair proteins, transcription factors, and helicases. [supplied by OMIM]
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scunthorpe problem** Scunthorpe problem: The Scunthorpe problem is the unintentional blocking of online content by a spam filter or search engine because its text contains a string (or substring) of letters that appears to have an obscene or otherwise unacceptable meaning. Names, abbreviations, and technical terms are most often cited as being affected by the issue. The problem arises because computers can easily identify strings of text within a document, but correctly interpreting such words requires considerable ability to evaluate a wide range of contexts, possibly across many cultures, which is an extremely difficult task. As a result, broad blocking rules may produce false positives affecting innocent phrases. Etymology and origin: The problem was named after an incident in 1996 in which AOL's profanity filter prevented residents of the town of Scunthorpe, North Lincolnshire, England, from creating accounts with AOL, because the town's name contains the substring "cunt". In the early 2000s, Google's opt-in SafeSearch filters made the same error, with local services and businesses that included Scunthorpe in their names or URLs among those mistakenly excluded from appearing in search results. Workarounds: The Scunthorpe problem is challenging to completely solve due to the difficulty of creating a filter capable of understanding words in context. One solution involves creating a whitelist of known false positives: any word appearing on the whitelist can be ignored by the filter, even though it contains text that would otherwise not be allowed (a minimal code sketch of this approach appears after the examples below). Other examples: Mistaken decisions by obscenity filters include: Refused web domain names and account registrations In April 1998, Jeff Gold attempted to register the domain name shitakemushrooms.com, but due to the substring shit he was blocked by an InterNIC filter prohibiting the "seven dirty words". (Shiitake, also commonly spelled shitake, is the Japanese name for the edible fungus Lentinula edodes.) In 2000, a Canadian television news story on web filtering software found that the website for the Montreal Urban Community (Communauté Urbaine de Montréal, in French) was entirely blocked because its domain name was its French acronym CUM (www.cum.qc.ca); "cum" (among other meanings) is an English-language vulgar slang term for semen. Other examples: In February 2004 in Scotland, Craig Cockburn reported that he was unable to use his surname (pronounced "Coburn", IPA: /ˈkoʊbərn/) with Hotmail because it contains the substring cock, a vulgar slang word for the penis. Separately he had problems with his workplace email because his job title, software specialist, contained the substring Cialis, an erectile dysfunction medication commonly mentioned in spam e-mails. Hotmail initially told him to spell his name C0ckburn (with a zero instead of the letter "o") but later reversed the ban. In 2010, he had a similar problem registering on the BBC website, where again the first four characters of his surname caused a problem for the content filter. Other examples: In February 2006, Linda Callahan was initially prevented from registering her name with Yahoo! as an e-mail address as it contained the substring Allah. Yahoo! later reversed the ban. In July 2008, Dr. Herman I. Libshitz could not register an e-mail address containing his name with Verizon because his surname contained the substring shit, and Verizon initially rejected his request for an exception.
In a subsequent statement, a Verizon spokeswoman apologized for not approving his desired e-mail address. Blocked web searches In the months leading up to January 1996, some web searches for Super Bowl XXX were being filtered, because the Roman numeral for the game (XXX) is also used to identify pornography. Other examples: Gareth Roelofse, the web designer for RomansInSussex.co.uk, noted in 2004: "We found many library Net stations, school networks and Internet cafes block sites with the word 'sex' in the domain name. This was a challenge for RomansInSussex.co.uk because its target audience is school children." In 2008, the filter of the free wireless service of the town of Whakatane in New Zealand blocked searches involving the town's own name because the filter's phonetic analysis deemed the "whak" to sound like fuck; the town name is in Māori, and in the Māori language "wh" is most commonly pronounced as /f/. The town subsequently put the town name on the filter's whitelist. Other examples: In July 2011, web searches in China on the name Jiang were blocked following claims on the Sina Weibo microblogging site that former Chinese Communist Party (CCP) general secretary Jiang Zemin had died. Since the word "Jiang", meaning "river", is written with the same Chinese character (江), searches related to rivers including the Yangtze (Cháng Jiāng) produced the message: "According to the relevant laws, regulations and policies, the results of this search cannot be displayed." In February 2018, web searches on Google's shopping platform were blocked for items such as glue guns, Guns N' Roses, and Burgundy wine after Google hastily patched its search system, which had been displaying results for weapons and accessories that violated Google's stated policies. Other examples: Blocked emails In 2001, Yahoo! Mail introduced an email filter which automatically replaced JavaScript-related strings with alternative versions, to prevent the possibility of cross-site scripting in HTML email. The filter would hyphenate the terms "JavaScript", "JScript", "VBScript" and "LiveScript", and replaced "eval", "mocha" and "expression" with the similar but not quite synonymous terms "review", "espresso" and "statement", respectively. No attempt was made to limit these string replacements to actual script sections or attributes, or to respect word boundaries, in case that would leave loopholes open. This resulted in such errors as medireview in place of medieval. Other examples: In February 2003, Members of Parliament at the British House of Commons found that a new spam filter was blocking emails containing references to the Sexual Offences Bill then under debate, as well as some messages relating to a Liberal Democrat consultation paper on censorship. It also blocked emails sent in Welsh because it did not recognise the language. In October 2004, it was reported that the Horniman Museum in London was failing to receive some of its email because filters mistakenly treated its name as a version of the words horny man. Blocked for words with multiple meanings In October 2004, e-mails advertising the pantomime Dick Whittington sent to schools in the UK were blocked by school computers because of the use of the name Dick, sometimes used as slang for penis.
In May 2006, a man in Manchester in the UK found that e-mails he wrote to his local council to complain about a planning application had been blocked as they contained the word erection when referring to a structure. Other examples: Blocked e-mails and web searches relating to The Beaver, a magazine based in Winnipeg, caused the publisher to change its name to Canada's History in 2010, after 89 years of publication. Publisher Deborah Morrison commented: "Back in 1920, The Beaver was a perfectly appropriate name. And while its other meaning [vulva] is nothing new, its ambiguity began to pose a whole new challenge with the advance of the Internet. The name became an impediment to our growth". Other examples: In June 2010, Twitter blocked a user from Luxembourg 29 minutes after he had opened his account and posted his first tweet. The tweet read: "Finally! A pair of great tits (Parus major) has moved into my birdhouse!" Despite including the Latin name to point out that the tweet was about birds, any attempts to unblock the account were in vain. Other examples: In 2011, a councillor in Dudley found an email flagged for profanity by his council's security software after mentioning the Black Country dish faggots (a type of meatball, but also a pejorative term for gay men). Residents of Penistone in South Yorkshire have had e-mails blocked because the town's name includes the substring penis. Residents of Clitheroe (Lancashire, England) have been repeatedly inconvenienced because their town's name includes the substring clit, which is short for "clitoris". Résumés containing references to graduating with Latin honors such as cum laude, magna cum laude, and summa cum laude have been blocked by spam filters because of inclusion of the word cum, which is Latin for with (in this usage), but is sometimes used as slang for semen or ejaculation in English usage. News articles In June 2008, a news site run by the anti-LGBT lobby group American Family Association filtered an Associated Press article on sprinter Tyson Gay, replacing instances of "gay" with "homosexual", thus rendering his name as "Tyson Homosexual". This same function had previously changed the name of basketball player Rudy Gay to "Rudy Homosexual". The word or string "ass" may be replaced by "butt", resulting in "clbuttic" for "classic", "buttignment" for "assignment", and "buttbuttinate" for "assassinate". Other In 2008, Microsoft confirmed that its policy to prevent the use of words relating to sexual orientation had meant that Richard Gaywood's name was deemed offensive and could not be used in his "gamertag" or in the "Real Name" field of his bio. Other examples: In 2011, the release of Pokémon Black and White introduced Cofagrigus, which could not be traded online to other players without a nickname because its species name contained the substring fag. The system has since been updated to allow players to trade it without nicknames. The same problem occurred with Nosepass, Probopass and Froslass due to their inclusion of the substring ass. Other examples: In 2013, file transfers named for the Swedish city of Falun caused web connection outages at Diakrit, a firm based in China. Diakrit resolved the issue by renaming the files. Fredrik Bergman of Diakrit believes that the file names triggered the Great Firewall's censors used to block discussion of Falun Gong, a banned religious movement founded in China. 
In November 2013, Facebook temporarily blocked British users for using the word faggot in reference to the traditional dish of the same name. In January 2014, files used in the online game League of Legends were reportedly blocked by some UK ISP filters due to the names 'VarusExpirationTimer.luaobj' and 'XerathMageChainsExtended.luaobj', which contain the substring sex. This was later corrected. In May 2018, the website of the grocery store Publix would not allow a cake to be ordered containing the Latin phrase summa cum laude. The customer attempted to rectify the problem by including special instructions but still ended up with a cake reading "Summa --- Laude". In May 2020, despite extensive media scrutiny, some hashtags directly referring to British political advisor Dominic Cummings were unable to trend on Twitter because the substring cum triggered an anti-porn filter. In October 2020, a paleontology conference's virtual meeting platform blocked various words including "bone", "pubic", and "stream". In January 2021, Facebook apologized for muting and banning users after it had erroneously flagged the Devon landmark Plymouth Hoe as misogynistic. In April 2021, the official Facebook page for the French Commune of Bitche was taken down. In response, commune officials created a new page referencing instead the postal code, Mairie 57230. Facebook later apologized and restored the original page. As a precaution, the officials of Rohrbach-lès-Bitche renamed their Facebook page Ville de Rohrbach.
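The substring failure and the whitelist workaround described above are easy to reproduce. The following Python sketch is illustrative only: the block list, the whitelist, and the test strings are invented for the example, and production filters use far more elaborate tokenization and context analysis.

```python
# Naive substring-based obscenity filter of the kind that causes the
# Scunthorpe problem, plus a whitelist of known false positives.
BLOCKED_SUBSTRINGS = {"cunt", "sex", "cum"}                 # toy block list
WHITELIST = {"scunthorpe", "sussex", "summa cum laude"}     # known false positives

def is_blocked(text: str) -> bool:
    lowered = text.lower()
    # Mask whitelisted phrases before scanning, so "Scunthorpe"
    # no longer triggers on its embedded "cunt".
    for phrase in WHITELIST:
        lowered = lowered.replace(phrase, "")
    return any(bad in lowered for bad in BLOCKED_SUBSTRINGS)

print(is_blocked("Welcome to Scunthorpe"))       # False: rescued by the whitelist
print(is_blocked("RomansInSussex"))              # False: rescued by the whitelist
print(is_blocked("graduated summa cum laude"))   # False: rescued by the whitelist
print(is_blocked("Essex tourism board"))         # True: a false positive the list misses
```

The last line shows why the whitelist is only a partial fix: every new false positive must be discovered and added by hand, which is why the problem is described above as challenging to solve completely.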
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bioconcentration** Bioconcentration: In aquatic toxicology, bioconcentration is the accumulation of a water-borne chemical substance in an organism exposed to the water. There are several ways in which to measure and assess bioaccumulation and bioconcentration. These include: octanol-water partition coefficients (KOW), bioconcentration factors (BCF), bioaccumulation factors (BAF) and the biota-sediment accumulation factor (BSAF). Each of these can be calculated using either empirical data or measurements, as well as from mathematical models. One of these mathematical models is a fugacity-based BCF model developed by Don Mackay. The bioconcentration factor can also be expressed as the ratio of the concentration of a chemical in an organism to the concentration of the chemical in the surrounding environment. The BCF is a measure of the extent of chemical sharing between an organism and the surrounding environment. In surface water, the BCF is the ratio of a chemical's concentration in an organism to the chemical's aqueous concentration. BCF is often expressed in units of liter per kilogram (ratio of mg of chemical per kg of organism to mg of chemical per liter of water). BCF can simply be an observed ratio, or it can be the prediction of a partitioning model. A partitioning model is based on the assumptions that chemicals partition between water and aquatic organisms and that chemical equilibrium exists between the organisms and the aquatic environment in which they are found. Calculation: Bioconcentration can be described by a bioconcentration factor (BCF), which is the ratio of the chemical concentration in an organism or biota to the concentration in water: BCF = CBiota / CWater. Bioconcentration factors can also be related to the octanol-water partition coefficient, KOW. The octanol-water partition coefficient (KOW) is correlated with the potential for a chemical to bioaccumulate in organisms; the BCF can be predicted from log KOW, via computer programs based on structure activity relationships (SAR) or through the linear equation: log BCF = m log KOW + b, where KOW = CO/CW (the ratio of the chemical's concentration in octanol to that in water) at equilibrium. Fugacity capacity Fugacity and BCF relate to each other in the following equation: ZFish = PFish × BCF / H, where ZFish is the fugacity capacity of the chemical in the fish, PFish is the density of the fish (mass/length3), BCF is the partition coefficient between the fish and the water (length3/mass) and H is the Henry's law constant (length2/time2). Regression equations for estimating BCF in fish have also been tabulated. Uses: Regulatory uses Through the use of the PBT Profiler and using criteria set forth by the United States Environmental Protection Agency under the Toxic Substances Control Act (TSCA), a substance is considered to be not bioaccumulative if it has a BCF less than 1000, bioaccumulative if it has a BCF from 1000 to 5000, and very bioaccumulative if it has a BCF greater than 5000. The thresholds under REACH are a BCF of > 2000 L/kg for the B (bioaccumulative) criterion and > 5000 L/kg for the vB (very bioaccumulative) criterion. Applications: A bioconcentration factor greater than 1 is indicative of a hydrophobic or lipophilic chemical. It is an indicator of how probable a chemical is to bioaccumulate. These chemicals have high lipid affinities and will concentrate in tissues with high lipid content instead of in an aqueous environment like the cytosol. Models are used to predict chemical partitioning in the environment, which in turn allows the prediction of the biological fate of lipophilic chemicals.
Applications: Equilibrium partitioning models Based on an assumed steady state scenario, the fate of a chemical in a system is modeled, giving predicted endpoint phases and concentrations. It needs to be considered that reaching steady state may take a substantial amount of time, as estimated using the following equation (in hours): t = 0.00654 × KOW + 55.31. For a substance with a log(KOW) of 4, it thus takes approximately five days to reach effective steady state. For a log(KOW) of 6, the equilibration time increases to nine months. Applications: Fugacity models Fugacity is another predictive criterion for equilibrium among phases that has units of pressure. It is equivalent to partial pressure for most environmental purposes. It is the escaping tendency of a material. BCF can be determined from the output parameters of a fugacity model and thus used to predict the fraction of chemical immediately interacting with, and possibly having an effect on, an organism. Applications: Food web models If organism-specific fugacity values are available, it is possible to create a food web model which takes trophic webs into consideration. This is especially pertinent for conservative chemicals that are not easily metabolized into degradation products. Biomagnification of conservative chemicals such as toxic metals can be harmful to apex predators like orca whales, osprey, and bald eagles. Applications to toxicology: Predictions Bioconcentration factors facilitate predicting contamination levels in an organism based on the chemical concentration in the surrounding water. BCF in this setting only applies to aquatic organisms; air-breathing organisms do not take up chemicals in the same manner. Fish, for example, take up chemicals via ingestion and via osmotic gradients at the gill lamellae. When working with benthic macroinvertebrates, both the water and the benthic sediments may contain chemicals that affect the organism. The biota-sediment accumulation factor (BSAF) and the biomagnification factor (BMF) also influence toxicity in aquatic environments. Applications to toxicology: BCF does not explicitly take metabolism into consideration, so it needs to be added to models at other points through uptake, elimination or degradation equations for a selected organism. Body burden Chemicals with high BCF values are more lipophilic, and at equilibrium organisms will have greater concentrations of the chemical than other phases in the system. Body burden is the total amount of chemical in the body of an organism, and body burdens will be greater when dealing with a lipophilic chemical. Biological factors: In determining the degree to which bioconcentration occurs, biological factors have to be kept in mind. The rate at which an organism is exposed through respiratory surfaces and contact with dermal surfaces competes against the rate of elimination from the organism. Elimination comprises loss of chemical from the respiratory surface, growth dilution, fecal excretion, and metabolic biotransformation. Growth dilution is not an actual excretion process: because the mass of the organism increases while the amount of contaminant remains constant, the concentration is diluted. Biological factors: The interaction between inputs and outputs is shown here: dCB/dt = (k1 × CWD) − (k2 + kE + kM + kG) × CB. The variables are defined as: CB is the concentration in the organism (g·kg−1); t is time (d); k1 is the rate constant for chemical uptake from water at the respiratory surface (L·kg−1·d−1).
CWD is the chemical concentration dissolved in water (g·L−1). Biological factors: k2, kE, kM, and kG are rate constants representing, respectively, elimination from the organism at the respiratory surface, fecal excretion, metabolic transformation, and growth dilution (d−1). Static variables influence BCF as well. Because organisms are modeled as bags of fat, the lipid-to-water ratio is a factor that needs to be considered. Size also plays a role, as the surface-to-volume ratio influences the rate of uptake from the surrounding water. The species of concern is a primary factor influencing BCF values, since it determines all of the biological factors that alter a BCF. Environmental parameters: Temperature Temperature may affect metabolic transformation and bioenergetics; for example, the movement of the organism may change, as may rates of excretion. If a contaminant is ionic, a change in pH driven by a change in temperature may also influence its bioavailability. Water quality The natural particle content as well as the organic carbon content of water can affect bioavailability. The contaminant can bind to particles in the water, making direct uptake more difficult; the particles can also be ingested by the organism, so that the source of contamination is more than just the water.
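The uptake and elimination balance above lends itself to a numerical check. The Python sketch below uses invented rate constants purely for illustration; it computes the steady-state body concentration obtained by setting dCB/dt = 0 in the equation above, and evaluates the equilibration-time estimate t = 0.00654 × KOW + 55.31 hours quoted earlier.

```python
# One-compartment model: dCB/dt = k1*CWD - (k2 + kE + kM + kG)*CB.
# Setting dCB/dt = 0 gives CB_ss = k1*CWD / (k2 + kE + kM + kG),
# so the predicted BCF is k1 / (k2 + kE + kM + kG).

def steady_state_cb(k1, cwd, k2, kE, kM, kG):
    return k1 * cwd / (k2 + kE + kM + kG)

def hours_to_steady_state(kow):
    # Empirical estimate quoted in the text (t in hours).
    return 0.00654 * kow + 55.31

# Hypothetical values: k1 in L/(kg*day), elimination constants in 1/day, CWD in g/L.
k1, cwd = 100.0, 0.002
k2, kE, kM, kG = 0.05, 0.01, 0.02, 0.02
print(steady_state_cb(k1, cwd, k2, kE, kM, kG))   # 2.0 g/kg, i.e. BCF = 1000 L/kg
print(hours_to_steady_state(10**4) / 24)          # ~5 days for log KOW = 4
print(hours_to_steady_state(10**6) / (24 * 30))   # ~9 months for log KOW = 6
```

The last two lines reproduce the five-day and nine-month equilibration figures given in the text for log(KOW) values of 4 and 6.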
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Paul Liu (geologist)** Paul Liu (geologist): Jingpu "Paul" Liu is a geologist and professor at North Carolina State University. Education: Ph.D. 2001 (Geological Oceanography), Virginia Institute of Marine Science (VIMS), College of William & Mary (W&M), Williamsburg. M.S. 1995 (Marine Geology), Institute of Oceanology, Chinese Academy of Sciences. B.E. 1992 (Hydrology and Engineering Geology + Marine Geology), Ocean University of China. Research: Fluxes and fates of river-derived sediments in the sea; riverine sediment dispersal, transport, and accumulation in different continental margin environments, in particular the deltaic and clinoform deposits from the large rivers in Asia, e.g. the Yellow, Yangtze, Pearl, Red, Mekong, Irrawaddy, and Salween; the impacts of post-glacial sea-level changes and coastal environmental changes; beach and delta erosion. Publications: Flux and fate of Yangtze River sediment delivered to the East China Sea. JP Liu, KH Xu, AC Li, JD Milliman, DM Velozzi, SB Xiao, ZS Yang. Geomorphology 85 (3-4), 208-224 (2007). (Cited 896 times, according to Google Scholar.) Holocene development of the Yellow River's subaqueous delta, North Yellow Sea. JP Liu, JD Milliman, S Gao, P Cheng. Marine Geology 209 (1-4), 45-67 (2004). (Cited 748 times, according to Google Scholar.) Stepwise decreases of the Huanghe (Yellow River) sediment load (1950–2005): Impacts of climate change and human activities. H Wang, Z Yang, Y Saito, JP Liu, X Sun, Y Wang. Global and Planetary Change 57 (3-4), 331-354 (2007). (Cited 693 times, according to Google Scholar.) Flux and fate of small mountainous rivers derived sediments into the Taiwan Strait. JP Liu, CS Liu, KH Xu, JD Milliman, JK Chiu, SJ Kao, SW Lin. Marine Geology 256 (1-4), 65-76 (2008). Dispersal of the Zhujiang River (Pearl River) derived sediment in the Holocene. Q Ge, JP Liu, Z Xue, F Chu. Acta Oceanologica Sinica 33 (8), 1-9 (2014). Stratigraphic Formation of the Mekong River Delta and Its Recent Shoreline Changes. JP Liu, DJ DeMaster, TT Nguyen, Y Saito, VL Nguyen, TKO Ta, X Li. Oceanography 30 (3), 72-83 (2017). A seismic study of the Mekong subaqueous delta: Proximal versus distal sediment accumulation. JP Liu, DJ DeMaster, CA Nittrouer, EF Eidam, TT Nguyen. Continental Shelf Research 147, 197-212 (2017). Sediment dispersal and accumulation off the Ayeyarwady delta – Tectonic and oceanographic controls. SA Kuehl, J Williams, JP Liu, C Harris, DW Aung, D Tarpley, M Goodwyn, ... Marine Geology 417, 106000 (2019).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RedBoot** RedBoot: RedBoot (an acronym for Red Hat Embedded Debug and Bootstrap firmware) is an open-source application that uses the eCos real-time operating system Hardware Abstraction Layer to provide bootstrap firmware for embedded systems. RedBoot: RedBoot allows download and execution of embedded applications via serial or Ethernet, including embedded Linux and eCos applications. It provides debug support in conjunction with GDB to allow development and debugging of embedded applications. It also provides an interactive command line interface to allow management of the Flash images, image download, RedBoot configuration, etc., accessible via serial or Ethernet. For unattended or automated startup, boot scripts can be stored in Flash allowing, for example, loading of images from Flash, hard disk, or a TFTP server.
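As an illustration of the scripted startup described above, a board might store a short boot script in its Flash configuration (entered via RedBoot's fconfig command). The sketch below is illustrative only: the image name and the kernel arguments passed with exec are hypothetical, and the available commands and flags vary between RedBoot ports.

fis load zImage
exec -c "console=ttyS0,115200 root=/dev/mtdblock2"

Here fis load copies a named image from the Flash Image System into RAM and exec transfers control to it; an equivalent script could instead fetch the image from a TFTP server with the load command.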
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Foramen tympanicum** Foramen tympanicum: The foramen tympanicum, also known as the foramen of Huschke, is an anatomical variation of the tympanic part of the temporal bone in humans resulting from a defect in normal ossification during the first five years of life. The structure has been found in 4.6% to as high as 23% of the population. Structure: If present, the foramen tympanicum is located at the anteroinferior portion of the external auditory canal, lying posteromedial to the temporomandibular joint. The structure connects the external auditory canal to the infratemporal fossa. Reduction in thickness of the temporal bone may also occur in the same location. During development of the skull, the foramen tympanicum normally closes by the age of 5 years. The foramen, however, may persist in rare cases, resulting in its presence in adults. The persistence of this foramen may be the result of abnormal mechanical forces during development of the face and/or ossification abnormalities attributed to genetic factors. Clinical relevance: Persistence of the foramen tympanicum may predispose the individual to the spread of infection or tumor from the external auditory canal to the infratemporal fossa or vice versa. It is associated with herniation of soft tissues from the temporomandibular joint into the external auditory meatus, and with formation of a fistula between the parotid gland and the external auditory canal. During arthroscopy of the temporomandibular joint, the endoscope may inadvertently pass into the joint via the foramen, with resulting damage.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hypnosis in works of fiction** Hypnosis in works of fiction: For over a century, hypnosis has been a popular theme in fiction – literature, film, and television. It features in movies almost from their inception and more recently has been depicted in television and online media. As Harvard hypnotherapist Deirdre Barrett points out in 'Hypnosis in Popular Media', the vast majority of these depictions are negative stereotypes, presenting hypnosis either as control for criminal profit and murder or as a method of seduction. Others depict hypnosis as all-powerful or even a path to supernatural powers. This article only lists stories in which hypnosis is featured as an important element. Written works: Edgar Allan Poe, "The Facts in the Case of M. Valdemar" (1845), about a mesmerist who puts a man in a suspended hypnotic state at the moment of death. Ambrose Bierce's story "The Realm of the Unreal" (1890) pivots on the idea of a very long hypnosis. The protagonist is supposed to be able to keep "a peculiarly susceptible subject in the realm of the unreal for weeks, months, and even years, dominated by whatever delusions and hallucinations the operator may from time to time suggest". Ambrose Bierce, "The Hypnotist" (1893), in which the narrator glibly relates his use of hypnosis in committing a variety of crimes. George du Maurier, Trilby (1894), in which a tone-deaf girl is hypnotized and turned into a singer. Bolesław Prus, Pharaoh (1895), in which a Chaldean is hypnotized in a circus act (chapter 33) and High Priest Mefres gives post-hypnotic suggestions to the Greek, Lykon, in chapters 63 and 66 and passim. Thomas Mann, Mario and the Magician (1930), relates the effect of a hypnotist on a mass audience. The story is said to be symbolic of the power of Fascism. Hergé, Cigars of the Pharaoh (1934), a Tintin comic. Richard Condon, The Manchurian Candidate (1959), in which an American soldier is put into a hypnotic trance to implement an assassination plot. There have been two film versions, in 1962 and 2004. Dean Koontz, False Memory (1999). Georgia Byng, Molly Moon's Incredible Book of Hypnotism (2002). Lucas Hyde, Hypnosis (2005). Written works: Donald K. Hartman, Death by Suggestion: An Anthology of 19th and Early 20th-Century Tales of Hypnotically Induced Murder, Suicide, and Accidental Death (2018), which gathers together twenty-two short stories from the 19th and early 20th century where hypnotism is used to cause death, either intentionally or by accident. Donald K. Hartman, The Hypno-Ripper: Or, Jack the Hypnotically Controlled Ripper; Containing Two Victorian Era Tales Dealing with Jack the Ripper and Hypnotism (2021). Allison Jones, "A Hypnotic Suggestion" (2009), has a forensic hypnotherapist as the protagonist. Written works: Madelaine Lawrence, "Why Kill A Parapsychologist?" (2011), a sequel to "A Hypnotic Suggestion"; Madelaine Lawrence is the real name of the author who writes as Allison Jones. More books are expected in this series about a forensic hypnotherapist. Lars Kepler (pseudonym), The Hypnotist (2011), in which a hypnotist attempts to recover lost memories from the witness to a murder. David Stuart Davies, The Instrument of Death (2019), in which Sherlock Holmes battles master hypnotist Dr. Caligari, who controls a homeless man to commit murders. Film: The Cabinet of Dr. Caligari (1920), a German silent horror film in which the main character is a master hypnotist. Dr. Mabuse, the Gambler (1922) Dracula (1931) Svengali (1931) Thirteen Women (1932) The Mummy (1932) Rasputin and the Empress (1932), in which Rasputin is a hypnotist. The Testament of Dr.
Mabuse (1933) Dracula's Daughter (1936) Cat People (1942) The Climax (1944) Spellbound (1945), starring Ingrid Bergman as a psychotherapist and Gregory Peck as her amnesiac patient. The Woman in Green (1945), an American Sherlock Holmes film starring Basil Rathbone as Holmes and Nigel Bruce as Dr. Watson, with Hillary Brooke as "the woman in green" and Henry Daniell as Holmes' arch-enemy, Professor Moriarty. Road to Rio (1947), starring Bing Crosby, Bob Hope, and Dorothy Lamour. Sleep, My Love (1948) The Pirate (1948), an MGM musical starring Gene Kelly and Judy Garland, in which Kelly's character hypnotizes Garland's character into a trance, freeing her spirit to reveal her fantasies and desires. Black Magic (1949), starring Orson Welles. The Three Faces of Eve (1957) Night of the Demon (1957), starring Dana Andrews. The Thousand Eyes of Dr. Mabuse (1960) The Manchurian Candidate (1962), based on Richard Condon's 1959 novel The Manchurian Candidate (see "Written works"). Freud: The Secret Passion (1962) The Ipcress File (1965), a British espionage film directed by Sidney J. Furie and starring Michael Caine. The screenplay was based on Len Deighton's 1962 novel, The IPCRESS File. In the story, Harry Palmer (the secret agent portrayed by Caine) is caught by the enemy and subjected to brainwashing through torture and hypnosis. Pharaoh (1966), a Polish feature-film adaptation of Bolesław Prus' novel Pharaoh (see "Written works"). Bram Stoker's Dracula (1973) The Exorcist (1973) Heart of Glass (1976), written, directed and produced by Werner Herzog, in which almost all the actors perform while under hypnosis. Film: Telefon (1977), an American espionage/crime drama starring Charles Bronson, Lee Remick and Donald Pleasence, in which brainwashed Russian sleeper agents in the USA during the Cold War must be stopped from mindlessly attacking government interests when they hear certain post-hypnotic trigger codes. The trigger commands that activate the agents centre on words from a Robert Frost poem, Stopping by Woods on a Snowy Evening. Film: In The Element of Crime (1984), directed by Lars von Trier, an English detective named Fisher, who has become an expatriate living in Cairo, undergoes hypnosis in order to recall his last case. The Naked Gun (1988) Dead Again (1991), a psychological thriller/neo-noir directed by Kenneth Branagh, in which the character Frank (Derek Jacobi) regresses his patients using hypnosis. Bram Stoker's Dracula (1992) Candyman film series (1992-2020): horror psychological thrillers in which the titular supernatural villain Candyman hypnotizes some of the victims who summon him while denying his existence. He possesses some of his victims through mind control and a trance-like psychotic state, framing them as they carry out his murders. Hocus Pocus (1993) The Shadow (1994) Dracula: Dead and Loving It (1995) Batman & Robin (1997) Cure (1997) In Good Will Hunting (1997), Will Hunting undergoes attempted hypnotherapy, poking fun at the process by producing ridiculous answers to the therapist's questions. Stir of Echoes (1999): after being hypnotized, Tom Witzky begins to see haunting visions of a girl's ghost, and a mystery begins to unfold around her. The Adventures of Rocky and Bullwinkle (2000) In Zoolander (2001), Derek Zoolander is brainwashed, to the song "Relax", into attempting to kill the Prime Minister of Malaysia. In Donnie Darko (2001), the titular character undergoes hypnosis in an attempt to locate the root of his mental difficulties.
Office Space (1999), in which the protagonist is hypnotized in order to relieve stress and burnout; his hypnotist has a heart attack and dies before he is brought out of the trance. The Curse of the Jade Scorpion (2001) by Woody Allen. In Shallow Hal (2001), Hal Larson is hypnotized by Tony Robbins into seeing people as their inner beauty instead of their external selves. In K-PAX (2001), a man claiming to be an extraterrestrial from a planet called "K-PAX", named Prot (pronounced like the word "goat", played by Kevin Spacey), is hypnotized by psychiatrist Mark Powell (Jeff Bridges). The Manchurian Candidate (2004), based on Richard Condon's 1959 novel The Manchurian Candidate (see "Written works"). Film: The Hypnotist (2012), Lasse Hallström's Swedish-language film adaptation of a Lars Kepler psychological crime novel, centering on a professional hypnotherapist. The film overtly suggests the potential benefits of hypnotherapy for criminal detective work, but also its dangers for damaged persons; its subtext hints at the potential of hypnosis for abuse and crime, including murder, a theme examined less explicitly, and within a less organised social context, than in The Manchurian Candidate. Film: Trance (2013), a Danny Boyle film in which Simon Newton (James McAvoy), an extremely suggestible individual, is hypnotized on repeated occasions by therapist Elizabeth Lamb (Rosario Dawson) in order to recover the memory of a stolen painting's location. Get Out (2017), in which the Armitage family transplants the brains of white people into black people's bodies; hypnosis is used to banish the conscious mind of the host into "the sunken place", where they are conscious but powerless. Television: In the British series New Tricks, series (season) 6, episode 5, "Magic Majestic", the team investigates a case where a woman who killed her husband may have been influenced by the script used to hypnotize her. As a demonstration of the power of suggestion, a friendly magician hypnotizes Gerry. After a cut to black, Gerry cannot remember what he did, and the others won't tell him. During the investigation, a suspect uses hypnosis to torment Brian, but the real villain is eventually caught—or is he? In the end, at the pub, Gerry inadvertently plays the music that triggers him. His colleagues cry "Noooooo!"—and the credits roll. Television: In the Canadian series Murdoch Mysteries, series (season) 3, episode 5, "Me, Myself and Murdoch", hypnosis plays an important role in a murder case where a woman suspected of killing her father manifests dissociative identity disorder. Television: In the TV series Pokémon, Hypnosis is a move which causes a sleep-induced trance, causing the target to fall asleep or allowing temporary mind control or even a hallucination. Two Pokémon, Drowzee and its evolved form Hypno, are known as the Hypnosis Pokémon. Drowzee's name is a reference to feeling drowsy and its ability to put someone to sleep. Hypno always carries a pendulum, although this is more of a reference to stage hypnosis. Television: In the CBS TV series The Mentalist, the main character Patrick Jane is a former TV psychic and uses hypnosis on several characters. An episode of the show dealt with several hypnotists, one of whom was a murderer, who use their abilities several times during the course of the episode. Television: The Showtime Network television show Penn & Teller: Bullshit!, which features comedy duo Penn & Teller, took a skeptical look at hypnosis in one of their episodes.
They took the view that the so-called hypnotic trance does not exist at all, and that all hypnosis sessions are merely voluntary shared fantasies. Penn and Teller also state that the unusual behaviors people exhibit during a hypnosis session have always been well within their reach. Television: The Paramount-syndicated television show The Montel Williams Show featured a presentation by Boris Cherniak in which hypnotized subjects reacted to a variety of comical situations, while at the same time showcasing the therapeutic effects of hypnosis, such as quitting smoking. Television: The popular British car show Top Gear featured one of its presenters, Richard Hammond, under the effects of hypnosis (courtesy of Paul McKenna). Once hypnotized, he underwent several mental changes: believing he was unable to drive a car (he was confused when presented with an Alfa Romeo to take around the test track), and thinking that a miniature child's pedal version of a Porsche 911 was his own and a properly functional car, even imitating its engine noise. While driving it around the studio floor, he threw a minor tantrum when Jeremy Clarkson purposely crashed into it, driving a similar pedal-operated Jeep Cherokee. Television: An episode of the television series MythBusters examines hypnosis, attempting to ascertain if post-hypnotic suggestion could influence the actions of a subject against their will and/or be used to improve memory. The conclusion was that hypnosis did not alter the subjects' behaviour, though the test was based on 'self-hypnosis' CDs published by unnamed authors of indiscernible quality or expertise. However, the show did find that hypnosis increased the ability to recall details of a staged incident during the investigation. Television: In an episode of Doug, Dr. Klotzenstein hypnotizes children into eating junk food, and Quailman must save the day. In the animated TV series Futurama, a recurring character is the Hypnotoad. He is first seen in the episode The Day the Earth Stood Stupid having hypnotized the judges of a dog show, enabling him to win. In a later episode, he is shown to have his own popular television show, Everybody Loves Hypnotoad. Television: In the BBC science fiction series Doctor Who, the recurring Time Lord villain the Master will sometimes use hypnosis to bring subjects under his control. This is usually achieved by him staring the victim in the eyes and saying, "I am the Master and you will obey me!". In the 1985 story "The Mark of the Rani", the Master uses a pendulum to hypnotize a victim. The Doctor also displays proficiency in the use of hypnosis, requiring only a second glance into a person's eyes or a mind-meld-like technique to put someone under his spell. Another instance is when the Silence use post-hypnotic suggestions to control the actions of the human race and coax them to launch Apollo 11, all the while hiding their own existence through those same suggestions. Television: In the anime Nodame Cantabile, a character nicknamed Nodame uses hypnosis to uncover the traumatic events Chiaki experienced on a plane when he was young, and to help him overcome his fear of flying. This allows Chiaki to chase his dream of becoming a conductor in Europe. In the series The X-Files, one of the main characters, Fox Mulder, is able to access repressed memories of his sister's abduction by aliens through regression hypnosis. In the Columbo episode "A Deadly State of Mind", a psychiatrist murders his lover by hypnotizing her into believing she can dive safely into a pool from a high balcony.
In the anime Bleach, the main antagonist Sosuke Aizen has a sword which allows him to place anyone who looks at it into a state of perfect hypnosis, which he uses to manipulate others. Derren Brown claims to use suggestion as part of his performances in Mind Control with Derren Brown. He has, however, stated that 'hypnotic techniques' are the result of suggestion and that in reality there is no such thing as a hypnotic trance. In Ninjago, a snake tribe called the Hypnobrai have the power of hypnosis. Magician/mentalist The Amazing Kreskin disputes the validity of hypnosis, and once offered $100,000 to anyone who could prove to his satisfaction that such a thing as a "hypnotic trance" exists. Television: In the television series Monk, the season 7 episode "Mr. Monk Gets Hypnotized" has a principal plot involving hypnosis. While investigating the initial disappearance of actress Sally Larkin (Dina Meyer), Adrian Monk is miserable, having not been able to enjoy a double rainbow, and his mood isn't helped when an uncharacteristically upbeat Harold Krenshaw shows up on the scene, saying that he has been cured of his OCD through hypnosis with a new therapist named Dr. Climan (played by Richard Schiff). Monk, inspired by Harold, tries an appointment with Dr. Climan, and it leaves him in a childlike mental state for most of the episode (a state that Dr. Bell describes as Monk living the childhood he always wanted), although it proves helpful in breaking some leads in the Sally Larkin kidnapping case. For Harold Krenshaw, however, the hypnosis backfires when it strengthens his feelings of euphoria, causing him to take off all of his clothes in public and get arrested for indecent exposure. Television: In a season 3 episode of Gilligan's Island, Mary Ann hits her head and believes she is Ginger. When Mary Ann goes into traumatic shock from seeing Ginger, the Professor attempts to cure her delusion by hypnotizing her. During the hypnosis session, Gilligan secretly watches and falls under hypnosis along with Mary Ann. The hypnosis doesn't succeed in returning Mary Ann to her original identity, but it causes Gilligan to believe he's Mary Ann. The Professor later uses hypnosis again, this time on Gilligan, to return him to his original identity. Television: During the season 10 premiere of America's Got Talent, contestant Chris Jones performed a hypnotism act with judge Howie Mandel as the volunteer (Mandel is known to suffer from mysophobia to the point where he never touches anyone's hands unless they are wearing latex gloves), in which Mandel was hypnotized and persuaded to shake hands with the contestant and judges Howard Stern, Mel B, and Heidi Klum while no one was wearing gloves. Jones advanced to the Judge Cuts stage of the competition, but failed to impress the judges during Judge Cuts Week 2 and was eliminated. Online media: The fictional crime-fighter the Red Panda, featured on Decoder Ring Theatre, uses a highly fictionalized form of hypnotic power.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Second-generation antidepressant** Second-generation antidepressant: The second-generation antidepressants are a class of antidepressants characterized primarily by the era of their introduction, approximately coinciding with the 1970s and 1980s, rather than by their chemical structure or by their pharmacological effect. As a consequence, there is some controversy over which treatments actually belong in this class. The term "third-generation antidepressant" is sometimes used to refer to newer antidepressants, from the 1990s and 2000s, often selective serotonin reuptake inhibitors (SSRIs) such as fluoxetine (Prozac), paroxetine (Paxil) and sertraline (Zoloft), as well as some non-SSRI antidepressants such as mirtazapine, nefazodone, venlafaxine, duloxetine and reboxetine. However, this usage is not universal. Examples: This list is not exhaustive, and different sources vary on which items should be considered second-generation. Amineptine, Amoxapine, Bupropion, Iprindole, Maprotiline, Medifoxamine, Mianserin, Nomifensine, Tianeptine, Trazodone, Venlafaxine, Viloxazine
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Razor wire** Razor wire: Barbed tape or razor wire is a mesh of metal strips with sharp edges whose purpose is to prevent trespassing by humans. The term "razor wire", through long usage, has generally been used to describe barbed tape products. Razor wire is much sharper than standard barbed wire; it is named after its appearance but is not razor sharp. The points are very sharp and made to rip and snag clothing and flesh. Razor wire: The multiple blades of a razor-wire fence are designed to inflict serious cuts on anyone attempting to climb through or over it, and the fence therefore also has a strong psychological deterrent effect. Razor wire is used in many security applications because, although it can be circumvented relatively quickly by humans with tools, penetrating a razor-wire barrier without tools is very slow and typically injurious, often thwarting such attempts or giving security forces more time to respond. Use: The first use of barbed wire for warfare was in 1898, during the Spanish–American War, just thirty-one years after the first barbed wire patents of 1867. One of the most notable examples during the Spanish–American War is the defense provided by the Morón–Júcaro trocha. The trocha (or trench) stretched for fifty miles between the cities of Morón and Júcaro. Within this trench, in addition to fallen trees, barbed wire was used. The barbed wire was arranged in a cat's-cradle formation such that for every 12 yards of barbed fence built, 420 yards of barbed wire was strung (or 35 yards of wire per yard of fence). Later versions of this type of barbed wire were manufactured by Germany during the First World War. The reason for this was a wartime shortage of wire to make conventional barbed wire. Therefore, flat wire with triangular cutting edges began to be punched out of steel strips ("band barbed wire"). A welcome side effect was that a comparable length of barbed wire of this new type could be produced in less time. These precursors to NATO wire did not yet have an inner wire for stabilization, were therefore easy to cut with tin snips, and were also not as robust as normal barbed wire. However, they withstood the wire cutters used at the time to cut normal barbed wire, as was common at the front. An article in a 1918 issue of The Hardware Trade Journal tells the story under the headline "This Cruel War's Abuse of Our Old Friend 'Bob Wire.'" After telling a little about Glidden and his invention, the article goes on as follows: "Quite naturally some animals enclosed by Glidden's fencing gashed themselves on the barbs. Just as naturally, men and boys tried to climb over or under those fences and had their clothes and flesh torn...These wounds upon man and beast and the suddenness with which Glidden's barbs halted all living things came to the attention of military men, and the barbed wire entanglement of which we now read almost every day in the war news was born...And it may be said right here that soldiers who have been halted by wire entanglements while making a charge say the devil never invented anything nastier." Starting in the late 1960s, barbed tape was typically found in prisons and secure mental hospitals, where the increased breaching time for a poorly equipped potential escapee was a definite advantage.
Until the development of reinforced barbed tape in the early 1980s, it was rarely used for military purposes or genuine high-security facilities because, with the correct tools, it was easier to breach than ordinary barbed wire. Since then (and especially after the September 11 attacks), some military forces have replaced barbed wire with barbed tape for many applications, mainly because it is slightly lighter for the same effective coverage, and it takes up very little space compared to barbed wire or reinforced barbed tape when stored on drums. Use: More recently, barbed tape has been used in more commercial and residential security applications. This is often primarily a visual deterrent, since a well-prepared burglar can breach barbed wire and barbed tape barriers in similar amounts of time, using simple techniques such as cutting the wire or throwing a piece of carpet over its strands. Due to its dangerous nature, razor wire/barbed tape and similar fencing/barrier materials are prohibited in some locales. Norway prohibits any barbed wire except in combination with other fencing, in order to protect domesticated animals from exposure. Construction: Razor wire has a central strand of high-tensile-strength wire and a steel tape punched into a shape with barbs. The steel tape is then cold-crimped tightly to the wire everywhere except at the barbs. Flat barbed tape is very similar, but has no central reinforcement wire. The process of combining the two is called roll forming. Construction: Types Like barbed wire, razor wire is available as straight wire, spiral (helical) coils, concertina (clipped) coils, flat wrapped panels or welded mesh panels. Unlike barbed wire, which usually is available only as plain steel or galvanized, barbed tape razor wire is also manufactured in stainless steel to reduce corrosion. The core wire can be galvanized and the tape stainless, although fully stainless barbed tape is used for permanent installations in harsh climatic environments or under water. Construction: Barbed tape is also characterized by the shape of the barbs. Although there are no formal definitions, typically short-barb barbed tape has barbs of 10–12 millimetres (0.4–0.5 in), medium-barb tape has barbs of 20–22 millimetres (0.8–0.9 in), and long-barb tape has barbs of 60–66 millimetres (2.4–2.6 in). According to the structure: Helical type: Helical razor wire is the simplest pattern. There are no concertina attachments, and each spiral loop is left free, showing a natural spiral. Concertina type: The most widely used type in security and defense applications. The adjacent loops of helical coils are attached by clips at specified points on the circumference, giving an accordion-like configuration. Blade type: The razor wire is produced in straight lengths and cut to a certain length to be welded onto a galvanized or powder-coated frame. It can be used individually as a security barrier. Flat type: A popular razor wire type with a flat and smooth configuration (like Olympic rings). Depending on the technology, it can be the clipped or the welded type. Welded type: The razor wire tape is welded into panels, then the panels are connected by clips or tie wires to form a continuous razor wire fence. Flattened type: A transformation of single-coil concertina razor wire. The concertina wire is flattened to form the flat-type razor wire.
According to the coil type: Single coil: A commonly seen and widely used type, available in both helical and concertina forms. Double coil: A more complex razor wire type providing a higher security grade; a smaller-diameter coil is placed inside the larger-diameter coil. It is also available in both helical and concertina forms. Common specifications of razor wire
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CHIME syndrome** CHIME syndrome: CHIME syndrome, also known as Zunich–Kaye syndrome or Zunich neuroectodermal syndrome, is a rare congenital ichthyosis first described in 1983. The acronym CHIME is based on its main symptoms: colobomas, heart defects, ichthyosiform dermatosis, intellectual disability, and either ear defects or epilepsy. It is a congenital syndrome with only a few cases studied and published. Symptoms and signs: Associated symptoms include colobomas of the eyes, heart defects, ichthyosiform dermatosis, intellectual disability, and ear abnormalities. Additional features that have been reported include characteristic facies, hearing loss, and cleft palate. Genetics: CHIME syndrome is considered to have an autosomal recessive inheritance pattern. This means the defective gene is located on an autosome, and two copies of the gene, one from each parent, are required to inherit the disorder. The parents of an individual with an autosomal recessive disorder both carry one copy of the defective gene, but usually do not have the disorder. Treatment: Treatment with isotretinoin may induce substantial resolution of skin lesions, but the risk of secondary infection remains.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lanthanum hafnate** Lanthanum hafnate: Lanthanum hafnate (La2Hf2O7) or lanthanum hafnium oxide is a mixed oxide of lanthanum and hafnium. Properties: Lanthanum hafnate is a colorless ceramic material with the La and Hf atoms arranged in a cubic lattice. The arrangement is a disordered fluorite-like structure below 1,000 °C (1,270 K; 1,830 °F), above which it transitions to a pyrochlore phase; an amorphous phase also exists below 800 °C (1,070 K; 1,470 °F). The compound decomposes into its constituent oxides at 18 GPa. Properties: Luminescence Oxygen vacancies in the base material give luminescence spanning the visible light spectrum, with a peak near 460 nm. The luminescent properties can be fine-tuned by doping with various rare earth and group 4 metals; for example, La2Hf2O7:Eu3+ nanoparticles exhibit a red photoluminescence or radioluminescence near 612 nm when exposed to ultraviolet or X-ray radiation. Synthesis: Bulk ceramics can be obtained by combusting the elements in powder form, and then pressing and sintering the powder at 180 MPa and 1,850 °C (2,120 K; 3,360 °F) for 6 hours: 4 La + 4 Hf + 7 O2 → 2 La2Hf2O7. It may also be made by precipitating hafnium and lanthanum hydroxides from solution and then calcining in air at 600–1,400 °C (873–1,673 K; 1,112–2,552 °F) for 3 hours: 2 La(OH)3 + 2 Hf(OH)4 → La2Hf2O7 + 7 H2O.
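A worked stoichiometry example may make the hydroxide route concrete. The following Python sketch computes the reagent masses implied by the balanced equation above; the 10 g batch size and the rounded atomic masses are illustrative assumptions, not values from the source.

# Illustrative stoichiometry for 2 La(OH)3 + 2 Hf(OH)4 -> La2Hf2O7 + 7 H2O
ATOMIC_MASS = {"La": 138.91, "Hf": 178.49, "O": 15.999, "H": 1.008}  # g/mol

def molar_mass(formula):
    """Sum atomic masses over an element -> count mapping."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

m_product = molar_mass({"La": 2, "Hf": 2, "O": 7})   # La2Hf2O7, about 746.8 g/mol
m_la_oh3 = molar_mass({"La": 1, "O": 3, "H": 3})     # La(OH)3
m_hf_oh4 = molar_mass({"Hf": 1, "O": 4, "H": 4})     # Hf(OH)4

target_g = 10.0                 # hypothetical batch size
mol = target_g / m_product      # moles of La2Hf2O7 wanted
print(f"La(OH)3 needed: {2 * mol * m_la_oh3:.2f} g")  # 2 mol per mol of product
print(f"Hf(OH)4 needed: {2 * mol * m_hf_oh4:.2f} g")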
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fizz-nik** Fizz-nik: Fizz-Nik was a product marketed by the United States beverage company 7 Up. It was used in much the same way as a drinking straw, and was primarily developed to allow creation of an "instant ice cream float" (also known as an ice cream soda). Origin: The Fizz-Nik was modeled after the "Astro-Float" product, which had been released shortly beforehand by the Coca-Cola Company. Reflecting this trend, 7 Up named its similar product after the Russian "Sputnik". Description: The Fizz-Nik resembled a round bubble. It was composed of two half-spheres that snapped together, with a nozzle on each side. Usage: The Fizz-Nik was designed to create an instant ice cream float, or to instantly chill soda as it passed through the bubble, which would be placed into the opening of a glass soda bottle. The opposite end of the bubble was used for drinking. The Fizz-Nik was filled with either ice cream or ice, depending on whether one wanted to make an ice cream float or chill the soda. Usage: The Fizz-Nik was a sponsor's product on The Soupy Sales Show in the early 1960s. Soupy Sales would do a live demonstration of the product using ice cream that had melted under the studio lights, then assemble the device incorrectly, causing the concoction to spill all over himself, the demonstration table and the studio floor.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Grain size** Grain size: Grain size (or particle size) is the diameter of individual grains of sediment, or the lithified particles in clastic rocks. The term may also be applied to other granular materials. This is different from the crystallite size, which refers to the size of a single crystal inside a particle or grain. A single grain can be composed of several crystals. Granular material can range from very small colloidal particles, through clay, silt, sand, gravel, and cobbles, to boulders. Krumbein phi scale: Size ranges define limits of classes that are given names in the Wentworth scale (or Udden–Wentworth scale) used in the United States. The Krumbein phi (φ) scale, a modification of the Wentworth scale created by W. C. Krumbein in 1934, is a logarithmic scale computed by the equation φ = −log₂(D/D₀), where φ is the Krumbein phi value, D is the diameter of the particle or grain in millimeters (Krumbein and Monk's equation) and D₀ is a reference diameter, equal to 1 mm (to make the equation dimensionally consistent). This equation can be rearranged to find diameter using φ: D = D₀ · 2^(−φ). In some schemes, gravel is anything larger than sand (comprising granule, pebble, cobble, and boulder in the table above). International scale: ISO 14688-1:2017 establishes the basic principles for the identification and classification of soils on the basis of those material and mass characteristics most commonly used for soils for engineering purposes. ISO 14688-1 is applicable to natural soils in situ, similar man-made materials in situ and soils redeposited by people. Sorting: An accumulation of sediment can also be characterized by the grain size distribution. A sediment deposit can undergo sorting when a particle size range is removed by an agency such as a river or the wind. The sorting can be quantified using the Inclusive Graphic Standard Deviation: σI = (φ84 − φ16)/4 + (φ95 − φ5)/6.6, where σI is the Inclusive Graphic Standard Deviation in phi units and φ84 is the 84th percentile of the grain size distribution in phi units, etc. The result of this can be described using the following terms:
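Because the conversions above are simple, a short Python sketch can illustrate them. The sample diameters and percentile values below are illustrative assumptions, not data from the source.

import math

def phi(d_mm, d0_mm=1.0):
    """Krumbein phi scale: phi = -log2(D / D0), with D0 = 1 mm by default."""
    return -math.log2(d_mm / d0_mm)

def diameter(phi_value, d0_mm=1.0):
    """Inverse relation: D = D0 * 2**(-phi)."""
    return d0_mm * 2 ** (-phi_value)

def inclusive_graphic_std(phi5, phi16, phi84, phi95):
    """Inclusive Graphic Standard Deviation (sorting), in phi units."""
    return (phi84 - phi16) / 4 + (phi95 - phi5) / 6.6

print(phi(0.25))      # 0.25 mm (medium sand) -> 2.0 phi
print(diameter(-1))   # -1 phi -> 2 mm, the sand/granule boundary
print(inclusive_graphic_std(0.5, 1.0, 2.4, 3.1))  # hypothetical percentiles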
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**2024 aluminium alloy** 2024 aluminium alloy: 2024 aluminium alloy is an aluminium alloy, with copper as the primary alloying element. It is used in applications requiring a high strength-to-weight ratio, as well as good fatigue resistance. It is weldable only through friction welding, and has average machinability. Due to poor corrosion resistance, it is often clad with aluminium or Al-1Zn for protection, although this may reduce the fatigue strength. In older systems of terminology, 2XXX series alloys were known as duralumin, and this alloy was named 24ST. 2024 aluminium alloy: 2024 is commonly extruded, and also available in alclad sheet and plate forms. It is not commonly forged (the related 2014 aluminium alloy is, though). Basic properties: Aluminium alloy 2024 has a density of 2.78 g/cm3 (0.1 lb/in3), electrical conductivity of 30% IACS, Young's modulus of 73 GPa (10.6 Msi) across all tempers, and begins to melt at 500 °C (932 °F). 2024 aluminium alloy's composition roughly includes 4.3–4.5% copper, 0.5–0.6% manganese, 1.3–1.5% magnesium and less than half a percent of silicon, zinc, nickel, chromium, lead and bismuth. Chemical composition: The alloy composition limits of 2024 are: Silicon no minimum, maximum 0.5% by weight; Iron no minimum, maximum 0.5%; Copper minimum 3.8%, maximum 4.9%; Manganese minimum 0.3%, maximum 0.9%; Magnesium minimum 1.2%, maximum 1.8%; Chromium no minimum, maximum 0.1%; Zinc no minimum, maximum 0.25%; Titanium no minimum, maximum 0.15%; Other elements no more than 0.05% each, 0.15% total; Remainder aluminium (90.7–94.7%). Mechanical properties: The mechanical properties of 2024 depend greatly on the temper of the material. 2024-O: 2024-O temper aluminium has no heat treatment. It has an ultimate tensile strength of 140–210 MPa (21–30 ksi) and a yield strength of no more than 97 MPa (14 ksi). The material has an elongation (stretch before ultimate failure) of 10–25%, the allowable range per applicable AMS specifications. 2024-T3: T3 temper 2024 sheet has an ultimate tensile strength of 400–430 MPa (58–62 ksi) and a yield strength of at least 270–280 MPa (39–40 ksi). It has an elongation of 10–15%. 2024-T4: Solution treated at the foundry and naturally aged. 2024-T5: Cooled from hot working and artificially aged (at elevated temperature). 2024-T351: T351 temper 2024 plate has an ultimate tensile strength of 470 MPa (68 ksi) and a yield strength of 280 MPa (41 ksi). It has an elongation of 20%. Uses: Due to its high strength and fatigue resistance, 2024 is widely used in aircraft, especially in wing and fuselage structures under tension. Additionally, since the material is susceptible to thermal shock, 2024 is used in the qualification of liquid penetrant tests outside of normal temperature ranges.
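The composition limits above are effectively a small specification table, so a checker is a natural illustration. The Python helper below and its sample analysis are hypothetical sketches for this article, not part of any alloy standard.

# Hypothetical checker against the 2024 composition limits quoted above (wt%).
SPEC_2024 = {
    "Si": (0.0, 0.5), "Fe": (0.0, 0.5), "Cu": (3.8, 4.9), "Mn": (0.3, 0.9),
    "Mg": (1.2, 1.8), "Cr": (0.0, 0.1), "Zn": (0.0, 0.25), "Ti": (0.0, 0.15),
}

def out_of_spec(sample):
    """Return the (element, value, limits) entries that violate the limits."""
    return [(el, val, SPEC_2024[el])
            for el, val in sample.items()
            if el in SPEC_2024 and not SPEC_2024[el][0] <= val <= SPEC_2024[el][1]]

analysis = {"Si": 0.4, "Fe": 0.3, "Cu": 4.4, "Mn": 0.6, "Mg": 1.5, "Zn": 0.1}
print(out_of_spec(analysis) or "within the quoted limits")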
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Action potential pulse** Action potential pulse: An action potential pulse is a mathematically and experimentally derived synchronized oscillating lipid pulse coupled with an action potential. It is a continuation of Hodgkin and Huxley's work of 1952, with the inclusion of accurate modelling of ion channel proteins, including their dynamics and speed of activation. Action potential pulse: The action potential pulse is a model of the speed of an action potential that is dynamically dependent upon the position and number of ion channels and the shape and makeup of the axon. The model takes into account entropy and the conduction speed of the action potential along an axon. It is an addition to the Hodgkin–Huxley model. Action potential pulse: Investigations into the membranes of axons have shown that the spaces between the channels are sufficiently large that cable theory cannot apply to them, because cable theory depends upon the capacitance potential of a membrane being transferred almost instantly to other areas of the membrane surface. In electrical circuits this can happen because of the special properties of electrons, which are negatively charged, whereas in membrane biophysics potential is defined by positively charged ions instead. These ions are usually Na+ or Ca2+, which move slowly by diffusion and have limited ionic radii within which they can affect adjacent ion channels. It is mathematically impossible for these positive ions to move from one channel to the next, in the time required by the action potential flow model, due to instigated depolarization. Furthermore, entropy measurements have long demonstrated that an action potential's flow starts with a large increase in entropy followed by a steadily decreasing state, which does not match the Hodgkin–Huxley theory. In addition, a soliton pulse is known to flow at the same rate as, and follow, the action potential. From measurements of the speed of an action potential, hyperpolarization must have a further component, of which the 'soliton' mechanical pulse is the only candidate. The resulting action potential pulse therefore is a synchronized, coupled pulse, with the entropy from depolarization at one channel providing sufficient entropy for a pulse to travel to sequential channels and mechanically open them. Action potential pulse: This mechanism explains the speed of transmission through both myelinated and unmyelinated axons. This is a timed pulse that combines the entropy from ion transport with the efficiency of a flowing pulse. The action potential pulse model has many advantages over the simpler Hodgkin–Huxley version, including evidence, efficiency, timing, entropy measurements, and the explanation of nerve impulse flow through myelinated axons. Myelinated axons: This model replaces saltatory conduction, a historical theory that relied upon cable theory to explain conduction and that has no basis in either physiology or membrane biophysics. Action potential pulse: In myelinated axons the myelin acts as a mechanical transducer, preserving the entropy of the pulse and insulating against mechanical loss. In this model the nodes of Ranvier (where ion channels are highly concentrated) provide maximum entropy to instigate a pulse that travels from node to node along the axon, with the entropy being preserved by the shape and dynamics of the myelin sheath.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Meat cutter** Meat cutter: A meat cutter prepares primal cuts into a variety of smaller cuts intended for sale in a retail environment. The duties of a meat cutter largely overlap those of the butcher, but butchers tend to specialize in pre-sale processing (i.e., reducing carcasses to primal cuts), whereas meat cutters further cut and process the primal cuts per individual customer request. In the U.S., the job title of "butcher" has been mostly replaced in corporate storefronts in the last two decades, after customer trends showed that modern, particularly urban, customers increasingly associated the term with animal slaughter and unsanitary conditions (regardless of the condition of the store). With the advent of off-premises, pre-packaged supermarket meat, many supermarkets now avoid mention of either cutting or butchering and simply call their meat cutters "Meat Department Associates" or similar. In the U.K., the term butcher is still used to describe a person who offers for retail sale meat ready for cooking by the customer. They will also prepare cuts, joints, etc., for the customer. Most U.K. corporate retailers still use the term butcher for their meat department operatives. Overview: A meat cutter is responsible for preparing standard cuts of meat (including poultry and fish) to be sold at either a self-serve or specialty counter. In the UK the term used for a retail meat cutter is still butcher. Retail meat cutters are found in a customer-oriented, retail environment. This can be anything from a small family-owned meat shop to a large international supermarket chain. Meat cutting is a registered trade. Industrial meat cutters are found in production-oriented facilities, and generally perform fewer tasks, but repeatedly. Overview: Meat cutters typically deal with "primal" cuts - segments of the carcass broken down into smaller (but still unfinished) pieces to make them easier to handle. Working conditions: Retail meat cutters traditionally work indoors, in large, refrigerated rooms, with temperatures ranging between 2 and −4 degrees Celsius. These environments are kept sanitary, and are washed every day with powerful antibacterial cleaners. In larger retail outlets or plant facilities, working environments are generally equipped with power tools such as band saws and circular slicers. Meat cutters are also generally required to be in good physical shape; the duties of a meat cutter include standing for long periods of time, regularly lifting over 50 lbs, and working in cold conditions. Retail meat cutters also often have to deal with customers. Duties: The duties of a retail meat cutter often include the trimming of primal cuts, making ground meat out of trimmings from the primal cuts, ensuring meat cuts are displayed in an eye-catching manner and are of sufficient quality, and serving customized orders to customers. Retail meat cutters are also responsible for keeping their working areas clean and ensuring that proper sanitization procedures are followed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**EEM syndrome** EEM syndrome: EEM syndrome (or Ectodermal dysplasia, Ectrodactyly and Macular dystrophy syndrome) is an autosomal recessive congenital malformation disorder affecting tissues associated with the ectoderm (skin, hair, nails, teeth), and also the hands, feet and eyes. Presentation: EEM syndrome exhibits a combination of prominent symptoms and features. These include: ectodermal dysplasia (systemic malformations of ectodermal tissues), ectrodactyly ("lobster claw" deformity in the hands and feet), macular dystrophy (a progressive eye disease), syndactyly (webbed fingers or toes), hypotrichosis (a type of hair-loss), and dental abnormalities (hypodontia). Pathophysiology: EEM syndrome is caused by mutations in the P-cadherin gene (CDH3). Distinct mutations in CDH3 (located on human chromosome 16) are responsible for the macular dystrophy and spectrum of malformations found in EEM syndrome, due in part to developmental errors caused by the resulting inability of CDH3 to respond correctly to the P-cadherin transcription factor p63.The gene for p63 (TP73L, found on human chromosome 3) may also play a role in EEM syndrome. Mutations in this gene are associated with the symptoms of EEM and similar disorders, particularly ectrodactyly.EEM syndrome is an autosomal recessive disorder, which means the defective gene is located on an autosome, and two copies of the defective gene - one from each parent - are required to inherit the disorder. The parents of an individual with an autosomal recessive disorder both carry one copy of the defective gene, but usually do not experience any signs or symptoms of the disorder.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**InFORM Decisions** InFORM Decisions: Since 1994, inFORM Decisions has been a developer and distributor of electronic document automation and payment automation software for IBM i (System i, AS/400, iSeries) and IBM Power Systems computing environments. The company's iDocs software is designed to burst, sort, format and distribute reports and to provide simplified web access to electronic documents, and it works with any IBM i-based ERP/accounting software with no additional coding. inFORM Decisions was one of the first IBM Business Partners to implement a comprehensive eDocument distribution system powered by intelligent routing capabilities for fax, email, archive-retrieval, and laser forms. History: inFORM Decisions was founded in 1994 by Dan Forster. Its focus then was to provide simple IBM AS/400-based (now known as IBM i) utilities to address the growing need to efficiently manage printed documents from applications. In the ensuing years, this expanded into the development of electronic document output applications, re-mapped forms and reports, MICR check technologies, faxing applications, bar code applications, document archive/retrieval, signature pad applications, and more, all of which have grown rapidly in recent years. inFORM's worldwide headquarters are located in the Orange County area of Southern California, in the city of Rancho Santa Margarita. Platforms: IBM i, System i, AS/400, iSeries, IBM Power Systems, Windows Server
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Erlichmanite** Erlichmanite: Erlichmanite is the naturally occurring mineral form of osmium sulfide (OsS2). It is grey with a metallic luster, has a hardness of around 5, and has a specific gravity of about 9. It is found in noble metal placer deposits. It is named for Jozef Erlichman, an electron microprobe analyst at the NASA Ames Research Center.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lateral palpebral raphe** Lateral palpebral raphe: The lateral palpebral raphe is a ligamentous band near the eye. Its existence is contentious, and many sources describe it as the continuation of nearby muscles. It is formed from the lateral ends of the orbicularis oculi muscle. It connects the orbicularis oculi muscle, the frontosphenoidal process of the zygomatic bone, and the tarsi of the eyelids. Structure: The lateral palpebral raphe is formed from the lateral ends of the orbicularis oculi muscle. It may also be formed from the pretarsal muscles of the eyelids. It is attached to the margin of the frontosphenoidal process of the zygomatic bone. It passes towards the midline to the lateral commissure of the eyelids. Here, it divides into two slips, which are attached to the margins of the respective tarsi of the eyelids. Structure: The lateral palpebral ligament has a tensile strength of around 12 newtons. Relations: The lateral palpebral raphe is a much weaker structure than the medial palpebral ligament on the other side of the eyelids. Variation: The lateral palpebral raphe may be absent in some people. If it is not present, it is replaced with muscular fibres of the orbicularis oculi muscle. It is often very hard to identify as a distinct anatomical feature. Some sources claim that it does not exist.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GrADS** GrADS: The Grid Analysis and Display System (GrADS) is an interactive desktop tool that is used for easy access, manipulation, and visualization of earth science data. The format of the data may be either binary, GRIB, NetCDF, or HDF-SDS (Scientific Data Sets). GrADS has been implemented worldwide on a variety of commonly used operating systems and is freely distributed over the Internet. GrADS: GrADS uses a 4-Dimensional data environment: longitude, latitude, vertical level, and time. Data sets are placed within the 4-D space by use of a data descriptor file. GrADS interprets station data as well as gridded data, and the grids may be regular, non-linearly spaced, Gaussian, or of variable resolution. Data from different data sets may be graphically overlaid, with correct spatial and time registration. It uses the ctl mechanism to join differing time group data sets. Operations are executed interactively by entering FORTRAN-like expressions at the command line. A rich set of built-in functions are provided, but users may also add their own functions as external routines written in any programming language. GrADS: Data may be displayed using a variety of graphical techniques: line and bar graphs, scatter plots, smoothed contours, shaded contours, streamlines, wind vectors, grid boxes, shaded grid boxes, and station model plots. Graphics may be output in PostScript or image formats. GrADS provides geophysically intuitive defaults, but the user has the option to control all aspects of graphics output. GrADS: GrADS has a programmable interface (scripting language) that allows for sophisticated analysis and display applications. Scripts can display buttons and drop menus as well as graphics, and then take action based on user point-and-clicks. GrADS can be run in batch mode, and the scripting language facilitates using GrADS to do long overnight batch jobs. As of version 2.2.0, graphics display and printing are now handled as independent plug-ins. A C-language Python extension for GrADS called GradsPy was introduced in version 2.2.1.
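As an illustration of the data descriptor mechanism mentioned above, here is a minimal sketch of a GrADS descriptor (.ctl) file for a hypothetical flat binary dataset; the file name, grid dimensions, and variable names are invented for the example.

* Hypothetical GrADS data descriptor (model.ctl); names and dimensions invented.
DSET ^model.dat
TITLE Hypothetical 4-D model output
UNDEF -9.99e8
* 72 longitudes and 36 latitudes on a 5-degree grid:
XDEF 72 LINEAR 0 5
YDEF 36 LINEAR -87.5 5
* five pressure levels (hPa) and eight 6-hourly time steps:
ZDEF 5 LEVELS 1000 850 700 500 300
TDEF 8 LINEAR 00Z01JAN2000 6hr
VARS 2
ps 0 99 surface pressure (hPa)
t  5 99 air temperature (K)
ENDVARS

In an interactive session, one would then issue commands such as open model.ctl, set lev 500, and d t to display the 500 hPa temperature field; an expression like d ave(t, t=1, t=8) illustrates the FORTRAN-like syntax described above.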
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Common Data Link** Common Data Link: Common Data Link (CDL) is a secure U.S. military communication protocol. It was established by the U.S. Department of Defense in 1991 as the military's primary protocol for imagery and signals intelligence. CDL operates within the Ku band at data rates up to 274 Mbit/s and allows for full duplex data exchange. CDL signals are transmitted, received, synchronized, routed, and simulated by Common Data Link Interface Boxes (CIBs). Common Data Link: The FY06 Authorization Act (Public Law 109-163) requires use of CDL for all imagery unless a waiver is granted. The primary reason waivers are granted is the inability to carry the 300-pound radios on a small (30-pound) aircraft. Emerging technology was expected to field a 2-pound version by the end of the decade (2010). The Tactical Common Data Link (TCDL) is a secure data link being developed by the U.S. military to send secure data and streaming video links from airborne platforms to ground stations. The TCDL can accept data from many different sources, then encrypt, multiplex, encode, transmit, demultiplex, and route this data at high speeds. It uses a Ku narrowband uplink for both payload and vehicle control, and a wideband downlink for data transfer. Common Data Link: The TCDL uses both directional and omnidirectional antennas to transmit and receive the Ku band signal. The TCDL was designed for UAVs, specifically the MQ-8B Fire Scout, as well as crewed non-fighter environments. The TCDL transmits radar, imagery, video, and other sensor information at rates from 1.544 Mbit/s to 10.7 Mbit/s over ranges of 200 km. It has a bit error rate of 10⁻⁶ with COMSEC and 10⁻⁸ without COMSEC. It is also intended that the TCDL will in time support the required higher CDL rates of 45, 137, and 274 Mbit/s.
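The quoted rates and error figures lend themselves to a quick back-of-envelope check. The Python sketch below uses an invented 50-megabyte image purely for illustration; it is not drawn from any CDL documentation.

def transfer_seconds(size_bits, rate_bps):
    """Idealized transfer time, ignoring protocol overhead."""
    return size_bits / rate_bps

def expected_bit_errors(size_bits, ber):
    """Expected number of errored bits at a given bit error rate."""
    return size_bits * ber

image_bits = 50e6 * 8  # hypothetical 50 MB image
for rate_bps in (1.544e6, 10.7e6, 274e6):  # TCDL bounds and the full CDL rate
    print(f"{rate_bps / 1e6:6.1f} Mbit/s -> {transfer_seconds(image_bits, rate_bps):7.1f} s")
print("errored bits at BER 1e-6:", expected_bit_errors(image_bits, 1e-6))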
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tetracaine** Tetracaine: Tetracaine, also known as amethocaine, is an ester local anesthetic used to numb the eyes, nose, or throat. It may also be applied to the skin before starting an intravenous line (injection) to decrease pain from the procedure. Typically it is applied as a liquid to the area. Onset of effects when used in the eyes is within 30 seconds, and the effects last for less than 15 minutes. Common side effects include a brief period of burning at the site of use. Allergic reactions may uncommonly occur. Long-term use is generally not recommended, as it may slow healing of the eye. It is unclear if use during pregnancy is safe for the baby. Tetracaine is in the ester-type local anesthetic family of medications. It works by blocking the sending of nerve impulses. Tetracaine was patented in 1930 and came into medical use in 1941. It is on the World Health Organization's List of Essential Medicines. Medical uses: A systematic review investigated tetracaine for use in emergency departments, especially for starting intravenous lines in children, in view of its analgesic and cost-saving properties. However, it did not find an improvement in first-attempt cannulations. Tetracaine is the T in TAC, a mixture of 5 to 12% tetracaine, 0.05% adrenaline, and 4 or 10% cocaine hydrochloride, used in ear, nose, and throat surgery and in the emergency department where numbing of the surface is needed rapidly, especially when children have been injured in the eye, ear, or other sensitive locations. Mechanism: In biomedical research, tetracaine is used to alter the function of calcium release channels (ryanodine receptors) that control the release of calcium from intracellular stores. Tetracaine is an allosteric blocker of channel function. At low concentrations, tetracaine causes an initial inhibition of spontaneous calcium release events, while at high concentrations, tetracaine blocks release completely.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Minor league football (gridiron)** Minor league football (gridiron): Minor league football, also known as alternative football or secondary football, is an umbrella term for pro football (gridiron) that is played below the major league level. Minor league football (gridiron): The National Football League and Canadian Football League are both designated as major leagues, but contrary to the four other major sports leagues in North America—Major League Baseball, Major League Soccer, the National Basketball Association, and the National Hockey League—no formal development farm system has been in use since the NFL severed ties with all minor league teams in 1948. The developmental league concept was shuttered again with the cancellation of NFL Europe in 2007. Since 2018, the CFL has had a partnership agreement with the Professional American Football League of Mexico (LFA) for player development, but does not consider it a minor league in the traditional sense. In 2023, the NFL signed a collaboration agreement with the XFL on rules, equipment, and safety testing, but the agreement does not cover sharing players for developmental purposes. There have been professional football leagues of varying levels since the invention of the sport, and over the years there have been attempts to organize development or farm leagues, such as the Association of Professional Football Leagues and the World League of American Football, later known as NFL Europe and then NFL Europa, but they failed to produce profits and were eventually shut down. As a result, over time the North American leagues settled into an informal hierarchy, with many aspiring entrepreneurs trying to establish rival, alternative, or supplemental leagues to the NFL, similar to baseball's independent leagues. To this day, besides the All-America Football Conference and the American Football League, which merged with the NFL, none of the other leagues have succeeded, particularly because of the leagues' inability to generate the television revenue needed to keep them afloat in their first years of existence. In modern times, the NFL has developed players not ready for the active roster through each team's practice squad, or relied on college football and separate entities like the now-defunct Arena Football League as its feeder organizations. Since the beginning of the 21st century, three fledgling pro football leagues—the United Football League, the Fall Experimental Football League and the Alliance of American Football (AAF)—had hoped to create a relationship with the NFL as developmental minor leagues, but all folded without any such connection being made. Nevertheless, some players did find a path to the NFL through those leagues, especially the high-level ones like the AAF, XFL, and United States Football League (USFL). Currently, there are five active minor leagues in North America: the USFL, the XFL, the Gridiron Developmental Football League, the Rivals Professional Football League, and the Liga de Fútbol Americano Profesional, with the latter the only Mexican league. The USFL and XFL are considered high-level leagues, and the rest are viewed as low-level leagues. History: Early circuits (1890–1919) The birth of semi-professional football can be traced back to the 1880s, when most sports clubs in America had a team playing football, and ostensibly played without paid players. In reality, most teams often found ways around that, and acquired the best players with the promise of jobs and trophies or watches to play against top regional clubs and colleges.
While the practice of professional and semi-pro teams playing college and amateur teams was common in the 1880s and 1890s, most notably with the establishment of a coalition of teams that operated from 1886 to 1895 in the New York metropolitan area called the American Football Union, in the 20th century college and professional football began to diverge, and college-professional interplay effectively ended after the NCAA formed in 1906. During this time, the most prominent circuit was the Western Pennsylvania Professional Football Circuit. The first attempt to form a pro league was the National Football League of 1902, which, despite its name, was a Pennsylvania regional league, with two teams based in Philadelphia and the third from Pittsburgh. The next step came when promoter Tom O'Rourke established the World Series of Football, also in 1902. The series was played indoors at New York City's Madison Square Garden and consisted of five teams: three from the state of New York, one from New Jersey, and another team called New York comprising two Philadelphia teams, the Athletics and Phillies. The 1903 series also featured the Franklin Athletic Club from Pennsylvania. At the same time, the Massillon Tigers, the Columbus Panhandles, and the Canton Bulldogs, all from Ohio, started attracting much of the top professional football talent in America: Harry McChesney, Bob Shiring, the Nesser brothers, Blondy Wallace, Cub Buck, and later Jim Thorpe, and gave rise to the Ohio League. The league was actually a circuit: an informal and loose association of independent teams playing other local teams and competing for the Ohio Independent Championship. The group pioneered the concept of avoiding competition with college football games by playing games on Sundays, which was illegal in other states due to the existing blue laws. This eventually became the professional standard. The Ohio League's decade-long monopoly began to lose hold in the 1910s, with the formation of the New York Pro Football League (NYPFL), the first league to use a playoff format, and other associations in the Midwest, particularly in Illinois. The rise in the level of play resulted in barnstorming tours between the circuits, which laid the foundations for the first truly national major league: the American Professional Football Association in 1920, which later became the NFL. History: The Golden Era The first minor league period of prosperity or "heyday" started in the 1920s and lasted until the end of World War II. By the 1930s, football was not a fledgling enterprise, but pro football was, as even the National Football League had trouble attracting fans, and was located mostly in the northeastern quarter of the United States. In other parts of the country, several regional leagues tried their luck in the pro game, along with flourishing regional circuits of independent teams, recapturing pro football's roots. The era is also considered the best of all time, due to the quality of play, as there were only 250 players in the NFL, while the regional leagues could sometimes offer better pay and jobs, and offered black players an opportunity to play during the period of 1933–1946, when they were excluded from all NFL teams. In 1934, the American Football League (AFL) was the first true attempt to establish pro football in the American South and Southwest regions.
The league was formed by the strongest independent teams in the region, including the Memphis Tigers, who claimed the national pro championship in 1929 after beating the NFL champion Green Bay Packers. The AFL had only one season of competition and folded after only the Memphis Tigers and Charlotte Bantams completed their seasons. Another strong southern league was the Dixie League, which represented Mid-Atlantic teams. The league was one of the most successful minor leagues in history, playing eight seasons in eleven years, while claiming to be the highest-level minor football league of the era. Unlike most pro-football minor leagues, the Dixie League had a relatively stable membership until the Pearl Harbor attack forced the league into hiatus. The league returned in 1946, but folded in 1947 after playing only one week. The Dixie League's biggest counterpart was the American Association (AA) football league. The AA was formed by the nucleus of independent teams that played in the New York and New Jersey circuits, and was led by president Joe Rosentover. The league's teams sought relationships with the NFL, and several teams, including the Newark Bears, Brooklyn Eagles, and Jersey City Giants, functioned as a farm system for the major NFL teams. The league allowed black players to participate, including Joe Lillard, the last African-American player in the NFL in 1933. Most teams scheduled games against the independent Fritz Pollard's Harlem Brown Bombers. The league closed operations during World War II, and after a four-year hiatus, the AA was renamed the American Football League and expanded to include teams in Ohio and Pennsylvania. The league's demise was caused by the NFL severing ties with all minor league teams in 1948. The last of the "Big Three Leagues" was the Pacific Coast Professional Football League, which started in 1940. The roots of pro football in the West are attributable to the Red Grange barnstorming tour with the Chicago Bears in 1926, as some short-lived leagues, including the 1926 Pacific Coast Professional Football League (PCPFL) and the 1934–1935 American Legion League, were formed. The PCPFL was formed thanks to the financial backbone of the sport in California, the Los Angeles Bulldogs, billed as the "best football team in existence outside the NFL", and was the only prominent minor football league to operate during the war years. The league became home to the top African-American football talents in the country, including Kenny Washington, Woody Strode, Ozzie Simmons, Mel Reid, and briefly Jackie Robinson, during the NFL's enforced color barrier. The league played its last season in 1948, two years after the NFL's Rams moved to Los Angeles. The Big Three reached an agreement with the NFL, and in 1946 formed the Association of Professional Football Leagues as a formal farm system with the league. The agreement lasted less than two years, as the NFL cancelled it altogether in 1948. The termination triggered the end of the era. History: Other prominent leagues were the Anthracite League of Pennsylvania, the Eastern League of Professional Football based in Pennsylvania and New Jersey, the Ohio Valley League, the Midwest Football League, and the Northwest War Industries League in Washington and Oregon. During the 1930s and 1940s, there were also strong independent circuits in the Greater New York metropolitan area and the Northeast.
History: The second wave The minor leagues experienced a renaissance in the 1960s and 1970s; their growing relevance occurred alongside the AFL–NFL rivalry. Several prominent leagues operated during that period, most of them regional. The original United Football League (UFL) lasted from 1961 to 1964 and was concentrated in the Midwest; it was the first football league to operate teams in both the United States and Canada, as the Quebec Rifles played in the league in 1964. The Atlantic Coast Football League (ACFL), based in the Northeast, formed in 1962 and was run by Joe Rosentover, former president of the 1930s AA. In April 1964, the two leagues, along with the Central States Football League, the Midwest Football League, and the Southern Football League, formed the Association of Minor Football Leagues. The association also included the non-paying semi-pro New England Football Conference, and appointed UFL commissioner George T. Gareff as the CEO. The association represented teams in fifty cities spanning twenty-one U.S. states plus Quebec, and scheduled exhibition games between leagues, but disbanded after two years without notice. History: When the UFL folded and the Newark Bears of the ACFL unsuccessfully applied to join the AFL, two new national leagues formed. The first was the North American Football League (NAFL), which ran from 1965 to 1966 and tried to establish major league affiliations with either the NFL or the AFL. The second, the Continental Football League (CoFL), which ran from 1965 to 1971, was probably the biggest of the era, and attracted three teams from the ACFL: the Hartford Charter Oaks, the Newark Bears, and the Norfolk Neptunes. History: Some of the other notable leagues were the Professional Football League of America (PFLA), which lasted from 1965 to 1967 and played in the Midwest, essentially replacing the UFL; the North Pacific Football League (NPFL); and the Texas Football League (TFL), which operated in the southern United States. Those leagues would later merge with the CoFL, as several teams from the NPFL joined the league in 1966, and the PFLA followed in 1968, resulting in the dissolution of both leagues. In 1969, the CoFL announced that all eight teams from the TFL were being added to its ranks as a separate division, scheduled to play mostly against each other, along with a few inter-league contests. The two bigger leagues, the CoFL and ACFL, had different strategies: the CoFL aimed to remain independent, while the ACFL functioned as a developmental league and, like previous Rosentover leagues, allowed its teams to become farm teams for the AFL and NFL. Over their existence, the CoFL arguably had better talent move on to NFL and Canadian Football League (CFL) stardom, including Ken Stabler, Don Jonas, and Sam Wyche, but it folded in 1969, and again in 1971 after an incarnation called the Trans-American Football League (TAFL), formed with remnants of the TFL, failed; plans to take on the Canadian Football League head-to-head were abandoned. Although the revival as the TAFL was largely a failure, the league foreshadowed the future of minor football, as it played its season in the spring to avoid direct competition against football in the fall. The ACFL also produced some significant talent, such as Pro Bowler Marvin Hubbard, the first female professional football player Patricia Palinkas, who held the ball on placekicks for her husband Stephen, and cult figure King Corcoran. It also lasted longer.
The league operated continuously through 1971, with a return season in 1973 that was played mostly by teams promoted from the lower-level Seaboard Football League (SFL), which in turn brought up semi-pro teams to replace them. However, the World Football League (WFL), an attempted major league, sapped both leagues of most of their talent, forcing them to fold by 1974. The ACFL's first collapse, in 1972, along with the demise of the Midwest Professional Football League, ended the era of NFL teams having individual farm teams.

History: During its existence, the SFL hovered between minor league and semi-pro status, as some of its players, most notably Joe Klecko, were never paid, and others received only fifty dollars per game. Despite that, the league had some notable alumni, including Vince Papale, Jack Dolbin, and Klecko. Additionally, the league was the last minor league to play an inter-league exhibition match against an NFL team, when the New York Jets rookies defeated the Long Island Chiefs 29–3.

Another minor league attempt in the 1970s, the American Football Association (AFA), operated from 1977 to 1983 but was less successful, as it struggled to acquire recognizable players and failed to secure a TV deal. The AFA followed the model set by the TAFL and played its season from May to August. The formation of the first United States Football League (USFL) in 1982 led to a decline in AFA talent, a move to semi-pro status, and the league's outright cancellation after the 1983 season.

History: Near the end of the era, there was one last attempt to organize non-NFL pro teams under one umbrella, with the establishment of the Minor Professional Football Association, which represented more than 200 teams and about 10,000 players. From 1980 through 1985, the association sponsored an annual post-season championship tournament for minor league teams, as part of an attempt to establish a minor-league system. In 1981 the association reached an agreement with the NFL to hold a special national all-star game for minor leaguers the day before the Super Bowl, with scouts in attendance. The NFL had the right to sign any player from the association for a $1,500 payment to the team that held his contract. But the agreement did not continue, and in 1986 the association reformed into the American Football Association (AFA) and focused on providing services to semi-pro and amateur teams around the US. The development of arena football and the birth of the Arena Football League (AFL) in 1987 effectively ended the era, reducing most outdoor leagues to amateur or semi-pro status.

History: NFL Europe After the turmoil of the 1980s, the NFL decided to form its own league in 1991, the World League of American Football, as a spring developmental league. For the first time, an American sports league had a European division as part of its ten-team circuit, while the other teams were located in the continental US and Canada. The league was used to test rule changes and technical innovations and was planned as a farm system for NFL teams. However, the first two seasons produced low TV ratings, and the league was put on hiatus until 1995. When it came back, the league was based entirely in Europe, reduced to six teams, and re-branded in 1998 as NFL Europe. The league kept the same format until 2007, when the NFL terminated it.
Ultimately, the league was one of the longest-tenured high-level minor leagues in history, lasting fifteen years in total and producing players like Hall of Famer Kurt Warner and Super Bowl quarterbacks Brad Johnson and Jake Delhomme. Other notable players include Dante Hall, David Akers, James Harrison, Adam Vinatieri, and William Perry.

History: Modern era Early 2000s In the late 1990s and early 2000s, a wave of entrepreneurs tried forming new leagues in the ever-growing football market.

History: The first league was the self-styled "major league of spring football", the Regional Football League (RFL), which played a single season in 1999. The league was initially planned to begin in 1998, but financial difficulties delayed it by a year and changed the business plan, transforming it into a lower-budget league featuring just six teams from mid-sized cities, mostly located in the southern United States. The league did not prosper, as it failed to secure a television contract and was forced to play a shortened eight-week season. Although the league was unsuccessful, it pioneered the idea of assigning players to teams based on the region where they played in college.

History: Parallel to the RFL, there were two more attempts to start up new leagues. The first, the International Football Federation, folded so rapidly that it is considered the shortest-lived football league in history, ceasing operations before completing the preliminary planning stages. The second, the Spring Football League, was founded by several ex-NFL players: Bo Jackson, Drew Pearson, Eric Dickerson, and Tony Dorsett. It failed to attract big investors on account of the tech-market crash of 2000, and was cancelled after only two weeks.

The next attempt was probably the most significant since the emergence of the AFL in 1960, as NBC and the WWE collaborated to form the (original) XFL in 2001. Although 14 million viewers tuned in for the first game, the Nielsen ratings later plummeted, prompting NBC to pull out of its broadcast contract, and the league folded after one season. The league featured several changes in rules and broadcast style, and popularized the Skycam in sports broadcasting.

From the late 2000s into the early 2010s, many startup leagues had trouble attracting investors. Five high-profile attempts by the All American Football and United National Gridiron leagues in 2007, the New United States Football League in 2010 and 2014, and the A-11 Football League in 2014 never materialized. Two other leagues in the era were the low-level New World Football League (NWFL) and the Stars Football League (SFL), both of which survived three seasons: the NWFL from 2008 to 2010, and the SFL from 2011 to 2013.

History: The modern-day United Football League (UFL) was the most prominent league of the era, playing 3½ seasons before folding. The UFL was fairly successful, attracting big crowds in Omaha, Sacramento, and Hartford, airing all league games on Versus, HDNet, and over the Internet, and had plans to expand. It functioned as a single-entity league following the Major League Soccer model. The UFL featured former NFL players, and was the first professional fall league other than the National Football League to play in the United States since the mid-1970s.
The league collapsed in the middle of its 2012 season, failing to pay its bills after most investors stepped out.

The Fall Experimental Football League (FXFL), founded in 2014, was the first league since the 1970s to openly embrace the minor league concept and aim to become a professional feeder system for the NFL. The league's owner, Brian Woods, wanted his franchises to be based primarily in minor league baseball stadiums and to use the infrastructure already in place to attract fans. The FXFL attracted the final NFL roster cuts, for the purpose of keeping them "in football shape, physically and mentally". The league was cancelled after two abbreviated seasons, and was reformatted as the developmental Spring League in 2016.

History: New resurgence In 2018, several figures with connections to the original 2001 XFL entered the spring-football market with rival leagues. The first was the Alliance of American Football (AAF), founded by Charlie Ebersol and Bill Polian, which began playing in 2019 but ceased operations eight weeks in, as controlling owner Thomas Dundon decided to pull the plug after the league made a deal with the NFL, which planned to take ownership of 15% of the league.

The second was the relaunched version of the XFL, for which Vince McMahon hired Oliver Luck as commissioner. The league began play in 2020, with more success and a better reception than its first iteration, and aired on ABC, ESPN, and Fox Sports. After five weeks of play, the XFL announced its season would end because of growing COVID-19 pandemic concerns. The league was on hiatus in 2021 and 2022, after it filed for bankruptcy and was put up for sale by McMahon; it was later sold to Dany Garcia, Dwayne "The Rock" Johnson, and RedBird Capital, and began its second season on February 18, 2023.

Six other leagues have entered the planning stage but have yet to launch. The first is the Spring League of American Football, a planned high-level minor league that was first announced in September 2016 by two former Madison Square Garden executives, but never acquired the funding to begin play. As of 2023, the league appears defunct, with no official website and no news since 2018. Major League Football was founded in 2014, but so far has only two cancelled seasons, in 2016 and 2022. The American Patriot League, founded in 2018 and planned to start in 2019, held two league showcases and allocated players and coaches to teams, but still has not launched because of the COVID-19 pandemic. Another planned league is the Freedom Football League, also founded in 2018, run by former NFL players Jeff Garcia, Ricky Williams, Terrell Owens, and Simeon Rice, with an initial 2020 starting season but no recent updates to its timeline.

History: The last two were developmental-level leagues. The first was Pacific Pro Football, founded in 2017 and designed for non-NFL-eligible players; it was abandoned in mid-2020 after several investors backed out, and was reformatted into a scouting event called HUB Football. The other, Your Call Football, did start, lasting from 2018 to 2019, and featured concepts that gave fans the power to control the outcome, which were also adopted by the indoor Fan Controlled Football league; it was abandoned when its parent company moved on to adapting the technology to other sporting environments.
History: From 2017, a developmental league called The Spring League (TSL) was aimed at professional athletes, acting as an "instructional league and showcase for professional football talent" with abbreviated seasons. On June 3, 2021, TSL owner Brian Woods announced that he had acquired the remaining extant trademarks of the United States Football League and launched a USFL-branded league in 2022, with Fox Sports owning the league and reportedly committing $150 million over three years to its operations, essentially ending the five-year run of TSL and establishing the USFL as a new high-level minor league.

System and structure: There have been professional football leagues of varying levels since the invention of the sport, with the NFL dominant through most of the 20th century and into the 21st. There have been many attempts to start rival major leagues, most recently the original United States Football League (USFL), but most leagues that followed have been high-level minor leagues such as the XFL, the UFL, and the AAF. Whether major or minor, most football leagues have tried to establish teams in large, untapped U.S. markets.

Most of the minor leagues have been separated through the years into three de facto categories: high-level, including the Pacific Coast Professional Football League (PCPFL), the original XFL, and the Alliance of American Football (AAF); low-level, such as the American Football Association (AFA) and Seaboard Football League (SFL); and semi-professional leagues. Today, there are also mid-level leagues, including the Regional Football League (RFL) and Fall Experimental Football League (FXFL), and developmental leagues, such as The Spring League (TSL) and Your Call Football.

The categories are usually determined by the following rules: high-level leagues pay salaries above the median personal income in the United States, mid-level leagues pay at approximately the median level, and low-level leagues pay around or below the minimum wage in the United States. The developmental leagues do not pay salaries, or contract with non-NFL-eligible players, and are designed to showcase the players' skills for future opportunities.

Since 1998, there have been more than twenty traditional or indoor football leagues that played an average of 3½ years before folding or merging with others; some never reached the stage of playing games. There are five active minor leagues in North America: two high-level, the second USFL and second XFL; and three low-level leagues, the Gridiron Developmental Football League, the Rivals Professional Football League, and the Mexican Liga de Fútbol Americano Profesional.

System and structure: Indoor American football The high cost of supporting an entire roster of professional players and stadium fees led to an indoor variation with the launch of the Arena Football League (AFL) in 1987. In its heyday, it functioned as a de facto minor league to the NFL, as the owners of six NFL teams (the Atlanta Falcons, Dallas Cowboys, Denver Broncos, Detroit Lions, New Orleans Saints, and Tennessee Titans) purchased teams in the AFL, and many players and coaches made the transition between leagues. On February 8, 1999, the NFL also purchased, but never exercised, an option to buy a major interest in the AFL.
As of 2023, the Indoor Football League (IFL) has a player personnel partnership with the XFL, functioning as its de facto minor league.

Prior to the first AFL's collapse in 2008, the league had its own minor developmental league called af2, which never actually functioned as a farm system; it dissolved after the 2009 season amid financial problems rooted in the 2007–2008 financial crisis, and several of its teams joined the second AFL, which began play in 2010.

Today, the indoor variation of football also has an unofficial minor-league hierarchy, although no major indoor league currently exists after the second AFL went bankrupt in 2019. Pro leagues pay varying salaries on a per-game basis, while the high-level leagues also provide housing, health insurance, and two meals per day to players during the season.

System and structure: The categories are more fluid than in the outdoor variation, but are usually determined by per-game salaries and arena size: high-level – Indoor Football League, National Arena League; mid-level – Champions Indoor Football; low-level – American Indoor Football Alliance, American Arena League, American West Football Conference.

System and structure: Semi-pro football Semi-pro leagues have existed since the beginning of American football, but were far more common in the early and mid 20th century than they are today. Football is especially suited for semi-pro play, and most leagues often operate at a semi-professional level due to cost concerns. Furthermore, because they play only one game per week, the players are able to pursue outside employment. In the 21st century, the semi-pro circuits usually attract only local players and teams do not pay salaries, although in the past most teams helped players find local jobs within the community. Over the years, semi-pro leagues were effectively a farm system for the NFL, attracting college players on the cusp of playing in the NFL who needed to stay in shape.

The semi-pro game experienced two peak periods: first in the 1950s, then in the 1970s and 1980s, when minor leagues started disappearing. Instead, the level below the NFL tended to take the form of local, sometimes unofficial leagues matching teams from different neighborhoods or suburbs of big cities with little to no pay. The semi-pro leagues' role in history is best portrayed by the 24-day 1987 NFLPA strike, when semi-pro players were called up as replacements after the third week of the NFL season was cancelled. Their stories are documented in the 2017 ESPN film Year of the Scab. The decline of semi-pro football is attributable to the rise of college football in the 1980s, and the subsequent growth of a vast pool of young talent from which the NFL can draw.

System and structure: Notable semi-pro players include Johnny Unitas, who played quarterback, safety, and punter on a Pittsburgh suburb team called the Bloomfield Rams for six dollars per game before joining the Baltimore Colts; Eric Swann, who played for the Bay State Titans in the Boston suburb of Lynn and was the first, and so far only, player to be drafted in the first round of the NFL draft from a semi-pro organization; and Ray Seals, who did not play college football, but made his way to the NFL through the semi-pro ranks with the Syracuse Express.

System and structure: The Watertown Red & Black, a semi-professional team that currently plays in the Empire Football League, is the oldest existing football club, tracing its history to 1896.
System and structure: Minor League Football System After the decline of the minor leagues in the 1980s, the semi-pro circuit tried to fill that niche. In the summer of 1989, the Minor League Football System (MLFS) was formed as an attempt to develop a nationwide semi-pro football league. The circuit had aspirations to become a feeder system for the NFL and featured eleven teams in as many states. The league's commissioner was Roger Wehrli. Because the league wanted to attract good local talent but did not pay, it functioned as a temporary employment agency, offering jobs and housing for players in local communities during the season. Despite that, it managed to attract decent talent, including ex-NFL players such as Rusty Hilger and Ben Rudolph, as well as coaches including Walt Michaels, Darryl Rogers, and Lou Saban. After a successful first season, the league attracted strong sponsors in Wilson and Gatorade, but two teams folded midway through the second year, while the rest folded after the end of the season, unable to establish a working agreement with the NFL.

System and structure: Modern circuit Today, most leagues and independent teams are sanctioned by the American Football Association, which acts as an organizer of games and playoff tournaments for teams throughout the U.S. and maintains a Minor League/Semi-Pro Football Hall of Fame. Another semi-pro organization is the USA Bowl Championship Series, which ranks the top twenty-five semi-pro and amateur teams in the country, and crowns an annual "national champion" at the USA Bowl. The final such association is the United States Federation of American Football, which tries to divide the existing leagues into AAA and AA groups rated by business practices, representation, and athletics; it was formerly recognized by the International Federation of American Football as the USA's football governing body.

Under USA Football and Football Canada criteria, players at this level are eligible for the United States national American football team and the Canada men's national football team, respectively.

Among the prominent present-day leagues in the adult amateur/semi-pro US circuit, the NEFL is unique in the American sports landscape, allowing promotion and relegation among conferences.

System and structure: In Canada there are three prominent leagues. The AFL and NFC are considered the bigger leagues, and every September the NFC champion meets the champion of the AFL to determine the Canadian Major Football League national champion. Canada also has three prominent junior leagues: the Atlantic Football League, the Canadian Junior Football League, and the Quebec Junior Football League.

System and structure: International American Football Leagues American football continues to grow in popularity worldwide, and has had International Olympic Committee recognition since 2013. The NFL has tried to expand its exposure to additional markets by playing games outside the United States. The first pro game outside the U.S. and Canada was played in Japan in 1976. In 1978 the NFL played in Mexico, and in 1983 it had its first game in Europe, in London.

With the success of the international series in the 1970s and 1980s, other countries established their own leagues, some of which have earned good reputations over the years, especially the long-established leagues throughout Europe and in Japan. European leagues and teams attract and sign American coaches and import players, some of whom have NFL experience, from U.S. colleges or other leagues.
The number of import players allowed per team is set by league rules. The typical American import player's contract includes a monthly salary, housing, insurance, transportation, round-trip flights, meals, and possible performance bonuses. The top leagues in Europe are traditionally the German Football League (GFL), the Austrian Football League, the Italian Football League, and the Finnish league. The X-League in Japan, which plays under NCAA or NFL rules, is also very strong.

Usually the foreign players in the National Football League moved to the US early and played the game in college, but there are exceptions. Anthony Dablé, a French football player, was the first foreign pro player to sign in the NFL. Moritz Böhringer was drafted in 2016 directly from the GFL, but did not play in an official game before returning to the GFL in 2021. Efe Obada was the first international pathway player to make an NFL 53-man active roster. Since 2017, the NFL has run the International Player Pathway Program (IPPP) to increase the number of non-American and non-Canadian players in the league.

Since 2017, the Canadian Football League (CFL) has tried to globalize as well, and has made partnership agreements with football leagues in Austria, Denmark, Finland, France, Germany, Italy, Japan, Mexico, Norway, Sweden, and the United Kingdom. The league first held a special draft in 2019 for Mexican-born players, and another for European players; since 2021 it has held a global scouting combine and a Global Draft for players from Europe, Mexico, and Japan. Today, each CFL team includes two designated global-player roster spots for players from outside the U.S. and Canada.

Another American league, the Gridiron Developmental Football League, has partnership agreements for player development with two international leagues: the Elite Football League of India and the Federación Deportiva Nacional de Fútbol Americano de Chile (FDNFA).

System and structure: European League of Football In March 2021, a new league, the European League of Football, announced it had reached an agreement with the NFL to be able to use team names from the days of NFL Europe. The league is a professional American football league, and consists of seventeen teams located in nine countries: Germany, Poland, Spain, Austria, Italy, Switzerland, Hungary, the Czech Republic, and France. There are plans to expand to twenty-four teams by 2025. KaVontae Turpin was the first ELF alumnus to play in the NFL, while Adedayo Odeleye and Marcel Dabo signed as practice squad players through the NFL's IPPP.

System and structure: Australian Football League American football uses significantly different rules than Australian rules football, played in the Australian Football League (AFL). However, the punting specialist position requires similar skills under both rule sets. The most successful player ever to make the transition from the AFL to the NFL is Darren Bennett, who was selected to the NFL 1990s All-Decade Team. Because salaries are usually up to five times higher in the U.S., a high number of players try their luck in the American game. In the last decade, the NFL has placed full-time development officers in Australia, and there is a full-time punting academy in Australia called Prokick Australia, which is aimed at training and assessing talented punters from the country for positions in major U.S. colleges and the NFL.

System and structure: Although the vast majority of Australian players in the NFL are punters, there are a few exceptions.
The most famous is offensive tackle Jordan Mailata, who played rugby league and was drafted in 2018 without college experience. Another example is Joel Wilkinson, who signed with the Arizona Cardinals as a cornerback. Defensive end Adam Gotsis is probably the most successful non-punter Australian. He played at Georgia Tech in college and was drafted in the second round of the 2016 NFL Draft by the Denver Broncos. Other notable players are Jarryd Hayne and Jesse Williams.

Current and planned Minor leagues: Current leagues
High-level
XFL, 2020; 2023–
United States Football League (USFL), 2022–
Low-level
Gridiron Developmental Football League (GDFL), 2010–
Rivals Professional Football League (RPFL), 2014–
Liga de Fútbol Americano Profesional (LFA), 2016–

Current and planned Minor leagues: Planned leagues
High-level
International Football Alliance (IFA), proposed to begin in 2024
Spring League of American Football (SLAF), postponed
Freedom Football League (FFL), postponed
Mid-level
American Spring Football League (ASFL), proposed to begin in 2023
American Patriot League (APL), postponed
Low-level
Major League Football (MLF), proposed to begin in 2022, but pushed back to 2023
Developmental
Young Superstars League, TBA (APL D-League)

Defunct Minor leagues: High-level
Anthracite League, 1924
Ohio Valley League, 1925–1929
American Football League, 1934
American Association*, 1936–1941 / American Football League*, 1946–1950
Dixie League*, 1936–1942; 1946–1947 (originally the South Atlantic Football Association)
Midwest Football League, 1935–1939 (became the American Professional Football Association in 1938, the American Football League in 1939)
Pacific Coast Professional Football League*, 1940–1948
American Football League, 1944
United Football League, 1961–1964
Atlantic Coast Football League‡, 1962–1971, 1973
Continental Football League, 1965–1969 (absorbed the Professional Football League of America in 1968 and the Texas Football League in 1969)
International Football League, 1983𝐟
World League of American Football*, 1991–1992 / NFL Europe*, 1995–2007
Professional Spring Football League, 1992𝐟
Fan Ownership League, 1996𝐟
All-American Football League, 1997𝐟
International Football Federation, 1999𝐟
XFL, 2001
All American Football League, 2007𝐟
United Football League, 2009–2012
New United States Football League, 2010𝐟, 2014𝐟
A-11 Football League, 2014𝐟
North American Football League, 2014𝐟
Alliance of American Football, 2019

Defunct Minor leagues: Mid-level
Regional Football League, 1999
Spring Football League, 2000
Fall Experimental Football League, 2014–2015
Major League Football, 2016𝐟; 2022𝐟

Defunct Minor leagues: Low-level
Pacific Coast League, 1926
Eastern League of Professional Football, 1926–1927
Anthracite League, 1928–1929
Eastern Football League, 1932–1933 (renamed the Interstate Football League in 1933)
Greater New York League, 1934–1935 (originally the New Jersey Football Circuit, 1934)
American Legion League, 1934–1935
Northwest Football League, 1935–1938 (outgrowth of the Tri-States Football League, 1934)
New England Football League, 1936
Virginia-Carolina Football League, 1937
California Football League, 1938
Eastern Pennsylvania Football League‡, 1938
Northeast Football League, 1940–1942
Southern Professional Football League, 1940𝐟; 1944𝐟
Ohio Professional Football League, 1941
Northwest War Industries Football League, 1942
Eastern Football League, 1944
Virginia Negro League, 1946
Central States Football League, 1948–1953
Pacific Football Conference, 1957–1958
American Football Conference, 1959–1961
Central States Football League, 1962–1975 (outgrowth of the Bi-States Football League, 1949–1959, and the Tri-States Football League, 1960–1961)
Midwest Football League‡, 1962–1978
Southern Football League, 1963–1965 (merger of the Dixie Football League, 1961–1962, and the Florida Football League, 1962–1963)
North Pacific Football League‡, 1963–1966
New England Football League, 1964–1967 (renamed the North Atlantic Football League in 1967)
North American Football League, 1965–1966 (supplemented the Southern Football League in 1966)
Professional Football League of America‡, 1965–1967
Texas Football League, 1966–1968
United American Football League, 1967
Trans-American Football League, 1970–1971
Midwest Professional Football League‡, 1970–1972
Seaboard Football League, 1971–1974
Southwestern Football League, 1972–1973
California Football League, 1974–1982 (renamed the Western Football League for the 1976 season)
American Football Association, 1977–1983
Northern States Football League, 1977–1985
United Football Teams of America, 1982
United National Gridiron League, 2007𝐟
World Football League, 2008–2010
Hawaii Professional Football League, 2011𝐟
Stars Football League, 2011–2013
Trinity Professional Spring Football League, 2018𝐟
Fútbol Americano de México, 2019–2022

Defunct Minor leagues: Developmental
Pacific Pro Football, 2017𝐟
The Spring League, 2017–2021
Your Call Football, 2018–2019

* Official NFL / AFL minor league. ‡ Unofficial NFL minor league that featured NFL farm team(s). 𝐟 Folded without playing.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Command queue** Command queue: In computer science, a command queue is a queue that allows delaying the execution of commands, either in order of priority, on a first-in first-out basis, or in any order that serves the current purpose. Instead of waiting for each command to be executed before sending the next one, the program just puts all the commands in the queue and goes on doing other things while the queue is processed by the operating system. This delegation not only frees the program from handling the queue but also allows a more optimized execution in some situations. For instance, when handling multiple requests from several users, a network server's hard drive can reorder all the requests in its queue using, for instance, the elevator algorithm to minimize the mechanical movement of the drive head. Examples: Native Command Queuing (NCQ) in Serial ATA (SATA); Tagged Command Queuing (TCQ) in Parallel ATA and SCSI.
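To make the two ideas above concrete, here is a minimal Python sketch of a fire-and-forget command queue and an elevator-style reordering of disk requests. The CommandQueue class and elevator_order function are illustrative names invented for this example, not part of any real driver or operating-system API, and the integer "tracks" stand in for disk positions:

```python
from collections import deque

class CommandQueue:
    """Minimal FIFO command queue: callers enqueue commands now,
    and a worker drains and executes them later."""

    def __init__(self):
        self._pending = deque()

    def submit(self, command):
        # Enqueue without blocking on execution ("fire and forget").
        self._pending.append(command)

    def process_all(self):
        # Drain and execute in first-in first-out order.
        while self._pending:
            self._pending.popleft()()

def elevator_order(tracks, head):
    """Reorder pending disk-track requests as an elevator (SCAN) sweep:
    serve tracks at or above the head moving outward, then the
    remaining tracks moving back inward, minimizing head travel."""
    outward = sorted(t for t in tracks if t >= head)
    inward = sorted((t for t in tracks if t < head), reverse=True)
    return outward + inward

# With the head at track 50, five pending requests are served in one
# outward sweep and one inward sweep instead of random back-and-forth.
print(elevator_order([95, 12, 60, 33, 74], head=50))  # [60, 74, 95, 33, 12]
```

The point of the sketch is the separation of concerns: the submitter only appends to the queue, while whatever drains it is free to choose FIFO, priority, or a mechanically motivated order like the elevator sweep.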
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Brick** Brick: A brick is a type of construction material used to build walls, pavements and other elements in masonry construction. Properly, the term brick denotes a unit primarily composed of clay, but it is now also used informally to denote units made of other materials or other chemically cured construction blocks. Bricks can be joined using mortar, adhesives or by interlocking. Bricks are usually produced at brickworks in numerous classes, types, materials, and sizes which vary with region, and are produced in bulk quantities. Block is a similar term referring to a rectangular building unit composed of clay or concrete, but is usually larger than a brick. Lightweight bricks (also called lightweight blocks) are made from expanded clay aggregate.

Brick: Fired bricks are one of the longest-lasting and strongest building materials, sometimes referred to as artificial stone, and have been used since circa 4000 BC. Air-dried bricks, also known as mud-bricks, have a history older than fired bricks, and have an additional ingredient of a mechanical binder such as straw. Bricks are laid in courses and numerous patterns known as bonds, collectively known as brickwork, and may be laid in various kinds of mortar to hold the bricks together to make a durable structure.

History: Middle East and South Asia The earliest bricks were dried mud-bricks, meaning that they were formed from clay-bearing earth or mud and dried (usually in the sun) until they were strong enough for use. The oldest discovered bricks, originally made from shaped mud and dating before 7500 BC, were found at Tell Aswad, in the upper Tigris region and in southeast Anatolia close to Diyarbakir. Mud-brick construction was used at Çatalhöyük from c. 7400 BC. Mud-brick structures dating to c. 7200 BC have been located in Jericho, in the Jordan Valley; these structures were made up of the first bricks, with dimensions of 400 × 150 × 100 mm.

Between 5000 and 4500 BC, Mesopotamia discovered fired brick. The standard brick sizes in Mesopotamia followed a general rule: the width of the dried or burned brick would be twice its thickness, and its length would be double its width. The South Asian inhabitants of Mehrgarh also constructed air-dried mud-brick structures between 7000 and 3300 BC, as did the later ancient Indus Valley cities of Mohenjo-daro and Harappa. Ceramic, or fired, brick was used as early as 3000 BC in early Indus Valley cities like Kalibangan.

In the middle of the third millennium BC, there was a rise in monumental baked brick architecture in Indus cities. Examples included the Great Bath at Mohenjo-daro, the fire altars of Kalibangan, and the granary of Harappa. There was a uniformity to the brick sizes throughout the Indus Valley region, conforming to a 1:2:4 ratio of thickness, width, and length. As the Indus civilization began its decline at the start of the second millennium BC, Harappans migrated east, spreading their knowledge of brickmaking technology. This led to the rise of cities like Pataliputra, Kausambi, and Ujjain, where there was an enormous demand for kiln-made bricks. By 604 BC, bricks were the construction material for architectural wonders such as the Hanging Gardens of Babylon, where glazed fired bricks were put into practice.

History: China The earliest fired bricks appeared in Neolithic China around 4400 BC at Chengtoushan, a walled settlement of the Daxi culture. These bricks were made of red clay, fired on all sides to above 600 °C, and used as flooring for houses.
By the Qujialing period (3300 BC), fired bricks were being used to pave roads and as building foundations at Chengtoushan. According to Lukas Nickel, the use of ceramic pieces for protecting and decorating floors and walls dates back to 3000–2000 BC, and perhaps even earlier, at various cultural sites, but these elements should rather be qualified as tiles. For the longest time builders relied on wood, mud and rammed earth, while fired brick and mud-brick played no structural role in architecture. Proper brick construction, for erecting walls and vaults, finally emerged in the third century BC, when baked bricks of regular shape began to be employed for vaulting underground tombs. Hollow brick tomb chambers rose in popularity as builders were forced to adapt to a lack of readily available wood or stone. The oldest extant brick building above ground is possibly the Songyue Pagoda, dated to 523 AD.

History: By the end of the third century BC in China, both hollow and small bricks were available for use in building walls and ceilings. Fired bricks were first mass-produced during the construction of the tomb of China's first Emperor, Qin Shi Huangdi. The floors of the three pits of the terracotta army were paved with an estimated 230,000 bricks, with the majority measuring 28 × 14 × 7 cm, following a 4:2:1 ratio. The use of fired bricks in Chinese city walls first appeared in the Eastern Han Dynasty (25–220 AD). Up until the Middle Ages, buildings in Central Asia were typically built with unbaked bricks; only starting in the ninth century AD were buildings entirely constructed using fired bricks.

History: The carpenter's manual Yingzao Fashi, published in 1103 at the time of the Song dynasty, described the brick-making process and glazing techniques then in use. Using the 17th-century encyclopaedic text Tiangong Kaiwu, historian Timothy Brook outlined the brick production process of Ming Dynasty China: ...the kilnmaster had to make sure that the temperature inside the kiln stayed at a level that caused the clay to shimmer with the colour of molten gold or silver. He also had to know when to quench the kiln with water so as to produce the surface glaze. To anonymous labourers fell the less skilled stages of brick production: mixing clay and water, driving oxen over the mixture to trample it into a thick paste, scooping the paste into standardised wooden frames (to produce a brick roughly 42 cm long, 20 cm wide, and 10 cm thick), smoothing the surfaces with a wire-strung bow, removing them from the frames, printing the fronts and backs with stamps that indicated where the bricks came from and who made them, loading the kilns with fuel (likelier wood than coal), stacking the bricks in the kiln, removing them to cool while the kilns were still hot, and bundling them into pallets for transportation. It was hot, filthy work.

History: Europe Early civilisations around the Mediterranean, including the Ancient Greeks and Romans, adopted the use of fired bricks. By the early first century AD, standardised fired bricks were being heavily produced in Rome. The Roman legions operated mobile kilns, and built large brick structures throughout the Roman Empire, stamping the bricks with the seal of the legion. The Romans used brick for walls, arches, forts, aqueducts, etc.
Notable examples of Roman brick structures are the Herculaneum Gate of Pompeii and the Baths of Caracalla. During the Early Middle Ages the use of bricks in construction became popular in Northern Europe, after being introduced there from northwestern Italy. An independent style of brick architecture, known as brick Gothic (similar to Gothic architecture), flourished in places that lacked indigenous sources of stone. Examples of this architectural style can be found in modern-day Denmark, Germany, Poland, and Kaliningrad (former East Prussia).

History: This style evolved into the Brick Renaissance as the stylistic changes associated with the Italian Renaissance spread to northern Europe, leading to the adoption of Renaissance elements into brick building. Identifiable attributes included a low-pitched hipped or flat roof, a symmetrical facade, round-arched entrances and windows, columns and pilasters, and more. A clear distinction between the two styles only developed at the transition to Baroque architecture. In Lübeck, for example, Brick Renaissance is clearly recognisable in buildings equipped with terracotta reliefs by the artist Statius von Düren, who was also active at Schwerin (Schwerin Castle) and Wismar (Fürstenhof). Long-distance bulk transport of bricks and other construction equipment remained prohibitively expensive until the development of modern transportation infrastructure, with the construction of canals, roads, and railways.

History: Industrial era Production of bricks increased massively with the onset of the Industrial Revolution and the rise in factory building in England. For reasons of speed and economy, bricks were increasingly preferred as building material to stone, even in areas where stone was readily available. It was at this time in London that bright red brick was chosen for construction to make the buildings more visible in the heavy fog and to help prevent traffic accidents.

The transition from the traditional method of production known as hand-moulding to a mechanised form of mass production slowly took place during the first half of the nineteenth century. Possibly the first successful brick-making machine was patented by Henry Clayton, employed at the Atlas Works in Middlesex, England, in 1855; it was capable of producing up to 25,000 bricks daily with minimal supervision. His mechanical apparatus soon achieved widespread attention after it was adopted for use by the South Eastern Railway Company for brick-making at their factory near Folkestone. The Bradley & Craven Ltd 'Stiff-Plastic Brickmaking Machine' was patented in 1853, apparently predating Clayton. Bradley & Craven went on to be a dominant manufacturer of brickmaking machinery. Predating both Clayton and Bradley & Craven, however, was the brick-making machine patented by Richard A. Ver Valen of Haverstraw, New York, in 1852.

At the end of the 19th century, the Hudson River region of New York State became the world's largest brick-manufacturing region, with 130 brickyards lining the shores of the Hudson River from Mechanicville to Haverstraw and employing 8,000 people. At its peak, about 1 billion bricks were produced a year, with many being sent to New York City for use in its construction industry. The demand for high office building construction at the turn of the 20th century led to a much greater use of cast and wrought iron, and later, steel and concrete.
The use of brick for skyscraper construction severely limited the size of the building: the Monadnock Building, built in 1896 in Chicago, required exceptionally thick walls to maintain the structural integrity of its 17 storeys. Following pioneering work in the 1950s at the Swiss Federal Institute of Technology and the Building Research Establishment in Watford, UK, the use of improved masonry for the construction of tall structures up to 18 storeys high was made viable. However, the use of brick has largely remained restricted to small to medium-sized buildings, as steel and concrete remain superior materials for high-rise construction. Bricks are often made of shale because it easily splits into thin layers.

Methods of manufacture: The four basic types of brick are unfired bricks, fired bricks, chemically set bricks, and compressed earth blocks. Each type is manufactured differently for various purposes.

Methods of manufacture: Mud-brick Unfired bricks, also known as mud-bricks, are made from a mixture of silt, clay, sand and other earth materials like gravel and stone, combined with tempers and binding agents such as chopped straw, grasses, tree bark, or dung. Since these bricks are made up of natural materials and only require heat from the Sun to bake, mud-bricks have a relatively low embodied energy and carbon footprint.

Methods of manufacture: The ingredients are first harvested and added together, with clay content ranging from 30% to 70%. The mixture is broken up with hoes or adzes, and stirred with water to form a homogeneous blend. Next, the tempers and binding agents are added in a ratio of roughly one part straw to five parts earth, to reduce weight and to reinforce the brick by limiting shrinkage. However, additional clay can be added to reduce the need for straw, which lessens the likelihood of insects degrading the organic material of the bricks and subsequently weakening the structure. These ingredients are thoroughly mixed together by hand or by treading and are then left to ferment for about a day.

The mix is then kneaded with water and molded into rectangular prisms of a desired size. Bricks are lined up and left to sun-dry for three days on each side. After these six days, the bricks continue drying until required for use. Typically, longer drying times are preferred, but the average is eight to nine days from the initial stages to use in structures. Unfired bricks could be made in the spring months and left to dry over the summer for use in the fall. Mud-bricks are commonly employed in arid environments, which allow for adequate air drying.

Methods of manufacture: Fired brick Fired bricks are burned in a kiln, which makes them durable. Modern fired clay bricks are formed in one of three processes – soft mud, dry press, or extruded. Depending on the country, either the extruded or the soft mud method is the most common, since they are the most economical.

Methods of manufacture: Clay and shale are the raw ingredients in the recipe for a fired brick. They are the product of thousands of years of decomposition and erosion of rocks, such as pegmatite and granite, leading to a material that is highly chemically stable and inert.
Within the clays and shales are the materials of aluminosilicate (pure clay), free silica (quartz), and decomposed rock. One proposed optimal mix is:
Silica (sand) – 50% to 60% by weight
Alumina (clay) – 20% to 30% by weight
Lime – 2% to 5% by weight
Iron oxide – ≤ 7% by weight
Magnesia – less than 1% by weight

Methods of manufacture: Shaping methods Three main methods are used for shaping the raw materials into bricks to be fired:
Molded bricks – These bricks start with raw clay, preferably in a mix with 25–30% sand to reduce shrinkage. The clay is first ground and mixed with water to the desired consistency. The clay is then pressed into steel moulds with a hydraulic press. The shaped clay is then fired ("burned") at 900–1,000 °C (1,650–1,830 °F) to achieve strength.
Dry-pressed bricks – The dry-press method is similar to the soft-mud moulded method, but starts with a much thicker clay mix, so it forms more accurate, sharper-edged bricks. The greater force in pressing and the longer burn make this method more expensive.
Extruded bricks – For extruded bricks the clay is mixed with 10–15% water (stiff extrusion) or 20–25% water (soft extrusion) in a pugmill. This mixture is forced through a die to create a long cable of material of the desired width and depth, which is then cut into bricks of the desired length by a wall of wires. Most structural bricks are made by this method, as it produces hard, dense bricks, and suitable dies can produce perforations as well. The introduction of such holes reduces the volume of clay needed, and hence the cost. Hollow bricks are lighter and easier to handle, and have different thermal properties from solid bricks. The cut bricks are hardened by drying for 20 to 40 hours at 50 to 150 °C (120 to 300 °F) before being fired; the heat for drying is often waste heat from the kiln.

Methods of manufacture: Kilns In many modern brickworks, bricks are usually fired in a continuously fired tunnel kiln, in which the bricks are fired as they move slowly through the kiln on conveyors, rails, or kiln cars, which achieves a more consistent brick product. The bricks often have lime, ash, and organic matter added, which accelerates the burning process. The other major kiln type is the Bull's Trench Kiln (BTK), based on a design developed by British engineer W. Bull in the late 19th century.

Methods of manufacture: An oval or circular trench is dug, 6–9 metres (20–30 ft) wide, 2–2.5 metres (6 ft 7 in – 8 ft 2 in) deep, and 100–150 metres (330–490 ft) in circumference. A tall exhaust chimney is constructed in the centre. Half or more of the trench is filled with "green" (unfired) bricks, which are stacked in an open lattice pattern to allow airflow. The lattice is capped with a roofing layer of finished brick.

Methods of manufacture: In operation, new green bricks, along with roofing bricks, are stacked at one end of the brick pile. Historically, a stack of unfired bricks covered for protection from the weather was called a "hack". Cooled finished bricks are removed from the other end for transport to their destinations. In the middle, the brick workers create a firing zone by dropping fuel (coal, wood, oil, debris, etc.) through access holes in the roof above the trench. The constant source of fuel may be grown on woodlots. The advantage of the BTK design is a much greater energy efficiency compared with clamp or scove kilns.
Sheet metal or boards are used to route the airflow through the brick lattice so that fresh air flows first through the recently burned bricks, heating the air, then through the active burning zone. The air continues through the green brick zone (pre-heating and drying the bricks), and finally out the chimney, where the rising gases create suction that pulls air through the system. The reuse of heated air yields savings in fuel cost.

Methods of manufacture: As with the rail process, the BTK process is continuous. A half-dozen labourers working around the clock can fire approximately 15,000–25,000 bricks a day. Unlike the rail process, in the BTK process the bricks do not move. Instead, the locations at which the bricks are loaded, fired, and unloaded gradually rotate through the trench.

Methods of manufacture: Influences on colour The colour of fired clay bricks is influenced by the chemical and mineral content of the raw materials, the firing temperature, and the atmosphere in the kiln. For example, pink bricks are the result of a high iron content, while white or yellow bricks have a higher lime content. Most bricks burn to various red hues; as the temperature is increased the colour moves through dark red, purple, and then to brown or grey at around 1,300 °C (2,370 °F). The names of bricks may reflect their origin and colour, such as London stock brick and Cambridgeshire White. Brick tinting may be performed to change the colour of bricks so that areas of brickwork blend in with the surrounding masonry.

Methods of manufacture: An impervious and ornamental surface may be laid on brick either by salt glazing, in which salt is added during the burning process, or by the use of a slip, which is a glaze material into which the bricks are dipped. Subsequent reheating in the kiln fuses the slip into a glazed surface integral with the brick base.

Methods of manufacture: Chemically set bricks Chemically set bricks are not fired, but may have the curing process accelerated by the application of heat and pressure in an autoclave.

Methods of manufacture: Calcium-silicate bricks Calcium-silicate bricks are also called sandlime or flintlime bricks, depending on their ingredients. Rather than being made with clay, they are made with lime binding the silicate material. The raw materials for calcium-silicate bricks include lime mixed in a proportion of about 1 to 10 with sand, quartz, crushed flint, or crushed siliceous rock, together with mineral colourants. The materials are mixed and left until the lime is completely hydrated; the mixture is then pressed into moulds and cured in an autoclave for three to fourteen hours to speed the chemical hardening. The finished bricks are very accurate and uniform, although the sharp arrises need careful handling to avoid damage to brick and bricklayer. The bricks can be made in a variety of colours; white, black, buff, and grey-blues are common, and pastel shades can be achieved. This type of brick is common in Sweden as well as Russia and other post-Soviet countries, especially in houses built or renovated in the 1970s. A version known as fly ash bricks, manufactured using fly ash, lime, and gypsum (known as the FaL-G process), is common in South Asia. Calcium-silicate bricks are also manufactured in Canada and the United States, and meet the criteria set forth in ASTM C73 – 10 Standard Specification for Calcium Silicate Brick (Sand-Lime Brick).

Methods of manufacture: Concrete bricks Bricks formed from concrete are usually termed blocks or concrete masonry units, and are typically pale grey.
They are made from a dry, small-aggregate concrete which is formed in steel moulds by vibration and compaction in either an "egglayer" or a static machine. The finished blocks are cured, rather than fired, using low-pressure steam. Concrete bricks and blocks are manufactured in a wide range of shapes, sizes and face treatments – a number of which simulate the appearance of clay bricks.

Methods of manufacture: Concrete bricks are available in many colours and as an engineering brick made with sulfate-resisting Portland cement or equivalent. When made with an adequate amount of cement they are suitable for harsh environments such as wet conditions and retaining walls. They are made to standards BS 6073, EN 771-3 or ASTM C55. Concrete bricks contract or shrink, so they need movement joints every 5 to 6 metres, but they are similar to other bricks of similar density in thermal resistance, sound resistance, and fire resistance.

Methods of manufacture: Compressed earth blocks Compressed earth blocks are made mostly from slightly moistened local soils compressed with a mechanical hydraulic press or manual lever press. A small amount of a cement binder may be added, resulting in a stabilised compressed earth block.

Types: There are thousands of types of bricks that are named for their use, size, forming method, origin, quality, texture, and/or materials.

Types: Categorized by manufacture method:
Extruded – made by being forced through an opening in a steel die, with a very consistent size and shape
Wire-cut – cut to size after extrusion with a tensioned wire, which may leave drag marks
Moulded – shaped in moulds rather than being extruded
Machine-moulded – clay is forced into moulds using pressure
Handmade – clay is forced into moulds by a person
Dry-pressed – similar to the soft mud method, but starts with a much thicker clay mix and is compressed with great force

Types: Categorized by use:
Common or building – a brick not intended to be visible, used for internal structure
Face – a brick used on exterior surfaces to present a clean appearance
Hollow – not solid; the holes are less than 25% of the brick volume
Perforated – holes greater than 25% of the brick volume
Keyed – indentations in at least one face and end, to be used with rendering and plastering
Paving – brick intended to be in ground contact as a walkway or roadway
Thin – brick with normal height and length but thin width, to be used as a veneer

Types: Specialized-use bricks:
Chemically resistant – bricks made with resistance to chemical reactions
Acid brick – acid-resistant bricks
Engineering – a type of hard, dense brick used where strength, low water porosity, or acid (flue gas) resistance are needed; further classified as type A and type B based on their compressive strength
Accrington – a type of engineering brick from England
Fire or refractory – highly heat-resistant bricks
Clinker – a vitrified brick
Ceramic glazed – fire bricks with a decorative glazing

Types: Bricks named for place of origin:
Chicago common brick – a soft brick made near Chicago, Illinois, with a range of colors, like buff yellow, salmon pink, or deep red
Cream City brick – a light yellow brick made in Milwaukee, Wisconsin
Dutch brick – a hard light-coloured brick originally from the Netherlands
Fareham red brick – a type of construction brick
London stock brick – a type of handmade brick which was used for the majority of building work in London and South East England until the growth in the use of machine-made bricks
Nanak Shahi bricks – a type of decorative brick in India
Roman brick – a long, flat brick typically used by the Romans
Staffordshire blue brick – a type of construction brick from England

Optimal dimensions, characteristics, and strength: For efficient handling and laying, bricks must be small enough and light enough to be picked up by the bricklayer using one hand (leaving the other hand free for the trowel). Bricks are usually laid flat, and as a result, the effective limit on the width of a brick is set by the distance which can conveniently be spanned between the thumb and fingers of one hand, normally about 100 mm (4 in). In most cases, the length of a brick is twice its width plus the width of a mortar joint, about 200 mm (8 in) or slightly more. This allows bricks to be laid bonded in a structure, which increases stability and strength. In English bond, a wall is built using alternating courses of stretchers, bricks laid longways, and headers, bricks laid crossways; the headers tie the wall together over its width, and in true English bond the perpendicular lines of the stretcher courses are in line with each other. In the variation called English cross bond, the successive layers of stretchers are displaced horizontally from each other by half a brick length.

Optimal dimensions, characteristics, and strength: A bigger brick makes for a thicker (and thus more insulating) wall. Historically, this meant that bigger bricks were necessary in colder climates (the Russian brick, for instance, is slightly larger), while a smaller brick was adequate, and more economical, in warmer regions. A notable illustration of this correlation is the Green Gate in Gdansk; built in 1571 of imported Dutch brick, too small for the colder climate of Gdansk, it was notorious for being a chilly and drafty residence. Nowadays this is no longer an issue, as modern walls typically incorporate specialised insulation materials.

Optimal dimensions, characteristics, and strength: The correct brick for a job can be selected from a choice of colour, surface texture, density, weight, absorption, pore structure, thermal characteristics, thermal and moisture movement, and fire resistance.

Optimal dimensions, characteristics, and strength: In England, the length and width of the common brick remained fairly constant from 1625, when the size was regulated by statute at 9 × 4 1⁄2 × 3 inches (but see brick tax), but the depth has varied from about two inches (51 mm) or smaller in earlier times to about 2 1⁄2 inches (64 mm) more recently.
In the United Kingdom, the usual size of a modern brick (from 1965) is 215 mm × 102.5 mm × 65 mm (8 1⁄2 in × 4 in × 2 1⁄2 in), which, with a nominal 10 millimetres (3⁄8 in) mortar joint, forms a unit size of 225 by 112.5 by 75 millimetres (9 in × 4 1⁄2 in × 3 in), for a ratio of 6:3:2.

Optimal dimensions, characteristics, and strength: In the United States, modern standard bricks are specified for various uses; the most commonly used is the modular brick, with actual dimensions of 7 5⁄8 × 3 5⁄8 × 2 1⁄4 inches (194 × 92 × 57 mm). With the standard 3⁄8 inch mortar joint, this gives nominal dimensions of 8 × 4 × 2 2⁄3 inches, which eases the calculation of the number of bricks in a given wall. The 2:1 length-to-width ratio of modular bricks means that when they turn corners, a 1/2 running bond is formed without needing to cut a brick down or fill the gap with a cut brick; and the height of modular bricks means that a soldier course matches the height of three modular running courses, or one standard CMU course.

Optimal dimensions, characteristics, and strength: Some brickmakers create innovative sizes and shapes for bricks used for plastering (and therefore not visible on the inside of the building), where their inherent mechanical properties are more important than their visual ones. These bricks are usually slightly larger, but not as large as blocks, and offer the following advantages:
A slightly larger brick requires less mortar and handling (fewer bricks), which reduces cost
Their ribbed exterior aids plastering
More complex interior cavities allow improved insulation, while maintaining strength

Blocks have a much greater range of sizes. Standard co-ordinating sizes in length and height (in mm) include 400×200, 450×150, 450×200, 450×225, 450×300, 600×150, 600×200, and 600×225; depths (work size, mm) include 60, 75, 90, 100, 115, 140, 150, 190, 200, 225, and 250. They are usable across this range as they are lighter than clay bricks. The density of solid clay bricks is around 2000 kg/m3; this is reduced by frogging, hollow bricks, and so on, but aerated autoclaved concrete, even as a solid brick, can have densities in the range of 450–850 kg/m3.

Optimal dimensions, characteristics, and strength: Bricks may also be classified as solid (less than 25% perforations by volume, although the brick may be "frogged," having indentations on one of the longer faces), perforated (containing a pattern of small holes through the brick, removing no more than 25% of the volume), cellular (containing a pattern of holes removing more than 20% of the volume, but closed on one face), or hollow (containing a pattern of large holes removing more than 25% of the brick's volume). Blocks may be solid, cellular or hollow.

Optimal dimensions, characteristics, and strength: The term "frog" can refer to the indentation or the implement used to make it. Modern brickmakers usually use plastic frogs, but in the past they were made of wood.

Optimal dimensions, characteristics, and strength: The compressive strength of bricks produced in the United States ranges from about 7 to 103 MPa (1,000 to 15,000 lbf/in2), varying according to the use to which the bricks are to be put. In England clay bricks can have strengths of up to 100 MPa, although a common house brick is likely to show a range of 20–40 MPa.
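As a worked illustration of why the nominal modular dimensions above ease the arithmetic, here is a minimal Python sketch; the function name and the 10 ft × 8 ft example wall are illustrative assumptions, not figures from any standard:

```python
from fractions import Fraction

# Nominal modular face from the text: 8 in long x 2 2/3 in high, so each
# brick plus its mortar joint covers 8 * 8/3 = 64/3 in^2 of wall face,
# and three courses stack to exactly 8 in.
NOMINAL_LENGTH_IN = Fraction(8)
NOMINAL_HEIGHT_IN = Fraction(8, 3)

def modular_bricks_per_wall(width_ft, height_ft):
    """Bricks in one wythe of wall face, ignoring openings and waste."""
    face_area_in2 = Fraction(width_ft * 12) * Fraction(height_ft * 12)
    brick_face_in2 = NOMINAL_LENGTH_IN * NOMINAL_HEIGHT_IN
    return face_area_in2 / brick_face_in2

# A 10 ft x 8 ft wall face: 11520 in^2 / (64/3 in^2) = 540 bricks exactly,
# with no fractional remainder -- the convenience the nominal size buys.
print(modular_bricks_per_wall(10, 8))  # 540
```

Because the nominal face dimensions divide evenly into whole feet (three bricks per 8 inches of height, three courses per 8 inches, twelve bricks per 2 square feet of face), estimates like this come out as whole numbers for common wall sizes, which is exactly the convenience the text describes.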
Uses: Bricks are a versatile building material suited to a wide variety of applications, including: structural walls, both exterior and interior; bearing and non-bearing soundproof partitions; the fireproofing of structural-steel members in the form of firewalls, party walls, enclosures and fire towers; foundations for stucco; chimneys and fireplaces; porches and terraces; outdoor steps, brick walks and paved floors; and swimming pools. In the United States, bricks have been used for both buildings and pavement. Examples of brick use in buildings can be seen in colonial-era buildings and other notable structures around the country. Bricks were used to pave roads and sidewalks, especially during the late 19th and early 20th centuries. The introduction of asphalt and concrete reduced the use of brick for paving, but bricks are still sometimes installed as a method of traffic calming or as a decorative surface in pedestrian precincts. For example, in the early 1900s, most of the streets in the city of Grand Rapids, Michigan, were paved with bricks. Today, there are only about 20 blocks of brick-paved streets remaining (totalling less than 0.5 percent of all the streets in the city limits). Much like in Grand Rapids, municipalities across the United States began replacing brick streets with inexpensive asphalt concrete by the mid-20th century. In Northwest Europe, bricks have been used in construction for centuries. Until recently, nearly all houses were built almost entirely from bricks. Although many houses are now built using a mixture of concrete blocks and other materials, many are still skinned with a layer of bricks on the outside for aesthetic appeal. Uses: Bricks in the metallurgy and glass industries are often used for lining furnaces, in particular refractory bricks such as silica, magnesia, chamotte and neutral (chromomagnesite) refractory bricks. This type of brick must have good thermal shock resistance, refractoriness under load, a high melting point, and satisfactory porosity. There is a large refractory brick industry, especially in the United Kingdom, Japan, the United States, Belgium and the Netherlands. Uses: Engineering bricks are used where strength, low water porosity or acid (flue gas) resistance are needed. In the UK a red brick university is one founded in the late 19th or early 20th century. The term is used to refer to such institutions collectively to distinguish them from the older Oxbridge institutions, and refers to the use of bricks, as opposed to stone, in their buildings. Colombian architect Rogelio Salmona was noted for his extensive use of red bricks in his buildings and for using natural shapes like spirals, radial geometry and curves in his designs. Limitations: Starting in the 20th century, the use of brickwork declined in some areas due to concerns about earthquakes. Earthquakes such as the San Francisco earthquake of 1906 and the 1933 Long Beach earthquake revealed the weaknesses of unreinforced brick masonry in earthquake-prone areas. During seismic events, the mortar cracks and crumbles, so that the bricks are no longer held together. Brick masonry with steel reinforcement, which helps hold the masonry together during earthquakes, has been used to replace unreinforced bricks in many buildings. Retrofitting older unreinforced masonry structures has been mandated in many jurisdictions.
However, just as steel corrosion limits reinforced concrete, rusting of the rebar compromises the structural integrity of reinforced brickwork and ultimately limits its expected lifetime, so to some extent there is a trade-off between earthquake safety and longevity.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GPAT4** GPAT4: Glycerol-3-phosphate acyltransferase 4 is a glycerol-3-phosphate acyltransferase that in humans is encoded by the GPAT4 gene. Function: GPAT4 is involved in the biosynthesis of triglycerides. The majority of triglycerides are synthesised from glycerol 3-phosphate (G3P) via the addition of three fatty acyl-CoA substrates, which are made from fatty acids. The first of these additions is catalysed by glycerol-3-phosphate acyltransferases (GPATs; EC 2.3.1.15), including GPAT4, yielding lysophosphatidic acid. GPAT4 has been shown to be important for lactation, with a quantitative trait locus (QTL) for several milk production and composition traits observed at this locus in cattle.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ledoyom** Ledoyom: Ledoyom (Russian: ледоём, IPA: [lʲɪdɐˈjom]) is a term proposed by the Russian geologist Vasily Nekhoroshev for intermontane depressions which might get completely filled by glaciers from the surrounding mountains at the maxima of glaciation. Etymology: The Russian term “ledoyom (ледоём)” means “body of ice”, by analogy with a “body of water” (“vodoyom”). Description: In the 1930s the Russian geologist V.P. Nekhoroshev marked out intermontane depressions in the Altai which might get completely filled by glaciers from the surrounding mountains at the maxima of glaciation. He called such depressions "ledoyoms". Ledoyoms produced large valley glaciers within outlet runoff valleys from the depressions at the culmination stages of their development. Diagnostic marks of the so-called classical ledoyoms are moraines, eskers and kames on the bottoms of the corresponding depressions. In the 1980s and 1990s the Russian geologist Alexei Rudoy developed the idea that glacier-dammed lakes once filled most of the intermontane basins of the mountain belt of Siberia, including the depressions of Lakes Teletskoye and Baikal. It also became clear that many depressions, even very large ones, had already been occupied by dammed water basins by the time the glaciers of the mountain frame advanced into them. Thus, mountain glaciers turned into a kind of “shelf” glacier: floating on the surface, they joined together and completely armored the glacier-dammed lake beneath. This is how the so-called “captured lakes” came to exist. Description: At the maximum lowering of the snow line (in the Altai and the Sayan its depression reached about 1,200 m in the Late Pleistocene), some of the lakes (Chuya, Kuray, Uymon and others) began functioning in an under-ice regime, because they never became free of ice for thousands of years. Such lakes turned into ice bodies of the “aufeis” type. They consisted of a thick lens of lake water covered by lake ice, aufeis and glacier ice, as well as a snow-firn sequence. “Aufeis” ledoyoms became independent centers of glaciation with subradial ice outlets. Possible analogies of such an evolution mechanism and of pre-glacial lakes are the thick water lenses under the 3–4-kilometre-thick glacier cover at Dome B, Dome Charlie, and Vostok Station in East Antarctica. Description: Thus, depending on the topography of the intermontane depression, the value of the snow-line depression, and the energy of glaciation, the interrelation of glaciers and ice-dammed lakes in the mountains of the south of Western Siberia could develop according to the following scenarios: a ledoyom only (no ice-dammed lake), in which case some glacial and water-glacial relief forms and sediments would remain in the basin; a water body and a ledoyom together (the stage of “captured lakes”), in which certain forms of “dead ice” may remain in the basin, as well as intraglacial water-ice forms (eskers and kames) which were projected onto the bottom lake deposits when the “shelf” ice descended; an “aufeis” ledoyom; or an ice-dammed lake only. Under different extents of the glaciers at different time periods, one and the same basin could have undergone different sequences of these lake-glacier events.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Language documentation tools and methods** Language documentation tools and methods: The field of language documentation in the modern context involves a complex and ever-evolving set of tools and methods, and the study and development of their use, especially the identification and promotion of best practices, can be considered a sub-field of language documentation proper. Among these are ethical and recording principles, workflows and methods, hardware tools, and software tools. Principles and workflows: Researchers in language documentation often conduct linguistic fieldwork to gather the data on which their work is based, recording audiovisual files that document language use in traditional contexts. Because the environments in which linguistic fieldwork often takes place may be logistically challenging, not every type of recording tool is necessary or ideal, and compromises must often be struck between quality, cost and usability. It is also important to envision one's complete workflow and intended outcomes; for example, if video files are made, some amount of processing may be required to expose the audio component for use by different software packages. Principles and workflows: Ethics Ethical practices in language documentation have been the focus of much recent discussion and debate. The Linguistic Society of America has prepared an Ethics Statement, and maintains an Ethics Discussion Blog which is primarily focused on ethics in the language documentation context. The morality of ethics protocols has itself been brought into question by George van Driem. Most postgraduate programs that involve some form of language documentation and description require researchers to submit their proposed protocols to an internal Institutional Review Board, which ensures that research is being conducted ethically. Minimally, participants should be informed of the process and the intended use of the recordings, and give recorded verbal or written permission for the audiovisual materials to be used for linguistic investigation by the researcher(s). Many participants will want to be named as consultants, but others will not; this will determine whether the data needs to be anonymized or restricted from public access. Principles and workflows: Data Formats Adhering to standards for formats is critical for interoperability between software tools. Many individual archives or data repositories have their own standards and requirements for data deposited on their servers; knowledge of these requirements ought to inform the data collection strategy and tools used, and should be part of a data management plan developed before the start of research. Some example guidelines from well-used repositories are given below: Endangered Languages Archive (ELAR) guidelines; Max Planck Institute Archive accepted formats; Yale University Library audiovisual guidelines. Most current archive standards for video use MPEG-4 (H264) as an encoding or storage format, which includes an AAC audio stream (generally of up to 320 kbit/s). Audio archive quality is at least WAV 44.1 kHz, 16-bit. Principles and workflows: Principles for recording Since documentation of languages is often difficult, with many of the languages that linguists work with being endangered (they may not be spoken in the near future), it is recommended to record at the highest quality possible given the limitations of the recorder.
For video, this means recording at HD resolution (1080p or 720p) or higher when possible, while for audio it means recording minimally in uncompressed PCM at 44,100 samples per second, 16-bit resolution. Arguably, however, good recording techniques (isolation, microphone selection and usage, using a tripod to minimize blur) are more important than resolution. A clear recording of a speaker telling a folktale (with a high signal-to-noise ratio) in MP3 format, perhaps made on a phone, is better than an extremely noisy recording in WAV format in which all that can be heard is passing cars. To ensure that good recordings can be obtained, linguists should practice with their recording devices as much as possible and compare the results to observe which techniques yield the best results. Principles and workflows: Workflows For many linguists the end result of making recordings is language analysis, often investigation of a language's phonological or syntactic properties using various software tools. This requires transcription of the audio, generally in collaboration with native speakers of the language in question. For general transcription, media files can be played back on a computer (or other device capable of playback) and paused for transcription in a text editor. Other (cross-platform) tools to assist this process include Audacity and Transcriber, while a program like ELAN (described further below) can also perform this function. Principles and workflows: Programs like Toolbox or FLEx are often preferred by linguists who want to be able to interlinearize their texts, as these programs build a dictionary of forms and parsing rules to help speed up analysis. Unfortunately, media files are generally not linked by these programs (as opposed to ELAN, in which linked files are preferred), making it difficult to view or listen back to recordings to check transcriptions. There is currently a workaround for Toolbox that allows timecodes to reference an audio file and enable playback (of a complete text or a referenced sentence) from within Toolbox; in this workflow, time-alignment of text is performed in Transcriber, and then the relevant timecodes and text are converted into a format that Toolbox can read. Hardware: Video+audio recorders Recorders that record video typically also record audio. However, the audio does not always meet the criteria of minimal needs and recommended best practices for language documentation (uncompressed WAV format, 44.1 kHz, 16-bit), and is often not useful for linguistic purposes such as phonetic analysis. Many video devices instead record to a compressed audio format such as AAC or MP3, which is combined with the video stream in a wrapper of various kinds. Exceptions to this general rule are the following video+audio recorders: the Zoom series, particularly the Q8, Q4n, and Q2n, which record to multiple video and audio resolutions/formats, most notably WAV (44.1/48/96 kHz, 16/24-bit). Hardware: When using a video recorder that does not record audio in WAV format (such as most DSLR cameras), it is recommended to record audio separately on another recorder, following some of the guidelines below. As with the audio recorders described below, many video recorders also accept microphone input of various kinds (generally through a 1/8-inch or TRS connector); this can ensure a high-quality backup audio recording that is in sync with the recorded video, which can be helpful in some cases (e.g. for transcription).
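As a concrete illustration of the two points above (exposing the audio component of a video file, and checking it against the minimal archive standard of WAV, 44.1 kHz, 16-bit), here is a minimal Python sketch. It assumes the ffmpeg command-line tool is installed, and the file names are hypothetical.

```python
# Sketch: extract archive-quality audio from a field video with ffmpeg,
# then verify the result against the WAV 44.1 kHz / 16-bit minimum.
import subprocess
import wave

def extract_wav(video_path: str, wav_path: str) -> None:
    subprocess.run(
        ["ffmpeg",
         "-i", video_path,         # input, e.g. MPEG-4 (H264) with AAC audio
         "-vn",                    # drop the video stream
         "-acodec", "pcm_s16le",   # uncompressed 16-bit PCM
         "-ar", "44100",           # 44.1 kHz sample rate
         wav_path],
        check=True,
    )

def meets_archive_minimum(wav_path: str) -> bool:
    with wave.open(wav_path, "rb") as w:
        # getsampwidth() is in bytes, so 2 bytes = 16-bit resolution
        return w.getframerate() >= 44100 and w.getsampwidth() >= 2

extract_wav("session_042.mp4", "session_042.wav")   # hypothetical file names
print(meets_archive_minimum("session_042.wav"))
```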
Hardware: Audio recorders and microphones Audio-only recorders can be used in scenarios where video is impractical or otherwise undesirable. In most cases it is advantageous to combine the use of an audio-only recorder with one or more external microphones; however, many modern audio recorders include built-in microphones which are usable if cost or setup speed are important concerns. Digital (solid-state) recorders are preferred for most language documentation scenarios. Modern digital recorders achieve a very high level of quality at a relatively low price. Some of the most popular field recorders are found in the Zoom range, including the H1, H2, H4, H5 and H6. The H1 is particularly suitable for situations in which cost and user-friendliness are major desiderata. Other popular recorders for situations where size is a factor are the Olympus LS series and the Sony Digital Voice recorders (though in the latter case, ensure that the device can record to WAV/Linear PCM format). Hardware: Several types of microphone can be used effectively in language documentation scenarios, depending on the situation (especially factors such as the number, position and mobility of speakers) and on budget. In general, condenser microphones should be selected rather than dynamic microphones. It is an advantage in most fieldwork situations if a condenser microphone is self-powered (via a battery); however, when power is not a major factor, phantom-powered models can also be used. A stereo microphone setup is needed whenever more than one speaker is involved in a recording; this can be achieved via an array of two mono microphones, or by a dedicated stereo microphone. Hardware: Directional microphones should be used in most cases, in order to isolate a speaker's voice from other potential noise sources. However, omnidirectional microphones may be preferred in situations involving larger numbers of speakers arrayed in a relatively large space. Among directional microphones, cardioid microphones are suitable for most applications, although in some cases a hypercardioid ("shotgun") microphone may be preferred. Hardware: Good-quality headset microphones are comparatively expensive, but can produce recordings of extremely high quality in controlled situations. Lavalier or "lapel" microphones may be used in some situations; however, depending on the microphone, they can produce recordings which are inferior to a headset microphone for phonetic analysis, and are subject to some of the same concerns as headset microphones in terms of restricting a recording to a single speaker: while other speakers may be audible on the recording, they will be backgrounded in relation to the speaker wearing the lavalier microphone. Some good-quality microphones used for film-making and interviews include the Røde VideoMic shotgun and the Røde lavalier series, Shure headworn mics and Shure lavaliers. Depending on the recorder and microphone, additional cables (XLR, stereo/mono converter or a TRRS-to-TRS adapter) will be necessary. Hardware: Other recording tools include equipment for electrical power generation, storage and management; computer systems; and accessories. Software: There is as yet no single software suite designed to, or able to, handle all aspects of a typical language documentation workflow. Instead, there is a large and increasing number of packages designed to handle various aspects of the workflow, many of which overlap considerably.
Some of these packages use standard formats and are interoperable, whereas others are much less so. Software: SayMore SayMore is a language documentation package developed by SIL International in Dallas which primarily focuses on the initial stages of language documentation and aims for a relatively uncomplicated user experience. Software: The primary functions of SayMore are: (a) audio recording; (b) file import from a recording device (video and/or audio); (c) file organization; (d) metadata entry at session and file levels; (e) association of AV files with evidence of informed consent and other supplementary objects (such as photographs); (f) AV file segmentation; (g) transcription/translation; (h) BOLD-style Careful Speech annotation and Oral Translation. Software: SayMore files can be further exported for annotation in FLEx, and metadata can be exported in .csv and IMDI formats for archiving. ELAN ELAN is developed by The Language Archive at the Max Planck Institute for Psycholinguistics in Nijmegen. ELAN is a full-featured transcription tool, particularly useful for researchers with complex annotation needs/goals. Software: FLEx FieldWorks Language Explorer (FLEx) is developed by SIL International (formerly the Summer Institute of Linguistics, Inc.) in Dallas. FLEx allows the user to build a "lexicon" of the language, i.e. a word-list with definitions and grammatical information, and also to store texts from the language. Within the texts, each word or part of a word (i.e. a "morpheme") is linked to an entry in the lexicon. For new projects and for students learning for the first time, FLEx is now the best tool for interlinearising and dictionary-making. Software: Toolbox Field Linguist's Toolbox (usually called Toolbox) is a precursor of FLEx and has been one of the most widely used language documentation packages for some decades. Previously known as Shoebox, Toolbox's primary functions are the construction of a lexical database and the interlinearization of texts through interaction with that lexical database. Both the lexical database and texts can be exported to a word processing environment, in the case of the lexical database using the Multi-Dictionary Formatter (MDF) conversion tool. It is also possible to use Toolbox as a transcription environment. By comparison with ELAN and FLEx, Toolbox has relatively limited functionality, and is felt by some to have an unintuitive design and interface. However, a large number of projects have been carried out in the Shoebox/Toolbox environment over its lifespan, and its user base continues to enjoy its advantages of familiarity, speed, and community support. Toolbox also has the advantage of working directly with human-readable text files that can be opened in any text editor and easily manipulated and archived (see the parsing sketch at the end of this entry). Toolbox files can also be easily converted for storage in XML (recommended for archives), for example with open-source Python libraries like Xigt, intended for computational uses of IGT data. Software: Tools for automating components of the workflow Language documentation may be partially automated thanks to a number of software tools, including: eSpeak; HTK; and Lingua Libre, a libre online tool allowing users to record a large number of words and phrases in a short period (up to 1,000 words per hour with a clean word list and an experienced user). It automates the classic procedure for recording audio and video pronunciation files (for spoken and signed languages).
Once the recording is done, the platform automatically uploads clean, well-cut, well-named, application-friendly files directly to Wikimedia Commons (it is possible to download datasets for a specific language). Literature: The peer-reviewed journal Language Documentation and Conservation has published a large number of articles focusing on tools and methods in language documentation. Film: The 2021 Indian documentary film Dreaming of Words traces the life and work of Njattyela Sreedharan, a fourth-standard drop-out, who compiled a multilingual dictionary connecting four major Dravidian languages: Malayalam, Kannada, Tamil and Telugu. Travelling across four states and doing extensive research, he spent twenty-five years making this multilingual dictionary.
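Toolbox's plain-text format, mentioned above, is simple enough to process directly. The following is a minimal Python sketch of reading a backslash-coded lexicon into dictionaries; the record marker \lx (lexeme) and the gloss marker \ge are conventional MDF markers, the file name is hypothetical, and real project files vary in structure.

```python
# Sketch: parsing a backslash-coded Toolbox/MDF file into Python dicts.
def parse_toolbox(path, record_marker="lx"):
    records, current = [], None
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line.startswith("\\"):
                continue                      # skip blank lines and headers
            marker, _, value = line[1:].partition(" ")
            if marker == record_marker:       # \lx starts a new record
                current = {}
                records.append(current)
            if current is not None:
                current.setdefault(marker, []).append(value)
    return records

for entry in parse_toolbox("lexicon.db"):     # hypothetical file name
    print(entry.get("lx"), entry.get("ge"))   # headword and English gloss
```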
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Single sign-on** Single sign-on: Single sign-on (SSO) is an authentication scheme that allows a user to log in with a single ID to any of several related, yet independent, software systems. True single sign-on allows the user to log in once and access services without re-entering authentication factors. Single sign-on: It should not be confused with same sign-on (Directory Server Authentication), often accomplished by using the Lightweight Directory Access Protocol (LDAP) and stored LDAP databases on (directory) servers. A simple version of single sign-on can be achieved over IP networks using cookies, but only if the sites share a common DNS parent domain (see the sketch below). For clarity, a distinction is made between Directory Server Authentication (same sign-on) and single sign-on: Directory Server Authentication refers to systems requiring authentication for each application but using the same credentials from a directory server, whereas single sign-on refers to systems where a single authentication provides access to multiple applications by passing the authentication token seamlessly to configured applications. Single sign-on: Conversely, single sign-off or single log-out (SLO) is the property whereby a single action of signing out terminates access to multiple software systems. As different applications and resources support different authentication mechanisms, single sign-on must internally store the credentials used for initial authentication and translate them to the credentials required for the different mechanisms. Single sign-on: Other shared authentication schemes, such as OpenID and OpenID Connect, offer other services that may require users to make choices during a sign-on to a resource, but can be configured for single sign-on if those other services (such as user consent) are disabled. An increasing number of federated social logons, like Facebook Connect, do require the user to enter consent choices upon first registration with a new resource, and so are not always single sign-on in the strictest sense.
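A minimal Python sketch of the cookie mechanism just described: an identity provider issues a token cookie scoped to a shared parent domain, so that sibling sites under that domain can see it. The domain name and token value are hypothetical.

```python
# Sketch: an SSO cookie is visible to app1.example.com and app2.example.com
# only because it is scoped to their common DNS parent domain.
from http import cookies

jar = cookies.SimpleCookie()
jar["sso_token"] = "opaque-token-value"          # hypothetical token
jar["sso_token"]["domain"] = ".example.com"      # the shared parent domain
jar["sso_token"]["path"] = "/"
jar["sso_token"]["secure"] = True                # send over HTTPS only

# Emitted by the identity provider as an HTTP response header, e.g.:
# Set-Cookie: sso_token=opaque-token-value; Path=/; Domain=.example.com; Secure
print(jar.output())
```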
Benefits: Consolidation of heterogeneous networks. By joining disparate networks, administrative efforts can be consolidated, ensuring that administrative best practices and corporate security policies are being consistently enforced. SSO shares centralized authentication servers that all other applications and systems use for authentication purposes and combines this with techniques to ensure that users do not have to actively enter their credentials more than once. Criticism: The term reduced sign-on (RSO) has been used by some to reflect the fact that single sign-on is impractical in addressing the need for different levels of secure access in the enterprise, and as such more than one authentication server may be necessary. As single sign-on provides access to many resources once the user is initially authenticated ("keys to the castle"), it increases the negative impact in case the credentials are available to other people and misused. Therefore, single sign-on requires an increased focus on the protection of the user credentials, and should ideally be combined with strong authentication methods like smart cards and one-time password tokens. Single sign-on also increases dependence on highly available authentication systems; a loss of their availability can result in denial of access to all systems unified under the SSO. SSO can be configured with session failover capabilities in order to maintain system operation. Nonetheless, the risk of system failure may make single sign-on undesirable for systems to which access must be guaranteed at all times, such as security or plant-floor systems. Criticism: Furthermore, the use of single sign-on techniques utilizing social networking services such as Facebook may render third-party websites unusable within libraries, schools, or workplaces that block social media sites for productivity reasons. It can also cause difficulties in countries with active censorship regimes, such as China and its "Golden Shield Project," where the third-party website may not be actively censored but is effectively blocked if a user's social login is blocked. Security: In March 2012, a research paper reported an extensive study on the security of social login mechanisms. The authors found eight serious logic flaws in high-profile ID providers and relying-party websites, such as OpenID (including Google ID and PayPal Access), Facebook, Janrain, Freelancer, FarmVille, and Sears.com. Because the researchers informed ID providers and relying-party websites prior to public announcement of the discovery of the flaws, the vulnerabilities were corrected, and there have been no security breaches reported. In May 2014, a vulnerability named Covert Redirect was disclosed. It was first reported as "Covert Redirect Vulnerability Related to OAuth 2.0 and OpenID" by its discoverer Wang Jing, a mathematics PhD student from Nanyang Technological University, Singapore. In fact, almost all single sign-on protocols are affected.
Covert Redirect takes advantage of third-party clients susceptible to an XSS or Open Redirect. In December 2020, flaws in federated authentication systems were discovered to have been utilized by attackers during the 2020 United States federal government data breach. Because of how single sign-on works (the browser sends a request to the website where the user is logged in to obtain an SSO token, then sends a request with that token to the website being logged into), the token cannot be protected with the HttpOnly cookie flag and thus can be stolen by an attacker if there is an XSS vulnerability on the website being logged into, enabling session hijacking. Another security issue is that if the session used for SSO is stolen (which, unlike the SSO token, can be protected with the HttpOnly cookie flag), the attacker can access all the websites that are using the SSO system. Privacy: As originally implemented in Kerberos and SAML, single sign-on did not give users any choices about releasing their personal information to each new resource that the user visited. This worked well enough within a single enterprise, like MIT, where Kerberos was invented, or major corporations where all of the resources were internal sites. However, as federated services like Active Directory Federation Services proliferated, the user's private information was sent out to affiliated sites not under the control of the enterprise that collected the data from the user. Since privacy regulations are now tightening with legislation like the GDPR, newer methods like OpenID Connect have started to become more attractive; for example, MIT, the originator of Kerberos, now supports OpenID Connect. Privacy: Email address Single sign-on can in theory work without revealing identifying information such as email addresses to the relying party (credential consumer), but many credential providers do not allow users to configure what information is passed on to the credential consumer. As of 2019, Google and Facebook sign-in do not require users to share email addresses with the credential consumer. "Sign in with Apple", introduced in iOS 13, allows a user to request a unique relay email address each time the user signs up for a new service, thus reducing the likelihood of account linking by the credential consumer. Common configurations: Kerberos-based Initial sign-on prompts the user for credentials and gets a Kerberos ticket-granting ticket (TGT). Common configurations: Additional software applications requiring authentication, such as email clients, wikis, and revision-control systems, use the ticket-granting ticket to acquire service tickets, proving the user's identity to the mail server, wiki server, etc. without prompting the user to re-enter credentials. Windows environment: Windows login fetches the TGT. Active Directory-aware applications fetch service tickets, so the user is not prompted to re-authenticate. Common configurations: Unix/Linux environment: Login via Kerberos PAM modules fetches the TGT. Kerberized client applications such as Evolution, Firefox, and SVN use service tickets, so the user is not prompted to re-authenticate. Smart-card-based Initial sign-on prompts the user for the smart card. Additional software applications also use the smart card, without prompting the user to re-enter credentials. Smart-card-based single sign-on can use either certificates or passwords stored on the smart card.
Common configurations: Integrated Windows Authentication Integrated Windows Authentication is a term associated with Microsoft products and refers to the SPNEGO, Kerberos, and NTLMSSP authentication protocols with respect to SSPI functionality introduced with Microsoft Windows 2000 and included with later Windows NT-based operating systems. The term is most commonly used to refer to the automatically authenticated connections between Microsoft Internet Information Services and Internet Explorer. Cross-platform Active Directory integration vendors have extended the Integrated Windows Authentication paradigm to Unix (including Mac) and Linux systems. Common configurations: Security Assertion Markup Language Security Assertion Markup Language (SAML) is an XML-based method for exchanging user security information between a SAML identity provider and a SAML service provider. SAML 2.0 supports W3C XML encryption and service-provider-initiated web browser single sign-on exchanges. A user wielding a user agent (usually a web browser) is called the subject in SAML-based single sign-on. The user requests a web resource protected by a SAML service provider. The service provider, wishing to know the identity of the user, issues an authentication request to a SAML identity provider through the user agent. The identity provider is the one that provides the user credentials. The service provider trusts the user information from the identity provider to provide access to its services or resources. Emerging configurations: Mobile devices as access credentials A newer variation of single sign-on authentication has been developed using mobile devices as access credentials. Users' mobile devices can be used to automatically log them onto multiple systems, such as building-access-control systems and computer systems, through the use of authentication methods which include OpenID Connect and SAML, in conjunction with an X.509 ITU-T cryptography certificate used to identify the mobile device to an access server. Emerging configurations: A mobile device is "something you have," as opposed to a password, which is "something you know," or biometrics (fingerprint, retinal scan, facial recognition, etc.), which is "something you are." Security experts recommend using at least two out of these three factors (multi-factor authentication) for best protection.
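The service-provider-initiated browser exchange described above begins with an authentication request (AuthnRequest) carried through the user agent. A minimal Python sketch of the SAML HTTP-Redirect binding encoding (raw DEFLATE, then Base64, then URL-encoding) follows; the identity-provider URL and the skeletal request are hypothetical, and a real request carries more attributes and is often signed.

```python
# Sketch: encoding a SAML AuthnRequest for the HTTP-Redirect binding.
import base64
import zlib
from urllib.parse import urlencode

authn_request = (
    '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    'ID="_abc123" Version="2.0" IssueInstant="2024-01-01T00:00:00Z"/>'
)

# The binding specifies raw DEFLATE (no zlib header), hence wbits=-15.
deflater = zlib.compressobj(wbits=-15)
deflated = deflater.compress(authn_request.encode("utf-8")) + deflater.flush()

query = urlencode({"SAMLRequest": base64.b64encode(deflated).decode("ascii")})
print(f"https://idp.example.org/sso?{query}")  # SP redirects the browser here
```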
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nucleoside** Nucleoside: Nucleosides are glycosylamines that can be thought of as nucleotides without a phosphate group. A nucleoside consists simply of a nucleobase (also termed a nitrogenous base) and a five-carbon sugar (ribose or 2'-deoxyribose), whereas a nucleotide is composed of a nucleobase, a five-carbon sugar, and one or more phosphate groups. In a nucleoside, the anomeric carbon is linked through a glycosidic bond to the N9 of a purine or the N1 of a pyrimidine. Nucleotides are the molecular building blocks of DNA and RNA. List of nucleosides and corresponding nucleobases: The reason for two symbols, shorter and longer, is that the shorter ones are better for contexts where explicit disambiguation is superfluous (because context disambiguates) and the longer ones are for contexts where explicit disambiguation is judged to be needed or wise. For example, when discussing long nucleobase sequences in genomes, the CATG symbol system is much preferable to the Cyt-Ade-Thy-Gua symbol system (see Nucleic acid sequence § Notation for examples), but in discussions where confusion is likelier, the unambiguous symbols can be used. Sources: Nucleosides can be produced from nucleotides de novo, particularly in the liver, but they are more abundantly supplied via ingestion and digestion of nucleic acids in the diet, whereby nucleotidases break down nucleotides (such as thymidine monophosphate) into nucleosides (such as thymidine) and phosphate. The nucleosides, in turn, are subsequently broken down in the lumen of the digestive system by nucleosidases into nucleobases and ribose or deoxyribose. In addition, nucleotides can be broken down inside the cell into nitrogenous bases and ribose-1-phosphate or deoxyribose-1-phosphate. Use in medicine and technology: In medicine several nucleoside analogues are used as antiviral or anticancer agents. The viral polymerase incorporates these compounds with non-canonical bases. These compounds are activated in the cells by being converted into nucleotides; they are administered as nucleosides because charged nucleotides cannot easily cross cell membranes. In molecular biology, several analogues of the sugar backbone exist. Due to the low stability of RNA, which is prone to hydrolysis, several more stable alternative nucleoside/nucleotide analogues that correctly bind to RNA are used. This is achieved by using a different backbone sugar. These analogues include locked nucleic acids (LNA), morpholinos and peptide nucleic acids (PNA). In sequencing, dideoxynucleotides are used. These nucleotides possess the non-canonical sugar dideoxyribose, which lacks the 3' hydroxyl group (which accepts the phosphate). DNA polymerases cannot distinguish between these and regular deoxyribonucleotides, but once incorporated, a dideoxynucleotide cannot bond with the next base, and the chain is terminated. Prebiotic synthesis of ribonucleosides: In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. According to the RNA world hypothesis, free-floating ribonucleosides and ribonucleotides were present in the primitive soup. Molecules as complex as RNA must have arisen from small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of purine and pyrimidine nucleotides, both of which are necessary for reliable information transfer, and thus Darwinian natural selection and evolution. Nam et al.
demonstrated the direct condensation of nucleobases with ribose to give ribonucleosides in aqueous microdroplets, a key step leading to RNA formation. Also, a plausible prebiotic process for synthesizing pyrimidine and purine ribonucleosides and ribonucleotides using wet-dry cycles was presented by Becker et al.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Geometric Shapes (Unicode block)** Geometric Shapes (Unicode block): Geometric Shapes is a Unicode block of 96 symbols at code point range U+25A0–25FF. U+25A0–U+25CF: The BLACK CIRCLE is displayed when typing in a password field, in order to hide the characters from onlookers (shoulder surfing) or from screen recordings. U+25D0–U+25FF: The CIRCLE WITH LEFT HALF BLACK is used to represent the contrast ratio of a screen. Font coverage: Font sets like Code2000 and the DejaVu family include coverage for each of the glyphs in the Geometric Shapes range. Unifont also contains all the glyphs. Among the fonts in widespread use, full implementation is provided by Segoe UI Symbol, and significant partial implementation of this range is provided by Arial Unicode MS and Lucida Sans Unicode, which include coverage for 83% (80 out of 96) and 82% (79 out of 96) of the symbols, respectively. Emoji: The Geometric Shapes block contains eight emoji: U+25AA–U+25AB, U+25B6, U+25C0 and U+25FB–U+25FE. The block has sixteen standardized variants defined to specify emoji-style (U+FE0F VS16) or text presentation (U+FE0E VS15) for the eight emoji. History: A number of Unicode-related documents record the purpose and process of defining specific characters in the Geometric Shapes block.
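The standardized variants mentioned above can be demonstrated directly. A small Python sketch follows, using U+25B6, one of the block's eight emoji; how each sequence renders depends on the fonts available to the display environment.

```python
# Sketch: text vs. emoji presentation for U+25B6 BLACK RIGHT-POINTING TRIANGLE.
base = "\u25B6"                # one of the eight emoji in this block

text_style = base + "\uFE0E"   # VS15 requests text presentation
emoji_style = base + "\uFE0F"  # VS16 requests emoji presentation

print(text_style, emoji_style)  # same base character, two presentations
```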
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Roland U-110** Roland U-110: The Roland U-110 is a ROMpler synthesizer module that was produced by Roland Corporation in 1988. General information: The predecessor of the more successful U-20 keyboard and U-220 module, the U-110 was Roland's first dedicated sample playback synth. It used ROM to store sounds rather than loading them from disks into RAM, hence it was not a true sampler as it could not sample sounds. General information: The U-110 contained a base 2MB of sounds stored in ROM. It could be expanded with up to four Roland SN-U110 sound library cards, unlike the more popular Roland U-220 that could only accommodate two. It had six individual outputs, allowing for each instrument channel to be recorded separately, and two mix outputs to output all channels as a stereo pair.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sentence spacing studies** Sentence spacing studies: Sentence spacing concerns how spaces are inserted between sentences in typeset text and is a matter of typographical convention. Since the introduction of movable-type printing in Europe, various sentence spacing conventions have been used in languages with a Latin alphabet. These include a normal word space (as between the words in a sentence), a single enlarged space, and two full spaces. Sentence spacing studies: Until the 20th century, publishing houses and printers in many countries used additional space between sentences. There were exceptions to this traditional spacing method—some printers used spacing between sentences that was no wider than word spacing. This was French spacing—a term synonymous with single-space sentence spacing until the late 20th century. With the introduction of the typewriter in the late 19th century, typists used two spaces between sentences to mimic the style used by traditional typesetters. While wide sentence spacing was phased out in the printing industry in the mid-20th century, the practice continued on typewriters and later on computers. Perhaps because of this, many modern sources now incorrectly claim that wide spacing was created for the typewriter. The desired or correct sentence spacing is often debated, but most sources now state that an additional space is not necessary or desirable. From around 1950, single sentence spacing became standard in books, magazines, and newspapers, and the majority of style guides that use a Latin-derived alphabet as a language base now prescribe or recommend the use of a single space after the concluding punctuation of a sentence. However, some sources still state that additional spacing is correct or acceptable. Some people preferred double sentence spacing because that was how they were taught to type. The few direct studies conducted since 2002 have produced inconclusive results as to which convention is more readable. History: Traditional typesetting Shortly after the invention of movable type, highly variable spacing was developed, which could create spaces of any size and allowed for perfectly even justification. Early American, English, and other European typesetters' style guides (also known as printers' rules) specified spacing standards that were all essentially identical from the 18th century onwards. These guides—e.g., Jacobi in the UK (1890) and MacKellar, Harpel, and De Vinne (1866–1901) in the U.S.—indicated that sentences should be em-spaced, and that words should be 1/3 or 1/2 em-spaced. The relative size of the sentence spacing would vary depending on the size of the word spaces and the justification needs. For most countries, this remained the standard for published work until the 20th century. Yet, even in this period, there were publishing houses that used a standard word space between sentences. History: Mechanical type and the advent of the typewriter Mechanical type systems introduced near the end of the 19th century, such as the Linotype and Monotype machines, allowed for some variable sentence spacing similar to hand composition. Just as these machines revolutionized the mass production of text, the advent of the typewriter around the same time revolutionized the creation of personal and business documents. But the typewriters' mechanical limitations did not allow variable spacing—typists could only choose the number of times they pressed the space bar.
Typists in some English-speaking countries initially learned to insert three spaces between sentences to approximate the wider sentence spacing used in traditional printing, but later settled on two spaces, a practice that continued throughout the 20th century. This became known as English spacing and marked a divergence from French typists, who continued to use French spacing. History: Transition to single spacing In the early 20th century, some printers began using one and a half interword spaces (an "en quad") to separate sentences. This standard continued in use, to some extent, into the 1990s. Magazines, newspapers, and books began to adopt the single-space convention in the United States in the 1940s and in the United Kingdom in the 1950s. Typists did not move to single spacing simultaneously. Technological advances began affecting sentence spacing methods. In 1941, IBM introduced the Executive, a typewriter capable of proportional spacing, which had been used in professional typesetting for hundreds of years. This innovation broke the hold that the monospaced font had on the typewriter, reducing the severity of its mechanical limitations. However, this innovation did not spread throughout the typewriter industry; the majority of mechanical typewriters, including all of the widely distributed models, remained monospaced, while a small minority of special models carried the innovations. By the 1960s, electronic phototypesetting systems ignored runs of white space in text. This was also true for the World Wide Web, as HTML normally ignores additional spacing, although in 2011 the CSS 2.1 standard officially added an option that can preserve additional spaces. In the 1980s, desktop publishing software provided the average writer with more advanced formatting tools. Modern literature: Typography Early positions on typography (the "arrangement and appearance of text") supported traditional spacing techniques in English publications. In 1954, Geoffrey Dowding's book Finer Points in the Spacing and Arrangement of Type underscored the widespread shift from a single enlarged em space to a standard word space between sentences. With the advent of the computer age, typographers began deprecating double spacing, even in monospaced text. In 1989, Desktop Publishing by Design stated that "typesetting requires only one space after periods, question marks, exclamation points, and colons" and identified single sentence spacing as a typographic convention. Stop Stealing Sheep & Find Out How Type Works (1993) and Designing with Type: The Essential Guide to Typography (2006) both indicate that uniform spacing should be used between words, including between sentences. More recent works on typography weigh in strongly. Ilene Strizver, founder of the Type Studio, says: "Forget about tolerating differences of opinion: typographically speaking, typing two spaces before the start of a new sentence is absolutely, unequivocally wrong." The Complete Manual on Typography (2003) states that "The typewriter tradition of separating sentences with two word spaces after a period has no place in typesetting" and that the single space is "standard typographic practice".
The Elements of Typographic Style (2004) advocates a single space between sentences, noting that "your typing as well as your typesetting will benefit from unlearning this quaint [double spacing] Victorian habit". David Jury's book About Face: Reviving the Rules of Typography (2004)—published in Switzerland—clarifies the contemporary typographic position on sentence spacing: Word spaces, preceding or following punctuation, should be optically adjusted to appear to be of the same value as a standard word space. If a standard word space is inserted after a full point or a comma, then, optically, this produces a space of up to 50% wider than that of other word spaces within a line of type. This is because these punctuation marks carry space above them, which, when added to the adjacent standard word spaces, combines to create a visually larger space. Some argue that the "additional" space after a comma and full point serves as a "pause signal" for the reader. But this is unnecessary (and visually disruptive) since the pause signal is provided by the punctuation mark itself. Modern literature: Style and language guides Style guides Early style guides for typesetting used a wider space between sentences than between words—"traditional spacing", as shown in the illustration to the right. During the 20th century, style guides commonly mandated two spaces between sentences for typewritten manuscripts, which were used prior to professionally typesetting the work. As computer desktop publishing became commonplace, typewritten manuscripts became less relevant and most style guides stopped making distinctions between manuscripts and final typeset products. In the same period, style guides began changing their guidance on sentence spacing. The 1969 edition of the Chicago Manual of Style used em spaces between sentences in its text; by the 2003 edition it had changed to single sentence spacing for both manuscript and print. By the 1980s, the United Kingdom's Hart's Rules (1983) had shifted to single sentence spacing. Other style guides followed suit in the 1990s. Soon after the beginning of the 21st century, the majority of style guides had changed to indicate that only one word space was proper between sentences. Modern style guides provide standards and guidance for the written language. These works are important to writers, since "virtually all professional editors work closely with one of them in editing a manuscript for publication". Late editions of comprehensive style guides, such as the Oxford Style Manual (2003) in the United Kingdom and the Chicago Manual of Style (2010) in the United States, provide standards for a wide variety of writing and design topics, including sentence spacing. The majority of style guides now prescribe the use of a single space after terminal punctuation in final written works and publications. A few style guides allow double sentence spacing for draft work, and the Gregg Reference Manual makes room for double and single sentence spacing based on author preferences. Web design guides do not usually provide guidance on this topic, as "HTML refuses to recognize double spaces altogether". These works themselves follow the current publication standard of single sentence spacing. The European Union's Interinstitutional Style Guide (2008) indicates that single sentence spacing is to be used in all European Union publications—encompassing 23 languages. For the English language, the European Commission's English Style Guide (2010) states that sentences are always single-spaced.
The Style Manual: For Authors, Editors and Printers (2007), first published in 1966 by the Commonwealth Government Printing Office of Australia, stipulates that only one space is used after "sentence-closing punctuation" and that "Programs for word processing and desktop publishing offer more sophisticated, variable spacing, so this practice of double spacing is now avoided because it can create distracting gaps on a page." National languages not covered by an authoritative language academy typically have multiple style guides, only some of which may discuss sentence spacing. This is the case in the United Kingdom. The Oxford Style Manual (2003) and the Modern Humanities Research Association's MHRA Style Guide (2002) state that only single spacing should be used. In Canada, both the English- and French-language sections of the Canadian Style, A Guide to Writing and Editing (1997), prescribe single sentence spacing. In the United States, many style guides—such as the Chicago Manual of Style (2003)—allow only single sentence spacing. The most important style guide in Italy, Il Nuovo Manuale di Stile (2009), does not address sentence spacing, but the Guida di Stile Italiano (2010), the official guide for Microsoft translation, tells users to use single sentence spacing "instead of the double spacing used in the United States". Modern literature: Language guides Some languages, such as French and Spanish, have academies that set language rules. Their publications typically address orthography and grammar as opposed to matters of typography. Style guides are less relevant for such languages, as their academies set prescriptive rules. For example, the Académie française publishes the Dictionnaire de l'Académie française for French speakers worldwide. The 1992 edition does not provide guidance on sentence spacing, but is single-sentence-spaced throughout—consistent with historical French spacing. The Spanish language is similar. The most important body within the Association of Spanish Language Academies, the Royal Spanish Academy, publishes the Diccionario de la lengua española, which is viewed as prescriptive for the Spanish language worldwide. The 2001 edition does not provide sentence-spacing guidance, but is itself single-sentence-spaced. The German-language manual Empfehlungen des Rats für deutsche Rechtschreibung ("Recommendations of the Council for German Orthography"; 2006) does not address sentence spacing. The manual itself uses one space after terminal punctuation. Additionally, the Duden, the German-language dictionary most commonly used in Germany, indicates that double sentence spacing is an error. Modern literature: Grammar guides A few reference grammars address sentence spacing, as increased spacing between words is punctuation in itself. Most do not. Grammar guides typically cover terminal punctuation and the proper construction of sentences—but not the spacing between sentences. Moreover, many modern grammar guides are designed for quick reference and refer users to comprehensive style guides for additional matters of writing style. For example, the Pocket Idiot's Guide to Grammar and Punctuation (2005) points users to style guides such as the MLA Style Manual for consistency in formatting work and for all other "editorial concerns". The Grammar Bible (2004) states that "The modern system of English punctuation is by no means simple.
A book that covers all the bases would need to be of considerable breadth and weight, and anyone interested in such a resource is advised to consult the Chicago Manual of Style." Computer era: In the computer era, spacing between sentences is handled in several different ways by various software packages. Some systems accept whatever the user types, while others attempt to alter the spacing or use the user input as a method of detecting sentences. Computer-based word processors and typesetting software such as troff and TeX allow users to arrange text in a manner previously only available to professional typesetters. The text-editing environment in Emacs uses a double space following a period to identify the end of sentences unambiguously; the double-space convention prevents confusion with periods within sentences that signify abbreviations. How Emacs recognizes the end of a sentence is controlled by the settings sentence-end-double-space and sentence-end. The Unix typesetter program Troff uses two spaces to mark the end of a sentence. This allows the typesetter to distinguish sentence endings from abbreviations and to typeset them differently. Early versions of Troff, which only typeset in fixed-width fonts, would automatically add a second space between sentences, which were detected based on the combination of terminal punctuation and a line feed. Computer era: In the April 2020 update, Microsoft Word started highlighting two spaces after a period as an error and offering a correction of one space. Multiple spaces are eliminated by default in most World Wide Web content, whether or not they are associated with sentences. There are options for preserving spacing, such as the CSS white-space property and the <pre> tag. Controversy: James Felici, author of the Complete Manual of Typography, says that the topic of sentence spacing is "the debate that refuses to die ... In all my years of writing about type, it's still the question I hear most often, and a search of the web will find threads galore on the subject." Many people are opposed to single sentence spacing for various reasons. Some state that the habit of double spacing is too deeply ingrained to change. Others claim that additional space between sentences improves the aesthetics or readability of text. Proponents of double sentence spacing also state that some publishers may still require double-spaced manuscript submissions from authors. A key example noted is the screenwriting industry's monospaced standard for screenplay manuscripts: 12-point Courier, although some works on screenwriting indicate that Courier is merely preferred—proportional fonts may be used. Some reliable sources state simply that writers should follow their particular style guide, but proponents of double spacing caution that publishers' guidance takes precedence, including those that ask for double-sentence-spaced manuscripts. One of the most popular arguments against wider sentence spacing is that it was created for the monospaced fonts of the typewriter and is no longer needed with modern proportional fonts. Controversy: However, proportional fonts existed together with wide sentence spacing for centuries before the typewriter and remained for decades after its invention. When the typewriter was first introduced, typists were most commonly taught to use three spaces between sentences. This gradually shifted to two spaces, while the print industry remained unchanged in its wide em-spaced sentences.
Controversy: James Felici, author of the Complete Manual of Typography, says that the topic of sentence spacing is "the debate that refuses to die ... In all my years of writing about type, it's still the question I hear most often, and a search of the web will find threads galore on the subject." Many people are opposed to single sentence spacing for various reasons. Some state that the habit of double spacing is too deeply ingrained to change. Others claim that additional space between sentences improves the aesthetics or readability of text. Proponents of double sentence spacing also state that some publishers may still require double-spaced manuscript submissions from authors. A key example noted is the screenwriting industry's monospaced standard for screenplay manuscripts: 12-point Courier, although some works on screenwriting indicate that Courier is merely preferred—proportional fonts may be used. Some reliable sources state simply that writers should follow their particular style guide, but proponents of double spacing caution that publishers' guidance takes precedence, including those that ask for double-sentence-spaced manuscripts. One of the most popular arguments against wider sentence spacing is that it was created for monospaced fonts of the typewriter and is no longer needed with modern proportional fonts.

Controversy: However, proportional fonts existed together with wide sentence spacing for centuries before the typewriter and remained for decades after its invention. When the typewriter was first introduced, typists were most commonly taught to use three spaces between sentences. This gradually shifted to two spaces, while the print industry remained unchanged in its wide em-spaced sentences. Some sources now state it is acceptable for monospaced fonts to be single-spaced today, although other references continue to specify double spacing for monospaced fonts. The double-space typewriter convention has been taught in schools in typing classes and remains the practice in many cases. Some voice concern that students will later be forced to relearn how to type.

Most style guides indicate that single sentence spacing is proper for final or published work today, and most publishers require manuscripts to be submitted as they will appear in publication—with single sentence spacing. Writing sources typically recommend that prospective authors remove extra spaces before submitting manuscripts, although other sources state that publishers will use software to remove the spaces before final publication.

Effects on readability and legibility: Claims abound regarding the legibility and readability of the single and double sentence spacing methods—by proponents on both sides. Supporters of single spacing assert that familiarity with the current standard in books, magazines, and the Web enhances readability, that double spacing looks strange in text using proportional fonts, and that the "rivers" and "holes" caused by double spacing impair readability. Proponents of double sentence spacing state that the extra space between sentences enhances readability by providing clearer breaks between sentences and making text appear more legible. However, typographic opinions are typically anecdotal with no basis in evidence. "Opinions are not always safe guides to legibility of print", and when direct studies are conducted, anecdotal opinions—even those of experts—can turn out to be false. Text that seems legible (visually pleasing at first glance) may be shown to actually impair reading effectiveness when subjected to scientific study.

Effects on readability and legibility: Studies Direct studies on sentence spacing include those by Loh, Branch, Shewanown, and Ali (2002), and Clinton, Branch, Holschuh, and Shewanown (2003), with results favoring neither single, double, nor triple spacing. The 2002 study tested participants' reading speed for passages of on-screen text with single and double sentence spacing. The authors stated that "the 'double space group' consistently took longer time to finish than the 'single space' group" but concluded that "there was not enough evidence to suggest that a significant difference exists". The 2003 study analyzed on-screen single, double, and triple spacing. In both cases, the authors stated that there was insufficient evidence to draw a conclusion. Ni, Branch, Chen, and Clinton conducted a similar study in 2009 using identical spacing variables. The authors concluded that the "results provided insufficient evidence that time and comprehension differ significantly among different conditions of spacing between sentences". A 2018 study of 60 students, which presented all text in a monospaced font (Courier New), found that participants who habitually typed two spaces between sentences read text set with two spaces about 3 percent faster.

Effects on readability and legibility: Related studies There are other studies that could be relevant to sentence spacing, such as studies on the effect of familiar typographic conventions on readability. Some studies indicate that "tradition" can increase the readability of text, and that reading is disrupted when conventional printing arrangements are violated. The standard for the Web and published books, magazines, and newspapers is single sentence spacing.
Effects on readability and legibility: David Jury's book What is Typography? notes, "Changes in spacing either between letters and words, or between the words only ... do not appear to affect legibility. [These rather extraordinary conclusions are contrary to all other surveys on readability of texts.]" A widespread observation is that increased sentence spacing creates "rivers" or "holes" within text, making it visually unattractive, distracting, and difficult to locate the end of sentences. Comprehensive works on typography describe the negative effect on readability caused by inconsistent spacing, which is supported by a 1981 study that found that "comprehension was significantly less accurate with the river condition." Another 1981 study, on Cathode Ray Tube (CRT) displays, concluded that "more densely packed text is read more efficiently … than is more loosely packed text." This statement is supported in other works as well. Canadian typographer Geoffrey Dowding suggests possible explanations of this phenomenon:

A carefully composed text page appears as an orderly series of strips of black separated by horizontal channels of white space. Conversely, in a slovenly setting the tendency is for the page to appear as a grey and muddled pattern of isolated spats, this effect being caused by the over-widely separated words. The normal, easy, left-to-right movement of the eye is slowed down simply because of this separation; further, the short letters and serifs are unable to discharge an important function – that of keeping the eye on "the line". The eye also tends to be confused by a feeling of vertical emphasis, that is, an up & down movement, induced by the relative isolation of the words & consequent insistence of the ascending and descending letters. This movement is further emphasized by those "rivers" of white which are the inseparable & ugly accompaniment of all carelessly set text matter.

Some studies suggest that readability can be improved by breaking sentences into separate units of thought, or by varying the internal spacing of sentences. Mid-20th century research on this topic produced inconclusive findings. A 1980 study split sentences into 1–5 word phrases with additional spacing between segments. The study concluded that there was no significant difference in efficacy, but that a wider study was needed. Numerous other similar studies between 1951 and 1991 produced disparate and inconclusive findings. Finally, although various studies have been conducted on the readability of proportional vs. monospaced fonts, the studies typically did not decrease sentence spacing when using proportional fonts, or did not specify whether sentence spacing was changed.
**Conjugate depth**
Conjugate depth: In fluid dynamics, the conjugate depths refer to the depth ($y_1$) upstream and the depth ($y_2$) downstream of a hydraulic jump whose momentum fluxes are equal for a given discharge (volume flux) $q$. The depth upstream of a hydraulic jump is always supercritical. It is important to note that the conjugate depths are different from the alternate depths for flow, which are used in energy conservation calculations.

Mathematical derivation: Beginning with an equal momentum flux $M$ and discharge $q$ upstream and downstream of the hydraulic jump:

$$M = \frac{y_1^2}{2} + \frac{q^2}{g y_1} = \frac{y_2^2}{2} + \frac{q^2}{g y_2}.$$

Rearranging terms gives:

$$\frac{q^2}{g}\left(\frac{1}{y_1} - \frac{1}{y_2}\right) = \frac{1}{2}\left(y_2^2 - y_1^2\right).$$

Multiply to get a common denominator on the left-hand side and factor the right-hand side:

$$\frac{q^2}{g}\left(\frac{y_2 - y_1}{y_1 y_2}\right) = \frac{1}{2}\left(y_2 - y_1\right)\left(y_2 + y_1\right).$$

The $(y_2 - y_1)$ term cancels out, leaving

$$\frac{q^2}{g\,y_1 y_2} = \frac{1}{2}\left(y_1 + y_2\right),$$

where $q^2 = y_1^2 v_1^2 = y_2^2 v_2^2$. Substituting $q^2 = y_1^2 v_1^2$, recalling that $\mathrm{Fr}_1^2 = \frac{v_1^2}{g y_1}$, and multiplying both sides by $\frac{y_2}{y_1^2}$ before expanding the right-hand side gives:

$$\mathrm{Fr}_1^2 = \frac{y_2^2}{2 y_1^2} + \frac{y_2}{2 y_1}.$$

Substitute $x$ for the ratio $y_2/y_1$:

$$\mathrm{Fr}_1^2 = \frac{x^2}{2} + \frac{x}{2} \quad\Rightarrow\quad 0 = \frac{x^2}{2} + \frac{x}{2} - \mathrm{Fr}_1^2.$$

Solving the quadratic equation (taking the positive root) gives:

$$x = \frac{-\frac{1}{2} + \sqrt{\left(\frac{1}{2}\right)^2 + 4\left(\frac{1}{2}\right)\mathrm{Fr}_1^2}}{2\left(\frac{1}{2}\right)}.$$

Substituting $y_2/y_1$ back in for $x$ and simplifying yields the conjugate depth equation:

$$\frac{y_2}{y_1} = \frac{1}{2}\left(\sqrt{1 + 8\,\mathrm{Fr}_1^2} - 1\right).$$

Note that this equation is only applicable to hydraulic jumps over flat beds.
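As a numerical illustration of the final formula, consider the following minimal sketch; the function name and example values are our own, chosen only for demonstration.

```python
import math

def conjugate_depth(y1: float, q: float, g: float = 9.81) -> float:
    """Downstream depth y2 conjugate to a supercritical upstream depth y1.

    y1: upstream depth (m); q: discharge per unit width (m^2/s).
    Uses y2/y1 = (sqrt(1 + 8 Fr1^2) - 1) / 2, valid for flat beds only.
    """
    v1 = q / y1                    # upstream mean velocity
    fr1 = v1 / math.sqrt(g * y1)   # upstream Froude number
    return 0.5 * y1 * (math.sqrt(1.0 + 8.0 * fr1**2) - 1.0)

# Example: y1 = 0.5 m and q = 3.0 m^2/s give Fr1 ≈ 2.71 (supercritical),
# so the conjugate depth downstream of the jump is roughly 1.7 m.
print(conjugate_depth(0.5, 3.0))
```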
**Kermes mineral**
Kermes mineral: Kermes mineral or Alkermes mineral was a compound of antimony oxides and sulfides, more specifically antimony trioxide and trisulfide. It can be made or obtained in the laboratory by the action of potassium carbonate (K2CO3) on antimony sulfide. The compound is reddish brown in color and described as a velvety powder that is insoluble in water. It was used extensively in the medical field until the general use of antimony compounds declined due to their toxic effects.

History and Uses: The name is derived from the word kermes, denoting the compound's red color. The term originates from the French kermès, short for alkermès, from the Arabic al-qirmiz, a reference to the crimson dye made from the bodies of insects (see Kermes (dye)). It was also known as poudre des Chartreux, from a story of how it saved the life of a Carthusian monk in 1714. Because of its reputation as a medication and heal-all (or panacea), the formula and production process for Kermes mineral were purchased by the French government in 1720. Used for centuries in medicine as a diaphoretic (sweat-inducing), anti-inflammatory, and emetic treatment, it remained in use through the 19th century, when its applications extended to the treatment of epilepsy in addition to hectic fever.
**Cyclin**
Cyclin: Cyclin is a family of proteins that controls the progression of a cell through the cell cycle by activating cyclin-dependent kinase (CDK) enzymes, the enzymes that drive the events of the cell cycle.

Etymology: Cyclins were originally discovered by R. Timothy Hunt in 1982 while studying the cell cycle of sea urchins. In an interview for "The Life Scientific" (aired on 13/12/2011) hosted by Jim Al-Khalili, R. Timothy Hunt explained that the name "cyclin" was originally derived from his hobby, cycling. Only after the naming did its importance in the cell cycle become apparent; as the name turned out to be appropriate, it stuck. R. Timothy Hunt: "By the way, the name cyclin, which I coined, was really a joke, it's because I liked cycling so much at the time, but they did come and go in the cell..."

Function: Cyclins were originally named because their concentration varies in a cyclical fashion during the cell cycle. (Note that the cyclins are now classified according to their conserved cyclin box structure, and not all of these cyclins alter in level through the cell cycle.) The oscillations of the cyclins, namely fluctuations in cyclin gene expression and destruction by the ubiquitin-mediated proteasome pathway, induce oscillations in Cdk activity to drive the cell cycle. A cyclin forms a complex with a Cdk, which begins to activate the Cdk, but complete activation also requires phosphorylation. Complex formation results in activation of the Cdk active site. Cyclins themselves have no enzymatic activity but have binding sites for some substrates and target the Cdks to specific subcellular locations. Cyclins, when bound with their dependent kinases, such as the p34/cdc2/cdk1 protein, form the maturation-promoting factor (MPF). MPFs activate other proteins through phosphorylation. These phosphorylated proteins, in turn, are responsible for specific events during cell division such as microtubule formation and chromatin remodeling. Cyclins can be divided into four classes based on their behaviour in the cell cycle of vertebrate somatic cells and yeast cells: G1 cyclins, G1/S cyclins, S cyclins, and M cyclins. This division is useful when talking about most cell cycles, but it is not universal, as some cyclins have different functions or timing in different cell types.

Function: G1/S cyclins rise in late G1 and fall in early S phase. The Cdk-G1/S cyclin complex begins to induce the initial processes of DNA replication, primarily by arresting systems that prevent S phase Cdk activity in G1. The cyclins also promote other activities that progress the cell cycle, such as duplication of the centrosome in vertebrates or of the spindle pole body in yeast. The rise in presence of G1/S cyclins is paralleled by a rise in S cyclins.

Function: G1 cyclins do not behave like the other cyclins, in that their concentrations increase gradually (with no oscillation) throughout the cell cycle, based on cell growth and external growth-regulatory signals. The presence of G1 cyclins coordinates cell growth with entry into a new cell cycle. S cyclins bind to Cdk, and the complex directly induces DNA replication. The levels of S cyclins remain high not only throughout S phase but through G2 and early mitosis as well, to promote early events in mitosis.

Function: M cyclin concentrations rise as the cell begins to enter mitosis, and the concentrations peak at metaphase. Changes in the cell such as the assembly of mitotic spindles and the alignment of sister chromatids along the spindles are induced by M cyclin-Cdk complexes.
The destruction of M cyclins during metaphase and anaphase, after the spindle assembly checkpoint is satisfied, drives the exit from mitosis and cytokinesis.

Function: Expression of cyclins, detected immunocytochemically in individual cells in relation to cellular DNA content (cell cycle phase), or in relation to initiation and termination of DNA replication during S phase, can be measured by flow cytometry. Kaposi sarcoma herpesvirus (KSHV) encodes a D-type cyclin (ORF72) that binds CDK6 and is likely to contribute to KSHV-related cancers.

Domain structure: Cyclins are generally very different from each other in primary structure, or amino acid sequence. However, all members of the cyclin family are similar in the 100 amino acids that make up the cyclin box. Cyclins contain two domains of a similar all-α fold, the first located at the N-terminus and the second at the C-terminus. All cyclins are believed to contain a similar tertiary structure of two compact domains of five α helices, the first of which is the conserved cyclin box, outside of which cyclins are divergent. For example, the amino-terminal regions of S and M cyclins contain short destruction-box motifs that target these proteins for proteolysis in mitosis.

Types: There are several different cyclins that are active in different parts of the cell cycle and that cause the Cdk to phosphorylate different substrates. There are also several "orphan" cyclins for which no Cdk partner has been identified. For example, cyclin F is an orphan cyclin that is essential for the G2/M transition. A study in C. elegans revealed the specific roles of mitotic cyclins. Notably, recent studies have shown that cyclin A creates a cellular environment that promotes microtubule detachment from kinetochores in prometaphase to ensure efficient error correction and faithful chromosome segregation. Cells must separate their chromosomes precisely, an event that relies on the bi-oriented attachment of chromosomes to spindle microtubules through specialized structures called kinetochores. In the early phases of division, there are numerous errors in how kinetochores bind to spindle microtubules. The unstable attachments promote the correction of errors by causing a constant detachment, realignment and reattachment of microtubules from kinetochores as the cells try to find the correct attachment. Cyclin A governs this process, keeping it going until the errors are eliminated. In normal cells, persistent cyclin A expression prevents the stabilization of microtubules bound to kinetochores, even in cells with aligned chromosomes. As levels of cyclin A decline, microtubule attachments become stable, allowing the chromosomes to be divided correctly as cell division proceeds. In contrast, in cyclin A-deficient cells, microtubule attachments are prematurely stabilized. Consequently, these cells may fail to correct errors, leading to higher rates of chromosome mis-segregation.

Types: Main groups There are two main groups of cyclins:
G1/S cyclins – essential for the control of the cell cycle at the G1/S transition. Cyclin A / CDK2 is active in S phase; Cyclin D / CDK4, Cyclin D / CDK6, and Cyclin E / CDK2 regulate the transition from G1 to S phase.
G2/M cyclins – essential for the control of the cell cycle at the G2/M transition (mitosis). G2/M cyclins accumulate steadily during G2 and are abruptly destroyed as cells exit from mitosis (at the end of M phase). Cyclin B / CDK1 regulates progression from G2 to M phase.
Types: Subtypes The specific cyclin subtypes, along with their corresponding CDKs (in brackets), are:

Other proteins containing this domain: In addition, the following human protein contains a cyclin domain: CNTD1.

History: Leland H. Hartwell, R. Timothy Hunt, and Paul M. Nurse won the 2001 Nobel Prize in Physiology or Medicine for their discovery of cyclin and cyclin-dependent kinase.
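The oscillatory logic described under Function (cyclin accumulates, activates a Cdk, and the activated pathway in turn destroys cyclin) can be illustrated with a toy numerical model. The sketch below is in the spirit of Goldbeter's minimal mitotic-oscillator cascade (1991); the equations and parameter values are illustrative only and are a caricature, not a quantitative description of any real cell cycle.

```python
def simulate(t_end: float = 100.0, dt: float = 0.01):
    """Euler integration of a three-variable cyclin/Cdk toy oscillator.

    C: cyclin concentration; M: fraction of active Cdk;
    X: fraction of active cyclin protease (APC/C-like).
    Negative feedback loop: C activates M, M activates X, X degrades C.
    All parameter values are illustrative.
    """
    vi, vd, kd = 0.025, 0.25, 0.01           # cyclin synthesis/degradation
    VM1, V2, VM3, V4 = 3.0, 1.5, 1.0, 0.5    # (in)activation rate maxima
    Kc, Kdeg = 0.5, 0.02                     # Michaelis constants
    K1 = K2 = K3 = K4 = 0.005
    C, M, X = 0.01, 0.01, 0.01
    trace = []
    for i in range(int(t_end / dt)):
        dC = vi - vd * X * C / (Kdeg + C) - kd * C
        dM = (VM1 * C / (Kc + C)) * (1 - M) / (K1 + 1 - M) - V2 * M / (K2 + M)
        dX = (VM3 * M) * (1 - X) / (K3 + 1 - X) - V4 * X / (K4 + X)
        C, M, X = C + dC * dt, M + dM * dt, X + dX * dt
        trace.append((i * dt, C, M))
    return trace

# Print the state every 20 time units; cyclin and Cdk activity repeatedly
# rise and fall instead of settling to a steady state.
for t, C, M in simulate()[::2000]:
    print(f"t={t:6.1f}  cyclin={C:.3f}  active-Cdk fraction={M:.3f}")
```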
**Vendiamorpha**
Vendiamorpha: Vendiamorpha is a class of extinct animals within the Ediacaran phylum Proarticulata. The typical vendiamorph had an oval- or round-shaped body divided completely into segmented isomers that were arranged alternately in two rows about the longitudinal axis of the body.

Description: The phenomenon of left-right alternating segments is called glide reflection symmetry and is a diagnostic feature of proarticulatans. Transverse elements decrease in size from one end to the other and are inclined in the same direction. The larger isomers cover the smaller ones externally, and the first isomer is much larger than the rest. Typically, the first few (largest) isomers are fused together to form a headshield-like structure, leading some researchers to originally consider them ancestral or related to arthropods; however, overwhelming evidence that they are proarticulatans has since led researchers to discard this hypothesized relationship. Some vendiamorphs (e.g., Vendia and Paravendia) supposedly demonstrate a digestive-distributive system consisting of a simple axial tube and lateral appendages, with one lateral appendage corresponding to one isomer. Class Vendiamorpha currently includes only one family, Vendiidae (originally referred to as Vendomiidae, after the type genus Vendomia, before V. menneri was redescribed as a member of Dickinsonia), which consists of the species Vendia sokolovi, V. rachiata, Paravendia janae and possibly Karakhtia nessovi, from Ediacaran (Vendian) rocks of the Arkhangelsk Region in Russia.

Name: The clade name Pseudovendia refers to its resemblance to a fossil imprint described as Vendia sokolovi. Originally, that fossil was interpreted as an arthropod, later as a proarticulatan, and then conjectured to be possibly a frond-like organism. Current scientific consensus now recognizes the poorly preserved holotype of Pseudovendia as a pseudofossil.
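For readers unfamiliar with the term used in the Description, glide reflection symmetry can be stated compactly; the formula below is the standard geometric definition, added here for illustration rather than taken from the source article. With the body axis along $x$ and isomers repeating with period $d$, a glide reflection maps each point

$$g(x, y) = \left(x + \tfrac{d}{2},\ -y\right),$$

so isomers on one side of the axis sit half a segment out of step with those on the other side, rather than being mirrored directly across the axis as in ordinary bilateral symmetry.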
**Journal of Electrical Bioimpedance**
Journal of Electrical Bioimpedance: The Journal of Electrical Bioimpedance is an open access scientific journal that was established in 2010 and is published by the Oslo Bioimpedance Group with the assistance of the University of Oslo Library. The editor-in-chief is Ørjan G. Martinsen (University of Oslo). The journal publishes reviews, articles, and educational material covering research on all aspects of bioimpedance. It is abstracted and indexed in Scopus and PubMed Central.
**Salivary microbiome**
Salivary microbiome: The salivary microbiome consists of the nonpathogenic, commensal bacteria present in the healthy human salivary glands. It differs from the oral microbiome, which is located in the oral cavity. Oral microorganisms tend to adhere to the teeth and gums, and the oral microbiome possesses its own characteristic microorganisms. "[T]here may be important interactions between the saliva microbiome and other microbiomes in the human body, in particular, that of the intestinal tract."

Characteristics: Unlike the uterine, placental and vaginal microbiomes, the types of organisms in the salivary microbiota remain relatively constant. There is no difference between populations of microbes based upon gender, age, diet, obesity, alcohol intake, race, or tobacco use. The salivary microbiome characteristically remains stable over a lifetime. One study suggests sharing an environment (e.g., living together) may influence the salivary microbiome more than genetic components. Porphyromonas, Solobacterium, Haemophilus, Corynebacterium, Cellulosimicrobium, Streptococcus and Campylobacter are some of the genera found in the saliva.

Genetic markers and diagnostic testing: "There is high diversity in the salivary microbiome within and between individuals, but little geographic structure. Overall, ~13.5% of the total variance in the composition of genera is due to differences among individuals, which is remarkably similar to the fraction of the total variance in neutral genetic markers that can be attributed to differences among human populations."

"[E]nvironmental variables revealed a significant association between the genetic distances among locations and the distance of each location from the equator. Further characterization of the enormous diversity revealed here in the human salivary microbiome will aid in elucidating the role it plays in human health and disease, and in the identification of potentially informative species for studies of human population history."

Sixty new genera have been identified from the salivary glands. A total of 101 different genera were identified in the salivary glands; of these, 39 genera are not found in the oral microbiome. It is not known whether the resident species remain constant or change.

Though the salivary microbiome is closely associated with the oral microbiome, there also exists an association between the salivary microbiome and the gut microbiome. Saliva sampling may be a non-invasive way to detect changes in the gut microbiome and changes in systemic disease. The association between the salivary microbiome and those with polycystic ovarian syndrome has been characterized: "saliva microbiome profiles correlate with those in the stool, despite the fact that the bacterial communities in the two locations differ greatly. Therefore, saliva may be a useful alternative to stool as an indicator of bacterial dysbiosis in systemic disease."

The sugar concentration in salivary secretions can vary, and blood sugar levels are reflected in salivary gland secretions. In one study, high salivary glucose (HSG) levels were defined as a glucose concentration ≥ 1.0 mg/dL (n = 175) and low salivary glucose (LSG) levels as a concentration < 0.1 mg/dL (n = 2,537). Salivary gland secretions containing high levels of sugar change the oral microbiome and contribute to an environment that is conducive to the formation of dental caries and gingivitis.
Salivary glands: Organisms of the salivary microbiome reside in the three major salivary glands: parotid, submandibular, and sublingual. These glands secrete electrolytes, proteins, genetic material, polysaccharides, and other molecules. Most of these substances enter the salivary gland acinus and duct system from surrounding capillaries via the intervening tissue fluid, although some substances are produced within the glands themselves. The level of each salivary component varies considerably depending on the health status of the individual and the presence of pathogenic and commensal organisms.
**Active shooter training**
Active shooter training: Active shooter training (sometimes termed active shooter response training or active shooter preparation) addresses the threat of an active shooter by providing awareness, preparation, prevention, and response methods. Organizations such as businesses and places of worship or education choose to sponsor active shooter training in light of the finding that, as of 2013, 66.9% of active shooter incidents in the United States ended before police arrived. The Department of Justice says they remain "committed to assist training for better prevention, response, and recovery practices involving active shooter incidents", and they encourage training for civilians as well as first responders. Although training is currently optional, businesses and organizations are beginning to face citations due to non-compliance with Occupational Safety and Health Administration (OSHA) guidelines regarding workplace violence. The Federal Bureau of Investigation (FBI) further stresses that civilian training and exercises should include "an understanding of the threats faced and also the risks and options available in active shooter incidents".

Legal requirements: In the United States, OSHA has made recommendations for businesses when it comes to active shooting and the workplace. Their guidelines within certain organizations also extend to building security and facility structures, as well as properly implemented active shooter preparation training. Lawmakers have also held organizations accountable for not having appropriate training or other protocols in place, citing OSHA's general duty clause. In 2017, the Department of Labor published the new "Enforcement Procedures and Scheduling for Occupational Exposure to Workplace Violence", which provides policy guidance and procedures to be followed when issuing citations related to workplace violence. Given these changes, active shooter response training is quickly becoming a standard across America. Current active shooter training methods range from books to videos to multi-day on-site courses, but not all fall under OSHA's guidelines or judges' approval.

Types of training: Much analysis has been done on the techniques and methods of active shooters. In response to the data, some training programs include a focus on medical response for civilians, whereas other training programs place their attention on prevention, self-defense, security of the building, escape during the event, and psychological or physical escape. Common ideologies frequently taught are "Run/Hide/Fight", as put forth by the Department of Homeland Security (DHS); "Avoid, Deny, Defend", developed by the ALERRT center at Texas State University; "ALICE", an acronym for "Alert, Lockdown, Inform, Counter, Evacuate", offered by Navigate 360; and "STAAAT", or "Situational Threat Awareness, Assessment, and Action Training", developed by Security Advisors Consulting Group. There are differing viewpoints on the effectiveness of certain concepts, and the standards are constantly being updated with new data and methods.

Training for law enforcement: Many training programs focus on a particular group or groups of people. ALERRT (Advanced Law Enforcement Rapid Response Training), in conjunction with CRASE, and FLETC (Federal Law Enforcement Training Center) are examples of Federal- and State-level training, intended to help first responders and federal agents know how to respond to an active shooting.
Other local agencies are also instituting their own forms of training.

Training for organizations: Active shooter response training should not be confused with speech seminars, continuing education courses, or requesting a visit from local law enforcement. The FBI stresses the importance that training and exercises for citizens include an understanding of the threats faced and also the risks and options available in active shooter incidents. Private programs are available to organizations, businesses, schools (and more) which provide training in how to respond to or prepare for an active shooter. Active shooter response training has become a service in demand given the increase in active shooting events in the United States, as well as the continuing changes in laws, litigation, and OSHA requirements. There are few active shooter training programs available to the public, and not all agree on the correct methods for addressing the issue. The FBI urges everyone to receive proper training:

Recognizing the increased active shooter threat and the swiftness with which active shooter incidents unfold, [our] study results support the importance of training and exercises—not only for law enforcement but also for citizens ... even when law enforcement was present or able to respond within minutes, civilians often had to make life and death decisions, and therefore, should be engaged in training and discussions on decisions they may face.

Training for schools: Schools have changed the way they approach the possibility of an active shooter entering the building. The different strategies that students and staff used at the Virginia Tech shooting in 2007 made a significant difference in the number of injuries. It can be concluded that the more resistance the staff and students used, the less likely the shooter could enter the room. For example, the room of fourteen people that gave the active shooter no resistance on entry had ten fatalities and two injuries, a total of 85.7% of the people in the room, while the room of twelve people that had a strong barricade of a table and body weight had only one injury and zero fatalities, totaling 8.3% of the room. It is more effective to give an active shooter resistance on entry than to give no resistance at all. However, the room of nineteen people that gave a weak barricade of just a table in front of the door suffered twelve fatalities and six injuries, totaling 94.7% of the people in the room. This could have been due to there being more people in the room, or because the weak resistance frustrated the shooter, causing him to lash out more. No certain conclusions can be made, however, as the shooter took his own life after the incident.

Training and insurance: FBI study results reveal that, as of 2013, 45.6% of incidents occurred in areas of commerce, including those open and closed to pedestrian traffic. The second-largest area for incidents was places of education, at 24.4%. From 2013 to 2018 the trends have remained similar, with more events occurring at places of business. Demand for a new active shooter insurance has increased, with some policies now offering discounts to those organizations who have received prior and qualifying active shooter training. Some schools and organizations are already spending millions on active shooter insurance, as it is becoming a growing necessity.
Controversy: Risks vs. benefits Those who plan training programs determine what should be conveyed in the drills, or whether there should be drills at all.

Controversy: In the United States, there has been some controversy over the effectiveness of active shooter training programs. Organizations disagree whether teaching youth to "fight" the active shooter (as referenced in the DHS "Run, Hide, Fight" directive) is dangerous or effective. While the Department of Education does not recommend that students try to fight an active shooter, the FBI senior executive in charge of its active shooter initiative believes that fighting is often an unfortunate necessity and points out that individuals can at least train to fight. After the Stoneman Douglas High School shooting in Parkland, Florida in February 2018, it was suggested training programs could yield strategic information to potential shooters. Like most U.S. states, Florida requires schools to test their plans with drills. The shooter, a former student, may have been familiar with the school's drills and emergency plans regarding active shootings; some alleged that he used the information to increase casualties. The school had received active shooter training before the actual shooting occurred, yet the 2018 shooting was the deadliest school shooting in the United States since the Sandy Hook Elementary School shooting in 2012. There is a movement among national teacher organizations to end these drills. One such voice is the Massachusetts Teachers Association labor union. Merrie Najimy, the head of the MTA, said in 2020 that the drills are "scary" and "stressful" for students and that "lockdown is just a narrow and fear-based view of how to address a serious problem. It doesn't get at the root causes." It has been argued that the drills and training programs are too traumatic for the students and that the training is more harmful, emotionally and mentally, than beneficial.

Controversy: Effectiveness and qualifications Other controversies arise over the effectiveness of certain programs or the qualifications and tactics used by those teaching them. For instance, some trainers focus solely on the training acronyms and directions, no matter the situation. Others deem this type of training ineffective and instead advocate for a scenario-based training protocol. Some training programs are created by police, school resource officers, or SWAT, while others are created by current or former military, Special Operations, psychologists, or federal agents. Some citizens and other professionals express concern that first responders do not have the appropriate credentials to direct an active shooter response training program for civilians. The expressed concern is that some first responders are not adequately trained in active shooting response strategies and either have not received, are currently receiving, or only recently received their own training through the ALERRT (or similar) program, thereby representing inadequate qualifications or experience to be training others. The FBI indicates that some officers and agents are under-experienced and under-educated in how to handle active shootings, even as first responders. Broward County Sheriff's Office, for example, also received widespread criticism for their handling of the Stoneman Douglas school shooting as first responders.
The public outcry was focused on the inadequacies of first responders in addressing active shootings at all.

Controversy: Police responses Over the years in which school shootings have taken place, there have been incidents of law enforcement response not living up to what is expected. For example, video from the school shooting in Uvalde, Texas showed officers waiting in the halls as the active shooter made his way through the building and harmed young students. The footage that revealed this to the public sparked outrage in the community, as people demanded answers as to why law enforcement would allow young children to become victims of such a horrific incident. The police response to the Nashville school shooting, however, showed that it is possible for officers to clear a building, neutralize the threat, and minimize casualties within minutes. Comparing the footage from these two responses, the Uvalde officers failed to locate the shooter in a timely manner, fled from the sound of gunfire, and withheld the footage from the public for weeks, while the Nashville police found and neutralized the shooter and released the footage within 24 hours.

Controversy: Certifications Until 2019, there were no regulating or certifying agencies for the qualifications of active shooter response training directors, trainers, or programs. There has been only one certifying organization—the National Active Shooter Preparation and Recovery Administration—which claims to hold active shooter training professionals to a certain standard. The industry of active shooter response training has gone, and can continue to go, unregulated, since certification is not mandatory. Even OSHA, with their citations, recommendations, and compliance requirements, does not mandate a certain set of prerequisites for active shooter training directors or trainers.

Controversy: Lack of drill safety Another area of public critique was seen when the Indiana State Teachers Association expressed concern over an event in which their active shooter training from the ALICE program was incorporated with a drill that resulted in teachers being shot with pellet guns execution-style, leaving staff with welts and drawn blood. They called for a focus on educator and student safety during trainings and drills, and requested that mental health be added to House Bill 1004.

Controversy: Discrimination Other controversies stem from the actors used during mock drills and the ways in which the shooter was presented. In one such incident, a Penn-Trafford School District employee wore a checkered keffiyeh, drawing controversy over allegations that the depiction unfairly represented Arab Americans and was meant to sow distrust of American Muslims; the school district denied such intent, stating they did not mean to depict any specific individual or group with the shooter's outfit.

Controversy: Media coverage Media plays a large part in the public's knowledge of school shootings, especially among today's youth. When there is a mass shooting, it is posted on the news and the internet within moments.
Once it is public knowledge, it is posted across social media, either by news corporations or by people who want to spread awareness.

Controversy: There have been times when social media posts have assisted law enforcement in the prevention or neutralization of mass shootings. For example, the 25-year-old who opened fire at a Louisville, Kentucky bank livestreamed the attack, which led to a faster response and prevented further casualties. There are also many examples of the negative tendencies people lean toward when talking about mass shootings. People see that the media focus on the shooter more than the victims, sharing the shooter's name, motivations, and story for long periods after the event. When this happens, it increases the likelihood of a copycat shooter, because some people want that level of publicity.
**Mind Thrust**
Mind Thrust: Mind Thrust is a 1981 video game published by Tandy Corporation.

Gameplay: Mind Thrust is a game in which the player defeats the computer either by removing all of the opponent's playing pieces or by creating a chain of pieces that spans the width of the board.

Reception: Barbour Stokes reviewed the game for Computer Gaming World, and stated that "The rules and plays of Mind Thrust are easily and quickly learned making it an excellent home demonstration game to make believers out of those non-gamers and non-computerists that may drop in."