**Nyoki Nyoki: Tabidachi Hen** Nyoki Nyoki: Tabidachi Hen: Nyoki Nyoki: Tabidachi Hen (にょきにょき たびだち編, Nyokinyoki tabidachi-hen) is a puzzle video game developed by Compile Maru and published by D4 Enterprise for the Nintendo 3DS. It is a "fighting puzzle" video game based on the 2003 Neo Geo game Pochi and Nyaa, co-produced by Aiky, Taito and SNK, and inspired by Puyo Puyo. Gameplay: The game consists of two 8x16 grids, one for each player. Nyoki fall in pairs, and each player must gather adjacent Nyoki to clear them from the board and send nuisance Nyoki to the opponent. The game ends when the Nyoki reach the top of the third column. Unlike Puyo Puyo, where the mechanic is to gather 4 or more adjacent Puyos of the same color and form chains, the player can accumulate as many adjacent Nyoki as they want and eliminate them whenever they consider it convenient by converting the falling Nyoki pair into an "activator", triggering a chain reaction through the adjacent Nyoki. Development: Masamitsu Niitani, Compile founder and the creator of Puyo Puyo, founded Compile Maru in 2016, with Nyoki Nyoki: Tabidachi Hen being his first announced project. Niitani's intention was to eliminate the "complexity of the chains" in conventional puzzles, so that "puzzle beginners are welcome". During 2017, plans to bring the game to the Nintendo Switch were announced, with funds to be raised through a crowdfunding campaign.
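The clearing rule described above amounts to a flood fill with no minimum group size. The sketch below is illustrative only, not the game's actual code; the grid representation (colour strings with None for empty cells) and the function name clear_from are assumptions.

```python
def clear_from(grid, row, col):
    """Flood-fill from an activated cell, removing every Nyoki connected
    to it by same-colour adjacency. Returns the number of cells cleared.
    `grid` is a list of rows holding colour strings or None for empty."""
    colour = grid[row][col]
    if colour is None:
        return 0
    stack, cleared = [(row, col)], 0
    while stack:
        r, c = stack.pop()
        if not (0 <= r < len(grid) and 0 <= c < len(grid[0])):
            continue  # off the board
        if grid[r][c] != colour:
            continue  # empty, or a different colour: stop here
        grid[r][c] = None  # remove the Nyoki
        cleared += 1
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return cleared
```

Note that no group-size threshold is checked: the clear fires whenever the player drops an activator, matching the description above that players may accumulate as many adjacent Nyoki as they want.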
**Irrationality sequence** Irrationality sequence: In mathematics, a sequence of positive integers $a_n$ is called an irrationality sequence if it has the property that for every sequence $x_n$ of positive integers, the sum of the series $\sum_{n=1}^{\infty} \frac{1}{a_n x_n}$ exists (that is, it converges) and is an irrational number. The problem of characterizing irrationality sequences was posed by Paul Erdős and Ernst G. Straus, who originally called the property of being an irrationality sequence "Property P". Examples: The powers of two whose exponents are powers of two, $2^{2^n}$, form an irrationality sequence. However, although Sylvester's sequence 2, 3, 7, 43, 1807, 3263443, ... (in which each term is one more than the product of all previous terms) also grows doubly exponentially, it does not form an irrationality sequence. Indeed, letting $x_n = 1$ for all $n$ gives $\frac{1}{2} + \frac{1}{3} + \frac{1}{7} + \frac{1}{43} + \cdots = 1$, a series converging to a rational number. Likewise, the factorials, $n!$, do not form an irrationality sequence, because the sequence given by $x_n = n + 2$ for all $n$ leads to a series with a rational sum: $\sum_{n=1}^{\infty} \frac{1}{n!\,(n+2)} = \frac{1}{3} + \frac{1}{8} + \frac{1}{30} + \frac{1}{144} + \cdots = \frac{1}{2}$. Growth rate: For any sequence $a_n$ to be an irrationality sequence, it must grow at a rate such that $\limsup_{n\to\infty} \frac{\log \log a_n}{n} \geq \log 2$. This includes sequences that grow at a more than doubly exponential rate as well as some doubly exponential sequences that grow more quickly than the powers of powers of two. Every irrationality sequence must grow quickly enough that $\lim_{n\to\infty} a_n^{1/n} = \infty$. However, it is not known whether there exists such a sequence in which the greatest common divisor of each pair of terms is 1 (unlike the powers of powers of two) and for which $\lim_{n\to\infty} a_n^{1/2^n} < \infty$. Related properties: Analogously to irrationality sequences, Hančl (1996) has defined a transcendental sequence to be an integer sequence $a_n$ such that, for every sequence $x_n$ of positive integers, the sum of the series $\sum_{n=1}^{\infty} \frac{1}{a_n x_n}$ exists and is a transcendental number.
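The Sylvester counterexample is easy to check numerically with exact rational arithmetic. This is an illustrative sketch, not part of the article; the helper name sylvester is an assumption.

```python
from fractions import Fraction

def sylvester(n_terms):
    """Yield Sylvester's sequence: each term is one more than the
    product of all previous terms (2, 3, 7, 43, 1807, ...)."""
    product = 1
    for _ in range(n_terms):
        term = product + 1
        yield term
        product *= term

# Partial sums of the reciprocals approach 1, a rational number,
# which is why the sequence fails to be an irrationality sequence.
partial = sum(Fraction(1, a) for a in sylvester(6))
print(partial)  # 1 - 1/10650056950806, i.e. just below 1
```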
**Lipophobicity** Lipophobicity: Lipophobicity, also sometimes called lipophobia (from the Greek λιποφοβία, from λίπος lipos "fat" and φόβος phobos "fear"), is a chemical property of chemical compounds which means "fat rejection", literally "fear of fat". Lipophobic compounds are those not soluble in lipids or other non-polar solvents. From the other point of view, they do not absorb fats. "Oleophobic" (from the Latin oleum "oil", Greek ελαιοφοβικό eleophobico from έλαιο eleo "oil" and φόβος phobos "fear") refers to the physical property of a molecule that is seemingly repelled from oil. (Strictly speaking, there is no repulsive force involved; it is an absence of attraction.) The most common lipophobic substance is water. Fluorocarbons are also lipophobic/oleophobic in addition to being hydrophobic. Uses: A lipophobic coating has been used on the touchscreens of Apple's iPhones since the 3GS, their iPads, Nokia's N9 and Lumia devices, the HTC HD2, the Blackberry DTEK50, the Hero and Flyer, and many other phones to repel fingerprint oil, which aids in preventing and cleaning fingerprint marks. Most "oleophobic" coatings used on mobile devices are fluoropolymer-based solids (similar to Teflon, which was used on the HTC Hero) and are both lipophobic and hydrophobic. The oleophobic coating beads up the oils left behind by a user's fingers, making the screen easy to clean without smearing and smudging. This helps decrease the feasibility of a successful smudge attack. In addition to being lipophobic or oleophobic, perfluoropolyether coatings impart exceptional lubricity to touch screens and give them a "slick feel" that eases their use. Use of isopropyl alcohol wipes to clean the screen should not damage or remove the coating. DIY products exist to restore or add an oleophobic coating to devices lacking one.
**Four discourses** Four discourses: Four discourses is a concept developed by French psychoanalyst Jacques Lacan. He argued that there were four fundamental types of discourse, which he called Master, University, Hysteric and Analyst, and suggested that these relate dynamically to one another. Lacan's theory of the four discourses was initially developed in 1969, perhaps in response to the events of social unrest during May 1968 in France, but also through his discovery of what he believed were deficiencies in the orthodox reading of the Oedipus complex. The four discourses theory is presented in his seminar L'envers de la psychanalyse and in Radiophonie, where he starts using "discourse" as a social bond founded in intersubjectivity. He uses the term discourse to stress the transindividual nature of language: speech always implies another subject. Necessity of formalising psychoanalysis: Prior to the development of the four discourses, the primary guideline for clinical psychoanalysis was Freud's Oedipus complex. In his Seminar of 1969–70, Lacan argues that the terrifying Oedipal father that Freud invoked was already castrated at the point of intervention. The castration was symbolic rather than physical. In an effort to stem analysts' tendency to project their own imaginary readings and neurotic fantasies onto psychoanalysis, Lacan worked to formalise psychoanalytic theory with mathematical functions, with renewed focus on the semiology of Ferdinand de Saussure. This would ensure that only a minimum of teaching is lost when communicated and also provide the conceptual architecture to limit the associations of the analyst. Structure: Discourse, in the first place, refers to a point where speech and language intersect. The four discourses represent the four possible formulations of the symbolic network which social bonds can take and can be expressed as the permutations of a four-term configuration showing the relative positions—the agent, the other, the product and the truth—of four terms: the subject, the master signifier, knowledge and objet petit a. Structure: Positions Agent (upper left), the speaker of the discourse. Other (upper right), what the discourse is addressed to. Product (lower right), what the discourse has created. Truth (lower left), what the discourse attempted to express. Structure: Variables S1: the dominant, ordering and sense-giving signifier of a discourse as it is received by the group, community or culture. S1 refers to "the marked circle of the field of the Other"; it is the Master-Signifier. S1 comes into play in a signifying battery forming the network of knowledge. S2: what is ordered by or set in motion by S1. It is knowledge, the existing body of knowledge, the knowledge of the time. S2 is the "battery of signifiers, already there" at the place where "one wants to determine the status of a discourse as status of statement", that is, knowledge (savoir). $: The subject, or person, for Lacan is always barred in the sense that it is incomplete, divided. Just as we can never know the world around us except in the partial refractions of language and the domination of identification, so, too, we can never know ourselves. $ is the subject, marked by the unbroken line (trait unaire) which represents it and is different from the living individual, who is not the locus of this subject. a: the objet petit a or surplus-jouissance. In Lacan's psychoanalytic theory, objet petit a stands for the unattainable object of desire.
It is sometimes called the object cause of desire. Lacan always insisted that the term should remain untranslated, "thus acquiring the status of an algebraic sign". It is the object-waste, or the loss of the object, that occurred when the original division of the subject took place—the object that is the cause of desire: the plus-de-jouir. Structure: Four Discourses Discourse of the Master We see a barred subject ($) positioned as the master signifier's truth, which is itself positioned as the discourse's agent for all other signifiers (S2); this illustrates the structure of the dialectic of the master and the slave. The master (S1) is the agent that puts the other (S2) to work: the product is a surplus, objet a, that the master struggles to appropriate for himself. In modern society, an example of this discourse can be found in so-called "family-like" work environments, which tend to hide direct subordination under the mask of "favorable" submission to the master's truth that generates value. In principle, the master reaches for the truth as the fulfillment of his or her castratedness through the subject's work. Based on Hegel's master–slave dialectic. Structure: Discourse of the University Knowledge, in the position of the agent, is handed down by the institution, which legitimises the master signifier (S1) taking the place of the discourse's truth. The impossibility of satisfying one's need with knowledge (a structural impossibility) produces a barred subject ($) as the discourse's product, and the cycle repeats itself, the subject remaining slavish to the institution's values in order to fulfill its castratedness. The discourse's truth is positioned apart from this loop and is never the direct object of the subject; the institution controls the subject's objet a and defines the subject's master signifiers. The pathological symptom of an agent in this discourse is seeking fulfillment of their castratedness through enjoying the castratedness of their subject. Structure: Discourse of the Analyst The position of the agent — the analyst — is occupied by the objet a of the analysand. The analyst's silence leads to a reverse hysterisation: the analyst becomes a mirror that turns the question back on the analysand, thus embodying the barred subject's desire and letting the symptom speak itself through speech, where it can be interpreted by the analyst. The master signifier of the analysand emerges as a product of this role. Hidden knowledge (S2), positioned as the discourse's truth, stands for both the analyst's interpretative technique and the knowledge acquired from the subject. Structure: Discourse of the Hysteric Despite its pathological aura, the hysteric's discourse exhibits the most common mode of speech, blurring the line between the clinical image and the otherness of social settings. The objet a in the position of truth is defined by the interrogative nature of the subject's address ('Who am I?') as well as by the thirst for satisfaction through knowledge. This drives the barred subject and sets the agent's master signifiers in motion, leading the agent to produce new knowledge (the discourse's product) in a futile attempt to give the barred subject an answer that would fulfill the subject's castratedness. (In the Discourse of the Analyst, Lacan breaks this pathological cycle by purposefully leaving the question unanswered, reversing the discourse and putting the analyst in the place of the hysteric's desire.) However, the subject's objet a is a search for the agent's objet a; thus, without being subjected as in the Discourse of the University, the hysteric ends up gathering knowledge instead of their objet a truth.
Relevance for cultural studies: Slavoj Žižek uses the theory to explain various cultural artefacts, including Don Giovanni and Parsifal.
**Aluminum can** Aluminum can: An aluminum can (British English: aluminium can) is a single-use container for packaging made primarily of aluminum. It is commonly used for food and beverages such as olives and soup, but also for products such as oil, chemicals, and other liquids. Global production is 180 billion cans annually, constituting the largest single use of aluminum globally. Usage: Use of aluminum in cans began in 1957. Aluminum offers greater malleability, resulting in ease of manufacture; this gave rise to the two-piece can, where all but the top of the can is simply stamped out of a single piece of aluminum, rather than constructed from two pieces of steel. The inside of the can is lined by spray coating an epoxy lacquer or polymer to protect the aluminum from being corroded by acidic contents such as carbonated beverages, and to prevent the contents from acquiring a metallic taste. The epoxy may contain bisphenol A. A label is either printed directly on the side of the can or glued to the outside of the curved surface, indicating its contents. Usage: Most aluminum cans are made of two pieces. The bottom and body are "drawn" or "drawn and ironed" from a flat plate or shallow cup. After filling, the can "end" is sealed onto the top of the can. This is supplemented by a sealing compound to ensure that the top is airtight. Usage: The advantages of aluminum over steel (tinplate) cans include light weight, competitive cost, easy-open aluminum ends (no need for a can opener), a clean appearance, freedom from rust, and ease of pressing into shape. The easy-open aluminum end for beverage cans was developed by Alcoa in 1962 for the Pittsburgh Brewing Company and is now used in nearly all of the canned beer market. Recycling: Aluminum cans can be made with recycled aluminum. In 2017, 3.8 million tons of aluminum were generated in the US, of which 0.62 million tons were recycled - a recycling rate of 16%. According to estimates from the Aluminum Association, a large amount of aluminum remains unrecycled in the US, where roughly $700 million worth of cans end up in landfills each year. In 2012, 92% of the aluminum beverage cans sold in Switzerland were recycled. Cans are the most recycled beverage container, at a rate of 69% worldwide. One issue is that the top of the can is made from a blend of aluminum and magnesium to increase its strength. When the can is melted for recycling, the mixture is unsuitable for either the top or the bottom/side. Instead of mixing recycled metal with more aluminum (to soften it) or magnesium (to harden it), a new approach uses annealing to produce an alloy that works for both. The aluminum can is also considered the most valuable recyclable material in an average recycling bin. It is estimated that Americans throw away nearly 1 billion dollars a year in wasted aluminum. The aluminum industry pays nearly 800 million dollars a year for recycled aluminum since it is so versatile. Because of the advantages of aluminum packaging (shelf life, durability, food-grade factor) over plastics, it is considered an alternative to PET bottles, with the possibility of replacing the majority of them in the coming decades. Cans as collectibles: Some people collect cans as a hobby. Can collections can be exclusive to one sector only, e.g., some collectors may collect soda cans only, while others may dedicate themselves to collecting beer cans or oil cans exclusively, but some collectors may collect cans regardless of the type of can.
One aspect that may make someone interested in building a can collection as a hobby is the variety of cans available worldwide, promoting such things as films, musical albums and tours, sporting teams and events, countries, ideals and even some non-food or petrol-oriented brands and companies. Celebrities can also be featured on collectible cans; such was the case of tennis player Andre Agassi, who had a set of four Pepsi Max soda cans dedicated to him in 1996. Davide Andreani of Italy is in the Guinness Book of World Records for having the largest collection of soda cans of one specific brand in the world, with over 20,000 cans in his collection. According to a website named canmuseum.com, the largest collection of Pepsi Cola cans belongs to Chris Cavaletti, also of Italy, who owned 12,402 Pepsi Cola cans from 81 countries as of 2022, while the largest collection of Coca-Cola soda cans belonged to Gary Feng of Canada, with 11,308 variations of Coca-Cola cans from 108 countries. William B. Christensen of the United States owns the largest collection of beer cans, with 75,000 from 125 countries, and Allan Green, also of the United States, holds the largest collection of wine cans, at 449. Some webpages are dedicated to the hobby of can collecting.
**Semi-structured model** Semi-structured model: The semi-structured model is a database model where there is no separation between the data and the schema, and the amount of structure used depends on the purpose. The advantages of this model are the following: It can represent the information of some data sources that cannot be constrained by schema. It provides a flexible format for data exchange between different types of databases. It can be helpful to view structured data as semi-structured (for browsing purposes). The schema can easily be changed. The data transfer format may be portable. The primary trade-off being made in using a semi-structured database model is that queries cannot be made as efficiently as in a more constrained structure, such as in the relational model. Typically the records in a semi-structured database are stored with unique IDs that are referenced with pointers to their location on disk. This makes navigational or path-based queries quite efficient, but for doing searches over many records (as is typical in SQL), it is not as efficient because it has to seek around the disk following pointers. Semi-structured model: The Object Exchange Model (OEM) is one standard way to express semi-structured data; another way is XML.
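As a small illustration (a hypothetical sketch, not taken from the source), the records below live in the same collection yet each carries its own structure, in the spirit of OEM or XML:

```python
# Semi-structured records: each entry describes its own shape, so two
# records in the same collection may expose different fields.
people = [
    {"name": "Ada", "email": "ada@example.com"},            # no phone field
    {"name": "Boris",
     "phones": ["555-0100", "555-0199"],                    # repeated field
     "address": {"city": "Zurich"}},                        # nested record
]

# Queries must tolerate missing structure - the flexibility (and the
# query-efficiency cost) described above.
for person in people:
    print(person["name"], person.get("email", "<no email>"))
```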
**Fuzzy measure theory** Fuzzy measure theory: In mathematics, fuzzy measure theory considers generalized measures in which the additive property is replaced by the weaker property of monotonicity. The central concept of fuzzy measure theory is the fuzzy measure (also called a capacity), which was introduced by Choquet in 1953 and independently defined by Sugeno in 1974 in the context of fuzzy integrals. There exist a number of different classes of fuzzy measures, including plausibility/belief measures, possibility/necessity measures, and probability measures, which are a subset of classical measures. Definitions: Let $X$ be a universe of discourse, $C$ be a class of subsets of $X$, and $E, F \in C$. A function $g: C \to \mathbb{R}$ satisfying $\emptyset \in C \Rightarrow g(\emptyset) = 0$ and $E \subseteq F \Rightarrow g(E) \leq g(F)$ is called a fuzzy measure. A fuzzy measure is called normalized or regular if $g(X) = 1$. Properties of fuzzy measures: A fuzzy measure is: additive if for any $E, F \in C$ such that $E \cap F = \emptyset$, we have $g(E \cup F) = g(E) + g(F)$; supermodular if for any $E, F \in C$, we have $g(E \cup F) + g(E \cap F) \geq g(E) + g(F)$; submodular if for any $E, F \in C$, we have $g(E \cup F) + g(E \cap F) \leq g(E) + g(F)$; superadditive if for any $E, F \in C$ such that $E \cap F = \emptyset$, we have $g(E \cup F) \geq g(E) + g(F)$; subadditive if for any $E, F \in C$ such that $E \cap F = \emptyset$, we have $g(E \cup F) \leq g(E) + g(F)$; symmetric if for any $E, F \in C$, $|E| = |F|$ implies $g(E) = g(F)$; Boolean if for any $E \in C$, we have $g(E) = 0$ or $g(E) = 1$. Understanding the properties of fuzzy measures is useful in application. When a fuzzy measure is used to define a function such as the Sugeno integral or Choquet integral, these properties are crucial in understanding the function's behavior. For instance, the Choquet integral with respect to an additive fuzzy measure reduces to the Lebesgue integral. In discrete cases, a symmetric fuzzy measure results in the ordered weighted averaging (OWA) operator. Submodular fuzzy measures result in convex functions, while supermodular fuzzy measures result in concave functions when used to define a Choquet integral. Möbius representation: Let $g$ be a fuzzy measure. The Möbius representation of $g$ is given by the set function $M$, where for every $E \subseteq X$, $M(E) = \sum_{F \subseteq E} (-1)^{|E \setminus F|}\, g(F)$. The equivalent axioms in Möbius representation are: $M(\emptyset) = 0$, and $\sum_{F \subseteq E,\, i \in F} M(F) \geq 0$ for all $E \subseteq X$ and all $i \in E$. A fuzzy measure in Möbius representation $M$ is called normalized if $\sum_{F \subseteq X} M(F) = 1$. The Möbius representation can be used to give an indication of which subsets of $X$ interact with one another. For instance, an additive fuzzy measure has Möbius values all equal to zero except for singletons. The fuzzy measure $g$ in standard representation can be recovered from the Möbius form using the Zeta transform: $g(E) = \sum_{F \subseteq E} M(F), \forall E \subseteq X$. Simplification assumptions for fuzzy measures: Fuzzy measures are defined on a semiring of sets or monotone class, which may be as granular as the power set of $X$, and even in discrete cases the number of variables can be as large as $2^{|X|}$. For this reason, in the context of multi-criteria decision analysis and other disciplines, simplification assumptions on the fuzzy measure have been introduced so that it is less computationally expensive to determine and use. For instance, when it is assumed the fuzzy measure is additive, it will hold that $g(E) = \sum_{i \in E} g(\{i\})$ and the values of the fuzzy measure can be evaluated from the values on $X$. Similarly, a symmetric fuzzy measure is defined uniquely by $|X|$ values. Two important fuzzy measures that can be used are the Sugeno (or $\lambda$-) fuzzy measure and $k$-additive measures, introduced by Sugeno and Grabisch respectively.
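For a small universe, the Möbius representation and the Zeta transform above can be computed directly. The sketch below is illustrative, assuming the measure g is supplied as a dict keyed by frozenset over all subsets of X; the helper names are hypothetical.

```python
from itertools import chain, combinations

def subsets(E):
    """All subsets of E, as frozensets."""
    E = list(E)
    return map(frozenset, chain.from_iterable(
        combinations(E, k) for k in range(len(E) + 1)))

def moebius(g, X):
    """M(E) = sum over F subseteq E of (-1)^{|E minus F|} g(F)."""
    return {E: sum((-1) ** len(E - F) * g[F] for F in subsets(E))
            for E in subsets(X)}

def zeta(M, X):
    """Recover g from M: g(E) = sum over F subseteq E of M(F)."""
    return {E: sum(M[F] for F in subsets(E)) for E in subsets(X)}

# Example with binary-exact values so the round trip is exact:
X = frozenset("ab")
g = {frozenset(): 0.0, frozenset("a"): 0.25,
     frozenset("b"): 0.5, frozenset("ab"): 1.0}
M = moebius(g, X)
print(M[X])              # 0.25 > 0: {a,b} carry a positive interaction
assert zeta(M, X) == g   # the Zeta transform inverts the Moebius transform
```

For an additive measure, every non-singleton Möbius value vanishes, matching the remark above.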
Simplification assumptions for fuzzy measures: Sugeno λ-measure The Sugeno $\lambda$-measure is a special case of fuzzy measures defined iteratively. It has the following definition: Definition Let $X = \{x_1, \ldots, x_n\}$ be a finite set and let $\lambda \in (-1, +\infty)$. A Sugeno $\lambda$-measure is a function $g: 2^X \to [0, 1]$ such that $g(X) = 1$, and if $A, B \subseteq X$ (alternatively $A, B \in 2^X$) with $A \cap B = \emptyset$, then $g(A \cup B) = g(A) + g(B) + \lambda g(A) g(B)$. As a convention, the value of $g$ at a singleton set $\{x_i\}$ is called a density and is denoted by $g_i = g(\{x_i\})$. In addition, we have that $\lambda$ satisfies the property $\lambda + 1 = \prod_{i=1}^{n} (1 + \lambda g_i)$. Tahani and Keller as well as Wang and Klir have shown that once the densities are known, it is possible to use the preceding polynomial to obtain the value of $\lambda$ uniquely. Simplification assumptions for fuzzy measures: k-additive fuzzy measure The $k$-additive fuzzy measure limits the interaction between the subsets $E \subseteq X$ to size $|E| = k$. This drastically reduces the number of variables needed to define the fuzzy measure, and as $k$ can be anything from 1 (in which case the fuzzy measure is additive) to $|X|$, it allows for a compromise between modelling ability and simplicity. Definition A discrete fuzzy measure $g$ on a set $X$ is called $k$-additive ($1 \leq k \leq |X|$) if its Möbius representation verifies $M(E) = 0$ whenever $|E| > k$ for any $E \subseteq X$, and there exists a subset $F$ with $k$ elements such that $M(F) \neq 0$. Shapley and interaction indices: In game theory, the Shapley value or Shapley index is used to indicate the weight of each player in a game. Shapley values can be calculated for fuzzy measures in order to give some indication of the importance of each singleton. In the case of an additive fuzzy measure, the Shapley value of each singleton equals its measure, $\phi(i) = g(\{i\})$. For a given fuzzy measure $g$, with $|X| = n$, the Shapley index for every $i \in X$ is: $\phi(i) = \sum_{E \subseteq X \setminus \{i\}} \frac{(n - |E| - 1)!\, |E|!}{n!} \left[ g(E \cup \{i\}) - g(E) \right]$. The Shapley value is the vector $\phi(g) = (\phi(1), \ldots, \phi(n))$.
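Once the densities are given, λ is the unique nontrivial root of the polynomial above in (−1, +∞), so it can be found by bisection. The following is a minimal sketch (hypothetical code, not from the cited literature), assuming densities in [0, 1]:

```python
def sugeno_lambda(densities, tol=1e-12, iters=200):
    """Solve lambda + 1 = prod(1 + lambda * g_i) for the nontrivial root.
    The sign of lambda is fixed by the density sum: lambda > 0 when the
    densities sum below 1, and -1 < lambda < 0 when they sum above 1."""
    s = sum(densities)
    if abs(s - 1.0) < 1e-9:
        return 0.0  # the measure is simply additive

    def h(lam):  # prod(1 + lam*g_i) - (lam + 1); its nonzero root is lambda
        prod = 1.0
        for g in densities:
            prod *= 1.0 + lam * g
        return prod - (1.0 + lam)

    if s < 1.0:                 # root lies in (0, infinity)
        lo, hi = tol, 1.0
        while h(hi) < 0.0:      # expand until the sign flips
            hi *= 2.0
    else:                       # root lies in (-1, 0)
        lo, hi = -1.0 + tol, -tol
    for _ in range(iters):      # plain bisection on the bracketed root
        mid = 0.5 * (lo + hi)
        if h(lo) * h(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(sugeno_lambda([0.2, 0.3, 0.4]))  # ~0.372: densities sum to 0.9 < 1
```

Since λ > 0 in this example, the resulting measure is superadditive on disjoint sets, consistent with the defining identity g(A∪B) = g(A) + g(B) + λ g(A) g(B).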
**Basement apartment** Basement apartment: A basement apartment is an apartment located below street level, underneath another structure—usually an apartment building, but possibly a house or a business. Cities in North America are beginning to recognize these units as a vital source of housing in urban areas and legally define them as an accessory dwelling unit or "ADU". Basement apartment: Rent in basement apartments is usually much lower than it is in above-ground units, due to a number of deficiencies common to basement apartments. The apartments are usually cramped, and tend to be noisy, both from uninsulated building noises and from traffic on the adjacent street. They are also particularly vulnerable to burglary, especially those with windows at sidewalk level. In some instances, residential use of below-ground space is illegal, but is done anyway in order for the building owner to generate extra income. Homeowners will typically rent out basement apartments to tenants as a way to earn additional income so as to offset living expenses. Owning a home with a basement apartment can be an investment. Tenants provide income to the home owner, reducing expenses, and equity grows as the value of the property increases. Health risks to tenants: Some health risks to people who live in basements have been noted, for example mold, radon, and risk of injury or death due to fire. It has been suggested that a basement suite is the last type of dwelling a tenant should look for because of the risk of mold. However, due to demand for affordable housing, basement suites are often the only available housing for some low-income families and individuals. Airborne spores can cause mold to grow in damp and unventilated areas, such as basements. Presence of mold can lead to "respiratory symptoms, respiratory infections, allergic rhinitis and asthma", as well as personal belongings being contaminated by mold. Basement suite tenants are more likely to be injured or die due to a fire in the house. Many landlords do not follow fire code regulations, and often such regulations are not enforced by governments. During flooding, these apartments are extremely dangerous. When Hurricane Ida passed over the northeast of the United States as an extratropical storm, most of the deaths were caused by flooding in basement apartments. Notable people: A number of noted artistic achievements have occurred in basement apartments occupied by struggling authors, painters, and musicians. Andy Warhol made one of his earliest films, Mrs. Warhol (black-and-white, 66 minutes), in the basement apartment of his house, where his mother (Julia Warhola) lived. Ruth McKenney based a series of stories in The New Yorker, later republished in the book My Sister Eileen, on her experiences living with her sister in a moldy, one-room basement apartment at 14 Gay Street in Greenwich Village, directly adjoining the Christopher Street subway station on the 1 and 2 trains, for which she paid $45 a month (equivalent to $940 in 2022). The apartment was burgled within the first week of the six months they lived there.
**Grass Labyrinth** Grass Labyrinth: Grass Labyrinth (草迷宮, Kusa meikyū) is a Japanese film directed by Shūji Terayama which was released in France in 1979 and in Japan in 1983. Plot: A surreal excursion into a young man's subconscious as he searches for the words to a tune that his mother may have sung to him as a child. The dreamlike images culminate in a scene of a girl's naked body covered with calligraphic characters. Cast: Hiroshi Mikami as Akira (as a boy) Takeshi Wakamatsu as Akira (as a man) Keiko Niitaka as Mother Juzo Itami as Principal / Priest / Old man Miho Fukuya as Girl Masaharu Satō Release: Grass Labyrinth was originally one of the installments in a French movie package called Private Collections, the other two sections being directed by Walerian Borowczyk and Just Jaeckin, both associated with avant-garde films with strong sexual content. Grass Labyrinth was the longest of the three and was later (1983) released as a separate film in Japan. Awards and nominations: 8th Hochi Film Award Won: Best Actor - Juzo Itami
**Eraser** Eraser: An eraser (also known as a rubber in some Commonwealth countries, including South Africa, from the material first used) is an article of stationery that is used for removing marks from paper or skin (e.g. parchment or vellum). Erasers have a rubbery consistency and come in a variety of shapes, sizes, and colors. Some pencils have an eraser on one end. Less expensive erasers are made from synthetic rubber and synthetic soy-based gum, but more expensive or specialized erasers are made from vinyl, plastic, or gum-like materials. Eraser: At first, erasers were invented to erase mistakes made with a pencil; later, more abrasive ink erasers were introduced. The term is also used for things that remove marks from chalkboards and whiteboards. History: Before the rubber erasers used today, tablets of wax were used to erase lead or charcoal marks from paper. Bits of rough stone such as sandstone or pumice were used to remove small errors from parchment or papyrus documents written in ink. Crustless bread was also used; a Meiji period (1868–1912) Tokyo student said: "Bread erasers were used in place of rubber erasers, and so they would give them to us with no restriction on amount. So we thought nothing of taking these and eating a firm part to at least slightly satisfy our hunger." In 1770 English engineer Edward Nairne is reported to have developed the first widely marketed rubber eraser, for an inventions competition. Until that time the material was known as gum elastic or by its Native American name (via French) caoutchouc. Nairne sold natural rubber erasers for the high price of three shillings per half-inch cube. According to Nairne, he inadvertently picked up a piece of rubber instead of breadcrumbs, discovered rubber's erasing properties, and began selling rubber erasers. The invention was described by Joseph Priestley on April 15, 1770, in a footnote: "I have seen a substance excellently adapted to the purpose of wiping from paper the mark of black-lead-pencil. ... It is sold by Mr. Nairne, Mathematical Instrument-Maker, opposite the Royal-Exchange." In 1770 the word rubber was in general use for any object used for rubbing; the word became attached to the new material sometime between 1770 and 1778. However, raw rubber was perishable. In 1839 Charles Goodyear discovered the process of vulcanization, a method that would cure rubber, making it durable. Rubber erasers became common with the advent of vulcanization. History: On March 30, 1858, Hymen Lipman of Philadelphia, United States, received the first patent for attaching an eraser to the end of a pencil. It was later invalidated because it was determined to be simply a composite of two devices rather than an entirely new product. Erasers may be free-standing blocks (block and wedge erasers), or conical caps that can slip onto the end of a pencil (cap erasers). A barrel or click eraser is a device shaped like a pencil, but instead of being filled with pencil lead, its barrel contains a retractable cylinder of eraser material (most commonly soft vinyl). Many, but not all, wooden pencils are made with attached erasers. Novelty erasers made in shapes intended to be amusing are often made of hard vinyl, which tends to smear heavy markings when used as an eraser. Types: Pencil or cap erasers Originally made from natural rubber, but now usually from cheaper SBR, this type contains mineral fillers and an abrasive such as pumice, with a plasticizer such as vegetable oil.
They are relatively hard (in order to remain attached to the pencil) and frequently colored pink. They can also be permanently attached to the end of a pencil with a ferrule. Types: Artist's gum eraser The stylized word "Art gum" was first used in 1903 and trademarked in the United States in 1907. That type of eraser was originally made from oils such as corn oil vulcanized with sulfur dichloride, although it may now be made from natural or synthetic rubber or vinyl compounds. It is very soft yet retains its shape and is not mechanically plastic, but crumbles as it is used. It is especially suited to cleaning large areas without damaging the paper. However, such erasers are so soft as to be imprecise in use. The removed graphite is carried away in the crumbles, leaving the eraser clean but producing a lot of eraser residue. This residue must then be brushed away with care, as the eraser particles are coated with graphite and can make new marks. Art gum erasers are traditionally tan or brown, but some are blue. Types: Vinyl erasers High-quality plasticized vinyl or other "plastic" erasers, originally trademarked Mylar in the mid-20th century, are softer, non-abrasive, and erase more cleanly than standard rubber erasers. This is because the removed graphite does not remain on the eraser as it does with rubber erasers, but is instead absorbed into the discarded vinyl scraps. Being softer and non-abrasive, they are less likely to damage canvas or paper. Engineers favor this type of eraser for work on technical drawings due to its gentleness on paper, with less smearing of surrounding areas. They often come in white and can be found in a variety of shapes. More recently, very low-cost erasers have been manufactured from highly plasticized vinyl compounds and made in decorative shapes. Types: Elastomer erasers In these types, a thermoplastic elastomer combines a styrene resin elastomer and an olefin resin. These erasers have better erasability for pencil marks compared to conventional vinyl erasers. Elastomers can be formed into thin cylindrical or other shapes to be used as extendable erasers. Types: Kneaded erasers Kneaded erasers (called putty rubbers outside the United States) have a plastic consistency and are common to most artists' standard toolkits. They can be pulled into a point for erasing small areas and tight detail, molded into a textured surface and used as a reverse stamp to give texture, or used in a "blotting" manner to lighten lines or shading without completely erasing them. They gradually lose their efficacy and resilience as they become infused with particles picked up from erasing and from their environment. They are not suited to erasing large areas because of their tendency to deform under vigorous erasing. Types: Poster putty Commonly sold in retail outlets with school supplies and home improvement products, this soft, malleable putty appears in many colors and under numerous brand names. Intended to adhere posters and prints to walls without damaging the underlying wall surface, poster putty works much the same as traditional kneaded erasers, but with greater tack and, in some circumstances, lifting strength. Poster putty does not erase so much as lighten, by directly pulling particles of graphite, charcoal or pastel from a drawing. In this regard, poster putty does not smudge or damage work in the process. Repeatedly touching the putty to a drawing pulls ever more medium free, gradually lightening the work in a controlled fashion.
Poster putty can be shaped into fine points or knife edges, making it ideal for detailed or small areas of work. It can be rolled across a surface to create visual textures. Poster putty loses its efficacy with use, becoming less tacky as the material grows polluted with debris and oils from the user's skin. Types: Electric erasers The electric eraser was invented in 1932 by Albert J. Dremel of Racine, Wisconsin, United States. It used a replaceable cylinder of eraser material held by a chuck driven on the axis of a motor. The speed of rotation allowed less pressure to be used, which minimized paper damage. Originally standard pencil-eraser rubber was used, later replaced by higher-performance vinyl. Dremel went on to develop an entire line of hand-held rotary power tools. Types: Fiberglass erasers A fiberglass eraser, a bundle of very fine glass fibers, can be used for erasing and other tasks requiring abrasion. Typically the eraser is a pen-shaped device with a replaceable insert of glass fibers, which wear down in use. The fibers are very hard; in addition to removing pencil and pen markings, such erasers are used for cleaning traces on electronic circuit boards to facilitate soldering, removing rust, and many other applications. As an example of an unusual use, a fiberglass eraser was used for preparing a pterosaur fossil embedded in a very hard and massive limestone. Because fiberglass erasers shed fiberglass dust when used, care must be taken during and after use to avoid accidental contamination with this abrasive dust in sensitive areas of the body, especially the eyes. Types: Other Felt chalkboard erasers or blackboard dusters are used to erase chalk markings on a chalkboard. Chalk writing leaves light-colored particles weakly adhering to a dark surface (e.g., white on black, or yellow on green); it can be rubbed off with a soft material, such as a rag. Erasers for chalkboards are made with a block of plastic or wood, much larger than an eraser for pen or pencil, with a layer of felt on one side. The block is held in the hand and the felt rubbed against the writing, which it easily wipes off. Chalk dust is released, some of which sticks to the eraser until it is cleaned, usually by hitting it against a hard surface. Types: Various types of eraser are used to erase a whiteboard, depending upon the board and the type of ink used. Dedicated erasers that are supplied with some ballpens and permanent markers are intended only to erase the ink of the writing instrument they are made for; sometimes this is done by making the ink bond more strongly to the material of the eraser than to the surface it was applied to.
**Back bacon** Back bacon: Back bacon is a cut of bacon that includes the pork loin from the back of the pig. It may also include a portion of the pork belly in the same cut. It is much leaner than side bacon made only from the pork belly. Back bacon is derived from the same cut used for pork chops. It is the most common cut of bacon used in British and Irish cuisine, where both smoked and unsmoked varieties of bacon are found. Canadian bacon: Canadian bacon (or Canadian-style bacon) is the term commonly used in the United States for a form of back bacon that is cured, smoked and fully cooked, trimmed into cylindrical medallions, and sliced thick. The name was created when this product was first imported from Toronto to New York City. "Canadian" bacon is made only from the lean eye of the loin and is ready to eat. Its flavor is described as more ham-like than other types because of its lean cut.The term "Canadian bacon" is not used in Canada, where the product is generally known simply as "back bacon" while "bacon" alone refers to the same streaky pork belly bacon as in the United States. Peameal bacon is a variety of back bacon popular in Ontario where the loin is wet cured before being rolled in cornmeal (originally yellow pea meal); it is unsmoked.
**Routing loop** Routing loop: A routing loop is a common problem with various types of networks, particularly computer networks. Routing loops are formed when an error occurs in the operation of the routing algorithm and, as a result, in a group of nodes the path to a particular destination forms a loop. In the simplest version, a routing loop of size two, node A thinks that the path to some destination (call it C) is through its neighbouring node, node B. At the same time, node B thinks that the path to C goes through node A. Routing loop: Thus, whenever traffic for C arrives at either A or B, it will loop endlessly between A and B, unless some mechanism exists to prevent that behaviour. How a routing loop can form: For example, in this illustration, node A is transmitting data to node C via node B. If the link between nodes B and C goes down and B has not yet informed node A about the breakage, node A transmits the data to node B assuming that the link A-B-C is operational and of lowest cost. Node B knows of the broken link and tries to reach node C via node A, thus sending the original data back to node A. Node A then receives the data that it originated back from node B and consults its routing table. Node A's routing table will say that it can reach node C via node B (because it still has not been informed of the break), so it sends its data back to node B, creating an infinite loop. This routing loop problem is also called a two-node loop. How a routing loop can persist: Consider now what happens if both the link from A to C and the link from B to C vanish at the same time (this can happen if node C has crashed). A believes that C is still reachable through B, and B believes that C is reachable through A. In a simple reachability protocol, such as EGP, the routing loop will persist forever. How a routing loop can persist: In a naive distance-vector protocol, such as the Routing Information Protocol, the loop will persist until the metrics for C reach infinity (in RIP, the maximum number of routers that a packet can traverse is 15; the value 16 is considered infinity, and the packet is discarded). Prevention and mitigations: In a link-state routing protocol, such as OSPF or IS-IS, a routing loop disappears as soon as the new network topology is flooded to all the routers within the routing area. Assuming a sufficiently reliable network, this happens within a few seconds. Newer distance-vector routing protocols such as EIGRP, DSDV, and Babel have built-in loop prevention: they use algorithms that ensure that routing loops can never happen, not even transiently. Older routing protocols such as RIP and IGRP do not implement the newest forms of loop prevention and only implement mitigations such as split horizon, route poisoning, and holddown timers.
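The count-to-infinity behaviour in the two-node case can be simulated in a few lines. The sketch below is purely illustrative (not code from any routing implementation); it models two nodes that have both lost their direct link to C and keep advertising stale routes to each other, with RIP's value of 16 as infinity.

```python
INFINITY = 16  # RIP treats a metric of 16 as unreachable

# Each node's current metric to destination C. The direct links to C have
# just failed; each node still believes the other can reach C in 1 hop,
# so each installs "neighbour's old metric (1) + 1".
metric = {"A": 2, "B": 2}

rounds = 0
while metric["A"] < INFINITY or metric["B"] < INFINITY:
    # One exchange: each node hears the other's metric and, having no
    # alternative route, installs neighbour's metric + 1.
    metric["A"], metric["B"] = (min(metric["B"] + 1, INFINITY),
                                min(metric["A"] + 1, INFINITY))
    rounds += 1

print(f"routes to C reach infinity after {rounds} exchanges")  # 14
```

Split horizon would suppress these advertisements in the first place, which is why it is listed among the mitigations above.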
**Text-free user interface** Text-free user interface: A text-free user interface is a user interface (UI) based wholly on graphical UI techniques, without any writing. Text-free UIs are employed in areas where written language may not be understood by the user: for example, with young children, in international UIs where localisation is not feasible, or where users may be illiterate.
**Secondary electrospray ionization** Secondary electrospray ionization: Secondary electrospray ionization (SESI) is an ambient ionization technique for the analysis of trace concentrations of vapors, in which a nano-electrospray produces charging agents that collide with the analyte molecules directly in the gas phase. In the subsequent reaction, the charge is transferred and the vapors are ionized; most molecules get protonated (in positive mode) or deprotonated (in negative mode). SESI works in combination with mass spectrometry or ion-mobility spectrometry. History: The fact that trace concentrations of gases in contact with an electrospray plume are efficiently ionized was first observed by Fenn and colleagues, when they noted that tiny concentrations of plasticizers produced intense peaks in their mass spectra. However, it was not until 2000 that this problem was reframed as a solution, when Hill and coworkers used an electrospray to ionize molecules in the gas phase and named the technique secondary electrospray ionization. In 2007, the almost simultaneous works of Zenobi and Pablo Sinues applied SESI to breath analysis for the first time, marking the beginning of a fruitful field of research. With sensitivities in the low pptv range (1 part in 10^12), SESI has been used in other applications where the detection of low-volatility vapors is important. History: Detecting low-volatility species in the gas phase is important because larger molecules tend to have higher biological significance. Low-volatility species have been overlooked because it is technically difficult to detect them, as they occur in very low concentrations and tend to condense in the inner piping of instruments. However, as this problem is solved, and new instruments are able to handle larger and more specific molecules, the ability to perform on-line, real-time analysis of molecules naturally released into the air, even at minute concentrations, is attracting attention to this ionization technique. Principle of operation: In the early days of SESI, two ionization mechanisms were under debate: the droplet-vapor interaction model postulates that vapors are adsorbed onto the electrospray ionization (ESI) droplets and then re-emitted as the droplet shrinks, just as regular liquid-phase analytes are produced in electrospray ionization; on the other hand, the ion-vapor interaction model postulates that molecules and ions or small clusters collide, and the charge is transferred in this collision. Currently available commercial SESI sources operate at high temperature so as to better handle low-volatility species. In this regime, nanodroplets from the electrospray evaporate very quickly to form ion clusters in equilibrium. As a result, ion-vapor reactions dominate the majority of the ionization region. As charging ions originate from nano-droplets, and no high-energy ions are involved at any point of the ionization process, nor in the creation of ionizing agents, fragmentation in SESI is remarkably low, and the resulting spectra are very clean. This allows for a very high dynamic range, where low-intensity peaks are not affected by more abundant species. Some related techniques are laser ablation electrospray ionization, proton-transfer-reaction mass spectrometry and selected-ion flow-tube mass spectrometry. Applications: The main feature of SESI is that it can detect minuscule concentrations of low-volatility species in real time, with molecular masses as high as 700 Da, falling in the realm of metabolomics.
These molecules are naturally released by living organisms and are commonly detected as odors, which means that they can be analyzed non-invasively. SESI, combined with high-resolution mass spectrometry, provides time-resolved, biologically relevant information about living systems without interfering with them. This makes it possible to seamlessly capture the time evolution of their metabolism and their response to controlled stimuli. Applications: SESI has been widely used for breath gas analysis, for biomarker discovery, and for in vivo pharmacokinetic studies: Biomarker discovery Bacterial infection The identification of bacteria by their volatile organic compound fingerprint has been widely reported. SESI-MS has proven to be a robust technique for the identification of bacteria from cell cultures, and of infections in vivo from breath samples, after the development of libraries of vapor profiles. Other studies include in vivo differentiation between the critical pathogens Staphylococcus aureus and Pseudomonas aeruginosa, or differential detection among antibiotic-resistant S. aureus and its non-resistant strains. Bacterial infection detection from other fluids, such as saliva, has also been reported. Applications: Respiratory diseases Many chronic respiratory diseases lack an appropriate method of monitoring and of differentiation among disease stages. SESI-MS has been used to diagnose and distinguish exacerbations from breath samples in chronic obstructive pulmonary disease. Metabolic profiling of breath samples has accurately differentiated healthy individuals from idiopathic pulmonary fibrosis or obstructive sleep apnea patients. Cancer SESI-MS is being studied as a non-invasive detection system for cancer biomarkers in breath. A preliminary study differentiates patients suffering from breast neoplasia. Skin Volatiles released from the skin can be detected by sampling the ambient gas surrounding it, providing a fast method for detecting metabolic changes in fatty acid composition patterns. Applications: Pharmacokinetics Studying pharmacokinetics requires a robust technique because of the complex nature of the samples' matrix, be it plasma, urine, or breath. Recent studies show that secondary electrospray ionization (SESI) is a powerful technique for monitoring drug kinetics via breath analysis. Because breath is naturally produced, many data points can be readily collected, greatly increasing the time resolution of the measurement. In animal studies, this approach can reduce animal sacrifice while yielding pharmacokinetic curves with unmatched time resolution. In humans, SESI-MS non-invasive analysis of breath can help study the kinetics of drugs at a personalized level. Monitoring exogenously introduced species allows their specific metabolic pathways to be tracked, which reduces the risk of picking up confounding factors. Applications: Time-resolved metabolic analysis Introducing known stimuli, such as specific metabolites, isotopically labeled compounds, or other sources of stress, triggers metabolic changes which can be easily monitored with SESI-MS. Some examples of this include cell-culture volatile compound profiling, and metabolic studies used to trace plant or human metabolic pathways. Other applications Other applications developed with SESI-MS include detection of illicit drugs, detection of explosives, and food quality control monitoring.
**Advanced work** Advanced work: An advanced work, advance-work or advanced outwork is a fortification or outwork in front of the main defensive building or castle. In the Middle Ages in the Holy Roman Empire, advanced works, known as Vorwerke (singular: Vorwerk), were commonly found in smaller villages, located in front of the main castle. Within these advanced works often lived relatives of the knightly family whose ancestral seat was in the castle itself. As a result, the advanced works became manor houses and were known locally as schlosses. They were suitable for defending against minor attacks and offered the village population a degree of protection. In the case of major attacks they also acted as an early warning system for the castle. Because the advanced works were supposed to function autonomously, a link with agricultural estates was possible; such estates then became granges, or vorwerkende Gutshöfe. Later they also took over administrative tasks. Over the course of time these advanced works detached themselves from the castle and became independent estates.
**Integer literal** Integer literal: In computer science, an integer literal is a kind of literal for an integer whose value is directly represented in source code. For example, in the assignment statement x = 1, the string 1 is an integer literal indicating the value 1, while in the statement x = 0x10 the string 0x10 is an integer literal indicating the value 16, which is represented by 10 in hexadecimal (indicated by the 0x prefix). Integer literal: By contrast, in x = cos(0), the expression cos(0) evaluates to 1 (as the cosine of 0), but the value 1 is not literally included in the source code. More simply, in x = 2 + 2, the expression 2 + 2 evaluates to 4, but the value 4 is not literally included. Further, in x = "1" the "1" is a string literal, not an integer literal, because it is in quotes. The value of the string is 1, which happens to be an integer string, but this is semantic analysis of the string literal – at the syntactic level "1" is simply a string, no different from "foo". Parsing: Recognizing a string (sequence of characters in the source code) as an integer literal is part of the lexical analysis (lexing) phase, while evaluating the literal to its value is part of the semantic analysis phase. Within the lexer and phrase grammar, the token class is often denoted integer, with the lowercase indicating a lexical-level token class, as opposed to a phrase-level production rule (such as ListOfIntegers). Once a string has been lexed (tokenized) as an integer literal, its value cannot be determined syntactically (it is just an integer), and evaluation of its value becomes a semantic question. Parsing: Integer literals are generally lexed with regular expressions, as in Python. Evaluation: As with other literals, integer literals are generally evaluated at compile time, as part of the semantic analysis phase. In some cases this semantic analysis is done in the lexer, immediately on recognition of an integer literal, while in other cases it is deferred until the parsing stage, or until after the parse tree has been completely constructed. For example, on recognizing the string 0x10 the lexer could immediately evaluate this to 16 and store that (a token of type integer and value 16), or defer evaluation and instead record a token of type integer and value 0x10. Evaluation: Once literals have been evaluated, further semantic analysis in the form of constant folding is possible, meaning that literal expressions involving literal values can be evaluated at the compile phase. For example, in the statement x = 2 + 2, after the literals have been evaluated and the expression 2 + 2 has been parsed, it can be evaluated to 4, though the value 4 does not itself appear as a literal. Affixes: Integer literals frequently have prefixes indicating base, and less frequently suffixes indicating type. For example, in C++, 0x10ULL indicates the value 16 (written in hexadecimal) as an unsigned long long integer. Affixes: Common prefixes include: 0x or 0X for hexadecimal (base 16); 0, 0o or 0O for octal (base 8); 0b or 0B for binary (base 2). Common suffixes include: l or L for long integer; ll or LL for long long integer; u or U for unsigned integer. These affixes are somewhat similar to sigils, though sigils attach to identifiers (names), not literals. Digit separators: In some languages, integer literals may contain digit separators to allow digit grouping into more legible forms. If this is available, it can usually be done for floating point literals as well.
This is particularly useful for bit fields, and makes it easier to see the size of large numbers (such as a million) at a glance by subitizing rather than counting digits. It is also useful for numbers that are typically grouped, such as credit card numbers or social security numbers. Very long numbers can be further grouped by doubling up separators. Digit separators: Typically decimal numbers (base-10) are grouped in three-digit groups (representing one of 1000 possible values), binary numbers (base-2) in four-digit groups (one nibble, representing one of 16 possible values), and hexadecimal numbers (base-16) in two-digit groups (each digit is one nibble, so two digits are one byte, representing one of 256 possible values). Numbers from other systems (such as id numbers) are grouped following whatever convention is in use. Digit separators: Examples In Ada, C# (from version 7.0), D, Eiffel, Go (from version 1.13), Haskell (from GHC version 8.6.1), Java (from version 7), Julia, Perl, Python (from version 3.6), Ruby, Rust and Swift, integer literals and float literals can be separated with an underscore (_). There can be some restrictions on placement; for example, in Java they cannot appear at the start or end of the literal, nor next to a decimal point. Note that while the period, comma, and (thin) spaces are used in normal writing for digit separation, these conflict with their existing uses in programming languages as the radix point, list separator (and in C/C++, the comma operator), and token separator. Digit separators: Further examples: in C++14 (2014) and the next version of C as of 2022, C23, the apostrophe character may be used to separate digits arbitrarily in numeric literals. The underscore was proposed first, initially in 1993 and again for C++11, following other languages. However, this caused conflict with user-defined literals, so the apostrophe was proposed instead, as an "upper comma" (which is used in some other contexts).
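The affixes and digit separators described above can be recognized with a single regular expression, in line with the earlier remark that integer literals are generally lexed with regexes. The pattern below is an illustrative sketch rather than any particular language's official grammar; C-style bare-0 octal (e.g. 0644) is deliberately not handled, and the helper name eval_int_literal is an assumption.

```python
import re

INT_LITERAL = re.compile(r"""
    (?P<body>
        0[xX][0-9a-fA-F](?:_?[0-9a-fA-F])*   # hexadecimal, 0x prefix
      | 0[bB][01](?:_?[01])*                 # binary, 0b prefix
      | 0[oO][0-7](?:_?[0-7])*               # octal, 0o prefix
      | [0-9](?:_?[0-9])*                    # decimal
    )
    (?P<suffix>[uU]?(?:ll|LL|[lL])?)         # optional C-style type suffix
    $""", re.VERBOSE)

def eval_int_literal(token):
    """Lex `token` as an integer literal and evaluate it to its value
    (lexing and evaluation collapsed into one step, as discussed above)."""
    m = INT_LITERAL.match(token)
    if not m:
        raise ValueError(f"not an integer literal: {token!r}")
    # int(..., 0) infers the base from the prefix; strip separators first.
    # Note: leading-zero decimals such as 007 are rejected by int(..., 0).
    return int(m.group("body").replace("_", ""), 0)

print(eval_int_literal("0x10ULL"))    # 16, suffix ignored for the value
print(eval_int_literal("1_000_000")) # 1000000
```

A real lexer would keep the suffix to pick the token's type (unsigned long long for ULL) rather than discarding it; here only the value is computed.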
**Passive binding** Passive binding: In complexation catalysis, the term passive binding refers to any stabilizing interaction that is equally strong at the transition-state level and in the reactant-catalyst complex. Passive binding: Because it has the same effect on the stability of the transition state and of the reactant-catalyst complex, passive binding contributes to acceleration only if the equilibrium between the unassociated reactant and catalyst and their complex is not completely shifted to the right. The term was defined by A. J. Kirby in 1996, as opposed to dynamic binding, i.e. the set of interactions that are stronger at the transition-state level than in the reactant-catalyst complex.
**Radical 82** Radical 82: Radical 82 or radical fur (毛部) meaning "fur" is one of the 34 Kangxi radicals (214 radicals in total) composed of 4 strokes. In the Kangxi Dictionary, there are 211 characters (out of 49,030) to be found under this radical. 毛 is also the 82nd indexing component in the Table of Indexing Chinese Character Components predominantly adopted by Simplified Chinese dictionaries published in mainland China. The character is a Chinese family name, and often refers to the Chinese leader Mao Zedong.
**Light characteristic** Light characteristic: A light characteristic is all of the properties that make a particular navigational light identifiable. Graphical and textual descriptions of navigational light sequences and colours are displayed on nautical charts and in Light Lists with the chart symbol for a lighthouse, lightvessel, buoy or sea mark with a light on it. Different lights use different colours, frequencies and light patterns, so mariners can identify which light they are seeing. Abbreviations: While light characteristics can be described in prose, e.g. "Flashing white every three seconds", lists of lights and navigation chart annotations use abbreviations. The abbreviation notation is slightly different from one light list to another, with dots added or removed, but it usually follows a pattern similar to the following (as in the examples below). An abbreviation of the type of light, e.g. "Fl." for flashing, "F." for fixed. The color of the light, e.g. "W" for white, "G" for green, "R" for red, "Y" for yellow, "Bu" for blue. If no color is given, a white light is generally implied. The cycle period, e.g. "10s" for ten seconds. Additional parameters are sometimes added: The height of the light above the chart datum (usually based on high water), e.g. 15m for 15 metres. Abbreviations: The range in which the light is visible, e.g. "10M" for 10 nautical miles. An example of a complete light characteristic is "Gp Oc(3) W 10s 15m 10M". This indicates that the light is a group occulting light in which a group of three eclipses repeat every 10 seconds; the light is white; the light is 15 metres above the chart datum and the nominal range is 10 nautical miles. Light patterns: Fixed light A fixed light, abbreviated "F", is a continuous and steady light. Light patterns: Flashing light A flashing light is a rhythmic light in which the total duration of the light in each period is clearly shorter than the total duration of the darkness and in which the flashes of light are all of equal duration. It is most commonly used for a single-flashing light which exhibits only single flashes which are repeated at regular intervals, in which case it is abbreviated simply as "Fl". It can also be used with a group of flashes which are regularly repeated, in which case the abbreviation is "Fl(2)" or "Gr Fl(2)", for a group of two flashes. Another possibility is a composite group, in which successive groups in the period have different numbers of flashes, e.g. "Fl. (2+1)" indicates a group of two flashes, followed by one flash. Light patterns: A specific case sometimes used is when the flashes are longer than two seconds. Such a light is sometimes denoted "long flashing" with the abbreviation "L.Fl". If the frequency of flashes is high (more than 30 or 50 per minute) the light is denoted as a "quick light", see below. Light patterns: Occulting light An occulting light is a rhythmic light in which the duration of light in each period is longer than the total duration of darkness. In other words, it is the opposite to a flashing light where the total duration of darkness is longer than the duration of light. It has the appearance of flashing off, rather than flashing on. Like a flashing light, it can be used for a single occulting light that exhibits only a single period of darkness or the periods of darkness can be grouped and repeated at regular intervals (abbreviated "Oc"), a group (Oc(3)) or a composite group (Oc(2+1)).
Light patterns: The term occulting is used because originally the effect was obtained by a mechanism (e.g. a vertical or rotating shutter) periodically shading the light from view. Isophase light An isophase light, abbreviated "Iso", is a light which has dark and light periods of equal length. The prefix derives from the Greek iso- meaning "same". Quick light A quick light, abbreviated "Q", is a special case of a flashing light with a high frequency (more than 30 or 50 per minute). If the sequence of flashes is interrupted by regularly repeated eclipses of constant and long duration, the light is denoted "interrupted quick", abbreviated "I.Q". Group notation similar to flashing and occulting lights is also sometimes used, e.g. Q(9). Light patterns: Another distinction sometimes made is between quick (more than 50 and less than 80 flashes per minute), very quick (more than 80 and less than 160 flashes per minute, abbreviated "V.Q") and ultra quick (no less than 160 flashes per minute, abbreviated "U.Q"). This can be combined with notations for interruptions, e.g. I.U.Q for interrupted ultra quick, or grouping, e.g. V.Q(9) for a very quick group of nine flashes. Quick characteristics can also be followed by other characteristics, e.g. VQ(6) LFl for a very quick group of six flashes, followed by a long flash. Light patterns: Morse code A Morse code light is a light in which appearances of light of two clearly different durations (dots and dashes) are grouped to represent a character or characters in the Morse Code. For example, "Mo(A)" is a light in which in each period light is shown for a short period (dot) followed by a long period (dash), the Morse Code for "A". Light patterns: Fixed and flashing A fixed and flashing light, abbreviated "F. Fl", is a light in which a fixed low intensity light is combined with a flashing high intensity light. Alternating An alternating light, abbreviated "Al", is a light which shows alternating colors. For example, "Al WG" shows white and green lights alternately.
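The abbreviation pattern described above is regular enough to parse mechanically. The following is an illustrative sketch only (Python; the regex and field names are a simplification written for this example, and ignore variations such as the "Gp" prefix, dots, or alternating colour lists):

```python
import re

# Parse a simplified light characteristic such as "Fl(2) W 10s 15m 10M".
# Real light lists differ in punctuation, so this covers only the fields
# described in the text, with the light types ordered longest-first.
CHARACTERISTIC = re.compile(
    r"(?P<type>L\.?Fl|Fl|Oc|Iso|VQ|UQ|Mo\([A-Z]\)|Al|Q|F)"  # type of light
    r"(?:\((?P<group>\d+(?:\+\d+)?)\))?\s*"                 # group, e.g. (3) or (2+1)
    r"(?P<colour>W|R|G|Y|Bu)?\s*"                           # colour; white implied if absent
    r"(?:(?P<period>\d+(?:\.\d+)?)s)?\s*"                   # cycle period in seconds
    r"(?:(?P<height>\d+)m)?\s*"                             # height above chart datum, metres
    r"(?:(?P<range>\d+)M)?"                                 # nominal range, nautical miles
)

parts = CHARACTERISTIC.match("Fl(2) W 10s 15m 10M").groupdict()
print(parts)
# {'type': 'Fl', 'group': '2', 'colour': 'W', 'period': '10', 'height': '15', 'range': '10'}
```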
**Hypoparathyroidism** Hypoparathyroidism: Hypoparathyroidism is decreased function of the parathyroid glands with underproduction of parathyroid hormone (PTH). This can lead to low levels of calcium in the blood, often causing cramping and twitching of muscles or tetany (involuntary muscle contraction), and several other symptoms. It is a very rare disease. The condition can be inherited, but it is also encountered after thyroid or parathyroid gland surgery, and it can be caused by immune system-related damage as well as a number of rarer causes. The diagnosis is made with blood tests, and other investigations such as genetic testing depending on the results. The primary treatment of hypoparathyroidism is calcium and vitamin D supplementation. Calcium replacement or vitamin D can ameliorate the symptoms but can increase the risk of kidney stones and chronic kidney disease. Additionally, medications such as recombinant human parathyroid hormone or teriparatide may be given by injection to replace the missing hormone. Signs and symptoms: The main symptoms of hypoparathyroidism are the result of the low blood calcium level, which interferes with normal muscle contraction and nerve conduction. As a result, people with hypoparathyroidism can experience paresthesia, an unpleasant tingling sensation around the mouth and in the hands and feet, as well as muscle cramps and severe spasms known as "tetany" that affect the hands and feet. Many also report a number of subjective symptoms such as fatigue, headaches, bone pain and insomnia. Crampy abdominal pain may occur. Physical examination of someone with hypocalcemia may show tetany, but it is also possible to provoke tetany of the facial muscles by tapping on the facial nerve (a phenomenon known as Chvostek's sign) or by using the cuff of a sphygmomanometer to temporarily obstruct the blood flow to the arm (a phenomenon known as Trousseau's sign of latent tetany). A number of medical emergencies can arise in people with low calcium levels. These are seizures, severe irregularities in the normal heart beat, as well as spasm of the upper part of the airways or the smaller airways known as the bronchi (both potentially causing respiratory failure). Causes: Hypoparathyroidism can have the following causes: Removal of, or trauma to, the parathyroid glands due to thyroid surgery (thyroidectomy), parathyroid surgery (parathyroidectomy) or other surgical interventions in the central part of the neck (such as operations on the larynx and/or pharynx) is a recognized cause. It is the most common cause of hypoparathyroidism. Although surgeons generally make attempts to spare normal parathyroid glands at surgery, inadvertent injury to the glands or their blood supply is still common. When this happens, the parathyroids may cease functioning. This is usually temporary but occasionally long term (permanent). Causes: Kenny-Caffey syndrome. Autoimmune invasion and destruction is the most common non-surgical cause. It can occur as part of autoimmune polyendocrine syndromes. Hemochromatosis can lead to iron accumulation and consequent dysfunction of a number of endocrine organs, including the parathyroids. Absence or dysfunction of the parathyroid glands is one of the components of chromosome 22q11 microdeletion syndrome (other names: DiGeorge syndrome, Schprintzen syndrome, velocardiofacial syndrome).
Magnesium deficiency. A defect in the calcium receptor leads to a rare congenital form of the disease. Idiopathic (of unknown cause). Occasionally due to other hereditary causes (e.g. Barakat syndrome (HDR syndrome), a genetic developmental disorder resulting in hypoparathyroidism, sensorineural deafness, and kidney disease). Mechanism: The parathyroid glands are so named because they are usually located behind the thyroid gland in the neck. They arise during fetal development from structures known as the third and fourth pharyngeal pouch. The glands, usually four in number, contain the parathyroid chief cells that sense the level of calcium in the blood through the calcium-sensing receptor and secrete parathyroid hormone. Magnesium is required for PTH secretion. Under normal circumstances, the parathyroids secrete PTH to maintain a calcium level within normal limits, as calcium is required for adequate muscle and nerve function (including the autonomic nervous system). PTH acts on several organs to increase calcium levels. It increases calcium absorption in the bowel, while in the kidney it prevents calcium excretion and increases phosphate release, and in bone it increases calcium through bone resorption. Diagnosis: Diagnosis is by measurement of calcium, serum albumin (for correction) and PTH in blood. If necessary, measuring cAMP (cyclic AMP) in the urine after an intravenous dose of PTH can help in the distinction between hypoparathyroidism and other causes. Differential diagnoses are: Pseudohypoparathyroidism (normal PTH levels but tissue insensitivity to the hormone, associated with intellectual disability and skeletal deformities) and pseudopseudohypoparathyroidism. Diagnosis: Vitamin D deficiency or hereditary insensitivity to this vitamin (X-linked dominant). Malabsorption Kidney disease Medication: steroids, diuretics, some antiepileptics. Other tests include ECG for abnormal heart rhythms, and measurement of blood magnesium levels. Treatment: Severe hypocalcaemia, a potentially life-threatening condition, is treated as soon as possible with intravenous calcium (e.g. as calcium gluconate). Generally, a central venous catheter is recommended, as the calcium can irritate peripheral veins and cause phlebitis. In the event of a life-threatening attack of low calcium levels or tetany (prolonged muscle contractions), calcium is administered by intravenous (IV) infusion. Precautions are taken to prevent seizures or larynx spasms. The heart is monitored for abnormal rhythms until the person is stable. When the life-threatening attack has been controlled, treatment continues with medicine taken by mouth as often as four times a day. Long-term treatment of hypoparathyroidism is with vitamin D analogs and calcium supplementation, but may be ineffective in some due to potential renal damage. The N-terminal fragment of parathyroid hormone (PTH 1-34) has full biological activity. The use of pump delivery of synthetic PTH 1-34 provides the closest approach to physiologic PTH replacement therapy. Injections of recombinant human parathyroid hormone are available as treatment in those with low blood calcium levels. A 2019 systematic review has highlighted that there is a lack of high-quality evidence for the use of vitamin D, calcium, or recombinant parathyroid hormone in the management of both temporary and long-term hypoparathyroidism following thyroidectomy.
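The mention of serum albumin "(for correction)" refers to adjusting measured total calcium for albumin before interpreting it. A minimal sketch using one common bedside rule of thumb (the formula is an assumption of this example, not stated in the article):

```python
def corrected_calcium(total_ca_mg_dl: float, albumin_g_dl: float) -> float:
    """Albumin-corrected total calcium, in mg/dL.

    Hypothetical helper for illustration: it applies the common rule of
    thumb (not from the article) that each 1 g/dL of albumin below
    4.0 g/dL lowers measured total calcium by about 0.8 mg/dL without
    changing the physiologically active ionised fraction.
    """
    return total_ca_mg_dl + 0.8 * (4.0 - albumin_g_dl)

# A measured calcium of 7.8 mg/dL with albumin of 3.0 g/dL corrects to 8.6 mg/dL;
# interpreting that against a reference range remains a clinical question.
print(corrected_calcium(7.8, 3.0))  # 8.6
```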
**Urban ecosystem** Urban ecosystem: In ecology, urban ecosystems are considered an ecosystem functional group within the intensive land-use biome. They are structurally complex ecosystems with highly heterogeneous and dynamic spatial structure that is created and maintained by humans. They include cities, smaller settlements and industrial areas that are made up of diverse patch types (e.g. buildings, paved surfaces, transport infrastructure, parks and gardens, refuse areas). Urban ecosystems rely on large subsidies of imported water, nutrients, food and other resources. Compared to other natural and artificial ecosystems, human population density is high, and their interaction with the different patch types produces emergent properties and complex feedbacks among ecosystem components. In socioecology, urban areas are considered part of a broader social-ecological system in which urban landscapes and urban human communities interact with other landscape elements. Urbanization has large impacts on human and environmental health, and the study of urban ecosystems has led to proposals for sustainable urban designs and approaches to development of city fringe areas that can help reduce negative impact on surrounding environments and promote human well-being. Urban ecosystem research: Urban ecology is a relatively new field. Because of this, the research that has been done in this field has yet to become extensive. While there is still plenty of time for growth in the research of this field, there are some key issues and biases within the current research that still need to be addressed. Urban ecosystem research: The article “A Review of Urban Ecosystem Services: Six Key Challenges for Future Research” addresses the issue of geographical bias. According to this article, there is a significant geographical bias, “towards the northern hemisphere”. The article states that case study research is done primarily in the United States and China. It goes on to explain how future research would benefit from a more geographically diverse array of case studies. Urban ecosystem research: “A Quantitative Review of Urban Ecosystem Service Assessments: Concepts, Models, and Implementation” is an article that gives a comprehensive examination of 217 papers written on Urban Ecosystems to answer the questions of where studies are being done, which types of studies are being done, and to what extent stakeholders influence these studies. According to this article, "The results indicate that most UES studies have been undertaken in Europe, North America, and China, at city scale. Assessment methods involve bio-physical models, Geographical Information Systems, and valuation, but few study findings have been implemented as land use policy." “Urban vacancy and land use legacies: A frontier for urban ecological research, design, and planning” is another scholarly article that gives an insight into the future of urban ecological research. It details an important opportunity for the future of urban ecological researchers that only a few researchers have inquired into so far: the utilization of vacant land for the creation of urban ecosystems. Difficulties and Opportunities: Difficulties Urban ecosystems are complex and dynamic systems that encompass a wide range of living and nonliving components. These components include humans, plants, animals, buildings, transportation systems, and water and energy infrastructure.
As the world becomes increasingly urbanized, understanding urban ecosystems and how they function is becoming increasingly important. Difficulties and Opportunities: POPULATION GROWTH Cities are home to more than half of the world's population, and the number of people living in urban areas is expected to continue to grow in the coming decades. This rapid urbanization can have both positive and negative impacts. On the one hand, cities can provide economic opportunities, access to healthcare and education, and a high quality of life for residents. On the other hand, increased urbanization exacerbates pollution, the loss of green spaces, the loss of biodiversity, and more. Difficulties and Opportunities: POLLUTION In many cities, air pollution levels are well above safe limits, and this can have serious implications for human health. Pollution from vehicles, factories, and power plants can cause respiratory problems, heart disease, and even cancer. In addition to its impact on human health, air pollution can also damage buildings, corrode infrastructure, and harm plant and animal life. DISSOLUTION OF GREEN SPACES AS A PUBLIC RESOURCE As cities grow, natural areas such as forests, wetlands, and grasslands are often replaced by buildings, roads, and other forms of development. A lack of urban green spaces contributes to reductions in air and water quality, the mental and physical health of residents, energy efficiency, and biodiversity. Difficulties and Opportunities: HABITAT FRAGMENTATION and LOSS OF SPECIES DIVERSITY Related to the dissolution of green space, habitat fragmentation refers to the way in which green spaces get divided by urban development, making it impossible for some species to migrate between them. Without such migration, small isolated populations lose genetic variation through genetic drift, eroding the genetic diversity needed for species survival. Species diversity is also impacted by the introduction of non-native and invasive species from travel and shipping processes. Research has found that heavily urbanized areas have a higher richness of invasive species when compared to rural communities. While not all non-native or invasive species are inherently detrimental to a city, invasives can out-compete essential native species, cause biotic homogenization, and introduce new vectors for new diseases. Difficulties and Opportunities: URBAN HEAT ISLANDS Urban Heat Island (UHI) refers to the variation in average temperature that occurs within an urban area due to current methods of development. Patterns in UHIs cause disproportionate impacts of climate change, often creating extra burdens for the already vulnerable. Extreme heat events, which occur more frequently in UHIs, can and do result in deaths, cardiopulmonary diseases, reduced capacity for outdoor labor, mental health concerns, and kidney disease. The demographics most vulnerable to the negative impacts of UHIs are senior citizens and those without resources to cool off, such as air conditioners. Difficulties and Opportunities: DISEASE Current methods of urban development increase the risk of disease proliferation within cities as compared to rural environments. Urban traits that contribute to higher risk are poor housing conditions, contaminated water supplies, frequent travel in and out, survival success of rats, and intense population density that causes rapid spread and rapid evolution of the disease.
Difficulties and Opportunities: Opportunities GREEN AND BLUE INFRASTRUCTURE Green and blue infrastructure refers to methods of development that work to integrate natural systems and human-made structures. Green infrastructure includes land conservation, such as nature preserves, and increased vegetation cover, such as vertical gardens. Blue infrastructure would include stormwater management efforts such as bioswales. The process of LEED certification can be used to establish green infrastructure practices in individual buildings. Buildings with LEED certification status report 30% less energy use, as well as economic and mental-health benefits from natural lighting. Difficulties and Opportunities: PUBLIC TRANSIT AND WALKABLE CITIES Beginning in earnest during the 1960s, city planning in terms of transit centered around individual car use. Today, cars are still the most dominant form of transportation in urban areas. One effective solution is an improvement to public transportation. Expanding bus or train routes and switching to clean energy use address the issues of air quality, noise pollution, and socioeconomic equity. Another opportunity to reduce carbon emissions and increase population health would be the implementation of the walkable city model in urban planning. A walkable city is strategically planned to reduce the distance traveled in order to access needed resources such as food and jobs. Difficulties and Opportunities: Other opportunities include strategic increases in green spaces, renewable energy, citizen participation in planning, and improved research.
**TACSTD2** TACSTD2: Tumor-associated calcium signal transducer 2, also known as Trop-2 and as epithelial glycoprotein-1 antigen (EGP-1), is a protein that in humans is encoded by the TACSTD2 gene.This intronless gene encodes a carcinoma-associated antigen defined by the monoclonal antibody GA733. This antigen is a member of a family including at least two type I membrane proteins. It transduces an intracellular calcium signal and acts as a cell surface receptor. TACSTD2: Mutations of this gene result in gelatinous drop-like corneal dystrophy, an autosomal recessive disorder characterized by severe corneal amyloidosis leading to blindness.Trop-2 expression was originally described in trophoblasts (placenta) and fetal tissues (e.g., lung). Later, its expression was also described in the normal stratified squamous epithelium of the skin, uterine cervix, esophagus, and tonsillar crypts.Trop-2 plays a role in tumor progression by actively interacting with several key molecular signaling pathways traditionally associated with cancer development and progression. Aberrant overexpression of Trop-2 has been described in several solid cancers, such as colorectal, renal, lung, and breast cancers. Trop-2 expression has also been described in some rare and aggressive malignancies, e.g., salivary duct, anaplastic thyroid, uterine/ovarian, and neuroendocrine prostate cancers.This antigen is the target of sacituzumab govitecan, an antibody-drug conjugate.
**V/STOL** V/STOL: A vertical and/or short take-off and landing (V/STOL) aircraft is an airplane able to take off or land vertically or on short runways. Vertical takeoff and landing (VTOL) aircraft are a subset of V/STOL craft that do not require runways at all. Generally, a V/STOL aircraft needs to be able to hover. Helicopters are not considered under the V/STOL classification as the classification is only used for aeroplanes, aircraft that achieve lift (force) in forward flight by planing the air, giving speeds and fuel efficiency typically greater than helicopters can achieve. V/STOL: Most V/STOL aircraft types were experiments or outright failures from the 1950s to 1970s. V/STOL aircraft types that have been produced in large numbers include the F-35B Lightning II, Harrier and V-22 Osprey. V/STOL: A rolling takeoff, sometimes with a ramp (ski-jump), reduces the amount of thrust required to lift an aircraft from the ground (compared with vertical takeoff), and hence increases the payload and range that can be achieved for a given thrust. For instance, the Harrier is incapable of taking off vertically with full weapons and fuel load. Hence V/STOL aircraft generally use a runway if one is available; that is, short takeoff and vertical landing (STOVL) or conventional takeoff and landing (CTOL) operation is preferred to VTOL operation. V/STOL: V/STOL was developed to allow fast jets to be operated from clearings in forests, from very short runways, and from small aircraft carriers that would previously only have been able to carry helicopters. The main advantage of V/STOL aircraft is closer basing to the enemy, which reduces response time and tanker support requirements. In the case of the Falklands War, it also permitted high-performance fighter air cover and ground attack without a large aircraft carrier equipped with an aircraft catapult. Lists of V/STOL aircraft: This is a partial list; there have been many designs for V/STOL aircraft. Vectored thrust Hawker P.1127/Kestrel/Harrier; four rotating nozzles for vectored thrust of fan and jet exhaust. Tilt-jet Bell XF-109 Bell 65 EWR VJ 101 Tilt-rotor AgustaWestland AW609 (originally Bell 609) AgustaWestland Project Zero technology demonstrator Bell XV-3 Bell XV-15 Bell-Boeing V-22 Osprey (scale up of XV-15) Bell V-280 Valor Tilt-wing Curtiss-Wright X-19 – four rotating propellers, tilt-wing. Canadair CL-84 Dynavert, two turboprop tilt-wing LTV XC-142 four-engine tilt-wing cross-shafted turboprop Bell X-22 rotating ducted propellers. Small transport prototype. Slightly smaller than V-22 Osprey. Lists of V/STOL aircraft: Hiller X-18 Separate thrust and lift Dornier Do 31 Jet transport with podded vector nozzles and lift engines Kamov Ka-22 Lockheed XV-4 Hummingbird Dassault Balzac V (V stands for vertical and is a modified Mirage III) Dassault Mirage IIIV the first VTOL capable of supersonic flight (Mach 2.03 during tests) Fokker/Republic D-24 Alliance Ryan XV-5. Fans in wings driven by engine exhaust gas. Lists of V/STOL aircraft: VFW VAK 191B Attack fighter similar to Harrier but supersonic dash speed, smaller wings and lift engines. Flown, but not operational. Yakovlev Yak-38 Yakovlev Yak-141 Short SC.1 Supersonic Although many aircraft have been proposed and built, with a few being tested, the F-35B is the first and only supersonic V/STOL aircraft to have reached operational service, having entered service in 2016.
Lists of V/STOL aircraft: Bell D-188A Mach 2 swivelling engines, mockup stage EWR VJ 101 Mach 2 fighter, flown to Mach 1.04 but not operational Dassault Mirage IIIV Delta wing Mach 2 fighter with lift engines, first VTOL capable of supersonic and Mach 2 flight (Mach 2.03 during tests), not operational Hawker Siddeley P.1154 M1.7 Supersonic Harrier. It was not completed Republic AP-100 strike fighter concept Convair Model 200 Lift engines plus swivel tailpipe, not built Rockwell XFV-12 Built with complex "window blind" wings but could not lift its own weight Yakovlev Yak-141 Lift engines plus swivel tailpipe Lockheed Martin X-35B / F-35B uses a vectored-thrust tailpipe (the Pratt & Whitney F135) plus a shaft-driven lifting fan. It is the first aircraft capable of demonstrating transition from short take-off to supersonic flight to vertical landing on the same sortie.
**Blade** Blade: A blade is the portion of a tool, weapon, or machine with an edge that is designed to puncture, chop, slice or scrape surfaces or materials. Blades are typically made from materials that are harder than those they are to be used on. Historically, humans have made blades from flaking stones such as flint or obsidian, and from various metals such as copper, bronze and iron. Modern blades are often made of steel or ceramic. Blades are one of humanity's oldest tools, and continue to be used for combat, food preparation, and other purposes. Blade: Blades work by concentrating force on the cutting edge. Certain blades, such as those used on bread knives or saws, are serrated, further concentrating force on the point of each tooth. Uses: During food preparation, knives are mainly used for slicing, chopping, and piercing. In combat, a blade may be used to slash or puncture, and may also be thrown or otherwise propelled. The function is to sever nerve, muscle or tendon fibers, or a blood vessel, to disable or kill the adversary. Severing a major blood vessel typically leads to death due to exsanguination. Uses: Blades may be used to scrape, moving the blade sideways across a surface, as in an ink eraser, rather than along or through a surface. For construction equipment such as a grader, the ground-working implement is also referred to as the blade, typically with a replaceable cutting edge. Physics: A simple blade intended for cutting has two faces that meet at an edge. Ideally, this edge would have no roundness but in practice, all edges can be seen to be rounded to some degree under magnification either optically or with an electron microscope. Force is applied to the blade, either from the handle or pressing on the back of the blade. The handle or back of the blade has a large area compared to the fine edge. This concentration of applied force onto the small edge area increases the pressure exerted by the edge. It is this high pressure that allows a blade to cut through a material by breaking the bonds between the molecules/crystals/fibers/etc. in the material. This necessitates the blade being strong enough to resist breaking before the other material gives way. Physics: Geometry The angle at which the faces meet is important as a larger angle will make for a duller blade while making the edge stronger. A stronger edge is less likely to dull from fracture or have the edge roll out of shape. Physics: The shape of the blade is also important. A thicker blade will be heavier and stronger and stiffer than a thinner one of similar design, while also experiencing more drag while slicing or piercing. A filleting knife will be thin enough to be very flexible while a carving knife will be thicker and stiffer; a dagger will be thin so it can pierce, while a camping knife will be thicker so it can be stronger and more durable. A strongly curved edge, like a talwar, will allow the user to draw the edge of the blade against an opponent even while close to the opponent, where a straight sword would be more difficult to pull in the same fashion. The curved edge of an axe means that only a small length of the edge will initially strike the tree, concentrating force as does a thinner edge, whereas a straight edge could potentially land with the full length of its edge against a flat section of the tree. A splitting maul has a convex section to avoid getting stuck in the wood, whereas chopping axes can be flat or even concave.
A khopesh or falchion or kukri is angled and/or weighted at the distal end so that force is concentrated at the faster moving, heavier part of the blade, maximizing cutting power and making it largely unsuitable for thrusting, whereas a rapier is thin and tapered allowing it to pierce and be moved with more agility while reducing its chopping power compared to a similarly sized sword. Physics: A serrated edge, such as on a saw or a bread knife, concentrates force onto the tips of the serrations which increases pressure as well as allowing soft or fibrous material (like wood, rope, bread, vegetables) to expand into the spaces between serrations. Whereas pushing any knife, even a bread knife, down onto a bread loaf will just squash the loaf as bread has a low elastic modulus (is soft) but high yield strain (loosely, can be stretched or squashed by a large proportion without breaking), drawing serrations across the loaf with little downward force will allow each serration to simultaneously cut the bread with much less deformation of the loaf. Similarly, pushing on a rope tends to squash the rope while drawing serrations across it shears the rope fibers. Drawing a smooth blade is less effective as the blade is parallel to the direction of the draw, but the serrations of a serrated blade are at an angle to the fibers. Serrations on knives are often symmetric allowing the blade to cut on both the forward and reverse strokes of a cut, a notable exception being Veff serrations which are designed to maximize cutting power while moving the blade away from the user. Saw blade serrations, for both wood and metal, are typically asymmetrical so that they cut while moving in only one direction. (Saws act by abrading a material into dust along a narrow channel, the kerf, whereas knives and similar act by forcing the material apart. This means that saws result in a loss of material and the serrations of a saw also serve to carry metal swarf and sawdust out of the cut channel.) Fullers are longitudinal channels either forged into the blade or later machined/milled out of the blade, though the latter process is less desirable. This loss of material necessarily weakens the blade but serves to make the blade lighter without sacrificing stiffness. The same principle is applied in the manufacture of beams such as I-beams. Fullers are only of significant utility in swords. In most knives there is so little material removed by the fuller that it makes little difference to the weight of the blade and they are largely cosmetic.
This creates a tension between the intended use of the blade, the material it is to be made from, and any manufacturing processes that will affect the blade's hardness and toughness (such as heat treatment, in the case of steel blades). A balance must be found between sharpness and how long the edge lasts. Methods that can circumvent this include differential hardening. This method yields an edge that can hold its sharpness as well as a body that is tough. Physics: Non-metals Prehistorically, and in less technologically advanced cultures even into modern times, tool and weapon blades have been made from wood, bone, and stone. Most woods are exceptionally poor at holding edges, and bone and stone are brittle, making them liable to fracture when striking or struck. In modern times stone, in the form of obsidian, is used in some medical scalpels as it is capable of being formed into an exceedingly fine edge. Ceramic knives are non-metallic and non-magnetic. As non-metals do not corrode they remain rust and corrosion free but they suffer from similar faults as stone and bone, being rather brittle and almost entirely inflexible. They are harder than metal knives and so more difficult to sharpen, and some ceramic knives may be as hard or harder than some sharpening stones. For example, synthetic sapphire is harder than natural sharpening stones and is as hard as alumina sharpening stones. Zirconium dioxide is also harder than garnet sharpening stones and is nearly as hard as alumina. Both require diamond stones or silicon carbide stones to sharpen and care has to be taken to avoid chipping the blade. As such ceramic knives are seldom used outside of a kitchen and they are still quite uncommon. Plastic knives are difficult to make sharp and poorly retain an edge. They are largely used as low cost, disposable utensils or as children's utensils or in environments such as air travel where metal blades are prohibited. They are often serrated to compensate for their general lack of sharpness but, as evidenced by the fact they can cut food, they are still capable of inflicting injury. Plastic blades of designs other than disposable cutlery are prohibited or restricted in some jurisdictions as they are undetectable by metal detectors. Physics: Metals Native copper was used to make blades by ancient civilizations due to its availability. Copper's comparative softness causes it to deform easily; it does not hold an edge well and is poorly suited for working stone. Bronze is superior in this regard, and was taken up by later civilizations. Both bronze and copper can be work hardened by hitting the metal with a hammer. With technological advancement in smelting, iron came to be used in the manufacturing of blades. Steel, a range of alloys made from iron, has become the metal of choice for the modern age. Physics: Various alloys of steel can be made which offer a wide range of physical and chemical properties desirable for blades. For example, surgical scalpels are often made of stainless steel so that they remain free of rust and largely chemically inert; tool steels are hard and impact resistant (and often expensive as retaining toughness and hardness requires expensive alloying materials, and, being hard, they are difficult to make into their finished shape) and some are designed to resist changes to their physical properties at high temperatures.
Steels can be further heat treated to optimize their toughness, which is important for impact blades, or their hardness, which allows them to retain an edge well with use (although harder metals require more effort to sharpen). Physics: Combined materials and heat-treatments It is possible to combine different materials, or different heat treatments, to produce desirable qualities in a blade. For example, the finest Japanese swords were routinely made of up to seven sections of metals, and even poorer-quality swords were often made of two. These would include soft irons that could absorb the energy of impact without fracturing but which would bend and poorly retain an edge, and hard steels more liable to shatter on impact but which retained an edge well. The combination provided a sword that would resist impact while remaining sharp, even though the edge could chip if abused. Pattern welding involved forging together twisted bars of soft (bendable) low carbon and hard (brittle) higher carbon iron. This was done because furnaces of the time were typically able to produce only one grade or the other, and neither was well suited for more than a very limited use blade. The ability of modern steelmakers to produce very high-quality steels of various compositions has largely relegated this technique to either historical recreations or to artistic works. Acid etching and polishing blades made of different grades of steel can be used to produce decorative or artistic effects. Physics: Japanese sword makers developed the technique of differential hardening by covering their sword blades in different thicknesses of clay before quenching. Thinner clay allowed the heated metal to cool faster, particularly along the edge. Faster cooling resulted in a finer crystal structure, resulting in a blade with a hard edge but a more flexible body. European sword makers produced similar results using differential tempering. Physics: Dulling Blades dull with use and abuse. This is particularly true of acute blades and those made of soft materials. Dulling usually occurs due to contact between the blade and a hard substance such as ceramic, stone, bone, glass, or metal. Physics: The more acute the blade, the more easily it will dull. As the blade near the edge is thinner, there is little material to remove before the edge is worn away to a thicker section. Thin edges can also roll over when force is applied to them, forming a section like the bottom part of a letter "J". For this reason, straight edge razors are frequently stropped to straighten the edge. Physics: Drawing a blade across any material tends to abrade both the blade, usually making it duller, and the cut material. Though softer than glass or many types of stone used in the kitchen, steel edges can still scratch these surfaces. The resulting scratch is full of very fine particles of ground glass or stone which will very quickly abrade the blade's edge and so dull it. Physics: In times when swords were regularly used in warfare, they required frequent sharpening because of dulling from contact with rigid armor, mail, metal rimmed shields, or other swords, for example. Particularly, hitting the edge of another sword by accident or in an emergency could chip away metal and even cause cracks through the blade. Soft-cored blades are more resistant to fracturing on impact. Physics: Nail pulls Folding pocket knives often have a groove cut in the side of the blade near the spine.
This is called a nail pull and allows the fingernail to be inserted to swing the blade out of the holder. Knife patterns: Some of the most common shapes are listed below. Knife patterns: (S1) A straight back blade, also called standard or normal, has a curving edge and a straight back. A dull back lets the wielder use fingers to concentrate force; it also makes the knife heavy and strong for its size. The curve concentrates force on a smaller area, making cutting easier. This knife can chop as well as pick and slice. This is also the best single-edged blade shape for thrusting, as the edge cuts a swath that the entire width of the knife can pass through without the spine having to push aside any material on its path, as a sheepsfoot or drop-point knife would. Knife patterns: (S2) A trailing-point knife has a back edge that curves upward to end above the spine. This lets a lightweight knife have a larger curve on its edge and indeed the whole of the knife may be curved. Such a knife is optimized for slicing or slashing. Trailing point blades provide a larger cutting area, or belly, and are common on skinning knives. Knife patterns: (S3) A drop point blade has a convex curve of the back towards the point. It handles much like the clip-point, though with a stronger point typically less suitable for piercing. Swiss army pocket knives often have drop-points on their larger blades. Knife patterns: (S4) A clip-point blade is like a normal blade with the back "clipped". This clip can be either straight or concave. The back edge of the clip may have a false edge that could be sharpened to make a second edge. The sharp tip is useful as a pick, or for cutting in tight places. If the false edge is sharpened it increases the knife's effectiveness in piercing. As well, having the tip closer to the center of the blade allows greater control in piercing. The Bowie knife has a clip point blade and clip-points are common on pocket knives and other folding knives. (S5) A sheepsfoot blade has a straight edge and a straight dull back that curves towards the edge at the end. It gives the most control because the dull back edge is made to be held by fingers. Sheepsfoot blades were originally made to trim the hooves of sheep; their shape bears no similarity to the foot of a sheep. (S6) A Wharncliffe blade is similar in profile to a sheep's foot but the curve of the back edge starts closer to the handle and is more gradual. Its blade is much thicker than that of a knife of comparable size. Wharncliffes were used by sailors, as the shape of the tip prevented accidental penetration of the work or the user's hand with the sudden motion of a ship. Knife patterns: (S7) A spey point blade (once used for neutering livestock) has a single, sharp, straight edge that curves strongly upwards at the end to meet a short, dull, straight point from the dull back. Because the curved end of the blade is closer to perpendicular to the blade's axis than on other knives, and because it lacks a point, penetration is unlikely; spey blades are therefore common on Trapper style pocketknives for skinning fur-bearing animals. Knife patterns: (C1) A leaf blade has a distinctive recurved "waist" that adds some curved "belly" to the knife, facilitating slicing, and shifts weight towards the tip, improving chopping ability and making the shape common on throwing knives. Knife patterns: (C2) A spear point blade is a symmetrically-shaped blade with a point aligned with the centerline of the blade's long axis.
True spear-point blades are double-edged with a central spine, like a dagger or spear head. The spear point is one of the stronger blade point designs in terms of penetration stress, and is found on many thrusting knives such as the dagger. The term spear point is occasionally and confusingly used to describe small single-edged blades without a central spine, such as that of the pen knife, a small folding-blade pocket knife formerly used in sharpening quills for writing. Pen-knife may also nowadays refer to a knifelike weapon blade pattern of some larger pocket knife blades that would otherwise be termed drop-point designs. Knife patterns: (C3) A needle point blade has a sharply-tapered acuminated point. It is frequently found on daggers such as the stiletto (which had no sharpened edges) and the Fairbairn–Sykes fighting knife. Its long, narrow point reduces friction and increases the blade's penetrative capabilities, but is liable to stick in bone and can break if abused. When the needle point is combined with a reinforced 'T' section running the length of the blade's spine, it is called a reinforced tip. One example of a knife with a reinforced tip is the pesh-kabz. Knife patterns: (C4) Kris or flame-bladed sword. These blades have a distinct recurved blade form and are sharpened on both sides, typically tapering to (or approximating) a symmetrical point. Knife patterns: (C5) Referred to in English speaking countries as a "tanto" or "tanto point" (a corruption of the Japanese word tantō though the tip bears no resemblance to a tantō) or a chisel point. ("Chisel point" refers to the straightness of the edge that comprises the end of the blade, whereas "chisel grind" usually refers to a blade ground on only one side even though chisels can be ground on one or both sides.) It is similar to, but not the same as, some early Japanese swords that had kamasu kissaki ("barracuda tip"), a nearly straight edge at the tip whereas the typical "tanto point" as found in the west has a straight edge. The barracuda tip sword was sharp but also fragile whereas modern tanto points are often advertised as being stronger at the tip for having nearly the whole thickness of the blade present until quite close to the end of the knife. The geometry of the angle under the point gives tanto blades excellent penetration capabilities. For this reason, tanto blades are often found on knives designed for combat or fighting applications, where the user may need to pierce heavy clothing or low-level soft body armor. Knife patterns: The lower illustration is a modified tanto where the end is clipped and often sharpened. This brings the tip closer to the center of the blade, increasing control of the blade, and improves penetration potential by having a finer point and a sharpened back edge. Knife patterns: (C6) A hawkbill blade is sharpened on the inside edge and is similar to carpet and linoleum knives. The point will tear even if the rest of the knife is comparatively dull. The karambit from Far South-East Asia is a hawkbill knife which is held with the blade extending from the bottom of the fist and the tip facing forward. The outside edge of a karambit may be sharp and if so may also feature a backward-facing point. Knife patterns: (C7) An ulu (lit. 'woman's knife' in Inuktitut) knife is a sharpened segment of a circle. This blade type has no point, and has a handle in the middle. It is good for scraping and sometimes chopping. The semi-circular version appears elsewhere in the world and is called a head knife.
It is used in leatherworking both to scrape down leather (reducing thickness, i.e. skiving), and to make precise, rolling cuts for shapes other than straight lines. The circular version is a popular tool for slicing pizzas. One corner is placed at the edge of the pizza and the blade is rolled across in a diameter cut. Sword patterns: The sharp edges of a sword may be either curved or straight. Curved blades tend to glide more easily through soft materials, making these weapons more ideal for slicing. Techniques for such weapons feature drawing the blade across the opponent's body and back. For straight-edged weapons, many recorded techniques feature cleaving cuts, which deliver the power out to a point, striking directly in at the target's body, done to split flesh and bone rather than slice it. That being said, there also exist many historical slicing techniques for straight-edged weapons. Hacking cuts can be followed by a drawing action to maximize the cut's effectiveness. For more information see Western Martial Arts or kenjutsu. Sword patterns: Some weapons are made with only a single leading edge, such as the saber or dusack. The dusack has a "false edge" near the tip, which only extends down a portion of the blade's backside. Other weapons have a blade that's entirely dull except for a sharpened point, like the épée or foil, which prefer thrusts over cuts. A blade cannot perform a proper cut without an edge, and so in competitive fencing such attacks reward no points. Sword patterns: Some variations include: The flame blade (an undulated blade, for both psychological effect and some tactical advantage of using a non-standard blade: vibrations and easier parry) The colichemarde, found in the smallsword Marks and decoration: Blades are sometimes marked or inscribed, for decorative purposes, or with the mark of either the maker or the owner. Blade decorations are often realized in inlay in some precious metal (gold or silver). Early blade inscriptions are known from the Bronze Age: a Hittite sword found at Hattusa bears an inscription chiseled into the bronze, stating that the blade was deposited as an offering to the storm-god by king Tuthaliya. Blade inscriptions became particularly popular on the 12th-century knightly sword, building on the earlier, 9th- to 11th-century tradition of the so-called Ulfberht swords.
**Telomerase RNA component** Telomerase RNA component: Telomerase RNA component, also known as TR, TER or TERC, is an ncRNA found in eukaryotes that is a component of telomerase, the enzyme used to extend telomeres. TERC serves as a template for telomere replication (reverse transcription) by telomerase. Telomerase RNAs differ greatly in sequence and structure between vertebrates, ciliates and yeasts, but they share a 5' pseudoknot structure close to the template sequence. The vertebrate telomerase RNAs have a 3' H/ACA snoRNA-like domain. Structure: TERC is a long non-coding RNA (lncRNA) ranging in length from ~150nt in ciliates to 400-600nt in vertebrates, and 1,300nt in yeast (Alnafakh). Mature human TERC (hTR) is 451nt in length. TERC has extensive secondary structural features over 4 principal conserved domains. The core domain, the largest domain at the 5’ end of TERC, contains the CUAAC telomere template sequence. Its secondary structure consists of a large loop containing the template sequence, a P1 loop-closing helix, and a P2/P3 pseudoknot. The core domain and CR4/CR5 conserved domain associate with TERT, and are the only domains of TERC necessary for in vitro catalytic activity of telomerase. The 3’ end of TERC consists of a conserved H/ACA domain, a two-hairpin structure connected by a single-stranded hinge and bordered on the 3’ end by a single-stranded ACA sequence. The H/ACA domain binds Dyskerin, GAR1, NOP10, and NHP2 to form an H/ACA RNP complex. The conserved CR7 domain is also localized at the 3’ end of TERC, and contains a 3nt CAB (Cajal body localisation) box which binds TCAB1. Function: Telomerase is a ribonucleoprotein polymerase that maintains telomere ends by addition of the telomere repeat TTAGGG. This repeat does vary across eukaryotes (see the table on the telomere article for a complete list). The enzyme consists of a protein component (TERT) with reverse transcriptase activity, and an RNA component, encoded by this gene, that serves as a template for the telomere repeat. The sequence CCCUAA found near position 50 of the vertebrate TERC sequence acts as the template. Telomerase expression plays a role in cellular senescence, as it is normally repressed in postnatal somatic cells resulting in progressive shortening of telomeres. Deregulation of telomerase expression in somatic cells may be involved in oncogenesis. Studies in mice suggest that telomerase also participates in chromosomal repair, since de novo synthesis of telomere repeats may occur at double-stranded breaks. Homologs of TERC can also be found in the Gallid herpes viruses. The core domain of TERC contains the RNA template from which TERT synthesizes TTAGGG telomeric repeats. Unlike in other RNPs, in telomerase, the protein TERT is catalytic while the lncRNA TERC is structural, rather than acting as a ribozyme. The core region of TERC and TERT are sufficient to reconstitute catalytic telomerase activity in vitro. The H/ACA domain of TERC recruits the Dyskerin complex (DKC1, GAR1, NOP10, NHP2), which stabilises TERC, increasing telomerase complex formation and overall catalytic activity. The CR7 domain binds TCAB1, which localizes telomerase to Cajal bodies, further increasing telomerase catalytic activity. TERC is ubiquitously expressed, even in cells lacking telomerase activity and TERT expression. As a result, various TERT-independent functional roles of TERC have been proposed.
14 genes containing a TERC binding motif are directly transcriptionally regulated by TERC through RNA-DNA triplex formation-mediated increase of expression. TERC-mediated upregulation of Lin37, Trpg1l, Tyrobp, and Usp16 stimulates the NF-κB pathway, resulting in increased expression and secretion of inflammatory cytokines. Biosynthesis: Unlike most lncRNAs, which are assembled from introns by the spliceosome, hTR is directly transcribed from a dedicated promoter site located at genomic locus 3q26.2 by RNA polymerase II. Mature hTR is 451nt in length, but approximately 1/3 of cellular hTR transcripts at steady state have ~10nt genomically encoded 3’ tails. The majority of those extended hTR species have an additional oligo-A 3’ extension. Processing of immature 3’-tailed hTR to mature 451nt hTR can be accomplished by direct 3’-5’ exoribonucleolytic degradation or by an indirect pathway of oligoadenylation by PAPD5, removal of the 3’ oligo-A tail by the 3’-5’ RNA exonuclease PARN, and subsequent 3’-5’ exoribonucleolytic degradation. Extended hTR transcripts are also degraded by the RNA exosome. The 5’ ends of hTR transcripts are also additionally processed. TGS-1 hypermethylates the 5'-methylguanosine cap to an N2,2,7-trimethylguanosine (TMG) cap, which inhibits hTR maturation. Binding of the Dyskerin complex to transcribed H/ACA domains of hTR during transcription promotes termination of transcription. Control of the relative rates of these various competing pathways that activate or inhibit hTR maturation is a crucial element of regulation of overall telomerase activity. Clinical Significance: Loss-of-function mutations in the TERC genomic locus have been associated with a variety of degenerative diseases. Mutations in TERC have been associated with dyskeratosis congenita, idiopathic pulmonary fibrosis, aplastic anemia, and myelodysplasia. Overexpression and improper regulation of TERC have been associated with a variety of cancers. Upregulation of hTR is widely observed in patients with a precancerous cervical phenotype as a result of HPV infection. Overexpression of TERC enhances MDV-mediated oncogenesis, and is observed in gastric carcinoma. Overexpression of TERC is also observed in inflammatory conditions such as Type II diabetes and multiple sclerosis, due to TERC-mediated activation of the NF-κB inflammatory pathway. TERC has been implicated as protective in osteoporosis, with its increased expression arresting the rate of osteogenesis. Due to its overexpression in a range of cancer phenotypes, TERC has been investigated as a potential cancer biomarker. It was found to be an effective biomarker of lung squamous cell carcinoma (LUSC).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Creation and annihilation operators** Creation and annihilation operators: Creation operators and annihilation operators are mathematical operators that have widespread applications in quantum mechanics, notably in the study of quantum harmonic oscillators and many-particle systems. An annihilation operator (usually denoted $\hat{a}$) lowers the number of particles in a given state by one. A creation operator (usually denoted $\hat{a}^\dagger$) increases the number of particles in a given state by one, and it is the adjoint of the annihilation operator. In many subfields of physics and chemistry, the use of these operators instead of wavefunctions is known as second quantization. They were introduced by Paul Dirac. Creation and annihilation operators can act on states of various types of particles. For example, in quantum chemistry and many-body theory the creation and annihilation operators often act on electron states. They can also refer specifically to the ladder operators for the quantum harmonic oscillator. In the latter case, the raising operator is interpreted as a creation operator, adding a quantum of energy to the oscillator system (similarly for the lowering operator). They can be used to represent phonons. Constructing Hamiltonians using these operators has the advantage that the theory automatically satisfies the cluster decomposition theorem. The mathematics for the creation and annihilation operators for bosons is the same as for the ladder operators of the quantum harmonic oscillator. For example, the commutator of the creation and annihilation operators that are associated with the same boson state equals one, while all other commutators vanish. However, for fermions the mathematics is different, involving anticommutators instead of commutators. Ladder operators for the quantum harmonic oscillator: In the context of the quantum harmonic oscillator, one reinterprets the ladder operators as creation and annihilation operators, adding or subtracting fixed quanta of energy to the oscillator system. Creation/annihilation operators are different for bosons (integer spin) and fermions (half-integer spin). This is because their wavefunctions have different symmetry properties. First consider the simpler bosonic case of the photons of the quantum harmonic oscillator. Ladder operators for the quantum harmonic oscillator: Start with the Schrödinger equation for the one-dimensional time-independent quantum harmonic oscillator,

$$\left(-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}m\omega^2 x^2\right)\psi(x) = E\,\psi(x).$$

Make the coordinate substitution $x = \sqrt{\hbar/(m\omega)}\;q$ to nondimensionalize the differential equation. The Schrödinger equation for the oscillator becomes

$$\frac{\hbar\omega}{2}\left(-\frac{d^2}{dq^2} + q^2\right)\psi(q) = E\,\psi(q).$$

Note that the quantity $\hbar\omega = h\nu$ is the same energy as that found for light quanta, and that the parenthesis in the Hamiltonian can be written as

$$-\frac{d^2}{dq^2} + q^2 = \left(-\frac{d}{dq} + q\right)\left(\frac{d}{dq} + q\right) + \frac{d}{dq}\,q - q\,\frac{d}{dq}.$$

The last two terms can be simplified by considering their effect on an arbitrary differentiable function $f(q)$:

$$\left(\frac{d}{dq}\,q - q\,\frac{d}{dq}\right)f(q) = \frac{d}{dq}\big(q\,f(q)\big) - q\,\frac{df(q)}{dq} = f(q),$$

which implies

$$\frac{d}{dq}\,q - q\,\frac{d}{dq} = 1,$$

coinciding with the usual canonical commutation relation $-i[q,p] = 1$ in the position space representation $p := -i\frac{d}{dq}$. Therefore,

$$-\frac{d^2}{dq^2} + q^2 = \left(-\frac{d}{dq} + q\right)\left(\frac{d}{dq} + q\right) + 1,$$

and the Schrödinger equation for the oscillator becomes, with substitution of the above and rearrangement of the factor of 1/2,

$$\hbar\omega\left[\frac{1}{\sqrt{2}}\left(-\frac{d}{dq} + q\right)\,\frac{1}{\sqrt{2}}\left(\frac{d}{dq} + q\right) + \frac{1}{2}\right]\psi(q) = E\,\psi(q).$$

If one defines $a^\dagger = \frac{1}{\sqrt{2}}\left(-\frac{d}{dq} + q\right)$ as the "creation operator" or the "raising operator", and $a = \frac{1}{\sqrt{2}}\left(\frac{d}{dq} + q\right)$ as the "annihilation operator" or the "lowering operator", the Schrödinger equation for the oscillator reduces to

$$\hbar\omega\left(a^\dagger a + \frac{1}{2}\right)\psi(q) = E\,\psi(q).$$

This is significantly simpler than the original form. Further simplifications of this equation enable one to derive all the properties listed above thus far.
Ladder operators for the quantum harmonic oscillator: Letting $p = -i\frac{d}{dq}$, where $p$ is the nondimensionalized momentum operator, one has

$$[q, p] = i$$

and

$$a = \frac{1}{\sqrt{2}}\left(q + ip\right), \qquad a^\dagger = \frac{1}{\sqrt{2}}\left(q - ip\right).$$

Note that these imply

$$[a, a^\dagger] = \frac{1}{2}\,[q + ip,\; q - ip] = -i[q, p] = 1.$$

The operators $a$ and $a^\dagger$ may be contrasted to normal operators, which commute with their adjoints. Using the commutation relations given above, the Hamiltonian operator can be expressed as

$$\hat{H} = \hbar\omega\left(a\,a^\dagger - \frac{1}{2}\right) = \hbar\omega\left(a^\dagger a + \frac{1}{2}\right). \qquad (*)$$

One may compute the commutation relations between the $a$ and $a^\dagger$ operators and the Hamiltonian:

$$[\hat{H}, a] = -\hbar\omega\,a, \qquad [\hat{H}, a^\dagger] = \hbar\omega\,a^\dagger.$$

These relations can be used to easily find all the energy eigenstates of the quantum harmonic oscillator as follows. Ladder operators for the quantum harmonic oscillator: Assuming that $\psi_n$ is an eigenstate of the Hamiltonian, $\hat{H}\psi_n = E_n\psi_n$. Using these commutation relations, it follows that

$$\hat{H}\,a\psi_n = (E_n - \hbar\omega)\,a\psi_n, \qquad \hat{H}\,a^\dagger\psi_n = (E_n + \hbar\omega)\,a^\dagger\psi_n.$$

This shows that $a\psi_n$ and $a^\dagger\psi_n$ are also eigenstates of the Hamiltonian, with eigenvalues $E_n - \hbar\omega$ and $E_n + \hbar\omega$ respectively. This identifies the operators $a$ and $a^\dagger$ as "lowering" and "raising" operators between adjacent eigenstates. The energy difference between adjacent eigenstates is $\Delta E = \hbar\omega$. The ground state can be found by assuming that the lowering operator possesses a nontrivial kernel: $a\psi_0 = 0$ with $\psi_0 \neq 0$. Applying the Hamiltonian to the ground state,

$$\hat{H}\psi_0 = \hbar\omega\left(a^\dagger a + \frac{1}{2}\right)\psi_0 = \frac{\hbar\omega}{2}\,\psi_0.$$

So $\psi_0$ is an eigenfunction of the Hamiltonian. Ladder operators for the quantum harmonic oscillator: This gives the ground state energy $E_0 = \hbar\omega/2$, which allows one to identify the energy eigenvalue of any eigenstate $\psi_n$ as

$$E_n = \left(n + \frac{1}{2}\right)\hbar\omega.$$

Furthermore, it turns out that the first-mentioned operator in (*), the number operator $N = a^\dagger a$, plays the most important role in applications, while the second one, $a\,a^\dagger$, can simply be replaced by $N + 1$. Consequently,

$$\hat{H} = \hbar\omega\left(N + \frac{1}{2}\right).$$

The time-evolution operator is then

$$U(t) = e^{-it\hat{H}/\hbar} = e^{-it\omega\left(a^\dagger a + 1/2\right)}.$$

Explicit eigenfunctions: The ground state $\psi_0(q)$ of the quantum harmonic oscillator can be found by imposing the condition that $a\psi_0(q) = 0$. Written out as a differential equation, the wavefunction satisfies

$$q\,\psi_0 + \frac{d\psi_0}{dq} = 0,$$

with the solution

$$\psi_0(q) = C\,\exp\!\left(-\frac{q^2}{2}\right).$$

The normalization constant $C$ is found to be $1/\sqrt[4]{\pi}$ from $\int_{-\infty}^{\infty}\psi_0^*\psi_0\,dq = 1$, using the Gaussian integral. Explicit formulas for all the eigenfunctions can now be found by repeated application of $a^\dagger$ to $\psi_0$. Matrix representation: The matrix expression of the creation and annihilation operators of the quantum harmonic oscillator with respect to the above orthonormal basis is

$$a^\dagger = \begin{pmatrix} 0 & 0 & 0 & \cdots \\ \sqrt{1} & 0 & 0 & \cdots \\ 0 & \sqrt{2} & 0 & \cdots \\ 0 & 0 & \sqrt{3} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \qquad a = \begin{pmatrix} 0 & \sqrt{1} & 0 & \cdots \\ 0 & 0 & \sqrt{2} & \cdots \\ 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

These can be obtained via the relationships $a^\dagger_{ij} = \langle\psi_i|a^\dagger|\psi_j\rangle$ and $a_{ij} = \langle\psi_i|a|\psi_j\rangle$. The eigenvectors $\psi_i$ are those of the quantum harmonic oscillator, and are sometimes called the "number basis".
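These matrices are easy to experiment with numerically. Below is a minimal numpy sketch (ours, not from the source; the helper name `ladder_ops` is invented) that builds truncated number-basis matrices for a and a†, checks that ħω(a†a + 1/2) is diagonal with the eigenvalues (n + 1/2), and verifies the commutation relation [a, a†] = 1 away from the truncation edge:

```python
import numpy as np

def ladder_ops(dim):
    """Truncated number-basis matrices: a|n> = sqrt(n)|n-1>, adag = a†."""
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)  # sqrt(1..dim-1) on the superdiagonal
    return a, a.conj().T

dim = 6
a, adag = ladder_ops(dim)

# Number operator and Hamiltonian in units of hbar*omega
N = adag @ a
H = N + 0.5 * np.eye(dim)
print(np.diag(H))  # [0.5 1.5 2.5 3.5 4.5 5.5] -- the (n + 1/2) spectrum

# [a, a†] = 1 holds exactly except in the last row/column, an artifact of truncation
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(dim - 1)))  # True
```

The deviation in the final diagonal entry of the commutator comes from cutting off the infinite-dimensional number basis, not from the algebra itself.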
Generalized creation and annihilation operators: In general, the CCR algebra is infinite dimensional. If we take a Banach space completion, it becomes a C*-algebra. The CCR algebra over $H$ is closely related to, but not identical to, a Weyl algebra. Generalized creation and annihilation operators: For fermions, the (fermionic) CAR algebra over $H$ is constructed similarly, but using anticommutator relations instead, namely

$$\{a(f), a(g)\} = \{a^\dagger(f), a^\dagger(g)\} = 0, \qquad \{a(f), a^\dagger(g)\} = \langle f|g\rangle.$$

The CAR algebra is finite dimensional only if $H$ is finite dimensional. If we take a Banach space completion (only necessary in the infinite dimensional case), it becomes a C*-algebra. The CAR algebra is closely related, but not identical, to a Clifford algebra. Generalized creation and annihilation operators: Physically speaking, $a(f)$ removes (i.e. annihilates) a particle in the state $|f\rangle$ whereas $a^\dagger(f)$ creates a particle in the state $|f\rangle$. The free field vacuum state is the state $|0\rangle$ with no particles, characterized by

$$a(f)\,|0\rangle = 0 \quad \text{for all } f.$$

If $|f\rangle$ is normalized so that $\langle f|f\rangle = 1$, then $N = a^\dagger(f)a(f)$ gives the number of particles in the state $|f\rangle$. Creation and annihilation operators for reaction-diffusion equations: The annihilation and creation operator description has also been useful to analyze classical reaction-diffusion equations, such as the situation when a gas of molecules $A$ diffuses and the molecules interact on contact, forming an inert product: $A + A \to \emptyset$. To see how this kind of reaction can be described by the annihilation and creation operator formalism, consider $n_i$ particles at a site $i$ on a one-dimensional lattice. Each particle moves to the right or left with a certain probability, and each pair of particles at the same site annihilates each other with a certain other probability. Creation and annihilation operators for reaction-diffusion equations: The probability that one particle leaves the site during the short time period $dt$ is proportional to $n_i\,dt$: let us say a probability $\alpha n_i\,dt$ to hop left and $\alpha n_i\,dt$ to hop right. All $n_i$ particles will stay put with a probability $1 - 2\alpha n_i\,dt$. (Since $dt$ is so short, the probability that two or more will leave during $dt$ is very small and will be ignored.) We can now describe the occupation of particles on the lattice as a 'ket' of the form $|\ldots, n_{-1}, n_0, n_1, \ldots\rangle$. It represents the juxtaposition (or conjunction, or tensor product) of the number states $\ldots, |n_{-1}\rangle, |n_0\rangle, |n_1\rangle, \ldots$ located at the individual sites of the lattice. Recall that

$$a^\dagger|n\rangle = \sqrt{n+1}\,|n+1\rangle \quad \text{and} \quad a|n\rangle = \sqrt{n}\,|n-1\rangle$$

for all $n \geq 0$, while $a|0\rangle = 0$. This definition of the operators will now be changed to accommodate the "non-quantum" nature of this problem, and we shall use the following definition:

$$a^\dagger|n\rangle = |n+1\rangle, \qquad a|n\rangle = n\,|n-1\rangle;$$

note that even though the behavior of the operators on the kets has been modified, these operators still obey the commutation relation $[a, a^\dagger] = 1$. Now define $a_i$ so that it applies $a$ to $|n_i\rangle$. Correspondingly, define $a_i^\dagger$ as applying $a^\dagger$ to $|n_i\rangle$. Thus, for example, the net effect of $a_{i-1}a_i^\dagger$ is to move a particle from the $(i-1)$-th to the $i$-th site while multiplying with the appropriate factor. Creation and annihilation operators for reaction-diffusion equations: This allows writing the pure diffusive behavior of the particles as

$$\partial_t|\psi\rangle = -\alpha\sum_i \left(a_i^\dagger - a_{i+1}^\dagger\right)\left(a_i - a_{i+1}\right)|\psi\rangle.$$

The reaction term can be deduced by noting that $n$ particles can interact in $n(n-1)$ different ways, so that the probability that a pair annihilates is $\lambda n(n-1)\,dt$, yielding a term

$$\lambda\sum_i \left(a_i a_i - a_i^\dagger a_i^\dagger a_i a_i\right)$$

where number state $n$ is replaced by number state $n - 2$ at site $i$ at a certain rate. Creation and annihilation operators for reaction-diffusion equations: Thus the state evolves by

$$\partial_t|\psi\rangle = -\alpha\sum_i \left(a_i^\dagger - a_{i+1}^\dagger\right)\left(a_i - a_{i+1}\right)|\psi\rangle + \lambda\sum_i \left(a_i^2 - a_i^{\dagger 2}a_i^2\right)|\psi\rangle.$$

Other kinds of interactions can be included in a similar manner.
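A tiny numpy check (ours, not from the source) that the modified, "non-quantum" definitions above still satisfy [a, a†] = 1 on a truncated number basis:

```python
import numpy as np

dim = 8
# Modified ladder operators of the reaction-diffusion formalism:
# adag |n> = |n+1>,  a |n> = n |n-1>
adag = np.diag(np.ones(dim - 1), k=-1)   # plain shift up by one
a = np.diag(np.arange(1, dim), k=1)      # multiply by n, shift down by one

comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(dim - 1)))  # True away from the truncation edge
```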
This kind of notation allows quantum field theoretic techniques to be used in the analysis of reaction-diffusion systems. Creation and annihilation operators in quantum field theories: In quantum field theories and many-body problems one works with creation and annihilation operators of quantum states, $a_i^\dagger$ and $a_i$. These operators change the eigenvalues of the number operator

$$N = \sum_i n_i = \sum_i a_i^\dagger a_i$$

by one, in analogy to the harmonic oscillator. The indices (such as $i$) represent quantum numbers that label the single-particle states of the system; hence, they are not necessarily single numbers. For example, a tuple of quantum numbers $(n, \ell, m, s)$ is used to label states in the hydrogen atom. Creation and annihilation operators in quantum field theories: The commutation relations of creation and annihilation operators in a multiple-boson system are

$$[a_i, a_j^\dagger] = \delta_{ij}, \qquad [a_i^\dagger, a_j^\dagger] = [a_i, a_j] = 0,$$

where $[\cdot,\cdot]$ is the commutator and $\delta_{ij}$ is the Kronecker delta. For fermions, the commutator is replaced by the anticommutator $\{\cdot,\cdot\}$:

$$\{a_i, a_j^\dagger\} = \delta_{ij}, \qquad \{a_i^\dagger, a_j^\dagger\} = \{a_i, a_j\} = 0.$$

Therefore, exchanging disjoint (i.e. $i \neq j$) operators in a product of creation or annihilation operators will reverse the sign in fermion systems, but not in boson systems. Creation and annihilation operators in quantum field theories: If the states labelled by $i$ are an orthonormal basis of a Hilbert space $H$, then the result of this construction coincides with the CCR algebra and CAR algebra construction in the previous section but one. If they represent "eigenvectors" corresponding to the continuous spectrum of some operator, as for unbound particles in QFT, then the interpretation is more subtle. Creation and annihilation operators in quantum field theories: Normalization: While Zee obtains the momentum space normalization $[\hat{a}_p, \hat{a}_q^\dagger] = \delta(p - q)$ via the symmetric convention for Fourier transforms, Tong and Peskin & Schroeder use the common asymmetric convention to obtain $[\hat{a}_p, \hat{a}_q^\dagger] = (2\pi)^3\,\delta(p - q)$. Each derives $[\hat{\phi}(x), \hat{\pi}(x')] = i\,\delta(x - x')$. Srednicki additionally merges the Lorentz-invariant measure into his asymmetric Fourier measure, $\widetilde{dk} = \frac{d^3k}{(2\pi)^3\,2\omega}$, yielding $[\hat{a}_k, \hat{a}_{k'}^\dagger] = (2\pi)^3\,2\omega\,\delta(k - k')$.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Unified interoperability** Unified interoperability: Unified interoperability is the property of a system that allows for the integration of real-time and non-real-time communications, activities, data, and information services (i.e., unified) and the display and coordination of those services across systems and devices (i.e., interoperability). Unified interoperability provides the capability to communicate and exchange processing across different applications, data, and infrastructure. Unified communications: Unified communications has been led by the business world, which has a need for efficiency, simplicity, and speed. Rather than a single tool or product, unified communications is a set of products that deliver a nearly identical user experience across multiple devices or media types. The system begins with "presence information" - a feature of telecommunications technology that "senses" where a user is in relation to the technology. This change has been dominated by telecommunications providers integrating video, instant messaging, voice, and collaboration. Unified Communications Interoperability Forum: In May 2010, a number of communications technology vendors founded a nonprofit organization for the advancement of interoperability. The goal of the Unified Communications Interoperability Forum (UCIF) is to enable complete interoperability of hardware and software across huge networks of systems. The UCIF relies on existing standards rather than the authoring of new ones. Members of the UCIF include (founding members marked with *): HP*, Microsoft*, Polycom*, Logitech*, Juniper Networks*, Acme Packet, Huawei, Aspect Software, AudioCodes, Broadcom, BroadSoft, Brocade Communications Systems, ClearOne, Jabra, Plantronics, Siemens Enterprise Communications, and Teliris. Interoperability: In the broadest sense, interoperability is the ability of multiple systems (usually computer systems) to work together seamlessly. In the Information Age, interoperability is a highly desirable trait for most business systems. Likewise, as homes become more infused with networked technologies (desktop PCs, tablet computers, smartphones, Internet-ready televisions), interoperability becomes an issue even for the average consumer. Computer operating systems are a prime example of interoperability, wherein several programs from different vendors are able to co-exist and, in many cases, exchange data in a meaningful way. An operating system is also "unified" in the sense that it presents the user with a common, easy-to-understand computer interface for executing numerous tasks. The unified interoperability of computers means that users need not have specialized knowledge about how computers function. A system with the property of interoperability will retain that property well into the future: the system will be adaptable to rapid changes in technology with only minor adjustments. Syntactic interoperability: The most fundamental level of interoperability is syntactic interoperability. At this level, systems can exchange data without loss or corruption. Certain data formats are especially suited to the exchange of data between diverse systems. XML (extensible markup language), for instance, allows data to be transmitted in a format comprehensible to both people and machines, as the short sketch below illustrates. SQL (structured query language), on the other hand, is an industry-standard, nearly universal format for compiling information in a database. SQL databases are essential for a business such as Amazon.com, with its vast catalog of products, attributes, and consumer reviews.
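As a concrete, hypothetical illustration of syntactic interoperability (not from the source; the element names are invented for the example), the following Python sketch serializes a record to XML on one side and parses the same bytes back on the other, with no loss of information:

```python
import xml.etree.ElementTree as ET

# Producer side: serialize a record into XML.
product = ET.Element("product", id="42")
ET.SubElement(product, "name").text = "Widget"
ET.SubElement(product, "price", currency="USD").text = "19.99"
payload = ET.tostring(product, encoding="unicode")

# Consumer side: a completely different system parses the same payload.
parsed = ET.fromstring(payload)
print(parsed.get("id"), parsed.find("name").text, parsed.find("price").text)
# -> 42 Widget 19.99
```

Any XML-conforming consumer, in any language, can recover the same record, which is exactly the "exchange without loss or corruption" that syntactic interoperability names.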
Semantic interoperability: Semantic interoperability goes a step further than syntactic interoperability. Systems with semantic interoperability can not only exchange data effortlessly, but also interpret and communicate that data to human users in a meaningful, actionable way. Distributed functions and processing interoperability: Distributed functions and processing interoperability focuses on the ability to create new products, applications and operating models without traditional intermediaries like data models, databases or large system integrations, by establishing a unified interoperability framework between normally diverse and distributed sources, data, technology and other assets. It enables business problems to be solved by connecting interoperable components of any characteristic into a single, uniform, global "instruction chain" of functionality. Components use existing IP or applications, and so integrate disparate technology into a uniform platform. Configuration models combine runtime processing infrastructure for common and predictable performance, security, resiliency, and availability with the whole process, enabling the uniform exchange of data and consistent processing across components, irrespective of technology, format or location. Benefits: Unified interoperability offers benefits for every stakeholder in a system. For customers and end-users of a system, unified interoperability offers a more convenient, satisfying experience. In business, interoperability helps lower costs and improves overall efficiency. As businesses strive to maximize the efficiency of their integrated systems, they encourage innovation and problem solving.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ROYGBIV** ROYGBIV: ROYGBIV is an acronym for the sequence of hues commonly described as making up a rainbow: red, orange, yellow, green, blue, indigo, and violet. There are several mnemonics that can be used for remembering this color sequence, such as the name "Roy G. Biv" or sentences such as "Richard Of York Gave Battle In Vain". History: In the Renaissance, several artists tried to establish a sequence of up to seven primary colors from which all other colors could be mixed. In line with this artistic tradition, Sir Isaac Newton divided his color circle, which he constructed to explain additive color mixing, into seven colors. Originally he used only five colors, but later he added orange and indigo to match the number of musical notes in the major scale. The Munsell color system, the first formal color notation system (1905), names only five "principal hues": red, yellow, green, blue, and purple. Mnemonics: Isaac Newton's color sequence (red, orange, yellow, green, blue, indigo, violet) is kept alive today by several popular mnemonics. One is simply the nonsense word roygbiv, which is an acronym for the seven colors. This word can also be envisioned as a person's name, "Roy G. Biv". Another traditional mnemonic device has been to turn the initial letters of the seven spectral colors into a sentence, most commonly "Richard Of York Gave Battle In Vain" (or the slight alternative "Richard Of York Gained Battles In Vain"). This mnemonic is said to refer to the defeat and death of Richard, Duke of York at the Battle of Wakefield in 1460, or to his son Richard III being defeated at the Battle of Bosworth Field in 1485. Another sentence sometimes used is "Read Out Your Good Book In Verse", referring to the Bible. Mnemonics: The color sequence may also be recalled in reverse order with the mnemonic vibgyor. In the modern era, these traditional mnemonics have been adapted to reflect the use of the rainbow flag as a symbol of LGBT movements. In Ireland, a campaign to reduce homophobic prejudice among schoolchildren revolves around the phrase "Respect Others, You Grow By Including Variety".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Complex hyperbolic space** Complex hyperbolic space: In mathematics, complex hyperbolic space is a Hermitian manifold which is the equivalent of the real hyperbolic space in the context of complex manifolds. The complex hyperbolic space is a Kähler manifold, and it is characterised by being the only simply connected Kähler manifold whose holomorphic sectional curvature is constant, equal to -1. Its underlying Riemannian manifold has non-constant negative curvature, pinched between -1 and -1/4 (or -4 and -1, according to the choice of a normalization of the metric): in particular, it is a CAT(-1/4) space. Complex hyperbolic spaces are also the symmetric spaces associated with the Lie groups $PU(n,1)$. They constitute one of the three families of rank one symmetric spaces of noncompact type, together with real and quaternionic hyperbolic spaces, a classification which is completed by one exceptional space, the Cayley plane. Construction of the complex hyperbolic space: Projective model. Let

$$\langle u, v\rangle := -u_1\bar{v}_1 + u_2\bar{v}_2 + \cdots + u_{n+1}\bar{v}_{n+1}$$

be a pseudo-Hermitian form of signature $(n,1)$ on the complex vector space $\mathbb{C}^{n+1}$. The projective model of the complex hyperbolic space is the projectivized space of all negative vectors for this form:

$$H^n_{\mathbb{C}} = \{[\xi]\in\mathbb{CP}^n \mid \langle\xi,\xi\rangle < 0\}.$$

As an open set of the complex projective space, this space is endowed with the structure of a complex manifold. It is biholomorphic to the unit ball of $\mathbb{C}^n$, as one can see by noting that a negative vector must have nonzero first coordinate, and hence has a unique representative with first coordinate equal to 1 in the projective space. The condition $\langle\xi,\xi\rangle < 0$ when $\xi = (1, x_1, \ldots, x_n)$ is equivalent to $\sum_{i=1}^n |x_i|^2 < 1$. The map sending the point $(x_1, \ldots, x_n)$ of the unit ball of $\mathbb{C}^n$ to the point $[1 : x_1 : \cdots : x_n]$ of the projective space thus defines the required biholomorphism. This model is the equivalent of the Poincaré disk model. Contrary to the real hyperbolic space, the complex hyperbolic space cannot be defined as a sheet of the hyperboloid $\langle x, x\rangle = -1$, because the projection of this hyperboloid onto the projective model has connected fiber $S^1$ (the fiber being $\mathbb{Z}/2\mathbb{Z}$ in the real case). A Hermitian metric is defined on $H^n_{\mathbb{C}}$ in the following way: if $p \in \mathbb{C}^{n+1}$ satisfies $\langle p, p\rangle = -1$, then the restriction of $\langle\cdot,\cdot\rangle$ to the orthogonal space $(\mathbb{C}p)^\perp \subset \mathbb{C}^{n+1}$ defines a positive definite Hermitian product on this space, and because the tangent space of $H^n_{\mathbb{C}}$ at the point $[p]$ can be naturally identified with $(\mathbb{C}p)^\perp$, this defines a Hermitian inner product on $T_{[p]}H^n_{\mathbb{C}}$. As can be seen by computation, this inner product does not depend on the choice of the representative $p$. In order to have holomorphic sectional curvature equal to -1 and not -4, one needs to renormalize this metric by a factor of 1/2. This metric is a Kähler metric. Siegel model. The Siegel model of complex hyperbolic space is the subset of $(w, z) \in \mathbb{C}\times\mathbb{C}^{n-1}$ such that

$$i(\bar{w} - w) > 2\,z\bar{z}.$$

It is biholomorphic to the unit ball in $\mathbb{C}^n$ via the Cayley transform

$$(w, z) \mapsto \left(\frac{w - i}{w + i},\; \frac{2z}{w + i}\right).$$

Group of holomorphic isometries and symmetric space: The group of holomorphic isometries of the complex hyperbolic space is the Lie group $PU(n,1)$. This group acts transitively on the complex hyperbolic space, and the stabilizer of a point is isomorphic to the unitary group $U(n)$. The complex hyperbolic space is thus homeomorphic to the homogeneous space $PU(n,1)/U(n)$.
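To see the Siegel-to-ball biholomorphism in action, here is a small numerical check (our sketch, not from the source), reading the transform as ((w - i)/(w + i), 2z/(w + i)): points satisfying the Siegel condition, equivalent to Im(w) > |z|², land inside the unit ball.

```python
import numpy as np

rng = np.random.default_rng(0)

def cayley(w, z):
    # Siegel model -> unit ball model, as given above
    return (w - 1j) / (w + 1j), 2 * z / (w + 1j)

for _ in range(5):
    z = rng.normal(size=2) + 1j * rng.normal(size=2)           # z in C^{n-1}, here n = 3
    w = rng.normal() + 1j * (np.vdot(z, z).real + rng.random() + 0.1)  # Im(w) > |z|^2
    u, v = cayley(w, z)
    print(abs(u) ** 2 + np.vdot(v, v).real < 1)                # True: inside the unit ball
```

With this reading of the transform the correspondence is exact: |u|² + |v|² < 1 holds if and only if the Siegel condition does.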
The stabilizer $U(n)$ is the maximal compact subgroup of $PU(n,1)$. As a consequence, the complex hyperbolic space is the Riemannian symmetric space $SU(n,1)/S(U(n)\times U(1))$, where $SU(n,1)$ is the pseudo-unitary group. Curvature: The group of holomorphic isometries $PU(n,1)$ acts transitively on the tangent complex lines of the hyperbolic complex space. This is why this space has constant holomorphic sectional curvature, which can be computed to be equal to -4 (with the above normalization of the metric). This property characterizes the hyperbolic complex space: up to isometric biholomorphism, there is only one simply connected complete Kähler manifold of given constant holomorphic sectional curvature. Furthermore, when a Hermitian manifold has constant holomorphic sectional curvature equal to $k$, the sectional curvature of every real tangent plane $\Pi$ is completely determined by the formula

$$K(\Pi) = \frac{k}{4}\left(1 + 3\cos^2(\alpha(\Pi))\right),$$

where $\alpha(\Pi)$ is the angle between $\Pi$ and $J\Pi$, i.e. the infimum of the angles between a vector in $\Pi$ and a vector in $J\Pi$. This angle equals 0 if and only if $\Pi$ is a complex line, and equals $\pi/2$ if and only if $\Pi$ is totally real. Thus the sectional curvature of the complex hyperbolic space varies from -4 (for complex lines) to -1 (for totally real planes). Curvature: In complex dimension 1, every real plane in the tangent space is a complex line: thus the hyperbolic complex space of dimension 1 has constant curvature equal to -1, and by the uniformization theorem, it is isometric to the real hyperbolic plane. Hyperbolic complex spaces can thus be seen as another high-dimensional generalization of the hyperbolic plane, less standard than the real hyperbolic spaces. A third possible generalization is the homogeneous space $SL_n(\mathbb{R})/SO_n(\mathbb{R})$, which for $n = 2$ again coincides with the hyperbolic plane, but becomes a symmetric space of rank greater than 1 when $n \geq 3$. Totally geodesic subspaces: Every totally geodesic submanifold of the complex hyperbolic space of dimension $n$ is one of the following: a copy of a complex hyperbolic space of smaller dimension, or a copy of a real hyperbolic space of real dimension at most $n$. In particular, there is no codimension 1 totally geodesic subspace of the complex hyperbolic space.
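A one-line numerical restatement of the pinching, under the formula above with k = -4 (a sketch of ours, not from the source):

```python
import numpy as np

def sectional_curvature(alpha, k=-4.0):
    """K(Pi) = (k/4) * (1 + 3*cos^2(alpha)), alpha = angle between Pi and J*Pi."""
    return (k / 4.0) * (1.0 + 3.0 * np.cos(alpha) ** 2)

print(sectional_curvature(0.0))        # -4.0 : complex lines
print(sectional_curvature(np.pi / 2))  # -1.0 : totally real planes
```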
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thyroid ima artery** Thyroid ima artery: The thyroid ima artery (thyroidea ima artery, arteria thyroidea ima, thyroid artery of Neubauer or the lowest thyroid artery) is an artery of the head and neck. It is an anatomical variant that, when present, primarily supplies blood to the thyroid gland, or, in rare cases, to the trachea, the parathyroid glands and the thymus gland (as the thymica accessoria). It has also been reported to be a compensatory artery when one or both of the inferior thyroid arteries are absent, and in a few cases the only source of blood to the thyroid gland. It varies in origin, size, blood supply, and termination; it occurs in around 3.8% of the population and is 4.5 times more common in fetuses than in adults. Because of this variability and its rarity, it may lead to surgical complications, particularly during tracheostomy and other airway management procedures. Structure: The thyroid ima artery is an embryonic artery; it occurs because of the failure of the vessel to close, leaving it patent (open). The artery has a variable origin. It mostly arises from the brachiocephalic trunk, but may also originate from the aortic arch, the right common carotid, the subclavian, the pericardiacophrenic artery, the thyrocervical trunk, the transverse scapular or the internal thoracic artery. It ascends in front of the trachea in the superior mediastinum to the lower part of the thyroid gland. It differs in size, ranging from as small as the accessory thyroid arteries to the size of the primary thyroid vessels. The diameter of the lumen of the artery ranges from 3 to 5 millimetres (0.12 to 0.20 in). The artery may be present as an accessory thyroid artery, but sometimes appears to compensate for incompetence or absence of one or more main thyroid vessels. Since it begins below the thyroid gland and ascends, it is mostly associated with absence or reduced size of the inferior thyroid arteries; in such cases, it is known as the accessory inferior thyroid artery. In rare cases, the artery has been seen to compensate for the absence of one or both superior thyroid arteries. In cases where the thyroid ima artery is shorter, it ends by supplying the thymus gland and is known as the thymica accessoria. Function: When present, the thyroid ima's chief supply is the thyroid gland, though it also supplies the trachea. The artery may extend to and supply the parathyroid glands. An infrequently observed artery, it is more frequently reported in the context of enlarged parathyroid glands (parathyroid adenomas). The artery ends by supplying the thyroid gland, or the parathyroid glands, as a single unit or as multiple branches. In rare cases, the artery has also been found to be the only supply of the parathyroid gland. Clinical significance: The artery is only present in approximately 3-10% of the population. The thyroid ima artery is of surgical importance: due to its relatively small size and infrequent presence, it can cause complications such as severe bleeding. Knowledge of the occurrence of the artery is especially important during tracheostomy, sternotomy and thyroidectomy. Because the artery is smaller than the other thyroid vessels and originates from one of the larger vessels, an inadvertent cut during surgery may cause severe hemorrhage and significant blood loss. The artery, if cut, may retract into the mediastinum and further complicate the condition by causing hemorrhage and clots in the thoracic cavity.
History: The thyroid ima artery was first described by the German anatomist Johann Ernst Neubauer in 1772; hence it is also named the thyroid artery of Neubauer. Because the artery originates lower than the inferior thyroid arteries, it is also known as the lowest thyroid artery. Arteria thyroidea ima is the Latin name of the artery. Other animals: The presence of the thyroid ima artery has also been observed in other higher primates. The artery has been reported in gorillas, gibbons, macaques and gray langurs. Variations in origin were also seen; it was found to originate from the aorta in the thorax, or the carotid in the neck.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Numerical linear algebra** Numerical linear algebra: Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible. Numerical linear algebra: Numerical linear algebra aims to solve problems of continuous mathematics using finite precision computers, so its applications to the natural and social sciences are as vast as the applications of continuous mathematics. It is often a fundamental part of engineering and computational science problems, such as image and signal processing, telecommunication, computational finance, materials science simulations, structural biology, data mining, bioinformatics, and fluid dynamics. Matrix methods are particularly used in finite difference methods, finite element methods, and the modeling of differential equations. Noting the broad applications of numerical linear algebra, Lloyd N. Trefethen and David Bau, III argue that it is "as fundamental to the mathematical sciences as calculus and differential equations", even though it is a comparatively small field. Because many properties of matrices and vectors also apply to functions and operators, numerical linear algebra can also be viewed as a type of functional analysis which has a particular emphasis on practical algorithms. Common problems in numerical linear algebra include obtaining matrix decompositions like the singular value decomposition, the QR factorization, the LU factorization, or the eigendecomposition, which can then be used to answer common linear algebraic problems like solving linear systems of equations, locating eigenvalues, or least squares optimisation. The central concern of numerical linear algebra - developing algorithms that do not introduce errors when applied to real data on a finite precision computer - is often addressed by iterative methods rather than direct ones. History: Numerical linear algebra was developed by computer pioneers like John von Neumann, Alan Turing, James H. Wilkinson, Alston Scott Householder, George Forsythe, and Heinz Rutishauser, in order to apply the earliest computers to problems in continuous mathematics, such as ballistics problems and the solutions to systems of partial differential equations. The first serious attempt to minimize computer error in the application of algorithms to real data was John von Neumann and Herman Goldstine's work in 1947. The field has grown as technology has increasingly enabled researchers to solve complex problems on extremely large high-precision matrices, and some numerical algorithms have grown in prominence as technologies like parallel computing have made them practical approaches to scientific problems.
Matrix decompositions: Partitioned matrices: For many problems in applied linear algebra, it is useful to adopt the perspective of a matrix as being a concatenation of column vectors. For example, when solving the linear system $x = A^{-1}b$, rather than understanding $x$ as the product of $A^{-1}$ with $b$, it is helpful to think of $x$ as the vector of coefficients in the linear expansion of $b$ in the basis formed by the columns of $A$. Thinking of matrices as a concatenation of columns is also a practical approach for the purposes of matrix algorithms. This is because matrix algorithms frequently contain two nested loops: one over the columns of a matrix $A$, and another over the rows of $A$. For example, for a matrix $A^{m\times n}$ and vectors $x^{n\times 1}$ and $y^{m\times 1}$, we could use the column partitioning perspective to compute $Ax + y$ as

$$Ax + y = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n + y,$$

where $a_j$ denotes the $j$-th column of $A$. Singular value decomposition: The singular value decomposition of a matrix $A^{m\times n}$ is $A = U\Sigma V^*$, where $U$ and $V$ are unitary and $\Sigma$ is diagonal. The diagonal entries of $\Sigma$ are called the singular values of $A$. Because singular values are the square roots of the eigenvalues of $AA^*$, there is a tight connection between the singular value decomposition and eigenvalue decompositions. This means that most methods for computing the singular value decomposition are similar to eigenvalue methods; perhaps the most common method involves Householder procedures. QR factorization: The QR factorization of a matrix $A^{m\times n}$ is a matrix $Q^{m\times m}$ and a matrix $R^{m\times n}$ so that $A = QR$, where $Q$ is orthogonal and $R$ is upper triangular. The two main algorithms for computing QR factorizations are the Gram–Schmidt process and the Householder transformation. The QR factorization is often used to solve linear least-squares problems, and eigenvalue problems (by way of the iterative QR algorithm). Matrix decompositions: LU factorization: An LU factorization of a matrix $A$ consists of a lower triangular matrix $L$ and an upper triangular matrix $U$ so that $A = LU$. The matrix $U$ is found by an upper triangularization procedure which involves left-multiplying $A$ by a series of matrices $M_1, \ldots, M_{n-1}$ to form the product $M_{n-1}\cdots M_1 A = U$, so that equivalently $L = M_1^{-1}\cdots M_{n-1}^{-1}$. Eigenvalue decomposition: The eigenvalue decomposition of a matrix $A^{m\times m}$ is $A = X\Lambda X^{-1}$, where the columns of $X$ are the eigenvectors of $A$, and $\Lambda$ is a diagonal matrix whose diagonal entries are the corresponding eigenvalues of $A$. There is no direct method for finding the eigenvalue decomposition of an arbitrary matrix. Because it is not possible to write a program that finds the exact roots of an arbitrary polynomial in finite time, any general eigenvalue solver must necessarily be iterative. Algorithms: Gaussian elimination: From the numerical linear algebra perspective, Gaussian elimination is a procedure for factoring a matrix $A$ into its LU factorization, which Gaussian elimination accomplishes by left-multiplying $A$ by a succession of matrices, $L_{m-1}\cdots L_2 L_1 A = U$, until $U$ is upper triangular and $L$ is lower triangular, where $L \equiv L_1^{-1} L_2^{-1}\cdots L_{m-1}^{-1}$. Naive programs for Gaussian elimination are notoriously highly unstable, and produce huge errors when applied to matrices with many significant digits. The simplest solution is to introduce pivoting, which produces a modified Gaussian elimination algorithm that is stable. Solutions of linear systems: Numerical linear algebra characteristically approaches matrices as a concatenation of column vectors.
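As a hands-on illustration of these decompositions (our sketch, not from the source), the following numpy/scipy snippet computes the SVD, QR, and pivoted LU factorizations of a random matrix and verifies each reconstruction:

```python
import numpy as np
from scipy.linalg import lu  # LU with partial pivoting: A = P @ L @ U

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 4))

# Singular value decomposition: A = U @ diag(s) @ Vh
U, s, Vh = np.linalg.svd(A, full_matrices=False)
print(np.allclose(A, U @ np.diag(s) @ Vh))  # True

# QR factorization: Q has orthonormal columns, R is upper triangular
Q, R = np.linalg.qr(A)
print(np.allclose(A, Q @ R))                # True

# LU factorization, with the pivoting the text recommends for stability
P, L, Uu = lu(A)
print(np.allclose(A, P @ L @ Uu))           # True
```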
In order to solve the linear system $x = A^{-1}b$, the traditional algebraic approach is to understand $x$ as the product of $A^{-1}$ with $b$. Numerical linear algebra instead interprets $x$ as the vector of coefficients of the linear expansion of $b$ in the basis formed by the columns of $A$. Many different decompositions can be used to solve the linear problem, depending on the characteristics of the matrix $A$ and the vectors $x$ and $b$, which may make one factorization much easier to obtain than others. If $A = QR$ is a QR factorization of $A$, then equivalently $Rx = Q^*b$, a triangular system that is easy to solve once the factorization has been computed. If $A = X\Lambda X^{-1}$ is an eigendecomposition of $A$, and we seek to find $b$ so that $b = Ax$, with $b' = X^{-1}b$ and $x' = X^{-1}x$, then we have $b' = \Lambda x'$. This is closely related to the solution of the linear system using the singular value decomposition, because the singular values of a matrix are the square roots of the eigenvalues of the Gram matrix $A^*A$ (and, for a normal matrix, the absolute values of its eigenvalues). And if $A = LU$ is an LU factorization of $A$, then $Ax = b$ can be solved using the triangular systems $Ly = b$ and $Ux = y$. Least squares optimisation: Matrix decompositions suggest a number of ways to solve the linear system $r = b - Ax$ where we seek to minimize $r$, as in the regression problem. A QR-based approach solves this problem by computing the reduced QR factorization of $A$ and rearranging to obtain $\hat{R}x = \hat{Q}^*b$. This upper triangular system can then be solved for $x$. The SVD also suggests an algorithm for obtaining linear least squares. By computing the reduced SVD decomposition $A = \hat{U}\hat{\Sigma}V^*$ and then computing the vector $\hat{U}^*b$, we reduce the least squares problem to a simple diagonal system. The fact that least squares solutions can be produced by the QR and SVD factorizations means that, in addition to the classical normal equations method for solving least squares problems, these problems can also be solved by methods that include the Gram-Schmidt algorithm and Householder methods. Conditioning and stability: Suppose that a problem is a function $f: X \to Y$, where $X$ is a normed vector space of data and $Y$ is a normed vector space of solutions. For some data point $x \in X$, the problem is said to be ill-conditioned if a small perturbation in $x$ produces a large change in the value of $f(x)$. We can quantify this by defining a condition number which represents how well-conditioned a problem is, defined as the relative condition number

$$\kappa = \lim_{\delta\to 0}\,\sup_{\|\delta x\|\leq\delta}\frac{\|f(x + \delta x) - f(x)\|/\|f(x)\|}{\|\delta x\|/\|x\|}.$$

Instability is the tendency of computer algorithms, which depend on floating-point arithmetic, to produce results that differ dramatically from the exact mathematical solution to a problem. When a matrix contains real data with many significant digits, many algorithms for solving problems like linear systems of equations or least squares optimisation may produce highly inaccurate results. Creating stable algorithms for ill-conditioned problems is a central concern in numerical linear algebra. One example is that the stability of Householder triangularization makes it a particularly robust solution method for linear systems, whereas the instability of the normal equations method for solving least squares problems is a reason to favour matrix decomposition methods like using the singular value decomposition.
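A short numpy sketch (ours, not the source's) of the two least-squares routes just described, checked against the library solver:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 3))
b = rng.normal(size=20)

# Route 1: reduced QR, then solve the small triangular system R x = Q* b
Q, R = np.linalg.qr(A)              # numpy returns the reduced ("economy") QR by default
x_qr = np.linalg.solve(R, Q.T @ b)

# Route 2: reduced SVD, then a diagonal solve: x = V diag(1/s) U* b
U, s, Vh = np.linalg.svd(A, full_matrices=False)
x_svd = Vh.T @ ((U.T @ b) / s)

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x_qr, x_ref), np.allclose(x_svd, x_ref))  # True True
```

Both routes avoid forming the Gram matrix A*A of the normal equations, which is the source of the instability the passage above warns about.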
Some matrix decomposition methods may be unstable, but have straightforward modifications that make them stable; one example is the unstable Gram–Schmidt process, which can easily be changed to produce the stable modified Gram–Schmidt. Another classical problem in numerical linear algebra is the finding that Gaussian elimination is unstable, but becomes stable with the introduction of pivoting. Iterative methods: There are two reasons that iterative algorithms are an important part of numerical linear algebra. First, many important numerical problems have no direct solution; in order to find the eigenvalues and eigenvectors of an arbitrary matrix, we can only adopt an iterative approach. Second, noniterative algorithms for an arbitrary $m\times m$ matrix require $O(m^3)$ time, which is a surprisingly high floor given that matrices contain only $m^2$ numbers. Iterative approaches can take advantage of several features of some matrices to reduce this time. For example, when a matrix is sparse, an iterative algorithm can skip many of the steps that a direct approach would necessarily follow, even if they are redundant steps given a highly structured matrix. Iterative methods: The core of many iterative methods in numerical linear algebra is the projection of a matrix onto a lower dimensional Krylov subspace, which allows features of a high-dimensional matrix to be approximated by iteratively computing the equivalent features of similar matrices starting in a low dimension space and moving to successively higher dimensions. When $A$ is symmetric and we wish to solve the linear problem $Ax = b$, the classical iterative approach is the conjugate gradient method (sketched below). If $A$ is not symmetric, then examples of iterative solutions to the linear problem are the generalized minimal residual method and CGN. If $A$ is symmetric, then to solve the eigenvalue and eigenvector problem we can use the Lanczos algorithm, and if $A$ is non-symmetric, then we can use Arnoldi iteration. Software: Several programming languages use numerical linear algebra optimisation techniques and are designed to implement numerical linear algebra algorithms. These languages include MATLAB, Analytica, Maple, and Mathematica. Other programming languages which are not explicitly designed for numerical linear algebra have libraries that provide numerical linear algebra routines and optimisation; C and Fortran have packages like Basic Linear Algebra Subprograms and LAPACK, Python has the library NumPy, and Perl has the Perl Data Language. Many numerical linear algebra commands in R rely on these more fundamental libraries like LAPACK. More libraries can be found on the List of numerical libraries.
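To make the conjugate gradient method referenced above concrete, here is a minimal, unpreconditioned implementation (a textbook-style sketch of ours, not production code; it assumes A is symmetric positive definite):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b iteratively for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # new A-conjugate search direction
        rs = rs_new
    return x

# Symmetric positive definite test problem
rng = np.random.default_rng(2)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.normal(size=50)
print(np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b)))  # True
```

Note that the loop touches A only through matrix-vector products, which is exactly why the method can exploit sparsity in the way the passage describes.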
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Daisy grubber** Daisy grubber: A daisy grubber is a garden tool that is used to pull out roots. It is effective because it can pull out deep roots yet cause little or no disturbance to the surrounding soil.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hutchinson Patent Stopper** Hutchinson Patent Stopper: Charles G. Hutchinson invented and patented the Hutchinson Patent Stopper in 1879 as a replacement for cork bottle stoppers which were commonly used as stoppers on soda water or pop bottles. His invention employed a wire spring attached to a rubber seal. Production of these stoppers was discontinued after 1912.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Planar lightwave circuit interferometer** Planar lightwave circuit interferometer: An interferometer is an optical measuring device using the principle of light waves canceling and reinforcing each other. Interferometers are typically used to accurately measure distances. Planar lightwave circuits are either optical integrated circuits (ICs) or optical circuit boards made using the same manufacturing techniques as their electronic counterparts, using optical waveguides to route photons the same way that metal traces are used to route electrons in electronic ICs and circuit boards. A planar lightwave circuit interferometer (PLCI) is a planar lightwave circuit configured as an interferometer. PLCIs can take on any form which is rigidly printable, e.g. Mach-Zehnder, Michelson, Young's interferometer, etc. PLCIs are often found in products that are mass-produced, such as multiplexers/demultiplexers used in communications technology.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Typing** Typing: Typing is the process of writing or inputting text by pressing keys on a typewriter, computer keyboard, mobile phone, or calculator. It can be distinguished from other means of text input, such as handwriting and speech recognition. Text can be in the form of letters, numbers and other symbols. The world's first typist was Lillian Sholes from Wisconsin in the United States, the daughter of Christopher Sholes, who invented the first practical typewriter. User interface features such as spell checkers and autocomplete serve to facilitate and speed up typing and to prevent or correct errors the typist may make. Techniques: Hunt and peck: Hunt and peck (two-fingered typing) is a common form of typing in which the typist presses each key individually. Instead of relying on the memorized position of keys, the typist must find each key by sight. Although good accuracy may be achieved, the use of this method may also prevent the typist from being able to see what has been typed without glancing away from the keys, and any typing errors that are made may not be noticed immediately. Because only a few fingers are used in this technique, the fingers are also forced to move a much greater distance. Techniques: Touch typing: In this technique, the typist keeps their eyes on the source copy at all times. Touch typing also involves the use of the home row method, where typists rest their wrists down rather than lifting them up while typing (a position which can cause carpal tunnel syndrome). To avoid this, typists should sit up tall, leaning slightly forward from the waist, place their feet flat on the floor in front of them with one foot slightly in front of the other, and keep their elbows close to their sides with forearms slanted slightly upward to the keyboard; fingers should be curved slightly and rest on the home row. Techniques: Many touch typists also use keyboard shortcuts when typing on a computer. This allows them to edit their document without having to take their hands off the keyboard to use a mouse. An example of a keyboard shortcut is pressing the Ctrl key plus the S key to save a document as they type, or the Ctrl key plus the Z key to undo a mistake. Other shortcuts are the Ctrl key plus the C key to copy, the Ctrl key plus the V key to paste, and the Ctrl key plus the X key to cut. Many experienced typists can feel or sense when they have made an error and can hit the ← Backspace key and make the correction with no increase in time between keystrokes. Techniques: Hybrid: There are many idiosyncratic typing styles in between novice-style "hunt and peck" and touch typing. For example, many "hunt and peck" typists have the keyboard layout memorized and are able to type while focusing their gaze on the screen. Some use just two fingers, while others use 3–6 fingers. Some use their fingers very consistently, with the same finger being used to type the same character every time, while others vary the way they use their fingers. Techniques: One study examining 30 subjects, of varying styles and expertise, has found minimal difference in typing speed between touch typists and self-taught hybrid typists. According to the study, "The number of fingers does not determine typing speed... People using self-taught typing strategies were found to be as fast as trained typists... instead of the number of fingers, there are other factors that predict typing speed... fast typists...
keep their hands fixed on one position, instead of moving them over the keyboard, and more consistently use the same finger to type a certain letter." To quote doctoral candidate Anna Feit: "We were surprised to observe that people who took a typing course performed at similar average speed and accuracy as those that taught typing to themselves and only used 6 fingers on average." Thumbing: A late 20th century trend in typing, primarily used with devices with small keyboards (such as PDAs and smartphones), is thumbing or thumb typing. This can be accomplished using either one thumb or both thumbs, with more proficient typists reaching speeds of 100 words per minute. Similar to desktop keyboards and input devices, overuse of keys which need hard presses, or which have small and unergonomic layouts, could cause thumb tendonitis or other repetitive strain injuries. Words per minute: Words per minute (WPM) is a measure of typing speed, commonly used in recruitment. For the purposes of WPM measurement a word is standardized to five characters or keystrokes. Therefore, "brown" counts as one word, but "mozzarella" counts as two. The benefit of a standardized measurement of input speed is that it enables comparison across language and hardware boundaries: the speed of an Afrikaans-speaking operator in Cape Town can be compared with that of a French-speaking operator in Paris. Today, even written Chinese can be typed very quickly using the combination of a software prediction system and typing the sounds of characters in Pinyin. Such prediction software even allows typing short-hand forms while producing complete characters. For example, the phrase "nǐ chī le ma" (你吃了吗), meaning "Have you eaten yet?", can be typed with just 4 strokes: "nclm". Words per minute: Alphanumeric entry: In one study of average computer users, the average rate for transcription was 33 words per minute, and 19 words per minute for composition. In the same study, when the group was divided into "fast", "moderate" and "slow" groups, the average speeds were 40 wpm, 35 wpm, and 23 wpm respectively. An average professional typist reaches 50 to 80 wpm, while some positions can require 80 to 95 wpm (usually the minimum required for dispatch positions and other typing jobs), and some advanced typists work at speeds above 120 wpm. Two-finger typists, sometimes also referred to as "hunt and peck" typists, commonly reach sustained speeds of about 37 wpm for memorized text and 27 wpm when copying text, but in bursts may be able to reach speeds of 60 to 70 wpm. From the 1920s through the 1970s, typing speed (along with shorthand speed) was an important secretarial qualification, and typing contests were popular and often publicized by typewriter companies as promotional tools. Words per minute: A less common measure of the speed of a typist, CPM is used to identify the number of characters typed per minute. This is a common measurement for typing programs, or typing tutors, as it can give a more accurate measure of a person's typing speed without having to type for a prolonged period of time. The common conversion factor between WPM and CPM is 5. It is also used occasionally for associating the speed of a reader with the amount they have read. CPM has also been applied to 20th century printers, but modern faster printers more commonly use PPM (pages per minute).
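Since a standardized word is five characters, WPM and CPM reduce to simple arithmetic; a minimal helper (hypothetical, for illustration only):

```python
def cpm(chars_typed: int, minutes: float) -> float:
    """Characters per minute."""
    return chars_typed / minutes

def wpm(chars_typed: int, minutes: float) -> float:
    """Standardized words per minute: one 'word' = five characters."""
    return cpm(chars_typed, minutes) / 5

# "mozzarella" (10 characters) counts as two standardized words:
print(wpm(10, 1.0))    # 2.0
print(cpm(300, 2.0))   # 150.0 CPM, i.e. 30 WPM
```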
Words per minute: The fastest typing speed ever, 216 words per minute, was achieved by Stella Pajunas-Garnand from Chicago in 1946 in one minute on an IBM electric typewriter using the QWERTY keyboard layout. As of 2005, writer Barbara Blackburn was the fastest English language typist in the world, according to The Guinness Book of World Records. Using the Dvorak keyboard layout, she had maintained 150 wpm for 50 minutes, and 170 wpm for shorter periods, with a peak speed of 212 wpm. Blackburn, who failed her QWERTY typing class in high school, first encountered the Dvorak layout in 1938, quickly learned to achieve very high typing speeds, and occasionally toured giving speed-typing demonstrations during her secretarial career. She appeared on Late Night with David Letterman on January 24, 1985, but felt that Letterman made a spectacle of her. The recent emergence of several competitive typing websites has allowed fast typists on computer keyboards to emerge along with new records, though many of these are unverifiable. Some notable, verified records include 255 wpm on a one-minute, random-word test by a user under the username slekap (and occasionally bailey), 213 wpm on a 1-hour, random-word test by Joshua Hu, 221 wpm average on 10 random quotes by Joshua Hu, and first place in the 2020 Ultimate Typing Championship by Anthony Ermollin based on an average of 180.88 wpm on texts of various lengths. These three people are the most commonly cited fastest typists in online typing communities. All of their records were set on the QWERTY keyboard layout. Words per minute: Using a personalized interface, physicist Stephen Hawking, who suffered from amyotrophic lateral sclerosis, managed to type 15 wpm with a switch and adapted software created by Walt Woltosz. Due to a slowdown of his motor skills, his interface was upgraded with an infrared camera that detected "twitches in the cheek muscle under the eye." His typing speed decreased to approximately one word per minute in the later part of his life. Words per minute: Numeric entry: The numeric entry, or 10-key, speed is a measure of one's ability to manipulate a numeric keypad. Generally, it is measured in keystrokes per hour (KPH). Text-entry research: Error analysis: With the introduction of computers and word-processors, there has been a change in how text-entry is performed. In the past, using a typewriter, speed was measured with a stopwatch and errors were tallied by hand. With current technology, document preparation is more about using word-processors as a composition aid, changing the meaning of error rate and how it is measured. Research performed by R. William Soukoreff and I. Scott MacKenzie has led to the application of a well-known algorithm. Through the use of this algorithm and the accompanying analysis technique, two statistics are used: the minimum string distance error rate (MSD error rate) and keystrokes per character (KSPC). The two advantages of this technique are: (1) participants are allowed to enter text naturally, since they may commit errors and correct them; and (2) the identification of errors and generation of error rate statistics is easy to automate. Deconstructing the text input process: Through analysis of keystrokes, the keystrokes of the input stream were divided into four classes: Correct (C), Incorrect Fixed (IF), Fixes (F), and Incorrect Not Fixed (INF).
These keystroke classifications are broken down as follows: the two classes Correct (C) and Incorrect Not Fixed (INF) comprise all of the characters in the transcribed text; Fixes (F) keystrokes are easy to identify, and include keystrokes such as backspace, delete, cursor movements, and modifier keys; Incorrect Fixed (IF) keystrokes are found in the input stream, but not in the transcribed text, and are not editing keys. Using these classes, the minimum string distance error rate and the keystrokes per character statistics can both be calculated. Minimum string distance error rate: The minimum string distance (MSD) is the number of "primitives", that is, the number of insertions, deletions, or substitutions needed to transform one string into another. For presented text P and transcribed text T, the MSD error rate is

MSD Error Rate = (MSD(P, T) / max(|P|, |T|)) × 100%

Key strokes per character (KSPC): With the minimum string distance error, errors that are corrected do not appear in the transcribed text. The following example shows why this is an important class of errors to consider:

Presented text: the quick brown
Input stream: the quix<-ck brown
Transcribed text: the quick brown

In the above example, the incorrect character ('x') was deleted with a backspace ('<-'). Since these errors do not appear in the transcribed text, the MSD error rate is 0%. This is why there is the keystrokes per character (KSPC) statistic:

KSPC = (C + INF + IF + F) / (C + INF)

The three shortcomings of the KSPC statistic are as follows: high KSPC values can be related to either many errors which were corrected, or few errors which were not corrected, and there is no way to distinguish the two; KSPC depends on the text input method, and cannot be used to meaningfully compare two different input methods, such as a QWERTY keyboard and multi-tap input; and there is no obvious way to combine KSPC and MSD into an overall error rate, even though they have an inverse relationship. Further metrics: Using the classes described above, further metrics were defined by R. William Soukoreff and I. Scott MacKenzie. Error correction efficiency refers to the ease with which the participant performed error correction: Correction Efficiency = IF / F. Participant conscientiousness is the ratio of corrected errors to the total number of errors, which helps distinguish perfectionists from apathetic participants: Participant Conscientiousness = IF / (IF + INF). If C represents the amount of useful information transferred, INF, IF, and F represent the proportion of bandwidth wasted: Utilized Bandwidth = C / (C + INF + IF + F); Wasted Bandwidth = (INF + IF + F) / (C + INF + IF + F). Total error rate: The classes described also provide an intuitive definition of total error rate: Total Error Rate = ((INF + IF) / (C + INF + IF)) × 100%; Not Corrected Error Rate = (INF / (C + INF + IF)) × 100%; Corrected Error Rate = (IF / (C + INF + IF)) × 100%. Since these three error rates are ratios, they are comparable between different devices, something that cannot be done with the KSPC statistic, which is device dependent. Tools for text entry research: Currently, two tools are publicly available for text entry researchers to record text entry performance metrics. The first is TEMA, which runs only on the Android operating system. The second is WebTEM, which runs on any device with a modern Web browser, and works with almost all text entry techniques.
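The MSD statistic above is the familiar Levenshtein edit distance between the presented and transcribed strings. A compact Python sketch (ours; `msd_error_rate` follows the max-length normalization given above):

```python
def msd(p: str, t: str) -> int:
    """Minimum string distance: fewest insertions, deletions, or substitutions."""
    d = list(range(len(t) + 1))          # distances from p-prefix of length 0
    for i, pc in enumerate(p, 1):
        prev, d[0] = d[0], i
        for j, tc in enumerate(t, 1):
            prev, d[j] = d[j], min(d[j] + 1,          # deletion
                                   d[j - 1] + 1,      # insertion
                                   prev + (pc != tc)) # substitution
    return d[len(t)]

def msd_error_rate(presented: str, transcribed: str) -> float:
    return 100.0 * msd(presented, transcribed) / max(len(presented), len(transcribed))

# The corrected 'x' never reaches the transcribed text, so the rate is 0%:
print(msd_error_rate("the quick brown", "the quick brown"))  # 0.0
```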
Keystroke dynamics: Keystroke dynamics, or typing dynamics, is the collection of detailed timing information that describes exactly when each key was pressed and when it was released as a person types at a computer keyboard, used for biometric identification in a manner similar to speaker recognition. The data needed to analyze keystroke dynamics is obtained by keystroke logging. The behavioral biometric of keystroke dynamics uses the manner and rhythm in which an individual types characters on a keyboard or keypad.
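As a small illustration of the kind of timing information involved, the sketch below (a minimal, hypothetical feature extractor, not any particular vendor's algorithm) derives the two classic keystroke-dynamics features from logged press and release timestamps: dwell time, how long each key is held, and flight time, the gap between releasing one key and pressing the next.

```python
def keystroke_features(events):
    """events: list of (key, press_time, release_time) tuples in seconds,
    ordered by press_time, as produced by a keystroke logger."""
    # Dwell time: how long each key was held down.
    dwell = [(key, release - press) for key, press, release in events]
    # Flight time: next key's press minus current key's release.
    flight = [
        (events[i][0] + "->" + events[i + 1][0],
         events[i + 1][1] - events[i][2])
        for i in range(len(events) - 1)
    ]
    return dwell, flight

# Typing "cat": both the hold durations and the inter-key gaps form a
# rhythm that is characteristic of the typist.
sample = [("c", 0.000, 0.085), ("a", 0.160, 0.240), ("t", 0.310, 0.395)]
print(keystroke_features(sample))
```

A biometric system would aggregate such measurements over many keystrokes into a profile and compare new typing samples against it.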
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Digital Combat Simulator** Digital Combat Simulator: Digital Combat Simulator, or DCS, is a combat flight simulation game developed primarily by Eagle Dynamics and The Fighter Collection. Digital Combat Simulator: Several labels are used when referring to the DCS line of simulation products: DCS World, Modules, and Campaigns. DCS World is a free-to-play game that includes two free aircraft and two free maps. Modules are downloadable content that expand the game with add-on aircraft, maps, and other content. Campaigns are scripted sets of missions. Modules and campaigns are produced by Eagle Dynamics as well as third parties. Gameplay: DCS World is a study sim in which players learn how to operate aircraft using realistic procedures. Aircraft are meticulously modeled from real-world data, including authentic flight models and subsystems and detailed cockpits with interactive buttons and switches. Digital manuals document the history, systems, and operation of each aircraft in extensive detail. The game has extensive support for joysticks and HOTAS input devices ranging from gamepads to 1:1 replica cockpits. DCS World supports a wide variety of combat operations including combat air patrol, dogfighting, airstrikes, close air support, SEAD, and airlifts. Dozens of military airplanes and helicopters are available, spanning eras from World War II through the Cold War into the early 21st century. Popular modules include the AH-64D, F-16C, F/A-18C, F-14, and A-10C. A mission editor is included for users to create their own scenarios and campaigns, with support for scripting in Lua. Users can host their own servers with user-made missions for PVE and PVP multiplayer. The community has developed tools to create missions using procedural generation and hosts servers that simulate dynamic battlefields. DCS World acts as a unified modular platform, in contrast to previous installments in the series, which were standalone products. This allows users of different modules to switch between aircraft and play together within a single game client. The platform also allows third-party developers to publish modules through Eagle Dynamics' storefront. Community mods have also been produced, such as the A-4E, T-45C and UH-60L. Gameplay: Use as a training aid Some air forces have used DCS World as a training aid. A professional version called Mission Combat Simulator (MCS) is available for organizational use. The United States Air Force's 355th Training Squadron at Davis-Monthan AFB makes use of DCS as an instrument and weapons-system trainer for the A-10C. The use of virtual reality headsets is preferred for a more immersive experience. Before the Mirage 2000C was retired in 2022, the Armée de l'Air used DCS for both instrument and tactical training with the Mirage 2000C module, citing insufficient numbers of professional simulators. Ukrainian pilots have trained using the A-10C II module for DCS World. The training program is a joint military-civilian effort "to prepare a cadre of Ukrainian A-10 pilots for the hoped-for day when the U.S. does supply Ukraine with the planes." Setting: DCS World has a number of maps available from Eagle Dynamics and third parties: Caucasus, the default map for the game, which includes areas of Georgia, Russia, Crimea and the Black Sea; Nevada Test and Training Range, a U.S. Air Force training range and location of the Red Flag exercise, which includes Nellis Air Force Base, Creech Air Force Base, Groom Lake, Las Vegas, McCarran International Airport and Hoover Dam; Persian Gulf, a map centered around the Strait of Hormuz, which includes the United Arab Emirates as well as areas of Oman and Iran; Sinai, a third-party map representing the Sinai Peninsula, eastern Egypt and southern Israel; Syria, a third-party map centered around most of Syria, Cyprus and Lebanon and areas of Turkey, Israel, Jordan and Iraq; Mariana Islands, a free map centered around the Mariana island chain, including Guam, Rota, Tinian, Saipan, and "a score of lesser islands"; South Atlantic, a third-party map including Argentina, Chile, and the Falkland Islands; "The Channel", a map of the southeast of England and northeastern France during World War II; and Normandy 1944, a third-party map centered on the World War II battlefield of Normandy, France. Development: DCS World traces its lineage directly from the Flanker and Lock On: Modern Air Combat series of combat flight simulator games. Three standalone titles were released under the DCS name from 2008 through 2011. The first was DCS: Black Shark, a simulation of the Kamov Ka-50. DCS: A-10C Warthog, a standalone simulation of the A-10C, was released in February 2011. An upgrade for Black Shark, DCS: Black Shark 2, was released in November 2011 and allowed for network multiplayer with Warthog. The open beta of DCS World was launched in May 2012. Warthog and Black Shark 2 were made available as modules. Flaming Cliffs 3 was released later that year, which added aircraft from Lock On as modules of DCS World. The first third-party module, the Bell UH-1H Huey, was also announced in 2012. Development: DCS World 1.5 was released in October 2015, featuring a new DirectX 11 graphics engine and a unified executable. In November 2015, DCS World 2.0 was released as an open alpha while 1.5 continued to be supported as a stable release. 2.0 added support for more detailed terrain, including the Nevada Test and Training Range map. DCS World 2.1 was released in 2017 and added support for deferred shading and physically based rendering, followed by DCS World 2.2 that same year. The next major release, DCS World 2.5, added an improved Caucasus map in 2018. 2.5 replaced 1.5 as the stable release version, coinciding with a Steam release. DCS World 2.7 was released as an open beta in April 2021, with new weather and clouds as well as improved piston engine simulation. 2.7 became the stable release in June of that year. DCS World 2.8 was released as an open beta in October 2022, improving atmospheric effects and AI basic fighter maneuvers. Over the course of development, modules have introduced new features to the simulator, including improved flight models and damage models, multi-crew aircraft with multiple players or AI acting as crew, and enhanced FLIR simulation. Reception: PC Gamer reviewed the DCS: A-10C Warthog module with a rating of 92/100.
IGN praised the care and attention to detail, though remarking on a level of inaccessibility: "Yes, there is a 44-page 'Quickstart' guide and yes, there are tutorials – a bevy of lengthy, highly instructive tutorials, actually – but precious little of this is designed for the neophyte or even the marginally experienced jet jock." SimHQ praised the Ka-50 module, noting the attention to technical details, such as the recoil of the main gun affecting flight dynamics, along with smaller details such as the windscreen wiper having several modes. Also noted was the difficulty of flying the helicopter. The Ka-50 simulation earned SimHQ's Simulation Product of the Year award for 2008. PC Pilot reviewed the third-party F-14 Tomcat module with a score of 97/100. The review concluded that "[DCS: F-14 Tomcat] is truly one of the greatest simulation modules ever created for a PC flight simulator." The complexity and depth of the multi-crew cockpit and systems were described as exceptional. HeliSimmer.com's article on the work-in-progress AH-64D module's early access version praised the 3D modeling and soundscape while noting the incomplete systems and critiquing the flight model's accuracy compared to a real helicopter. Despite these shortcomings, it was said to be "the best representation of an AH-64D since Jane's Longbow 2." DCS World's gameplay has been critiqued, in contrast to its aircraft simulation. FlyAndWire wrote that "DCS is the best 'cockpit simulator' around" but criticized the interaction between the aircraft and the game environment.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Temgicoluril** Temgicoluril: Temgicoluril (INN), also known as tetramethylglycoluril and sold under the brand names Adaptol and Mebicar, is an anxiolytic medication produced by Latvian pharmaceutical company Olainfarm and sold in Latvia and Russia. The chemical structure of temgicoluril is similar to that of natural metabolites of the human body, and it does not react with acids, alkalis, oxidants, or reducing agents. It affects all major neurotransmitter systems. Temgicoluril acts on the structures of the limbic–reticular system, particularly on the emotional zone of the hypothalamus, as well as on several basic neurotransmitter systems: γ-aminobutyric acid (GABA), choline, serotonin, and adrenergic activity. It decreases brain norepinephrine levels and increases brain serotonin levels without modulating dopaminergic or cholinergic systems. Temgicoluril purportedly has anti-anxiety (anxiolytic) properties. It is also used to aid smoking cessation. In addition, temgicoluril may be useful in the treatment of ADHD symptoms. In contrast with typical anxiolytic medications such as benzodiazepines, temgicoluril is non-habit forming, non-sedating, and does not impair motor function. It can be prepared by condensation of N,N′-dimethylurea with glyoxal. One publication reported an elegant procedure for doing this: N,N′-dimethylurea, glyoxal, and a catalytic amount of phosphoric anhydride were combined in an aqueous solution at room temperature, and after sufficient time temgicoluril was conveniently isolated by filtration. The filtrate can be re-used by adding more dimethylurea and glyoxal (no additional catalyst), still obtaining respectable yields, although this requires a longer reaction time. As of 2021, temgicoluril has not been evaluated by the United States medical system. Medical uses: Temgicoluril is used in Latvia and Russia as a pharmaceutical drug to treat anxiety and to prevent or reduce anxiety, unrest, fear, internal emotional tension and irritability, to reduce neuroses and neurotic disorders, and to treat heart pain not caused by coronary heart disease. These effects are not accompanied by relaxation of muscle tone, impaired coordination of movement, or suppression of mental and physical activity, so the drug can be used without interruption of work or school. Medical uses: Temgicoluril does not have a direct effect on sleep; however, it enhances the effectiveness of sleep medicines and normalizes the course of disturbed sleep. Temgicoluril alleviates or eliminates the manifestations of nicotine dependence that occur after smoking cessation. Temgicoluril does not cause mood swings or euphoria, and no habituation, addiction, or withdrawal syndrome has been observed. Side effects: Possible and rare side effects include dizziness, hypotension, indigestion, allergic reactions (itchy skin) after high doses, hypothermia, and fatigue, as well as lowered blood pressure and/or a decrease in body temperature of 1 to 1.5 °C; blood pressure and body temperature return to normal after completion of treatment.
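The condensation described above can be written as a balanced equation. The following is a hedged sketch, assuming the dimethylurea carries one methyl group on each nitrogen (written here as (CH3NH)2CO), which is what is needed to place a methyl group on each of the four ring nitrogens of tetramethylglycoluril:

```latex
% Sketch of the condensation (reconstruction from the description above):
% two equivalents of N,N'-dimethylurea condense with glyoxal, with
% phosphoric anhydride as catalyst, losing two molecules of water.
2\,\mathrm{(CH_3NH)_2CO} + \mathrm{(CHO)_2}
  \xrightarrow{\ \mathrm{P_4O_{10}}\ \text{(cat.)},\ \mathrm{H_2O},\ \text{rt}\ }
  \underbrace{\mathrm{C_8H_{14}N_4O_2}}_{\text{temgicoluril}} + 2\,\mathrm{H_2O}
```

Both sides balance to C8H18N4O4, consistent with the loss of two waters as the two urea units close onto the glyoxal-derived CH–CH bridge of the bicyclic glycoluril core.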
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Keller's conjecture** Keller's conjecture: In geometry, Keller's conjecture is the conjecture that in any tiling of n-dimensional Euclidean space by identical hypercubes, there are two hypercubes that share an entire (n − 1)-dimensional face with each other. For instance, in any tiling of the plane by identical squares, some two squares must share an entire edge. Keller's conjecture: This conjecture was introduced by Ott-Heinrich Keller (1930), after whom it is named. A breakthrough by Lagarias and Shor (1992) showed that it is false in ten or more dimensions, and after subsequent refinements, it is now known to be true in spaces of dimension at most seven and false in all higher dimensions. The proofs of these results use a reformulation of the problem in terms of the clique number of certain graphs now known as Keller graphs. Keller's conjecture: The related Minkowski lattice cube-tiling conjecture states that whenever a tiling of space by identical cubes has the additional property that the cubes' centers form a lattice, some cubes must meet face-to-face. It was proved by György Hajós in 1942. Szabó (1993), Shor (2004), and Zong (2005) give surveys of work on Keller's conjecture and related problems. Statement: A tessellation or tiling of a Euclidean space is, intuitively, a family of subsets that cover the whole space without overlapping. More formally, a family of closed sets, called tiles, forms a tiling if their union is the whole space and every two distinct sets in the family have disjoint interiors. A tiling is said to be monohedral if all of the tiles have the same shape (they are congruent to each other). Keller's conjecture concerns monohedral tilings in which all of the tiles are hypercubes of the same dimension as the space. As Szabó (1986) formulates the problem, a cube tiling is a tiling by congruent hypercubes in which the tiles are additionally required to all be translations of each other without any rotation, or equivalently, to have all of their sides parallel to the coordinate axes of the space. Not every tiling by congruent cubes has this property; for instance, three-dimensional space may be tiled by two-dimensional sheets of cubes that are twisted at arbitrary angles with respect to each other. In formulating the same problem, Shor (2004) instead considers all tilings of space by congruent hypercubes and states, without proof, that the assumption that cubes are axis-parallel can be added without loss of generality. Statement: An n-dimensional hypercube has 2n faces of dimension n − 1 that are, themselves, hypercubes; for instance, a square has four edges, and a three-dimensional cube has six square faces. Two tiles in a cube tiling (defined in either of the above ways) meet face-to-face if there is an (n − 1)-dimensional hypercube that is a face of both of them. Keller's conjecture is the statement that every cube tiling has at least one pair of tiles that meet face-to-face in this way. Statement: The version of the conjecture originally stated by Keller was stronger: every cube tiling has a column of cubes all meeting face-to-face. This version of the problem is true or false for the same dimensions as its more commonly studied formulation. It is a necessary part of the conjecture that the cubes in the tiling all be congruent to each other, for if cubes of unequal sizes are allowed, then the Pythagorean tiling would form a counterexample in two dimensions.
Statement: The conjecture as stated does not require all of the cubes in a tiling to meet face-to-face with other cubes. Although tilings by congruent squares in the plane have the stronger property that every square meets edge-to-edge with another square, some of the tiles in higher-dimensional hypercube tilings may not meet face-to-face with any other tile. For instance, in three dimensions, the tetrastix structure formed by three perpendicular sets of square prisms can be used to construct a cube tiling, combinatorially equivalent to the Weaire–Phelan structure, in which one fourth of the cubes (the ones not part of any prism) are surrounded by twelve other cubes without meeting any of them face-to-face. Group-theoretic reformulation: Keller's conjecture was shown to be true in dimensions at most six by Perron (1940a, 1940b). The disproof of Keller's conjecture, for sufficiently high dimensions, has progressed through a sequence of reductions that transform it from a problem in the geometry of tilings into a problem in group theory and, from there, into a problem in graph theory. Hajós (1949) first reformulated Keller's conjecture in terms of factorizations of abelian groups. He showed that if there is a counterexample to the conjecture, then it can be assumed to be a periodic tiling of cubes with an integer side length and integer vertex positions; thus, in studying the conjecture, it is sufficient to consider tilings of this special form. In this case, the group of integer translations, modulo the translations that preserve the tiling, forms an abelian group, and certain elements of this group correspond to the positions of the tiles. Hajós defines a family of subsets Ai of an abelian group to be a factorization if each element of the group has a unique expression as a sum a0 + a1 + ..., where each ai belongs to Ai. With this definition, Hajós' reformulated conjecture is that whenever an abelian group has a factorization in which the first set A0 may be arbitrary but each subsequent set Ai takes the special form {0, gi, 2gi, 3gi, ..., (|Ai| − 1)gi} for some element gi of Ai, then at least one element |Ai|gi must belong to A0 − A0 (the difference set of A0 with itself). Szabó (1986) showed that any tiling that forms a counterexample to the conjecture can be assumed to have an even more special form: the cubes have side length a power of two and integer vertex coordinates, and the tiling is periodic with period twice the side length of the cubes in each coordinate direction. Based on this geometric simplification, he also simplified Hajós' group-theoretic formulation, showing that it is sufficient to consider abelian groups that are the direct sums of cyclic groups of order four, with each |Ai| = 2. Keller graphs: Corrádi & Szabó (1990) reformulated Szabó's result as a condition about the existence of a large clique in a certain family of graphs, which subsequently became known as the Keller graphs. More precisely, the vertices of the Keller graph of dimension n are the 4^n elements (m1, ..., mn) where each mi is 0, 1, 2, or 3. Two vertices are joined by an edge if they differ in at least two coordinates and differ by exactly two in at least one coordinate. Corrádi and Szabó showed that the maximum clique in this graph has size at most 2^n, and that if there is a clique of this size, then Keller's conjecture is false. Given such a clique, one can form a covering of space by cubes of side two whose centers have coordinates that, when taken modulo four, are vertices of the clique.
The condition that any two vertices of the clique have a coordinate that differs by two implies that cubes corresponding to these vertices do not overlap. The condition that vertices differ in two coordinates implies that these cubes cannot meet face-to-face. The condition that the clique has size 2^n implies that the cubes within any period of the tiling have the same total volume as the period itself. Together with the fact that they do not overlap, this implies that the cubes placed in this way tile space without meeting face-to-face. Lagarias and Shor (1992) disproved Keller's conjecture by finding a clique of size 2^10 in the Keller graph of dimension 10. This clique leads to a non-face-to-face tiling in dimension 10, and copies of it can be stacked (offset by half a unit in each coordinate direction) to produce non-face-to-face tilings in any higher dimension. Similarly, Mackey (2002) found a clique of size 2^8 in the Keller graph of dimension eight, leading in the same way to a non-face-to-face tiling in dimension 8 and (by stacking) in dimension 9. Keller graphs: Subsequently, Debroni et al. (2011) showed that the Keller graph of dimension seven has a maximum clique of size 124. Because this is less than 2^7 = 128, the graph-theoretic version of Keller's conjecture is true in seven dimensions. However, the translation from cube tilings to graph theory can change the dimension of the problem, so this result does not settle the geometric version of the conjecture in seven dimensions. Finally, a 200-gigabyte computer-assisted proof in 2019 used Keller graphs to establish that the conjecture holds true in seven dimensions. Therefore, the question Keller posed can be considered solved: the conjecture is true in seven dimensions or fewer but is false when there are more than seven dimensions. The sizes of the maximum cliques in the Keller graphs of dimensions 2, 3, 4, 5, and 6 are, respectively, 2, 5, 12, 28, and 60. The Keller graphs of dimensions 4, 5, and 6 have been included in the set of "DIMACS challenge graphs" frequently used as a benchmark for clique-finding algorithms. Related problems: As Szabó (1993) describes, Hermann Minkowski was led to a special case of the cube-tiling conjecture from a problem in diophantine approximation. One consequence of Minkowski's theorem is that any lattice (normalized to have determinant one) must contain a nonzero point whose Chebyshev distance to the origin is at most one. The lattices that do not contain a nonzero point with Chebyshev distance strictly less than one are called critical, and the points of a critical lattice form the centers of the cubes in a cube tiling. Minkowski conjectured in 1900 that whenever a cube tiling has its cubes centered at lattice points in this way, it must contain two cubes that meet face-to-face. If this is true, then (because of the symmetries of the lattice) each cube in the tiling must be part of a column of cubes, and the cross-sections of these columns form a cube tiling of one smaller dimension. Reasoning in this way, Minkowski showed that (assuming the truth of his conjecture) every critical lattice has a basis that can be expressed as a triangular matrix, with ones on its main diagonal and numbers less than one away from the diagonal.
György Hajós proved Minkowski's conjecture in 1942 using Hajós's theorem on factorizations of abelian groups, a group-theoretic method similar to the one that he would later apply to Keller's more general conjecture. Keller's conjecture is a variant of Minkowski's conjecture in which the condition that the cube centers form a lattice is relaxed. A second related conjecture, made by Furtwängler in 1936, instead relaxes the condition that the cubes form a tiling. Furtwängler asked whether a system of cubes centered on lattice points forming a k-fold covering of space (that is, all but a measure-zero subset of the points in the space must be interior to exactly k cubes) must necessarily have two cubes meeting face-to-face. Furtwängler's conjecture is true for two- and three-dimensional space, but Hajós found a four-dimensional counterexample in 1938. Robinson (1979) characterized the combinations of k and the dimension n that permit a counterexample. Additionally, combining both Furtwängler's and Keller's conjectures, Robinson showed that k-fold square coverings of the Euclidean plane must include two squares that meet edge-to-edge. However, for every k > 1 and every n > 2, there is a k-fold tiling of n-dimensional space by cubes with no shared faces. Once counterexamples to Keller's conjecture became known, it became natural to ask for the maximum dimension of a shared face that can be guaranteed to exist in a cube tiling. When the dimension n is at most seven, this maximum dimension is just n − 1, by the proofs of Keller's conjecture for those small dimensions, and when n is at least eight, then this maximum dimension is at most n − 2. Lagarias & Shor (1994) showed that it is at most n − √n/3, a stronger bound for ten or more dimensions. Related problems: Iosevich & Pedersen (1998) and Lagarias, Reeds & Wang (2000) found close connections between cube tilings and the spectral theory of square-integrable functions on the cube. Dutour Sikirić, Itoh & Poyarkov (2007) use cliques in the Keller graphs that are maximal but not maximum to study packings of cubes into space that cannot be extended by adding any additional cubes. In 1975, Ludwig Danzer and, independently, Branko Grünbaum and G. C. Shephard found a tiling of three-dimensional space by parallelepipeds with 60° and 120° face angles in which no two parallelepipeds share a face.
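The Keller graph definition above is concrete enough to experiment with directly. The following Python sketch (an illustrative brute force, not the specialized solvers used in the results cited above) builds the dimension-n Keller graph and finds its clique number by plain recursive enumeration, which is only feasible for very small n:

```python
from itertools import product

def keller_graph(n):
    """Adjacency sets of the dimension-n Keller graph: vertices are n-tuples
    over {0, 1, 2, 3}; two vertices are adjacent iff they differ in at least
    two coordinates and differ by exactly two in at least one coordinate."""
    verts = list(product(range(4), repeat=n))
    adj = {v: set() for v in verts}
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            diffs = [abs(a - b) for a, b in zip(u, v)]
            if sum(d != 0 for d in diffs) >= 2 and 2 in diffs:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def clique_number(adj):
    """Exact maximum clique size by naive branch-and-extend enumeration."""
    best = 0
    def extend(size, candidates):
        nonlocal best
        if not candidates:
            best = max(best, size)
        for v in list(candidates):
            extend(size + 1, candidates & adj[v])
            candidates.remove(v)
    extend(0, set(adj))
    return best

# Dimension 2: clique number 2, strictly below 2^2 = 4, so no counterexample
# to Keller's conjecture can be built this way in the plane.
print(clique_number(keller_graph(2)))  # -> 2
```

Dimension 3 (64 vertices, clique number 5 versus 2^3 = 8) is still within reach of this brute force; the dimension-10 clique of size 2^10 found by Lagarias and Shor required far more sophisticated search.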
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Angiostatin** Angiostatin: Angiostatin is a naturally occurring protein found in several animal species, including humans. It is an endogenous angiogenesis inhibitor (i.e., it blocks the growth of new blood vessels). Clinical trials have been undertaken for its use in anticancer therapy. Structure: Angiostatin is a 38 kDa fragment of a larger protein, plasmin (itself a fragment of plasminogen), enclosing three to five contiguous kringle modules. Each module contains two small beta sheets and three disulfide bonds. There are four different structural variants of angiostatin, differing in the combination of kringle domains: K1-3, K1-4, K1-5, and K1-4 with a fragment of K5. Each kringle domain contributes a different element of inhibition to the cytokine. Recent studies using recombinant angiostatin have shown, however, that K1-3 is pivotal to the inhibitory nature of angiostatin. K1-3 forms the "triangular bowl-like structure" of angiostatin. This structure is stabilized by interactions between inter-kringle peptides and kringles, although the kringle domains do not directly interact with each other. Angiostatin is effectively divided into two sides. The active site of K1 is found on one side, while the active sites of K2 and K3 are found on the other. This is hypothesized to result in the two different functions of angiostatin. The K1 side is believed to be primarily responsible for the inhibition of cellular proliferation, while the K2-K3 side is believed to be primarily responsible for the inhibition of cell migration. Generation: Angiostatin is produced, for example, by autoproteolytic cleavage of plasminogen, involving extracellular disulfide bond reduction by phosphoglycerate kinase. Furthermore, angiostatin can be cleaved from plasminogen by different metalloproteinases (MMPs), elastase, prostate-specific antigen (PSA), 13 kDa serine protease, or 24 kDa endopeptidase. Biological activity: Angiostatin is known to bind many proteins, especially angiomotin and endothelial cell surface ATP synthase, but also integrins, annexin II, C-met receptor, NG2 proteoglycan, tissue-type plasminogen activator, chondroitin sulfate proteoglycans, and CD26. Additionally, smaller fragments of angiostatin may bind several other proteins. There is still considerable uncertainty about its mechanism of action, but it seems to involve inhibition of endothelial cell migration and proliferation and induction of apoptosis. It has been proposed that angiostatin activity is related, among other things, to the coupling of its mechanical and redox properties. Although the exact mechanism of action of angiostatin has not been completely understood yet, there are three proposed mechanisms of action. The first is that angiostatin binds to F1-Fo ATP synthase, found both in the mitochondria and on the cellular membrane of endothelial cells, which not only inhibits ATP production in tumor cells but also impairs the cell's ability to regulate intracellular pH within the acidic environment of a tumor. This inability to regulate intracellular pH can initiate apoptosis. The second proposed mechanism is that angiostatin reduces endothelial cell migration by binding to αvβ3 integrins. However, studies have shown that αvβ3 integrins are not critically essential for angiogenesis, so more investigation is required to ascertain how the inhibition of αvβ3 integrins inhibits endothelial cell migration. The third proposed mechanism is that angiostatin binds to angiomotin (AMOT), activating focal adhesion kinase (FAK).
FAK has been shown to promote the inhibition of cell proliferation and cell migration, but gaps in the current understanding of how angiostatin and angiomotin function mean that additional research is required.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Skylake (microarchitecture)** Skylake (microarchitecture): Skylake is Intel's codename for its sixth generation Core microprocessor family that was launched on August 5, 2015, succeeding the Broadwell microarchitecture. Skylake is a microarchitecture redesign using the same 14 nm manufacturing process technology as its predecessor, serving as a tock in Intel's tick–tock manufacturing and design model. According to Intel, the redesign brings greater CPU and GPU performance and reduced power consumption. Skylake CPUs share their microarchitecture with Kaby Lake, Coffee Lake, Cannon Lake, Whiskey Lake, and Comet Lake CPUs. Skylake (microarchitecture): Skylake is the last Intel platform on which Windows earlier than Windows 10 will be officially supported by Microsoft, although enthusiast-created modifications exist that allow Windows 8.1 and earlier to continue to receive Windows Updates on later platforms. Some of the processors based on the Skylake microarchitecture are marketed as 6th-generation Core. Intel officially declared end of life and discontinued Skylake LGA 1151 CPUs on March 4, 2019. Development history: Skylake's development, as with previous processors such as Banias, Dothan, Conroe, Sandy Bridge, and Ivy Bridge, was primarily undertaken by Intel Israel at its engineering research center in Haifa, Israel. The final design was largely an evolution of Haswell, with minor improvements to performance and several power-saving features added. A major priority of Skylake's design was a microarchitecture that could scale to power envelopes as low as 4.5 W, suitable for embedding within tablet computers and notebooks in addition to higher-power desktop computers and servers. In September 2014, Intel announced the Skylake microarchitecture at the Intel Developer Forum in San Francisco, and that volume shipments of Skylake CPUs were scheduled for the second half of 2015. The Skylake development platform was announced to be available in Q1 2015. During the announcement, Intel also demonstrated two computers with desktop and mobile Skylake prototypes: the first was a desktop testbed system, running the latest version of 3DMark, while the second computer was a fully functional laptop, playing 4K video. An initial batch of Skylake CPU models (6600K and 6700K) was announced for immediate availability during Gamescom on August 5, 2015, unusually soon after the release of its predecessor, Broadwell, which had suffered from launch delays. Intel acknowledged in 2014 that moving from 22 nm (Haswell) to 14 nm (Broadwell) had been its most difficult process transition to develop yet, causing Broadwell's planned launch to slip by several months; yet, the 14 nm process was back on track and in full production as of Q3 2014. Industry observers had initially believed that the issues affecting Broadwell would also cause Skylake to slip to 2016, but Intel was able to bring forward Skylake's release and shorten Broadwell's release cycle instead. As a result, the Broadwell architecture had an unusually short run. Overclocking of unsupported processors: Officially, Intel supported overclocking of only the K and X versions of Skylake processors. However, it was later discovered that other non-K chips could be overclocked by modifying the base clock value – a process made feasible by the base clock applying only to the CPU, RAM, and integrated graphics on Skylake.
Through beta UEFI firmware updates, some motherboard vendors, such as ASRock (which prominently promoted the feature under the name Sky OC), allowed the base clock to be modified in this manner. When overclocking unsupported processors using these UEFI firmware updates, several issues arise: C-states are disabled, so the CPU constantly runs at its highest frequency and voltage; Turbo Boost is disabled; integrated graphics are disabled; AVX2 instruction performance is poor, approximately 4-5 times slower, because the upper 128-bit half of the execution units and data buses is not taken out of its power-saving state; and CPU core temperature readings are incorrect. These issues are partly caused by the power management of the processor needing to be disabled for base clock overclocking to work. In February 2016, however, an ASRock firmware update removed the feature. On February 9, 2016, Intel announced that it would no longer allow such overclocking of non-K processors, and that it had issued a CPU microcode update that removes the function. In April 2016, ASRock started selling motherboards that allow overclocking of unsupported CPUs using an external clock generator. Operating system support: In January 2016, Microsoft announced that it would end support of Windows 7 and Windows 8.1 on Skylake processors effective July 17, 2017; after this date, only the most critical updates for the two operating systems would be released for Skylake users, and only if judged not to affect the reliability of the OS on older hardware, and Windows 10 would be the only Microsoft Windows platform officially supported on Skylake, as well as on all future Intel CPU microarchitectures beginning with Skylake's successor, Kaby Lake. Terry Myerson stated that Microsoft had to make a large investment in order to reliably support Skylake on older versions of Windows, and that future generations of processors would require further investments. Microsoft also stated that due to the age of the platform, it would be challenging for newer hardware, firmware, and device driver combinations to properly run under Windows 7. On March 18, 2016, in response to criticism over the move, primarily from enterprise customers, Microsoft announced revisions to the support policy, changing the cutoff for support and non-critical updates to July 17, 2018, and stating that Skylake users would receive all critical security updates for Windows 7 and 8.1 through the end of extended support. In August 2016, citing "a strong partnership with our OEM partners and Intel", Microsoft stated that it would continue to fully support 7 and 8.1 on Skylake through the end of their respective lifecycles. In addition, an enthusiast-created modification was released that disabled the Windows Update check and allowed Windows 8.1 and earlier to continue to be updated on this and later platforms. As of Linux kernel 4.10, Skylake mobile power management is supported, with most package C-states seeing some use. Linux 4.11 enables frame-buffer compression for the integrated graphics chipset by default, which lowers power consumption. Skylake is fully supported on OpenBSD 6.2 and later, including accelerated graphics. For Windows 11, only the high-end Skylake-X processors are officially listed as compatible. All other Skylake processors are not officially supported due to security concerns.
However, it is still possible to manually upgrade using an ISO image (as Windows 10 users on those processors will not be offered the upgrade to Windows 11 via Windows Update), or to perform a clean installation, as long as the system has Trusted Platform Module (TPM) 2.0 enabled; the user must accept that they will not be entitled to receive updates, and that damage caused by using Windows 11 on an unsupported configuration is not covered by the manufacturer's warranty. Features: Like its predecessor, Broadwell, Skylake is available in five variants, identified by the suffixes S (SKL-S), X (SKL-X), H (SKL-H), U (SKL-U), and Y (SKL-Y). SKL-S and SKL-X contain overclockable K and X variants with unlocked multipliers. The H, U and Y variants are manufactured in ball grid array (BGA) packaging, while the S and X variants are manufactured in land grid array (LGA) packaging using a new socket, LGA 1151 (LGA 2066 for Skylake-X). Skylake is used in conjunction with Intel 100 Series chipsets, also known as Sunrise Point. The major changes between the Haswell and Skylake architectures include the removal of the fully integrated voltage regulator (FIVR) introduced with Haswell. On the variants that use a discrete Platform Controller Hub (PCH), Direct Media Interface (DMI) 2.0 is replaced by DMI 3.0, which allows speeds of up to 8 GT/s. Features: Skylake's U and Y variants support one DIMM slot per channel, while H and S variants support two DIMM slots per channel. Skylake's launch and sales lifespan coincided with the ongoing SDRAM market transition, with DDR3 SDRAM memory gradually being replaced by DDR4 memory. Rather than working exclusively with DDR4, the Skylake microarchitecture remains backward compatible by interoperating with both types of memory. Accompanying the microarchitecture's support for both memory standards, a new SO-DIMM type capable of carrying either DDR3 or DDR4 memory chips, called UniDIMM, was also announced. Skylake's few P variants have a reduced on-die graphics unit (12 execution units enabled instead of 24) relative to their direct counterparts. In contrast, with Ivy Bridge CPUs the P suffix was used for CPUs with a completely disabled on-die video chipset. Features: Other enhancements include Thunderbolt 3.0, SATA Express, and Iris Pro graphics with Direct3D feature level 12_1 and up to 128 MB of L4 eDRAM cache on certain SKUs. The Skylake line of processors retires VGA support, while supporting up to five monitors connected via HDMI 1.4, DisplayPort 1.2 or Embedded DisplayPort (eDP) interfaces. HDMI 2.0 (4K@60 Hz) is only supported on motherboards equipped with Intel's Alpine Ridge Thunderbolt controller. The Skylake instruction set changes include Intel MPX (Memory Protection Extensions) and Intel SGX (Software Guard Extensions). Future Xeon variants will also have Advanced Vector Extensions 3.2 (AVX-512F). Skylake-based laptops were predicted to use a wireless charging technology called Rezence, and other wireless technologies for communication with peripherals. Many major PC vendors agreed to use this technology in Skylake-based laptops; however, no laptops were released with the technology as of 2019. The integrated GPU of Skylake's S variant supports, on Windows, the DirectX 12 feature level 12_1, OpenGL 4.6 with the latest Windows 10 driver update (OpenGL 4.5 on Linux), and OpenCL 3.0 standards.
The Quick Sync video engine now includes support for VP9 (GPU-accelerated decode only), VP8 and HEVC (hardware-accelerated 8-bit encode/decode and GPU-accelerated 10-bit decode), and supports resolutions up to 4096 × 2048. Intel also released unlocked (capable of overclocking) mobile Skylake CPUs. Unlike previous generations, Skylake-based Xeon E3 no longer works with a desktop chipset that supports the same socket, and requires either the C232 or the C236 chipset to operate. Known issues: Short loops using a specific combination of instructions may cause unpredictable system behavior on CPUs with hyperthreading. A microcode update was issued to fix the issue. Skylake is vulnerable to Spectre attacks. In fact, it is more vulnerable than other processors because it uses indirect branch speculation not just on indirect branches but also when the return prediction stack underflows. The latency of the spinlock PAUSE instruction has been increased dramatically (from the usual 10 cycles to 141 cycles in Skylake), which can cause performance issues with older programs or libraries using PAUSE instructions. Intel documents the increased latency as a feature that improves power efficiency. Architecture changes compared to Broadwell microarchitecture: CPU: improved front-end; deeper out-of-order buffers; improved execution units; more execution units (a third vector integer ALU (VALU), for five ALUs in total); more load/store bandwidth; improved hyper-threading (wider retirement); speedup of AES-GCM and AES-CBC by 17% and 33%, respectively. Architecture changes compared to Broadwell microarchitecture: Up to four cores as the default mainstream configuration and up to 18 cores for the X-series; AVX-512 (F, CD, VL, BW, and DQ) for some future Xeon variants, but not Xeon E3; Intel MPX (Memory Protection Extensions); Intel SGX (Software Guard Extensions); Intel Speed Shift; larger re-order buffer (224 entries, up from 192); L1 cache size unchanged at 32 KB instruction and 32 KB data cache per core. Architecture changes compared to Broadwell microarchitecture: L2 cache changed from 8-way to 4-way set associative; voltage regulator module (FIVR) moved back to the motherboard; enhancements of Intel Processor Trace (fine-grained timing through CYC packets in cycle-accurate mode, and support for IP (Instruction Pointer) address filtering); 64 to 128 MB of L4 eDRAM cache on certain SKUs. GPU: Skylake's integrated Gen9 GPU supports Direct3D 12 at feature level 12_1; full fixed-function HEVC Main/8-bit encoding/decoding acceleration; hybrid/partial HEVC Main10/10-bit decoding acceleration; JPEG encoding acceleration for resolutions up to 16,000×16,000 pixels; partial VP9 encoding/decoding acceleration. Architecture changes compared to Broadwell microarchitecture: I/O: LGA 1151 socket for mainstream desktop processors and LGA 2066 socket for enthusiast gaming/workstation X-series processors; 100-series chipset (Sunrise Point), with the X-series using the X299-series chipset; DMI 3.0 (from DMI 2.0); support for both DDR3L SDRAM and DDR4 SDRAM in mainstream variants, using the custom UniDIMM SO-DIMM form factor, with up to 64 GB of RAM on LGA 1151 variants. Usual DDR3 memory is also supported by certain motherboard vendors even though Intel does not officially support it.
Architecture changes compared to Broadwell microarchitecture: support for 16 PCI Express 3.0 lanes from the CPU, 20 PCI Express 3.0 lanes from the PCH (LGA 1151), and 44 PCI Express 3.0 lanes for Skylake-X; support for Thunderbolt 3 (Alpine Ridge). Other: thermal design power (TDP) up to 95 W (LGA 1151) and up to 165 W (LGA 2066); 14 nm manufacturing process. Configurations: Skylake processors are produced in five main families: Y, U, H, S, and X. Multiple configurations are available within each family. List of Skylake processor models: Mainstream desktop processors Common features of the mainstream desktop Skylake CPUs: DMI 3.0 and PCIe 3.0 interfaces; dual-channel memory support in the following configurations: DDR3L-1600 1.35 V (32 GB maximum) or DDR4-2133 1.2 V (64 GB maximum), with DDR3 unofficially supported through some motherboard vendors; 16 PCIe 3.0 lanes; support for the AVX2 instruction set on Core-branded processors, while the Celeron- and Pentium-branded ones support only SSE4.1/4.2; 350 MHz base graphics clock rate. High-end desktop processors (Skylake-X) Common features of the high-performance Skylake-X CPUs: in addition to the AVX2 instruction set, they also support the AVX-512 instructions; no built-in iGPU (integrated graphics processor); Turbo Boost Max Technology 3.0 for workloads of up to 2/4 threads on CPUs that have 8 cores or more (7820X, 7900X, 7920X, 7940X, 7960X, 7980XE, and all 9th generation chips); a different cache hierarchy (compared to client Skylake CPUs or previous architectures). Xeon high-end desktop processors (Skylake-X): branded Xeon instead of Core; use the C621 chipset; the Xeon W-3175X was the only Xeon with a multiplier unlocked for overclocking until the introduction of Sapphire Rapids-WS Xeon CPUs in 2023. List of Skylake processor models: Mobile processors See also Server, Mobile below for mobile workstation processors. Workstation processors All models support MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, AVX2, AVX-512, FMA3, MPX, Enhanced Intel SpeedStep Technology (EIST), Intel 64, XD bit (an NX bit implementation), Intel VT-x, Intel VT-d, Turbo Boost (excluding W-2102 and W-2104), Hyper-threading (excluding W-2102 and W-2104), AES-NI, Intel TSX-NI, and Smart Cache; they provide 48 PCI Express lanes and support up to 8 DIMMs of DDR4 memory, maximum 512 GB. Server processors E3-series server chips all have a 9 GT/s system bus and a maximum memory bandwidth of 34.1 GB/s with dual-channel memory. Unlike their predecessors, the Skylake Xeon CPUs require a C230 series (C232/C236) or C240 series (C242/C246) chipset to operate, with integrated graphics working only with the C236 and C246 chipsets. Mobile counterparts use the CM230 and CM240 series chipsets. Skylake-SP (14 nm) Scalable Performance: Xeon Platinum supports up to 8 sockets; Xeon Gold supports up to 4 sockets; Xeon Silver and Bronze support up to 2 sockets. Suffixes: −M, 1536 GB RAM per socket instead of 768 GB RAM for non−M SKUs; −F, integrated OmniPath fabric; −T, high thermal-case and extended reliability. Support for up to 12 DIMMs of DDR4 memory per CPU socket. Xeon Platinum, Gold 61XX, and Gold 5122 have two AVX-512 FMA units per core; Xeon Gold 51XX (except 5122), Silver, and Bronze have a single AVX-512 FMA unit per core. Xeon Bronze and Silver (dual processor): Xeon Bronze 31XX has no HT or Turbo Boost support; Xeon Bronze 31XX supports DDR4-2133 RAM; Xeon Silver 41XX supports DDR4-2400 RAM; Xeon Bronze 31XX and Xeon Silver 41XX support two UPI links at 9.6 GT/s. Xeon Gold (quad processor): Xeon Gold 51XX and F SKUs have two UPIs at 10.4 GT/s.
Xeon Gold 61XX has three UPIs at 10.4 GT/s. Xeon Gold 51XX supports DDR4-2400 RAM (except the 5122). Xeon Gold 5122 and 61XX support DDR4-2666 RAM. Xeon Platinum (octal processor): Xeon Platinum non-F SKUs have three UPIs at 10.4 GT/s; Xeon Platinum F SKUs have two UPIs at 10.4 GT/s. Xeon Platinum supports DDR4-2666 RAM.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Restricted power series** Restricted power series: In algebra, the ring of restricted power series is the subring of a formal power series ring that consists of power series whose coefficients approach zero as the degree goes to infinity. Over a non-archimedean complete field, the ring is also called a Tate algebra. Quotient rings of the ring are used in the study of a formal algebraic space as well as rigid analysis, the latter over non-archimedean complete fields. Restricted power series: Over a discrete topological ring, the ring of restricted power series coincides with a polynomial ring; thus, in this sense, the notion of "restricted power series" is a generalization of a polynomial. Definition: Let $A$ be a linearly topologized ring, separated and complete, and $\{I_\lambda\}$ the fundamental system of open ideals. Then the ring of restricted power series is defined as the projective limit of the polynomial rings over $A/I_\lambda$: $A\langle x_1, \dots, x_n \rangle = \varprojlim_\lambda (A/I_\lambda)[x_1, \dots, x_n]$. In other words, it is the completion of the polynomial ring $A[x_1, \dots, x_n]$ with respect to the filtration $\{I_\lambda[x_1, \dots, x_n]\}$. Sometimes this ring of restricted power series is also denoted by $A\{x_1, \dots, x_n\}$. Clearly, the ring $A\langle x_1, \dots, x_n \rangle$ can be identified with the subring of the formal power series ring $A[[x_1, \dots, x_n]]$ that consists of series $\sum c_\alpha x^\alpha$ with coefficients $c_\alpha \to 0$; i.e., each $I_\lambda$ contains all but finitely many coefficients $c_\alpha$. Also, the ring satisfies (and in fact is characterized by) the universal property: for (1) each continuous ring homomorphism $A \to B$ to a linearly topologized ring $B$, separated and complete, and (2) each choice of elements $b_1, \dots, b_n$ in $B$, there exists a unique continuous ring homomorphism $A\langle x_1, \dots, x_n \rangle \to B$, $x_i \mapsto b_i$, extending $A \to B$. Tate algebra: In rigid analysis, when the base ring $A$ is the valuation ring of a complete non-archimedean field $(k, |\cdot|)$, the ring of restricted power series tensored with $k$, $T_n = k\langle \xi_1, \dots, \xi_n \rangle = A\langle \xi_1, \dots, \xi_n \rangle \otimes_A k$, is called a Tate algebra, named for John Tate. It is equivalently the subring of the formal power series ring $k[[\xi_1, \dots, \xi_n]]$ that consists of series convergent on $\mathfrak{o}_{\bar{k}}^n$, where $\mathfrak{o}_{\bar{k}} := \{ x \in \bar{k} : |x| \le 1 \}$ is the valuation ring in the algebraic closure $\bar{k}$. The maximal spectrum of $T_n$ is then a rigid-analytic space that models an affine space in rigid geometry. Tate algebra: Define the Gauss norm of $f = \sum a_\alpha \xi^\alpha$ in $T_n$ by $\|f\| = \max_\alpha |a_\alpha|$. This makes $T_n$ a Banach algebra over $k$; i.e., a normed algebra that is complete as a metric space. With this norm, any ideal $I$ of $T_n$ is closed and thus, if $I$ is radical, the quotient $T_n/I$ is also a (reduced) Banach algebra called an affinoid algebra. Some key results are: (Weierstrass division) Let $g \in T_n$ be a $\xi_n$-distinguished series of order $s$; i.e., $g = \sum_{\nu=0}^{\infty} g_\nu \xi_n^\nu$ where $g_\nu \in T_{n-1}$, $g_s$ is a unit element and $|g_s| = \|g\| > |g_\nu|$ for $\nu > s$. Then for each $f \in T_n$, there exist a unique $q \in T_n$ and a unique polynomial $r \in T_{n-1}[\xi_n]$ of degree $< s$ such that $f = qg + r$. Tate algebra: (Weierstrass preparation) As above, let $g$ be a $\xi_n$-distinguished series of order $s$. Then there exist a unique monic polynomial $f \in T_{n-1}[\xi_n]$ of degree $s$ and a unit element $u \in T_n$ such that $g = fu$. (Noether normalization) If $\mathfrak{a} \subset T_n$ is an ideal, then there is a finite homomorphism $T_d \hookrightarrow T_n/\mathfrak{a}$. As consequences of the division and preparation theorems and Noether normalization, $T_n$ is a Noetherian unique factorization domain of Krull dimension $n$. An analog of Hilbert's Nullstellensatz is valid: the radical of an ideal is the intersection of all maximal ideals containing the ideal (we say the ring is Jacobson).
Results: Results for polynomial rings such as Hensel's lemma and division algorithms (or the theory of Gröbner bases) are also true for the ring of restricted power series. Throughout the section, let $A$ denote a linearly topologized ring, separated and complete. (Hensel) Let $\mathfrak{m} \subset A$ be a maximal ideal and $\varphi : A \to k := A/\mathfrak{m}$ the quotient map. Given an $F$ in $A\langle \xi \rangle$, if $\varphi(F) = gh$ for some monic polynomial $g \in k[\xi]$ and a restricted power series $h \in k\langle \xi \rangle$ such that $g, h$ generate the unit ideal of $k\langle \xi \rangle$, then there exist $G$ in $A[\xi]$ and $H$ in $A\langle \xi \rangle$ such that $F = GH$, $\varphi(G) = g$, $\varphi(H) = h$.
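As a concrete illustration of the coefficient condition in the definition above, consider the $p$-adic integers $A = \mathbb{Z}_p$, a standard example of a linearly topologized, separated, complete ring, with fundamental system of open ideals $I_k = p^k \mathbb{Z}_p$:

```latex
% Restricted vs. unrestricted power series over Z_p (illustrative example):
\sum_{n=0}^{\infty} p^{n} x^{n} \in \mathbb{Z}_p\langle x \rangle
  \quad (\text{coefficients } p^{n} \to 0 \ p\text{-adically}),
\qquad
\sum_{n=0}^{\infty} x^{n} \notin \mathbb{Z}_p\langle x \rangle
  \quad (\text{the constant coefficients } 1 \not\to 0).
```

After tensoring with $k = \mathbb{Q}_p$, the first series becomes an element of the Tate algebra $T_1$, matching the description of $T_n$ as the series convergent on the closed unit polydisk.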
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**I²C** I²C: I²C (Inter-Integrated Circuit; pronounced as "eye-squared-C"), alternatively known as I2C or IIC, is a synchronous, multi-master/multi-slave (controller/target), packet switched, single-ended, serial communication bus invented in 1982 by Philips Semiconductors. It is widely used for attaching lower-speed peripheral ICs to processors and microcontrollers in short-distance, intra-board communication. Several competitors, such as Siemens, NEC, Texas Instruments, STMicroelectronics, Motorola, Nordic Semiconductor and Intersil, have introduced compatible I2C products to the market since the mid-1990s. System Management Bus (SMBus), defined by Intel in 1995, is a subset of I2C, defining a stricter usage. One purpose of SMBus is to promote robustness and interoperability. Accordingly, modern I2C systems incorporate some policies and rules from SMBus, sometimes supporting both I2C and SMBus, requiring only minimal reconfiguration, either by commanding or by output pin use. Applications: I2C is appropriate for peripherals where simplicity and low manufacturing cost are more important than speed. Common applications of the I2C bus are: Describing connectable devices via small ROM configuration tables to enable plug and play operation, such as in serial presence detect (SPD) EEPROMs on dual in-line memory modules (DIMMs), and Extended Display Identification Data (EDID) for monitors via VGA, DVI and HDMI connectors. Applications: System management for PC systems via SMBus; SMBus pins are allocated in both conventional PCI and PCI Express connectors. Accessing real-time clocks and NVRAM chips that keep user settings. Accessing low-speed DACs and ADCs. Changing backlight, contrast, hue, color balance and other settings in monitors (via Display Data Channel). Changing sound volume in intelligent speakers. Controlling small (e.g. feature phone) LCD or OLED displays. Reading hardware monitors and diagnostic sensors, e.g. a fan's speed. Turning on and off the power supply of system components. A particular strength of I2C is the capability of a microcontroller to control a network of device chips with just two general-purpose I/O pins and software. Many other bus technologies used in similar applications, such as Serial Peripheral Interface Bus (SPI), require more pins and signals to connect multiple devices. Design: I2C uses only two bidirectional open-collector or open-drain lines: serial data line (SDA) and serial clock line (SCL), pulled up with resistors. Typical voltages used are +5 V or +3.3 V, although systems with other voltages are permitted. Design: The I2C reference design has a 7-bit address space, with a rarely used 10-bit extension. Common I2C bus speeds are the 100 kbit/s standard mode and the 400 kbit/s fast mode. There is also a 10 kbit/s low-speed mode, but arbitrarily low clock frequencies are also allowed. Later revisions of I2C can host more nodes and run at faster speeds (400 kbit/s fast mode, 1 Mbit/s fast mode plus, 3.4 Mbit/s high-speed mode, and 5 Mbit/s ultra-fast mode). These speeds are more widely used on embedded systems than on PCs. Design: Note that the bit rates are quoted for the transfers between controller (master) and target (slave) without clock stretching or other hardware overhead. Protocol overheads include a target address and perhaps a register address within the target device, as well as per-byte ACK/NACK bits. Thus the actual transfer rate of user data is lower than those peak bit rates alone would imply.
For example, if each interaction with a target inefficiently allows only 1 byte of data to be transferred, the data rate will be less than half the peak bit rate. Design: The number of nodes which can exist on a given I2C bus is limited by the address space and also by the total bus capacitance of 400 pF, which restricts practical communication distances to a few meters. The relatively high impedance and low noise immunity require a common ground potential, which again restricts practical use to communication within the same PC board or small system of boards. Design: Reference design The aforementioned reference design is a bus with clock (SCL) and data (SDA) lines and 7-bit addressing. The bus has two roles for nodes, either controller (master) or target (slave): Controller (master) node: node that generates the clock and initiates communication with targets (slaves). Target (slave) node: node that receives the clock and responds when addressed by the controller (master). The bus is a multi-controller bus, which means that any number of controller nodes can be present. Additionally, controller and target roles may be changed between messages (after a STOP is sent). There may be four potential modes of operation for a given bus device, although most devices only use a single role and its two modes: Controller (master) transmit: controller node is sending data to a target (slave). Controller (master) receive: controller node is receiving data from a target (slave). Target (slave) transmit: target node is sending data to the controller (master). Design: Target (slave) receive: target node is receiving data from the controller (master). In addition to 0 and 1 data bits, the I2C bus allows special START and STOP signals which act as message delimiters and are distinct from the data bits. (This is in contrast to the start bits and stop bits used in asynchronous serial communication, which are distinguished from data bits only by their timing.) The controller is initially in controller transmit mode: it sends a START followed by the 7-bit address of the target it wishes to communicate with, followed finally by a single bit representing whether it wishes to write (0) to or read (1) from the target. Design: If the target exists on the bus then it will respond with an ACK bit (active low for acknowledged) for that address. The controller then continues in either transmit or receive mode (according to the read/write bit it sent), and the target continues in the complementary mode (receive or transmit, respectively). The address and the data bytes are sent most significant bit first. The start condition is indicated by a high-to-low transition of SDA with SCL high; the stop condition is indicated by a low-to-high transition of SDA with SCL high. All other transitions of SDA take place with SCL low. Design: If the controller wishes to write to the target, then it repeatedly sends a byte with the target sending an ACK bit. (In this situation, the controller is in controller transmit mode, and the target is in target receive mode.) If the controller wishes to read from the target, then it repeatedly receives a byte from the target, the controller sending an ACK bit after every byte except the last one. (In this situation, the controller is in controller receive mode, and the target is in target transmit mode.) An I2C transaction may consist of multiple messages.
The controller terminates a message with a STOP condition if this is the end of the transaction or it may send another START condition to retain control of the bus for another message (a "combined format" transaction). Design: Message protocols I2C defines basic types of transactions, each of which begins with a START and ends with a STOP: Single message where a controller (master) writes data to a target (slave). Single message where a controller (master) reads data from a target (slave). Design: Combined format, where a controller (master) issues at least two reads or writes to one or more targets (slaves).In a combined transaction, each read or write begins with a START and the target address. The START conditions after the first are also called repeated START bits. Repeated STARTs are not preceded by STOP conditions, which is how targets know that the next message is part of the same transaction. Design: Any given target will only respond to certain messages, as specified in its product documentation. Design: Pure I2C systems support arbitrary message structures. SMBus is restricted to nine of those structures, such as read word N and write word N, involving a single target. PMBus extends SMBus with a Group protocol, allowing multiple such SMBus transactions to be sent in one combined message. The terminating STOP indicates when those grouped actions should take effect. For example, one PMBus operation might reconfigure three power supplies (using three different I2C target addresses), and their new configurations would take effect at the same time: when they receive that STOP. Design: With only a few exceptions, neither I2C nor SMBus define message semantics, such as the meaning of data bytes in messages. Message semantics are otherwise product-specific. Those exceptions include messages addressed to the I2C general call address (0x00) or to the SMBus Alert Response Address; and messages involved in the SMBus Address Resolution Protocol (ARP) for dynamic address allocation and management. Design: In practice, most targets adopt request-response control models, where one or more bytes following a write command are treated as a command or address. Those bytes determine how subsequent written bytes are treated or how the target responds on subsequent reads. Most SMBus operations involve single-byte commands. Design: Messaging example: 24C32 EEPROM One specific example is the 24C32 type EEPROM, which uses two request bytes that are called Address High and Address Low. (Accordingly, these EEPROMs are not usable by pure SMBus hosts, which support only single-byte commands or addresses.) These bytes are used for addressing bytes within the 32 kbit (or 4 kB) EEPROM address space. The same two-byte addressing is also used by larger EEPROMs, like the 24C512 which stores 512 kbits (or 64 kB). Writing data to and reading from these EEPROMs uses a simple protocol: the address is written, and then data is transferred until the end of the message. The data transfer part of the protocol can cause trouble on the SMBus, since the data bytes are not preceded by a count, and more than 32 bytes can be transferred at once. I2C EEPROMs smaller than 32 kbit, like the 2 kbit 24C02, are often used on the SMBus with inefficient single-byte data transfers to overcome this problem. Design: A single message writes to the EEPROM. 
After the START, the controller sends the chip's bus address with the direction bit clear (write), then sends the two-byte address of data within the EEPROM and then sends data bytes to be written starting at that address, followed by a STOP. When writing multiple bytes, all the bytes must be in the same 32-byte page. While it is busy saving those bytes to memory, the EEPROM will not respond to further I2C requests. (That is another incompatibility with SMBus: SMBus devices must always respond to their bus addresses.) To read starting at a particular address in the EEPROM, a combined message is used. After a START, the controller first writes that chip's bus address with the direction bit clear (write) and then the two bytes of EEPROM data address. It then sends a (repeated) START and the EEPROM's bus address with the direction bit set (read). The EEPROM will then respond with the data bytes beginning at the specified EEPROM data address — a combined message: first a write, then a read. The controller issues an ACK after each read byte except the last byte, and then issues a STOP. The EEPROM increments the address after each data byte transferred; multi-byte reads can retrieve the entire contents of the EEPROM using one combined message.
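As a concrete illustration of the combined write-then-read transaction just described, here is a minimal C sketch for a hypothetical 24C32-style EEPROM at 7-bit bus address 0x50. The byte-level primitives (i2c_start(), i2c_repeated_start(), i2c_stop(), i2c_write_byte(), i2c_read_byte()) are assumed to be supplied by some bus driver; the names are illustrative, not a standard API.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed byte-level bus-driver primitives (illustrative names only). */
extern void i2c_start(void);
extern void i2c_repeated_start(void);
extern void i2c_stop(void);
extern bool i2c_write_byte(uint8_t byte);   /* returns true if target ACKed   */
extern uint8_t i2c_read_byte(bool ack);     /* ack = false on the last read   */

#define EEPROM_ADDR 0x50u   /* hypothetical 7-bit bus address */

/* Read len bytes starting at the 16-bit EEPROM data address mem_addr. */
bool eeprom_read(uint16_t mem_addr, uint8_t *buf, size_t len)
{
    i2c_start();
    /* Address phase: 7-bit address shifted left, direction bit 0 = write. */
    if (!i2c_write_byte((uint8_t)(EEPROM_ADDR << 1)))  { i2c_stop(); return false; }
    /* Two request bytes: Address High, then Address Low. */
    if (!i2c_write_byte((uint8_t)(mem_addr >> 8)))     { i2c_stop(); return false; }
    if (!i2c_write_byte((uint8_t)(mem_addr & 0xFF)))   { i2c_stop(); return false; }

    /* Repeated START keeps the bus: same transaction, now with read bit 1. */
    i2c_repeated_start();
    if (!i2c_write_byte((uint8_t)((EEPROM_ADDR << 1) | 1))) { i2c_stop(); return false; }

    /* ACK every byte except the last; the EEPROM auto-increments its address. */
    for (size_t i = 0; i < len; ++i)
        buf[i] = i2c_read_byte(i + 1 < len);

    i2c_stop();
    return true;
}
```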
Design: Physical layer At the physical layer, both SCL and SDA lines are an open-drain (MOSFET) or open-collector (BJT) bus design, thus a pull-up resistor is needed for each line. A logic "0" is output by pulling the line to ground, and a logic "1" is output by letting the line float (output high impedance) so that the pull-up resistor pulls it high. A line is never actively driven high. This wiring allows multiple nodes to connect to the bus without short circuits from signal contention. High-speed systems (and some others) may use a current source instead of a resistor to pull up only SCL or both SCL and SDA, to accommodate higher bus capacitance and enable faster rise times. Design: An important consequence of this is that multiple nodes may be driving the lines simultaneously. If any node is driving the line low, it will be low. Nodes that are trying to transmit a logical one (i.e. letting the line float high) can detect this and conclude that another node is active at the same time. When used on SCL, this is called clock stretching and is a flow-control mechanism for targets. When used on SDA, this is called arbitration and ensures that there is only one transmitter at a time. When idle, both lines are high. To start a transaction, SDA is pulled low while SCL remains high. It is illegal: 14  to transmit a stop marker by releasing SDA to float high again (although such a "void message" is usually harmless), so the next step is to pull SCL low. Except for the start and stop signals, the SDA line only changes while the clock is low; transmitting a data bit consists of pulsing the clock line high while holding the data line steady at the desired level. Design: While SCL is low, the transmitter (initially the controller) sets SDA to the desired value and (after a small delay to let the value propagate) lets SCL float high. The controller then waits for SCL to actually go high; this will be delayed by the finite rise time of the SCL signal (the RC time constant of the pull-up resistor and the parasitic capacitance of the bus) and may be additionally delayed by a target's clock stretching. Design: Once SCL is high, the controller waits a minimum time (4 μs for standard-speed I2C) to ensure that the receiver has seen the bit, then pulls it low again. This completes transmission of one bit. Design: After every 8 data bits in one direction, an "acknowledge" bit is transmitted in the other direction. The transmitter and receiver switch roles for one bit, and the original receiver transmits a single "0" bit (ACK) back. If the transmitter sees a "1" bit (NACK) instead, it learns that: (If controller transmitting to target) The target is unable to accept the data: no such target, command not understood, or unable to accept any more data. Design: (If target transmitting to controller) The controller wishes the transfer to stop after this data byte. Only the SDA line changes direction during acknowledge bits; the SCL is always controlled by the controller. After the acknowledge bit, the clock line is low and the controller may do one of three things: Begin transferring another byte of data: the transmitter sets SDA, and the controller pulses SCL high. Send a "Stop": Set SDA low, let SCL go high, then let SDA go high. This releases the I2C bus. Send a "Repeated start": Set SDA high, let SCL go high, then pull SDA low again. This starts a new I2C bus message without releasing the bus. Design: Clock stretching using SCL One of the more significant features of the I2C protocol is clock stretching. An addressed target device may hold the clock line (SCL) low after receiving (or sending) a byte, indicating that it is not yet ready to process more data. The controller that is communicating with the target cannot finish the transmission of the current bit, but must wait until the clock line actually goes high. If the target is clock-stretching, the clock line will still be low (because the connections are open-drain). The same is true if a second, slower, controller tries to drive the clock at the same time. (If there is more than one controller, all but one of them will normally lose arbitration.) The controller must wait until it observes the clock line going high, and an additional minimal time (4 μs for standard 100 kbit/s I2C) before pulling the clock low again. Design: Although the controller may also hold the SCL line low for as long as it desires (a practice disallowed since Rev. 6 of the protocol – subsection 3.1.1), the term "clock stretching" is normally used only when targets do it. Although in theory any clock pulse may be stretched, generally it is the intervals before or after the acknowledgment bit which are used. For example, if the target is a microcontroller, its I2C interface could stretch the clock after each byte, until the software decides whether to send a positive acknowledgment or a NACK. Design: Clock stretching is the only time in I2C where the target drives SCL. Many targets do not need to clock stretch and thus treat SCL as strictly an input with no circuitry to drive it. Some controllers, such as those found inside custom ASICs, may not support clock stretching; often these devices will be labeled as a "two-wire interface" and not I2C. Design: To ensure a minimal bus throughput, SMBus places limits on how far clocks may be stretched. Hosts and targets adhering to those limits cannot block access to the bus for more than a short time, which is not a guarantee made by pure I2C systems. Design: Arbitration using SDA Every controller monitors the bus for start and stop bits and does not start a message while another controller is keeping the bus busy. However, two controllers may start transmission at about the same time; in this case, arbitration occurs.
Target transmit mode can also be arbitrated, when a controller addresses multiple targets, but this is less common. In contrast to protocols (such as Ethernet) that use random back-off delays before issuing a retry, I2C has a deterministic arbitration policy. Each transmitter checks the level of the data line (SDA) and compares it with the level it expects; if they do not match, that transmitter has lost arbitration and drops out of this protocol interaction. Design: If one transmitter sets SDA to 1 (not driving a signal) and a second transmitter sets it to 0 (pulls it to ground), the result is that the line is low. The first transmitter then observes that the level of the line is different from that expected and concludes that another node is transmitting. The first node to notice such a difference is the one that loses arbitration: it stops driving SDA. If it is a controller, it also stops driving SCL and waits for a STOP; then it may try to reissue its entire message. In the meantime, the other node has not noticed any difference between the expected and actual levels on SDA and therefore continues transmission. It can do so without problems because so far the signal has been exactly as it expected; no other transmitter has disturbed its message. Design: If the two controllers are sending a message to two different targets, the one sending the lower target address always "wins" arbitration in the address stage. Since the two controllers may send messages to the same target address, and addresses sometimes refer to multiple targets, arbitration must sometimes continue into the data stages. Arbitration occurs very rarely, but is necessary for proper multi-controller support. As with clock stretching, not all devices support arbitration. Those that do generally label themselves as supporting "multi-controller" communication. One case which must be handled carefully in multi-controller I2C implementations is that of the controllers talking to each other. One controller may lose arbitration to an incoming message, and must change its role from controller to target in time to acknowledge its own address. Design: In the extremely rare case that two controllers simultaneously send identical messages, both will regard the communication as successful, but the target will only see one message. For this reason, when a target can be accessed by multiple controllers, every command recognized by the target either must be idempotent or must be guaranteed never to be issued by two controllers at the same time. (For example, a command which is issued by only one controller need not be idempotent, nor is it necessary for a specific command to be idempotent when some mutual exclusion mechanism ensures that only one controller can issue that command at any given time.) Arbitration in SMBus While I2C only arbitrates between controllers, SMBus uses arbitration in three additional contexts, where multiple targets respond to the controller and one gets its message through. Design: Although conceptually a single-controller bus, a target device that supports the "host notify protocol" acts as a controller to perform the notification. It seizes the bus and writes a 3-byte message to the reserved "SMBus Host" address (0x08), passing its address and two bytes of data. When two targets try to notify the host at the same time, one of them will lose arbitration and need to retry. Design: An alternative target notification system uses the separate SMBALERT# signal to request attention.
In this case, the host performs a 1-byte read from the reserved "SMBus Alert Response Address" (0x0C), which is a kind of broadcast address. All alerting targets respond with a data byte containing their own address. When the target successfully transmits its own address (winning arbitration against the others), it stops raising that interrupt. In both this and the preceding case, arbitration ensures that one target's message will be received, and the others will know they must retry. Design: SMBus also supports an "address resolution protocol", wherein devices return a 16-byte "Unique Device Identifier" (UDID). Multiple devices may respond; the one with the lowest UDID will win arbitration and be recognized. Design: Arbitration in PMBus PMBus version 1.3 extends the SMBus alert response protocol in its "zone read" protocol. Targets may be grouped into "zones", and all targets in a zone may be addressed to respond, with their responses masked (omitting unwanted information), inverted (so wanted information is sent as 0 bits, which win arbitration), or reordered (so the most significant information is sent first). Arbitration ensures that the highest-priority response is the one first returned to the controller. Design: PMBus reserves I2C addresses 0x28 and 0x37 for zone reads and writes, respectively. Design: Differences between modes There are several possible operating modes for I2C communication. All are compatible in that the 100 kbit/s standard mode may always be used, but combining devices of different capabilities on the same bus can cause issues, as follows: Fast mode is highly compatible and simply tightens several of the timing parameters to achieve 400 kbit/s speed. Fast mode is widely supported by I2C target devices, so a controller may use it as long as it knows that the bus capacitance and pull-up strength allow it. Design: Fast mode plus achieves up to 1 Mbit/s using more powerful (20 mA) drivers and pull-ups to achieve faster rise and fall times. Compatibility with standard and fast mode devices (with 3 mA pull-down capability) can be achieved if there is some way to reduce the strength of the pull-ups when talking to them. Design: High speed mode (3.4 Mbit/s) is compatible with normal I2C devices on the same bus, but requires that the controller have an active pull-up on the clock line which is enabled during high speed transfers. The first data bit is transferred with a normal open-drain rising clock edge, which may be stretched. For the remaining seven data bits, and the ACK, the controller drives the clock high at the appropriate time and the target may not stretch it. All high-speed transfers are preceded by a single-byte "controller code" at fast or standard speed. This code serves three purposes: it tells high-speed target devices to change to high-speed timing rules, it ensures that fast or normal speed devices will not try to participate in the transfer (because it does not match their address), and because it identifies the controller (there are eight controller codes, and each controller must use a different one), it ensures that arbitration is complete before the high-speed portion of the transfer, and so the high-speed portion need not make allowances for that ability. Design: Ultra-Fast mode is essentially a write-only I2C subset, which is incompatible with other modes except in that it is easy to add support for it to an existing I2C interface hardware design.
Only one controller is permitted, and it actively drives data lines at all times to achieve a 5 Mbit/s transfer rate. Clock stretching, arbitration, read transfers, and acknowledgements are all omitted. It is mainly intended for animated LED displays where a transmission error would only cause an inconsequential brief visual glitch. The resemblance to other I2C bus modes is limited to: the start and stop conditions are used to delimit transfers, I2C addressing allows multiple target devices to share the bus without SPI bus style target select signals, and a ninth clock pulse is sent per byte transmitted marking the position of the unused acknowledgement bits. Some vendors provide a so-called non-standard Turbo mode with a speed up to 1.4 Mbit/s. Design: In all modes, the clock frequency is controlled by the controller(s), and a longer-than-normal bus may be operated at a slower-than-nominal speed by underclocking. Design: Circuit interconnections I2C is popular for interfacing peripheral circuits to prototyping systems, such as the Arduino and Raspberry Pi. I2C does not employ a standardized connector; however, board designers have created various wiring schemes for I2C interconnections. To minimize the possible damage due to plugging 0.1-inch headers in backwards, some developers have suggested using alternating signal and power connections, such as the following wiring schemes: (GND, SCL, VCC, SDA) or (VCC, SDA, GND, SCL). The vast majority of applications use I2C in the way it was originally designed—peripheral ICs directly wired to a processor on the same printed circuit board, and therefore over relatively short distances of less than 1 foot (30 cm), without a connector. However, using a differential driver, an alternate version of I2C can communicate up to 20 meters (possibly over 100 meters) over CAT5 or other cable. Several standard connectors carry I2C signals. For example, the UEXT connector carries I2C; the 10-pin iPack connector carries I2C; the 6P6C Lego Mindstorms NXT connector carries I2C; a few people use the 8P8C connectors and CAT5 cable normally used for Ethernet physical layer to instead carry differential-encoded I2C signals or boosted single-ended I2C signals; and every HDMI and most DVI and VGA connectors carry DDC2 data over I2C. Design: Buffering and multiplexing When there are many I2C devices in a system, there can be a need to include bus buffers or multiplexers to split large bus segments into smaller ones. This can be necessary to keep the capacitance of a bus segment below the allowable value or to allow multiple devices with the same address to be separated by a multiplexer. Many types of multiplexers and buffers exist and all must take into account the fact that I2C lines are specified to be bidirectional. Multiplexers can be implemented with analog switches, which can tie one segment to another. Analog switches maintain the bidirectional nature of the lines but do not isolate the capacitance of one segment from another or provide buffering capability. Design: Buffers can be used to isolate capacitance on one segment from another and/or allow I2C to be sent over longer cables or traces. Buffers for bi-directional lines such as I2C must use one of several schemes for preventing latch-up. I2C is open-drain, so buffers must drive a low on one side when they see a low on the other.
One method for preventing latch-up is for a buffer to have carefully selected input and output levels such that the output level of its driver is higher than its input threshold, preventing it from triggering itself. For example, a buffer may have an input threshold of 0.4 V for detecting a low, but an output low level of 0.5 V. This method requires that all other devices on the bus have thresholds which are compatible, and often means that multiple buffers implementing this scheme cannot be put in series with one another. Design: Alternatively, other types of buffers exist that implement current amplifiers or keep track of the state (i.e. which side drove the bus low) to prevent latch-up. The state method typically means that an unintended pulse is created during a hand-off when one side is driving the bus low, then the other drives it low, then the first side releases (this is common during an I2C acknowledgement). Design: Sharing SCL between multiple buses With a single controller, it is possible to have multiple I2C buses share the same SCL line. The packets on each bus are either sent one after the other or at the same time. This is possible because the communication on each bus can be subdivided into alternating short periods with high SCL followed by short periods with low SCL, and the clock can be stretched if one bus needs more time in one state. Design: Advantages are using target devices with the same address at the same time, and saving connections or achieving a faster throughput by using several data lines at the same time. Line state table: These tables show the various atomic states and bit operations that may occur during an I2C message. Design: Addressing structure 7-bit addressing 10-bit addressing Reserved addresses in 7-bit address space: Two groups of addresses are reserved for special functions: 0000 XXX and 1111 XXX. SMBus reserves some additional addresses. In particular, 0001 000 is reserved for the SMBus host, which may be used by controller-capable devices, 0001 100 is the "SMBus alert response address" which is polled by the host after an out-of-band interrupt, and 1100 001 is the default address which is initially used by devices capable of dynamic address assignment. Design: This leaves a total of 107 unreserved 7-bit addresses in common between I2C, SMBus, and PMBus. Non-reserved addresses in 7-bit address space: Although MSB 1111 is reserved for Device ID and 10-bit target (slave) addressing, it is also used by VESA DDC display dependent devices such as pointing devices. Transaction format An I2C transaction consists of one or more messages. Each message begins with a start symbol, and the transaction ends with a stop symbol. Start symbols after the first, which begin a message but not a transaction, are referred to as repeated start symbols. Each message is a read or a write. A transaction consisting of a single message is called either a read or a write transaction. A transaction consisting of multiple messages is called a combined transaction. The most common form of the latter is a write message providing intra-device address information, followed by a read message. Design: Many I2C devices do not distinguish between a combined transaction and the same messages sent as separate transactions, but not all. The device ID protocol requires a single transaction; targets are forbidden from responding if they observe a stop symbol.
Configuration, calibration or self-test modes which cause the target to respond unusually are also often automatically terminated at the end of a transaction. Design: Timing diagram Data transfer is initiated with a start condition (S), signalled by SDA being pulled low while SCL stays high. SCL is pulled low, and SDA sets the first data bit level while keeping SCL low. The data is sampled (received) when SCL rises for the first bit (B1). For a bit to be valid, SDA must not change between a rising edge of SCL and the subsequent falling edge. This process repeats, SDA transitioning while SCL is low, and the data being read while SCL is high (B2 through Bn). The final bit is followed by a clock pulse, during which SDA is pulled low in preparation for the stop bit. A stop condition (P) is signalled when SCL rises, followed by SDA rising. In order to avoid false marker detection, there is a minimum delay between the SCL falling edge and changing SDA, and between changing SDA and the SCL rising edge. Note that an I2C message containing n data bits (including acknowledges) contains n + 1 clock pulses. Software design: I2C lends itself to a "bus driver" software design. Software for attached devices is written to call a "bus driver" that handles the actual low-level I2C hardware. This permits the driver code for attached devices to port easily to other hardware, including a bit-banging design. Example of bit-banging the I2C protocol: below is an example of bit-banging the I2C protocol as an I2C controller (master). The example is written in pseudo C and illustrates the I2C features described before (clock stretching, arbitration, start/stop bit, ack/nack).
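The following is a minimal sketch rather than an authoritative implementation. It assumes hardware-access primitives that are not part of any standard API: set_SDA()/clear_SDA() and set_SCL()/clear_SCL() release or drive low the open-drain lines, read_SDA()/read_SCL() sample them, and I2C_delay() waits roughly half a bit period. Timeouts and recovery after a lost arbitration are left to the caller.

```c
#include <stdbool.h>

/* Assumed hardware-access primitives (illustrative names, not a standard API). */
extern void set_SDA(void);     /* release SDA (pull-up makes it high) */
extern void clear_SDA(void);   /* drive SDA low */
extern bool read_SDA(void);
extern void set_SCL(void);     /* release SCL */
extern void clear_SCL(void);   /* drive SCL low */
extern bool read_SCL(void);
extern void I2C_delay(void);   /* roughly half a bit period */

static bool started = false;            /* bus state between calls */
static bool arbitration_lost = false;   /* set on a lost arbitration; caller recovers */

static void i2c_start_cond(void)
{
    if (started) {                      /* repeated START: restore both lines */
        set_SDA();
        I2C_delay();
        set_SCL();
        while (read_SCL() == 0) { }     /* honour clock stretching (no timeout here) */
        I2C_delay();
    }
    if (read_SDA() == 0)                /* someone else is already driving SDA */
        arbitration_lost = true;
    clear_SDA();                        /* START: SDA falls while SCL is high */
    I2C_delay();
    clear_SCL();
    started = true;
}

static void i2c_stop_cond(void)
{
    clear_SDA();
    I2C_delay();
    set_SCL();
    while (read_SCL() == 0) { }         /* clock stretching */
    I2C_delay();
    set_SDA();                          /* STOP: SDA rises while SCL is high */
    I2C_delay();
    if (read_SDA() == 0)
        arbitration_lost = true;
    started = false;
}

static void i2c_write_bit(bool bit)
{
    if (bit) set_SDA(); else clear_SDA();
    I2C_delay();                        /* let the value propagate while SCL is low */
    set_SCL();
    I2C_delay();
    while (read_SCL() == 0) { }         /* wait out clock stretching */
    if (bit && read_SDA() == 0)         /* sent 1 but the line reads 0: */
        arbitration_lost = true;        /* another transmitter won arbitration */
    clear_SCL();
}

static bool i2c_read_bit(void)
{
    set_SDA();                          /* release SDA so the target can drive it */
    I2C_delay();
    set_SCL();
    while (read_SCL() == 0) { }         /* clock stretching */
    I2C_delay();
    bool bit = read_SDA();
    clear_SCL();
    return bit;
}

/* Write one byte, MSB first; returns true if the receiver ACKed (pulled SDA low). */
bool i2c_write_byte_raw(bool send_start, bool send_stop, unsigned char byte)
{
    if (send_start) i2c_start_cond();
    for (int i = 0; i < 8; ++i) {
        i2c_write_bit((byte & 0x80) != 0);
        byte <<= 1;
    }
    bool nack = i2c_read_bit();         /* ACK is 0, NACK is 1 */
    if (send_stop) i2c_stop_cond();
    return !nack;
}
```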
Operating system support: In AmigaOS one can use the i2c.resource component for AmigaOS 4.x and MorphOS 3.x, or the shared library i2c.library by Wilhelm Noeker for older systems. Arduino developers can use the "Wire" library. Maximite supports I2C communications natively as part of its MMBasic. PICAXE uses the i2c and hi2c commands. eCos supports I2C for several hardware architectures. ChibiOS/RT supports I2C for several hardware architectures. FreeBSD, NetBSD and OpenBSD also provide an I2C framework, with support for a number of common controllers and sensors. Operating system support: Since OpenBSD 3.9 (released 1 May 2006), a central i2c_scan subsystem probes all possible sensor chips at once during boot, using an ad hoc weighting scheme and a local caching function for reading register values from the I2C targets; this makes it possible to probe sensors on general-purpose off-the-shelf i386/amd64 hardware during boot without any configuration by the user or a noticeable probing delay; the matching procedures of the individual drivers then only have to rely on a string-based "friendly-name" for matching; as a result, most I2C sensor drivers are automatically enabled by default in applicable architectures without ill effects on stability; individual sensors, both I2C and otherwise, are exported to the userland through the sysctl hw.sensors framework. As of March 2019, OpenBSD has over two dozen device drivers on I2C that export some kind of a sensor through the hw.sensors framework, and the majority of these drivers are fully enabled by default in i386/amd64 GENERIC kernels of OpenBSD. Operating system support: In NetBSD, over two dozen I2C target devices exist that feature hardware monitoring sensors, which are accessible through the sysmon envsys framework as property lists. On general-purpose hardware, each driver has to do its own probing, hence all drivers for the I2C targets are disabled by default in NetBSD in GENERIC i386/amd64 builds. In Linux, I2C is handled with a device driver for the specific device, and another for the I2C (or SMBus) adapter to which it is connected. Hundreds of such drivers are part of current Linux kernel releases. In Mac OS X, there are about two dozen I2C kernel extensions that communicate with sensors for reading voltage, current, temperature, motion, and other physical status. In Microsoft Windows, I2C is implemented by the respective device drivers of much of the industry's available hardware. For HID embedded/SoC devices, Windows 8 and later have an integrated I²C bus driver. In Windows CE, I2C is implemented by the respective device drivers of much of the industry's available hardware. Unison OS, a POSIX RTOS for IoT, supports I2C for several MCU and MPU hardware architectures. In RISC OS, I2C is provided with a generic I2C interface from the IO controller and supported from the OS module system. In Sinclair QDOS and Minerva QL operating systems, I2C is supported by a set of extensions provided by TF Services. Development tools: When developing or troubleshooting systems using I2C, visibility at the level of hardware signals can be important. Host adapters There are a number of I2C host adapter hardware solutions for making an I2C controller or target connection to host computers running Linux, Mac or Windows. Most options are USB-to-I2C adapters. Not all of them require proprietary drivers or APIs. Protocol analyzers I2C protocol analyzers are tools that sample an I2C bus and decode the electrical signals to provide a higher-level view of the data being transmitted on the bus. Development tools: Logic analyzers When developing and/or troubleshooting the I2C bus, examination of hardware signals can be very important. Logic analyzers are tools that collect, analyze, decode, and store signals, so people can view the high-speed waveforms at their leisure. Logic analyzers display time stamps of each signal level change, which can help find protocol problems. Most logic analyzers have the capability to decode bus signals into high-level protocol data and show ASCII data. Limitations: On low-power systems, the pull-up resistors can use more power than the entire rest of the design combined. On these, the resistors are often powered by a switchable voltage source, such as a DIO from a microcontroller. The pull-ups also limit the speed of the bus and have a small additional cost. Therefore, some designers are turning to other serial buses, e.g. I3C or SPI, that do not need pull-ups. Limitations: The assignment of target addresses is a weakness of I2C. Seven bits is too few to prevent address collisions between the many thousands of available devices. What alleviates the issue of address collisions between different vendors, and also allows several identical devices to be connected, is that manufacturers dedicate pins that can be used to set the target address to one of a few address options per device. Two or three pins is typical, and with many devices, there are three or more wiring options per address pin. 10-bit I2C addresses are not yet widely used, and many host operating systems do not support them.
Neither is the complex SMBus "ARP" scheme for dynamically assigning addresses widely supported (other than for PCI cards with SMBus presence, for which it is required). Limitations: Automatic bus configuration is a related issue. A given address may be used by a number of different protocol-incompatible devices in various systems, and hardly any device types can be detected at runtime. For example, 0x51 may be used by a 24LC02 or 24C32 EEPROM, with incompatible addressing; or by a PCF8563 RTC, which cannot reliably be distinguished from either (without changing device state, which might not be allowed). The only reliable configuration mechanisms available to hosts involve out-of-band mechanisms such as tables provided by system firmware, which list the available devices. Again, this issue can partially be addressed by ARP in SMBus systems, especially when vendor and product identifiers are used; but that has not really caught on. The Rev. 3 version of the I2C specification adds a device ID mechanism. Limitations: I2C supports a limited range of speeds. Hosts supporting the multi-megabit speeds are rare. Support for the Fm+ 1 Mbit/s speed is more widespread, since its electronics are simple variants of what is used at lower speeds. Many devices do not support the 400 kbit/s speed (in part because SMBus does not yet support it). I2C nodes implemented in software (instead of dedicated hardware) may not even support the 100 kbit/s speed; so the whole range defined in the specification is rarely usable. All devices must at least partially support the highest speed used or they may spuriously detect their device address. Limitations: Devices are allowed to stretch clock cycles to suit their particular needs, which can starve bandwidth needed by faster devices and increase latencies when talking to other device addresses. Bus capacitance also places a limit on the transfer speed, especially when current sources are not used to decrease signal rise times. Limitations: Because I2C is a shared bus, there is the potential for any device to have a fault and hang the entire bus. For example, if any device holds the SDA or SCL line low, it prevents the controller from sending START or STOP commands to reset the bus. Thus it is common for designs to include a reset signal that provides an external method of resetting the bus devices. However, many devices do not have a dedicated reset pin, forcing the designer to put in circuitry to allow devices to be power-cycled if they need to be reset. Limitations: Because of these limits (address management, bus configuration, potential faults, speed), few I2C bus segments have even a dozen devices. It is common for systems to have several such segments. One might be dedicated to use with high-speed devices, for low-latency power management. Another might be used to control a few devices where latency and throughput are not important issues; yet another segment might be used only to read EEPROM chips describing add-on cards (such as the SPD standard used with DRAM sticks). Derivative technologies: I2C is the basis for the ACCESS.bus, the VESA Display Data Channel (DDC) interface, the System Management Bus (SMBus), Power Management Bus (PMBus) and the Intelligent Platform Management Bus (IPMB, one of the protocols of IPMI). These variants have differences in voltage and clock frequency ranges, and may have interrupt lines. High-availability systems (AdvancedTCA, MicroTCA) use 2-way redundant I2C for shelf management. Multi-controller I2C capability is a requirement in these systems.
Derivative technologies: TWI (Two-Wire Interface) or TWSI (Two-Wire Serial Interface) is essentially the same bus implemented on various system-on-chip processors from Atmel and other vendors. Vendors use the name TWI even though I2C was not a registered trademark as of 2014-11-07; trademark protection only exists for the respective logo, and patents on I2C have now lapsed. According to Microchip Technology, TWI and I2C have a few differences; one of them is that TWI does not support a START byte. In some cases, use of the term "two-wire interface" indicates an incomplete implementation of the I2C specification. Not supporting arbitration or clock stretching is one common limitation, which is still useful for a single controller communicating with simple targets that never stretch the clock. Derivative technologies: The MIPI I3C sensor interface standard (I3C) is a development of I2C that was under development as of 2017.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nik-L-Nip** Nik-L-Nip: Nik-L-Nip is a brand of confectionery created by Vinny Cavallo in the early 20th century that comes in a variety of fruit flavors, marketed by Tootsie Roll Industries. The Nik-L-Nip brand name is a combination of the original cost (a nickel, $0.05) and the preferred method of opening the wax bottles, which is to nip (bite) the top off. Each small, bottle-shaped wax container holds a fruity-tasting liquid; once the top has been bitten off, one can drink the fruit-flavored syrup inside, and afterward the wax can be chewed like gum. The wax in Nik-L-Nip wax bottles is food-grade and non-toxic, although it is meant to be chewed, not swallowed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hive management** Hive management: Hive management in beekeeping refers to intervention techniques that a beekeeper may perform to ensure hive survival and to maximize hive production. Hive management techniques vary widely depending on the objectives. For honey production: The dependent factors for honey production are the duration and timing of the honey flow in a certain area. Duration and timing of a honey flow may vary widely depending on local predominant climates, weather during the honey flow, and the nectar sources in the area. Good honey production sites are the far northern latitudes: in the summer, as days grow longer, bees can fly and forage for longer hours, increasing production. Migrating beekeepers also take advantage of local bloom of agricultural plants or wild flowers and trees. In mountainous regions, a beekeeper may migrate up the mountain as the spring and summer bloom progresses. For honey production: It has been shown that a larger bee colony will produce relatively more honey. Therefore, the early buildup and spring feeding and subsequent prevention of swarming are of high priority. Several different methods, such as the Demaree method, checkerboarding, and opening up the brood nest, have been advocated to prevent swarming. For honey production: Techniques to maximize extracted honey production Once a good location for an apiary is selected, techniques under the control of a beekeeper for maximizing extracted honey production depend mostly on maximizing the number of foraging bees at the peak time of the honey flow. Techniques may include interrupting brood production right before the main honey flow to free up nurse bees for foraging. A main objective is to prevent swarming. For honey production: Techniques to maximize comb honey production Comb honey production requires many of the same techniques that are required for the production of extracted honey. In addition, the colony must be very strong and have comb-building traits. Honeycomb for direct consumption as comb honey is always created the same year it is harvested. For honey production: Honey combs may also be harvested by crushing the comb and squeezing out the honey. This is the lowest-cost method of producing honey. Keepers of the low-cost top-bar hives use this technique to harvest honey. The technique may also be used for the frames of Langstroth hives. So-called cut comb consists of sections of sealed honey comb that are cut out of the frame. If the cut comb is to be consumed rather than crushed, only the purest beeswax foundation may be used. For honey production: Techniques for maximizing Ross rounds and cassette production Killion Method Juniper Hill Method Crowding Shock Shook Method For pollination: see pollination management Techniques for maximizing agricultural crops pollination Pollinator decline Pesticide toxicity to bees Buzz pollination For queen breeding: Techniques to maximize open mating Techniques to maximize open mating of virgin queens center around having drones of a desired parentage saturate a queen mating yard. Techniques to maximize artificial insemination Artificial insemination of honeybee queens is a process used for very selective breeding of honeybee races. In the open mating of queens, the source of drones cannot be fully controlled. In artificial insemination, the source of drone sperm can be fully controlled and be more predictably selected than in open breeding. For pollen production: Bee pollen is one of the byproducts of the hive.
Pollen collection is usually not the main management objective. Pollen is collected by installing a pollen trap at the entrance of the bee hive. There are varying designs for pollen traps. The pollen trap makes access to the hive harder for the foraging bees. In the process of climbing through the pollen trap wires, some pollen is loosened from the bee's pollen basket and falls into a collection container. Varying recommendations describe leaving the pollen trap on for a few days or for more extended periods. Pollen collection works best in an area with various pollen sources throughout the year. For pollen production: Fresh pollen can be frozen or dried. It is used for human consumption or fed back to the colony in early spring to speed up brood production. For propolis production: Propolis is another byproduct of the bee hive. Certain races of bees are more prone to using propolis. Propolis can be collected on special plastic propolis screens. The tendency of the bees is to use propolis as a glue to seal openings that are too small for a bee to crawl through. A propolis screen is usually put in place of an inner cover. It has small openings that are propolized by the bees. The propolis screen can be frozen, which hardens the propolis. Once the propolis is frozen, it can be easily knocked off and collected. Bee races that use propolis heavily are usually not desirable, as it makes other hive manipulation more difficult. There is a good market for propolis in the medicinal and pharmacological industries. For beeswax production: Beeswax may be a major product or a minor byproduct. The management technique that yields the highest amount of wax per hive is the top-bar hive. During the harvest of the honey from top-bar hives, the whole honey comb is removed and crushed to extract the honey. The commercial honey producers use Langstroth hive frames. The honey extraction process yields beeswax from the uncapping process. The highest quality beeswax is almost white. Lower quality beeswax from older cappings or comb is yellow or brown. Beeswax should be rendered and filtered before it is sold. The least amount of beeswax that can be used as such is produced in Ross rounds or cassette type comb honey production: wax and honey are not separated and are consumed together. The ability and tendency to build wax comb differs between the honeybee races. It also differs between colonies. A newly hived swarm produces wax and builds comb very quickly. For royal jelly production: The production of royal jelly is most dependent on the proper genetics of the queen. Queens and drones are selectively bred to increase the production of royal jelly. A good yield per hive is 5 kg per year. For apitoxin production: Bee venom (apitoxin) is obtained by stimulating the bees with an electric current that incites them to sting, releasing a drop of poison onto a glass slide. The crystallized venom can be collected and processed. In order to get 1 gram of dry venom, it is necessary to collect the apitoxin of 10,000 to 15,000 bees. For bee brood production: Bee brood as such is generally not a commercial commodity. However, bee brood is edible, and is used as a food in Asia and Africa. For the production of nucs: Hive management techniques to multiply colonies use the bees' natural tendency to swarm by simulating a swarm. Nucs are usually bought and sold in the spring. The advantage over packaged bees is that the bees are on established frames with a laying queen and developing brood.
A fast-developing nuc can be transferred to a full hive box and may produce honey in the same year. For the production of nucs: Walk-away split In a walk-away split, frames with eggs and worker bees are removed, and the bees will create a queen cell from a suitable egg. Once the queen hatches, successfully mates, and returns to the hive, the hive will be queenright. Cut down split For bee package production: A package of bees is made of a queen and 3 to 5 pounds of bees, typically around 20,000 bees. The bees are shipped in a cage clustered around a caged queen. The queen is typically unrelated to the bees, so the cage creates a barrier between the bees and the queen. Packages are usually shipped in the spring from regions of mild winter climates to areas that have more severe winters.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Homogeneity (physics)** Homogeneity (physics): In physics, a homogeneous material or system has the same properties at every point; it is uniform without irregularities. A uniform electric field (which has the same strength and the same direction at each point) would be compatible with homogeneity (all points experience the same physics). A material constructed with different constituents can be described as effectively homogeneous in the electromagnetic materials domain, when interacting with a directed radiation field (light, microwave frequencies, etc.). Mathematically, homogeneity has the connotation of invariance, as all components of the equation have the same degree of value whether or not each of these components are scaled to different values, for example, by multiplication or addition. A cumulative distribution fits this description: "the state of having identical cumulative distribution function or values". Context: The definition of homogeneous strongly depends on the context used. For example, a composite material is made up of different individual materials, known as "constituents" of the material, but may be defined as a homogeneous material when assigned a function. For example, asphalt paves our roads, but is a composite material consisting of asphalt binder and mineral aggregate, and then laid down in layers and compacted. However, homogeneity of materials does not necessarily mean isotropy; in the previous example, a composite material may not be isotropic. Context: In another context, a material is not homogeneous in so far as it is composed of atoms and molecules. However, at the normal level of our everyday world, a pane of glass or a sheet of metal is described simply as glass or stainless steel; in other words, these are each described as a homogeneous material. A few other instances of context are: dimensional homogeneity (see below) is the quality of an equation having quantities of the same units on both sides; homogeneity (in space) implies conservation of momentum; and homogeneity in time implies conservation of energy. Context: Homogeneous alloy An example in the context of composite metals is an alloy. A blend of a metal with one or more metallic or nonmetallic materials is an alloy. The components of an alloy do not combine chemically but, rather, are very finely mixed. An alloy might be homogeneous or might contain small particles of components that can be viewed with a microscope. Brass is an example of an alloy, being a homogeneous mixture of copper and zinc. Another example is steel, which is an alloy of iron with carbon and possibly other metals. The purpose of alloying is to produce desired properties in a metal that naturally lacks them. Brass, for example, is harder than copper and has a more gold-like color. Steel is harder than iron and can even be made rust proof (stainless steel). Context: Homogeneous cosmology Homogeneity plays a role in cosmology in another context. From the perspective of 19th-century cosmology (and before), the universe was infinite, unchanging, homogeneous, and therefore filled with stars. However, German astronomer Heinrich Olbers asserted that if this were true, then the entire night sky would be filled with light and bright as day; this is known as Olbers' paradox. Olbers presented a technical paper in 1826 that attempted to answer this conundrum. The faulty premise, not recognized as such in Olbers' time, was that the universe is infinite, static, and homogeneous.
The Big Bang cosmology replaced this model (with an expanding, finite, and inhomogeneous universe). However, modern astronomers supply reasonable explanations to answer this question. One of several explanations is that distant stars and galaxies are red shifted, which weakens their apparent light and makes the night sky dark. However, the weakening is not sufficient to actually explain Olbers' paradox. Many cosmologists think that the fact that the Universe is finite in time, that is, that the Universe has not been around forever, is the solution to the paradox. The fact that the night sky is dark is thus an indication for the Big Bang. Translation invariance: By translation invariance, one means independence of (absolute) position, especially when referring to a law of physics, or to the evolution of a physical system. Fundamental laws of physics should not (explicitly) depend on position in space; that would make them quite useless. In some sense, this is also linked to the requirement that experiments should be reproducible. This principle is true for all laws of mechanics (Newton's laws, etc.), electrodynamics, quantum mechanics, etc. Translation invariance: In practice, this principle is usually violated, since one studies only a small subsystem of the universe, which of course "feels" the influence of the rest of the universe. This situation gives rise to "external fields" (electric, magnetic, gravitational, etc.) which make the description of the evolution of the system depend upon its position (potential wells, etc.). This only stems from the fact that the objects creating these external fields are not considered as (a "dynamical") part of the system. Translation invariance: Translational invariance as described above is equivalent to shift invariance in system analysis, although there it is most commonly used in linear systems, whereas in physics the distinction is not usually made. The notion of isotropy, for properties independent of direction, is not a consequence of homogeneity. For example, a uniform electric field (i.e., one which has the same strength and the same direction at each point) would be compatible with homogeneity (at each point physics will be the same), but not with isotropy, since the field singles out one "preferred" direction. Consequences: In the Lagrangian formalism, homogeneity in space implies conservation of momentum, and homogeneity in time implies conservation of energy. This is shown, using variational calculus, in standard textbooks like the classical reference text of Landau & Lifshitz. This is a particular application of Noether's theorem. Dimensional homogeneity: As said in the introduction, dimensional homogeneity is the quality of an equation having quantities of the same units on both sides. A valid equation in physics must be homogeneous, since equality cannot apply between quantities of different nature. This can be used to spot errors in formulas or calculations. For example, if one is calculating a speed, units must always combine to [length]/[time]; if one is calculating an energy, units must always combine to [mass]•[length]²/[time]², etc. For example, the following formulae could be valid expressions for some energy: E_k = (1/2)mv²; E = mc²; E = pv; E = hc/λ, if m is a mass, v and c are velocities, p is a momentum, h is Planck's constant, and λ a length. On the other hand, if the units of the right-hand side do not combine to [mass]•[length]²/[time]², it cannot be a valid expression for some energy.
Dimensional homogeneity: Being homogeneous does not necessarily mean the equation will be true, since it does not take into account numerical factors. For example, E = m•v² may or may not be the correct formula for the energy of a particle of mass m traveling at speed v, and one cannot know whether h•c/λ should be divided or multiplied by 2π. Dimensional homogeneity: Nevertheless, this is a very powerful tool in finding characteristic units of a given problem; see dimensional analysis. Theoretical physicists tend to express everything in natural units given by constants of nature, for example by taking c = ħ = k = 1; once this is done, one partly loses the ability to perform the above checks.
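The bookkeeping behind such checks is simple enough to sketch in code. In the following minimal C example (all names illustrative), a quantity's dimensions are stored as integer exponents of [mass], [length], and [time]; multiplying quantities adds exponents, and an equation is dimensionally homogeneous when both sides carry the same exponent triple.

```c
#include <stdbool.h>
#include <stdio.h>

/* Dimensions as integer exponents of mass, length, and time. */
typedef struct { int mass, length, time; } Dim;

/* Multiplying two quantities adds their dimension exponents. */
static Dim dim_mul(Dim a, Dim b)
{
    return (Dim){ a.mass + b.mass, a.length + b.length, a.time + b.time };
}

static bool dim_eq(Dim a, Dim b)
{
    return a.mass == b.mass && a.length == b.length && a.time == b.time;
}

int main(void)
{
    const Dim mass     = {1, 0, 0};
    const Dim velocity = {0, 1, -1};
    const Dim energy   = {1, 2, -2};    /* [mass]·[length]²/[time]² */

    /* E = p·v: momentum (m·v) times velocity must combine to energy. */
    Dim momentum = dim_mul(mass, velocity);
    Dim pv       = dim_mul(momentum, velocity);
    printf("E = p*v is %s\n",
           dim_eq(pv, energy) ? "homogeneous" : "not homogeneous");

    /* Note: homogeneity says nothing about numerical factors such as 1/2. */
    return 0;
}
```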
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Marbled meat** Marbled meat: Marbled meat is meat, especially red meat, that contains various amounts of intramuscular fat, giving it an appearance similar to marble. Important terms defined: Beef quality grades - A quality grade is a composite evaluation of factors that affect the palatability of meat (tenderness, juiciness, and flavor). These factors include carcass maturity, firmness, texture, and color of lean, and the amount and distribution of marbling within the lean. Beef carcass quality grading is based on (1) degree of marbling and (2) degree of maturity. Marbling - Marbling (intramuscular fat) is the intermingling or dispersion of fat within the lean. Graders evaluate the amount and distribution of marbling in the ribeye muscle at the cut surface after the carcass has been ribbed between the 12th and 13th ribs. Degree of marbling is the primary determinant of quality grade. Maturity - Maturity refers to the physiological age of the animal rather than the chronological age. Because the chronological age is virtually never known, physiological maturity is used; the indicators are bone characteristics, ossification of cartilage, and the color and texture of the ribeye muscle. Cartilage becomes bone, lean color darkens, and texture becomes coarser with increasing age. Cartilage and bone maturity receive more emphasis because lean color and texture can be affected by other postmortem factors. Beef yield grades - In beef, yield grades estimate the amount of boneless, closely trimmed retail cuts from the high-value parts of the carcass: the round, loin, rib, and chuck. However, they also show differences in the total yield of retail cuts. A YG 1 carcass is expected to have the highest percentage of boneless, closely trimmed retail cuts, or higher cutability, while a YG 5 carcass would have the lowest percentage of boneless, closely trimmed retail cuts, or the lowest cutability. The USDA yield grades are rated numerically from 1 to 5: Yield Grade 1 denotes the highest-yielding carcass and Yield Grade 5 the lowest. United States grading system: The USDA's grading system, which has been designed to reward marbling, has eight different grades: Prime, Choice, Select, Standard, Commercial, Utility, Cutter and Canner. Prime has the highest marbling content when compared to other grades, and is capable of fetching a premium at restaurants and supermarkets. Choice is the grade most commonly sold in retail outlets, and Select is sold as a cheaper option in many stores. Prime, Choice, Select and Standard are commonly used for younger cattle (under 42 months of age), while Commercial, Utility, Canner and Cutter are used for older cattle carcasses, which are not marketed as wholesale beef "block" meat but as material used in ground products and cheaper steaks for family restaurants.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Endocrine and Metabolic Diseases Information Service** Endocrine and Metabolic Diseases Information Service: The Endocrine and Metabolic Diseases Information Service is an information dissemination service of the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). The NIDDK is part of the National Institutes of Health, which is part of the U.S. Department of Health and Human Services. The Endocrine and Metabolic Diseases Information Service is a part of the NIDDK's Division of Diabetes, Endocrinology, and Metabolic Diseases.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dragon kill points** Dragon kill points: Dragon kill points or DKP are a semi-formal score-keeping system (loot system) used by guilds in massively multiplayer online games. Players in these games are faced with large-scale challenges, or raids, which may only be surmounted through the concerted effort of dozens of players at a time. While many players may be involved in defeating a boss, the boss will reward the group with only a small number of items desired by the players. Faced with this scarcity, some system of fairly distributing the items must be established. Used originally in the massively multiplayer online role-playing game EverQuest, dragon kill points are points that are awarded to players for defeating bosses and redeemed for items that those bosses would "drop". At the time, most of the bosses faced by the players were dragons, hence the name. Dragon kill points: While not transferable outside of a particular guild, DKP are often treated in a manner similar to currency by guilds. They are paid out at a specified rate and redeemed in modified first- or second-price auctions, although these are not the only methods by which DKP may be redeemed or awarded. However, dragon kill points are distinct from the virtual currencies in each game world which are designed by the game developers; DKP systems vary from guild to guild, and the points themselves only have value in regard to the dispersal of boss "loot". Origin and motivation: DKP systems were first designed for EverQuest in 1999 by Thott as part of the creation of a guild called "Afterlife", and named for two dragons, Lady Vox and Lord Nagafen. Since then, the system has been adapted for use in other similar online games; in World of Warcraft, for example, an avatar named Dragonkiller started its popular use, and other programmers designed in-game applications to track the points and achievements earned. Unlike pen-and-paper or more traditional role-playing video games, massively multiplayer online games could present challenges so significant that the number of players required to defeat them would greatly exceed the number of items awarded to the raid following the boss kill—a raid of 25 individuals may only see two or three items "drop". The actual number of players required to defeat a specific boss varies from game to game, but the person-hours invested are non-trivial. Raid encounters may involve "10-200 players organized to achieve a common goal over a period of typically around 3-6 continuous hours" and demand teamwork and competence from all raid members. As the number of players required to defeat a boss grows, so does the problem of distributing the rewards from such efforts. Since these items appear, or "drop", in quantities much smaller than the total number of players in the group required to defeat them, a means of deciding which of the players should receive the items is necessary. At the "endgame", new items rewarded from boss kills represent one of the only means to continue to enhance the combat effectiveness of the character or the social standing of the player. As such, individual players care about receiving a fair shot at dropped items.: 1–3  Guilds facing smaller challenges with fewer players typically begin by allotting items through a simulated roll of the dice (provided by the software serving the game itself), similar to dice rolls used to dictate the outcome of contingent events in pen-and-paper role-playing games.
As the number of players expands, rolls may be weighted by seniority within the guild or adjusted by some other measure so as to ensure that veterans of the guild do not lose out on an item to a new member.: 7  Games and dungeons which require larger groups of players may create the incentive for more formal DKP systems. Methods to reward items according to seniority or performance developed out of these modifications, including systems relying on a formal allotment of points per kill. Mechanics of a DKP system: The basic concepts of most DKP systems are simple. Players are given points for participating in raids or other guild events and spend those points on the item of their choice when the boss 'drops' the item. A player who does not get a chance to spend their DKP accumulates it between raids and is able to spend it in the future. These points, while earned and spent like currency, are not the same thing as the virtual currency provided by the game company for the virtual world. The points themselves represent only the social contract that guilds extend to players. Should that player leave the guild or the guild disband, those points become valueless.: 17  These measures vary considerably in usage. Some guilds eschew formalized 'loot' systems completely, allowing guild leaders to direct which players receive items from bosses. Some use complex measures to determine item price while others use an auction system to allocate goods via bidding. A few common variations are described below. Mechanics of a DKP system: Zero-sum DKP Zero-sum DKP systems are designed to ensure the net change in points among the raid is zero for each item dropped, as the name might suggest. When the item drops, each player who is interested in it indicates as much to a guild leader. The player who has the highest DKP total receives the item for its specified price, and the same number of points are divided evenly among the rest of the raid and given out, resulting in no net change to the raid total. (A small numerical sketch of this zero-sum bookkeeping appears at the end of this article.) As a result, the raid would only be rewarded DKP if at least one player desired the item dropped by the boss. Since over time guilds will revisit the same boss multiple times, some zero-sum DKP systems are modified to introduce a "dummy" character which may be awarded DKP for the boss "kill" even though no player in the guild received an item. This is purely an accounting measure and allows the guild to reward players for defeating a boss if they are using an automated point tracking system.: 184 : 13–14  Simple DKP The simplest DKP variation is one where every item has a set price list and each player earns some specified number of DKP each time they participate in a guild raid. As in zero-sum systems, the player with the most points recorded actually receives the item, paying the specified price. Unlike zero-sum, a simple DKP system does not compensate the rest of the raid based on the value of the items received. Mechanics of a DKP system: Auction systems Setting "prices" in DKP for specific items can be difficult, as analysis of a particular item can be subjective and laborious.: 5–6  In order to avoid this quandary, guilds may establish an auction system for items. Points are awarded to the player at some specified rate, and when the items are awarded to the raid group, players bid DKP for the item of their choice. Auctions may be conducted in an open ascending fashion or through sealed bids over private messages to guild leaders.
While this process results in relatively efficient allocation of items to players willing to part with DKP, it presents the social consequence that perceived selfish bidding could result in an item being awarded to a character who would not make the best use of it. Mechanics of a DKP system: GDKP (Gold DKP) Gold DKP (GDKP) is a system developed for pick-up groups (PUGs). It was introduced so that players without a guild could still raid difficult bosses and zones. In GDKP, when a boss is killed, each item dropped is put up for auction, starting at a low value. The eventual winner pays the loot master and, after every item has been auctioned off, every participant in the group is rewarded an equal share of the gold. For example, if 20 members were in the group, and 500 gold was spent on items, each raid participant would receive 25 gold. DKP as virtual capital: Since the intention of DKP is to allocate scarce resources amongst guild members, they can be understood in the context of virtual capital. Players "earn" and "spend" DKP, bidding in a system of auctions for an item which holds some value for them. DKP are referred to as the "currency" with which a guild leader pays his "employees". Despite these analogies, DKP remain a kind of "private money system", allowing guilds to mete out these otherwise unachievable items in return for participation and discipline.: 8  The points cannot be traded or redeemed outside the guild and are not actually part of the game itself; they are tracked on external websites. In contrast, the virtual currencies created by game developers are part of the game software and may be traded between players without respect to any social affiliation. Just as DKP is valueless outside the guild, parlaying of economic capital for DKP (paying real-world currency in exchange for DKP) is almost unheard of.: 7–10  Because guilds mete out DKP in return for participation in events, the functional result is that DKP serve less as currency or material capital and more as what Torill Mortensen refers to as a "social stabilizer"; players who attend raids more frequently or play by the rules reap the rewards while more "casual" gamers do not. This provides an incentive for players to remain in the social system (the guild) longer than they might otherwise.: 16–19 Within the guild, DKP may stand in for competence—high-level items (Krista-Lee Malone mentions a specific item from World of Warcraft, the "Cold Snap" wand) are forms of cultural capital themselves.: 18  Since the items are "bound" to the player who first receives them, the only way to wield a desired item is to be involved in the raid that defeated the boss which rewards it. As such, a "Cold Snap" represents a signal to other players that the bearer has defeated a particular high-level monster and therefore mastered the skills needed to do so. The points themselves represent a mélange of cultural and material capital. The language of material capital is used: "price", "bid", and "currency", but these terms belie a unit of account that "crosses the line between material and symbolic".: 29
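To make the zero-sum bookkeeping described above concrete, here is a minimal sketch of a zero-sum DKP ledger in Python. The class and member names (other than Thott) are hypothetical; real guilds typically track this with spreadsheets or game add-ons.

```python
# Minimal sketch of a zero-sum DKP ledger (all names are illustrative).
class ZeroSumDKP:
    def __init__(self, members):
        # Every raider starts at zero; points only move when an item drops.
        self.points = {name: 0.0 for name in members}

    def award_item(self, winner, price):
        """Charge the winner for the item, then split the same amount
        evenly among the rest of the raid, so the net change is zero."""
        others = [m for m in self.points if m != winner]
        self.points[winner] -= price
        share = price / len(others)
        for m in others:
            self.points[m] += share

ledger = ZeroSumDKP(["Thott", "Ashra", "Belan", "Corin"])
ledger.award_item("Thott", price=90)            # Thott -90, others +30 each
assert abs(sum(ledger.points.values())) < 1e-9  # raid total is unchanged
```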
**Iterative Viterbi decoding** Iterative Viterbi decoding: Iterative Viterbi decoding is an algorithm that spots the subsequence S of an observation O = {o1, ..., on} having the highest average probability (i.e., probability scaled by the length of S) of being generated by a given hidden Markov model M with m states. The algorithm uses a modified Viterbi algorithm as an internal step. The scaled probability measure was first proposed by John S. Bridle. An early algorithm to solve this problem, sliding window, was proposed by Jay G. Wilpon et al. in 1989, with cost T = mn²/2. A faster algorithm consists of an iteration of calls to the Viterbi algorithm, re-estimating a filler score until convergence. The algorithm: A basic (non-optimized) version, finding the sequence s with the smallest normalized distance from some subsequence of t, is:

```
// Input: observation s[1..n], template t[1..m], and distance matrix d[1..n, 1..m].
// The remaining elements of s, t and d (indices 0, n+1 and m+1) are used
// solely for internal computations.
(int, int, int) AverageSubmatchDistance(char s[0..n+1], char t[0..m+1], int d[1..n, 0..m+1]) {
    // e: score, B: subsequence start, E: subsequence end
    declare int e, e', B, E
    t[0] := t[m+1] := s[0] := s[n+1] := 'e'    // sentinel symbols at both ends
    e := random()
    do {
        e' := e
        for i := 1 to n do
            d[i,0] := d[i,m+1] := e            // filler score in the sentinel columns
        (e, B, E) := ViterbiDistance(s, t, d)
        e := e / (E - B + 1)                   // normalize by the match length
    } until (e == e')
    return (e, B, E)
}
```

The ViterbiDistance() procedure returns the tuple (e, B, E), i.e., the Viterbi score "e" for the match of t and the selected entry (B) and exit (E) points from it. "B" and "E" have to be recorded using a simple modification to Viterbi. The algorithm: A modification that can be applied to CYK tables, proposed by Antoine Rozenknop, consists in subtracting e from all elements of the initial matrix d, which in effect penalizes each matched position by the current filler score.
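The outer loop is independent of the internals of the Viterbi step, so it can be written generically. Below is a minimal Python sketch of that iteration, assuming a caller-supplied viterbi_distance(e) callback that runs the modified Viterbi pass with filler score e and returns the best total score together with the entry and exit points; the function names, tolerance and iteration cap are illustrative.

```python
def iterative_average_match(viterbi_distance, e0=0.0, tol=1e-9, max_iter=100):
    """Re-estimate the filler score e with the length-normalized Viterbi
    score until it reaches a fixed point, as in AverageSubmatchDistance."""
    e = e0
    for _ in range(max_iter):
        total, B, E = viterbi_distance(e)  # best total score, entry, exit
        e_new = total / (E - B + 1)        # normalize by subsequence length
        if abs(e_new - e) <= tol:          # converged: e reproduces itself
            return e_new, B, E
        e = e_new
    return e, B, E
```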
**Hepatitis F virus** Hepatitis F virus: Hepatitis F is a hypothetical virus linked to viral hepatitis. Several hepatitis F candidates emerged in the 1990s; however, none of these claims were substantiated. In 1994, Deka et al. reported that novel viral particles had been discovered in the stool of post-transfusion, non-hepatitis A, non-hepatitis B, non-hepatitis C, non-hepatitis E patients. Injection of these particles into the bloodstream of Indian rhesus monkeys caused hepatitis, and the virus was named hepatitis F or Toga virus. Further investigations failed to confirm the existence of the virus, and it was delisted as a cause for infectious hepatitis. A subsequently-discovered virus thought to cause hepatitis was named Hepatitis G, though its role in hepatitis has not been confirmed and it is now considered synonymous with GB virus C. It is an "orphan virus" with no causal links to any human disease.
**Structure specific recognition protein 1** Structure specific recognition protein 1: FACT complex subunit SSRP1, also known as structure specific recognition protein 1, is a protein that in humans is encoded by the SSRP1 gene. Function: The protein encoded by this gene is a subunit of a heterodimer that, along with SUPT16H, forms the chromatin transcriptional elongation factor FACT. FACT interacts specifically with histones H2A/H2B to effect nucleosome disassembly and transcription elongation. The binding of FACT to cisplatin-damaged DNA may be crucial to the anticancer mechanism of cisplatin. The encoded protein contains a high mobility group box which most likely constitutes the structure recognition element for cisplatin-modified DNA. This protein also functions as a co-activator of the transcriptional activator p63. Interactions: Structure specific recognition protein 1 has been shown to interact with NEK9. SSRP1 further interacts with the transcriptional activator p63. SSRP1 enhances the activity of full-length p63, but it has no effect on the N-terminus-deleted p63 (DeltaN-p63) variant.
**OGRE** OGRE: Object-Oriented Graphics Rendering Engine (OGRE) is a scene-oriented, real-time, open-source, 3D rendering engine. Ogre has been ported to Windows, macOS, Linux, PocketPC, Xbox, and PS3. Since 2019, Ogre consists of two forks developed separately, namely Ogre (also called Ogre1), which is based on the original 1.x codebase, and Ogre Next (also called Ogre2), which is based on the 2.x development efforts. Games and applications:
- Gazebo simulator and Ignition Gazebo
- Hob
- OpenMW (until v0.37.0)
- Rebel Galaxy
- Rebel Galaxy Outlaw
- Rigs of Rods
- Roblox (2009–2014)
- Running with Rifles
- Scrap Mechanic (until 2016)
- Shadows: Heretic Kingdoms
- TROUBLESHOOTER: Abandoned Children
- Torchlight & Torchlight II
- Walaber's Trampoline
- Zombie Driver
- Kenshi
**Submarine branching unit** Submarine branching unit: A submarine branching unit is a piece of equipment used in submarine telecommunications cable systems to allow the cable to split to serve more than one destination. For example, one branch might head for a cable landing point while others continue. There are several methods by which the split can be effected, depending in part on the type of cable system.
Purely electrical systems (now almost obsolete) can be split by either:
- Physically separating the signal cables so some go in one direction and some in another, which requires no additional power.
- Using an add-drop multiplexer to direct the signals down one path or the other. The electrical equipment that acts as the add-drop multiplexer will need powering.
Optical fibre cable systems can be split by:
- Physically separating the signal-carrying fibres so some go in one direction and some in another, which requires no additional power.
- Converting the optically carried signals to electrical signals, using an add-drop multiplexer to divide and recombine the signals on the desired paths, then reconverting back to optically carried signals. This signal conversion and multiplexing equipment will require power.
- Using a reconfigurable optical add-drop multiplexer to direct optical carrier frequencies down desired paths. The power requirements of optical multiplexing in this manner will be lower than those of the previous method.
In both types of cable system, more than one technique can be used simultaneously. The conventional symbol used for a submarine branching unit in maps of cable routes is a small equilateral triangle with (usually) one vertex pointing towards the top of the map.
**Lewis (lifting appliance)** Lewis (lifting appliance): A lewis (sometimes called a lewisson) is one of a category of lifting devices used by stonemasons to lift large stones into place with a crane, chain block, or winch. It is inserted into a specially prepared hole, or seating, in the top of a stone, preferably above its centre of mass. It works by applying principles of the lever and utilises the weight of the stone to act on the long lever-arms, which in turn results in a very high reaction force and friction where the short lever-arms make contact with the stone inside the hole, thereby preventing slipping. Etymology: The name lewis may come from the Latin levo, -avi, -atum, meaning to levitate or lift, but the Oxford English Dictionary Online states, "the formation and the phonology are not easily explained on this hypothesis", preferring "origin obscure", and speculating that the term may derive from a personal name. The Romans used the lewis. The specially shaped hole that fits the device is known as a lewis hole. Lewis holes in the uppermost masonry coursings are neatly repaired with matching indented plugs after the stone has been set in place. Use: A lewis is most useful when it is not possible to lift the stone with chains or slings, because of either the location or shape of the stone, or delicate projections. Examples include the closing stone in a string course, cylindrical column drums, decorated column capitals, and coping stones in a pediment. Heavy ashlar stones are also bedded using a lewis. Use: The lewis is liable to slip out of the seating if some of the weight of the stone is subtracted from the appliance, such as when the stone bumps on the scaffolding on its way up to its final location. For this reason, a safety sling should always be used together with the lewis until the stone is reasonably close to its final position. Lifting the stone a small distance from the ground before hoisting is the best way to test a lewis. Any sign of looseness or damage should be corrected by adjusting the lewis hole or packing the lewis with metal shims. Use: To bed a stone using a lewis, the stone is placed on dunnage laid flat with enough clearance for a mortar bed to be placed beneath it. The safety straps are removed, the stone is lifted using the lewis alone, and the dunnage removed with fingers clear. The stone is then lowered onto the mortar bed, and positioned with sharp taps from a rubber mallet. Types of lewis: There are a number of different types of lewis used in the stonemasonry trade: Chain-linked lewis A chain-linked lewis or chain lewis is made from two curved steel legs, linked by three steel rings. The legs fit into a seating cut in the top of the stone, above the centre of mass. When the tops of the curved legs are pulled together by the rings, the bottom portions are forced into the lower part of the seating, thereby providing enough friction to lift the stone. Types of lewis: Split-pin lewis The split-pin lewis is similar to the chain-linked lewis in that it uses a scissor-like action to produce friction against the inside of the lewis hole. The two legs, semicircular in section, lie side-by-side, and fit inside a hole drilled in the stone. This type of lewis seating is the simplest to prepare, requiring a single drilled hole. Types of lewis: Two-pinned lewis A two-pinned lewis consists of two pins, linked by a short chain. The pins are inserted into opposing holes that are drilled into the top of the stone at about 15° from vertical.
It operates by gripping the stone (like two fingers lifting a tenpin bowling ball) as the weight of the stone is taken up by a crane or winch. The advantage of using this type of lewis is that it is simple to prepare: two angled drill holes are all that is necessary. Like other types of lewis, it is susceptible to pulling out as the stone is lifted. It should always be tested before hoisting, and used in conjunction with safety slings. Types of lewis: Three-legged lewis A three-legged lewis, also known as a dovetailed lewis, St Peter's keys, or a Wilson bolt, fits into a dovetailed seating in the top of a building stone. It is made from three rectangular-section steel legs, 13 mm (0.51 in) thick, held together with a shackle, allowing connection to a lifting hook. The middle leg is square throughout its length, while the outer legs are thinner at the top, flaring towards the bottom. Held together, the three legs thus form a dovetail shape. The lewis hole seating is undercut (similar to a chain-linked lewis hole) to match its profile. Types of lewis: The first outer leg is inserted into the lewis hole, followed by the second outer leg. The inner (parallel) leg is inserted last, pushing the outer legs into contact with the inside of the lewis hole. The shackle is unbolted, placed over the legs, and the bolt fastened through both the shackle eyes and the eye in the top of each leg. This type of lewis is the safest to use because it relies on its dovetailed shape for security instead of friction alone, but the seating is time-consuming to prepare. Types of lewis: Once assembled, their resemblance to a bunch of keys gave rise to the alternative name "St Peter's keys". This has frequently been represented allegorically, drawing the name of "St. Peter" as "the Rock on which I shall found my Church" into an allegory between the fabric of a church building and the community of the church itself. Some illustrations of St Peter even show him carrying a bunch of keys, which appear to have no wards. These are not keys in the lock-making sense, but in this sense of stonemasonry. Types of lewis: External lewis The external lewis, kerb lifter or slab lifter is a type of lifting device used in the stonemasonry trade since Medieval times. The external lewis was originally shaped like a pair of scissor-tongs, and swung from a treadwheel crane. Types of lewis: This type of tong device has been known as dogs and the holes in the stone as dog holes for many centuries. Many old bridges and walls in the UK still have dog holes that reveal how the stones were lifted, particularly onto bridge parapets. The external lewis has been modified to handle kerbstones and large slabs of polished stone in contemporary stone yards. Types of lewis: A manual kerb lifter is a large, adjustable pair of tongs, made with a pair of handles so that two men can manoeuvre heavy blocks of stone into position. A mechanical kerb lifter can also be made to fit mechanical lifters like forklifts or crane-trucks so that larger stones can be placed. In stone yards, a slab lifter is hung from a shed gantry or forklift to transport slabs of stone between storage racks and stone processing machines. It consists of two hinged, weighted friction pads that close astride the top of a slab, and are pulled tightly together by the weight of the slab itself. Types of lewis: The slab lifter uses two safety devices. Safety chains and a support bracket allow safe lifting of large slabs.
This lifting appliance also has a safety locking device that is engaged when the gripping pads are activated by the weight of the stone. This prevents any jerking movement from releasing the stone. It is easily disengaged once the slab is secured at its destination.
**Bent press** Bent press: A bent press is a type of weight training exercise wherein a weight is brought from shoulder-level to overhead one-handed using the muscles of the back, legs, and arm. A very large amount of weight can be lifted this way, compared to other types of one-hand press. It has been said that more weight can be lifted with one hand in this manner than in the typical two-handed overhead barbell press. It was a staple of the old-time strongmen and strongwomen such as Eugen Sandow, Arthur Saxon, and Louis Cyr, but is no longer popular. Like any exercise that is attempted without proper progression and full understanding, it poses safety concerns due to the thoracic rotation and core strength required. However, proponents of the exercise argue that, since it uses the leverage of the body in order to lift the weight, if progressed to and performed correctly, it is a safe exercise. Despite its name, the arm does not press the weight aloft. Method: To do the bent press, one would begin by lifting the weight to the shoulder (usually a barbell, but it could be done with a kettlebell or dumbbell), either by a one- or two-handed clean, or by lifting one end and "rocking" it onto the shoulder. If done with the right hand (the reverse is done for the left hand), the right leg would be straight and directly underneath the weight, with the left leg bent at a slight angle. The lifter would then bend to the left, holding the weight in the same position. The bent position, the origin of the name "bent press", allows the arm to hold the weight in position without dropping down because of the body's leverage, creating an imaginary line between the bell and the floor that travels through the right arm and right leg. The lifter continues to bend to the left until the arm is fully extended. The weight is not pressed, but held aloft while bending "underneath it". To complete the lift, after the arm is fully extended, the lifter does a slight corkscrew to get "underneath the weight" in a half or full squat position, again without pressing the weight, and then, once underneath the weight with the arm locked out overhead holding the weight, the lifter stands erect, still holding the weight overhead. The weight can either be dropped or lowered in military press fashion after the lift is complete. Method: "A key element of this lift is balance. The lifter should stare at the weight once shouldered and while the arm moves to a locked position overhead. In reality, the lifter bends his body and shoulder away from the weight, bending the opposite leg to help lower the shoulder away from the weight. The whole arm that holds the weight sort of rests on the lifter's back on that side. The opposite arm is held straight out for balance as well. Although most of the lockout is achieved by bending away from the weight, some pressing of the arm is also employed. The only real danger I ever found in this lift was dropping it on things if balance was lost (once on my mother's suitcase). A lifter can easily move away from the weight if it falls. In 1963, as a 16-year-old, I could do 165 weighing 160 and in 1972, I did 200 weighing 198. When I was in my 50s, upon doing this lift again, I discovered extreme shoulder flexibility is required and could only do 100 lbs × 10 reps. Without good shoulder flexibility, a tear could occur. Dumbbells are harder to control than a long bar of the same weight as the longer bar will turn or rotate much more slowly while being moved."
- Dale Rhoades, owner of the Des Moines Strength Institute
Records: The world record in the bent press is 371 pounds (168 kg) by Arthur Saxon, but there were unofficial reports of him bent pressing 409.5 pounds (185.7 kg).
**Protein asunder homolog** Protein asunder homolog: Protein asunder homolog (Asun) also known as cell cycle regulator Mat89Bb homolog (Mat89Bb) is a protein that in humans is encoded by the Asun gene.
**Hellenic Organization for Standardization** Hellenic Organization for Standardization: The Hellenic Organization for Standardization (Greek: Ελληνικός Οργανισμός Τυποποίησης, Ellīnikós Organismós Typopoíīsīs; abbreviated ΕΛΟΤ in Greek and ELOT in English) is the national standards organization for the Hellenic Republic (Greece). It issues Greece's conformance marks and is responsible for various Greek standards, notably ELOT 743, Greece's official format for romanization of Modern Greek.
**Nonprocedural language** Nonprocedural language: NPL (for NonProcedural Language) was a relational database language developed by T.D. Truitt et al. in 1980 for Apple II and MS-DOS. In general, a non-procedural language (also called a declarative language) requires the programmer to specify what the program should do, rather than (as with a procedural language) providing the sequential steps indicating how the program should perform its task(s).
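As an illustration of the declarative/procedural distinction (not of NPL itself, whose syntax is not shown in this article), compare a declarative SQL query with an equivalent procedural loop, here using Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 30.0), ("bob", 15.0), ("alice", 20.0)])

# Declarative: state WHAT is wanted; the engine decides how to compute it.
total_sql = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE customer = 'alice'").fetchone()[0]

# Procedural: spell out HOW to compute it, step by step.
total_loop = 0.0
for customer, amount in conn.execute("SELECT customer, amount FROM orders"):
    if customer == "alice":
        total_loop += amount

assert total_sql == total_loop == 50.0
```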
**Benzyl-2-methyl-hydroxybutyrate dehydrogenase** Benzyl-2-methyl-hydroxybutyrate dehydrogenase: In enzymology, a benzyl-2-methyl-hydroxybutyrate dehydrogenase (EC 1.1.1.217) is an enzyme that catalyzes the chemical reaction benzyl (2R,3S)-2-methyl-3-hydroxybutanoate + NADP+ ⇌ benzyl 2-methyl-3-oxobutanoate + NADPH + H+Thus, the two substrates of this enzyme are benzyl (2R,3S)-2-methyl-3-hydroxybutanoate and NADP+, whereas its 3 products are benzyl 2-methyl-3-oxobutanoate, NADPH, and H+. This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is benzyl-(2R,3S)-2-methyl-3-hydroxybutanoate:NADP+ 3-oxidoreductase. This enzyme is also called benzyl 2-methyl-3-hydroxybutyrate dehydrogenase.
**Generative topographic map** Generative topographic map: Generative topographic map (GTM) is a machine learning method that is a probabilistic counterpart of the self-organizing map (SOM), is provably convergent and does not require a shrinking neighborhood or a decreasing step size. It is a generative model: the data is assumed to arise by first probabilistically picking a point in a low-dimensional space, mapping the point to the observed high-dimensional input space (via a smooth function), then adding noise in that space. The parameters of the low-dimensional probability distribution, the smooth map and the noise are all learned from the training data using the expectation-maximization (EM) algorithm. GTM was introduced in 1996 in a paper by Christopher Bishop, Markus Svensen, and Christopher K. I. Williams. Details of the algorithm: The approach is strongly related to density networks which use importance sampling and a multi-layer perceptron to form a non-linear latent variable model. In the GTM the latent space is a discrete grid of points which is assumed to be non-linearly projected into data space. A Gaussian noise assumption is then made in data space so that the model becomes a constrained mixture of Gaussians. Then the model's likelihood can be maximized by EM. Details of the algorithm: In theory, an arbitrary nonlinear parametric deformation could be used. The optimal parameters could be found by gradient descent, etc. Details of the algorithm: The suggested approach to the nonlinear mapping is to use a radial basis function network (RBF) to create a nonlinear mapping between the latent space and the data space. The nodes of the RBF network then form a feature space and the nonlinear mapping can then be taken as a linear transform of this feature space. This approach has the advantage over the suggested density network approach that it can be optimised analytically. Uses: In data analysis, GTMs are like a nonlinear version of principal components analysis, which allows high-dimensional data to be modelled as resulting from Gaussian noise added to sources in lower-dimensional latent space. For example, to locate stocks in plottable 2D space based on their high-dimensional time-series shapes. Other applications may want to have fewer sources than data points, for example mixture models. Uses: In generative deformational modelling, the latent and data spaces have the same dimensions, for example, 2D images or 1D audio sound waves. Extra 'empty' dimensions are added to the source (known as the 'template' in this form of modelling), for example locating the 1D sound wave in 2D space. Further nonlinear dimensions are then added, produced by combining the original dimensions. The enlarged latent space is then projected back into the 1D data space. The probability of a given projection is, as before, given by the product of the likelihood of the data under the Gaussian noise model with the prior on the deformation parameter. Unlike conventional spring-based deformation modelling, this has the advantage of being analytically optimizable. The disadvantage is that it is a 'data-mining' approach, i.e. the shape of the deformation prior is unlikely to be meaningful as an explanation of the possible deformations, as it is based on a very high-dimensional, artificially and arbitrarily constructed nonlinear latent space. For this reason the prior is learned from data rather than created by a human expert, as is possible for spring-based models.
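The following is a compact numerical sketch of the fitting loop described above (discrete latent grid, RBF mapping, EM updates for the linear weights and the noise precision), written in Python/NumPy under simplifying assumptions: random initialisation instead of the customary PCA-based one, a fixed RBF width, and a fixed number of EM iterations. All parameter names are illustrative.

```python
import numpy as np

def fit_gtm(X, k=10, m=4, sigma=1.0, alpha=1e-3, n_iter=30, seed=0):
    """Minimal GTM sketch: k*k latent grid, m*m RBF centres, EM for W, beta."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    # Latent sample points and RBF centres on regular grids in [-1, 1]^2.
    g = np.linspace(-1, 1, k); Z = np.array([(a, b) for a in g for b in g])
    c = np.linspace(-1, 1, m); C = np.array([(a, b) for a in c for b in c])
    # RBF design matrix with a bias column: Phi is (k*k, m*m + 1).
    d2 = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    Phi = np.hstack([np.exp(-d2 / (2 * sigma**2)), np.ones((len(Z), 1))])
    W = rng.normal(scale=0.1, size=(Phi.shape[1], D))  # random init (not PCA)
    beta = 1.0                                         # noise precision
    for _ in range(n_iter):
        Y = Phi @ W                                    # images of latent points
        dist = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(-1)   # (K, N)
        # E-step: responsibility of each latent point for each data point.
        L = -0.5 * beta * dist
        L -= L.max(axis=0, keepdims=True)              # numerical stability
        R = np.exp(L); R /= R.sum(axis=0, keepdims=True)
        # M-step: weighted, regularised least squares for W, then beta.
        G = np.diag(R.sum(axis=1))
        A = Phi.T @ G @ Phi + (alpha / beta) * np.eye(Phi.shape[1])
        W = np.linalg.solve(A, Phi.T @ R @ X)
        beta = (N * D) / (R * ((X[None] - (Phi @ W)[:, None]) ** 2).sum(-1)).sum()
    # Posterior-mean latent coordinates give a 2-D projection of the data.
    return (R.T @ Z), W, beta

X = np.random.default_rng(1).normal(size=(200, 5))
coords, W, beta = fit_gtm(X)
print(coords.shape)  # (200, 2)
```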
Comparison with Kohonen's self-organizing maps: While nodes in the self-organizing map (SOM) can wander around at will, GTM nodes are constrained by the allowable transformations and their probabilities. If the deformations are well-behaved, the topology of the latent space is preserved. The SOM was created as a biological model of neurons and is a heuristic algorithm. By contrast, the GTM has nothing to do with neuroscience or cognition and is a probabilistically principled model. Thus, it has a number of advantages over SOM, namely:
- it explicitly formulates a density model over the data;
- it uses a cost function that quantifies how well the map is trained;
- it uses a sound optimization procedure (the EM algorithm).
GTM was introduced by Bishop, Svensen and Williams in their Technical Report in 1997 (Technical Report NCRG/96/015, Aston University, UK), published later in Neural Computation. It was also described in the PhD thesis of Markus Svensen (Aston, 1998).
**Stability (probability)** Stability (probability): In probability theory, the stability of a random variable is the property that a linear combination of two independent copies of the variable has the same distribution, up to location and scale parameters. The distributions of random variables having this property are said to be "stable distributions". Results available in probability theory show that all possible distributions having this property are members of a four-parameter family of distributions. The article on the stable distribution describes this family together with some of the properties of these distributions. Stability (probability): The importance in probability theory of "stability" and of the stable family of probability distributions is that they are "attractors" for properly normed sums of independent and identically distributed random variables. Important special cases of stable distributions are the normal distribution, the Cauchy distribution and the Lévy distribution. For details see stable distribution. Definition: There are several basic definitions for what is meant by stability. Some are based on summations of random variables and others on properties of characteristic functions. Definition: Definition via distribution functions Feller makes the following basic definition. A random variable X is called stable (has a stable distribution) if, for n independent copies Xᵢ of X, there exist constants cₙ > 0 and dₙ such that X₁ + X₂ + … + Xₙ =ᵈ cₙX + dₙ, where this equality refers to equality of distributions. A conclusion drawn from this starting point is that the sequence of constants cₙ must be of the form cₙ = n^(1/α) for some 0 < α ≤ 2. Definition: A further conclusion is that it is enough for the above distributional identity to hold for n = 2 and n = 3 only. Stability in probability theory: There are a number of mathematical results that can be derived for distributions which have the stability property. That is, all possible families of distributions which have the property of being closed under convolution are being considered. It is convenient here to call these stable distributions, without meaning specifically the distribution described in the article named stable distribution, or to say that a distribution is stable if it is assumed that it has the stability property. The following results can be obtained for univariate distributions which are stable. Stability in probability theory: Stable distributions are always infinitely divisible. All stable distributions are absolutely continuous. All stable distributions are unimodal. Other types of stability: The above concept of stability is based on the idea of a class of distributions being closed under a given set of operations on random variables, where the operation is "summation" or "averaging". Other operations that have been considered include: geometric stability: here the operation is to take the sum of a random number of random variables, where the number has a geometric distribution. The counterpart of the stable distribution in this case is the geometric stable distribution Max-stability: here the operation is to take the maximum of a number of random variables. The counterpart of the stable distribution in this case is the generalized extreme value distribution, and the theory for this case is dealt with as extreme value theory. See also the stability postulate. A version of this case in which the minimum is taken instead of the maximum is available by a simple extension.
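As a quick numerical illustration of Feller's definition in the Gaussian case (α = 2, so c₂ = 2^(1/2) and d₂ = 0), the snippet below checks that the sum of two independent standard normal samples matches the distribution of 2^(1/2) times a single copy; the sample size and seed are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(2, 100_000))

# X1 + X2 should be distributed as c2*X + d2 with c2 = 2**0.5, d2 = 0,
# i.e. normal with mean 0 and standard deviation sqrt(2).
ks = stats.kstest(x1 + x2, "norm", args=(0, np.sqrt(2)))
print(ks.pvalue)  # a p-value that is not small is consistent with stability
```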
**Muon spin spectroscopy** Muon spin spectroscopy: Muon spin spectroscopy, also known as µSR, is an experimental technique based on the implantation of spin-polarized muons in matter and on the detection of the influence of the atomic, molecular or crystalline surroundings on their spin motion. The motion of the muon spin is due to the magnetic field experienced by the particle and may provide information on its local environment in a very similar way to other magnetic resonance techniques, such as electron spin resonance (ESR or EPR) and, more closely, nuclear magnetic resonance (NMR). Introduction: Muon spin spectroscopy is an atomic, molecular and condensed matter experimental technique that exploits nuclear detection methods. In analogy with the acronyms for the previously established spectroscopies NMR and ESR, muon spin spectroscopy is also known as µSR. The acronym stands for muon spin rotation, relaxation, or resonance, depending respectively on whether the muon spin motion is predominantly a rotation (more precisely a precession around a still magnetic field), a relaxation towards an equilibrium direction, or a more complex dynamic dictated by the addition of short radio frequency pulses. µSR does not require any radio-frequency technique to align the probing spin. Introduction: More generally speaking, muon spin spectroscopy includes any study of the interactions of the muon's magnetic moment with its surroundings when implanted into any kind of matter. Its two most notable features are its ability to study local environments, due to the short effective range of muon interactions with matter, and the characteristic time-window (10⁻¹³–10⁻⁵ s) of the dynamical processes in atomic, molecular and condensed media. The closest parallel to µSR is "pulsed NMR", in which one observes time-dependent transverse nuclear polarization or the so-called "free induction decay" of the nuclear polarization. However, a key difference is that in µSR one uses a specifically implanted spin (the muon's) and does not rely on internal nuclear spins. Introduction: Although particles are used as a probe, µSR is not a diffraction technique. A clear distinction between the µSR technique and those involving neutrons or X-rays is that scattering is not involved. Neutron diffraction techniques, for example, use the change in energy and/or momentum of a scattered neutron to deduce the sample properties. In contrast, the implanted muons are not diffracted but remain in a sample until they decay. Only a careful analysis of the decay product (i.e. a positron) provides information about the interaction between the implanted muon and its environment in the sample. Introduction: As with many of the other nuclear methods, µSR relies on discoveries and developments made in the field of particle physics. Following the discovery of the muon by Seth Neddermeyer and Carl D. Anderson in 1936, pioneering experiments on its properties were performed with cosmic rays. Indeed, with one muon hitting each square centimeter of the earth's surface every minute, muons constitute the most abundant component of cosmic rays arriving at ground level. However, µSR experiments require far higher muon fluxes, of the order of 10⁷ muons per second per square centimeter. Such fluxes can only be obtained in high-energy particle accelerators which have been developed during the last 50 years.
Muon production: The collision of an accelerated proton beam (typical energy 600 MeV) with the nuclei of a production target produces positive pions (π⁺) via the possible reactions: p + p → p + n + π⁺ and p + n → n + n + π⁺. From the subsequent weak decay of the pions (mean lifetime τ(π⁺) = 26.03 ns), positive muons (μ⁺) are formed via the two-body decay: π⁺ → μ⁺ + ν_μ. Muon production: Parity violation in the weak interactions implies that only left-handed neutrinos exist, with their spin antiparallel to their linear momentum (likewise, only right-handed anti-neutrinos are found in nature). Since the pion is spinless, both the neutrino and the μ⁺ are ejected with spin antiparallel to their momentum in the pion rest frame. This is the key to producing spin-polarised muon beams. According to the value of the pion momentum, different types of μ⁺ beams are available for µSR measurements. Muon production: Energy classes of muon beams Muon beams are classified into three types based on the energy of the muons being produced: high-energy, surface or "Arizona", and ultra-slow muon beams. Muon production: High-energy muon beams are formed by the pions escaping the production target at high energies. They are collected over a certain solid angle by quadrupole magnets and directed onto a decay section consisting of a long superconducting solenoid with a field of several tesla. If the pion momentum is not too high, a large fraction of the pions will have decayed before they reach the end of the solenoid. In the laboratory frame the polarization of a high-energy muon beam is limited to about 80% and its energy is of the order of ~40–50 MeV. Although such a high-energy beam requires the use of suitable moderators and samples with sufficient thickness, it guarantees a homogeneous implantation of the muons in the sample volume. Such beams are also used to study specimens inside containers, e.g. samples inside pressure cells. Such muon beams are available at PSI, TRIUMF, J-PARC and RIKEN-RAL. Muon production: The second type of muon beam is often called the surface or Arizona beam (recalling the pioneering work of Pifer et al. from the University of Arizona). In these beams, muons arise from pions decaying at rest inside but near the surface of the production target. Such muons are 100% polarized, ideally monochromatic, and have a very low momentum of 29.8 MeV/c (corresponding to a kinetic energy of 4.1 MeV). They have a range width in matter of the order of 180 mg/cm². The paramount advantage of this type of beam is the ability to use relatively thin samples. Beams of this type are available at PSI (Swiss Muon Source SµS), TRIUMF, J-PARC, ISIS Neutron and Muon Source and RIKEN-RAL. Muon production: Positive muon beams of even lower energy (ultra-slow muons with energy down to the eV–keV range) can be obtained by further reducing the energy of an Arizona beam by utilizing the energy-loss characteristics of large-band-gap solid moderators. This technique was pioneered by researchers at the TRIUMF cyclotron facility in Vancouver, B.C., Canada. It was christened with the acronym μSOL (muon separator on-line) and initially employed LiF as the moderating solid. The same 1986 paper also reported the observation of negative muonium ions (i.e., Mu⁻ or μ⁺e⁻e⁻) in vacuum. In 1987, the slow μ⁺ production rate was increased 100-fold using thin-film rare-gas solid moderators, producing a usable flux of low-energy positive muons. This production technique was subsequently adopted by PSI for their low-energy positive muon beam facility.
The tunable energy range of such muon beams corresponds to implantation depths in solids of less than a nanometer up to several hundred nanometers. Therefore, the study of magnetic properties as a function of the distance from the surface of the sample is possible. At the present time, PSI is the only facility where such a low-energy muon beam is available on a regular basis. Technical developments have also been conducted at RIKEN-RAL, but with a strongly reduced low-energy muon rate. J-PARC plans the development of a high-intensity low-energy muon beam. Muon production: Continuous vs. pulsed muon beams In addition to the above-mentioned classification based on energy, muon beams are also divided according to the time structure of the particle accelerator, i.e. continuous or pulsed. Muon production: For continuous muon sources no dominating time structure is present. By selecting an appropriate incoming muon rate, muons are implanted into the sample one-by-one. The main advantage is that the time resolution is solely determined by the detector construction and the read-out electronics. There are two main limitations for this type of source, however: (i) unrejected charged particles accidentally hitting the detectors produce non-negligible random background counts; this compromises measurements after a few muon lifetimes, when the random background exceeds the true decay events; and (ii) the requirement to detect muons one at a time sets a maximum event rate. The background problem can be reduced by the use of electrostatic deflectors to ensure that no muons enter the sample before the decay of the previous muon. PSI and TRIUMF host the two continuous muon sources available for µSR experiments. Muon production: At pulsed muon sources, protons hitting the production target are bunched into short, intense, and widely separated pulses that provide a similar time structure in the secondary muon beam. An advantage of pulsed muon sources is that the event rate is only limited by detector construction. Furthermore, detectors are active only after the incoming muon pulse, strongly reducing the accidental background counts. The virtual absence of background allows the extension of the time window for measurements up to about ten times the muon mean lifetime. The principal downside is that the width of the muon pulse limits the time resolution. ISIS Neutron and Muon Source and J-PARC are the two pulsed muon sources available for µSR experiments. Spectroscopic technique: Muon implantation The muons are implanted into the sample of interest where they lose energy very quickly. Fortunately, this deceleration process occurs in such a way that it does not jeopardize a μSR measurement. On the one hand, it is very fast (taking much less than 100 ps), far shorter than a typical μSR time window (up to 20 μs); on the other hand, all the processes involved during the deceleration are Coulombic (ionization of atoms, electron scattering, electron capture) in origin and do not interact with the muon spin, so that the muon is thermalized without any significant loss of polarization. Spectroscopic technique: The positive muons usually adopt interstitial sites of the crystallographic lattice, markedly distinguished by their electronic (charge) state. The spectroscopy of a muon chemically bound to an unpaired electron is remarkably different from that of all other muon states, which motivates the historical distinction between paramagnetic and diamagnetic states.
Note that many diamagnetic muon states really behave like paramagnetic centers, according to the standard definition of a paramagnet. For example, in most metallic samples, which are Pauli paramagnets, the muon's positive charge is collectively screened by a cloud of conduction electrons. Thus, in metals, the muon is not bound to a single electron, hence it is in the so-called diamagnetic state and behaves like a free muon. In insulators or semiconductors a collective screening cannot take place and the muon will usually pick up one electron and form a so-called muonium (Mu = μ⁺ + e⁻), which has similar size (Bohr radius), reduced mass, and ionization energy to the hydrogen atom. This is the prototype of the so-called paramagnetic state. Spectroscopic technique: Detection of muon polarization The decay of the positive muon into a positron and two neutrinos occurs via the weak interaction process after a mean lifetime of τ_μ = 2.197034(21) μs: μ⁺ → e⁺ + ν_e + ν̄_μ. Spectroscopic technique: Parity violation in the weak interaction leads in this more complicated case (three-body decay) to an anisotropic distribution of the positron emission with respect to the spin direction of the μ⁺ at the decay time. The positron emission probability is given by W(θ) dθ ∝ (1 + a cos θ) dθ, where θ is the angle between the positron trajectory and the μ⁺ spin, and a is an intrinsic asymmetry parameter determined by the weak decay mechanism. This anisotropic emission constitutes in fact the basis for the μSR technique. Spectroscopic technique: The average asymmetry A is measured over a statistical ensemble of implanted muons and it depends on further experimental parameters, such as the beam spin polarization P_μ, close to one, as already mentioned. Theoretically, A = 1/3 is obtained if all emitted positrons are detected with the same efficiency, irrespective of their energy. Practically, values of A ≈ 0.25 are routinely obtained. Spectroscopic technique: The muon spin motion may be measured over a time scale dictated by the muon decay, i.e. a few times τ_μ, roughly 10 µs. The asymmetry in the muon decay correlates the positron emission and the muon spin directions. The simplest example is when the spin direction of all muons remains constant in time after implantation (no motion). In this case the asymmetry shows up as an imbalance between the positron counts in two equivalent detectors placed in front of and behind the sample, along the beam axis. Each of them records an exponentially decaying rate as a function of the time t elapsed from implantation, according to N(t) ∝ exp(−t/τ_μ)(1 + αA), with α = ±1 for the detector looking towards or away from the spin direction, respectively. Considering that the huge muon spin polarization is completely outside thermal equilibrium, a dynamical relaxation towards the equilibrium unpolarized state typically shows up in the count rate, as an additional decay factor in front of the experimental asymmetry parameter, A. A magnetic field parallel to the initial muon spin direction probes the dynamical relaxation rate as a function of the additional muon Zeeman energy, without introducing additional coherent spin dynamics. This experimental arrangement is called Longitudinal Field (LF) μSR. Spectroscopic technique: A special case of LF μSR is Zero Field (ZF) μSR, when the external magnetic field is zero. This experimental condition is particularly important since it allows one to probe any internal quasi-static (i.e. static on the muon time-scale) magnetic field or field distribution at the muon site.
Internal quasi-static fields may appear spontaneously, not induced by the magnetic response of the sample to an external field. They are produced by disordered nuclear magnetic moments or, more importantly, by ordered electron magnetic moments and orbital currents. Spectroscopic technique: Another simple type of μSR experiment is one in which all implanted muon spins precess coherently around an external magnetic field of modulus B, perpendicular to the beam axis, causing the count imbalance between the same two detectors to oscillate at the corresponding Larmor frequency ω, according to N(t) ∝ exp(−t/τ_μ)(1 + αA cos ωt). Since the Larmor frequency is ω = γ_μB, with gyromagnetic ratio γ_μ = 851.616 Mrad s⁻¹ T⁻¹, the frequency spectrum obtained by means of this experimental arrangement provides a direct measure of the internal magnetic field intensity distribution. The distribution produces an additional decay factor of the experimental asymmetry A. This method is usually referred to as Transverse Field (TF) μSR. Spectroscopic technique: A more general case is when the initial muon spin direction (coinciding with the detector axis) forms an angle θ with the field direction. In this case the muon spin precession describes a cone which results in both a longitudinal component, cos²θ, and a transverse precessing component, sin²θ cos ωt, of the total asymmetry. ZF μSR experiments in the presence of a spontaneous internal field fall into this category as well. Applications: Muon spin rotation and relaxation are mostly performed with positive muons. They are well suited to the study of magnetic fields at the atomic scale inside matter, such as those produced by various kinds of magnetism and/or superconductivity encountered in compounds occurring in nature or artificially produced by modern material science. Applications: The London penetration depth is one of the most important parameters characterizing a superconductor because its inverse square provides a measure of the density n_s of Cooper pairs. The dependence of n_s on temperature and magnetic field directly indicates the symmetry of the superconducting gap. Muon spin spectroscopy provides a way to measure the penetration depth, and so has been used to study high-temperature cuprate superconductors since their discovery in 1986. Applications: Other important fields of application of µSR exploit the fact that positive muons capture electrons to form muonium atoms which behave chemically as light isotopes of the hydrogen atom. This allows investigation of the largest known kinetic isotope effect in some of the simplest types of chemical reactions, as well as the early stages of formation of radicals in organic chemicals. Muonium is also studied as an analogue of hydrogen in semiconductors, where hydrogen is one of the most ubiquitous impurities. Facilities: µSR requires a particle accelerator for the production of a muon beam. This is presently achieved at a few large-scale facilities in the world: the CMMS continuous source at TRIUMF in Vancouver, Canada; the SµS continuous source at the Paul Scherrer Institut (PSI) in Villigen, Switzerland; the ISIS Neutron and Muon Source and RIKEN-RAL pulsed sources at the Rutherford Appleton Laboratory in Chilton, United Kingdom; and the J-PARC facility in Tokai, Japan, where a new pulsed source is being built to replace that at KEK in Tsukuba, Japan. Facilities: Muon beams are also available at the Laboratory of Nuclear Problems, Joint Institute for Nuclear Research (JINR) in Dubna, Russia.
The International Society for µSR Spectroscopy (ISMS) exists to promote the worldwide advancement of µSR. Membership in the society is open free of charge to all individuals in academia, government laboratories and industry who have an interest in the society's goals.
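As a worked example of the Larmor relation ω = γ_μB quoted in the transverse-field section above, the precession frequency in a typical applied field of 10 mT can be computed directly; the snippet below simply restates that formula in Python, with the field value chosen arbitrarily.

```python
import math

gamma_mu = 851.616e6   # muon gyromagnetic ratio, rad s^-1 T^-1 (from the text)
B = 0.010              # applied transverse field, tesla (10 mT, illustrative)

omega = gamma_mu * B          # angular Larmor frequency, rad/s
f = omega / (2 * math.pi)     # precession frequency, Hz
print(f"{f / 1e6:.3f} MHz")   # ~1.355 MHz, i.e. about 135.5 MHz per tesla
```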
**Gradient discretisation method** Gradient discretisation method: In numerical mathematics, the gradient discretisation method (GDM) is a framework which contains classical and recent numerical schemes for diffusion problems of various kinds: linear or non-linear, steady-state or time-dependent. The schemes may be conforming or non-conforming, and may rely on very general polygonal or polyhedral meshes (or may even be meshless). Gradient discretisation method: Some core properties are required to prove the convergence of a GDM. These core properties enable complete proofs of convergence of the GDM for elliptic and parabolic problems, linear or non-linear. For linear problems, stationary or transient, error estimates can be established based on three indicators specific to the GDM (the quantities C_D, S_D and W_D, see below). For non-linear problems, the proofs are based on compactness techniques and do not require any non-physical strong regularity assumption on the solution or the model data. Non-linear models for which such convergence proofs of the GDM have been carried out comprise: the Stefan problem which is modelling a melting material, two-phase flows in porous media, the Richards equation of underground water flow, the fully non-linear Leray—Lions equations. Any scheme entering the GDM framework is then known to converge on all these problems. This applies in particular to conforming Finite Elements, Mixed Finite Elements, nonconforming Finite Elements, and, in the case of more recent schemes, the Discontinuous Galerkin method, the Hybrid Mixed Mimetic method, the Nodal Mimetic Finite Difference method, some Discrete Duality Finite Volume schemes, and some Multi-Point Flux Approximation schemes. The example of a linear diffusion problem: Consider Poisson's equation in a bounded open domain Ω ⊂ R^d, with homogeneous Dirichlet boundary condition:

−Δū = f in Ω, ū = 0 on ∂Ω,   (1)

where f ∈ L²(Ω). The usual sense of weak solution to this model is:

find ū ∈ H₀¹(Ω) such that, for all v̄ ∈ H₀¹(Ω), ∫_Ω ∇ū(x)·∇v̄(x) dx = ∫_Ω f(x)v̄(x) dx.   (2)

In a nutshell, the GDM for such a model consists in selecting a finite-dimensional space and two reconstruction operators (one for the functions, one for the gradients) and substituting these discrete elements in lieu of the continuous elements in (2). More precisely, the GDM starts by defining a Gradient Discretization (GD), which is a triplet D = (X_{D,0}, Π_D, ∇_D), where: the set of discrete unknowns X_{D,0} is a finite-dimensional real vector space; the function reconstruction Π_D : X_{D,0} → L²(Ω) is a linear mapping that reconstructs, from an element of X_{D,0}, a function over Ω; the gradient reconstruction ∇_D : X_{D,0} → L²(Ω)^d is a linear mapping which reconstructs, from an element of X_{D,0}, a "gradient" (vector-valued function) over Ω. This gradient reconstruction must be chosen such that ‖∇_D ·‖_{L²(Ω)^d} is a norm on X_{D,0}. The related gradient scheme for the approximation of (2) is given by:

find u ∈ X_{D,0} such that, for all v ∈ X_{D,0}, ∫_Ω ∇_D u(x)·∇_D v(x) dx = ∫_Ω f(x) Π_D v(x) dx.   (3)

The GDM is then in this case a nonconforming method for the approximation of (2), which includes the nonconforming finite element method. Note that the reciprocal is not true, in the sense that the GDM framework includes methods such that the function ∇_D u cannot be computed from the function Π_D u. The following error estimates, inspired by G. Strang's second lemma, hold:

‖∇ū − ∇_D u‖_{L²(Ω)^d} ≤ W_D(∇ū) + 2 S_D(ū),   (4)
‖ū − Π_D u‖_{L²(Ω)} ≤ C_D W_D(∇ū) + (C_D + 1) S_D(ū),   (5)

defining:

C_D = max { ‖Π_D v‖_{L²(Ω)} / ‖∇_D v‖_{L²(Ω)^d} : v ∈ X_{D,0}, v ≠ 0 },   (6)

which measures the coercivity (discrete Poincaré constant);

S_D(φ) = min_{v ∈ X_{D,0}} ( ‖Π_D v − φ‖_{L²(Ω)} + ‖∇_D v − ∇φ‖_{L²(Ω)^d} ), for φ ∈ H₀¹(Ω),   (7)

which measures the interpolation error;

W_D(φ) = max_{v ∈ X_{D,0}, v ≠ 0} (1 / ‖∇_D v‖_{L²(Ω)^d}) | ∫_Ω ( ∇_D v(x)·φ(x) + Π_D v(x) div φ(x) ) dx |, for φ ∈ H_div(Ω),   (8)

which measures the defect of conformity.
The example of a linear diffusion problem: Upper and lower bounds of the approximation error can then be derived in terms of these three quantities alone, showing that C_D, S_D and W_D fully control the accuracy of the scheme. The core properties which are necessary and sufficient for the convergence of the method are, for a family of GDs, the coercivity, the GD-consistency and the limit-conformity properties, as defined in the next section. More generally, these three core properties are sufficient to prove the convergence of the GDM for linear problems and for some nonlinear problems like the p-Laplace problem. For nonlinear problems such as nonlinear diffusion, degenerate parabolic problems, and so on, we add in the next section two other core properties which may be required. The core properties allowing for the convergence of a GDM: Let (D_m)_{m∈ℕ} be a family of GDs, defined as above (generally associated with a sequence of regular meshes whose size tends to 0).
- Coercivity: the sequence (C_{D_m})_{m∈ℕ} (defined by (6)) remains bounded.
- GD-consistency: for all φ ∈ H₀¹(Ω), lim_{m→∞} S_{D_m}(φ) = 0 (S_D being defined by (7)).
- Limit-conformity: for all φ ∈ H_div(Ω), lim_{m→∞} W_{D_m}(φ) = 0 (W_D being defined by (8)). This property implies the coercivity property.
- Compactness (needed for some nonlinear problems): for any sequence (u_m)_{m∈ℕ} such that u_m ∈ X_{D_m,0} for all m ∈ ℕ and (‖∇_{D_m} u_m‖_{L²(Ω)^d})_{m∈ℕ} is bounded, the sequence (Π_{D_m} u_m)_{m∈ℕ} is relatively compact in L²(Ω) (this property implies the coercivity property).
- Piecewise constant reconstruction (needed for some nonlinear problems): let D = (X_{D,0}, Π_D, ∇_D) be a gradient discretisation as defined above. The operator Π_D is a piecewise constant reconstruction if there exist a basis (e_i)_{i∈B} of X_{D,0} and a family of disjoint subsets (Ω_i)_{i∈B} of Ω such that Π_D u = Σ_{i∈B} u_i χ_{Ω_i} for all u = Σ_{i∈B} u_i e_i ∈ X_{D,0}, where χ_{Ω_i} is the characteristic function of Ω_i.
Some non-linear problems with complete convergence proofs of the GDM: We review some problems for which the GDM can be proved to converge when the above core properties are satisfied.
- Nonlinear stationary diffusion problems, −div(Λ(ū)∇ū) = f: in this case, the GDM converges under the coercivity, GD-consistency, limit-conformity and compactness properties.
- p-Laplace problem for p > 1, −div(|∇ū|^(p−2)∇ū) = f: in this case, the core properties must be written replacing L²(Ω) by L^p(Ω), H₀¹(Ω) by W₀^{1,p}(Ω) and H_div(Ω) by W_div^{p′}(Ω), with 1/p + 1/p′ = 1, and the GDM converges under only the coercivity, GD-consistency and limit-conformity properties.
- Linear and nonlinear heat equation, ∂_t ū − div(Λ(ū)∇ū) = f: in this case, the GDM converges under the coercivity, GD-consistency (adapted to space-time problems), limit-conformity and compactness (for the nonlinear case) properties.
- Degenerate parabolic problems: assume that β and ζ are nondecreasing Lipschitz continuous functions and consider ∂_t β(ū) − Δζ(ū) = f. For this problem, the piecewise constant reconstruction property is needed, in addition to the coercivity, GD-consistency (adapted to space-time problems), limit-conformity and compactness properties.
Review of some numerical methods which are GDM: All the methods below satisfy the first four core properties of GDM (coercivity, GD-consistency, limit-conformity, compactness), and in some cases the fifth one (piecewise constant reconstruction). Galerkin methods and conforming finite element methods: Let V_h ⊂ H₀¹(Ω) be spanned by the finite basis (ψ_i)_{i∈I}. The Galerkin method in V_h is identical to the GDM where one defines X_{D,0} = {u = (u_i)_{i∈I}} = ℝ^I, Π_D u = Σ_{i∈I} u_i ψ_i and ∇_D u = Σ_{i∈I} u_i ∇ψ_i. In this case, C_D is the constant involved in the continuous Poincaré inequality and, for all φ ∈ H_div(Ω), W_D(φ) = 0 (W_D being defined by (8)). Then (4) and (5) are implied by Céa's lemma. The "mass-lumped" P1 finite element case enters the framework of the GDM, replacing Π_D u by Π̃_D u = Σ_{i∈I} u_i χ_{Ω_i}, where Ω_i is a dual cell centred on the vertex indexed by i ∈ I. Using mass lumping allows one to get the piecewise constant reconstruction property. Review of some numerical methods which are GDM: Nonconforming finite element: On a mesh which is a conforming set of simplices of ℝ^d, the nonconforming P1 finite elements are defined by the basis (ψ_i)_{i∈I} of the functions which are affine in any K ∈ T, and whose value at the centre of gravity of one given face of the mesh is 1 and 0 at all the others (these finite elements are used in [Crouzeix et al] for the approximation of the Stokes and Navier–Stokes equations). Then the method enters the GDM framework with the same definition as in the case of the Galerkin method, except for the fact that ∇ψ_i must be understood as the "broken gradient" of ψ_i, in the sense that it is the piecewise constant function equal in each simplex to the gradient of the affine function in the simplex. Review of some numerical methods which are GDM: Mixed finite element: The mixed finite element method consists in defining two discrete spaces, one for the approximation of ∇ū and another one for ū. It suffices to use the discrete relations between these approximations to define a GDM. Using the low-degree Raviart–Thomas basis functions allows one to get the piecewise constant reconstruction property. Discontinuous Galerkin method: The discontinuous Galerkin method consists in approximating problems by a piecewise polynomial function, without requirements on the jumps from one element to the other. It is plugged into the GDM framework by including in the discrete gradient a jump term, acting as the regularization of the gradient in the distribution sense. Mimetic finite difference method and nodal mimetic finite difference method: This family of methods was introduced by [Brezzi et al] and completed in [Lipnikov et al]. It allows the approximation of elliptic problems using a large class of polyhedral meshes. The proof that it enters the GDM framework is done in [Droniou et al].
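As a concrete instance of the framework, the sketch below assembles the simplest conforming P1 finite element method for −u″ = f on (0, 1), which the text above identifies as a GDM (the unknowns are nodal values, Π_D the piecewise-linear reconstruction and ∇_D its piecewise-constant derivative). This is a minimal illustration, not the general polyhedral machinery; the mesh size and manufactured right-hand side are arbitrary choices.

```python
import numpy as np

def p1_gradient_scheme(f, n=50):
    """P1 FEM for -u'' = f on (0,1) with u(0)=u(1)=0, seen as a gradient
    scheme: X_{D,0} holds nodal values, Pi_D is the piecewise-linear
    reconstruction, and grad_D its piecewise-constant derivative."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Stiffness matrix (grad_D e_i, grad_D e_j): tridiagonal (-1, 2, -1)/h.
    A = (np.diag(2 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h
    # Load vector (f, Pi_D e_i), here approximated by mass lumping: h*f(x_i).
    b = h * f(x[1:-1])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, b)   # interior nodal unknowns
    return x, u

# Manufactured solution: u(x) = sin(pi x) solves -u'' = pi^2 sin(pi x).
x, u = p1_gradient_scheme(lambda s: np.pi**2 * np.sin(np.pi * s))
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small discretisation error
```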
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**X-Moto** X-Moto: X-Moto is a free and open source 2D motocross platform game developed for Linux, FreeBSD, Mac OS X and Microsoft Windows, in which physics plays an all-important role in the gameplay. The basic gameplay clones that of Elasto Mania, but the simulated physics are subtly different. Gameplay: In X-Moto, a player selects a level and tries to collect the strawberries. Strawberries are required to complete a level, along with touching a flower. Obstacles to this goal are challenging terrain features and "wrecker" objects which should not be touched; in most levels there are no moving objects (only scripted or physics levels may have them). Also, these can be changed in some levels. The driver is not harmed directly by falling, only by hitting his head on a rock or hitting any part of his body or the bike on a wrecker object. If this happens the level is lost (as of version 0.5.3, levels can feature checkpoints). It is possible to save a replay, and to show a previous replay ("ghost driver") in parallel to gameplay. Gameplay: The game is extensible with over 2500 user-created custom levels that can be automatically downloaded. These are created using Inkscape with the Inksmoto extension. Development: The project was started in 2005 on a sourceforge.net repository. The game is completely 2D, but utilizes 3D hardware acceleration (OpenGL) for faster rendering. An optional non-OpenGL, ultra-low-requirements vector wireframe render mode is available that should run on any legacy platform. Graphics are kept simple, and sound is sparse. The game features only engine sounds, level lost/won sounds, and a strawberry pickup sound, while the main menu features a single soundtrack. Levels can feature their own music. Development: The game uses the Open Dynamics Engine for physical simulation. Moving objects, variable gravity and other features can be provided by scripting the levels using the Lua programming language. As of version 0.5.0, integration with the Chipmunk physics engine enables levels with multi-body dynamics. Reception: X-Moto was selected in May 2008 and April 2015 as a "HotPick" by Linux Format. Thinkdigit in May 2009 ranked X-Moto among the "Most addictive Linux games". Reception: The game was a quite popular freeware game: between 2005 and May 2017 it was downloaded over 630,000 times via SourceForge.net alone. Across various other download portals, over 600,000 further downloads are aggregated: over 357,707 for the Windows version and 70,390 for the Mac version on Softonic, 67,471 downloads of the Windows version on Chip.de, 54,351 downloads of the Windows version on Computer Bild, 48,428 downloads of the Linux version on Softpedia, and 8,134 downloads on netzwelt. The game was included in Heinz Heise's c't software collection 6/2009 of c't issue 24/2009.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**IRC poker** IRC poker: IRC poker was a form of poker played over the IRC chat protocol before the surge in popularity of online poker in the early 2000s. A computer program was used to deal and manage the games. Commands could be typed in directly with a standard IRC client, but point-and-click graphical clients were soon developed. The ability to message the dealer program directly before one's turn to act made games flow more quickly than face-to-face games. IRC poker: IRC poker was played with imaginary money, but attracted a devoted following of experts. World Series of Poker champion Chris Ferguson got his start playing IRC poker. IRC poker offered limit Texas hold 'em, limit Omaha hold 'em (Hi-Lo), no-limit Texas hold 'em, and tournaments. Each account was limited to "buying" 1000 chips per day, but there were no restrictions on creating new accounts, so some players created multiple accounts and "harvested" the chips in fake games. IRC poker: Players in no-limit automatically bought in for the full value of their account; the most successful accumulated over one million chips and joked about selling their accounts. Tournaments often started with the theoretical maximum of 23 players at one table and could be completed in less than an hour. A poker-playing program, r00lbot, was able to maintain a winning record, and provided amusing quotes as well.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Argument shift method** Argument shift method: In mathematics, the argument shift method is a method for constructing functions in involution with respect to Poisson–Lie brackets, introduced by Mishchenko and Fomenko (1978). They used it to prove that the Poisson algebra of a finite-dimensional semisimple Lie algebra contains a complete commuting set of polynomials.
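As a brief sketch of what the construction asserts, here is the standard formulation in LaTeX (the notation is chosen here for illustration; the precise hypotheses, e.g. on the choice of the shift element, are those of Mishchenko and Fomenko's paper):

```latex
% Let \mathfrak{g} be a finite-dimensional Lie algebra and \mathfrak{g}^{*} its dual,
% equipped with the Lie--Poisson bracket
\[
  \{f,g\}(x) \;=\; \bigl\langle x,\, [\,df(x),\, dg(x)\,] \bigr\rangle,
  \qquad x \in \mathfrak{g}^{*}.
\]
% Fix an element a \in \mathfrak{g}^{*} and shift the argument of the invariants of
% the coadjoint action: for an invariant f, set f_{\lambda}(x) = f(x + \lambda a).
% The key assertion is that all such shifted invariants are in involution:
\[
  \{\, f(x + \lambda a),\; g(x + \mu a) \,\} \;=\; 0
  \qquad \text{for all invariants } f,\, g
  \text{ and all } \lambda,\, \mu \in \mathbb{R},
\]
% so the coefficients of the expansion of f(x + \lambda a) in powers of \lambda
% form a Poisson-commuting family of polynomials on \mathfrak{g}^{*}.
```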
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Acetylcarnosine** Acetylcarnosine: N-Acetylcarnosine (NAC) (not to be confused with N-acetylcysteine) is a naturally occurring compound chemically related to the dipeptide carnosine. The NAC molecular structure is identical to that of carnosine with the exception that it carries an additional acetyl group. The acetylation makes NAC more resistant to degradation by carnosinase, an enzyme that breaks down carnosine into its constituent amino acids, beta-alanine and histidine. Actions: Carnosine and metabolic derivatives of carnosine, including NAC, are found in a variety of tissues but particularly muscle tissue. These compounds have varying degrees of activity as free radical scavengers. It has been suggested that NAC is particularly active against lipid peroxidation in the different parts of the lens in the eye. It is an ingredient in eye drops that are marketed as a dietary supplement (not a drug) and have been promoted for the prevention and treatment of cataracts. There is scant evidence on its safety, and no convincing evidence that the compound has any effect on ocular health. Research: Most of the clinical research on NAC has been conducted by Mark Babizhayev of the US-based company Innovative Vision Products (IVP), which markets NAC treatments. Research: During early experiments performed at the Moscow Helmholtz Research Institute for Eye Diseases, it was shown that NAC (at 1% concentration) was able to pass from the cornea to the aqueous humour after about 15 to 30 minutes. In a 2004 trial of 90 canine eyes with cataracts, NAC was reported to have performed better than placebo in positively affecting lens clarity. An early human study reported that NAC was effective in improving vision in cataract patients and reduced the appearance of cataracts. The Babizhayev group later published a placebo-controlled clinical trial of NAC in 76 human eyes with mild to advanced cataracts and reported similar positive results for NAC. However, a 2007 scientific review of the current literature discussed the limitations of the clinical trial, noting that the study had low statistical power, a high dropout rate and "insufficient baseline measurement to compare the effect of NAC", concluding that "a separate larger trial is needed to justify the benefit of long-term NAC therapy". Babizhayev and colleagues published a further human clinical trial in 2009. They reported positive results for NAC as well as arguing that "only certain formulas designed by IVP... are efficacious in the prevention and treatment of senile cataract for long-term use." Commentary: The Royal College of Ophthalmologists remains very skeptical about claims of efficacy in cataract reversal. It issued the following public statement about NAC in August 2008: The evidence for the effectiveness of N-acetyl carnosine eye drops is based on experience on a small number of cases carried out by a Russian research team [Babizhayev]. To date, the research has not been corroborated and the results replicated by others. The long-term effect is unknown. Unfortunately, the evidence to date does not support the 'promising potential' of this drug in cataract reversal. More robust data from well conducted clinical trials on adequate sample sizes will be required to support these claims of efficacy. Furthermore, we do not feel the evidence base for the safety is in any way sufficient to recommend its use in the short term. More research is needed.
Commentary: In a 2010 book on ocular disease, the current state of this subject is summarized as follows: Carnosine (β-alanyl-L-histidine), and its topical prodrug formulation N-acetylcarnosine (NAC), is advertised (especially on the internet) to treat a range of ophthalmic disorders associated with oxidative stress, including age-related and diabetic cataracts. No convincing animal studies or masked clinical trials have been reported. A Cochrane review summarizing research up to June 2016 has concluded that there is "no convincing evidence that NAC reverses cataract, nor prevents progression of cataract". The authors did not include the studies conducted by the Babizhayev group because they were unable to establish that the research used scientific methods appropriate for clinical trials.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Crus fracture** Crus fracture: A crus fracture is a fracture of the lower leg bones, meaning either or both of the tibia and fibula. Tibia fractures: Pilon fracture Tibial plateau fracture Tibia shaft fracture Bumper fracture - a fracture of the lateral tibial plateau caused by a forced valgus applied to the knee Segond fracture - an avulsion fracture of the lateral tibial condyle Gosselin fracture - a fracture of the tibial plafond into anterior and posterior fragments Toddler's fracture - an undisplaced spiral fracture of the distal third to distal half of the tibia Fibular fracture: Maisonneuve fracture - a spiral fracture of the proximal third of the fibula associated with a tear of the distal tibiofibular syndesmosis and the interosseous membrane. Le Fort fracture of ankle - a vertical fracture of the antero-medial part of the distal fibula with avulsion of the anterior tibiofibular ligament. Bosworth fracture - a fracture with an associated fixed posterior dislocation of the proximal fibular fragment, which becomes trapped behind the posterior tibial tubercle. The injury is caused by severe external rotation of the ankle. Volkmann's fracture (or Earle's fracture) - a fracture of the postero-lateral rim of the distal fibula. Combined tibia and fibula fracture: A tib-fib fracture is a fracture of both the tibia and fibula of the same leg in the same incident. In 78% of cases, a fracture of the fibula is associated with a tibial fracture. Since the fibula is smaller and weaker than the tibia, a force strong enough to fracture the tibia often fractures the fibula as well. Types include: Trimalleolar fracture - involving the lateral malleolus, the medial malleolus and the distal posterior aspect of the tibia Bimalleolar fracture - involving the lateral malleolus and the medial malleolus. Combined tibia and fibula fracture: Pott's fracture - an archaic term loosely applied to a variety of bimalleolar ankle fractures.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Grandmaster (martial arts)** Grandmaster (martial arts): Grandmaster and Master are titles used to describe or address some senior or experienced martial artists. Typically these titles are honorary in nature, meaning that they do not confer rank, but rather distinguish the individual as very highly revered in their school, system, or style. History: Asian martial arts traditionally use terms that are usually translated as "teacher", and the use of "master" was a Western invention derived from 1950s United States war veterans returning home with stories of the incredible martial feats of certain individuals and groups. Subsequently, these titles found their way into martial arts culture as marketing tactics, to the extent that they are aligned with the 'elderly martial arts master' stock character. In Asian countries, such titles are more commonly reserved for religious leaders and saints. Modern use: The use of "master", "grandmaster", etc. is decided within an individual art or organization. The use may be self-assigned, for example upon having promoted a student to 'teacher' level, or may be assigned by a governing body in arts with a more formalised structure; some arts do not use such titles at all, for historic reasons or to avoid the 'elderly master' stereotype. The modern use of dan rankings and of the black belt and red belt in martial arts both derive from judo, where they were adopted by its founder Kanō Jigorō. Traditional systems: There are many terms similar or equivalent to 'master' used by various martial arts traditions. Some of these terms derive from older systems, while others are relatively modern. Traditional systems: Japan Japanese martial arts commonly use Sensei (先生) meaning "teacher" or, literally translated, "born first" or "one who has gone before". A Sensei is a person who has knowledge and is willing to teach that knowledge to another. A Sensei assists students in ken shiki, "the pursuit of knowledge". Several Japanese organizations, such as the Bujinkan, Kodokan (Judo), and most branches of Aikido, formally award a certificate conferring the title Shihan ("teacher of teachers" or "master teacher") to recognize high-ranking or highly distinguished instructors. Traditional systems: Sōke (宗家), meaning "the head family [house]", is sometimes used to refer to the "founder of a style" because many modern sōke are the first-generation headmasters of their art, but it most correctly refers to the current head. A sōke is considered the ultimate authority within their art and has the authority to issue a menkyo kaiden certificate indicating that someone has mastered all aspects of the style. Traditional systems: Korea The actual Korean word for a student's master is sonsaeng. This term is only used by the student when speaking to the instructor. The student is haksaeng (학생 HakSaeng 學生). Many Korean titles are often mistakenly translated as "grandmaster" (태사님 TaeSaNim 太師님). Sonsaeng-nim (선생님 SeonSaengNim 先生님) is a general term for a teacher of any subject as well as a respectful form of the word "you". Martial arts instructors (in Korea, 4th dan and above) are called Sabom-nim (사범님 SaBeomNim 師範님). Traditional systems: China Various dialects of the Chinese language use different terms. Traditional systems: "Sifu" is a common romanization, although the term and pronunciation are also used in other southern languages. In Mandarin Chinese, it is spelled "shifu" in pinyin. Using non-rhotic British English pronunciation, in Mandarin it would sound something similar to "sure foo".
Using IPA, 'shi' is pronounced 'ʂɨ'. The 'i' is a short vowel. Many martial arts studios incorrectly pronounce this like "she foo". In Cantonese, it is said as "see foo" (almost like "sea food", without the "d" on the end). Traditional systems: (師傅 or 師父; pinyin: shīfu; Cantonese: si1 fu6) a modern term for "teacher". Traditional systems: The term Shifu is a combination of the characters "teacher" and "father" (師父) or a combination of the characters "teacher" and "mentor" (師傅). The traditional Chinese martial arts school, or kwoon (館, guǎn), is an extended family headed by the Shifu. The Shifu's teacher is the Shigong (師公, "honorable master"). Similarly, the Shifu's wife is the Shimu, "teacher mother", and the grandmaster's wife is known as 師姥 shi lao or 師婆 shi po. Male and female students who began training before you, and are thus senior, are 師兄 Shixiong, "teacher's older brothers", and 師姐 Shijie, "teacher's older sisters". Women in traditional society did not have the same status as males (despite what modern movies depict). Students junior to you are your Shidi and Shimei. The pattern extends to uncles, aunts, cousins, great-uncles, and so forth. Popular culture: Such titles may be, to some extent, aligned to the elderly martial arts master stock character in fiction. In Asian martial arts, traditional titular systems vary between nations and arts, but terms such as "teacher" were more common than "master". The modern use came from Eastern to Western society in the 1950s with stories of martial feats seen in Asia.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**World Scout Emblem** World Scout Emblem: The World Scout Emblem is the emblem of the World Organization of the Scout Movement and is worn by Scouts and Scouters around the world to indicate their membership. Each national Scout organization determines the manner in which the emblem is worn. Origins of the design: Lord Baden-Powell began awarding a brass badge in the shape of the fleur-de-lis arrowhead to army scouts whom he had trained while serving in India in 1897. He later issued a copper fleur-de-lis badge to all participants of the experimental camp on Brownsea Island in 1907. Baden-Powell included a design for the Scout's badge in his work, Scouting for Boys, which was a simple fleur-de-lis with the motto "Be Prepared" on a scroll below it. He reasoned that the fleur-de-lis was commonly used as the symbol for north on maps, and a Boy Scout was to show the way in doing his duty and helping others. The plumes of the fleur-de-lis became symbols for Service to Others, Duty to God, and Obedience to the Scout Law. These three principles form the Scout Promise, which is made by new Scouts as they join the movement. The fleur-de-lis was modified shortly after to include the two five-pointed stars, which symbolize knowledge and truth. A "bond" was also added, tying the three plumes together to symbolize the family of Scouting. Origins of the design: J. S. Wilson introduced an international Scout badge in 1939: a silver fleur-de-lis on a purple background surrounded by the names of the five continents in silver within a circular frame. The wearing of it was not universal, but was confined to past and present members of the International Committee and staff of the Bureau. A flag of similar design followed, the flying of which was restricted to international Scout gatherings. Origins of the design: The current emblem design was introduced at the 8th World Scout Jamboree in 1955 by former Boy Scouts of Greece National Commissioner Demetrios Alexatos. The final design, which is now worn on the uniforms of Scouts around the world, includes a rope which encircles the fleur-de-lis and is tied in a reef knot at the bottom of the badge. The rope symbolizes the family of the World Scout Movement, and the knot symbolizes the strength of its unity. The colors chosen have heraldic significance, with the white of the arrowhead and rope representing purity, and the royal purple denoting leadership and service. The use of the fleur-de-lis has led to some controversy, with critics citing its military symbolism. However, Robert Baden-Powell himself denied this link, writing and speaking about the various other meanings of the symbol. Organization usage: Several of the national Scout organizations use the emblem in various ways. The Scout Association The Scout Association refers to the emblem as the World Membership Badge. It is used as the joining award for each section (Beavers, Cubs, Scouts, Explorer Scouts and Scout Network), with requirements intended to help the Scout understand their commitment to Scouting. Organization usage: Boy Scouts of America The Boy Scouts of America (BSA) refers to the emblem as the World Crest; it may be worn on the uniform as an emblem of worldwide Scouting. The BSA first used the badge as an award for Scouts and Scouters who participated in an international Scouting event from early 1956 through 1991; requirements were devised by each council.
In 1991, the BSA made it part of the uniform for all Scouts, and the International Activity Patch replaced the World Crest as an award. Organization usage: Scouts South Africa Scouts South Africa uses this badge when new members join, whether as a Cub, a Scout or an Adult Leader. The badge is worn on the left front pocket of the uniform, over the heart. The five-pointed stars of the fleur-de-lis are often explained to be symbolic of the ten points of the Scout Promise.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Beach polo** Beach polo: Beach polo is a team sport and close variant of arena polo. Game play: A game of beach polo consists of two three-player teams, as opposed to the usual four-player teams in field polo. A game consists of four seven-minute periods of play, called chukkers. The game is played in an enclosed sand arena with sideboards approximately four feet in height, designed to keep the ball in play. Depending on the playing areas available, some arenas have enclosed ends, while others allow for 20 yards of run-out room for the horses past the end line and utilize standing goal posts. Game play: Two umpires are suggested for tournament play and may be stationed outside the arena to officiate the game. Penalties are called, and the resulting free hits are awarded to the fouled party. Traditional polo ponies are used, with players changing horses following each chukker. Unlike the hard plastic ball used in field polo, beach polo employs a leather or rubber inflated ball no less than 12.5 inches in circumference. Other equipment employed is the same as that used in field or arena polo. History: Reto Gaudenzi and Rashid Al Habtoor have been credited with the creation of the game in 2004 in Dubai, followed by the Miami Beach Polo World Cup in the United States in 2005, which was also created by Reto Gaudenzi and his son Tito. Reto Gaudenzi is also the inventor of snow polo, which began in 1985 on the frozen lake of St. Moritz. Additional tournaments have arisen in Argentina, Australia, Austria, Belgium, Chile, China, Colombia, Croatia, England, France, Germany, India, Italy, Mexico, New Zealand, Poland, Spain, Thailand, The Netherlands, Uruguay and Wales. History: The most notable event of this kind continues to take place annually in Miami Beach. It has featured some of the best international polo players, including Argentine 10-goaler Gonzalo Pieres, 8-goaler Alejandro Novillo Astrada, Mexico's late Carlos Gracida (9), and USA's 9-goaler Nic Roldan, among many more. This event is organized by Tito Gaudenzi and Melissa Ganzi to this day. The island of Jersey (off the coast of Normandy, France) staged its first beach polo tournament in September 2012, New Zealand staged a tournament in December 2013, and Croatia created its first beach polo event in 2016.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nidogen-2** Nidogen-2: Nidogen-2, also known as osteonidogen, is a basal lamina protein of the nidogen family. It was the second nidogen to be described after nidogen-1 (entactin). Both play key roles during late embryonic development. In humans it is encoded by the NID2 gene.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gigantism** Gigantism: Gigantism (Greek: γίγας, gígas, "giant", plural γίγαντες, gígantes), also known as giantism, is a condition characterized by excessive growth and height significantly above average. In humans, this condition is caused by over-production of growth hormone in childhood, resulting in people up to 2.7 m (9.0 ft) in height. It is a rare disorder resulting from increased levels of growth hormone before the fusion of the growth plate, which usually occurs at some point soon after puberty. This increase is most often due to abnormal tumor growths on the pituitary gland. Gigantism should not be confused with acromegaly, the adult form of the disorder, characterized by somatic enlargement specifically in the extremities and face. Cause: Gigantism is characterized by an excess of growth hormone (GH). The excess of growth hormone that brings about gigantism is virtually always caused by pituitary growths (adenomas). These adenomas are on the anterior pituitary gland. They can also cause overproduction of the hypothalamic hormone that stimulates GH release, known as growth hormone releasing hormone (GHRH). As a result of the excessive amounts of growth hormone, children achieve heights that are well above normal ranges. The specific age of onset for gigantism varies between patients and sexes, but the common age at which excessive growth symptoms start to appear has been found to be around 13 years. Other health complications, such as hypertension, may occur in pediatric patients with hypersecretion of growth hormone. Characteristics more similar to those seen in acromegaly may occur in patients who are closer in age to adolescence, since they are nearing growth plate fusion. Cause: Hormonal cause Growth hormone (GH) and insulin-like growth factor-I (IGF-I) are two substances that have been identified as influencing growth plate formation and bone growth and, therefore, gigantism. Their specific mechanisms are still not well understood. More broadly, GH and IGF have both been identified as being involved in most stages of growth: embryonic, prenatal, and postnatal. Moreover, the receptor gene for IGF has been shown to be particularly influential throughout various stages of development, especially prenatally. The same holds for GH receptor genes, which are known to drive overall growth throughout various pathways. Growth hormone is a precursor (upstream) of IGF-I, but each has its independent role in hormonal pathways. Yet both seem to ultimately come together to have a joint effect on growth. Cause: Diagnostic testing Growth hormone hypersecretion cannot be excluded with a single normal GH level, due to diurnal variation. However, a random blood sample showing markedly elevated GH is adequate for diagnosis of GH hypersecretion. Additionally, a high-normal GH level that fails to suppress with administration of glucose is also sufficient for a diagnosis of GH hypersecretion. Insulin-like growth factor-1 (IGF-1) is an excellent test for evaluation of GH hypersecretion. It does not undergo diurnal variation and will thus be consistently elevated in GH hypersecretion, and therefore in patients with gigantism. A single normal IGF-1 value will reliably exclude GH hypersecretion. Genetics: Finding a specific genetic cause for gigantism has proven to be difficult. Gigantism is the primary example of growth hormone hypersecretion disorders, a group of illnesses that are not yet deeply understood. Some common mutations have been associated with gigantism.
Pediatric gigantism patients have been shown to have duplications of genes on a specific chromosomal region, Xq26. Typically, these patients also experienced an onset of typical gigantism symptoms before reaching the age of 5. This indicates a possible linkage between gene duplications and gigantism. Additionally, DNA mutations in the aryl hydrocarbon receptor interacting protein (AIP) gene are common in gigantism patients. They have been found to be present in about 29 percent of patients with gigantism. AIP is labeled as a tumor suppressor gene and a pituitary adenoma disposition gene. Mutations in the AIP sequence can have deleterious effects by inducing the development of pituitary adenomas, which in turn can cause gigantism. Two specific mutations in the AIP gene have been identified as possible causes of pituitary adenomas. These mutations also have the ability to cause adenoma growth to occur early in life. This is typical in gigantism. Genetics: Additionally, a large variety of other known genetic disorders have been found to influence the development of gigantism, such as multiple endocrine neoplasia types 1 and 4, McCune-Albright syndrome, Carney complex, familial isolated pituitary adenoma, and X-linked acrogigantism (X-LAG). Although various gene mutations have been associated with gigantism, over 50 percent of cases cannot be linked to genetic causes, showing the complex nature of the disorder. Treatment: Many treatments for gigantism receive criticism and are not accepted as ideal. Various treatments involving surgery and drugs have been used to treat gigantism. Treatment: Pharmaceuticals Pegvisomant is one pharmaceutical drug which has received attention as a possible treatment route for gigantism. Reduction of the levels of IGF-I as a result of pegvisomant administration can be greatly beneficial for pediatric gigantism patients. After treatment with pegvisomant, high growth rates, a feature characteristic of gigantism, can be significantly decreased. Pegvisomant has been seen to be a powerful alternative to other treatments such as somatostatin analogues, a common treatment method for acromegaly, if drug treatment is paired with radiation. Finding the optimal level of pegvisomant is important, so that normal body growth is not negatively affected. In order to do this, titration of the medication can be used as a way to find the proper administration level. See acromegaly for additional treatment possibilities. Terminology: The term is typically applied to those whose height is not just in the upper 1% of the population but several standard deviations above the mean for persons of the same sex, age, and ethnic ancestry. The term is seldom applied to those who are simply "tall" or "above average", whose heights appear to be the healthy result of normal genetics and nutrition. Gigantism is usually caused by a tumor on the pituitary gland of the brain. It causes growth of the hands, face, and feet. In some cases the condition can be passed on genetically through a mutated gene. Other names, now somewhat obsolete, for this pathology are hypersoma (Greek: hyper, over the normal level; soma, body) and somatomegaly (Greek: soma, body, genitive somatos, of the body; megas, gen. megalou, great). In the past, while many of those affected were social outcasts because of their height, some (usually unintentionally) found employment in Friedrich Wilhelm I's famous Potsdam Giants regiment.
Terminology: Many of those who have been identified with gigantism have had multiple health problems involving the circulatory or skeletal system, as the strain of maintaining a large, heavy body places abnormal demands on both the bones and the heart.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Principle of explosion** Principle of explosion: In classical logic, intuitionistic logic and similar logical systems, the principle of explosion (Latin: ex falso [sequitur] quodlibet, 'from falsehood, anything [follows]'; or ex contradictione [sequitur] quodlibet, 'from contradiction, anything [follows]'), or the principle of Pseudo-Scotus (falsely attributed to Duns Scotus), is the law according to which any statement can be proven from a contradiction. That is, from a contradiction, any proposition (including its negation) can be inferred; this is known as deductive explosion. The proof of this principle was first given by the 12th-century French philosopher William of Soissons. Due to the principle of explosion, the existence of a contradiction (inconsistency) in a formal axiomatic system is disastrous; since any statement can be proven, it trivializes the concepts of truth and falsity. Around the turn of the 20th century, the discovery of contradictions such as Russell's paradox at the foundations of mathematics thus threatened the entire structure of mathematics. Mathematicians such as Gottlob Frege, Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem put much effort into revising set theory to eliminate these contradictions, resulting in the modern Zermelo–Fraenkel set theory. Principle of explosion: As a demonstration of the principle, consider two contradictory statements, "All lemons are yellow" and "Not all lemons are yellow", and suppose that both are true. If that is the case, anything can be proven, e.g., the assertion that "unicorns exist", by using the following argument: We know that "Not all lemons are yellow", as it has been assumed to be true. Principle of explosion: We know that "All lemons are yellow", as it has been assumed to be true. Therefore, the two-part statement "All lemons are yellow or unicorns exist" must also be true, since the first part of the statement, "All lemons are yellow", is true (as this has been assumed). Principle of explosion: However, since we know that "Not all lemons are yellow" (as this has been assumed), the first part is false, and hence the second part must be true to ensure that the two-part statement is true; i.e., unicorns exist. In a different response to these problems, a few mathematicians have devised alternative theories of logic called paraconsistent logics, which eliminate the principle of explosion. These allow some contradictory statements to be proven without affecting other proofs. Symbolic representation: In symbolic logic, the principle of explosion can be expressed schematically as P, ¬P ⊢ Q: for any statements P and Q, if P and its negation ¬P are both assumed, then Q logically follows. Proof: Below is a formal proof of the principle using symbolic logic: (1) P (assumption); (2) ¬P (assumption); (3) P ∨ Q (disjunction introduction, from 1); (4) Q (disjunctive syllogism, from 3 and 2). Proof: This is just the symbolic version of the informal argument given in the introduction, with P standing for "all lemons are yellow" and Q standing for "unicorns exist". We start out by assuming that (1) all lemons are yellow and that (2) not all lemons are yellow. From the proposition that all lemons are yellow, we infer that (3) either all lemons are yellow or unicorns exist. But then from this and the fact that not all lemons are yellow, we infer that (4) unicorns exist, by disjunctive syllogism. Proof: Semantic argument An alternate argument for the principle stems from model theory. A sentence P is a semantic consequence of a set of sentences Γ only if every model of Γ is a model of P. However, there is no model of the contradictory set (P ∧ ¬P). A fortiori, there is no model of (P ∧ ¬P) that is not a model of Q.
Thus, vacuously, every model of (P ∧ ¬P) is a model of Q. Thus Q is a semantic consequence of (P ∧ ¬P). Paraconsistent logic: Paraconsistent logics have been developed that allow for subcontrary-forming operators. Model-theoretic paraconsistent logicians often deny the assumption that there can be no model of {ϕ, ¬ϕ} and devise semantical systems in which there are such models. Alternatively, they reject the idea that propositions can be classified as true or false. Proof-theoretic paraconsistent logics usually deny the validity of one of the steps necessary for deriving an explosion, typically including disjunctive syllogism, disjunction introduction, and reductio ad absurdum. Usage: The metamathematical value of the principle of explosion is that, for any logical system where this principle holds, any derived theory which proves ⊥ (or an equivalent form, ϕ ∧ ¬ϕ) is worthless, because all its statements would become theorems, making it impossible to distinguish truth from falsehood. That is to say, the principle of explosion is an argument for the law of non-contradiction in classical logic, because without it all truth statements become meaningless. Reductions in proof strength of logics without ex falso are discussed in minimal logic.
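For readers who want to check the four-step derivation above mechanically, here is a minimal sketch in Lean 4 (an illustration using only core constructs; the hypothesis names are chosen for this example):

```lean
-- From P and ¬P, any Q follows: a machine-checked version of the derivation.
example (P Q : Prop) (h₁ : P) (h₂ : ¬P) : Q :=
  Or.elim (Or.inl h₁ : P ∨ Q)    -- step (3): disjunction introduction from (1)
    (fun hp => absurd hp h₂)      -- the left disjunct contradicts (2)
    id                            -- step (4): disjunctive syllogism leaves Q
```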
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Error-related negativity** Error-related negativity: Error-related negativity (ERN), sometimes referred to as the Ne, is a component of an event-related potential (ERP). ERPs are electrical activity in the brain as measured through electroencephalography (EEG) and time-locked to an external event (e.g., presentation of a visual stimulus) or a response (e.g., an error of commission). A robust ERN component is observed after errors are committed during various choice tasks, even when the participant is not explicitly aware of making the error; however, in the case of unconscious errors the ERN is reduced. An ERN is also observed when non-human primates commit errors. History: The ERN was first reported in 1968 by the Russian neuroscientist and psychologist Natalia Petrovna Bekhtereva, who called it the "error detector". It was later characterized in 1990 by two independent research teams: Michael Falkenstein, J. Hohnsbein, J. Hoormann, & L. Blanke (1990) at the Institute for Work Physiology and Neurophysiology in Dortmund, Germany (who called it the "Ne"), and W.J. "Bill" Gehring, M.G.H. Coles, D.E. Meyer & E. Donchin (1990) at the University of Michigan, USA. The ERN was observed in response to errors committed by study participants during simple choice response tasks. Component characteristics: The ERN is a sharp negative-going signal which begins at about the same time as an incorrect motor response (it is a response-locked event-related potential), and typically peaks from 80 to 150 milliseconds (ms) after the erroneous response begins (or 40-80 ms after the onset of electromyographic activity). The ERN is largest at frontal and central electrode sites. A typical method for determining the average ERN amplitude for an individual involves calculating the peak-to-peak difference in voltage between the average of the most negative peaks 1-150 ms after response onset and the average amplitude of the positive peaks 100-0 ms before response onset (a minimal numerical sketch of this convention is given at the end of this article). For optimal resolution of the signal, reference electrodes are typically placed behind both ears using either hardware-linked or arithmetically linked mastoid electrodes. Main paradigms: Any paradigm in which mistakes are made during motor responses can be used to measure the ERN. Natural keyboarding is one such example, where typing errors have been shown to elicit an ERN. The most important feature of any ERN paradigm is obtaining a sufficient number of errors in the participant's responses, and the number of trials needed to obtain reliable scores can vary widely. Early experiments identifying the component used a variety of techniques, including word and tone identification and categorical discrimination (e.g., is the presented item an animal?). However, the majority of experimental paradigms that elicit ERN deflections have been variants of the Eriksen "Flanker" and "Go/NoGo" tasks. In addition to responses with the hands, the ERN can also be measured in paradigms where the task is performed with the feet or with vocal responses, as in the Stroop paradigm. A standard Flanker task involves discerning the central "target" letter in a string of distracting "flanker" letters which surround it. For example, congruous letter strings such as "SSSSS" or "HHHHH" and incongruous letter strings such as "HHSHH" or "SSHSS" may be presented on a computer screen. Each target letter would be assigned a keystroke response on a keyboard, such as "S" = right shift key and "H" = left shift key. Presentation of each letter string is brief, generally less than 100 ms, and central on the screen.
Participants have approximately 2000 ms to respond before the next presentation. Main paradigms: The simplest Go/NoGo tasks involve assigning a discrimination rule to responding ("Go") or not responding ("NoGo"). For example, again, congruous letter strings such as "SSSSS" or "HHHHH" and incongruous letter strings such as "HHSHH" or "SSHSS" may be presented on a computer screen. The participant could be instructed to respond by pressing the space bar only for congruous strings, and to not respond when presented with incongruous letter strings. When the ERN is the component of interest, however, more complicated Go/NoGo tasks are usually created, because errors must be made in order to observe the robust negativity. Main paradigms: The classic Stroop paradigm involves a color-word task. Color words such as "red, yellow, orange, green" are presented centrally on a computer screen, either in a color congruent with the word ("red" in the color red) or in a color incongruent with the word ("red" in the color yellow). Participants may be asked to verbalize the color each word is written in. Incongruent and congruent presentations of the words can be set at different rates, such as 25/75, 50/50, 30/70, etc. Functional sensitivity: The amplitude of the ERN is sensitive to the intent and motivation of participants. When a participant is instructed to strive for accuracy in responses, observed amplitudes are typically larger than when participants are instructed to strive for speed. Monetary incentives typically result in larger amplitudes as well. Latency of the ERN peak amplitude can also vary between subjects, and does so reliably in special populations such as those diagnosed with ADHD, who show shorter latencies. Participants with clinically diagnosed obsessive-compulsive disorder have exhibited ERN deflections with increased amplitude, prolonged latency, and a more posterior topography compared with clinically normal participants. ERN latency has been manipulated through rapid feedback, wherein participants who received rapid feedback regarding the incorrect response subsequently showed shorter ERN peak latencies. Additionally, a heightened ERN amplitude during social situations has been linked to anxiety symptoms in both childhood and adulthood. Developmental studies have shown that the ERN emerges throughout childhood and adolescence, becoming more negative in amplitude and developing a more defined peak. The ERN appears to be modulated by the environment during childhood, with children who experience early adversity showing evidence of less negative ERN amplitudes. Theory/source: Although it is difficult to localize the origin of an ERP signal, extensive empirical research indicates that the ERN is most likely generated in the anterior cingulate cortex (ACC) area of the brain. This conclusion is supported by fMRI and brain lesion research, as well as dipole source modeling. The dorsolateral prefrontal cortex (DLPFC) may also be involved in the generation of the ERN to some degree, and it has been found that persons with higher levels of "absent-mindedness" have their ERN sourced more from that region. There is some debate within the field about what the ERN reflects (see especially Burle et al.). Some researchers maintain that the ERN is generated during the detection of or response to errors. Others argue that the ERN is generated by a comparison process or a conflict-monitoring system, and is not specific to errors.
In contrast to the above cognitive theories, newer models suggest that the ERN may reflect the motivational significance of a task or perhaps the emotional reaction to making an error. This latter view is consistent with findings linking errors and the ERN to autonomic arousal and defensive motivated states, and with findings suggesting that the ERN is dissociable from cognitive factors, but not affective ones. Unfortunately, it is still unclear how to interpret differences in the size of the ERN, as both smaller and larger ERNs have been interpreted as "better". Feedback error-related negativity: A stimulus-locked event-related potential is also observed following the presentation of negative feedback stimuli in a cognitive task indicating the outcome of a response, often referred to as the feedback ERN (fERN). This has led some researchers to extend the error-detection account of the response ERN (rERN) to a generic error-detection system. This position has been elaborated into a reinforcement-learning account of the ERN, arguing that both the rERN and the fERN are products of prediction error signals carried by the dopamine system arriving in the anterior cingulate cortex, indicating that events have gone worse than expected. In this framework it is common to measure both the rERN and the fERN as the difference in voltage between correct and incorrect responses and feedback, respectively. Clinical applications: Debates about psychiatric disorders often become "chicken and egg" conundrums. The ERN has been proposed as a potential arbitrator of this argument. A body of empirical research has shown that the ERN reflects a "trait"-level difference in individual error processing, especially concerning anxiety, rather than a "state"-level difference. For example, most people who experience depression do not feel depressed all of the time. Instead, they have periods of depressive "states" which may be minor and unique to an extreme situation such as the death of a loved one, loss of employment, or major injury. However, a person who has a depressive "trait" will have experienced more than one minor depressive "state" and usually at least one major depressive state, any of which may not be unique to an obviously extreme situation. In fact, there is some evidence, albeit weak, that people with depression show small ERNs. Scientists are exploring the use of the ERN and other ERP signals in identifying people at risk for psychiatric disorders, in hopes of implementing early interventions. People with addictive behaviors such as smoking, alcoholism, and substance abuse have also shown differential ERN responses compared with individuals without the same addictive behavior. Pre-movement positivity: The ERN is often preceded by a small positive voltage deflection with a latency in the interval of -200 to -50 milliseconds in the response-locked ERP in channels over the scalp vertex, which is sometimes referred to as the "positive peak preceding the Ne" or "PNe", but is more generally thought to reflect the pre-movement positivity (PMP) described by Deecke et al. (1969). The PMP is thought to reflect a "go signal" by which pre-SMA and SMA permit a motor response to be carried out. The PMP is smaller before error motor responses than it is before correct motor responses, suggesting that it may be an important signal for discriminating erroneous from correct actions.
Additionally, the PMP is smaller in people who make more mistakes during the Flanker task and may have clinical utility in accident-prone populations, such as youths with ADHD. Error-related positivity: The ERN is often followed by a positivity, known as the error-related positivity or Pe. The Pe is a positive deflection with a centro-parietal distribution. When elicited, the Pe can occur 200-500 ms after making an incorrect response, following the error negativity (Ne, ERN), but is not evident on all error trials. In particular, the Pe is dependent on awareness or ability to detect errors. The Pe is essentially the same as the P300 wave associated with conscious sensations. Additionally, Vocat et al. (2008) established that the Ne and Pe not only have different topographical distributions but also have different generators. Source localization indicates that the Ne has a dipole in the anterior cingulate cortex and the Pe has a dipole in the posterior cingulate cortex. The Pe amplitude reflects the perception of the error, meaning that with more awareness of the error, the amplitude of the Pe is larger. Falkenstein and colleagues (2000) have shown that the Pe is elicited on uncorrected trials and false-alarm trials, suggesting it is not directly related to error correction. It thus seems to be related to error monitoring, albeit with different neural and cognitive roots from the error-related processing reflected in the Ne. Error-related positivity: If the Pe reflects conscious error processing, then it might be expected to be different for people with deficits in conflict monitoring, such as those with ADHD and OCD. Whether this is true remains controversial. Some studies do indicate these conditions are associated with different Pe responses, whereas other studies have not replicated those findings. The Pe has also been used to evaluate error processing in patients with severe traumatic brain injury. In a study using a variation of the Stroop task, patients with severe traumatic brain injury associated with deficits in error processing were found to show a significantly smaller Pe on error trials when compared against healthy controls. Some researchers argue that error-related negativity or error-related positivity is, in fact, reward-related positivity. Reward-related positivity is also referred to as reward positivity, or RewP. It has been suggested that ERP data depict neural positivity to rewards (aka reward positivity) rather than neural negativity to loss (aka error-related negativity). Thus, this shift in how we conceptualize neural responses to gains/losses allows us to further understand the underlying neural processes.
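Returning to the measurement convention described under Component characteristics above, the following is a minimal numerical sketch in Python (an illustration only: the epoch layout, the sampling rate and the synthetic data are assumptions of this example, not a published analysis pipeline):

```python
import numpy as np

def ern_amplitude(epochs, times):
    """Peak-to-peak ERN estimate from response-locked error epochs.

    epochs: (n_trials, n_samples) array in microvolts; times: sample times in
    ms with 0 = response onset (conventions assumed for this sketch).
    """
    post = (times >= 1) & (times <= 150)      # window for the negative ERN peak
    pre = (times >= -100) & (times <= 0)      # pre-response positive-peak window
    neg_peaks = epochs[:, post].min(axis=1)   # most negative value per trial
    pos_peaks = epochs[:, pre].max(axis=1)    # most positive pre-response value
    return pos_peaks.mean() - neg_peaks.mean()

# Synthetic demonstration: 40 "error trials" sampled at 1000 Hz.
rng = np.random.default_rng(0)
times = np.arange(-200, 400)                              # ms, one sample per ms
erp = -8.0 * np.exp(-0.5 * ((times - 80) / 30.0) ** 2)    # deflection peaking ~80 ms
epochs = erp + rng.normal(0.0, 2.0, (40, times.size))     # add trial-to-trial noise
print(f"peak-to-peak ERN estimate: {ern_amplitude(epochs, times):.1f} microvolts")
```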
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DNS hijacking** DNS hijacking: DNS hijacking, DNS poisoning, or DNS redirection is the practice of subverting the resolution of Domain Name System (DNS) queries. This can be achieved by malware that overrides a computer's TCP/IP configuration to point at a rogue DNS server under the control of an attacker, or through modifying the behaviour of a trusted DNS server so that it does not comply with internet standards. DNS hijacking: These modifications may be made for malicious purposes such as phishing; for self-serving purposes by Internet service providers (ISPs), by the Great Firewall of China, and by public/router-based online DNS server providers, which direct users' web traffic to the provider's own web servers so that advertisements can be served, statistics collected, or other provider purposes fulfilled; and by DNS service providers to block access to selected domains as a form of censorship. Technical background: One of the functions of a DNS server is to translate a domain name into an IP address that applications need to connect to an Internet resource such as a website. This functionality is defined in various formal internet standards that define the protocol in considerable detail. DNS servers are implicitly trusted by internet-facing computers and users to correctly resolve names to the actual addresses that are registered by the owners of an internet domain. Technical background: Rogue DNS server A rogue DNS server translates domain names of desirable websites (search engines, banks, brokers, etc.) into IP addresses of sites with unintended content, even malicious websites. Most users depend on DNS servers automatically assigned by their ISPs. A router's assigned DNS servers can also be altered through the remote exploitation of a vulnerability within the router's firmware. When users try to visit websites, they are instead sent to a bogus website. This attack is termed pharming. If the site they are redirected to is a malicious website masquerading as a legitimate website in order to fraudulently obtain sensitive information, the attack is called phishing. Technical background: Manipulation by ISPs A number of consumer ISPs such as AT&T, Cablevision's Optimum Online, CenturyLink, Cox Communications, RCN, Rogers, Charter Communications (Spectrum), Plusnet, Verizon, Sprint, T-Mobile US, Virgin Media, Frontier Communications, Bell Sympatico, Deutsche Telekom AG, Optus, Mediacom, ONO, TalkTalk, Bigpond (Telstra), TTNET, Türksat, and all Indonesian consumer ISPs use or have used DNS hijacking for their own purposes, such as displaying advertisements or collecting statistics. The Dutch ISPs XS4ALL and Ziggo use DNS hijacking by court order: they were ordered to block access to The Pirate Bay and display a warning page, while all consumer ISPs in Indonesia perform DNS hijacking to comply with the national DNS law, which requires every Indonesian consumer ISP to hijack port 53 and redirect it to their own servers in order to block websites listed in Trustpositif by Kominfo under the Internet Sehat campaign. These practices violate the RFC standard for DNS (NXDOMAIN) responses, and can potentially open users to cross-site scripting attacks. The concern with DNS hijacking involves this hijacking of the NXDOMAIN response. Internet and intranet applications rely on the NXDOMAIN response to describe the condition where the DNS has no entry for the specified host.
If one were to query the invalid domain name (for example www.example.invalid), one should get an NXDOMAIN response, informing the application that the name is invalid and prompting the appropriate action (for example, displaying an error or not attempting to connect to the server). However, if the domain name is queried on one of these non-compliant ISPs, one would always receive a fake IP address belonging to the ISP. In a web browser, this behavior can be annoying or offensive, as connections to this IP address display the provider's redirect page, sometimes with advertising, instead of a proper error message. However, other applications that rely on the NXDOMAIN error will instead attempt to initiate connections to this spoofed IP address, potentially exposing sensitive information (a minimal probe for this behaviour is sketched at the end of this article). Technical background: Examples of functionality that breaks when an ISP hijacks DNS: Roaming laptops that are members of a Windows Server domain will falsely be led to believe that they are back on a corporate network, because resources such as domain controllers, email servers and other infrastructure will appear to be available. Applications will therefore attempt to initiate connections to these corporate servers, but fail, resulting in degraded performance, unnecessary traffic on the Internet connection, and timeouts. Technical background: Many small office and home networks do not have their own DNS server, relying instead on broadcast name resolution. Many versions of Microsoft Windows default to prioritizing DNS name resolution above NetBIOS name resolution broadcasts; therefore, when an ISP DNS server returns a (technically valid) IP address for the name of the desired computer on the LAN, the connecting computer uses this incorrect IP address and inevitably fails to connect to the desired computer on the LAN. Workarounds include using the correct IP address instead of the computer name, or changing the DhcpNodeType registry value to change the name resolution service ordering.
Technical background: It breaks the Web Proxy Autodiscovery Protocol (WPAD) by leading web browsers to believe incorrectly that the ISP has a proxy server configured. Technical background: It breaks monitoring software. For example, if one periodically contacts a server to determine its health, a monitor will never see a failure unless the monitor tries to verify the server's cryptographic key. In some cases, though not most, ISPs provide subscriber-configurable settings to disable the hijacking of NXDOMAIN responses. Correctly implemented, such a setting reverts DNS to standard behavior. Other ISPs, however, instead use a web browser cookie to store the preference. In this case, the underlying behavior is not resolved: DNS queries continue to be redirected, while the ISP redirect page is replaced with a counterfeit DNS error page. Applications other than web browsers cannot be opted out of the scheme using cookies, because the opt-out targets only the HTTP protocol, while the scheme is actually implemented in the protocol-neutral DNS. Response: In the UK, the Information Commissioner's Office has acknowledged that the practice of involuntary DNS hijacking contravenes PECR and EC Directive 95/46 on Data Protection, which require explicit consent for the processing of communication traffic. However, it has refused to intervene, claiming that it would not be sensible to enforce the law, because it would not cause significant (or indeed any) demonstrable detriment to individuals. In Germany, in 2019, it was revealed that Deutsche Telekom AG not only manipulated its DNS servers, but also transmitted network traffic (such as non-secure cookies when users did not use HTTPS) to a third-party company, because the web portal T-Online, to which users were redirected due to the DNS manipulation, was no longer owned by Deutsche Telekom. After a user filed a criminal complaint, Deutsche Telekom stopped further DNS manipulations. Response: ICANN, the international body responsible for administering top-level domain names, has published a memorandum highlighting its concerns, and affirming: ICANN strongly discourages the use of DNS redirection, wildcards, synthesized responses and any other form of NXDOMAIN substitution in existing gTLDs, ccTLDs and any other level in the DNS tree for registry-class domain names. Remedy: End users, dissatisfied with poor "opt-out" options like cookies, have responded to the controversy by finding ways to avoid spoofed NXDOMAIN responses. DNS software such as BIND and Dnsmasq offer options to filter results, and can be run from a gateway or router to protect an entire network. Google, among others, runs open DNS servers that currently do not return spoofed results. So a user could use Google Public DNS instead of their ISP's DNS servers, if they are willing to accept that they use the service under Google's privacy policy and are potentially exposed to another method by which Google can track them. One limitation of this approach is that some providers block or rewrite outside DNS requests. OpenDNS, owned by Cisco, is a similar popular service which does not alter NXDOMAIN responses. Remedy: Google in April 2016 launched a DNS-over-HTTPS service. This scheme can overcome the limitations of the legacy DNS protocol: it performs remote DNSSEC checks and transfers the results in a secure HTTPS tunnel. Remedy: There are also application-level workarounds, such as the NoRedirect Firefox extension, that mitigate some of the behavior.
An approach like that only fixes one application (in this example, Firefox) and does not address the other problems DNS hijacking causes. Website owners may be able to fool some hijackers by using certain DNS settings: for example, setting a TXT record of "unused" on their wildcard address (e.g. *.example.com). Alternatively, they can try setting the CNAME of the wildcard to "example.invalid", making use of the fact that '.invalid' is guaranteed not to exist per the RFC. The limitation of that approach is that it only prevents hijacking on those particular domains, but it may address some VPN security issues caused by DNS hijacking.
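Because the reserved '.invalid' top-level domain must never resolve, a client can probe whether its configured resolver fabricates answers. The sketch below is illustrative only, not taken from the sources above; it assumes the third-party dnspython package (version 2.x), and the probe name is arbitrary.

```python
# Minimal sketch: detect NXDOMAIN hijacking by querying a name that
# cannot exist ('.invalid' is reserved and must never resolve).
# Assumes the third-party 'dnspython' (>= 2.0) package is installed.
import uuid

import dns.resolver

def resolver_hijacks_nxdomain() -> bool:
    """Return True if the configured resolver fabricates answers
    for names that should yield NXDOMAIN."""
    # A random label under .invalid is guaranteed not to exist (RFC 2606).
    probe = f"{uuid.uuid4().hex}.example.invalid"
    try:
        answer = dns.resolver.resolve(probe, "A")
    except dns.resolver.NXDOMAIN:
        return False   # compliant resolver: nonexistence correctly reported
    except dns.resolver.NoAnswer:
        return False   # no A record synthesized either
    # Any A record returned for a reserved .invalid name is a spoofed response.
    print("Spoofed addresses:", [r.address for r in answer])
    return True

if __name__ == "__main__":
    print("Resolver hijacks NXDOMAIN:", resolver_hijacks_nxdomain())
```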
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Univariate** Univariate: In mathematics, a univariate object is an expression, equation, function or polynomial involving only one variable. Objects involving more than one variable are multivariate. In some cases the distinction between the univariate and multivariate cases is fundamental; for example, the fundamental theorem of algebra and Euclid's algorithm for polynomials are fundamental properties of univariate polynomials that cannot be generalized to multivariate polynomials. Univariate: In statistics, a univariate distribution characterizes one variable, although the term can be applied in other ways as well. For example, univariate data are composed of a single scalar component. In time series analysis, the whole time series is the "variable": a univariate time series is the series of values over time of a single quantity. Correspondingly, a "multivariate time series" characterizes the changing values over time of several quantities. In some cases, the terminology is ambiguous, since the values within a univariate time series may be treated using certain types of multivariate statistical analyses and may be represented using multivariate distributions. Univariate: In addition to the question of scaling, a criterion (variable) in univariate statistics can be described by two important kinds of measures (also called key figures or parameters): location and variation. Measures of location (e.g. the mode, median, or arithmetic mean) describe where the data are centered. Measures of variation (e.g. the range, interquartile range, or standard deviation) describe how widely the data are scattered.
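As a concrete illustration of the two kinds of measures, the following sketch (not part of the original text) computes common location and variation figures for a small univariate sample using Python's standard-library statistics module; the sample values are arbitrary.

```python
# Location and variation measures for a univariate sample.
import statistics

data = [2, 4, 4, 5, 7, 9, 9, 9, 12]

# Measures of location: where the data are centered.
print("mode:  ", statistics.mode(data))      # most frequent value
print("median:", statistics.median(data))    # middle value
print("mean:  ", statistics.mean(data))      # arithmetic average

# Measures of variation: how widely the data are scattered.
q1, _, q3 = statistics.quantiles(data, n=4)  # quartile cut points
print("range: ", max(data) - min(data))
print("IQR:   ", q3 - q1)                    # interquartile range
print("stdev: ", statistics.stdev(data))     # sample standard deviation
```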
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lymphotoxin beta** Lymphotoxin beta: Lymphotoxin-beta (LT-beta), also known as tumor necrosis factor C (TNF-C), is a protein that in humans is encoded by the LTB gene. Function: Lymphotoxin beta is a type II membrane protein of the TNF family. It anchors lymphotoxin-alpha to the cell surface through heterotrimer formation. The predominant form on the lymphocyte surface is the lymphotoxin-alpha 1/beta 2 complex (i.e. one molecule of alpha and two molecules of beta), and this complex is the primary ligand for the lymphotoxin-beta receptor. The minor complex is lymphotoxin-alpha 2/beta 1. LTB is an inducer of the inflammatory response system and is involved in the normal development of lymphoid tissue. Lymphotoxin-beta isoform b is unable to complex with lymphotoxin-alpha, suggesting a function for lymphotoxin-beta that is independent of lymphotoxin-alpha. Alternative splicing results in multiple transcript variants encoding different isoforms. The pro-tumorigenic function of membrane LT is clearly established: mice with overexpression of LTα or LTβ showed increased tumor growth and metastasis in several models of cancer. However, these studies utilized mice with complete LTα gene deficiency, which did not allow the effects of soluble versus membrane-associated LT to be distinguished. Interactions: LTB has been shown to interact with lymphotoxin alpha.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HP TopShot Technology** HP TopShot Technology: HP TopShot technology is digital camera technology that serves as the scanning mechanism on a LaserJet Multifunctional Printer (MFP). Process: TopShot operates like a small photography studio that captures three-dimensional objects on a specially-designed platform on top of the MFP. It also functions as a document scanner to capture text and images that are on paper. TopShot can capture any object that fits on its scanning platform. Process: The MFP firmware processes the data from TopShot into image files that the MFP can print (copy), send to a computer directory (using the scanner driver), send to email, and send directly to the Internet using HP ePrintCenter apps, such as eStorage. The TopShot scanner driver converts the image data into various formats, including PDF, bitmap, JPEG, PNG, and TIF. Process: TopShot captures six exposures of the subject, three of which are captured using flashes from three different angles. It then eliminates the distortions by aggregating the exposures and processing them into a single image. TopShot also removes the background behind the object to make the image stand out and to eliminate the need to edit the image in a separate step. The surface of the scanning platform is engineered to reduce shadows and to reflect a specific shade of white, which TopShot identifies and removes from the final image. Process: For documents, TopShot uses text recognition software to identify the text, clarify it, and match it to known fonts for printing. It also identifies graphics on the paper and optimizes them. Then, it aggregates the six exposures it took into a document image that appears flat, clear, and undistorted. Figure 2 shows a document captured using a smartphone camera. The image is dark, and the graphic is obscured by glare. Figure 3 shows the same document captured by TopShot in JPEG format. The text is clear, the background is white, and the image is viewable. Uses: TopShot fits into an environment as a digital photography device where good image quality and ease of use are important. It captures images quickly without extra steps to prepare them for use. For example, it can photograph objects for internet sales, how-to articles, or social networking. TopShot also fits into an environment as a scanner of non-traditional documents, such as pages in books, artwork that does not always lie flat, or fragile documents. For example, TopShot can archive rare books or historical documents where the documents themselves are valuable artifacts. It captures the information in documents without touching them. Using a digital camera in place of a traditional scanning mechanism presents some challenges that must be overcome for an effective scanning device: the ambient lighting and shadows affect the images with the subject in the open; close-up images of three-dimensional objects can appear dark and distorted; the flash can cause glare; and close-up images of documents present anomalies, such as the fisheye effect, glare, and darkened corners. Uses: The type of paper and the printed graphics can increase glare, especially with a flash. HP developed new technologies for TopShot to address these challenges.
Uses: The Readiris Pro software that comes with the MFP scans documents through TWAIN and converts the text into searchable and editable text. Figure 4 shows the same document scanned using this software. Note: the document in Figure 4 is a screenshot of the scan results. The TopShot camera is placed at the end of an arm that the user must lift before scanning. The arm is just the right length to allow the camera to aim and focus properly. Since the position of the camera arm is critical to the success of capturing images, it has a detent in the fully lifted position. Uses: The TopShot camera arm looks like a handle. It looks so much like a handle that one has to resist the temptation to lift the MFP with it. Because of this, the camera arm is designed to be light enough that it feels too weak to lift the device. The panel on the device where the camera is mounted is also designed to be flexible, making it even more obvious that the camera arm is not a handle. This is why HP begins most of the user documentation with a warning to resist using the camera arm as a handle. Making the camera arm lighter also lowers its cost, which reduces the cost of the MFP.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**FBXO6** FBXO6: F-box only protein 6 is a protein that in humans is encoded by the FBXO6 gene. This gene encodes a member of the F-box protein family, which is characterized by an approximately 40-amino-acid motif, the F-box. The F-box proteins constitute one of the four subunits of the ubiquitin protein ligase complex called SCFs (SKP1-cullin-F-box), which function in phosphorylation-dependent ubiquitination. The F-box proteins are divided into 3 classes: Fbws containing WD-40 domains, Fbls containing leucine-rich repeats, and Fbxs containing either different protein-protein interaction modules or no recognizable motifs. The protein encoded by this gene belongs to the Fbxs class, and its C-terminal region is highly similar to that of rat NFB42 (neural F-box 42 kDa), which may be involved in the control of the cell cycle.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Radial fossa** Radial fossa: The radial fossa is a slight depression found on the humerus above the front part of the capitulum. It receives the anterior border of the head of the radius when the forearm is flexed. Structure: The joint capsule of the elbow attaches to the humerus just proximal to the radial fossa.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Surfactant–albumin ratio** Surfactant–albumin ratio: The surfactant–albumin ratio is a test for assessing fetal lung maturity. The test, though no longer commercially available, used an automatic analyzer to measure the polarized fluorescent light emitted from a sample of amniotic fluid that had been challenged with a fluorescent probe that interacted competitively with both lecithin (phosphatidylcholine) and albumin in such a way that direct quantitative measurements of both substances could be attained. A higher amount of lecithin relative to albumin is indicative of lung maturity (and thus survival of the baby). When this test was still used in practice, the Standards of Laboratory Practice set the threshold for lung maturity at 55 mg of lecithin per gram of albumin.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Advanced oxidation process** Advanced oxidation process: Advanced oxidation processes (AOPs), in a broad sense, are a set of chemical treatment procedures designed to remove organic (and sometimes inorganic) materials in water and wastewater by oxidation through reactions with hydroxyl radicals (·OH). In real-world applications of wastewater treatment, however, this term usually refers more specifically to a subset of such chemical processes that employ ozone (O3), hydrogen peroxide (H2O2) and/or UV light. Description: AOPs rely on in-situ production of highly reactive hydroxyl radicals (·OH). These reactive species are the strongest oxidants that can be applied in water and can oxidize virtually any compound present in the water matrix, often at a diffusion-controlled reaction speed. Consequently, ·OH reacts unselectively once formed, and contaminants are quickly and efficiently fragmented and converted into small inorganic molecules. Hydroxyl radicals are produced with the help of one or more primary oxidants (e.g. ozone, hydrogen peroxide, oxygen) and/or energy sources (e.g. ultraviolet light) or catalysts (e.g. titanium dioxide). Precise, pre-programmed dosages, sequences and combinations of these reagents are applied in order to obtain a maximum ·OH yield. In general, when applied in properly tuned conditions, AOPs can reduce the concentration of contaminants from several hundred ppm to less than 5 ppb, significantly bringing down COD and TOC, which has earned them the title of "water treatment processes of the 21st century". The AOP procedure is particularly useful for cleaning biologically toxic or non-degradable materials such as aromatics, pesticides, petroleum constituents, and volatile organic compounds in wastewater. Additionally, AOPs can be used to treat the effluent of secondary treatment, a stage which is then called tertiary treatment. The contaminant materials are largely converted into stable inorganic compounds such as water, carbon dioxide and salts, i.e. they undergo mineralization. A goal of wastewater purification by means of AOP procedures is the reduction of chemical contaminants and toxicity to such an extent that the cleaned wastewater may be reintroduced into receiving streams or, at least, into a conventional sewage treatment plant. Description: Although oxidation processes involving ·OH have been in use since the late 19th century (such as Fenton's reagent, which was used as an analytical reagent at that time), the utilization of such oxidative species in water treatment did not receive adequate attention until Glaze et al. suggested the possible generation of ·OH "in sufficient quantity to affect water purification" and defined the term "Advanced Oxidation Processes" for the first time in 1987. AOPs have still not been put into commercial use on a large scale (especially in developing countries), mostly because of the relatively high associated costs. Nevertheless, their high oxidative capability and efficiency make AOPs a popular technique for tertiary treatment, in which the most recalcitrant organic and inorganic contaminants must be eliminated. The increasing interest in water reuse and more stringent regulations regarding water pollution are currently accelerating the implementation of AOPs at full scale. Description: There are roughly 500 commercialized AOP installations around the world at present, mostly in Europe and the United States.
Other countries, like China, are showing increasing interest in AOPs. The reaction using H2O2 for the formation of ·OH is carried out in an acidic medium (pH 2.5–4.5) and at low temperature (30 °C – 50 °C), in a safe and efficient way, using optimized catalyst and hydrogen peroxide formulations.

Chemical principles: Generally speaking, the chemistry in AOPs can be divided into three parts: (1) formation of ·OH; (2) initial attacks on target molecules by ·OH and their breakdown into fragments; (3) subsequent attacks by ·OH until ultimate mineralization. The mechanism of ·OH production (part 1) depends strongly on the sort of AOP technique that is used. For example, ozonation, UV/H2O2, photocatalytic oxidation and Fenton's oxidation rely on different mechanisms of ·OH generation:

UV/H2O2:
H2O2 + UV → 2 ·OH (homolytic cleavage of the O–O bond of H2O2 leads to the formation of two ·OH radicals)

UV/HOCl:
HOCl + UV → ·OH + Cl·

Ozone-based AOP:
O3 + HO− → HO2− + O2 (reaction between O3 and a hydroxyl ion leads to the formation of H2O2, in charged form)
O3 + HO2− → HO2· + O3−· (a second O3 molecule reacts with the HO2− to produce the ozonide radical)
O3−· + H+ → HO3· (this radical gives ·OH upon protonation)
HO3· → ·OH + O2
(The reaction steps presented here are just a part of the reaction sequence; see the reference for more details.)

Fenton-based AOP:
Fe2+ + H2O2 → Fe3+ + HO· + OH− (initiation of Fenton's reagent)
Fe3+ + H2O2 → Fe2+ + HOO· + H+ (regeneration of the Fe2+ catalyst)
H2O2 → HO· + HOO· + H2O (self-scavenging and decomposition of H2O2)
(The reaction steps presented here are just a part of the reaction sequence; see the reference for more details.)

Photocatalytic oxidation with TiO2:
TiO2 + UV → e− + h+ (irradiation of the photocatalytic surface leads to an excited electron (e−) and an electron gap (h+))
Ti(IV) + H2O ⇌ Ti(IV)-H2O (water adsorbs onto the catalyst surface)
Ti(IV)-H2O + h+ ⇌ Ti(IV)-·OH + H+ (the highly reactive electron gap reacts with water)
(The reaction steps presented here are just a part of the reaction sequence; see the reference for more details.)

Currently there is no consensus on the detailed mechanisms in part 3, but researchers have cast light on the processes of initial attack in part 2. In essence, ·OH is a radical species and behaves as a highly reactive electrophile; the two supposed types of initial attack are hydrogen abstraction and addition. The following scheme, adopted from a technical handbook and later refined, describes a possible mechanism of the oxidation of benzene by ·OH.

Chemical principles: Scheme 1. Proposed mechanism of the oxidation of benzene by hydroxyl radicals. The first and second steps are electrophilic additions that break the aromatic ring in benzene (A) and form two hydroxyl groups (–OH) in intermediate C. Later, an ·OH abstracts a hydrogen atom from one of the hydroxyl groups, producing a radical species (D) that is prone to undergo rearrangement to form a more stable radical (E). E, in turn, is readily attacked by ·OH and eventually forms 2,4-hexadiene-1,6-dione (F).

Chemical principles: As long as there are sufficient ·OH radicals, subsequent attacks on compound F will continue until the fragments are all converted into small and stable molecules like H2O and CO2 in the end, but such processes may still be subject to a myriad of possible and partially unknown mechanisms.
Advantages: AOPs hold several advantages in the field of water treatment: They can effectively eliminate organic compounds in the aqueous phase, rather than collecting or transferring pollutants into another phase. Due to the reactivity of ·OH, it reacts with almost any aqueous pollutant without discriminating; AOPs are therefore applicable in many, if not all, scenarios where many organic contaminants must be removed at the same time. Some heavy metals can also be removed in the form of precipitated M(OH)x. In some AOP designs, disinfection can also be achieved, which makes these AOPs an integrated solution to some water quality problems. Since the complete reduction product of ·OH is H2O, AOPs theoretically do not introduce any new hazardous substances into the water. Current shortcomings: AOPs are not perfect and have several drawbacks. Most prominently, the cost of AOPs is fairly high, since a continuous input of expensive chemical reagents is required to maintain the operation of most AOP systems. By their very nature, AOPs require hydroxyl radicals and other reagents in proportion to the quantity of contaminants to be removed. Current shortcomings: Some techniques require pre-treatment of wastewater to ensure reliable performance, which can be costly and technically demanding. For instance, the presence of bicarbonate ions (HCO3−) can appreciably reduce the concentration of ·OH through scavenging processes that yield H2O and a much less reactive species, ·CO3−. As a result, bicarbonate must be removed from the system, or the AOP is compromised. Current shortcomings: It is not cost-effective to use AOPs alone to handle a large amount of wastewater; instead, AOPs should be deployed in the final stage, after primary and secondary treatment have successfully removed a large proportion of contaminants. Ongoing research is also being done on combining AOPs with biological treatment to bring the cost down. Future: Since AOPs were first defined in 1987, the field has witnessed rapid development both in theory and in application. So far, TiO2/UV systems, H2O2/UV systems, and Fenton, photo-Fenton and electro-Fenton systems have received extensive scrutiny. However, there are still many research needs for these existing AOPs. Recent trends are the development of new, modified AOPs that are efficient and economical. In fact, there have been some studies that offer constructive solutions. For instance, doping TiO2 with non-metallic elements could possibly enhance the photocatalytic activity, and implementation of ultrasonic treatment could promote the production of hydroxyl radicals. Modified AOPs such as fluidized-bed Fenton have also shown great potential in terms of degradation performance and economics.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Queen clip** Queen clip: In beekeeping, a queen clip is a small spring-loaded metal or plastic clamshell-shaped clip designed to pick up or contain a queen bee. It has slits in its sides that worker bees can pass through to attend to the queen's needs or to receive queen substance, but the queen bee cannot pass through. When empty, it can be clipped onto some convenient place, such as the edge of the beekeeper's lapel. The queen clip is completely different to the queen mailing/introduction cage which, as the name implies, is employed by queen breeders to mail each single queen to their customer. The customer then uses it to introduce the queen into a hive. The mailing cage has room for the queen and a few escort nurse bees, and some candied honey covering the exit. The hive's worker bees consume the candied honey over a period of a few days to release the queen.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spiral array model** Spiral array model: In music theory, the spiral array model is an extended type of pitch space. A mathematical model involving concentric helices (an "array of spirals"), it represents human perceptions of pitches, chords, and keys in the same geometric space. It was proposed in 2000 by Elaine Chew in her MIT doctoral thesis Toward a Mathematical Model of Tonality. Further research by Chew and others has produced modifications of the spiral array model and applied it to various problems in music theory and practice, such as key finding (symbolic and audio), pitch spelling, tonal segmentation, similarity assessment, and musical humor. The extensions and applications are described in Mathematical and Computational Modeling of Tonality: Theory and Applications. The spiral array model can be viewed as a generalized tonnetz, which maps pitches into a two-dimensional lattice (array) structure. The spiral array wraps the two-dimensional tonnetz into a three-dimensional lattice and models higher-order structures such as chords and keys in the interior of the lattice space. This allows the spiral array model to produce geometric interpretations of relationships between low- and high-level structures. For example, it is possible to model and measure geometrically the distance between a particular pitch and a particular key, both represented as points in the spiral array space. To preserve pitch spelling, because musically A# ≠ Bb in their function and usage, the spiral array does not assume enharmonic equivalence, i.e. it does not fold into a torus. The spatial relationships between pitches, between chords, and between keys agree with those in other representations of tonal space. The model and its real-time algorithms have been implemented in the tonal visualization software MuSA.RT (Music on the Spiral Array . Real-Time) and a free app, MuSA_RT, both of which have been used in music education videos and in live performance. Structure of the spiral array: The model as proposed covers basic pitches, major chords, minor chords, major keys and minor keys, represented on five concentric helices. Starting with a formulation of the pitch helix, inner helices are generated as convex combinations of points on outer ones. For example, the pitches C, G, and E are represented as the Cartesian points P(0), P(1), and P(4) (see definitions in the next section), which outline a triangle. The convex combination of these three points is a point inside the triangle, and represents their center of effect (ce). This interior point, CM(0), represents the C major chord in the spiral array model. Similarly, keys may be constructed from the centers of effect of their I, IV, and V chords. Structure of the spiral array: The outer helix represents pitch classes. Neighboring pitch classes are a music interval of a perfect fifth, and spatially a quarter rotation, apart. The order of the pitch classes can be determined by the line of fifths. For example, C would be followed by G (C and G are a perfect fifth apart), which would be followed by D (G and D are a perfect fifth apart), etc. As a result of this structure, and this is one of the important properties leading to its selection, vertical neighbors are a music interval of a major third apart. Thus, a pitch class and its nearest neighbors form perfect fifth and major third intervals.
Structure of the spiral array: By taking the major triads along the helix in sequence and connecting their centers of effect, a second helix is formed inside the pitch helix, representing the major chords. Similarly, by taking the minor triads and connecting their centers of effect, a third helix is formed, representing the minor chords. The major key helix is formed by the centers of effect of the centers of effect of the I, IV, and V chords. The minor key helix is formed by connecting similar combinations of the i, iv/IV, and V/v chords.

Equations for pitch, chord, and key representations: In Chew's model, the pitch class helix, P, is represented in parametric form by:

P(k) = [ r·sin(k·π/2), r·cos(k·π/2), k·h ]

where k is an integer representing the pitch's distance from C along the line of fifths, r is the radius of the spiral, and h is the "rise" of the spiral.

Equations for pitch, chord, and key representations: The major chord helix, CM, is represented by:

CM(k) = w1·P(k) + w2·P(k+1) + w3·P(k+4)

where w1 ≥ w2 ≥ w3 > 0 and w1 + w2 + w3 = 1. The weights w affect how close the center of effect is to the fundamental, major third, and perfect fifth of the chord. By changing the relative values of these weights, the spiral array model controls how "close" the resulting chord is to the three constituent pitches. Generally in Western music, the fundamental is given the greatest weight in identifying the chord (w1), followed by the fifth (w2), followed by the third (w3).

Equations for pitch, chord, and key representations: The minor chord helix, Cm, is represented by:

Cm(k) = u1·P(k) + u2·P(k+1) + u3·P(k−3)

where u1 ≥ u2 ≥ u3 > 0 and u1 + u2 + u3 = 1. The weights u function similarly to those of the major chord.

Equations for pitch, chord, and key representations: The major key helix, TM, is built from its I, IV, and V chords and is represented by:

TM(k) = ω1·CM(k) + ω2·CM(k+1) + ω3·CM(k−1)

where ω1 ≥ ω2 ≥ ω3 > 0 and ω1 + ω2 + ω3 = 1. Similar to the weights controlling how close constituent pitches are to the center of effect of the chord they produce, the weights ω control the relative effect of the I, IV, and V chords in determining how close they are to the resultant key.

Equations for pitch, chord, and key representations: The minor key helix, Tm, is represented by:

Tm(k) = ν1·CM(k) + ν2·(α·CM(k+1) + (1−α)·Cm(k+1)) + ν3·(β·Cm(k−1) + (1−β)·CM(k−1))

where ν1 ≥ ν2 ≥ ν3 > 0, ν1 + ν2 + ν3 = 1, 0 ≤ α ≤ 1, and 0 ≤ β ≤ 1.
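To make the geometry concrete, here is a small numeric sketch (not from the original text) of the pitch helix and a chord's center of effect. The radius, rise, and weight values are illustrative assumptions only, since the model leaves r, h, and the weights as free parameters.

```python
# Numeric sketch of the spiral array (illustrative parameter choices).
import math

R, H = 1.0, 0.4  # radius r and rise h are free model parameters; values are arbitrary

def pitch(k):
    """P(k): position of the pitch class k fifths above C on the line of fifths."""
    return (R * math.sin(k * math.pi / 2),
            R * math.cos(k * math.pi / 2),
            k * H)

def center_of_effect(points, weights):
    """Convex combination of points -- the 'ce' used for chords and keys."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return tuple(sum(w * p[i] for w, p in zip(weights, points)) for i in range(3))

def major_chord(k, w=(0.6, 0.3, 0.1)):
    """CM(k) = w1*P(k) + w2*P(k+1) + w3*P(k+4), with w1 >= w2 >= w3 > 0."""
    return center_of_effect([pitch(k), pitch(k + 1), pitch(k + 4)], w)

# C major (k = 0): its center of effect lies inside the C-G-E triangle.
print("C          :", pitch(0))
print("G          :", pitch(1))
print("E          :", pitch(4))
print("C major ce :", major_chord(0))
```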
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Michael D. Fox** Michael D. Fox: Michael D. Fox is an American neurologist at Harvard Medical School in Boston, Massachusetts where he holds the Raymond D. Adams Distinguished Chair in Neurology and directs the Center for Brain Circuit Therapeutics at Brigham and Women's Hospital. His research has focused on resting state brain fMRI which uses spontaneous fluctuations in blood oxygenation to map brain networks including the default mode network. He developed the technique lesion network mapping to study the connectivity patterns of brain lesions to help understand the neuroanatomy of a diverse range of processes including addiction, criminality, blindsight, free will and religiosity. Michael D. Fox has been considered among the "World's Most Influential Scientific Minds" by Thomson Reuters since 2014.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Small-angle X-ray scattering** Small-angle X-ray scattering: Small-angle X-ray scattering (SAXS) is a small-angle scattering technique by which nanoscale density differences in a sample can be quantified. This means that it can determine nanoparticle size distributions, resolve the size and shape of (monodisperse) macromolecules, and determine pore sizes, characteristic distances of partially ordered materials, and much more. This is achieved by analyzing the elastic scattering behaviour of X-rays when travelling through the material, recording their scattering at small angles (typically 0.1 – 10°, hence the "small-angle" in its name). It belongs to the family of small-angle scattering (SAS) techniques along with small-angle neutron scattering, and is typically done using hard X-rays with a wavelength of 0.07 – 0.2 nm. Depending on the angular range in which a clear scattering signal can be recorded, SAXS is capable of delivering structural information on dimensions between 1 and 100 nm, and on repeat distances in partially ordered systems of up to 150 nm. USAXS (ultra-small-angle X-ray scattering) can resolve even larger dimensions, as the smaller the recorded angle, the larger the object dimensions that are probed. Small-angle X-ray scattering: SAXS and USAXS belong to a family of X-ray scattering techniques that are used in the characterization of materials. In the case of biological macromolecules such as proteins, the advantage of SAXS over crystallography is that a crystalline sample is not needed. Furthermore, the properties of SAXS allow investigation of conformational diversity in these molecules. Nuclear magnetic resonance spectroscopy methods encounter problems with macromolecules of higher molecular mass (> 30–40 kDa). However, owing to the random orientation of dissolved or partially ordered molecules, the spatial averaging leads to a loss of information in SAXS compared to crystallography. Applications: SAXS is used for the determination of the microscale or nanoscale structure of particle systems in terms of such parameters as averaged particle sizes, shapes, distribution, and surface-to-volume ratio. The materials can be solid or liquid, and they can contain solid, liquid or gaseous domains (so-called particles) of the same or another material in any combination. Not only particles, but also the structure of ordered systems like lamellae and fractal-like materials can be studied. The method is accurate, non-destructive and usually requires only a minimum of sample preparation. Applications are very broad and include colloids of all types (including interpolyelectrolyte complexes, micelles, microgels, liposomes and polymersomes), metals, cement, oil, polymers, plastics, proteins, foods and pharmaceuticals, and can be found in research as well as in quality control. The X-ray source can be a laboratory source or synchrotron light, which provides a higher X-ray flux. Applications: Resonant small-angle X-ray scattering: It is possible to enhance the X-ray scattering yield by matching the energy of the X-ray source to a resonant absorption edge, as is done for resonant inelastic X-ray scattering. Unlike standard RIXS measurements, the scattered photons are considered to have the same energy as the incident photons. SAXS instruments: In a SAXS instrument, a monochromatic beam of X-rays is brought to a sample from which some of the X-rays scatter, while most simply go through the sample without interacting with it.
The scattered X-rays form a scattering pattern which is then detected at a detector, typically a 2-dimensional flat X-ray detector situated behind the sample, perpendicular to the direction of the primary beam that initially hit the sample. The scattering pattern contains the information on the structure of the sample. SAXS instruments: The major problem that must be overcome in SAXS instrumentation is the separation of the weak scattered intensity from the strong main beam. The smaller the desired angle, the more difficult this becomes. The problem is comparable to one encountered when trying to observe a weakly radiant object close to the sun, like the sun's corona. Only if the moon blocks out the main light source does the corona become visible. Likewise, in SAXS the non-scattered beam that merely travels through the sample must be blocked, without blocking the closely adjacent scattered radiation. Most available X-ray sources produce divergent beams, and this compounds the problem. In principle the problem could be overcome by focusing the beam, but this is not easy when dealing with X-rays and was previously not done except on synchrotrons, where large bent mirrors can be used. This is why most laboratory small-angle devices rely on collimation instead. SAXS instruments: Laboratory SAXS instruments can be divided into two main groups: point-collimation and line-collimation instruments. Point-collimation instruments: Point-collimation instruments have pinholes that shape the X-ray beam to a small circular or elliptical spot that illuminates the sample. Thus the scattering is centro-symmetrically distributed around the primary X-ray beam, and the scattering pattern in the detection plane consists of circles around the primary beam. Owing to the small illuminated sample volume and the wastefulness of the collimation process—only those photons are allowed to pass that happen to fly in the right direction—the scattered intensity is small, and therefore the measurement time is on the order of hours, or days in the case of very weak scatterers. If focusing optics like bent mirrors or bent monochromator crystals, or collimating and monochromating optics like multilayers, are used, measurement time can be greatly reduced. Point-collimation allows the orientation of non-isotropic systems (fibres, sheared liquids) to be determined. SAXS instruments: Line-collimation instruments: Line-collimation instruments restrict the beam only in one dimension (rather than two as for point collimation) so that the beam cross-section is a long but narrow line. The illuminated sample volume is much larger compared to point-collimation, and the scattered intensity at the same flux density is proportionally larger. Thus measuring times with line-collimation SAXS instruments are much shorter compared to point-collimation and are in the range of minutes. A disadvantage is that the recorded pattern is essentially an integrated superposition (a self-convolution) of many adjacent pinhole patterns. The resulting smearing can be easily removed using model-free algorithms or deconvolution methods based on Fourier transformation, but only if the system is isotropic. Line collimation is of great benefit for any isotropic nanostructured materials, e.g. proteins, surfactants, particle dispersions and emulsions. SAXS instrument manufacturers: SAXS instrument manufacturers include Anton Paar, Austria; Bruker AXS, Germany; Hecus X-Ray Systems Graz, Austria; Malvern Panalytical, the Netherlands; Rigaku Corporation, Japan; and Xenocs, France and the United States.
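As a rough back-of-envelope check of the length scales quoted earlier in this article (this estimate is not from the original text), a Bragg-type relation d = λ / (2 sin θ), with 2θ the scattering angle, connects the angular range to the probed dimension; the wavelength below is an assumed Cu Kα value within the stated 0.07 – 0.2 nm range.

```python
# Bragg-type estimate of the dimension probed at a given scattering angle.
import math

wavelength_nm = 0.154  # assumed Cu K-alpha wavelength, within 0.07-0.2 nm

for two_theta_deg in (0.1, 1.0, 10.0):
    theta = math.radians(two_theta_deg / 2)          # half the scattering angle
    d_nm = wavelength_nm / (2 * math.sin(theta))     # d = lambda / (2 sin theta)
    print(f"2theta = {two_theta_deg:5.1f} deg -> probed dimension ~ {d_nm:7.1f} nm")
# The 0.1-10 degree range maps to roughly 1-100 nm, matching the text.
```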
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Infinite group** Infinite group: In group theory, an area of mathematics, an infinite group is a group whose underlying set contains an infinite number of elements. In other words, it is a group of infinite order. Examples: (Z, +), the group of integers with addition, is infinite. Non-discrete Lie groups are infinite; for example, (R, +), the group of real numbers with addition, is an infinite group. The general linear group of order n > 0 over an infinite field is infinite.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RIS (file format)** RIS (file format): RIS is a standardized tag format developed by Research Information Systems, Incorporated (the format name refers to the company) to enable citation programs to exchange data. It is supported by a number of reference managers. Many digital libraries, like IEEE Xplore, Scopus, the ACM Portal, Scopemed, ScienceDirect, SpringerLink, Rayyan, Accordance Bible Software, and online library catalogs can export citations in this format. Citation management applications can export and import citations in this format. Format: The RIS file format—in which each tag consists of two letters, two spaces and a hyphen—is a tagged format for expressing bibliographic citations. According to the specifications, the lines must end with the ASCII carriage return and line feed characters. Note that this is the convention on Microsoft Windows, while in other contemporary operating systems, particularly Unix, the end of line is typically marked by a line feed only. Format: Multiple citation records can be present in a single RIS file. A record ends with an "end record" tag ER  - , with no additional blank lines between records. Example record: This is an example of how the article "Claude E. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27:379–423, July 1948" would be expressed in the RIS file format:

TY  - JOUR
AU  - Shannon, Claude E.
PY  - 1948
DA  - July
TI  - A Mathematical Theory of Communication
T2  - Bell System Technical Journal
SP  - 379
EP  - 423
VL  - 27
ER  - 

Example multi-record format: This is an example of how two citation records would be expressed in a single RIS file. Note that the first record ends with ER  - and the second record begins with TY  - JOUR:

TY  - JOUR
AU  - Shannon, Claude E.
PY  - 1948
DA  - July
TI  - A Mathematical Theory of Communication
T2  - Bell System Technical Journal
SP  - 379
EP  - 423
VL  - 27
ER  - 
TY  - JOUR
T1  - On computable numbers, with an application to the Entscheidungsproblem
A1  - Turing, Alan Mathison
JO  - Proc. of London Mathematical Society
VL  - 47
IS  - 1
SP  - 230
EP  - 265
Y1  - 1937
ER  - 

Tags: There are two major versions of the RIS specification. The second version, introduced near the end of 2011, has different lists of tags for each type of record, sometimes with different meanings. Below is an excerpt of the main RIS tags, from both versions. Except for TY  - and ER  - , the order of tags is free and their inclusion is optional. Type of reference: The type of reference, given after the TY  - tag, may be abbreviated.
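To show how the tagged structure above can be consumed programmatically, here is a minimal hand-rolled reader (an illustrative sketch, not an official parser; dedicated third-party packages such as 'rispy' exist for production use). It splits records on the ER tag and collects repeated tags into lists.

```python
# Minimal RIS reader: each tagged line is 'XX  - value'; ER ends a record.
def parse_ris(text):
    records, current = [], {}
    for line in text.splitlines():
        if len(line) < 6 or line[2:6] != "  - ":
            continue                      # skip lines that are not tagged
        tag, value = line[:2], line[6:].strip()
        if tag == "ER":                   # end-of-record marker
            records.append(current)
            current = {}
        else:
            current.setdefault(tag, []).append(value)
    return records

sample = ("TY  - JOUR\n"
          "AU  - Shannon, Claude E.\n"
          "PY  - 1948\n"
          "TI  - A Mathematical Theory of Communication\n"
          "ER  - \n")
print(parse_ris(sample))
```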
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Argon oxygen decarburization** Argon oxygen decarburization: Argon oxygen decarburization (AOD) is a process primarily used in the making of stainless steel and other high-grade alloys with oxidizable elements such as chromium and aluminium. After initial melting, the metal is transferred to an AOD vessel where it is subjected to three steps of refining: decarburization, reduction, and desulfurization. The AOD process was invented in 1954 by the Lindé Division of the Union Carbide Corporation (which became known as Praxair in 1992). Process: The AOD process is usually divided into three main steps: decarburization, reduction, and desulfurization. Process: Decarburization: Prior to the decarburization step, one more step should be taken into consideration: de-siliconization, which is a very important factor for the refractory lining and further refinement. The decarburization step is controlled by the ratio of oxygen to argon or nitrogen used to remove the carbon from the metal bath. The gas ratios can be stepped through any number of phases to facilitate the reaction. The gases are usually blown through a top lance (oxygen only) and tuyeres in the sides/bottom (oxygen with an inert gas shroud). The stages of blowing remove carbon by the combination of oxygen and carbon forming CO gas:

4 Cr(bath) + 3 O2 → 2 Cr2O3(slag)
Cr2O3(slag) + 3 C(bath) → 3 CO(gas) + 2 Cr(bath)

To drive the reaction toward the formation of CO, the partial pressure of CO is lowered using argon or nitrogen. Since the AOD vessel is not externally heated, the blowing stages are also used for temperature control: the burning of carbon increases the bath temperature. By the end of this process, around 97% of the Cr is retained in the steel. Process: Reduction: After the desired carbon level and temperature have been reached, the process moves to reduction. Reduction recovers the oxidized elements, such as chromium, from the slag. To achieve this, alloy additions are made with elements that have a higher affinity for oxygen than chromium, using either a silicon alloy or aluminium. The reduction mix also includes lime (CaO) and fluorspar (CaF2). The addition of lime and fluorspar helps drive the reduction of Cr2O3 and manage the slag, keeping the slag fluid and its volume small. Process: Desulfurization: Desulfurization is achieved by having a high lime concentration in the slag and a low oxygen activity in the metal bath:

S(bath) + CaO(slag) → CaS(slag) + O(bath)

So, additions of lime are made to dilute the sulfur in the metal bath. Also, aluminium or silicon may be added to remove oxygen. Other trimming alloy additions might be made at the end of the step. After the target sulfur levels have been achieved, the slag is removed from the AOD vessel and the metal bath is ready for tapping. The tapped bath is then either sent to a stir station for further chemistry trimming or to a caster for casting. Process: Usually, the desulfurization step is the first step of the process. History: The AOD process has a significant place in the history of steelmaking, introducing a transformative method for refining stainless steel and shaping the industry's landscape. 1960s: The development of AOD technology began in the 1960s as an alternative to traditional steelmaking methods. The process was initially introduced by American chemical companies who aimed to refine stainless steel more efficiently and economically. Late 1960s: In the late 1960s, the AOD process gained recognition for its ability to remove carbon efficiently, achieving lower carbon levels than other refining methods.
It also offered the advantage of being able to produce stainless steel with low carbon content, making it suitable for various applications. 1970s: During the 1970s, the AOD process underwent further refinements and improvements. Steel companies in Europe and the United States increasingly adopted the AOD method in their operations, attracted by its flexibility and ability to produce high-quality stainless steel. 1980s: In the 1980s, the AOD process became widely accepted as a standard refining method for stainless steel worldwide. Its advantages, such as high metallic yields, precise control over chemical composition, carbon control, desulfurization capabilities, and cleaner metal production, contributed to its popularity. Present day: Today, the AOD process remains a prominent method in the stainless steel industry. It offers steelmakers greater flexibility in raw material selection, enabling the use of cost-effective inputs and ensuring accurate and consistent results. The process has also contributed to increased production capacity with relatively small capital investments compared to conventional electric furnace methods. Additional uses: In addition to its primary application in the production of stainless steel, various additional uses have been found for AOD across different industries and materials. Additional uses: Carbon capture and utilization: AOD slag has shown promising potential for use as a carbon-capture construction material due to its high capacity for CO2 and its low cost. Carbonation curing, a process utilizing CO2 as a curing agent in concrete manufacturing, enhances the chemical properties of stainless steel slag by stabilizing it. During carbonation, the γ-C2S in the slag reacts with CO2 to produce compounds like calcite and silica gel, resulting in increased compressive strength and improved durability of cementitious materials. The incorporation of AOD slag as a replacement material in ordinary Portland cement (OPC) during carbonation curing has been studied, demonstrating positive effects on strength and reduced porosity. Additional uses: Cementitious activity and modifiers: AOD slag exhibits cementitious activity, but its properties can be changed by modifiers. Studies have focused on the impact of modifiers such as B2O3 and P2O5 on preventing the crystal transition of β-C2S and improving the cementitious activity of the slag. The addition of B2O3 and P2O5 has shown curing effects and increased compressive strength. These findings suggest that proper selection of modifiers can enhance the performance of stainless steel slag in cementitious applications. Additional uses: Chromium leachability and carbonation: Another aspect of AOD slag research is its carbonation potential and its impact on chromium leachability. Carbonation of the dicalcium silicate in AOD slag leads to the formation of various compounds, including amorphous calcium carbonate, crystalline calcite, and silica gel. The carbonation ratio of the slag affects the mineral phases, which subsequently influence chromium leachability. Optimal carbonation ratios have been identified to minimize chromium leaching risks during carbonation-related production activities.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lax–Wendroff theorem** Lax–Wendroff theorem: In computational mathematics, the Lax–Wendroff theorem, named after Peter Lax and Burton Wendroff, states that if a conservative numerical scheme for a hyperbolic system of conservation laws converges, then it converges towards a weak solution.
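For illustration (this example is not part of the theorem's statement), the sketch below implements one conservative scheme—Lax–Friedrichs flux differencing for the inviscid Burgers equation—in the flux form u_i^{n+1} = u_i^n − (Δt/Δx)(F_{i+1/2} − F_{i−1/2}) that the theorem's hypothesis requires. Discrete conservation of the total of u, checked at the end, is the structural property that conservative schemes guarantee.

```python
# Conservative (flux-form) Lax-Friedrichs scheme for Burgers' equation
# u_t + (u^2/2)_x = 0 on a periodic domain.
import numpy as np

def flux(u):
    return 0.5 * u * u                      # f(u) = u^2 / 2

def lax_friedrichs_step(u, dt, dx):
    up = np.roll(u, -1)                     # u_{i+1} (periodic wrap)
    # Numerical flux F_{i+1/2} with Lax-Friedrichs dissipation:
    f_plus = 0.5 * (flux(u) + flux(up)) - 0.5 * dx / dt * (up - u)
    f_minus = np.roll(f_plus, 1)            # F_{i-1/2}
    return u - dt / dx * (f_plus - f_minus) # flux-differencing update

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2 * np.pi * x) + 0.5
dx = x[1] - x[0]
dt = 0.4 * dx                               # CFL-limited time step (max |u| = 1.5)

total_before = u.sum() * dx
for _ in range(200):
    u = lax_friedrichs_step(u, dt, dx)
# The flux differences telescope, so the discrete total is conserved
# up to rounding error even after a shock forms.
print("mass before:", total_before, " after:", u.sum() * dx)
```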
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dbndns** Dbndns: dbndns was a fork of the djbdns software package, maintained by the Debian Project, made possible by the release of djbdns to the public domain. The fork was created to add many common patches to djbdns. Most notably, this now includes IPv6 support. Previously, it was necessary to get a special 'djbdns-installer' package that downloaded the djbdns source from the author's site and applied a patch, but the free-software status means this is no longer necessary, and Debian can directly carry the source and patches, producing a redistributable binary deb package. This package had filtered through into Ubuntu. Dbndns is no longer maintained as part of the Debian distribution.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Group-stack** Group-stack: In algebraic geometry, a group-stack is an algebraic stack whose categories of points have group structures or even groupoid structures in a compatible way. It generalizes a group scheme, which is a scheme whose sets of points have group structures in a compatible way. Examples: A group scheme is a group-stack. More generally, a group algebraic-space, an algebraic-space analog of a group scheme, is a group-stack. Over a field k, a vector bundle stack 𝒱 on a Deligne–Mumford stack X is a group-stack such that there is a vector bundle V over k on X and a presentation V → 𝒱. It has an action by the affine line A1 corresponding to scalar multiplication. A Picard stack is an example of a group-stack (or groupoid-stack). Actions of group-stacks: The definition of a group action of a group-stack is a bit tricky. First, given an algebraic stack X and a group scheme G on a base scheme S, a right action of G on X consists of:

a morphism σ: X × G → X,
(associativity) a natural isomorphism σ ∘ (m × 1_X) ≅ σ ∘ (1_X × σ), where m is the multiplication on G,
(identity) a natural isomorphism 1_X ≅ σ ∘ (1_X × e), where e: S → G is the identity section of G,

that satisfy the typical compatibility conditions. Actions of group-stacks: If, more generally, G is a group-stack, one then extends the above using local presentations.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**On-again, off-again relationship** On-again, off-again relationship: An on-again, off-again relationship (also known as an on-and-off relationship) is a form of interpersonal relationship between two people whose breakups are followed by reconciliation, perpetuating a cycle. Relationship reconciliation is defined as the process in which partners attempt to heal the hurt or wrong that was done and move on from it in order to progress forward in the relationship. This process of breaking up and getting back together can be short-term or long-term. These relationships differ from non-cyclical relationships in that on-again, off-again relationships are between partners that have pre-existing knowledge of and experiences with each other. In addition, on-and-off partners often report more relationship uncertainty, questioning the meaning of the relationship, its strength, and its future. Despite this, a 2009 study published in the Personal Relationships Journal revealed that nearly two-thirds of participants have experienced being in an on-again, off-again relationship. According to Professor Rene Dailey at the University of Texas at Austin, there are no specific relationship dispositions that make someone more or less likely to be in an on-again, off-again relationship. Dailey defines relationship disposition as the way that individuals approach their relationship with regard to its purpose and functioning. This includes attachment style, destiny and growth beliefs, and communal orientation. In her 2020 study of on-and-off and non-cyclical partners, results did not show on-and-off partners to be more avoidant, to believe in destiny more strongly, or to have less communal orientation than the non-cyclical couples. Causes: A 2011 study published in the Journal of Social Psychology revealed that lingering feelings and continued attachment were the most common reasons why partners decided to get back together. Furthermore, reconciliation often was initiated by one person. While the other partner may not have strongly wanted to get back together, familiarity with the relationship may have led to the decision to reconcile. Other common causes for renewal of these relationships include changing perceptions, dissatisfaction with alternative partners, missing companionship, sympathy for the partner, and investment. Those who experienced on-and-off patterns also tended to show strong beliefs that love overcomes all obstacles and that there is only one true partner for each person. In "Relationship Churning in Emerging Adulthood: On/Off Relationships and Sex with an Ex," the authors note that individuals going through this process often look to the positive qualities of the relationship to guide their decisions. Some research also suggests that breaking up can happen more frequently when it is used as a tactic to attain what an individual wants, and thus it creates an unhealthy cycle of conflict followed by ending the relationship and getting back together. Impact: Potential drawbacks and risks: On-and-off partners report experiencing more negative aspects of the relationship in comparison to non-cyclical partners. These relationships are often strained by doubt, disappointment, and emotional frustration. Thus, being in an on-again, off-again relationship can damage one's mental health.
Researcher Kale Monk, an assistant professor of human development and family sciences at the University of Missouri, discusses how these types of relationships can have higher rates of abuse, poorer communication, and lower levels of commitment. In a 2013 study analyzing relationship churning in relation to physical violence and verbal abuse, researchers found that relationships with on-and-off patterns are twice as likely to involve reports of physical violence, and half again as likely to involve reports of verbal abuse, as relationships between couples who stably broke up or stayed together. This may arise from the instability that comes with many on-and-off relationships, as there may be a tendency toward quicker escalation and poor communication and relationship skills. Furthermore, on-and-off relationships pose risks in the healing process. Research has shown that partners have more difficulty moving on when continuing this cycle, especially if partners have sex during periods of technically not being together. Partners' feelings of pain may also intensify with such emotionally taxing events. On the other hand, on-and-off patterns can potentially normalize relationship disruptions and reconciliations for future relationships. Because of this, breakups may not have the same impact as they once did. Impact: Potential benefits: Despite this, not all on-again, off-again relationships are considered toxic, as breaking up and reconciling can help a couple communicate better and address the issues in their relationship. On-and-off partners have reported "future relationship knowledge" as the top benefit of these types of relationships. Other benefits include new perspectives, improving the current relationship, and learning more about oneself. Emerging adulthood: This cyclical nature of relationships has proven to be a common part of emerging adulthood for many. From a developmental perspective, this is in some ways expected, as it is a part of exploration in young adulthood. Individuals attempt to learn what they want in future relationships and long-term partners, and in doing so, this time period can be tumultuous, as they are building up experience in relationships. Emerging adulthood: In a 2013 study analyzing relationship instability, published by the National Institutes of Health, researchers reported that half of the young adults in the sample reported reconciliation in their current or most recent relationship. Dating and cohabiting couples in emerging adulthood showed a higher frequency of reconciliation than married couples, in part due to less commitment, less investment, and simply the nature of the relationship. Less committed couples may break up in less extreme circumstances, and thus reconciliations are more likely to occur.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Plasmon coupling** Plasmon coupling: Plasmon coupling is a phenomenon that occurs when two or more plasmonic particles approach each other to a distance below approximately one diameter's length. Upon the occurrence of plasmon coupling, the resonances of the individual particles start to hybridize, and their resonance spectrum peak wavelength will shift (either blueshift or redshift), depending on how surface charge density distributes over the coupled particles. At a single particle's resonance wavelength, the surface charge densities of close particles can be either out of phase or in phase, causing repulsion or attraction and thus leading to an increase (blueshift) or decrease (redshift) of the hybridized mode energy. The magnitude of the shift, which can serve as a measure of plasmon coupling, depends on the interparticle gap as well as the particles' geometry and the plasmonic resonances supported by the individual particles. A larger redshift is usually associated with a smaller interparticle gap and a larger cluster size. Plasmon coupling: Plasmon coupling can also cause the electric field in the interparticle gap to be boosted by several orders of magnitude, far exceeding the field enhancement for a single plasmonic nanoparticle. Many sensing applications, such as surface-enhanced Raman spectroscopy (SERS), utilize the plasmon coupling between nanoparticles to achieve ultralow detection limits. Plasmon ruler: A plasmon ruler is a dimer of two identical plasmonic nanospheres linked together through a polymer, typically DNA or RNA. Based on the universal scaling law between spectral shift and interparticle separation, the nanometer-scale distance can be monitored via the color shifts of the dimer resonance peak. Plasmon rulers are typically used to monitor distance fluctuations below the diffraction limit, between tens of nanometers and a few nanometers. Plasmon coupling microscopy: Plasmon coupling microscopy is a ratiometric widefield imaging approach that allows monitoring of multiple plasmon rulers with high temporal resolution. The entire field of view is imaged simultaneously on two wavelength channels, which correspond to the red and blue flanks of the plasmon ruler resonance. The spectral information of an individual plasmon ruler is expressed in the intensity distribution on the two monitored channels, quantified as R = (I1 − I2)/(I1 + I2). Each R value corresponds to a certain nanometer-scale distance, which can be calculated using computer simulation or generated from experiments.
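The ratiometric readout is simple enough to compute directly; the sketch below (illustrative only, with made-up intensity values) evaluates R = (I1 − I2)/(I1 + I2) for a series of two-channel measurements.

```python
# Ratiometric two-channel readout used in plasmon coupling microscopy.
import numpy as np

def ratiometric_r(i1, i2):
    """Per-measurement ratio R = (I1 - I2) / (I1 + I2) of the two channels."""
    i1 = np.asarray(i1, dtype=float)
    i2 = np.asarray(i2, dtype=float)
    return (i1 - i2) / (i1 + i2)

# Example: as a ruler's resonance red-shifts (e.g. the gap closes),
# intensity moves from the blue-flank channel (I2) into the
# red-flank channel (I1), raising R.
red_channel  = [100.0, 140.0, 180.0]   # I1, arbitrary units
blue_channel = [180.0, 140.0, 100.0]   # I2, arbitrary units
print(ratiometric_r(red_channel, blue_channel))   # approx [-0.286, 0.0, 0.286]
```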
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Roller hockey rankings** Roller hockey rankings: The roller hockey rankings are rankings of national teams and clubs in this sport. Men's national teams ranking: The unofficial national teams ranking is calculated by the Elo rating system. This is the ranking as of 14 July 2019. Women's national teams ranking: Like the men's ranking, the unofficial women's ranking is also calculated using the Elo rating system. This is the ranking as of 14 July 2019. World Skate Europe league ranking: The coefficients of the league ranking take into account the performance of each association's representative teams in European competitions over the past four seasons. The coefficient is calculated by dividing the total points accumulated by the number of participating teams, and serves to determine the number of teams for each country in the Euroleague. This is the ranking as of the end of the 2018–19 season. World Skate Europe men's club ranking: The men's club ranking is made by World Skate Europe for their club competitions. It is determined by the results of the clubs in the Euroleague, the World Skate Europe Cup and the Continental Cup over the past four seasons. This table shows the top 20 as of the end of the 2018–19 season. World Skate Europe women's club ranking: The women's club ranking is also made by World Skate Europe for their club competitions. It is determined by the results of the clubs in the Female League over the past four seasons. This table shows the top 10 as of the end of the 2018–19 season.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**IF-16** IF-16: iF-16 is a 1997 combat flight simulation video game developed by Digital Integration and published by Interactive Magic. Reception: Computer Gaming World gave the game a score of 2.5 out of 5, stating: "iF-16 is essentially a marriage of the most often simulated combat aircraft in history with a slightly tweaked version of the APACHE/HIND engine. It brings almost nothing new to the table."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Takeda (video game)** Takeda (video game): Takeda is a real-time tactics video game based on the life of Takeda Shingen, developed by Magitech Corporation. Sequels: Magitech Corporation has also produced a sequel, Takeda 2, which incorporates more aspects of the generals' individual development, such as leadership. Takeda 3 was completed in February 2009.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**AT&T DSP1** AT&T DSP1: The AT&T DSP1 was a pioneering digital signal processor (DSP) created by Bell Labs. AT&T DSP1: The DSP1 project started in 1977 with a Bell Labs study that recommended creating a large-scale integrated circuit for digital signal processing. The study described a basic DSP architecture with a multiplier/accumulator, an addressing unit, and control; the I/O, data, and control memories were planned to be off-chip until large-scale integration could make a single-chip implementation feasible. The DSP1 specification was completed in 1978, and the first samples were tested in May 1979. This first implementation was a single-chip DSP containing all the functional elements found in today's DSPs, including a multiplier–accumulator (MAC), a parallel addressing unit, control logic, control memory, data memory, and I/O. It was designed with a 20-bit fixed-point data format and 16-bit coefficients and instructions, implemented in a 4.5-micrometre DRAM process technology. By October 1979, other Bell Labs groups had begun developing products using the DSP1, most notably as a key component in AT&T's 5ESS switch.
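To make the multiplier–accumulator concrete, here is a hedged Python sketch of the kind of fixed-point MAC loop such an architecture executes, using the 20-bit data and 16-bit coefficient widths mentioned above; the 36-bit accumulator width and the saturation behavior are illustrative assumptions, not documented DSP1 semantics.

```python
DATA_BITS = 20   # data word width cited for the DSP1
COEF_BITS = 16   # coefficient width cited for the DSP1

def saturate(value: int, bits: int) -> int:
    """Clamp a signed integer to a `bits`-wide two's-complement range."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, value))

def fir_mac(samples, coeffs, acc_bits: int = 36) -> int:
    """Fixed-point FIR dot product: acc += sample * coeff.

    acc_bits=36 is an assumed accumulator width (20 + 16), not a DSP1 datum.
    """
    acc = 0
    for x, c in zip(samples, coeffs):
        x = saturate(x, DATA_BITS)   # model 20-bit data registers
        c = saturate(c, COEF_BITS)   # model 16-bit coefficient registers
        acc = saturate(acc + x * c, acc_bits)
    return acc

# Example: a 3-tap filter applied to three input samples.
print(fir_mac([1000, -2000, 3000], [16384, 8192, 4096]))
```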
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Meningococcal vaccine** Meningococcal vaccine: Meningococcal vaccine refers to any vaccine used to prevent infection by Neisseria meningitidis. Different versions are effective against some or all of the following types of meningococcus: A, B, C, W-135, and Y. The vaccines are between 85 and 100% effective for at least two years. They result in a decrease in meningitis and sepsis among populations where they are widely used. They are given either by injection into a muscle or just under the skin. The World Health Organization recommends that countries with a moderate or high rate of disease, or with frequent outbreaks, routinely vaccinate. In countries with a low risk of disease, it recommends that high-risk groups be immunized. In the African meningitis belt, efforts to immunize all people between the ages of one and thirty with the meningococcal A conjugate vaccine are ongoing. In Canada and the United States, the vaccines effective against four types of meningococcus (A, C, W, and Y) are recommended routinely for teenagers and others who are at high risk. Saudi Arabia requires vaccination with the quadrivalent vaccine for international travellers to Mecca for the Hajj. Meningococcal vaccines are generally safe. Some people develop pain and redness at the injection site. Use in pregnancy appears to be safe. Severe allergic reactions occur in less than one in a million doses. The first meningococcal vaccine became available in the 1970s. It is on the World Health Organization's List of Essential Medicines. Inspired by the response to the 1997 outbreak in Nigeria, the WHO, Médecins Sans Frontières, and other groups created the International Coordinating Group on Vaccine Provision for Epidemic Meningitis Control, which manages global response strategy. ICGs have since been created for other epidemic diseases. Types: Neisseria meningitidis has 13 clinically significant serogroups, classified according to the antigenic structure of their polysaccharide capsule. Six serogroups, A, B, C, Y, W-135, and X, are responsible for virtually all cases of the disease in humans. Types: Quadrivalent (serogroups A, C, W-135, and Y) Three vaccines are available in the United States to prevent meningococcal disease caused by serogroups A, C, W-135, and Y, all of them conjugate vaccines (MCV-4): Menactra, Menveo, and MenQuadfi. The pure polysaccharide vaccine Menomune (MPSV-4) was discontinued in the United States in 2017. Menveo and MenQuadfi are approved for medical use in the European Union. Types: Menactra and Menveo The first meningococcal conjugate vaccine (MCV-4), Menactra, was licensed in the U.S. in 2005 by Sanofi Pasteur; Menveo was licensed in 2010 by Novartis. Both MCV-4 vaccines have been approved by the Food and Drug Administration (FDA) for people 2 through 55 years of age. Menactra received FDA approval for use in children as young as 9 months in April 2011, while Menveo received FDA approval for use in children as young as two months in August 2013. The Centers for Disease Control and Prevention (CDC) has not made recommendations for or against their use in children less than two years old. Types: MenQuadfi MenQuadfi, manufactured by Sanofi Pasteur, was approved by the FDA in April 2020 for use in individuals two years of age and older. Types: Menomune The meningococcal polysaccharide vaccine (MPSV-4), Menomune, has been available since the 1970s.
It may be used if MCV-4 is not available, and it is the only meningococcal vaccine licensed for people older than 55. Information about who should receive the meningococcal vaccine is available from the CDC. Types: Nimenrix Nimenrix (developed by GlaxoSmithKline and later acquired by Pfizer) is a quadrivalent conjugate vaccine against serogroups A, C, W-135, and Y. In April 2012, Nimenrix was approved by the European Medicines Agency as the first quadrivalent vaccine against invasive meningococcal disease to be administered as a single dose in those over the age of one year. In 2016, the agency approved the vaccine for infants six weeks of age and older, and it has also been approved in other countries, including Canada and Australia. It is not licensed in the United States. Types: Mencevax Mencevax (GlaxoSmithKline) and NmVac4-A/C/Y/W-135 (JN-International Medical Corporation) are used worldwide but have not been licensed in the United States. Types: Limitations The duration of immunity mediated by Menomune (MPSV-4) is three years or less in children aged under five because it does not generate memory T cells. Attempting to overcome this problem by repeated immunization results in a diminished, not increased, antibody response, so boosters are not recommended with this vaccine. As with all polysaccharide vaccines, Menomune does not produce mucosal immunity, so people can still become colonised with virulent strains of meningococcus, and no herd immunity can develop. For this reason, Menomune is suitable for travellers requiring short-term protection, but not for national public health prevention programs. Types: Menveo and Menactra contain the same antigens as Menomune, but the antigens are conjugated to a diphtheria toxoid polysaccharide–protein complex, resulting in anticipated enhanced duration of protection, increased immunity with booster vaccinations, and effective herd immunity. Types: Endurance A study published in March 2006 comparing the two kinds of vaccines found that 76% of subjects still had passive protection three years after receiving MCV-4 (63% protective compared with controls), but only 49% had passive protection after receiving MPSV-4 (31% protective compared with controls). As of 2010, there was limited evidence that any of the current conjugate vaccines offer continued protection beyond three years; studies are ongoing to determine the actual duration of immunity and the subsequent need for booster vaccinations. The CDC offers recommendations regarding who should get booster vaccinations. Types: Bivalent (serogroups C and Y) In June 2012, the FDA approved a combination vaccine against two types of meningococcal disease and Hib disease for infants and children 6 weeks to 18 months old. The vaccine, Menhibrix, is indicated for active immunization to prevent invasive disease caused by Neisseria meningitidis serogroups C and Y and Haemophilus influenzae type b in children 6 weeks through 18 months of age, and it was the first meningococcal vaccine that could be given to infants as young as six weeks old. Types: Serogroup A A vaccine called MenAfriVac has been developed through a program called the Meningitis Vaccine Project and has the potential to prevent outbreaks of group A meningitis, which is common in sub-Saharan Africa.
Types: Serogroup B Vaccines against serogroup B meningococcal disease have proved difficult to produce and require a different approach from vaccines against other serogroups. Whereas effective polysaccharide vaccines have been produced against serogroups A, C, W-135, and Y, the capsular polysaccharide of the serogroup B bacterium is too similar to human neural adhesion molecules to be a useful target. A number of "serogroup B" vaccines have been produced. Strictly speaking, these are not "serogroup B" vaccines, as they do not aim to produce antibodies to the group B antigen; it would be more accurate to describe them as serogroup-independent vaccines, as they employ different antigenic components of the organism; indeed, some of the antigens are common to different Neisseria species. A vaccine for serogroup B was developed in Cuba in response to a large outbreak of meningitis B during the 1980s. This vaccine was based on artificially produced outer membrane vesicles of the bacterium. The VA-MENGOC-BC vaccine proved safe and effective in randomized double-blind studies, but it was granted a licence only for research purposes in the United States, as political differences limited cooperation between the two countries. Due to a similarly high prevalence of serogroup B meningitis in Norway between 1974 and 1988, Norwegian health authorities developed a vaccine specifically designed for Norwegian children and young adolescents. Clinical trials were discontinued after the vaccine was shown to cover only slightly more than 50% of all cases. Furthermore, lawsuits for damages were filed against the State of Norway by persons affected by serious adverse reactions. Information that the health authorities obtained during the vaccine's development was subsequently passed on to Chiron (now GlaxoSmithKline), which developed a similar vaccine, MeNZB, for New Zealand. A MenB vaccine was approved for use in Europe in January 2013. Following a positive recommendation from the European Union's Committee for Medicinal Products for Human Use, Bexsero, produced by Novartis, received a licence from the European Commission. However, deployment in individual EU member countries still depends on decisions by national governments. In July 2013, the United Kingdom's Joint Committee on Vaccination and Immunisation (JCVI) issued an interim position statement recommending against adoption of Bexsero as part of a routine meningococcal B immunisation program, on the grounds of cost-effectiveness. This decision was reversed in favor of Bexsero vaccination in March 2014. In March 2015, the UK government announced that it had reached agreement with GlaxoSmithKline, which had taken over Novartis's vaccines business, and that Bexsero would be introduced into the UK routine immunization schedule later in 2015. In November 2013, in response to an outbreak of serogroup B meningitis on the campus of Princeton University, the acting head of the Centers for Disease Control and Prevention (CDC) meningitis and vaccine-preventable diseases branch told NBC News that the agency had authorized emergency importation of Bexsero to stop the outbreak. Bexsero was subsequently approved by the FDA in February 2015 for use in individuals 10 through 25 years of age. In October 2014, Trumenba, a serogroup B vaccine produced by Pfizer, was approved by the FDA for use in individuals 10 through 25 years of age. Types: Serogroup X The occurrence of serogroup X has been reported in North America, Europe, Australia, and West Africa.
There is no vaccine to protect against serogroup X N. meningitidis disease. Side effects: Common side effects include pain and redness around the site of injection (in up to 50% of recipients). A small percentage of people develop a mild fever. A small proportion of people develop a severe allergic reaction. In 2016, Health Canada warned of an increased risk of anemia or hemolysis in people treated with eculizumab (Soliris); the highest risk was when individuals "received a dose of Soliris within 2 weeks after being vaccinated with Bexsero". Despite initial concerns about Guillain-Barré syndrome, subsequent studies in 2012 showed no increased risk of GBS after meningococcal conjugate vaccination. Travel requirements: Travellers who wish to enter or leave certain countries or territories must be vaccinated against meningococcal meningitis, preferably 10–14 days before crossing the border, and be able to present a vaccination record/certificate at border checks. Countries with required meningococcal vaccination for travellers include The Gambia, Indonesia, Lebanon, Libya, the Philippines and, most importantly and extensively, Saudi Arabia for Muslims visiting or working in Mecca during the Hajj or Umrah pilgrimages. For some countries in the African meningitis belt, vaccinations prior to entry are not required but are highly recommended.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded