**Unlimited Cities** Unlimited Cities: Unlimited Cities (French: Villes sans limites) is a set of methods and apps that facilitate civil society involvement in urban transformations. Unlimited Cities DIY is an open-source upgrade of the application, linked with the New Urban Agenda of the United Nations "Habitat III" Conference. Use: The apps run on mobile devices (tablets and smartphones) and let people express their views on the evolution of a neighbourhood before future developments are outlined by professionals. "Through a simple interface, they make up a realistic representation of their expectations for a given site. Six cursors can be played with: urban density, nature, mobility, neighbourhood life, digital, creativity/art in the city. Designed by the UFO urban planning agency in partnership with the HOST architectural and urban planning firm, the apps provide upstream information to urban project developers, as well as to people, to query their design wishes and thus to appropriate the future project." The Unlimited Cities method thus gives civil society the opportunity to act and co-construct with professional urban developers without being subject to solutions predetermined by experts and public authorities. Use: According to one of its creators, the urban architect Alain Renk: "Today the future of cities and metropoles lies less in the poetic, imaginary and solitary techniques found in Jules Verne's novels than in the capacities offered by digital mediation to imagine, represent and openly share knowledge through collective intelligence, offering opportunities to consider less standardized and prioritized lifestyles, freer creativity, the shorter design and manufacturing circuits of circular economies and, ultimately, the preservation of common goods." Background: The project originated in 2002 at the ArchiLab international meetings in Orléans, with the publication of the book Construire la Ville Complexe? (Building the Complex City?), published by Jean-Michel Place, a well-known publisher in the world of architecture. It continued in 2007 with research using digital urban ecosystem simulators for the Plan Construction Architecture du Ministère du développement durable (Construction Architecture Plan of the Ministry of Sustainable Development). A joint interview with Alain Renk and the sociologist Marc Augé discussed the potential of simulators to harness collective intelligence. Background: In 2009, the HOST agency, responsible for the creation of the civic-tech UFO, obtained certification from the Cap Digital and Advancity competitiveness clusters to launch the UrbanD collaborative research programme, intended to lay the theoretical and technical basis of collaborative software for evaluating and representing the quality of urban life to inform decisions. This three-year programme (from early 2010 to late 2012) was the basis for the creation of the Unlimited Cities apps and required an €800,000 budget, half of which was funded by European Regional Development Fund (ERDF) subsidies. In June 2011 a beta version of Unlimited Cities PRO was presented in Paris at the Futur en Seine festival, with real-world tests with visitors; it was then shown in Tokyo in November and in Rio de Janeiro in December of the same year.
Background: On October 2, 2012, the first operational deployment was carried out by the town hall of Rennes: "the first tests were carried out in the area around the TGV train station and the prison demolition site in Rennes, and we discovered that when able to build what they wanted, users quickly set aside reluctance about things such as urban density, and they conceived urban projects that often went against conventional wisdom. The idea of urban density and tall buildings is often rejected, but it is accepted as soon as people can adapt it to their own logic." Background: The tool was then implemented in Montpellier in June 2013, and in June, July and August of that year in Évreux, where UFO worked on the conversion of the former Saint-Louis Hospital downtown. In June 2015 in Grenoble, "the application is used to imagine, jointly with the population, solutions to give more visibility to the transport offer. It is a different way of working. We no longer turn only to the planners, but go directly to local residents and ask for their opinion, their vision. The purpose is obviously to increase bus ridership, but it is also to have people satisfied with the arrangements put in place." The first cities to use Unlimited Cities PRO attracted attention thanks to the mediators' ability to engage people in the street, often off guard, with an appealing, playful tablet. Their presence in the neighbourhoods for several weeks, right where people live and work, produced far higher participation (over 1,600 people in Évreux) than conventional methods of consultation, which struggle to draw people to venues set aside for the purpose. These achievements aroused the interest of researchers, who analysed changes in the attitudes of urban professionals and citizens. Can we speak of a rebirth of participatory democracy? Are these hyperrealistic images misleading, or on the contrary accessible to all kinds of people? Do the open-source dimension of the collected data and its accessibility in real time help build trust between experts and non-experts? The method has been the topic of several scientific articles and has been honoured with several awards in France, as well as the Open Cities Award from the European Commission. UN-Habitat and Unlimited Cities DIY: The first requests came from Rio in 2011, from associations wanting to use the software in the favelas, and then recurrently from Africa, South America and India. In parallel, associations and groups in Europe also wanted to be free to deploy the collaborative urban planning tool in their own territories, independently and without needing financing beyond users' support. UN-Habitat and Unlimited Cities DIY: In June 2013 the civic-tech UFO presented the Unlimited Cities DIY prototype at the Futur en Seine festival: an open-source, free and easy-to-implement upgrade.
Presentations of the beta version then followed in quick succession: September 2013 in Nantes for the Ecocity symposium; November 2013 in Barcelona at the Open Cities Award ceremony; January 2014 in Rennes for a meeting at the Institute of Urbanism; March 2014 in Le Havre at a conference on collaborative urbanism; May 2014 in London at the Franco-British symposium on smart cities; July 2014 in Berlin for the Open Knowledge Festival; early October 2014 in Hyderabad, India, for the Metropolis congress; late October 2014 in Wuhan, China, for the conference of the Sino-French ecocity in Caidian; 2015 in Wrocław for the Hacking of the Social Operating System; September 2015 in Lyon at the annual conference of the national federation of planning agencies; and many other workshops that confirmed recurring demand for an open-source version that is easy to implement. UN-Habitat and Unlimited Cities DIY: 2016 saw a strong acceleration in the spread of the open-source version: several workshops were organized with the University of Lyon in April to redesign the campus of the École Centrale (Wikibuilding-campus project), and then again in China several conferences and uses of the software with farmers (Wikibuilding-Village project), with children (Wikibuilding-Natur project), and with students and faculty of the HUST university in Wuhan. UN-Habitat and Unlimited Cities DIY: The first contacts between the UN-Habitat agency and the Unlimited Cities DIY software were made in October 2014 in Hyderabad through the City Resilience Profiling Programme, and then in Barcelona in 2015. The connection was cemented at the subsequent Habitat III Conference. Held every twenty years, the Habitat conferences organized by the UN form a sounding board that accelerates the consideration of major urban issues in public policy. In 2016, the preparatory document for the Habitat III Conference in Quito highlighted the need to move towards urban planning carried out together with civil society. The non-profit organisation "7 Milliards d'urbanistes" (7 billion urban planners) was to be present in Quito to introduce the open-source Unlimited Cities DIY software to delegates of the 197 member countries, so that collaborative urban planning becomes available to the greatest possible number of people. Honours and awards: In 2015, the Wikibuilding project designed for the future Paris Rive Gauche was preselected in the "Réinventer Paris" competition. 2013: winner, Printemps du numérique (Rural TIC); 2013: winner, Territoires innovants (Interconnected); 2013: winner, Open Cities Awards (European Commission); 2011: winner of the call for projects for Futur en Seine (Cap Digital); 2011: nominee, Prix de la croissance verte numérique (Acidd); 2010: selection, Carrefours Innovations & Territoires (CDC). Publications: (fr) Créer virtuellement un urbanisme collectif, by Julie Nicolas and Xavier Crépin, Le Moniteur, No. 5813, April 2015. (fr) L'urbanisme collaboratif, expérience et contexte, by Nancy Ottaviano, GIS Symposium Participation. (fr) Clément Marquet, Nancy Ottaviano and Alain Renk, "Pour une ville contributive", Urbanisme, dossier "Villes numériques, villes intelligentes ?", Autumn 2014, pp. 53–55. (fr) L'appropriation de la ville par le numérique, by Clément Marquet, thesis in progress, Institut Mines-Télécom. (fr) Et si on inventait l'enquête d'imagination publique ?, by Sylvain Rolland, La Tribune hors-série Grand Paris.
Publications: (fr) Villes sans limite, un outil pour stimuler l'imagination publique, by Karim Ben Merien and Xavier Opige, Les Cahiers de l'IAU îdF. (fr) Wikibuilding : l'urbanisme participatif de demain ?, by Ludovic Clerima, Explorimmo, 2015. Alain Renk, "Urban Diversity: Cities Of Differences Create Different Cities", WorldCrunch.com, November 12, 2013 (visited May 28, 2016). (fr) Philippe Gargov, "Samsung et son safari imaginaire : l'urbanisme collaboratif is now mainstream", pop-up-urbain.com, December 2012 (visited June 13, 2016). Radio broadcast, July 8, 2011: Qu'est-ce que la ville numérique ? (The field of the possible), France Culture, 2011.
**Earth Defense Force: Insect Armageddon** Earth Defense Force: Insect Armageddon: Earth Defense Force: Insect Armageddon is a third-person shooter developed by Vicious Cycle Software and published by D3 Publisher for the PlayStation 3, Xbox 360 and Microsoft Windows. It is a spinoff built around the concept of "What if Americans made EDF?" and has no story or setting connection to the numbered series. Plot: The EDF defends a fictional city called New Detroit against an alien invasion aided by bio-engineered terrestrial insects. The player character goes by the name of 'Lightning Alpha', leader of an elite team of fighters known as "Strike Force Lightning". Along with two squadmates, Lightning is usually aided by local EDF fighting forces in the area, though these perform poorly compared to Lightning. Plot: The story is divided into three chapters, each of which is divided into five missions. The first chapter serves as an introduction to the game, while the second chapter has Strike Force Lightning moving across industrial New Detroit to discover the fates of several downed Landers scattered across the city, as well as their pilots and cargo. Only one of them is salvageable, and it is subsequently piloted by the cool-headed Captain Sully, a character who returns in later missions. The third chapter follows Lightning finding and transporting "The Cube" to a nest in central New Detroit. Before it is deployed, an Ant Queen emerges from the nest, forcing Lightning to do battle with it to secure the drop zone. Plot: Once the Queen and her minions are dead, "The Cube" is dropped in the middle of the nest, and Lightning quickly makes their way to the evacuation zone. As their transport Lander is about to begin its descent, a new Ravager ship (akin to a much larger and more heavily armed version of a Ravager Carrier) appears in the skies, downing the Lander. Lightning rendezvouses with Echo and X-Ray squads, EDF troopers who were stuck in central New Detroit without evacuation, and they make their way to another evacuation site, where they must defend against several Hectors (large Ravager mech walkers) until they are rescued by Lander Captain Sully, who violates an EDF no-fly zone without remorse to reach them. Gameplay: Players take the role of Lightning Alpha, who battles wave after wave of deadly gigantic insect and robot enemies. Insect Armageddon predominantly takes place in the city of New Detroit, the target of a concentrated bug invasion that only the EDF can stop. The graphics are greatly improved over the game's predecessor, while retaining its arcade-shooter physics. Vehicle controls have been fixed, with improved tank and mecha vehicles that can be manned by more than one player. Credits are accumulated and used for a wide variety of purposes. Gameplay: Over 300 weapons are available. These can be purchased using a new unlock system that partially replaces the in-game weapon drop system of EDF: 2017, though some weapons are only dropped by elite enemies. Four different classes are selectable from the menu, each with special functions and exclusive equipment. All armor colors can be customized. Trooper Armor: Trooper armor is the standard loadout for EDF soldiers. It has access to more weapons than any other class, and upgradable abilities that allow it to be a versatile, all-around unit. The Trooper Armor is also the only armor available in Survival Mode. Jet Armor: Jet armor is a suit Lightning Alpha can acquire to take the fight to the skies.
It uses energy to replenish weapons. The jet pack allows the fastest movement across the map of any class, but also offers the weakest protection. Tactical Armor: This armor fulfills a wide-ranging support role and is the only class that can deploy turrets, mines and radar dishes. Stronger equipment is unlocked as the story progresses. Battle Armor: Battle armor transforms players into a veritable walking tank. Slow-moving and hard-hitting, it comes equipped with a portable energy shield and can equip some of the most powerful weapons in the game. Battle armor can also release its entire pool of energy in a massive electric blast, damaging everything unfortunate enough to be close by. Gameplay: Both split-screen and online co-operative play are included. A six-player Survival mode is also available, with a squad of EDF soldiers defending against endless waves of bugs. D3 Publisher also released two retailer-exclusive weapon packs: the Pounds of Pain pack features 15 weapons, mostly for the Battle Armor but also a few for the Trooper Armor and Tactical Armor, available from GameStop; the Death From Above pack features 15 weapons, mostly for the Jet Armor but also a few for the Trooper Armor and Tactical Armor, available from Best Buy. Reception: Earth Defense Force: Insect Armageddon received "mixed or average reviews" on all platforms according to the review aggregation website Metacritic. In Japan, Famitsu gave the PlayStation 3 and Xbox 360 versions a score of four sevens, for a total of 28 out of 40. The Daily Telegraph gave the PS3 version a score of eight out of ten and said, "There's nothing quite like EDF's insane thrills. Simple, old-school, but oh so very good at what it does, Insect Armageddon is well worth a look. As long as you don't mind a bit of ant vomit, of course." GameZone gave the PC version a score of seven out of ten, saying, "EDF: IA is simple, sure, but that simply means that anyone can hop into the arcade-style action of Insect Armageddon and be wreaking havoc almost immediately. Not to mention that the game has definitely dropped in price since its initial console release, now available for just $19.99 on Steam and other download services. So if you've been looking for a solid multiplayer co-op romp which lets you dial back your brain a bit and simply blow up everything in sight, I definitely recommend picking up this title." However, 411Mania gave the Xbox 360 version a score of 6.2 out of 10 and said that "it's not that Insect Armageddon is a bad game; it's just that it gets repetitive. If you're just looking for mindless killing for a bit, this may be a worthwhile pickup for you and your friends. However, if you're looking for any kind of depth, look elsewhere." Digital Spy gave the same console version three stars out of five and said it was "unquestionably a fun game, but it's the same fun we had with its PS2 and Xbox 360 predecessors. The latest game offers very little that it can call its own, has a severe lack of memorable moments and is often incredibly dull and repetitive." Edge similarly gave the same console version six out of ten, saying, "Old hands will still find much of the personality and singular vision of the franchise intact, but it's the newcomers who might find Insect Armageddon a jarring mix of old-fashioned thrills and modern gameplay trends."
The Digital Fix gave the same console version five out of ten, saying, "In the end the ‘budget’ in ‘budget title’ shone through and whilst Insect Armageddon is a laugh in co-op (in short bursts) it becomes far too much of a grind, very quickly."
**Finite pointset method** Finite pointset method: In applied mathematics, the finite pointset method (FPM) is a general approach for the numerical solution of problems in continuum mechanics, such as the simulation of fluid flows. In this approach the medium is represented by a finite set of points, each endowed with the relevant local properties of the medium such as density, velocity, pressure, and temperature. The sampling points can move with the medium, as in the Lagrangian approach to fluid dynamics, or they may be fixed in space while the medium flows through them, as in the Eulerian approach. A mixed Lagrangian-Eulerian approach may also be used. The Lagrangian approach is also known (especially in computer graphics) as a particle method. Finite pointset method: Finite pointset methods are meshfree methods and are therefore easily adapted to domains with complex and/or time-evolving geometries and moving phase boundaries (such as a liquid splashing into a container, or the blowing of a glass bottle) without the software complexity that would be required to handle those features with topological data structures. They can be useful in non-linear problems involving viscous fluids, heat and mass transfer, and linear and non-linear elastic or plastic deformations. Description: In the simplest implementations, the finite point set is stored as an unstructured list of points in the medium. In the Lagrangian approach the points move with the medium, and points may be added or deleted in order to maintain a prescribed sampling density. The point density is usually prescribed by a locally defined smoothing length. In the Eulerian approach the points are fixed in space, but new points may be added where increased accuracy is needed. So, in both approaches the nearest neighbours of a point are not fixed, and are determined anew at each time step. Advantages: This method has various advantages over grid-based techniques; for example, it naturally handles fluid domains that change shape, whereas grid-based techniques require additional computational effort. The finite points have to completely cover the whole flow domain, i.e. the point cloud has to fulfil certain quality criteria: the points must not form "holes", meaning every point must have sufficiently many neighbours, and the points must not cluster. Advantages: The finite point cloud is a geometrical basis which allows a numerical formulation, making FPM a general finite difference idea applied to continuum mechanics. In particular, if the point cloud were reduced to a regular cubic point grid, FPM would reduce to a classical finite difference method. The idea of general finite differences also means that FPM is not based on a weak formulation like Galerkin's approach. Rather, FPM is a strong formulation which models differential equations by direct approximation of the occurring differential operators. The method used is a moving least squares idea which was developed especially for FPM. History: In order to overcome the disadvantages of the classical methods, many approaches have been developed to simulate such flows (Hansbo 1992, Harlow et al. 1965, Hirt et al. 1981, Kelecy et al. 1997, Kothe et al. 1992, Maronnier et al. 1999, Tiwari et al. 2000). A classical grid-free Lagrangian method is Smoothed Particle Hydrodynamics (SPH), which was originally introduced to solve problems in astrophysics (Lucy 1977, Gingold et al. 1977).
History: It has since been extended to simulate the compressible Euler equations in fluid dynamics and applied to a wide range of problems (Monaghan 1992, Monaghan et al. 1983, Morris et al. 1997). The method has also been extended to simulate inviscid incompressible free-surface flows (Monaghan 1994). The implementation of boundary conditions is the main problem of the SPH method. History: Another approach for solving fluid-dynamic equations in a grid-free framework is the moving least squares or least squares method (Belytschko et al. 1996, Dilts 1996, Kuhnert 1999, Kuhnert 2000, Tiwari et al. 2000 and 2001). With this approach boundary conditions can be implemented in a natural way, just by placing the finite points on boundaries and prescribing boundary conditions on them (Kuhnert 1999). The robustness of this method has been shown by simulation results in the field of airbag deployment in the car industry, where the membrane (or boundary) of the airbag changes very rapidly in time and takes a quite complicated shape (Kuhnert et al. 2000). History: Tiwari et al. (2000) performed simulations of incompressible flows as the limit of the compressible Navier–Stokes equations with a stiff equation of state. This approach was first used in (Monaghan 1992) to simulate incompressible free-surface flows by SPH. The incompressible limit is obtained by choosing a very large speed of sound in the equation of state, such that the Mach number becomes small. However, the large value of the speed of sound restricts the time step to be very small due to the CFL condition. History: The projection method of Chorin (Chorin 1968) is a widely used approach to solve problems governed by the incompressible Navier–Stokes equations in a grid-based structure. In (Tiwari et al. 2001), this method was applied to a grid-free framework with the help of the weighted least squares method. The scheme gives accurate results for the incompressible Navier–Stokes equations. The Poisson equation that arises for the pressure field is solved by a grid-free method. In (Tiwari et al. 2001), it was shown that the Poisson equation can be solved accurately by this approach for any boundary conditions. The Poisson solver can be adapted to the weighted least squares approximation procedure, with the condition that the Poisson equation and the boundary condition must be satisfied on each finite point. This is a local iteration procedure.
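To make the moving least squares idea concrete, the sketch below fits a weighted local quadratic around an evaluation point of a 2D point cloud and reads the function value, gradient, and Laplacian off the Taylor coefficients. This is a minimal illustration of the strong-form approach under typical assumptions (a Gaussian weight function and smoothing length h), not the production FPM algorithm; all names are ours.

```python
import numpy as np

def mls_derivatives(points, values, center, h):
    """Weighted least-squares fit of a local quadratic Taylor expansion
    around `center`; returns estimates of u, grad u, and the Laplacian.

    points : (n, 2) neighbour coordinates (n >= 6 for a quadratic basis)
    values : (n,)   samples of u at those points
    center : (2,)   evaluation point
    h      : smoothing length controlling the Gaussian weights
    """
    d = points - center                        # offsets (dx, dy)
    w = np.exp(-np.sum(d**2, axis=1) / h**2)   # Gaussian weight per neighbour
    # Basis of u ~ a0 + a1*dx + a2*dy + a3*dx^2/2 + a4*dx*dy + a5*dy^2/2
    M = np.column_stack([np.ones(len(points)), d[:, 0], d[:, 1],
                         0.5 * d[:, 0]**2, d[:, 0] * d[:, 1], 0.5 * d[:, 1]**2])
    sw = np.sqrt(w)                            # weighted least squares solve
    a, *_ = np.linalg.lstsq(sw[:, None] * M, sw * values, rcond=None)
    u, grad, laplacian = a[0], a[1:3], a[3] + a[5]   # u, (u_x, u_y), u_xx + u_yy
    return u, grad, laplacian
```

In a full FPM computation, the differential operators in the governing equations would be replaced by such pointwise approximations at every point of the cloud, with neighbours re-determined at each time step as described above.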
**Equal detour point** Equal detour point: The equal detour point is a triangle center with the Kimberling number X(176). It is characterized by the equal detour property: if one travels from any vertex of a triangle ABC to another by taking a detour through some inner point P, the additional distance travelled is constant. This means the following equation has to hold: $|AP|+|PC|-|AC| = |AP|+|PB|-|AB| = |BP|+|PC|-|BC|$. Equal detour point: The equal detour point is the only point with the equal detour property if and only if the following inequality holds for the angles $\alpha, \beta, \gamma$ of the triangle ABC: $\tan\tfrac{\alpha}{2} + \tan\tfrac{\beta}{2} + \tan\tfrac{\gamma}{2} \le 2$. If the inequality does not hold, then the isoperimetric point possesses the equal detour property as well. Equal detour point: The equal detour point, the isoperimetric point, the incenter and the Gergonne point of a triangle are collinear, that is, all four points lie on a common line. Furthermore, they form a harmonic range as well. The equal detour point is the center of the inner Soddy circle of a triangle, and the additional distance travelled by the detour is equal to the diameter of the inner Soddy circle. Equal detour point: The barycentric coordinates of the equal detour point are $\left(a + \frac{\Delta}{s-a} : b + \frac{\Delta}{s-b} : c + \frac{\Delta}{s-c}\right)$, where $\Delta$ is the area and $s$ the semiperimeter of the triangle, and the trilinear coordinates are $\left(1 + \frac{\cos\frac{\beta}{2}\cos\frac{\gamma}{2}}{\cos\frac{\alpha}{2}} : 1 + \frac{\cos\frac{\gamma}{2}\cos\frac{\alpha}{2}}{\cos\frac{\beta}{2}} : 1 + \frac{\cos\frac{\alpha}{2}\cos\frac{\beta}{2}}{\cos\frac{\gamma}{2}}\right)$.
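As a quick numerical illustration of the equal detour property, the following sketch (ours, not part of the article) computes X(176) for a sample triangle from the barycentric coordinates above and checks that the three detours agree:

```python
import numpy as np

# A sample triangle (arbitrary vertices chosen for the demonstration).
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
s = (a + b + c) / 2                              # semiperimeter
area = np.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula

# Barycentric coordinates (a + D/(s-a) : b + D/(s-b) : c + D/(s-c)).
w = np.array([a + area / (s - a), b + area / (s - b), c + area / (s - c)])
P = (w[0] * A + w[1] * B + w[2] * C) / w.sum()   # the equal detour point

def detour(X, Y):
    """Extra distance of travelling X -> P -> Y instead of X -> Y directly."""
    return np.linalg.norm(X - P) + np.linalg.norm(P - Y) - np.linalg.norm(X - Y)

print(detour(A, B), detour(B, C), detour(C, A))  # three equal values
```

Per the article, the common value printed is the diameter of the inner Soddy circle.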
**Photoactive yellow protein** Photoactive yellow protein: In molecular biology, the PYP domain (photoactive yellow protein) is a p-coumaric acid-binding protein domain. Such domains are present in various bacterial proteins. PYP is a highly soluble globular protein with an alpha/beta fold structure. It is a member of the PAS domain superfamily, which also contains a variety of other kinds of photosensory proteins. Photoactive yellow protein: PYP was first discovered in 1985. A chemogenetic system developed in 2016 and named FAST (Fluorescence-Activating and absorption-Shifting Tag) was engineered from PYP to specifically and reversibly bind a series of hydroxybenzylidene rhodanine (HBR) derivatives for their fluorogenic properties. Upon interaction with FAST, the fluorogen is locked into a fluorescent conformation, unlike when in solution. This protein labelling system is used in a variety of microscopy and cytometry setups. p-Coumaric acid: p-Coumaric acid is a cofactor of photoactive yellow proteins. Adducts of p-coumaric acid bound to PYP form crystals that diffract well in x-ray crystallography experiments. These structural studies have provided insight into photosensitive proteins, e.g. the role of hydrogen bonding, molecular isomerization and photoactivity. p-Coumaric acid: Photochemical transitions. Because PYP's light absorption resembles that of retinal-bound rhodopsin, it was originally believed that the photosensor molecule bound to PYP should resemble the structure of retinal. Scientists were therefore surprised to find that the light-sensitive prosthetic group, bound to Cys69 of PYP by a thioester linkage, was p-coumaric acid. During the photoreactive mechanism: upon light absorption, the native protein absorbs maximally at a wavelength of 446 nm (ε = 45,500 M−1 cm−1). p-Coumaric acid: Within a nanosecond the absorption maximum shifts to 465 nm. p-Coumaric acid: Then, on a sub-millisecond timescale, the protein is excited to a 355 nm state. These observed phenomena are due to trans–cis isomerization of the vinyl trans double bond in the p-coumaric acid. By observing the crystal structure of p-coumaric acid bound to PYP, scientists noted that the hydroxyl group attached to the C4 carbon of the phenyl ring appeared to be deprotonated, effectively a phenolate functional group. This was inferred from the abnormally short hydrogen-bond lengths observed in the protein crystal structure. p-Coumaric acid: Role of hydrogen bonding. Hydrogen bonds in proteins such as PYP take part in interrelated networks; centred on the phenolate O4 atom of p-coumaric acid, there is an oxyanion hole that is crucial for photosensory function. Oxyanion holes exist in enzymes to stabilize transition states of reaction intermediates, here stabilizing the trans–cis isomerization of p-coumaric acid. During the transition state, the p-coumaric acid phenolate O4 is believed to take part in a hydrogen-bond network between Glu46, Tyr42 and Thr50 of PYP. These interactions, together with the thioester linkage to Cys69, keep p-coumaric acid in the ligand-binding site. Upon transitioning to the cis-isomeric form of p-coumaric acid, the favourable hydrogen bonds are no longer in close interaction.
**Trypsin 1** Trypsin 1: Trypsin-1, also known as cationic trypsinogen, is a protein that in humans is encoded by the PRSS1 gene. Trypsin-1 is the main isoform of trypsinogen secreted by the pancreas; the others are trypsin-2 (anionic trypsinogen) and trypsin-3 (meso-trypsinogen). Function: This gene encodes a trypsinogen, a member of the trypsin family of serine proteases. The enzyme is secreted by the pancreas and cleaved to its active form in the small intestine. It is active on peptide linkages involving the carboxyl group of lysine or arginine. Mutations in this gene are associated with hereditary pancreatitis. This gene and several other trypsinogen genes are localized to the T-cell receptor beta locus on chromosome 7. Clinical significance: Malfunction of trypsin-1 acts in an autosomal dominant manner to cause pancreatitis, and many mutations that can lead to pancreatitis have been found. An example is a mutation at Arg117, a trypsin-sensitive site which can be cleaved by another trypsin molecule, inactivating the enzyme. This site may be a fail-safe mechanism by which trypsin, when activated within the pancreas, can be inactivated. Mutation at this cleavage site results in a loss of that control and permits autodigestion, causing pancreatitis.
**Molybdopterin adenylyltransferase** Molybdopterin adenylyltransferase: Molybdopterin adenylyltransferase (EC 2.7.7.75, MogA, Cnx1) is an enzyme with the systematic name ATP:molybdopterin adenylyltransferase. This enzyme catalyses the following chemical reaction: ATP + molybdopterin ⇌ diphosphate + adenylyl-molybdopterin. It thereby catalyses the activation of molybdopterin for molybdenum insertion.
**OS/8** OS/8: OS/8 is the primary operating system used on Digital Equipment Corporation's PDP-8 minicomputer. PDP-8 operating systems preceding OS/8 include: the R-L Monitor, also referred to as MS/8; P?S/8, requiring only 4K of memory; the PDP-8 4K Disk Monitor System; and PS/8 ("Programming System/8"), requiring 8K, which became OS/8 in 1971. Other related DEC operating systems are OS/78, OS/278, and OS/12; the last is a virtually identical version of OS/8 that runs on Digital's PDP-12 computer. Digital released OS/8 images for non-commercial purposes, which can be emulated through SIMH. Overview: OS/8 provides a simple operating environment that is commensurate in complexity and scale with the PDP-8 computers on which it runs. I/O is supported via a series of supplied drivers which use polled (not interrupt-driven) techniques. The device drivers have to be cleverly written, as they can occupy only one or two memory pages of 128 twelve-bit words and must be able to run in any page of field 0. This often requires tricks such as using OPR instructions (7XXX) as small negative constants. Overview: The memory-resident "footprint" of OS/8 is only 256 words: 128 words at the top of field 0 and 128 words at the top of field 1. The rest of the operating system (the USR, "User Service Routines") swaps in and out of memory transparently (with respect to the user's program) as needed. The Concise Command Language: Early versions of OS/8 have a very rudimentary command-line interpreter with only a few basic commands: GET, SAVE, RUN, ASSIGN, DEASSIGN, and ODT. Version 3 added a more sophisticated overlay called CCL (Concise Command Language) that implements many more commands. OS/8's CCL is directly patterned after the CCL found on Digital's PDP-10 systems running TOPS-10; in fact, much of the OS/8 software system is deliberately designed to mimic, as closely as possible, the TOPS-10 operating environment. (The CCL command language is also used on PDP-11 computers running RT-11, RSX-11, and RSTS/E, providing a similar user operating environment across all three architectures: PDP-8s, PDP-10s, and PDP-11s.) The basic OS and CCL implement many rather sophisticated commands, some of which still have no direct equivalent in modern command languages such as those of MS-DOS, Windows, or Unix-like operating systems. The Concise Command Language: For example, the COMPILE command automatically finds the right compiler for a given source file and starts the compile/assemble/link cycle. The Concise Command Language: The ASSIGN and DEASSIGN commands permit the use of logical device names in a program instead of physical names (as required in MS-DOS). For example, a program can write to device FLOP:AAA.TXT, and after an initial "ASSIGN FLOP: RXA2:" the file is created on physical device RXA2 (the second floppy disk drive). VAX/VMS and the Amiga's operating system AmigaOS (and other OSes built around TRIPOS) make considerable use of this feature. The Concise Command Language: The SET command is capable of setting many system options by patching locations in the system binary code. One of them, a command under OS-78, is SET SYS OS8, which re-enables the MONITOR commands that are not part of OS-78. The BUILD command can reconfigure the OS on the fly, even adding device drivers, often without having to reboot the OS. The OS can boot from a hard disk and present the command prompt in under half a second.
The OS/8 Filesystem: OS/8 supports a simple, flat file system on a variety of mass storage devices, including: TU56 DECtapes; DF32 32KW fixed-head disks; RF08 256KW fixed-head disks; RK01/02/03/04/05 cartridge disk drives; RL01/02 cartridge disk drives; and RX01/02 floppy diskette drives. Filenames on the PDP-8 take the form FFFFFF.XX, where "F" represents an uppercase alphanumeric character of the filename and "X" represents an uppercase alphanumeric character of the extension (file type). The OS/8 Filesystem: Common extensions include .PA (assembly language), .SV (saved core images, i.e. executable programs), .FT (Fortran source files), and .DA (data files). The contents of any given file are stored contiguously in a single extent. PIP includes an option to compress ("squeeze") the filesystem, so that all unallocated space is moved to a single extent at the end of the disk; this can be invoked by the SQuish CCL command, much as MUNG can be used to run a TECO macro. OS/8 volumes have a limited maximum size of 4096 blocks of 256 twelve-bit words (about one million words), and the RK05 moving-head disk (2.4 MB, roughly 1.6 million words of storage) exceeds this size. Because of this, RK05 cartridges are divided into two partitions; for example, the first RK05 on a system is known as both RKA0: (SY:) and RKB0:, the two halves corresponding to the outer and inner cylinders. The OS/8 Filesystem: ASCII files. ASCII files are stored as three 8-bit characters per pair of 12-bit words. The first two characters (bits a0–a7 and b0–b7 below) are stored whole in their words, while the third character (bits c0–c7) is stored with half of its bits in word 1 and the other half in word 2 (see the decoding sketch at the end of this article): WORD 1: c0 c1 c2 c3 | a0 a1 a2 a3 a4 a5 a6 a7; WORD 2: c4 c5 c6 c7 | b0 b1 b2 b3 b4 b5 b6 b7. ASCII files end with a CTRL/Z (octal 232). OS/8 date format: OS/8 allocates the PDP-8's 12-bit date word as 4 bits for the month, 5 bits for the day of the month, and 3 bits for the year. The insufficiency of a three-bit year field, capable of distinguishing only eight years, was recognized when COS-310 was developed. OS/8 CUSPs (Utility Programs): The CUSPs (Commonly-Used System Programs, that is, utilities) supplied with OS/8 include: BUILD (the program to install a configured OS/8 system onto mass storage); DIR (the directory-listing program); EDIT (a line-oriented editor); MACREL (a relocating assembler that, unlike PAL, implements macros; written by Stanley Rabinowitz of DEC's Small Systems Group, who had an ASCII-artwork picture of a fish in his office that said "MACREL IS A FISH"); FLAP (an absolute assembler derived from RALF); and FORTRAN-II. OS/8 CUSPs (Utility Programs): FOTP (File-Oriented Transfer Program, an alternative to PIP); PAL (the assembler); PIP (the Peripheral Interchange Program, used to copy files); PIP10 (a version of PIP used to copy files to and from PDP-10 DECtapes); RALF (another relocating assembler, for the FPP); TECO (Text Editor and COrrector, a sophisticated editor; the MUNG command runs TECO macros); and CCL, the command-line interpreter, supplied in source form and user-extensible. Programming languages: BASIC. A single-user BASIC and two multi-user versions of BASIC are available as options. The single-user BASIC uses several overlays to provide the full functionality of the language; when OS/8 is booted from a DECtape, a noticeable delay occurs each time BASIC has to switch overlays, as they need to be read from tape.
Programming languages: The multi-user versions of BASIC (EDU20 and EDU25) differ only in whether or not they support block-replaceable devices (DECtape or disk). Due to cost constraints, many PDP-8s had punched paper tape readers as their only mass-storage I/O device. EDU20 loads from paper tape and can write output to a paper tape punch if the machine has one, whereas EDU25 understands the structure of a filesystem and can load from, and create files on, DECtape or disk. Both can run multiple BASIC programs simultaneously using a primitive task scheduler that round-robins among the attached terminals. Memory is always tight because the PDP-8 uses core memory, which was extremely expensive compared to later semiconductor RAM. In 8K of 12-bit words EDU20 can support up to 4 terminals at once, although more memory was recommended. EDU25 requires an additional 4K memory bank (for a minimum of 12K) because its code contains a disk device driver and a filesystem handler. While running, EDU20 and EDU25 are self-contained programs that don't use any OS/8 system calls; immediately upon being invoked from the OS/8 command interpreter, they overwrite OS/8's entire resident portion – all 256 words of it. Upon startup, EDU25 saves the contents of memory to DECtape or disk and restores it upon exit, but EDU20 cannot do this, as it is targeted at hardware configurations without any block-replaceable device. Programming languages: FORTRAN. In addition to a freely available FORTRAN II compiler, there is also a rather complete FORTRAN IV compiler. This compiler generates code for the optional FPP-8 floating-point processor, which is essentially a separate CPU sharing only memory with the PDP-8 CPU. With the FPP-8 option installed, the FORTRAN runtime code detects it and uses the FPP-8 to run the main program code, with the PDP-8 CPU acting as an I/O processor. Lacking the FPP-8, the runtime code instead calls an FPP-8 interpreter running on the PDP-8 CPU, so the program runs at reduced speed. Programming languages: Version 1 of this FORTRAN IV compiler had the interesting bug that DO loops counted incorrectly: they would count 1, 2, 3, 5, 6, 7, … (skipping 4). A quick patch was released to fix this.
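To make the ASCII packing and date layouts described in the filesystem section concrete, here is a minimal decoding sketch in Python. It is our illustration, not DEC code; where the text leaves details open (the bit significance of c0–c7, the field ordering within the date word, the year base), the choices below are assumptions.

```python
def unpack_chars(word1, word2):
    """Unpack three 8-bit characters from a pair of 12-bit OS/8 words.
    Layout per the article: WORD1 = c0..c3 | a0..a7, WORD2 = c4..c7 | b0..b7
    (assuming c0 is the most significant bit of the third character)."""
    a = word1 & 0o377                        # low 8 bits of word 1
    b = word2 & 0o377                        # low 8 bits of word 2
    c = ((word1 >> 8) << 4) | (word2 >> 8)   # the two high nibbles, recombined
    return a, b, c

def pack_chars(a, b, c):
    """Inverse of unpack_chars: three 8-bit characters -> two 12-bit words."""
    return ((c >> 4) << 8) | a, ((c & 0o17) << 8) | b

def unpack_date(word):
    """Split a 12-bit OS/8 date word into its 4/5/3-bit fields
    (month assumed in the high bits; the 3-bit year spans only 8 years)."""
    month = (word >> 8) & 0o17
    day = (word >> 3) & 0o37
    year = word & 0o7                        # offset from an era base year
    return month, day, year

# Round-trip check for the character packing:
w1, w2 = pack_chars(ord('A'), ord('B'), ord('C'))
assert unpack_chars(w1, w2) == (ord('A'), ord('B'), ord('C'))
```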
**Hyperfocal distance** Hyperfocal distance: In optics and photography, hyperfocal distance is a distance beyond which all objects can be brought into an "acceptable" focus. As the hyperfocal distance is the focus distance giving the maximum depth of field, it is the most desirable distance at which to set the focus of a fixed-focus camera. The hyperfocal distance depends entirely on what level of sharpness is considered acceptable. Hyperfocal distance: The hyperfocal distance has a property called "consecutive depths of field": a lens focused at the hyperfocal distance H holds a depth of field from H/2 to infinity; if the lens is focused to H/2, the depth of field extends from H/3 to H; if the lens is then focused to H/3, the depth of field extends from H/4 to H/2, and so on. Hyperfocal distance: Thomas Sutton and George Dawson first wrote about hyperfocal distance (or "focal range") in 1867. Louis Derr in 1906 may have been the first to derive a formula for hyperfocal distance. Rudolf Kingslake wrote in 1951 about the two methods of measuring hyperfocal distance. Hyperfocal distance: Some cameras have their hyperfocal distance marked on the focus dial. For example, on the Minox LX focusing dial there is a red dot between 2 m and infinity; when the lens is set at the red dot, that is, focused at the hyperfocal distance, the depth of field stretches from 2 m to infinity. Some lenses have markings indicating the hyperfocal range for specific f-stops. Two methods: There are two common methods of defining and measuring hyperfocal distance, leading to values that differ only slightly. The distinction between the two meanings is rarely made, since they have almost identical values: the value computed according to the first definition exceeds that from the second by just one focal length. Definition 1: The hyperfocal distance is the closest distance at which a lens can be focused while keeping objects at infinity acceptably sharp. When the lens is focused at this distance, all objects at distances from half of the hyperfocal distance out to infinity will be acceptably sharp. Definition 2: The hyperfocal distance is the distance beyond which all objects are acceptably sharp, for a lens focused at infinity. Acceptable sharpness: The hyperfocal distance depends entirely on what level of sharpness is considered acceptable. The criterion for acceptable sharpness is specified through the circle of confusion (CoC) diameter limit: the largest acceptable spot-size diameter to which an infinitesimal point is allowed to spread out on the imaging medium (film, digital sensor, etc.). Formulae: For the first definition, $H = \frac{f^2}{Nc} + f$, where $H$ is the hyperfocal distance, $f$ is the focal length, $N$ is the f-number ($f/D$ for aperture diameter $D$), and $c$ is the circle of confusion limit. For any practical f-number, the added focal length is insignificant in comparison with the first term, so that $H \approx \frac{f^2}{Nc}$. This formula is exact for the second definition, if $H$ is measured from a thin lens, or from the front principal plane of a complex lens; it is also exact for the first definition if $H$ is measured from a point that is one focal length in front of the front principal plane. For practical purposes, there is little difference between the first and second definitions. Formulae: Derivation using geometric optics. The following derivations refer to the accompanying figures; for clarity, half the aperture and circle of confusion are indicated.
Definition 1: An object at distance H forms a sharp image at distance x (blue line). Here, objects at infinity have images with a circle of confusion indicated by the brown ellipse, where the upper red ray through the focal point intersects the blue line. First, using the similar triangles hatched in green, $\frac{x-f}{c/2} = \frac{f}{D/2}$, therefore $x - f = \frac{cf}{D}$ and $x = f + \frac{cf}{D}$. Then, using the similar triangles dotted in purple, $\frac{H}{D/2} = \frac{x}{c/2}$, therefore $H = \frac{Dx}{c} = \frac{D}{c}\left(f + \frac{cf}{D}\right) = \frac{Df}{c} + f = \frac{f^2}{Nc} + f$, as found above. Definition 2: Objects at infinity form sharp images at the focal length f (blue line). Here, an object at H forms an image with a circle of confusion indicated by the brown ellipse, where the lower red ray converging to its sharp image intersects the blue line. Using the similar triangles shaded in yellow, $\frac{H}{D/2} = \frac{f}{c/2}$, therefore $H = \frac{Df}{c} = \frac{f^2}{Nc}$. Example: As an example, for a 50 mm lens at f/8 using a circle of confusion of 0.03 mm, a value typically used in 35 mm photography, the hyperfocal distance according to Definition 1 is $H = \frac{(50\ \mathrm{mm})^2}{8 \times 0.03\ \mathrm{mm}} + 50\ \mathrm{mm} \approx 10467\ \mathrm{mm}$. If the lens is focused at a distance of 10.5 m, then everything from half that distance (5.2 m) to infinity will be acceptably sharp in our photograph. With the formula for Definition 2, the result is 10417 mm, a difference of 0.5%. Consecutive depths of field: The hyperfocal distance has a curious property: while a lens focused at H will hold a depth of field from H/2 to infinity, if the lens is focused to H/2, the depth of field will extend from H/3 to H; and if the lens is then focused to H/3, the depth of field will extend from H/4 to H/2. This continues through all successive 1/x values of the hyperfocal distance: focusing at H/n causes the depth of field to extend from H/(n+1) to H/(n−1). Consecutive depths of field: Piper (1901) calls this phenomenon "consecutive depths of field" and shows how to test the idea easily. This is also among the earliest publications to use the word hyperfocal. History: The concepts behind the two definitions of hyperfocal distance have a long history, tied up with the terminology for depth of field, depth of focus, circle of confusion, etc. Here are some selected early quotations and interpretations on the topic. History: Sutton and Dawson 1867. Thomas Sutton and George Dawson define focal range for what we now call hyperfocal distance: Focal Range. In every lens there is, corresponding to a given apertal ratio (that is, the ratio of the diameter of the stop to the focal length), a certain distance of a near object from it, between which and infinity all objects are in equally good focus. For instance, in a single view lens of 6 inch focus, with a 1/4 in. stop (apertal ratio one-twenty-fourth), all objects situated at distances lying between 20 feet from the lens and an infinite distance from it (a fixed star, for instance) are in equally good focus. Twenty feet is therefore called the "focal range" of the lens when this stop is used. The focal range is consequently the distance of the nearest object, which will be in good focus when the ground glass is adjusted for an extremely distant object. In the same lens, the focal range will depend upon the size of the diaphragm used, while in different lenses having the same apertal ratio the focal ranges will be greater as the focal length of the lens is increased. History: The terms 'apertal ratio' and 'focal range' have not come into general use, but it is very desirable that they should, in order to prevent ambiguity and circumlocution when treating of the properties of photographic lenses.
'Focal range' is a good term, because it expresses the range within which it is necessary to adjust the focus of the lens to objects at different distances from it – in other words, the range within which focusing becomes necessary. History: Their focal range is about 1000 times their aperture diameter, so it makes sense as a hyperfocal distance with a CoC value of f/1000, or the image format diagonal times 1/1000, assuming the lens is a "normal" lens. What is not clear, however, is whether the focal range they cite was computed or empirical. History: Abney 1881. Sir William de Wivelesley Abney says: The annexed formula will approximately give the nearest point p which will appear in focus when the distance is accurately focussed, supposing the admissible disc of confusion to be 0.025 cm: $p = 0.41 \cdot f^2 \cdot a$, where $f$ is the focal length of the lens in cm and $a$ is the ratio of the aperture to the focal length. That is, a is the reciprocal of what we now call the f-number, and the answer is evidently in meters. His 0.41 should obviously be 0.40. Based on his formulae, and on the notion that the aperture ratio should be kept fixed in comparisons across formats, Abney says: It can be shown that an enlargement from a small negative is better than a picture of the same size taken direct as regards sharpness of detail. ... Care must be taken to distinguish between the advantages to be gained in enlargement by the use of a smaller lens, with the disadvantages that ensue from the deterioration in the relative values of light and shade. History: Taylor 1892. John Traill Taylor recalls this word formula for a sort of hyperfocal distance: We have seen it laid down as an approximative rule by some writers on optics (Thomas Sutton, if we remember aright), that if the diameter of the stop be a fortieth part of the focus of the lens, the depth of focus will range between infinity and a distance equal to four times as many feet as there are inches in the focus of the lens. History: This formula implies a stricter CoC criterion than we typically use today. History: Hodges 1895. John Hodges discusses depth of field without formulas but with some of these relationships: There is a point, however, beyond which everything will be in pictorially good definition, but the longer the focus of the lens used, the further will the point beyond which everything is in sharp focus be removed from the camera. Mathematically speaking, the amount of depth possessed by a lens varies inversely as the square of its focus. History: This "mathematically" observed relationship implies that he had a formula at hand, and a parameterization with the f-number or "intensity ratio" in it. To get an inverse-square relation to focal length, one has to assume that the CoC limit is fixed and the aperture diameter scales with the focal length, giving a constant f-number. History: Piper 1901. C. Welborne Piper may be the first to have published a clear distinction between Depth of Field in the modern sense and Depth of Definition in the focal plane, and implies that Depth of Focus and Depth of Distance are sometimes used for the former (in modern usage, Depth of Focus is usually reserved for the latter). He uses the term Depth Constant for H, and measures it from the front principal focus (i.e., he counts one focal length less than the distance from the lens to get the simpler formula), and even introduces the modern term: This is the maximum depth of field possible, and H + f may be styled the distance of maximum depth of field.
If we measure this distance extra-focally it is equal to H, and is sometimes called the hyperfocal distance. The depth constant and the hyperfocal distance are quite distinct, though of the same value. History: It is unclear what distinction he means. Adjacent to Table I in his appendix, he further notes: If we focus on infinity, the constant is the focal distance of the nearest object in focus. If we focus on an extra-focal distance equal to the constant, we obtain a maximum depth of field from approximately half the constant distance up to infinity. The constant is then the hyper-focal distance. History: At this point we do not have evidence of the term hyperfocal before Piper, nor of the hyphenated hyper-focal which he also used, but he evidently did not claim to coin this descriptor himself. History: Derr 1906. Louis Derr may be the first to clearly specify the first definition, which is considered the strictly correct one in modern times, and to derive the formula corresponding to it. Using p for hyperfocal distance, D for aperture diameter, d for the diameter that a circle of confusion shall not exceed, and f for focal length, he derives $p = \frac{(D + d)f}{d}$. As the aperture diameter D is the ratio of the focal length f to the f-number N, and the diameter of the circle of confusion c = d, this gives the equation for the first definition above: $p = \frac{\left(\frac{f}{N} + c\right)f}{c} = \frac{f^2}{Nc} + f$. History: Johnson 1909. George Lindsay Johnson uses the term Depth of Field for what Abney called Depth of Focus, and Depth of Focus in the modern sense (possibly for the first time), as the allowable distance error in the focal plane. His definitions include hyperfocal distance: Depth of Focus is a convenient, but not strictly accurate term, used to describe the amount of racking movement (forwards or backwards) which can be given to the screen without the image becoming sensibly blurred, i.e. without any blurring in the image exceeding 1/100 in., or in the case of negatives to be enlarged or scientific work, the 1/10 or 1/100 mm. Then the breadth of a point of light, which, of course, causes blurring on both sides, i.e. 1/50 in = 2e (or 1/100 in = e). History: His drawing makes it clear that his e is the radius of the circle of confusion. He has clearly anticipated the need to tie it to format size or enlargement, but has not given a general scheme for choosing it. Depth of Field is precisely the same as depth of focus, only in the former case the depth is measured by the movement of the plate, the object being fixed, while in the latter case the depth is measured by the distance through which the object can be moved without the circle of confusion exceeding 2e. Thus if a lens which is focused for infinity still gives a sharp image for an object at 6 yards, its depth of field is from infinity to 6 yards, every object beyond 6 yards being in focus. This distance (6 yards) is termed the hyperfocal distance of the lens, and any allowable confusion disc depends on the focal length of the lens and on the stop used. If the limit of confusion of half of the disc (i.e. e) be taken as 1/100 in., then the hyperfocal distance $H = \frac{Fd}{e}$, d being the diameter of the stop, ... History: Johnson's use of former and latter seems to be swapped; perhaps former was meant to refer to the immediately preceding section title Depth of Focus, and latter to the current section title Depth of Field.
Except for an obvious factor-of-2 error in using the ratio of stop diameter to CoC radius, this definition is the same as Abney's hyperfocal distance. History: Others, early twentieth century. The term hyperfocal distance also appears in Cassell's Cyclopaedia of 1911, The Sinclair Handbook of Photography of 1913, and Bayley's The Complete Photographer of 1914. History: Kingslake 1951. Rudolf Kingslake is explicit about the two meanings: if the camera is focused on a distance s equal to 1000 times the diameter of the lens aperture, then the far depth D1 becomes infinite. This critical object distance h is known as the hyperfocal distance. For a camera focused on this distance, $D_1 = \infty$ and $D_2 = h/2$, and we see that the range of distances acceptably in focus will run from just half the hyperfocal distance to infinity. The hyperfocal distance is, therefore, the most desirable distance on which to pre-set the focus of a fixed-focus camera. It is worth noting, too, that if a camera is focused on $s = \infty$, the closest acceptable object is at $L_2 = \frac{sh}{h+s} = \frac{h}{h/s + 1} = h$ (by equation 21). This is a second important meaning of the hyperfocal distance. History: Kingslake uses the simplest formulae for the DOF near and far distances, which has the effect of making the two definitions of hyperfocal distance give identical values.
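As a quick numerical check of the two formulae above, the following sketch (ours, not from any cited source) computes the hyperfocal distance under both definitions and the "consecutive depths of field" sequence, reproducing the worked example's 10467 mm and 10417 mm:

```python
def hyperfocal(f_mm, n, coc_mm, definition=1):
    """Hyperfocal distance in mm: H = f^2/(N c) + f (def. 1) or f^2/(N c) (def. 2)."""
    h = f_mm**2 / (n * coc_mm)
    return h + f_mm if definition == 1 else h

# Worked example from the text: 50 mm lens at f/8, CoC 0.03 mm.
h1 = hyperfocal(50, 8, 0.03, definition=1)   # ~10467 mm
h2 = hyperfocal(50, 8, 0.03, definition=2)   # ~10417 mm
print(round(h1), round(h2))

# Consecutive depths of field: focusing at H/n keeps H/(n+1) .. H/(n-1) sharp.
for n in range(1, 4):
    near = h1 / (n + 1)
    far = h1 / (n - 1) if n > 1 else float("inf")
    print(f"focus at H/{n}: depth of field {near:.0f} mm to {far:.0f} mm")
```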
**Dungeon Masters Screen** Dungeon Masters Screen: Dungeon Masters Screen (later called Dungeon Master's Screen) is an accessory for the Dungeons & Dragons fantasy role-playing game. Publication history: Advanced Dungeons & Dragons. The 1979 Dungeon Masters Screen was the original dungeon master's screen for the first edition Advanced Dungeons & Dragons rules and came in two pieces: a two-panel piece and a four-panel piece. It included the most important combat rules for quick reference. The first Dungeon Masters Screen featured a cover by Dave Trampier and was published by TSR in 1979 as two cardboard screens; a second printing in the same year consisted of two cardstock screens, with an Erol Otus painting of a fighter versus a dragon on the title panel. The original screen was revised, repackaged, and retitled as REF1, Dungeon Master's Screen, designed by Bob Blake, and published by TSR in 1985 as two three-panel cardstock screens. The 1985 revision contained revised combat charts and tables, and included a Dungeon Master's Screen, a Players' Screen, and a covering sheet giving a summary of player character abilities by level and prime requisites for each class. Publication history: Advanced Dungeons & Dragons 2nd edition. A screen for the second edition AD&D rules was designed by Jean Rabe and Bruce Rabe, with a cover by Jeff Easley, and was published by TSR in 1989 as a cardstock screen with a 16-page booklet. The 1989 second edition AD&D version of REF1 included a scenario by Teeuwynn Woodruff called Terrible Trouble at Tragidore, which contained suggestions on how to be a better, more experienced DM. The second edition's revised Dungeon Master Screen & Master Index contains a screen and an index. There are two screens included, with a complete list of tables for quick reference, including critical hits, miscellaneous equipment and the location of various planes. The two indices contained within the Master Index codify rules and lists from the seven core second edition books, detailing every rule, adjustment, bonus, modifier, magic item, spell and scroll in alphabetical order, cross-referenced with their location in the books. This version, the Dungeon Master Screen & Master Index, was published by TSR in 1995. Publication history: Dungeons & Dragons 3rd edition. A Dungeon Master Screen was published in 2000, developed and assembled by Dale Donovan and Kim Mohan, and featuring cover art by Jeff Easley. A Dungeon Master Screen was also published for the Forgotten Realms campaign, which included a booklet titled "Encounters in Faerûn" designed by Skip Williams and Duane Maxwell, featuring cover art by Justin Sweet. Publication history: Dungeons & Dragons 4th edition. For D&D's 4th edition, a basic Dungeon Master's Screen was published in August 2008. In February 2011, a revised Deluxe Dungeon Master's Screen was released, with heavier cardstock and newer artwork. Dungeons & Dragons 5th edition: A Dungeon Master's Screen was released for the game's 5th edition in January 2015. A revised version, titled Dungeon Master's Screen Reincarnated, featuring revised artwork and charts, was released in September 2017. Additionally, campaign-specific screens produced under license by Gale Force Nine have been released as tie-ins to the major adventure modules.
Reception: The first edition version of the Dungeon Masters Screen was a Gamer's Choice award-winner. The revised first edition REF1 screen was given a fairly balanced review by Jez Keen in Imagine magazine. Keen called the info sheet a useful memory aid but felt it lacked information on player character races and the types of weapons and armor available to each class. Keen called the Players' Screen "less useful", wondering what exactly the players have to screen. The Players' Screen contained standard tables on spells, weapons, and equipment, as well as the "to hit" tables and, according to Keen, allocates "an extraordinary amount of space" to grenade-like missiles. As for the DM Screen, Keen noted that the tables contain nothing surprising, but since the reference tables in the Dungeon Master's Guide are much less useful than those in the Player's Handbook, the reviewer "has used them and will continue to do so". Keith Eisenbeis reviewed the 2nd edition product in a 1993 issue of White Wolf. He praised the accompanying adventure, but was negative about the screen itself, stating "it is both plain and uninspiring", and noted that it did not make good use of space. He rated it overall at a 2 out of 5 possible points. Trenton Webb reviewed the AD&D second edition Dungeon Master Screen & Master Index for Arcane magazine, rating it a 7 out of 10 overall. He felt that finding information on the screens "can prove a little tricky, since the screens were obviously laid out by Jackson Pollock". He called the indices "an exercise in clear and concise functionality" and said that using the "effective notation system, it's easy to find anything" listed in the index, but cautioned that "you have to think in TSR terms and titles to find the entry". Webb summed up his review of the Dungeon Master Screen & Master Index by saying: "The index is essential stuff; the screens less so, since most DMs have evolved their own screen or alternative system for ready reference. But it's well worth £6 to be able to quickly find every rule you know you've read previously but forgotten where..." In a retrospective review of Dungeon Masters Screen in Black Gate, Scott Taylor said "Those early years using the 1st Edition AD&D mechanics are the times I think screens mattered most. In that system you needed the screens for easy access to the elaborate 'to hit' charts and saving throws. It was the perfect place to house them, and I'm not sure if this was the initial design concept, but whatever the case it worked very well. Assuming this was their primary purpose then the secondary consequence of the screen was, and still is, the true genius behind it all, that being the ability to hide the dice from the prying eyes of the players." Dungeons & Dragons Dungeon Masters Screen won the 2015 Gold Ennie Award for "Best Aid/Accessory". Reviews: Magia i Miecz #25 (January 1996) (Polish)
**Elementals (Marvel Comics)** Elementals (Marvel Comics): The Elementals is a fictional organization appearing in American comic books published by Marvel Comics. A variation of the Elementals appeared in the 2019 Marvel Cinematic Universe film Spider-Man: Far From Home. Publication history: The Elementals first appeared in Supernatural Thrillers #8 (August 1974), and were created by Tony Isabella and Val Mayerik. The group subsequently appears in Supernatural Thrillers #9–15 (October 1974 – October 1975) and Ms. Marvel #11–12 (November–December 1977). Fictional biography: The Elementals are four extradimensional humanoids who became immortals with power over natural forces and ruled a kingdom on Earth before the rise of the original Atlantis. They are Hydron, lord of the waters; Magnum, master of the earth; Hellfire, wielder of flame; and Zephyr, mistress of the winds. The Elementals used N'Kantu, the Living Mummy, as a pawn against the Living Monolith to obtain the Ruby Scarab from them. However, Zephyr betrayed the other Elementals and allied with N'Kantu. The Elementals attacked Zephyr, the Living Mummy, and their allies and gained the Scarab from them. When the Elementals tried to release their energies through the Scarab, they were blasted off Earth. The Elementals were later returned to Earth and pursued Zephyr and the Scarab, coming into conflict with the entity Hecate. Taking Zephyr hostage, Hellfire and Hydron forced her allies to recover the Scarab. Ms. Marvel arrived and together with Hecate, fought the Elementals, defeating them one by one. During the 2008–2009 "Dark Reign" storyline, Quasimodo researched the Elementals alongside other villains for Norman Osborn. He speculated that they could be aliens from the Axi-Tun or the Horusians. The Elementals were later captured by the Collector, save for Zephyr. Team lineup: Hellfire – The leader of the villainous group, who can generate fire and flames. Hydron – A foe with aquatic powers, including the ability to control water. Magnum – He has abilities that allow manipulation of earth, minerals, and rock. Zephyr – The sole female of the team who has the power to control wind, sky and air and thereby affect many of its aspects. In other media: The Elementals appear in Spider-Man: Far From Home. This version of the group consists of the Wind, Earth, Fire, and Water Elementals, who are modeled after Cyclone, Sandman, Molten Man, and Hydro-Man respectively.
**Quintessence (physics)** Quintessence (physics): In physics, quintessence is a hypothetical form of dark energy, more precisely a scalar field, postulated as an explanation of the observation of an accelerating rate of expansion of the universe. The first example of this scenario was proposed by Ratra and Peebles (1988) and Wetterich (1988). The concept was expanded to more general types of time-varying dark energy, and the term "quintessence" was first introduced in a 1998 paper by Robert R. Caldwell, Rahul Dave and Paul Steinhardt. It has been proposed by some physicists to be a fifth fundamental force. Quintessence differs from the cosmological constant explanation of dark energy in that it is dynamic; that is, it changes over time, unlike the cosmological constant which, by definition, does not change. Quintessence can be either attractive or repulsive depending on the ratio of its kinetic and potential energy. Those working with this postulate believe that quintessence became repulsive about ten billion years ago, about 3.5 billion years after the Big Bang. A group of researchers argued in 2021 that observations of the Hubble tension may imply that only quintessence models with a nonzero coupling constant are viable. Terminology: The name comes from quinta essentia (fifth element). So called in Latin starting from the Middle Ages, this was the (first) element added by Aristotle to the other four ancient classical elements because he thought it was the essence of the celestial world. Aristotle posited it to be a pure, fine, and primigenial element. Later scholars identified this element with aether. Similarly, modern quintessence would be the fifth known "dynamical, time-dependent, and spatially inhomogeneous" contribution to the overall mass–energy content of the universe. Terminology: Of course, the other four components are not the ancient Greek classical elements, but rather "baryons, neutrinos, dark matter, [and] radiation." Although neutrinos are sometimes considered radiation, the term "radiation" in this context is only used to refer to massless photons. Spatial curvature of the cosmos (which has not been detected) is excluded because it is non-dynamical and homogeneous; the cosmological constant would not be considered a fifth component in this sense, because it is non-dynamical, homogeneous, and time-independent. Scalar field: Quintessence (Q) is a scalar field with an equation of state in which w_q, the ratio of the pressure p_q to the density ρ_q, is given in terms of the potential energy V(Q) and a kinetic term: w_q = p_q/ρ_q = (½Q̇² − V(Q)) / (½Q̇² + V(Q)). Hence, quintessence is dynamic, and generally has a density and a w_q parameter that vary with time. By contrast, a cosmological constant is static, with a fixed energy density and w_q = −1. Tracker behavior: Many models of quintessence have a tracker behavior, which according to Ratra and Peebles (1988) and Paul Steinhardt et al. (1999) partly solves the cosmological constant problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter-radiation equality, which triggers quintessence to start having characteristics similar to dark energy, eventually dominating the universe. This naturally sets the low scale of the dark energy.
When comparing the predicted expansion rate of the universe as given by the tracker solutions with cosmological data, a main feature of tracker solutions is that one needs four parameters to properly describe the behavior of their equation of state, whereas it has been shown that at most a two-parameter model can optimally be constrained by mid-term future data (horizon 2015–2020). Specific models: Some special cases of quintessence are phantom energy, in which w_q < −1, and k-essence (short for kinetic quintessence), which has a non-standard form of kinetic energy. If phantom energy were to exist, the growing energy density of dark energy would cause the expansion of the universe to increase at a faster-than-exponential rate, ending in a Big Rip. Specific models: Holographic dark energy Holographic dark energy models, compared with cosmological constant models, imply a high degeneracy. Specific models: It has been suggested that dark energy might originate from quantum fluctuations of spacetime, and is limited by the event horizon of the universe. Studies with quintessence dark energy found that it dominates gravitational collapse in a spacetime simulation, based on the holographic thermalization. These results show that the smaller the state parameter of quintessence is, the harder it is for the plasma to thermalize. Quintom scenario: In 2004, when scientists fitted the evolution of dark energy with the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A no-go theorem has been proven showing that this situation, called the Quintom scenario, requires at least two degrees of freedom for dark energy models involving ideal gases or scalar fields.
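The equation of state in the Scalar field section above is straightforward to evaluate numerically. The following is a minimal sketch; the exponential potential and the field values are illustrative assumptions, not a fitted cosmological model.

```python
# w_q = (Q̇²/2 − V(Q)) / (Q̇²/2 + V(Q)) for a quintessence scalar field.
import math

def w_q(q_dot, v):
    """Equation-of-state parameter from the field's kinetic term and potential."""
    kinetic = 0.5 * q_dot ** 2
    return (kinetic - v) / (kinetic + v)

V = lambda q: math.exp(-q)   # toy potential V(Q) = e^{-Q} (illustrative choice)

print(w_q(0.0, V(1.0)))      # potential-dominated: w_q = -1, like a cosmological constant
print(w_q(10.0, V(1.0)))     # kinetic-dominated: w_q approaches +1
```

Intermediate ratios of kinetic to potential energy give −1 < w_q < 1, which is what makes the field dynamic rather than a fixed cosmological constant.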
**Carter–Goddard–Malrieu–Trinquier model** Carter–Goddard–Malrieu–Trinquier model: The Carter–Goddard–Malrieu–Trinquier model (better known as the CGMT model) is a model in inorganic chemistry, used for the description and prediction of distortions in multiple bonding systems of main group elements. Theory: The model considers the homolytic cleavage of the double bond in a system R1R2M=MR3R4; the two carbene-analogous fragments that result can each exist in either a singlet or a triplet state. Independently of this, the ground state of each fragment may be either a singlet or a triplet. E. A. Carter and W. A. Goddard III showed that the binding energy E_BE results from the intrinsic bond energy E_int minus the sum of the singlet–triplet excitation energies ΣΔE_{S→T} of the resulting fragments: E_BE = E_int − ΣΔE_{S→T}. This model was extended by G. Trinquier and J. P. Malrieu, who used ΣΔE_{S→T} to make statements about the geometry of a double bond system, characterized by the distance r between the metal centers and the tilt angle θ. In this picture, a coplanar structure (θ = 0°) is optimal for triplet fragments. For singlet fragments, however, there is a double donor–acceptor bond with an angle θ close to 45°.
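The Carter–Goddard energy relation can be sketched directly in code. The values below are placeholders chosen for illustration, not measured excitation energies.

```python
# Sketch of the CGMT relation E_BE = E_int − ΣΔE(S→T); units are arbitrary
# but consistent (e.g. kJ/mol). All numbers are illustrative placeholders.

def cgmt_binding_energy(e_int, st_gaps):
    """Binding energy of an R1R2M=MR3R4 double bond from the intrinsic bond
    energy and the singlet-triplet excitation energies of the two fragments."""
    return e_int - sum(st_gaps)

# Hypothetical fragments with singlet-triplet gaps of 30 and 40 kJ/mol:
print(cgmt_binding_energy(e_int=280.0, st_gaps=[30.0, 40.0]))   # 210.0
```

Large singlet–triplet gaps thus weaken the bond; these are also the singlet-fragment cases for which Trinquier and Malrieu predict the tilted (θ close to 45°) donor–acceptor geometry.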
**Canal Safety Gates** Canal Safety Gates: Canal safety gates or canal air raid protection gates are structures that were installed on canals specifically to reduce or prevent flood damage to dwellings, factories and other property in the event of aqueducts or canal banks being breached, whether through natural events or by enemy action during wars, insurgency or sabotage. They sometimes have a secondary function with regard to canal maintenance work. Substantial structures or simple 'stop gates' or 'stop planks' were used to prevent flooding and were usually only put in place when air raid warnings were given. Introduction: Large volumes of stored water have considerable destructive potential, and where structures such as canals run on embankments above low-lying built-up areas, or where aqueducts exist, appropriate safety precautions were taken either as a war-time contingency or at the time of construction. These 'canal safety gates' or 'canal air raid protection gates (ARPG)' were constructed and installed in proportion to the scale of the danger posed, and ranged from simple wooden planks known as 'stop gates' or 'stop planks' to more massive constructions of concrete and steel, such as the safety gates built on the Forth and Clyde Canal near Stockingfield Junction and on the Glasgow Branch at Firhill Road and Craighall Road. Where a water link was no longer commercially important, but still represented a risk in case of damage, it might be closed off permanently with concrete or an earth bank. This was done in Bristol at the beginning of WWII to protect the floating harbour by blocking the river access from the harbour at Bathurst Basin and the Feeder Canal at Totterdown Basin. Canals with safety gates: The Forth and Clyde Canal In 1942 two massive steel safety or stop gates were constructed on the Edinburgh side of Stockingfield Junction at what is known as the Stockingfield Narrows. The purpose of these two hand-cranked steel gates was to hold back the waters of the Forth and Clyde Canal to prevent serious flooding in Glasgow in the event of bombing destroying or breaching the nearby Stockingfield Aqueduct. The nearest lock on the Edinburgh main line that could control the water loss after a breach is 27 kilometres (17 miles) away at Wyndford, Lock 20. Further sets of safety or stop gates were also created in WWII on the Glasgow Branch at the Firhill Road Narrows and at Craighall Road Narrows near Speirs Wharf, protecting the city from potential damage to the two aqueducts on this route. The Stockingfield Narrows gates are substantially intact, whilst at Firhill Road Narrows mainly the concrete parts of the structures remain. Canals with safety gates: The Union Canal The Union Canal was built as a contour or mathematical canal and is approximately 52 kilometres (32 miles) in length, following the 73-metre (240-foot) contour throughout, thereby avoiding the need for locks but lacking this means of restricting water loss in the event of a breach. For safety, between 1818 and 1822 the engineers provided single-leaf timber gates at nineteen locations, both as a precaution against structural failures and for canal maintenance. Scottish Canals have had two timber bridge hole gates made to the original design and dimensions for installation at Linlithgow. Canals with safety gates: The Gloucester and Sharpness Canal The Gloucester and Sharpness Canal is a 27-kilometre-long (17-mile) canal, up to 5 metres (16 feet) in depth, so that in the event of a canal breach millions of litres of water would flood the area.
A series of safety gates are located along the canal; they are particularly important because an unusual feature of the canal, described as a contour canal, is its lack of locks. In an emergency these gates close automatically to ensure that any risk created by a flood is controlled, protecting Gloucester and the villages along the course of the canal to Sharpness. Canals with safety gates: The Grand Union and Regent's Canal The Grand Union Canal starts in London and runs to Birmingham with a total length of 220 kilometres (140 miles) and 166 locks. Safety or Air Raid Protection (ARP) gates, designed to close automatically if the canals were damaged during the WWII Luftwaffe's air raids, were installed at around 16 locations. A very large number of bombs fell in the vicinity of the canals in London during the war; however, no significant flooding resulted from damage to canals. The Air Raid Precautions (ARP) Department was created in 1935 to ensure that local authorities and other employers co-operated with central government. Canals on embankments through low-lying or built-up areas such as London were identified as being particularly vulnerable to bombing and sabotage. At the very least, resultant flooding would endanger lives, disrupt transport interchanges at King's Cross and Paddington and endanger factories in the Thames Valley. In 1938 stop planks and safety gates were installed in the Regent's Canal and in the Grand Union Canal in the Greater London area and its Slough branch. Stop plank grooves were cut at each end of the aqueducts and at all weir sluices, whilst the stop gates were built in such a way that they did not unduly obstruct canal traffic. Canals with safety gates: Birmingham Canal Navigations The Roundabout island at Old Turn Junction was installed during WWII to facilitate the insertion of safety gates to protect the railway tunnel of the Stour Valley railway line that runs beneath it, in the event of a breach through bombing. The canal at this point was too wide, and the island was required to narrow the canal enough for gates to be installed when required. Canals with safety gates: Dortmund–Ems Canal The Dortmund–Ems Canal in Germany was a prime target for bombing by the RAF in WWII and had safety gates installed to reduce flooding and loss of water from the canal, and to limit the number of boats stranded. The Danube–Tisa–Danube Canal The Danube–Tisa–Danube Canal system in Serbia has 24 gates, 16 locks, and five safety gates. Micro-history: In March 1883 six members of the Ribbon Society (Irish dissidents) attempted to blow up the Possil Road Aqueduct on the Glasgow Branch of the Forth and Clyde Canal.
**Demand sensing** Demand sensing: Demand sensing is a forecasting method that uses artificial intelligence and real-time data capture to create a forecast of demand based on the current realities of the supply chain. Traditionally, forecasting was based on time-series techniques, which create a forecast from prior sales history and draw on several years of data to provide insight into predictable seasonal patterns. Demand sensing uses a broader range of demand signals (including current data from the supply chain) and different mathematics to create a forecast that responds to real-world events such as market shifts, weather changes, natural disasters and changes in consumer buying behavior.
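The contrast between the two approaches can be illustrated with a toy sketch. Everything below (the seasonal-naive baseline, the blending weight, and the point-of-sale signal) is an invented illustration, not a description of any particular vendor's mathematics.

```python
# Toy contrast: history-only forecasting vs. a demand-sensing-style blend
# of the historical baseline with a recent real-time demand signal.

def seasonal_naive(history, season=7):
    """History-only forecast: repeat the value from one season ago."""
    return history[-season]

def sensed_forecast(history, recent_signal, weight=0.5, season=7):
    """Blend the seasonal baseline with a short-term signal such as
    current point-of-sale data (the weight is an illustrative placeholder)."""
    return (1 - weight) * seasonal_naive(history, season) + weight * recent_signal

daily_sales = [100, 120, 90, 95, 130, 160, 80] * 4        # four weeks of toy history
print(seasonal_naive(daily_sales))                        # 100: ignores current events
print(sensed_forecast(daily_sales, recent_signal=150.0))  # 125.0: shifts toward today's signal
```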
**2-satisfiability** 2-satisfiability: In computer science, 2-satisfiability, 2-SAT or just 2SAT is a computational problem of assigning values to variables, each of which has two possible values, in order to satisfy a system of constraints on pairs of variables. It is a special case of the general Boolean satisfiability problem, which can involve constraints on more than two variables, and of constraint satisfaction problems, which can allow more than two choices for the value of each variable. But in contrast to those more general problems, which are NP-complete, 2-satisfiability can be solved in polynomial time. 2-satisfiability: Instances of the 2-satisfiability problem are typically expressed as Boolean formulas of a special type, called 2-conjunctive normal form (2-CNF) or Krom formulas. Alternatively, they may be expressed as a special type of directed graph, the implication graph, which expresses the variables of an instance and their negations as vertices in a graph, and constraints on pairs of variables as directed edges. Both of these kinds of inputs may be solved in linear time, either by a method based on backtracking or by using the strongly connected components of the implication graph. Resolution, a method for combining pairs of constraints to make additional valid constraints, also leads to a polynomial time solution. The 2-satisfiability problems provide one of two major subclasses of the conjunctive normal form formulas that can be solved in polynomial time; the other of the two subclasses is Horn-satisfiability. 2-satisfiability: 2-satisfiability may be applied to geometry and visualization problems in which a collection of objects each have two potential locations and the goal is to find a placement for each object that avoids overlaps with other objects. Other applications include clustering data to minimize the sum of the diameters of the clusters, classroom and sports scheduling, and recovering shapes from information about their cross-sections. 2-satisfiability: In computational complexity theory, 2-satisfiability provides an example of an NL-complete problem, one that can be solved non-deterministically using a logarithmic amount of storage and that is among the hardest of the problems solvable in this resource bound. The set of all solutions to a 2-satisfiability instance can be given the structure of a median graph, but counting these solutions is #P-complete and therefore not expected to have a polynomial-time solution. Random instances undergo a sharp phase transition from solvable to unsolvable instances as the ratio of constraints to variables increases past 1, a phenomenon conjectured but unproven for more complicated forms of the satisfiability problem. A computationally difficult variation of 2-satisfiability, finding a truth assignment that maximizes the number of satisfied constraints, has an approximation algorithm whose optimality depends on the unique games conjecture, and another difficult variation, finding a satisfying assignment minimizing the number of true variables, is an important test case for parameterized complexity. Problem representations: A 2-satisfiability problem may be described using a Boolean expression with a special restricted form. It is a conjunction (a Boolean and operation) of clauses, where each clause is a disjunction (a Boolean or operation) of two variables or negated variables. The variables or their negations appearing in this formula are known as literals.
For example, the following formula is in conjunctive normal form, with seven variables, eleven clauses, and 22 literals: (x_0 ∨ x_2) ∧ (x_0 ∨ ¬x_3) ∧ (x_1 ∨ ¬x_3) ∧ (x_1 ∨ ¬x_4) ∧ (x_2 ∨ ¬x_4) ∧ (x_0 ∨ ¬x_5) ∧ (x_1 ∨ ¬x_5) ∧ (x_2 ∨ ¬x_5) ∧ (x_3 ∨ x_6) ∧ (x_4 ∨ x_6) ∧ (x_5 ∨ x_6). The 2-satisfiability problem is to find a truth assignment to these variables that makes the whole formula true. Such an assignment chooses whether to make each of the variables true or false, so that at least one literal in every clause becomes true. For the expression shown above, one possible satisfying assignment is the one that sets all seven of the variables to true. Every clause has at least one non-negated variable, so this assignment satisfies every clause. There are also 15 other ways of setting all the variables so that the formula becomes true. Therefore, the 2-satisfiability instance represented by this expression is satisfiable. Problem representations: Formulas in this form are known as 2-CNF formulas. The "2" in this name stands for the number of literals per clause, and "CNF" stands for conjunctive normal form, a type of Boolean expression in the form of a conjunction of disjunctions. They are also called Krom formulas, after the work of UC Davis mathematician Melven R. Krom, whose 1967 paper was one of the earliest works on the 2-satisfiability problem. Each clause in a 2-CNF formula is logically equivalent to an implication from one variable or negated variable to the other. For example, the second clause in the example may be written in any of three equivalent ways: (x_0 ∨ ¬x_3), (¬x_0 ⇒ ¬x_3), or (x_3 ⇒ x_0). Because of this equivalence between these different types of operation, a 2-satisfiability instance may also be written in implicative normal form, in which we replace each or clause in the conjunctive normal form by the two implications to which it is equivalent. A third, more graphical way of describing a 2-satisfiability instance is as an implication graph. An implication graph is a directed graph in which there is one vertex per variable or negated variable, and an edge connecting one vertex to another whenever the corresponding variables are related by an implication in the implicative normal form of the instance. An implication graph must be a skew-symmetric graph, meaning that it has a symmetry that takes each variable to its negation and reverses the orientations of all of the edges. Algorithms: Several algorithms are known for solving the 2-satisfiability problem. The most efficient of them take linear time. Algorithms: Resolution and transitive closure Krom (1967) described the following polynomial time decision procedure for solving 2-satisfiability instances. Suppose that a 2-satisfiability instance contains two clauses that both use the same variable x, but that x is negated in one clause and not in the other. Then the two clauses may be combined to produce a third clause, having the two other literals in the two clauses; this third clause must also be satisfied whenever the first two clauses are both satisfied. For instance, we may combine the clauses (a∨b) and (¬b∨¬c) in this way to produce the clause (a∨¬c). In terms of the implicative form of a 2-CNF formula, this rule amounts to finding two implications ¬a⇒b and b⇒¬c, and inferring by transitivity a third implication ¬a⇒¬c. Krom writes that a formula is consistent if repeated application of this inference rule cannot generate both the clauses (x∨x) and (¬x∨¬x), for any variable x. As he proves, a 2-CNF formula is satisfiable if and only if it is consistent. For, if a formula is not consistent, it is not possible to satisfy both of the two clauses (x∨x) and (¬x∨¬x) simultaneously.
And, if it is consistent, then the formula can be extended by repeatedly adding one clause of the form (x∨x) or (¬x∨¬x) at a time, preserving consistency at each step, until it includes such a clause for every variable. At each of these extension steps, one of these two clauses may always be added while preserving consistency, for if not then the other clause could be generated using the inference rule. Once all variables have a clause of this form in the formula, a satisfying assignment of all of the variables may be generated by setting a variable x to true if the formula contains the clause (x∨x) and setting it to false if the formula contains the clause (¬x∨¬x). Krom was concerned primarily with completeness of systems of inference rules, rather than with the efficiency of algorithms. However, his method leads to a polynomial time bound for solving 2-satisfiability problems. By grouping together all of the clauses that use the same variable, and applying the inference rule to each pair of clauses, it is possible to find all inferences that are possible from a given 2-CNF instance, and to test whether it is consistent, in total time O(n^3), where n is the number of variables in the instance. This formula comes from multiplying the number of variables by the O(n^2) number of pairs of clauses involving a given variable, to which the inference rule may be applied. Thus, it is possible to determine whether a given 2-CNF instance is satisfiable in time O(n^3). Because finding a satisfying assignment using Krom's method involves a sequence of O(n) consistency checks, it would take time O(n^4). Even, Itai & Shamir (1976) quote a faster time bound of O(n^2) for this algorithm, based on more careful ordering of its operations. Nevertheless, even this smaller time bound was greatly improved by the later linear time algorithms of Even, Itai & Shamir (1976) and Aspvall, Plass & Tarjan (1979). Algorithms: In terms of the implication graph of the 2-satisfiability instance, Krom's inference rule can be interpreted as constructing the transitive closure of the graph. As Cook (1971) observes, it can also be seen as an instance of the Davis–Putnam algorithm for solving satisfiability problems using the principle of resolution. Its correctness follows from the more general correctness of the Davis–Putnam algorithm. Its polynomial time bound follows from the fact that each resolution step increases the number of clauses in the instance, which is upper bounded by a quadratic function of the number of variables. Algorithms: Limited backtracking Even, Itai & Shamir (1976) describe a technique involving limited backtracking for solving constraint satisfaction problems with binary variables and pairwise constraints. They apply this technique to a problem of classroom scheduling, but they also observe that it applies to other problems including 2-SAT. The basic idea of their approach is to build a partial truth assignment, one variable at a time. Certain steps of the algorithm are "choice points", points at which a variable can be given either of two different truth values, and later steps in the algorithm may cause it to backtrack to one of these choice points. However, only the most recent choice can be backtracked over. All choices made earlier than the most recent one are permanent. Initially, there is no choice point, and all variables are unassigned.
At each step, the algorithm chooses the variable whose value to set, as follows: If there is a clause both of whose variables are already set, in a way that falsifies the clause, then the algorithm backtracks to its most recent choice point, undoing the assignments it made since that choice, and reverses the decision made at that choice. If there is no choice point, or if the algorithm has already backtracked over the most recent choice point, then it aborts the search and reports that the input 2-CNF formula is unsatisfiable. Algorithms: If there is a clause in which one of the clause's two variables has already been set, and the clause could still become either true or false, then the other variable is set in a way that forces the clause to become true. Algorithms: In the remaining case, each clause is either guaranteed to become true no matter how the remaining variables are assigned, or neither of its two variables has been assigned yet. In this case the algorithm creates a new choice point and sets any one of the unassigned variables to an arbitrarily chosen value. Intuitively, the algorithm follows all chains of inference after making each of its choices. This either leads to a contradiction and a backtracking step, or, if no contradiction is derived, it follows that the choice was a correct one that leads to a satisfying assignment. Therefore, the algorithm either correctly finds a satisfying assignment or it correctly determines that the input is unsatisfiable. Even et al. did not describe in detail how to implement this algorithm efficiently. They state only that by "using appropriate data structures in order to find the implications of any decision", each step of the algorithm (other than the backtracking) can be performed quickly. However, some inputs may cause the algorithm to backtrack many times, each time performing many steps before backtracking, so its overall complexity may be nonlinear. To avoid this problem, they modify the algorithm so that, after reaching each choice point, it begins simultaneously testing both of the two assignments for the variable set at the choice point, spending equal numbers of steps on each of the two assignments. As soon as the test for one of these two assignments would create another choice point, the other test is stopped, so that at any stage of the algorithm there are only two branches of the backtracking tree that are still being tested. In this way, the total time spent performing the two tests for any variable is proportional to the number of variables and clauses of the input formula whose values are permanently assigned. As a result, the algorithm takes linear time in total. Algorithms: Strongly connected components Aspvall, Plass & Tarjan (1979) found a simpler linear time procedure for solving 2-satisfiability instances, based on the notion of strongly connected components from graph theory. Two vertices in a directed graph are said to be strongly connected to each other if there is a directed path from one to the other and vice versa. This is an equivalence relation, and the vertices of the graph may be partitioned into strongly connected components, subsets within which every two vertices are strongly connected. There are several efficient linear time algorithms for finding the strongly connected components of a graph, based on depth-first search: Tarjan's strongly connected components algorithm and the path-based strong component algorithm each perform a single depth-first search.
Kosaraju's algorithm performs two depth-first searches, but is very simple. Algorithms: In terms of the implication graph, two literals belong to the same strongly connected component whenever there exist chains of implications from one literal to the other and vice versa. Therefore, the two literals must have the same value in any satisfying assignment to the given 2-satisfiability instance. In particular, if a variable and its negation both belong to the same strongly connected component, the instance cannot be satisfied, because it is impossible to assign both of these literals the same value. As Aspvall et al. showed, this is a necessary and sufficient condition: a 2-CNF formula is satisfiable if and only if there is no variable that belongs to the same strongly connected component as its negation. This immediately leads to a linear time algorithm for testing satisfiability of 2-CNF formulae: simply perform a strong connectivity analysis on the implication graph and check that each variable and its negation belong to different components. However, as Aspvall et al. also showed, it also leads to a linear time algorithm for finding a satisfying assignment, when one exists. Their algorithm performs the following steps: Construct the implication graph of the instance, and find its strongly connected components using any of the known linear-time algorithms for strong connectivity analysis. Algorithms: Check whether any strongly connected component contains both a variable and its negation. If so, report that the instance is not satisfiable and halt. Algorithms: Construct the condensation of the implication graph, a smaller graph that has one vertex for each strongly connected component, and an edge from component i to component j whenever the implication graph contains an edge uv such that u belongs to component i and v belongs to component j. The condensation is automatically a directed acyclic graph and, like the implication graph from which it was formed, it is skew-symmetric. Algorithms: Topologically order the vertices of the condensation. In practice this may be efficiently achieved as a side effect of the previous step, as components are generated by Kosaraju's algorithm in topological order and by Tarjan's algorithm in reverse topological order. Algorithms: For each component in the reverse topological order, if its variables do not already have truth assignments, set all the literals in the component to be true. This also causes all of the literals in the complementary component to be set to false. Due to the reverse topological ordering and the skew-symmetry, when a literal is set to true, all literals that can be reached from it via a chain of implications will already have been set to true. Symmetrically, when a literal x is set to false, all literals that lead to it via a chain of implications will themselves already have been set to false. Therefore, the truth assignment constructed by this procedure satisfies the given formula, which also completes the proof of correctness of the necessary and sufficient condition identified by Aspvall et al. As Aspvall et al. show, a similar procedure involving topologically ordering the strongly connected components of the implication graph may also be used to evaluate fully quantified Boolean formulae in which the formula being quantified is a 2-CNF formula. Applications: Conflict-free placement of geometric objects A number of exact and approximate algorithms for the automatic label placement problem are based on 2-satisfiability.
This problem concerns placing textual labels on the features of a diagram or map. Typically, the set of possible locations for each label is highly constrained, not only by the map itself (each label must be near the feature it labels, and must not obscure other features), but by each other: every two labels should avoid overlapping each other, for otherwise they would become illegible. In general, finding a label placement that obeys these constraints is an NP-hard problem. However, if each feature has only two possible locations for its label (say, extending to the left and to the right of the feature) then label placement may be solved in polynomial time. For, in this case, one may create a 2-satisfiability instance that has a variable for each label and that has a clause for each pair of labels that could overlap, preventing them from being assigned overlapping positions. If the labels are all congruent rectangles, the corresponding 2-satisfiability instance can be shown to have only linearly many constraints, leading to near-linear time algorithms for finding a labeling. Poon, Zhu & Chin (1998) describe a map labeling problem in which each label is a rectangle that may be placed in one of three positions with respect to a line segment that it labels: it may have the segment as one of its sides, or it may be centered on the segment. They represent these three positions using two binary variables in such a way that, again, testing the existence of a valid labeling becomes a 2-satisfiability problem. Formann & Wagner (1991) use 2-satisfiability as part of an approximation algorithm for the problem of finding square labels of the largest possible size for a given set of points, with the constraint that each label has one of its corners on the point that it labels. To find a labeling with a given size, they eliminate squares that, if doubled, would overlap another point, and they eliminate points that can be labeled in a way that cannot possibly overlap with another point's label. They show that these elimination rules cause the remaining points to have only two possible label placements per point, allowing a valid label placement (if one exists) to be found as the solution to a 2-satisfiability instance. By searching for the largest label size that leads to a solvable 2-satisfiability instance, they find a valid label placement whose labels are at least half as large as the optimal solution. That is, the approximation ratio of their algorithm is at most two. Similarly, if each label is rectangular and must be placed in such a way that the point it labels is somewhere along its bottom edge, then using 2-satisfiability to find the largest label size for which there is a solution in which each label has the point on a bottom corner leads to an approximation ratio of at most two. Similar applications of 2-satisfiability have been made for other geometric placement problems. In graph drawing, if the vertex locations are fixed and each edge must be drawn as a circular arc with one of two possible locations (for instance as an arc diagram), then the problem of choosing which arc to use for each edge in order to avoid crossings is a 2-satisfiability problem with a variable for each edge and a constraint for each pair of placements that would lead to a crossing. However, in this case it is possible to speed up the solution, compared to an algorithm that builds and then searches an explicit representation of the implication graph, by searching the graph implicitly.
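To make this concrete, the sketch below encodes a toy version of the two-position label-placement problem as 2-SAT and solves it with the strongly-connected-components method of Aspvall, Plass & Tarjan described earlier, using Kosaraju's algorithm for the strong connectivity analysis. The four labels and their conflict pairs are invented for illustration.

```python
# 2-SAT via strongly connected components, applied to a toy label-placement
# instance. Literal +i means "label i goes left" (x_i true), -i means right.

def solve_2sat(num_vars, clauses):
    """Clauses are pairs of 1-based literals. Returns a satisfying
    assignment as a list of booleans, or None if unsatisfiable."""
    n = 2 * num_vars
    node = lambda lit: 2 * (abs(lit) - 1) + (lit < 0)   # x -> 2k, not-x -> 2k+1
    graph = [[] for _ in range(n)]
    rgraph = [[] for _ in range(n)]
    for a, b in clauses:               # clause (a or b): not-a => b, not-b => a
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            graph[u].append(v)
            rgraph[v].append(u)

    # Kosaraju's algorithm: order vertices by DFS finish time, then sweep
    # the reversed graph in decreasing finish order to label components.
    seen, order = [False] * n, []
    def dfs1(v):
        seen[v] = True
        for w in graph[v]:
            if not seen[w]:
                dfs1(w)
        order.append(v)
    for v in range(n):
        if not seen[v]:
            dfs1(v)

    comp = [None] * n
    def dfs2(v, label):
        comp[v] = label
        for w in rgraph[v]:
            if comp[w] is None:
                dfs2(w, label)
    labels = 0
    for v in reversed(order):
        if comp[v] is None:
            dfs2(v, labels)
            labels += 1

    # Kosaraju numbers the components in topological order of the
    # condensation; per Aspvall et al., a literal is true exactly when
    # its component comes later in that order than its negation's.
    result = []
    for k in range(num_vars):
        if comp[2 * k] == comp[2 * k + 1]:
            return None                # x_k and its negation strongly connected
        result.append(comp[2 * k] > comp[2 * k + 1])
    return result

# Toy conflicts, e.g. (-1, -2) says labels 1 and 2 cannot both go left.
conflicts = [(-1, -2), (1, 3), (-3, -4), (2, 4)]
print(solve_2sat(4, conflicts))        # a satisfying placement, e.g.
                                       # [False, True, True, False]
```

A production solver would replace the recursive searches with iterative ones, but the structure (implication graph, strong connectivity check, reverse-topological assignment) is exactly the linear-time procedure described above.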
Applications: In VLSI integrated circuit design, if a collection of modules must be connected by wires that can each bend at most once, then again there are two possible routes for the wires, and the problem of choosing which of these two routes to use, in such a way that all wires can be routed in a single layer of the circuit, can be solved as a 2-satisfiability instance. Boros et al. (1999) consider another VLSI design problem: the question of whether or not to mirror-reverse each module in a circuit design. This mirror reversal leaves the module's operations unchanged, but it changes the order of the points at which the input and output signals of the module connect to it, possibly changing how well the module fits into the rest of the design. Boros et al. consider a simplified version of the problem in which the modules have already been placed along a single linear channel, in which the wires between modules must be routed, and there is a fixed bound on the density of the channel (the maximum number of signals that must pass through any cross-section of the channel). They observe that this version of the problem may be solved as a 2-satisfiability instance, in which the constraints relate the orientations of pairs of modules that are directly across the channel from each other. As a consequence, the optimal density may also be calculated efficiently, by performing a binary search in which each step involves the solution of a 2-satisfiability instance. Applications: Data clustering One way of clustering a set of data points in a metric space into two clusters is to choose the clusters in such a way as to minimize the sum of the diameters of the clusters, where the diameter of any single cluster is the largest distance between any two of its points. This is preferable to minimizing the maximum cluster size, which may lead to very similar points being assigned to different clusters. If the target diameters of the two clusters are known, a clustering that achieves those targets may be found by solving a 2-satisfiability instance. The instance has one variable per point, indicating whether that point belongs to the first cluster or the second cluster. Whenever any two points are too far apart from each other for both to belong to the same cluster, a clause is added to the instance that prevents this assignment. Applications: The same method also can be used as a subroutine when the individual cluster diameters are unknown. To test whether a given sum of diameters can be achieved without knowing the individual cluster diameters, one may try all maximal pairs of target diameters that add up to at most the given sum, representing each pair of diameters as a 2-satisfiability instance and using a 2-satisfiability algorithm to determine whether that pair can be realized by a clustering. To find the optimal sum of diameters one may perform a binary search in which each step is a feasibility test of this type. The same approach also works to find clusterings that optimize other combinations than sums of the cluster diameters, and that use arbitrary dissimilarity numbers (rather than distances in a metric space) to measure the size of a cluster.
The time bound for this algorithm is dominated by the time to solve a sequence of 2-satisfiability instances that are closely related to each other, and Ramnath (2004) shows how to solve these related instances more quickly than if they were solved independently from each other, leading to a total time bound of O(n^3) for the sum-of-diameters clustering problem. Applications: Scheduling Even, Itai & Shamir (1976) consider a model of classroom scheduling in which a set of n teachers must be scheduled to teach each of m cohorts of students. The number of hours per week that teacher i spends with cohort j is described by entry R_ij of a matrix R given as input to the problem, and each teacher also has a set of hours during which he or she is available to be scheduled. As they show, the problem is NP-complete, even when each teacher has at most three available hours, but it can be solved as an instance of 2-satisfiability when each teacher only has two available hours. (Teachers with only a single available hour may easily be eliminated from the problem.) In this problem, each variable v_ij corresponds to an hour that teacher i must spend with cohort j, the assignment to the variable specifies whether that hour is the first or the second of the teacher's available hours, and there is a 2-satisfiability clause preventing any conflict of either of two types: two cohorts assigned to a teacher at the same time as each other, or one cohort assigned to two teachers at the same time. Miyashiro & Matsui (2005) apply 2-satisfiability to a problem of sports scheduling, in which the pairings of a round-robin tournament have already been chosen and the games must be assigned to the teams' stadiums. In this problem, it is desirable to alternate home and away games to the extent possible, avoiding "breaks" in which a team plays two home games in a row or two away games in a row. At most two teams can avoid breaks entirely, alternating between home and away games; no other team can have the same home-away schedule as these two, because then it would be unable to play the team with which it had the same schedule. Therefore, an optimal schedule has two breakless teams and a single break for every other team. Once one of the breakless teams is chosen, one can set up a 2-satisfiability problem in which each variable represents the home-away assignment for a single team in a single game, and the constraints enforce the properties that any two teams have a consistent assignment for their games, that each team have at most one break before and at most one break after the game with the breakless team, and that no team has two breaks. Therefore, testing whether a schedule admits a solution with the optimal number of breaks can be done by solving a linear number of 2-satisfiability problems, one for each choice of the breakless team. A similar technique also allows finding schedules in which every team has a single break, and maximizing rather than minimizing the number of breaks (to reduce the total mileage traveled by the teams). Applications: Discrete tomography Tomography is the process of recovering shapes from their cross-sections. In discrete tomography, a simplified version of the problem that has been frequently studied, the shape to be recovered is a polyomino (a subset of the squares in the two-dimensional square lattice), and the cross-sections provide aggregate information about the sets of squares in individual rows and columns of the lattice.
For instance, in the popular nonogram puzzles, also known as paint by numbers or griddlers, the set of squares to be determined represents the dark pixels in a binary image, and the input given to the puzzle solver tells him or her how many consecutive blocks of dark pixels to include in each row or column of the image, and how long each of those blocks should be. In other forms of digital tomography, even less information about each row or column is given: only the total number of squares, rather than the number and length of the blocks of squares. An equivalent version of the problem is that we must recover a given 0-1 matrix given only the sums of the values in each row and in each column of the matrix. Applications: Although there exist polynomial time algorithms to find a matrix having given row and column sums, the solution may be far from unique: any submatrix in the form of a 2 × 2 identity matrix can be complemented without affecting the correctness of the solution. Therefore, researchers have searched for constraints on the shape to be reconstructed that can be used to restrict the space of solutions. For instance, one might assume that the shape is connected; however, testing whether there exists a connected solution is NP-complete. An even more constrained version that is easier to solve is that the shape is orthogonally convex: having a single contiguous block of squares in each row and column. Applications: Improving several previous solutions, Chrobak & Dürr (1999) showed how to reconstruct connected orthogonally convex shapes efficiently, using 2-SAT. The idea of their solution is to guess the indexes of rows containing the leftmost and rightmost cells of the shape to be reconstructed, and then to set up a 2-satisfiability problem that tests whether there exists a shape consistent with these guesses and with the given row and column sums. They use four 2-satisfiability variables for each square that might be part of the given shape, one to indicate whether it belongs to each of four possible "corner regions" of the shape, and they use constraints that force these regions to be disjoint, to have the desired shapes, to form an overall shape with contiguous rows and columns, and to have the desired row and column sums. Their algorithm takes time O(m^3 n) where m is the smaller of the two dimensions of the input shape and n is the larger of the two dimensions. The same method was later extended to orthogonally convex shapes that might be connected only diagonally instead of requiring orthogonal connectivity. As part of a solver for full nonogram puzzles, Batenburg and Kosters (2008, 2009) used 2-satisfiability to combine information obtained from several other heuristics. Given a partial solution to the puzzle, they use dynamic programming within each row or column to determine whether the constraints of that row or column force any of its squares to be white or black, and whether any two squares in the same row or column can be connected by an implication relation. They also transform the nonogram into a digital tomography problem by replacing the sequence of block lengths in each row and column by its sum, and use a maximum flow formulation to determine whether this digital tomography problem combining all of the rows and columns has any squares whose state can be determined or pairs of squares that can be connected by an implication relation.
If either of these two heuristics determines the value of one of the squares, it is included in the partial solution and the same calculations are repeated. However, if both heuristics fail to set any squares, the implications found by both of them are combined into a 2-satisfiability problem and a 2-satisfiability solver is used to find squares whose value is fixed by the problem, after which the procedure is again repeated. This procedure may or may not succeed in finding a solution, but it is guaranteed to run in polynomial time. Batenburg and Kosters report that, although most newspaper puzzles do not need its full power, both this procedure and a more powerful but slower procedure which combines this 2-satisfiability approach with the limited backtracking of Even, Itai & Shamir (1976) are significantly more effective than the dynamic programming and flow heuristics without 2-satisfiability when applied to more difficult randomly generated nonograms. Applications: Renamable Horn satisfiability Next to 2-satisfiability, the other major subclass of satisfiability problems that can be solved in polynomial time is Horn-satisfiability. In this class of satisfiability problems, the input is again a formula in conjunctive normal form. It can have arbitrarily many literals per clause but at most one positive literal. Lewis (1978) found a generalization of this class, renamable Horn satisfiability, that can still be solved in polynomial time by means of an auxiliary 2-satisfiability instance. A formula is renamable Horn when it is possible to put it into Horn form by replacing some variables by their negations. To do so, Lewis sets up a 2-satisfiability instance with one variable for each variable of the renamable Horn instance, where the 2-satisfiability variables indicate whether or not to negate the corresponding renamable Horn variables. Applications: In order to produce a Horn instance, no two variables that appear in the same clause of the renamable Horn instance should appear positively in that clause; this constraint on a pair of variables is a 2-satisfiability constraint. By finding a satisfying assignment to the resulting 2-satisfiability instance, Lewis shows how to turn any renamable Horn instance into a Horn instance in polynomial time. By breaking up long clauses into multiple smaller clauses, and applying a linear-time 2-satisfiability algorithm, it is possible to reduce this to linear time. Applications: Other applications 2-satisfiability has also been applied to problems of recognizing undirected graphs that can be partitioned into an independent set and a small number of complete bipartite subgraphs, inferring business relationships among autonomous subsystems of the internet, and reconstruction of evolutionary trees. Complexity and extensions: NL-completeness A nondeterministic algorithm for determining whether a 2-satisfiability instance is not satisfiable, using only a logarithmic amount of writable memory, is easy to describe: simply choose (nondeterministically) a variable v and search (nondeterministically) for a chain of implications leading from v to its negation and then back to v. If such a chain is found, the instance cannot be satisfiable. By the Immerman–Szelepcsényi theorem, it is also possible in nondeterministic logspace to verify that a satisfiable 2-satisfiability instance is satisfiable. 
Complexity and extensions: 2-satisfiability is NL-complete, meaning that it is one of the "hardest" or "most expressive" problems in the complexity class NL of problems solvable nondeterministically in logarithmic space. Completeness here means that a deterministic Turing machine using only logarithmic space can transform any other problem in NL into an equivalent 2-satisfiability problem. Analogously to similar results for the more well-known complexity class NP, this transformation together with the Immerman–Szelepcsényi theorem allow any problem in NL to be represented as a second order logic formula with a single existentially quantified predicate with clauses limited to length 2. Such formulae are known as SO-Krom. Similarly, the implicative normal form can be expressed in first order logic with the addition of an operator for transitive closure. Complexity and extensions: The set of all solutions The set of all solutions to a 2-satisfiability instance has the structure of a median graph, in which an edge corresponds to the operation of flipping the values of a set of variables that are all constrained to be equal or unequal to each other. In particular, by following edges in this way one can get from any solution to any other solution. Conversely, any median graph can be represented as the set of solutions to a 2-satisfiability instance in this way. The median of any three solutions is formed by setting each variable to the value it holds in the majority of the three solutions. This median always forms another solution to the instance. Feder (1994) describes an algorithm for efficiently listing all solutions to a given 2-satisfiability instance, and for solving several related problems. Complexity and extensions: There also exist algorithms for finding two satisfying assignments that have the maximal Hamming distance from each other. Complexity and extensions: Counting the number of satisfying assignments #2SAT is the problem of counting the number of satisfying assignments to a given 2-CNF formula. This counting problem is #P-complete, which implies that it is not solvable in polynomial time unless P = NP. Moreover, there is no fully polynomial randomized approximation scheme for #2SAT unless NP = RP, and this holds even when the input is restricted to monotone 2-CNF formulas, i.e., 2-CNF formulas in which each literal is a positive occurrence of a variable. The fastest known algorithm for computing the exact number of satisfying assignments to a 2SAT formula runs in time O(1.2377^n). Complexity and extensions: Random 2-satisfiability instances One can form a 2-satisfiability instance at random, for a given number n of variables and m of clauses, by choosing each clause uniformly at random from the set of all possible two-variable clauses. When m is small relative to n, such an instance will likely be satisfiable, but larger values of m have smaller probabilities of being satisfiable. More precisely, if m/n is fixed as a constant α ≠ 1, the probability of satisfiability tends to a limit as n goes to infinity: if α < 1, the limit is one, while if α > 1, the limit is zero. Thus, the problem exhibits a phase transition at α = 1; a small numerical experiment illustrating this transition appears in the sketch at the end of this article. Complexity and extensions: Maximum-2-satisfiability In the maximum-2-satisfiability problem (MAX-2-SAT), the input is a formula in conjunctive normal form with two literals per clause, and the task is to determine the maximum number of clauses that can be simultaneously satisfied by an assignment. Like the more general maximum satisfiability problem, MAX-2-SAT is NP-hard.
The proof is by reduction from 3SAT. By formulating MAX-2-SAT as a problem of finding a cut (that is, a partition of the vertices into two subsets) maximizing the number of edges that have one endpoint in the first subset and one endpoint in the second, in a graph related to the implication graph, and applying semidefinite programming methods to this cut problem, it is possible to find in polynomial time an approximate solution that satisfies at least 0.940... times the optimal number of clauses. A balanced MAX 2-SAT instance is an instance of MAX 2-SAT where every variable appears positively and negatively with equal weight. For this problem, Austrin has improved the approximation ratio to approximately 0.943 (the exact bound is given by a trigonometric expression). Complexity and extensions: If the unique games conjecture is true, then it is impossible to approximate MAX 2-SAT, balanced or not, with an approximation constant better than 0.943... in polynomial time. Under the weaker assumption that P ≠ NP, the problem is only known to be inapproximable within a constant better than 21/22 = 0.95454... Various authors have also explored exponential worst-case time bounds for exact solution of MAX-2-SAT instances. Complexity and extensions: Weighted-2-satisfiability In the weighted 2-satisfiability problem (W2SAT), the input is an n-variable 2SAT instance and an integer k, and the problem is to decide whether there exists a satisfying assignment in which exactly k of the variables are true. The W2SAT problem includes as a special case the vertex cover problem, of finding a set of k vertices that together touch all the edges of a given undirected graph. For any given instance of the vertex cover problem, one can construct an equivalent W2SAT problem with a variable for each vertex of a graph. Each edge uv of the graph may be represented by a 2SAT clause u ∨ v that can be satisfied only by including either u or v among the true variables of the solution. Then the satisfying instances of the resulting 2SAT formula encode solutions to the vertex cover problem, and there is a satisfying assignment with k true variables if and only if there is a vertex cover with k vertices. Therefore, like vertex cover, W2SAT is NP-complete. Complexity and extensions: Moreover, in parameterized complexity W2SAT provides a natural W[1]-complete problem, which implies that W2SAT is not fixed-parameter tractable unless this holds for all problems in W[1]. That is, it is unlikely that there exists an algorithm for W2SAT whose running time takes the form f(k)·n^O(1). Even more strongly, W2SAT cannot be solved in time n^o(k) unless the exponential time hypothesis fails. Complexity and extensions: Quantified Boolean formulae As well as finding the first polynomial-time algorithm for 2-satisfiability, Krom (1967) also formulated the problem of evaluating fully quantified Boolean formulae in which the formula being quantified is a 2-CNF formula. The 2-satisfiability problem is the special case of this quantified 2-CNF problem, in which all quantifiers are existential. Krom also developed an effective decision procedure for these formulae. Aspvall, Plass & Tarjan (1979) showed that it can be solved in linear time, by an extension of their technique of strongly connected components and topological ordering. Complexity and extensions: Many-valued logics The 2-satisfiability problem can also be asked for propositional many-valued logics. The algorithms are not usually linear, and for some logics the problem is even NP-complete. See Hähnle (2001, 2003) for surveys.
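The random-instance phase transition described earlier is easy to observe experimentally. The following minimal sketch (not taken from any of the cited papers) pairs the standard implication-graph construction with Kosaraju's strongly-connected-components algorithm, i.e. the linear-time satisfiability criterion of Aspvall, Plass & Tarjan, and estimates the probability of satisfiability at several clause densities α = m/n; the instance size and trial count are arbitrary choices for illustration.

```python
import random

def satisfiable(n, clauses):
    """SCC-based 2-SAT check. Literal +v -> index 2*(v-1); -v -> 2*(v-1)+1."""
    idx = lambda lit: 2 * (abs(lit) - 1) + (lit < 0)
    N = 2 * n
    adj, radj = [[] for _ in range(N)], [[] for _ in range(N)]
    for a, b in clauses:                     # clause (a or b) gives two implications
        adj[idx(-a)].append(idx(b))          # not a -> b
        adj[idx(-b)].append(idx(a))          # not b -> a
        radj[idx(b)].append(idx(-a))
        radj[idx(a)].append(idx(-b))
    # Kosaraju pass 1: iterative DFS post-order on the implication graph
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(adj[v]):
                stack.append((v, i + 1))
                w = adj[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)
    # Kosaraju pass 2: sweep the reversed graph in reverse post-order
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    # satisfiable iff no variable shares an SCC with its own negation
    return all(comp[2 * v] != comp[2 * v + 1] for v in range(n))

def random_instance(n, m):
    """m clauses drawn uniformly: two distinct variables, random signs."""
    cls = []
    for _ in range(m):
        u, v = random.sample(range(1, n + 1), 2)
        cls.append((u * random.choice((1, -1)), v * random.choice((1, -1))))
    return cls

n, trials = 200, 100
for alpha in (0.5, 0.8, 1.0, 1.2, 2.0):
    m = int(alpha * n)
    p = sum(satisfiable(n, random_instance(n, m)) for _ in range(trials)) / trials
    print(f"alpha={alpha:.1f}  P(satisfiable) ~ {p:.2f}")
```

For α well below 1 nearly every instance comes out satisfiable, for α well above 1 nearly none do, and near α = 1 the estimate sits in between, in line with the limit law stated above.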
**Transposition (logic)** Transposition (logic): In propositional logic, transposition is a valid rule of replacement that permits one to switch the antecedent with the consequent of a conditional statement in a logical proof if they are also both negated. It is the inference from the truth of "A implies B" to the truth of "Not-B implies not-A", and conversely. It is very closely related to the rule of inference modus tollens. It is the rule that (P→Q) ⇔ (¬Q→¬P), where "⇔" is a metalogical symbol representing "can be replaced in a proof with". Formal notation: The transposition rule may be expressed as a sequent: (P→Q)⊢(¬Q→¬P) where ⊢ is a metalogical symbol meaning that (¬Q→¬P) is a syntactic consequence of (P→Q) in some logical system; or as a rule of inference: P→Q∴¬Q→¬P where the rule is that wherever an instance of "P→Q" appears on a line of a proof, it can be replaced with "¬Q→¬P"; or as the statement of a truth-functional tautology or theorem of propositional logic. The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as: (P→Q)→(¬Q→¬P) where P and Q are propositions expressed in some formal system. Traditional logic: Form of transposition In the inferred proposition, the consequent is the contradictory of the antecedent in the original proposition, and the antecedent of the inferred proposition is the contradictory of the consequent of the original proposition. The symbol for material implication signifies the proposition as a hypothetical, or the "if-then" form, e.g. "if P then Q". Traditional logic: The biconditional statement of the rule of transposition (↔) refers to the relation between hypothetical (→) propositions, with each proposition including an antecedent and a consequent term. As a matter of logical inference, to transpose or convert the terms of one proposition requires the conversion of the terms of the propositions on both sides of the biconditional relationship. That is, to transpose or convert (P → Q) to (Q → P) requires that the other proposition, (~Q → ~P), be transposed or converted to (~P → ~Q). Otherwise, to convert the terms of one proposition and not the other renders the rule invalid, violating the sufficient condition and necessary condition of the terms of the propositions, where the violation is that the changed proposition commits the fallacy of denying the antecedent or affirming the consequent by means of illicit conversion. Traditional logic: The truth of the rule of transposition is dependent upon the relations of sufficient condition and necessary condition in logic. Traditional logic: Sufficient condition In the proposition "If P then Q", the occurrence of 'P' is sufficient reason for the occurrence of 'Q'. 'P', as an individual or a class, materially implicates 'Q', but the relation of 'Q' to 'P' is such that the converse proposition "If Q then P" does not necessarily hold: 'Q' is not thereby a sufficient condition for 'P'. The rule of inference for sufficient condition is modus ponens, which is an argument for conditional implication: Premise (1): If P, then Q Premise (2): P Conclusion: Therefore, Q Traditional logic: Necessary condition Since the converse of premise (1) is not valid, all that can be stated of the relationship of 'P' and 'Q' is that in the absence of 'Q', 'P' does not occur, meaning that 'Q' is the necessary condition for 'P'.
The rule of inference for necessary condition is modus tollens: Premise (1): If P, then Q Premise (2): not Q Conclusion: Therefore, not P Traditional logic: Necessity and sufficiency example An example traditionally used by logicians contrasting sufficient and necessary conditions is the statement "If there is fire, then oxygen is present". An oxygenated environment is necessary for fire or combustion, but simply because there is an oxygenated environment does not necessarily mean that fire or combustion is occurring. While one can infer that fire stipulates the presence of oxygen, from the presence of oxygen the converse "If there is oxygen present, then fire is present" cannot be inferred. All that can be inferred from the original proposition is that "If oxygen is not present, then there cannot be fire". Traditional logic: Relationship of propositions The symbol for the biconditional ("↔") signifies that the relationship between the propositions is both necessary and sufficient, and is verbalized as "if and only if", or, according to the example, "If P then Q 'if and only if' if not Q then not P". Traditional logic: Necessary and sufficient conditions can be explained by analogy in terms of the concepts and the rules of immediate inference of traditional logic. In the categorical proposition "All S is P", the subject term 'S' is said to be distributed, that is, all members of its class are exhausted in its expression. Conversely, the predicate term 'P' cannot be said to be distributed, or exhausted in its expression, because it is indeterminate whether every instance of a member of 'P' as a class is also a member of 'S' as a class. All that can be validly inferred is that "Some P are S". Thus, the type 'A' proposition "All P is S" cannot be inferred by conversion from the original 'A' type proposition "All S is P". All that can be inferred is the type 'A' proposition "All non-P is non-S" (note that (P → Q) and (~Q → ~P) are both 'A' type propositions). Grammatically, one cannot infer "all mortals are men" from "All men are mortal". An 'A' type proposition can only be immediately inferred by conversion when both the subject and predicate are distributed, as in the inference "All bachelors are unmarried men" from "All unmarried men are bachelors". Traditional logic: Transposition and the method of contraposition In traditional logic the reasoning process of transposition as a rule of inference is applied to categorical propositions through contraposition and obversion, a series of immediate inferences where the rule of obversion is first applied to the original categorical proposition "All S is P", yielding the obverse "No S is non-P". In the obversion of the original proposition to an 'E' type proposition, both terms become distributed. The obverse is then converted, resulting in "No non-P is S", maintaining distribution of both terms. The "No non-P is S" is again obverted, resulting in the contrapositive "All non-P is non-S". Since nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it is permissible that it could be the original subject or its contradictory, and the predicate term of the resulting 'A' type proposition is again undistributed. This results in two contrapositives, one where the predicate term is distributed, and another where the predicate term is undistributed. Traditional logic: Differences between transposition and contraposition Note that the methods of transposition and contraposition should not be confused.
Contraposition is a type of immediate inference in which from a given categorical proposition another categorical proposition is inferred which has as its subject the contradictory of the original predicate. Since nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it is permissible that it could be the original subject or its contradictory. This is in contradistinction to the form of the propositions of transposition, which may be material implication, or a hypothetical statement. The difference is that in its application to categorical propositions the result of contraposition is two contrapositives, each being the obvert of the other, i.e. "No non-P is S" and "All non-P is non-S". The distinction between the two contrapositives is absorbed and eliminated in the principle of transposition, which presupposes the "mediate inferences" of contraposition and is also referred to as the "law of contraposition". Proofs: In classical propositional calculus system In Hilbert-style deductive systems for propositional logic, only one side of the transposition is taken as an axiom, and the other is a theorem. We describe a proof of this theorem in the system of three axioms proposed by Jan Łukasiewicz: A1. ϕ→(ψ→ϕ) A2. (ϕ→(ψ→ξ))→((ϕ→ψ)→(ϕ→ξ)) A3. (¬ϕ→¬ψ)→(ψ→ϕ) (A3) already gives one of the directions of the transposition. The other side, (ψ→ϕ)→(¬ϕ→¬ψ), is proven below, using the following lemmas proven here: (DN1) ¬¬p→p - Double negation (one direction) (DN2) p→¬¬p - Double negation (another direction) (HS1) (q→r)→((p→q)→(p→r)) - one form of Hypothetical syllogism (HS2) (p→q)→((q→r)→(p→r)) - another form of Hypothetical syllogism. We also use the method of the hypothetical syllogism metatheorem as a shorthand for several proof steps. Proofs: The proof is as follows:
1. q→¬¬q (instance of (DN2))
2. (q→¬¬q)→((p→q)→(p→¬¬q)) (instance of (HS1))
3. (p→q)→(p→¬¬q) (from (1) and (2) by modus ponens)
4. ¬¬p→p (instance of (DN1))
5. (¬¬p→p)→((p→¬¬q)→(¬¬p→¬¬q)) (instance of (HS2))
6. (p→¬¬q)→(¬¬p→¬¬q) (from (4) and (5) by modus ponens)
7. (p→q)→(¬¬p→¬¬q) (from (3) and (6) using the hypothetical syllogism metatheorem)
8. (¬¬p→¬¬q)→(¬q→¬p) (instance of (A3))
9. (p→q)→(¬q→¬p) (from (7) and (8) using the hypothetical syllogism metatheorem)
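Because transposition is a truth-functional equivalence, its validity can also be checked mechanically by exhausting the four truth assignments. The short check below is illustrative only; `implies` is a hypothetical helper encoding material implication.

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: 'if a then b' is false only when a and not b."""
    return (not a) or b

# Transposition as a replacement rule: (P -> Q) and (not Q -> not P) have the
# same truth value under every assignment, so each may replace the other.
assert all(implies(p, q) == implies(not q, not p)
           for p, q in product((False, True), repeat=2))

# The Principia Mathematica theorem (P -> Q) -> (not Q -> not P) is a tautology.
print(all(implies(implies(p, q), implies(not q, not p))
          for p, q in product((False, True), repeat=2)))  # True
```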
**Sony Alpha 550** Sony Alpha 550: The Sony Alpha a550 (DSLR-A550) is a midrange digital single-lens reflex camera (DSLR) marketed by Sony and aimed at enthusiasts; it was released in August 2009. The camera features a 14.2-megapixel APS-C type CMOS Exmor sensor and Sony's patented SteadyShot INSIDE stabilisation system, which works with any attached lens. Sony Alpha 550: The Sony Alpha a550's main selling point is its dual Live View modes: Sony's usual Live View mode, based on a secondary, smaller sensor, and another mode which uses the main sensor but offers no autofocus. The a550 also features a maximum continuous shooting speed of 7 frames/s when operating in speed-priority mode, a maximum ISO of 1600 in auto mode, and ISO 12800 in manual mode. The Sony Alpha a550 is the "big brother" of the Sony Alpha a500, an almost identical DSLR with a smaller 12.3-megapixel APS-C CMOS sensor, no 7 frames/s continuous shooting, and a lower-resolution LCD. Live View: The a550 stands out from other DSLRs in its price range with its use of two separate Live View modes. The first mode operates in the same way as in other DSLRs, by switching from the main sensor to a secondary, smaller sensor. This allows the a550 to autofocus in its first Live View mode; however, this mode only provides a 90% view of the final image the main sensor will take. Live View: The second Live View mode is named Manual Focus Check Live View. This mode uses the main 14.2-megapixel APS-C type CMOS Exmor sensor to provide a 100% view on the rear LCD. This mode does not allow autofocusing, hence the name Manual Focus Check Live View. Live View: When in Live View mode the a550 uses its 3.0-inch TFT Xtra Fine LCD with 921,600 dots, which can be adjusted to point downwards for overhead shooting or upwards for below-eye-level shooting. When Live View mode is not in use, the optical viewfinder (OVF) is used, which provides only 95% coverage of the actual image taken by the sensor. CMOS sensor: The a550 uses a 14.2-megapixel APS-C type CMOS Exmor sensor. It is a 23.4 x 15.6 mm sensor with an RGB color filter array and a built-in fixed low-pass filter, with 14.6 million total pixels and 14.2 million effective pixels. CMOS sensor: This sensor is capable of recording in 5 different file qualities/formats:
• RAW (.ARW)
• RAW + JPEG Fine
• RAW + JPEG Standard
• JPEG Fine
• JPEG Standard
and in 6 different image sizes:
3:2 • 4592 x 3056 (L) • 3344 x 2224 (M) • 2288 x 1520 (S)
16:9 • 4592 x 2576 (L) • 3344 x 1872 (M) • 2288 x 1280 (S)
Sony A550 Anti-Dust Technology: To help combat dust particles that settle on the sensor when changing lenses, Sony included both an anti-static coating on the sensor filter and anti-dust vibrations that automatically shake the sensor with the anti-shake mechanism each time the camera is shut off. There is also a manual cleaning mode, where the camera first shakes the sensor, then lifts the mirror and opens the shutter, allowing access to the sensor for use with a blower or other cleaning device. Sony A550 Autofocus: The Sony A550 provides both manual and automatic focus control modes, set by the Focus Mode switch on the left side of the camera body, or on the lens. The Function button provides access to additional AF modes and AF Area options. The Autofocus Mode option under the Function menu offers Single-shot AF (AF-S), Automatic AF (AF-A), and Continuous AF (AF-C) settings.
Single-shot AF acquires and locks focus when the shutter button is half-pressed, while Continuous AF mode constantly adjusts focus while the shutter button is half-pressed. The Automatic AF setting will lock focus on a still subject, but will switch to Continuous AF mode if the subject moves. Sony A550 Autofocus: Autofocus Area has three options available through the Function menu: Wide, Spot, and Local (manual setting). The default option is a nine-point Wide focus area, where the camera selects which of the nine focus points is used. (Note that only the center point utilizes a cross-type sensor, sensitive to detail in both the horizontal and vertical axes. The other 8 sensors are line-type, sensitive to detail in one direction only, although the four line sensors at the corners of the AF array are angled, so they respond to both horizontal and vertical detail.) You can override the chosen AF mode by pressing the AF button in the center of the Multi-controller on the camera's rear panel, which selects the center AF point (indicated by the target box in the center of the viewfinder). Wide AF bases its focus on the most prominent subject detail in the portion of the image that falls within the total AF area. Spot mode bases its focus on the AF point at the very center of the frame. The Local setting is Sony's terminology for a manual AF area selection, and lets you manually set the main AF point by using the Multi-controller to highlight one of the nine AF points. The active AF area is briefly illuminated in the viewfinder during autofocus. SteadyShot Stabilization: The Sony Alpha a550 incorporates Sony's patented SteadyShot INSIDE stabilization system, which works with any attached lens. This uses a sensor-shift mechanism which moves the entire sensor platform in two axes. Sony claims the system is good for between 2.5 and 4 stops of compensation depending on the lens and shooting conditions, and like the entry-level Alphas it is enabled or disabled from a menu option rather than with the physical switch of earlier models.
**Kurupath Radhakrishnan** Kurupath Radhakrishnan: Kurupath Radhakrishnan is an Indian neurologist and epileptologist who established the R Madhavan Nayar Center for Comprehensive Epilepsy Care (RMNC) at the Sree Chitra Tirunal Institute for Medical Sciences and Technology (SCTIMST), Thiruvananthapuram, India. He contributed to the resurgence of epilepsy surgery in India during the 1990s, after its decline in the 1970s. He also served as the director of SCTIMST from 2009 to 2013. After his retirement he worked at the Department of Neurology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka, India, and at the Department of Neurology, Amrita Advanced Epilepsy Center, Amrita Institute of Medical Sciences, Cochin, Kerala, India. Currently he is working as a senior consultant in the Department of Neurosciences, Avitis Institute of Medical Sciences, Nemmara, Palakkad, Kerala, India. Early life, education, and clinical training: Radhakrishnan was born to the late Dr. T Madhavan Nair and the late Smt. Kunhilakshmi Amma, the second of five children, at the Kurupath House in the village of Mathur, Palakkad district, Kerala, on 15 June 1948. He did his schooling at the Government Secondary School at Kottayi, Palakkad, and later studied a pre-degree course in biology at the Government Victoria College, Palakkad. He completed the MBBS course at the Government Medical College, Kozhikode, Kerala, in 1971, where he won gold medals in physiology and medicine. He pursued higher studies, an MD (Internal Medicine) and a DM (Neurology), at the Postgraduate Institute of Medical Education and Research (PGIMER), Chandigarh, in 1979. He was the first to be awarded the MNAMS (Neurology) by the National Academy of Medical Sciences, India, in 1980. Career: He started his professional career at PGIMER, Chandigarh, as a neurologist. Later he worked as faculty at the Medical University, Benghazi, Libya, where he became interested in neuroepidemiology, which led to advanced training in neuroepidemiology under Leonard T. Kurland, considered the father of neuroepidemiology, at the Mayo Clinic, Rochester, Minnesota, USA. Subsequently, he trained in EEG and epilepsy under Donald Klass at the Mayo Clinic. He returned to India in 1994 as Professor and Head of the Department of Neurology at the Sree Chitra Tirunal Institute for Medical Sciences and Technology, where he established the R Madhavan Nayar Center for Comprehensive Epilepsy Care using philanthropic funding in 1998. This was one of the pioneering centers in India that contributed to the paradigm shift in comprehensive epilepsy management. The center was mainly involved in 1) the medical, surgical, psychosocial and occupational management of individual patients with epilepsy; 2) educating primary and secondary care physicians about current trends in the management of epilepsy and enhancing public awareness of epilepsy in order to dispel prevailing misconceptions; and 3) undertaking clinical, applied and basic science research and evolving cost-effective investigative and treatment strategies. The center performs almost 100 epilepsy surgeries annually. Achievements: Prof. Radhakrishnan is a life member of the Neurological Society of India, the Indian Academy of Neurology and the Indian Epilepsy Association. He is a past president of the Indian Academy of Neurology. He is a Fellow of the National Academy of Medical Sciences, India (FAMS), the American Academy of Neurology (FAAN) and the American Neurological Association (FANA).
He was awarded the prestigious Mayo Clinical Award in 1994, and the 'Outstanding Achievement Epilepsy Award' at the Asian Oceanian Epilepsy Congress (AOEC) in March 2012. He has served on the editorial boards of Epilepsia, Neurology India and Indian Academy of Neurology. Recently, he was appointed Associate Editor of Epilepsy and Behavior Case Reports. He has over 300 scientific publications and several book chapters, with more than 8000 citations, an h-index of 45 and an i10-index of 146. He is also a recipient of the Lifetime Achievement Award of the Madras Neuro Trust.
**Gamma (eclipse)** Gamma (eclipse): Gamma (denoted as γ) of an eclipse describes how centrally the shadow of the Moon or Earth strikes the other body. This distance, measured at the moment when the axis of the shadow cone passes closest to the center of the Earth or Moon, is stated as a fraction of the equatorial radius of the Earth or Moon. Sign: The sign of gamma defines, for a solar eclipse, whether the axis of the shadow passes north or south of the center of the Earth; a positive value means north. For this purpose the Earth is taken as that half which is exposed to the Sun (this changes with the seasons and is not related directly to the Earth's poles or equator; thus, the center of the sunlit half is wherever the Sun is directly overhead). Sign: For a lunar eclipse, it defines whether the axis of the Earth's shadow passes north or south of the Moon; a positive value means south. Sign: Gamma changes monotonically throughout any single saros series. The change in gamma is larger when Earth is near its aphelion (June to July) than when it is near perihelion (December to January). For odd-numbered series (ascending node for solar eclipses and descending node for lunar eclipses), gamma decreases for solar eclipses and increases for lunar eclipses, while for even-numbered series (descending node for solar eclipses and ascending node for lunar eclipses), gamma increases for solar eclipses and decreases for lunar eclipses. This simple rule describes the current behavior of gamma, but this has not always been the case. The eccentricity of Earth's orbit is presently 0.0167, and is slowly decreasing. It was 0.0181 in the year -2000 and will be 0.0163 in +3000. In the past, when the eccentricity was larger, there were saros series in which the trend in gamma reversed for one or more saros cycles before resuming its original direction. These instances occur near perihelion, when the Sun's apparent motion is highest and may, in fact, overtake the eastward shift of the node. The resulting effect is a relative shift west of the node after one saros cycle instead of the usual eastward shift; consequently, gamma reverses direction. Limiting cases for solar eclipses on the earth: The absolute value of gamma (denoted as |γ|) allows us to distinguish different kinds of solar eclipses as seen from the Earth. If the Earth were a sphere, the limit for a central eclipse would be 1.0, but because of the oblateness of the Earth (which causes the distance between the Earth's north and south poles to be slightly shorter than if the Earth were perfectly spherical), it is 0.9972. Limiting cases for solar eclipses on the earth: If |γ| is 0, the axis of the shadow cone is exactly between the northern and southern halves of the sunlit side of the Earth when it passes over the center. Limiting cases for solar eclipses on the earth: If |γ| is lower than 0.9972, the eclipse is central. The axis of the shadow cone strikes the Earth and there are locations on Earth where the Moon can be seen centered in front of the Sun. Central eclipses can be total or annular (if the tip of the umbra only barely reaches the surface of the Earth, the type can change during the eclipse from annular to total and/or vice versa; this is called a hybrid eclipse). Limiting cases for solar eclipses on the earth: If |γ| is between 0.9677826 and 0.9972, the eclipse is central but has only one limit, because one edge of the shadow's path misses the Earth.
If |γ| is between 0.9972 and 1.0266174, the axis of the shadow cone misses the Earth, but, because the umbra or antumbra has a nonzero width, part of the umbra or antumbra may still touch down in the polar regions of the Earth. This is called a non-central total or annular eclipse. If |γ| is between 0.9972 and 1.0266174 and the special circumstances mentioned above do not occur, or if |γ| is greater than 1.0266174 but less than approximately 1.55, the eclipse is partial; the Earth traverses only the penumbra. Limiting cases for solar eclipses on the earth: If |γ| exceeds approximately 1.55 (about 1.53 for total and 1.57 for annular solar eclipses), the shadow cone misses the Earth completely, and no eclipse occurs. The solar eclipse of April 29, 2014, with a gamma of -0.99996, is an example of the special case of a non-central annular eclipse: the axis of the shadow cone barely missed Earth's south pole, so no central line could be specified for the zone of annular visibility. The next non-central eclipse of the 21st century is the total solar eclipse of April 9, 2043. Limiting cases for lunar eclipses on the moon with respect to Earth's umbral and penumbral shadows: There are three types of lunar eclipses:
Penumbral lunar eclipse: The Moon passes through Earth's penumbra, but the Earth's umbra misses the Moon.
Partial lunar eclipse: The Moon passes through Earth's umbra, but not completely.
Total lunar eclipse: The Moon passes through Earth's umbra completely.
The value of |γ| distinguishes the cases:
If |γ| is 0, the Moon's center passes exactly through the axis of the Earth's umbra.
If |γ| is lower than 0.2725, the lunar eclipse is central.
If |γ| is between 0.2725 and 0.47, the lunar eclipse is total.
If |γ| is between 0.43 and 0.987, the lunar eclipse is partial with a penumbral magnitude greater than 1.
If |γ| is between 0.987 and 1.0266174, the lunar eclipse is total penumbral or partial with a penumbral magnitude less than 1.
If |γ| is between 1.0266174 and approximately 1.55, the lunar eclipse is penumbral with a penumbral magnitude less than 1; the Moon traverses only the Earth's penumbra.
If |γ| exceeds approximately 1.55, the Earth's penumbra misses the Moon completely, and no eclipse occurs.
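The solar-eclipse limits above amount to a small decision procedure on |γ|. The following is a minimal illustrative sketch (the function name and return labels are invented here, and the outer cutoff of about 1.55 is approximate, as the text notes); it folds the non-central special case into a single branch rather than testing for it separately.

```python
def solar_eclipse_class(gamma: float) -> str:
    """Classify a solar eclipse from gamma using the limits quoted above."""
    g = abs(gamma)
    if g < 0.9972:
        return "central (total, annular, or hybrid)"
    if g < 1.0266174:
        # the umbra/antumbra may still graze a polar region here
        return "non-central total/annular possible, otherwise partial"
    if g < 1.55:  # approximate outer limit
        return "partial"
    return "no eclipse"

# The April 29, 2014 eclipse (gamma = -0.99996) was non-central annular:
print(solar_eclipse_class(-0.99996))
```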
**Jazz guitarist** Jazz guitarist: Jazz guitarists are guitarists who play jazz using an approach to chords, melodies, and improvised solo lines which is called jazz guitar playing. The guitar has fulfilled the roles of accompanist (rhythm guitar) and soloist in small and large ensembles and also as an unaccompanied solo instrument. Jazz guitarist: Until the 1930s, jazz bands used the banjo because the banjo's metallic twang was easier to hear than the acoustic guitar when competing with trumpets, trombones, and drums. The banjo could be heard more easily, too, on wax cylinders in the early days of audio recording. The invention of the archtop increased the guitar's volume, and in the hands of Eddie Lang the guitar became a solo instrument for the first time. Following the lead of Lang, musicians dropped their banjos for guitars, and by the 1930s the banjo hardly existed as a jazz instrument. Jazz guitarist: Amplification created possibilities for the guitar. Charlie Christian was the first to explore these possibilities. Although his career was brief, it was influential enough for critics to divide the history of jazz guitar into pre- and post-Christian eras. Early years: 1880s-1920s: In the early days of jazz in New Orleans most bands had guitarists, but there are no recordings by Lorenzo Staulz, Rene Baptiste, Dominick Barocco, Joe Guiffre, Coochie Martin, and Brock Mumford. Buddy Bolden, one of the earliest jazz musicians, played in a band in 1889 that was led by guitarist Charlie Galloway. King Oliver, another important early figure, belonged to a band in 1910 that was led by guitarist Louis Keppard, brother of Freddie Keppard. Although jazz guitar existed during these years, the banjo was a more popular instrument. The metallic twang of the banjo was easier to hear in a band than the acoustic guitar or piano, and it was easier to hear when recording on wax cylinders. The first person to make solo recordings on guitar was Nick Lucas, the dominant guitarist of the 1920s, who released "Pickin' the Guitar" and "Teasin' the Frets" in 1922. He had experimented with wax cylinders ten years earlier. He became the first person to have a custom guitar named after him, the Gibson Nick Lucas Special. Nevertheless, his career was built on his reputation as a singer. He was popular on radio, on Broadway, and in vaudeville. With his high-pitched voice, he sold eight million copies of his signature song, "Tiptoe Through the Tulips". Both the song and singing style were borrowed decades later by Tiny Tim. Replacing the banjo: The role of the early jazz guitarists was to be part of the rhythm section. Freddie Green played rhythm guitar for the Count Basie Orchestra from the 1930s until Basie's death in the 1980s, contributing to the band's swing by inverting chords, also known as revoicing, on each beat. Like Green, Eddie Condon played rhythm guitar his whole career without taking a solo. Allan Reuss gave rhythm guitar a place in the big band of Benny Goodman. The first jazz guitarist to step out from the rhythm section was Eddie Lang. Wanting to do more than strum chords for the band, Lang played single-string solos. He drew attention to himself while he was a member of the Paul Whiteman Orchestra and as a popular studio musician. Like most guitarists of the time, he started on banjo, and when he switched to guitar, many others followed. His Gibson L-5 archtop became a popular model among jazz guitarists.
By 1934, largely due to Lang, the guitar had replaced the banjo as a jazz instrument. Django Reinhardt's flashy style stood out in the early days of rhythm guitarists. He was born in Belgium to a gypsy family. His gypsy jazz was influenced by the flamenco guitar of Spanish gypsies and the violin of Hungarian gypsies. In the 1930s, he formed the Quintet of the Hot Club of France, consisting of three acoustic guitars, a violin, and a double bass. He toured the U.S. in 1946 with Duke Ellington. The gypsy jazz tradition has a small but loyal following that continued in the work of the Ferré family, the Schmitt family, Angelo Debarre, Christian Escoudé, Fapy Lafertin, Biréli Lagrène, Jon Larsen, Jimmy Rosenberg, and Stephane Wrembel. Amplification: Playing an unamplified archtop guitar is feasible for rhythm guitar accompaniment in some small groups playing in small venues. However, playing single-note guitar solos audibly without an amplifier is a challenge in larger ensembles and in larger halls. Django Reinhardt's Hot Club of France was a string quintet in which being heard over the other instruments was rarely a problem. The Argentinian Oscar Alemán, who was in Paris at the same time as Reinhardt, tried to overcome the problem of audibility by using a resonator guitar, as did Eddie Durham, an arranger and trombonist with the Jimmie Lunceford orchestra who also played guitar. Durham experimented with amplification and became the first person to make audio recordings with an electric guitar when he recorded with the Kansas City Five in the 1930s. He played a Gibson ES-150 archtop which Gibson had started producing a couple of years before. Durham persuaded Floyd Smith to buy an electric guitar, and while on tour he showed his amp to Charlie Christian. Many musicians were inspired to pick up the guitar after hearing Charlie Christian with the Benny Goodman orchestra. Christian was the first person to explore the possibilities created by the electric guitar. He drew large audiences when he played solos with passing chords. According to jazz critic Leonard Feather, Christian played a single-note line alongside a trumpet and saxophone, moving the guitar away from its secondary role in the rhythm section. He tried diminished and augmented chords. His rhythm suggested bebop. While in New York City, he spent many late hours at Minton's Playhouse in Harlem, playing with musicians such as Thelonious Monk and Dizzy Gillespie. Post-Christian era: Although Charlie Christian had a brief career, his impact was big enough that some critics divide the history of jazz guitar into pre-Christian and post-Christian eras. Mary Osborne saw Christian perform when he visited her home state of North Dakota in 1938. The performance inspired her to buy an electric guitar. Before Christian, George Barnes was experimenting with amplification in 1931. He claimed to be the first electric guitarist and the first to record with an electric guitar, on March 1, 1938, in sessions with blues guitarist Big Bill Broonzy, fifteen days before Eddie Durham recorded with the Kansas City Five. Oscar Moore, Irving Ashby, and John Collins were the successive members of the Nat King Cole Trio who helped establish the jazz trio format. In the early 1940s, Al Casey contributed to the liveliness of the Fats Waller Trio, while Tiny Grimes played electric four-string tenor guitar with the Art Tatum Trio. Kenny Burrell established himself in the guitar-bass-drums format during the 1950s, as did Barney Kessel and Herb Ellis with the Oscar Peterson Trio.
Kessel continued the swing aspect of Christian's music into the 1950s. As the swing era turned to bebop, guitarists moved away from Charlie Christian's style. Two pioneers of bebop, Charlie Parker and Dizzy Gillespie, recorded with young guitarists Bill DeArango and Remo Palmier and inspired Chuck Wayne to change his approach. After playing in big bands with Woody Herman and Benny Goodman, Billy Bauer explored unconventional territory with Lennie Tristano and Lee Konitz, playing dissonant chords and trying to adapt the abstraction of Konitz and Warne Marsh to the guitar. Although Jimmy Raney was influenced by Tristano, his harmonies were more subtle and logical. Johnny Smith carried this love of harmony into a romantic, chordal style, as in his hit ballad "Moonlight in Vermont". Tal Farlow avoided the abstraction of Tristano. Farlow attributed his ability to play quickly to the need to keep up with his bandleader, Red Norvo. Lenny Breau combined ensemble improvisation with a more orchestral finger-style approach to solo jazz guitar. He used many diverse elements of music, including closed voicings, flamenco-style guitar, varied rhythms, fingered harmonics, modal jazz harmony, an intimate knowledge of inversions and tritone substitutions, and a great understanding of bebop. Bossa nova became popular in the early 1960s in part because of the album Jazz Samba by Stan Getz and guitarist Charlie Byrd and the song "The Girl from Ipanema" by Antonio Carlos Jobim. Although bossa nova is not synonymous with jazz, the intermingling of bossa nova with jazz was fruitful for both genres. Brazilian guitarists include Antonio Carlos Jobim, Luiz Bonfá, Oscar Castro-Neves, João Gilberto, Baden Powell de Aquino, and Bola Sete. Fusion, technique, and invention: When rock guitarist Jimi Hendrix became popular in the 1960s, he created the persona of the guitar hero, the charismatic solo guitarist dazzling the audience. He created possibilities on the guitar through the use of electronic effect units. Hendrix inspired many musicians to pick up the electric guitar. Fusion, technique, and invention: One of them was Larry Coryell, who combined jazz and rock in the 1960s before the term jazz fusion was common. English guitarist John McLaughlin followed Coryell and Hendrix, but he explored other styles, too, such as blues, electronic, folk, free jazz, gypsy jazz, and Indian music. McLaughlin recorded an album of acoustic jazz in the early 1980s with guitarists Paco de Lucia and Al Di Meola. English guitarist Allan Holdsworth played jazz rock in the 1980s that was inspired by John Coltrane. Lee Ritenour is among the most popular jazz fusion guitarists. He established his name in the 1970s as a busy studio musician who recorded with acts in many genres. Fusion, technique, and invention: The hammer-on is a common technique in guitar, but in the 1980s Stanley Jordan was the first to extend the technique into his entire playing style. Jordan taps the fretboard with the fingertips of both hands, playing the neck of the guitar like a piano. Enver Izmaylov, a native of Uzbekistan, uses a similar two-handed technique to adapt his country's folk music to jazz. Others using tapping techniques to a lesser degree include David Torn and Tuck Andress. Some fusion guitarists reacted against the excesses of their predecessors by playing in a more restrained style; these include Larry Carlton, Steve Khan, and Terje Rypdal.
Mike Stern began his career with the band Blood, Sweat & Tears, then was a member of Miles Davis's band in the 1980s. Charlie Hunter invented the eight-string electric guitar, which gives the impression of two guitarists playing simultaneously. He adapts to guitar the Hammond B3 organ grooves of Jimmy Smith and Larry Young. 1960s-2010s: Influences from free jazz in the 1960s made their way to the guitar. Sonny Sharrock used dissonance, distortion effects units, and other electronic gear to create sonic "sheets of noise" that drove some listeners away when he performed at festivals. He refused to play chords and called himself a horn player, horn players being the source of his inspiration. English guitarist Derek Bailey established his reputation as part of the European free jazz scene. Like Sharrock, he sought liberation for its own sake and the breaking of all conventions in the name of originality. He belonged to the Spontaneous Music Ensemble in the 1970s. Beginning in the 1990s, he formed duos with DJs, Chinese pipa musicians, and Pat Metheny.
**Metre sea water** Metre sea water: The metre (or meter) sea water (msw) is a metric unit of pressure used in underwater diving. It is defined as one tenth of a bar. The unit used in the US is the foot sea water (fsw), based on standard gravity and a sea-water density of 64 lb/ft3. According to the US Navy Diving Manual, one fsw equals 0.30643 msw, 0.030643 bar, or 0.44444 psi, though elsewhere it states that 33 fsw is 14.7 psi (one atmosphere), which gives one fsw equal to about 0.445 psi. The msw and fsw are the conventional units for measurement of diver pressure exposure used in decompression tables and the units of calibration for pneumofathometers and hyperbaric chamber pressure gauges. Feet of sea water: One atmosphere is approximately equal to 33 feet of sea water or 14.7 psi, which gives 14.7/33 (= 4.9/11), or about 0.445 psi per foot. Atmospheric pressure may be considered constant at sea level, and minor fluctuations caused by the weather are usually ignored. Pressures measured in fsw and msw are gauge pressures, relative to the surface pressure of 1 atm absolute, except when a pressure difference is measured between the locks of a hyperbaric chamber, which is also generally measured in fsw and msw. Feet of sea water: The pressure of sea water at a depth of 33 feet equals one atmosphere. The absolute pressure at 33 feet depth in sea water is the sum of the atmospheric and hydrostatic pressures for that depth, and is 66 fsw, or two atmospheres absolute. For every additional 33 feet of depth, another atmosphere of pressure accumulates. Therefore at the surface the gauge pressure of 0 fsw is equivalent to an absolute pressure of 1 standard atmosphere (14.7 psi), and the gauge pressure in fsw at any depth is incremented by 1 ata to provide absolute pressure. (Pressure in ata = depth in feet/33 + 1.) Usage: In diving, absolute pressure is used in most computations, particularly for decompression and breathing gas consumption, but depth is measured by way of hydrostatic pressure. In metric units the ambient pressure is usually measured in metres sea water (msw) and converted to bar for calculations. In US customary units ambient pressure is normally measured in feet of sea water (fsw) and converted to atmospheres absolute or pounds per square inch absolute (psia) for decompression computation. Feet and metres sea water are convenient measures which approximate closely to depth and are intuitively simple to grasp for the diver, compared to more conventional units of pressure, which give no direct indication of depth. The distinction between gauge and absolute pressure is important for calculation of gas properties, and pressure must be identified as either gauge or absolute. Gauge pressure in msw or fsw is converted to absolute pressure in bar or atm for decompression and gas consumption calculations, but decompression tables are usually provided ready for use directly with the gauge pressure in msw and fsw. Depth gauges and dive computers with readouts calibrated in feet and metres are actually displaying a pressure measurement, usually in feet or metres sea water, as most diving is done in the sea. If ambient pressure in fresh water and hyperbaric chambers is measured in feet and metres sea water, the same decompression algorithms and tables can be used, which eliminates the need for calibration factors when diving in these environments. Conversions: In the metric system, a pressure of 10 msw is defined as 1 bar.
Pressure conversion between msw and fsw is slightly different from length conversion between metres and feet; 10 msw = 32.6336 fsw, while 10 m = 32.8083 ft. The US Navy Diving Manual gives conversion factors for "fw" (feet water) based on a fresh water density of 62.4 lb/ft3 and for fsw based on a sea water density of 64.0 lb/ft3.
One standard metre sea water equals:
3.26336 fsw
102.018 cmH2O at 15 °C
0.1 bar by definition
10.0 kPa, in SI units
100000 Ba, in cgs units
One standard metre sea water is also approximately equal to:
0.0986923 atm
1.45038 psi
75.0062 mmHg
75.0062 Torr
2.95299 inHg
One standard foot sea water is approximately equal to:
0.30643 msw
3.0643 kPa, in SI units
30643 Ba, in cgs units
0.030242 atm
0.44444 psi
22.984 mmHg
22.984 Torr
0.904884 inHg
31.24616 cmH2O
Similar units: Feet fresh water (ffw) or feet water (fw), equivalent to 1/34 atm.
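Because the msw and fsw are defined by fixed factors, the conversions above reduce to a few multiplications. The sketch below is illustrative (the constant and function names are invented here) and uses only the factors quoted in this article: 1 msw = 0.1 bar by definition and 1 fsw = 0.30643 msw.

```python
# Conversion helpers built from the factors listed above.
MSW_PER_FSW = 0.30643   # US Navy convention
BAR_PER_MSW = 0.1       # by definition
SURFACE_BAR = 1.01325   # 1 standard atmosphere at the surface

def fsw_to_msw(depth_fsw: float) -> float:
    return depth_fsw * MSW_PER_FSW

def gauge_msw_to_absolute_bar(depth_msw: float) -> float:
    """Depth gauges read gauge pressure; add surface pressure for absolute."""
    return SURFACE_BAR + depth_msw * BAR_PER_MSW

print(fsw_to_msw(33.0))                 # ~10.11 msw: about one atmosphere gauge
print(gauge_msw_to_absolute_bar(30.0))  # ~4.01 bar absolute at 30 msw
```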
**3-Aminobenzoic acid** 3-Aminobenzoic acid: 3-Aminobenzoic acid (also known as meta-aminobenzoic acid or MABA) is an organic compound with the molecular formula H2NC6H4CO2H. MABA is a white solid, although commercial samples are often colored. It is only slightly soluble in water. It is soluble in acetone, boiling water, hot alcohol, hot chloroform and ether. It consists of a benzene ring substituted with an amino group and a carboxylic acid.
**Glossary of industrial automation** Glossary of industrial automation: This glossary of industrial automation is a list of definitions of terms and illustrations related specifically to the field of industrial automation. For a more general view of electrical engineering, see Glossary of electrical and electronics engineering. For terms related to engineering in general, see Glossary of engineering.
A:
abbreviated address calling: Calling that enables a user to employ an address having fewer characters than the full address when initiating a call.
absolute coordinates: The absolute distances or angles that specify the position of a point with respect to the datum of a coordinate system.
absolute coordinate: One of the coordinates that identify the position of an addressable point with respect to the origin of a specified coordinate system.
absolute error: The algebraic result of subtracting a true, specified or theoretically correct value from the computed, observed, measured or achieved value.
absolute instruction: A display command using absolute coordinates.
absolute position sensor: A sensor that gives directly the coordinate position of an element of a machine.
absolute programming: Programming using words indicating absolute dimensions (absolute coordinates).
absolute vector: A vector whose start and end points are specified in absolute coordinates.
acceleration: Rate of change of the velocity at the point under consideration per unit of time.
accuracy: A qualitative assessment of freedom from error or of the degree of conformity to a desired value, a high assessment corresponding to a small error.
active accommodation: Type of control in which the combination of sensor outputs, control commands, and robot motion is used to achieve alteration of a robot's preprogrammed motions in response to sensed inputs (e.g., used to stop a robot when forces reach set levels, or to perform force feedback tasks like insertions, door opening and edge tracing).
active devices: Devices which require a power supply independent of the value of input signals.
active output: Output the power of which in all possible states of the device is derived from supply power.
actual conditions: Conditions observed during operation.
actuator: A power mechanism used to effect motion of the robot (e.g. a motor which converts electrical, hydraulic or pneumatic energy to effect motion of the robot).
adaptive control: A control scheme that adjusts the control system parameters from conditions detected during the process.
address (in numerical control): A character, or group of characters, at the beginning of a word, that identifies the data following in the word.
address block format: A block format in which each word contains an address.
address tabulation block format: A tabulation block format in which each word contains an address.
addressable point: Any point of a device that can be addressed.
aiming field: On a display surface, a circle or other pattern of light used to indicate the area in which the presence of a light-pen can be detected at a given time.
alignment function character: The character ":" used as the address character for a sequence number word that indicates a block in a control tape after which are recorded the data necessary for machining to be commenced or recommenced.
alignment pose: A specified pose of the mechanical interface coordinate system in relation to the base coordinate system.
ambient temperature: Temperature of the environment in which the apparatus is working.
amplification: Ratio between the output signal variations and the control signal variations (for analogue devices only).
amplifier: Any device that increases the magnitude of an applied signal. It receives an input signal and delivers a larger output signal that, in addition to its increased amplitude, is a replica of the input signal.
analog data: Data represented by a physical quantity that is considered to be continuously variable and whose magnitude is made directly proportional to the data or to a suitable function of the data.
analog input channel amplifier: An amplifier attached to one or more analog input channels, that adapts the analog signal level to the input range of the succeeding analog-to-digital converter.
analog input channel (in process control): The analog data path between the connector and the analog-to-digital converter in the analog input subsystem.
analog output channel amplifier: An amplifier attached to one or more analog output channels, that adapts the output signal range of the digital-to-analog converter to the signal level necessary to control the technical process.
analog representation: A representation of the value of a variable by a physical quantity that is considered to be continuously variable, the magnitude of the physical quantity being made directly proportional to the variable or to a suitable function of the variable.
analogue amplifier: Amplifier the output of which is continuously variable with the applied control signal.
anisochronous transmission: A data transmission process in which there is always an integral number of unit intervals between any two significant instants in the same group; between two significant instants located in different groups, there is not always an integral number of unit intervals.
answering: The process of responding to a calling station to complete the establishment of a connection between data stations.
anti-vibration mounting: Device for insulating machine vibrations from the structure upon which it is mounted.
argument (in numerical control): Data which qualifies a command.
arm (primary axes): An interconnected set of links and powered joints comprising members of longitudinal shape which supports, positions and orientates the wrist and/or an end effector.
articulated structure: Set of links and joints which constitutes the arm and the wrist.
asynchronous transmission: Data transmission in which the time of occurrence of the start of each character, or block of characters, is arbitrary; once started, the time of occurrence of each signal representing a bit within the character, or block, has the same relationship to significant instants of a fixed time base.
attained pose: The pose achieved by the robot in response to the command pose.
automatic: Pertaining to a process or device that, under specified conditions, functions without human intervention.
automatic answering: Answering in which the called data terminal equipment (DTE) automatically responds to the calling signal.
automatic calling (in a data network): Calling in which the elements of the selection signal are entered into the data network contiguously at the full data signaling rate.
automatic control: Control method which operates without human intervention.
automatic cycle: Cycle of operations which, once started, repeats indefinitely until stopped.
automatic mode: The operating mode in which the robot control system can operate in accordance with the task program.
automatic mode of operation: The mode of operation of a numerically controlled machine in which it operates in accordance with the control data until stopped by the program or the operator.
automation: The implementation of processes by automatic means.
axis: 1. A direction in which a part of a robot can move in a linear or rotary mode. The number of axes is normally the number of guided and mutually independently driven links. 2. A direction in which a part of a machine can move in a linear or rotary mode.
**Poker** Poker: Poker is a family of comparing card games in which players wager over which hand is best according to that specific game's rules. It is played worldwide, but in some places the rules may vary. While the earliest known form of the game was played with just 20 cards, today it is usually played with a standard deck, although in countries where short packs are common, it may be played with 32, 40 or 48 cards. Thus poker games vary in deck configuration, the number of cards in play, the number dealt face up or face down, and the number shared by all players, but all have rules that involve one or more rounds of betting. Poker: In most modern poker games, the first round of betting begins with one or more of the players making some form of a forced bet (the blind or ante). In standard poker, each player bets according to the rank they believe their hand is worth as compared to the other players. The action then proceeds clockwise as each player in turn must either match (or "call") the maximum previous bet, or fold, losing the amount bet so far and all further involvement in the hand. A player who matches a bet may also "raise" (increase) the bet. The betting round ends when all players have either called the last bet or folded. If all but one player folds on any round, the remaining player collects the pot without being required to reveal their hand. If more than one player remains in contention after the final betting round, a showdown takes place where the hands are revealed, and the player with the winning hand takes the pot. Poker: With the exception of initial forced bets, money is only placed into the pot voluntarily by a player who either believes the bet has positive expected value or who is trying to bluff other players for various strategic reasons. Thus, while the outcome of any particular hand significantly involves chance, the long-run expectations of the players are determined by their actions chosen on the basis of probability, psychology, and game theory. Poker: Poker has increased in popularity since the beginning of the 20th century and has gone from being primarily a recreational activity confined to small groups of enthusiasts to a widely popular activity, both for participants and spectators, including online, with many professional players and multimillion-dollar tournament prizes. History: While poker's exact origin is the subject of debate, many game scholars point to the French game Poque and the Persian game As-Nas as possible early inspirations. For example, in the 1937 edition of Foster's Complete Hoyle, R. F. Foster wrote that "the game of poker, as first played in the United States, five cards to each player from a twenty-card pack, is undoubtedly the Persian game of As-Nas." However, in the 1990s the notion that poker is a direct derivative of As-Nas began to be challenged by gaming historians including David Parlett. What is certain, however, is that poker was popularized in the American South in the early 19th century, as gambling riverboats on the Mississippi River and around New Orleans during the 1830s helped spread the game. One early description of poker played on a steamboat in 1829 is recorded by the English actor Joe Cowell. The game was played with twenty cards ranking from Ace (high) to Ten (low). In contrast to this version of poker, seven-card stud only appeared in the middle of the 19th century, and was largely spread by the US military.
It became a staple in many casinos following the Second World War and grew in popularity with the advent of the World Series of Poker in the 1970s. Texas hold 'em and other community card games began to dominate the gambling scenes over the next couple of decades. The televising of poker was a particularly strong influence increasing the popularity of the game during the turn of the millennium, resulting in the poker boom a few years later between 2003 and 2006. Today the game has grown to become an extremely popular pastime worldwide. Gameplay: In casual play, the right to deal a hand typically rotates among the players and is marked by a token called a dealer button (or buck). In a casino, a house dealer handles the cards for each hand, but the button (typically a white plastic disk) is rotated clockwise among the players to indicate a nominal dealer to determine the order of betting. The cards are dealt clockwise around the poker table, one at a time. Gameplay: One or more players are usually required to make forced bets, usually either an ante or a blind bet (sometimes both). The dealer shuffles the cards, the player to their right cuts, and the dealer deals the appropriate number of cards to the players one at a time, beginning with the player to their left. Cards may be dealt either face-up or face-down, depending on the variant of poker being played. After the initial deal, the first of what may be several betting rounds begins. Between rounds, the players' hands develop in some way, often by being dealt additional cards or replacing cards previously dealt. At the end of each round, all bets are gathered into the central pot. Gameplay: At any time during a betting round, if one player bets, no opponents choose to call (match) the bet, and all opponents instead fold, the hand ends immediately, the bettor is awarded the pot, no cards are required to be shown, and the next hand begins. This is what makes bluffing possible. Bluffing is a primary feature of poker, distinguishing it from other vying games and from other games that use poker hand rankings. Gameplay: At the end of the last betting round, if more than one player remains, there is a showdown, in which the players reveal their previously hidden cards and evaluate their hands. The player with the best hand according to the poker variant being played wins the pot. A poker hand comprises five cards; in variants where a player has more than five cards available to them, only the best five-card combination counts. There are 10 different kinds of poker hands, such as straight flush and four of a kind. Variants: Poker has many variations, all following a similar pattern of play and generally using the same hand ranking hierarchy. There are four main families of variants, largely grouped by the protocol of card-dealing and betting: Straight: A complete hand is dealt to each player, and players bet in one round, with raising and re-raising allowed. This is the oldest poker family; the root of the game as now played was a game known as Primero, which evolved into the game three-card brag, a very popular gentleman's game around the time of the American Revolutionary War and still enjoyed in the U.K. today. Straight hands of five cards are sometimes used as a final showdown, but poker is almost always played in a more complex form to allow for additional strategy. Variants: Stud poker: Cards are dealt in a prearranged combination of face-down and face-up rounds, or streets, with a round of betting following each.
This is the next-oldest family; as poker progressed from three to five-card hands, they were often dealt one card at a time, either face-down or face-up, with a betting round between each. The most popular stud variant today, seven-card stud, deals two extra cards to each player (three face-down, four face-up) from which they must make the best possible 5-card hand. Draw poker: In five-card draw, a complete hand is dealt to each player, face-down. Then each player must place an ante to the pot. They can then see their cards and bet accordingly. After betting, players can discard up to three cards and take new ones from the top of the deck. Then, another round of betting takes place. Finally, each player must show their cards and the player with the best hand wins. Community card poker: Also known as "flop poker," community card poker is a variation of stud poker. Players are dealt an incomplete hand of face-down cards, and then a number of face-up community cards are dealt to the center of the table, each of which can be used by one or more of the players to make a 5-card hand. Texas hold 'em and Omaha are two well-known variants of the community card family. There are several methods for defining the structure of betting during a hand of poker. The three most common structures are known as "fixed-limit," "pot-limit," and "no-limit." In fixed-limit poker, betting and raising must be done by standardized amounts. For instance, if the required bet is X, an initial bettor may only bet X; if a player wishes to raise a bet, they may only raise by X. In pot-limit poker, a player may bet or raise any amount up to the size of the pot. When calculating the maximum raise allowed, all previous bets and calls, including the intending raiser's call, are first added to the pot. The raiser may then raise the previous bet by the full amount of the pot (see the sketch below). In no-limit poker, a player may wager their entire betting stack at any point that they are allowed to make a bet. In all games, if a player does not have enough betting chips to fully match a bet, they may go "all-in," allowing them to show down their hand for the number of chips they have remaining. Variants: While typical poker games award the pot to the highest hand as per the standard ranking of poker hands, there are variations where the best hand, and thus the hand awarded the pot, is the lowest-ranked hand instead. In such games the best hand contains the lowest cards rather than the highest cards; some variations may be further complicated by whether or not hands such as flushes and straights are considered in the hand rankings. There are also games where the highest and lowest hands divide the pot between them, known as "high low split" games. Variants: Other games that use poker hand rankings may likewise be referred to as poker. Video poker is a single-player video game that functions much like a slot machine; most video poker machines play draw poker, where the player bets, a hand is dealt, and the player can discard and replace cards. Payout is dependent on the hand resulting after the draw and the player's initial bet. Variants: Strip poker is a traditional poker variation where players remove clothing when they lose bets. Since it depends only on the basic mechanic of betting in rounds, strip poker can be played with any form of poker; however, it is usually based on simple variants with few betting rounds, like five card draw.
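The pot-limit arithmetic described above can be made concrete. The following is a minimal sketch in Python; the function name and calling convention are illustrative only, not taken from any poker software.

```python
# A minimal sketch of the pot-limit maximum-raise rule (illustrative names).

def max_pot_limit_bet(pot, facing_bet, to_call):
    """Largest total bet a player may make under pot-limit rules.

    pot        -- chips in the pot, including all bets made this round
    facing_bet -- the bet the player is facing
    to_call    -- what the player must put in just to call
    """
    # The intending raiser's call is first counted into the pot...
    pot_after_call = pot + to_call
    # ...and the raise may then equal that whole pot, on top of the call.
    return facing_bet + pot_after_call

# Example: 50 in the pot before an opponent bets 50 (pot now 100).
# Calling costs 50, making the pot 150, so the maximum raise is 150,
# for a total bet of 50 + 150 = 200.
print(max_pot_limit_bet(pot=100, facing_bet=50, to_call=50))  # 200
```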
Variants: Another game with the poker name, but with a vastly different mode of play, is called Acey-Deucey or Red Dog poker. This game is more similar to Blackjack in its layout and betting; each player bets against the house, and then is dealt two cards. For the player to win, the third card dealt (after an opportunity to raise the bet) must have a value in-between the first two. Payout is based on the odds that this is possible, based on the difference in values of the first two cards. Other poker-like games played at casinos against the house include three card poker and pai gow poker. Computer programs: A variety of computer poker players have been developed by researchers at the University of Alberta, Carnegie Mellon University, and the University of Auckland, amongst others. Computer programs: In a January 2015 article published in Science, a group of researchers mostly from the University of Alberta announced that they "essentially weakly solved" heads-up limit Texas Hold 'em with the development of their Cepheus poker bot. The authors claimed that Cepheus would lose at most 0.001 big blinds per game on average against its worst-case opponent, and the strategy is thus so "close to optimal" that "it can't be beaten with statistical significance within a lifetime of human poker playing." Literature: Parlett, David (2008), The Penguin Book of Card Games, London: Penguin, ISBN 978-0-141-03787-5
**JWH-302** JWH-302: JWH-302 (1-pentyl-3-(3-methoxyphenylacetyl)indole) is an analgesic chemical from the phenylacetylindole family, which acts as a cannabinoid agonist with moderate affinity at both the CB1 and CB2 receptors. It is a positional isomer of the more common drug JWH-250, though it is slightly less potent with a Ki of 17 nM at CB1, compared to 11 nM for JWH-250. Because of their identical molecular weight and similar fragmentation patterns, JWH-302 and JWH-250 can be very difficult to distinguish by GC-MS testing. In the United States, CB1 receptor agonists of the 3-phenylacetylindole class such as JWH-302 are Schedule I Controlled Substances.
**Du Noüy ring method** Du Noüy ring method: In surface science, the du Noüy ring method is a technique for measuring the surface tension of a liquid. The method involves slowly lifting a ring, often made of platinum, from the surface of a liquid. The force, F, required to raise the ring from the liquid's surface is measured and related to the liquid's surface tension, γ: F = w_ring + 2π·(r_i + r_a)·γ, where r_i is the radius of the inner ring of the liquid film pulled and r_a is the radius of the outer ring of the liquid film. w_ring is the weight of the ring minus the buoyant force due to the part of the ring below the liquid surface. When the ring's thickness is much smaller than its diameter, this equation can be simplified to: F = w_ring + 4πRγ, where R is the average of the inner and outer radius of the ring, i.e. R = (r_i + r_a)/2. Du Noüy ring method: This technique was proposed by the French physicist Pierre Lecomte du Noüy (1883–1947) in a paper published in 1925. Du Noüy ring method: The measurement is performed with a force tensiometer, which typically uses an electrobalance to measure the excess force caused by the liquid being pulled up and automatically calculates and displays the surface tension corresponding to the force. Earlier, torsion wire balances were commonly used. The maximum force is used for the calculations, and empirically determined correction factors are required to remove the effect caused by the finite diameter of the ring: F = w_ring + 4πRγf, with f being the correction factor. Du Noüy ring method: The most common correction factors include those by Zuidema and Waters (for liquids with low interfacial tension), Huh and Mason (which covers a wider range than Zuidema-Waters), and Harkins and Jordan (more precise than Huh-Mason while still covering the most widely used liquids). The surface tension and correction factors are expressed by the following equations: γ = F/(4πRf), where γ is the surface tension, R is the average radius of the ring, and f is the correction factor. Du Noüy ring method: Zuidema and Waters: (f − a)² = (4b/(π²R²))·(γ_measured/(ρ_lower − ρ_upper)) + C, where C = 0.04534 − 1.679·(r/R); γ_measured is the uncorrected surface tension computed from F, the maximum pull of the ring [dyn/cm]; ρ_lower and ρ_upper are the densities of the lower and upper phases; a = 0.7250; b = 0.0009075 [s²·cm⁻¹]; r is the Du Noüy wire radius; and R is the Du Noüy ring radius. Huh & Mason: The correction factor is described as a function of R/r and R³/V. See the references. Harkins and Jordan: The correction factor is tabulated as a function of R/r and R³/V. See the references.
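As a rough illustration of the calculation described above, here is a minimal Python sketch that applies the Zuidema-Waters correction to a maximum-pull measurement. The function and argument names are assumptions for this example, and the correction factor is applied multiplicatively, which is the usual convention for Zuidema-Waters factors.

```python
import math

# A sketch of the du Nouy calculation with the Zuidema-Waters correction.
# Units are CGS: dyn, cm, g/cm^3. Constants a and b are the published
# Zuidema-Waters values quoted in the text above.

def du_nouy_surface_tension(F_max, R, r, rho_lower, rho_upper):
    """Surface tension [dyn/cm] from the maximum pull F_max [dyn] on a ring
    of radius R [cm] made of wire of radius r [cm]."""
    a, b = 0.7250, 0.0009075                      # [-], [s^2/cm]
    gamma_measured = F_max / (4 * math.pi * R)    # uncorrected value
    C = 0.04534 - 1.679 * (r / R)
    f = a + math.sqrt(4 * b / (math.pi ** 2 * R ** 2)
                      * gamma_measured / (rho_lower - rho_upper) + C)
    return f * gamma_measured                     # corrected surface tension

# Example: a ring of 6 cm circumference (R ~ 0.955 cm) pulled from water
# under air; the correction brings the raw ~72 dyn/cm down to ~67 dyn/cm.
print(du_nouy_surface_tension(F_max=864.0, R=0.955, r=0.0185,
                              rho_lower=0.998, rho_upper=0.0012))
```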
**Blaise ketone synthesis** Blaise ketone synthesis: The Blaise ketone synthesis (named after Edmond Blaise) is the chemical reaction of acid chlorides with organozinc compounds to give ketones.[1][2] Blaise claimed that the reaction gave excellent yields; however, later investigators failed to obtain better than moderate yields (about 50%).[3][4] The reaction is thus only moderately effective for forming ketones from acyl chlorides. The reaction also works with organocuprates.[5][6] Reviews have been written.[7][8] Reaction mechanism: The proposed mechanism is taken from that of the analogous reaction of organocadmium compounds, as the two reactions are believed to proceed identically.[9][10] After the oxygen forms a bond with the organozinc compound, R' shifts to the carbonyl carbon, with chlorine acting as a leaving group and removing the negative charge from zinc. The chlorine that left then returns to form a bond with zinc, pushing the electrons back onto oxygen and thus forming the ketone.[11] Variations: Blaise-Maire reaction: The Blaise-Maire reaction is the Blaise ketone synthesis using β-hydroxy acid chlorides to give β-hydroxyketones, which are converted into α,β-unsaturated ketones using sulfuric acid.[12] Ketone formation from organocadmium compounds: This ketone formation is identical to the Blaise ketone synthesis, except that organocadmium compounds are used instead of organozinc compounds and produce higher yields.
**Growing pains** Growing pains: Growing pains are recurring pain symptoms that are relatively common in children ages 3 to 12. The pains normally appear at night and affect the calf or thigh muscles of both legs. The pain stops on its own before morning. Growing pains are one of the most common causes of recurring pain in children. Although these pains reliably stop when the child has completely finished growing, they likely have nothing to do with growth. Signs and symptoms: Growing pains usually affect both legs, especially the calf muscle in the lower leg or the muscles in the front of the thighs. Less commonly, the arms are affected. They are normally felt on both sides. Typically, the pains are felt in the muscles, rather than in the joints. The amount of pain varies from mild to very severe. Signs and symptoms: The pains may start in the evening or at night. Because the pains normally appear while the child is sleeping, they often wake the child up at night. The pains often last for 30 minutes to two hours, and are often but not always gone by the morning. Typically, the pains appear once or twice each week, but they can be more frequent or less frequent. The pains are not in the same place as an injury, including overuse injuries such as shin splints, and the child does not limp while walking. Cause: The causes of growing pains are unknown. They are not associated with growth spurts, and some authors suggest alternative terms as providing a more accurate description, such as recurrent limb pain in childhood, paroxysmal nocturnal pains, or benign idiopathic paroxysmal nocturnal limb pains of childhood. Theories of causation include: poor posture or other mechanical or anatomical defects, such as joint hypermobility; vascular perfusion disorder; a lower pain threshold or a pain amplification syndrome; tiredness, perhaps especially among children with weaker bones than average who have overexerted themselves; and psychological factors, such as stress within the family. Some parents are able to associate episodes of pain with physical exercise or mood changes in the child. Diagnosis: This diagnosis is normally made by considering the information presented by the child and family members, and by doing a physical exam to make sure that the child seems to be otherwise healthy. When the child has the typical symptoms and appears to be healthy, laboratory investigations to exclude other diagnoses are not warranted. When a child has growing pains, there are no objective clinical signs of inflammation, such as swollen joints. Children with growing pains do not have signs of any systemic diseases (such as fever or skin rashes), any abnormal pain sensations, tender spots, or joint disorders. Children do not have growing pains if the pain worsens over time, persists during the daytime, only involves one limb, or is located in a joint. The diagnosis should be excluded if the child is limping, loses the ability to walk, or has physical signs that suggest other medical conditions. Diagnosis: Childhood-onset restless legs syndrome is sometimes misdiagnosed as growing pains. Other possible causes of pain in the limbs include injuries, infections, benign tumors such as osteoid osteoma, malignant tumors such as osteosarcoma, and problems that affect the shape and function of the legs, such as genu valgum (knock-knees). Treatment: Parents and children can be substantially reassured by explaining the benign and self-limiting nature of the pains.
Local massage, hot baths, hot water bottles or heating pads, and analgesic drugs such as paracetamol (acetaminophen) are often used during pain episodes. Twice-daily stretching of the quadriceps, hamstrings, and gastrosoleus muscles can make the leg pains resolve more quickly when they appear. Prognosis: Growing pains are not associated with any other serious disease and usually resolve by late childhood. Commonly, episodes of growing pains become less severe and less frequent over time, and many children outgrow them after one or two years. Frequent episodes can nevertheless have a substantial effect on the life of the child. Epidemiology: Growing pains likely affect about 10 to 20% of children, and the rate may be as high as about 40% among children aged four to six. Individuals can vary markedly in when they experience growing pains. History: Growing pains were first described as such in 1823 by a French doctor, Marcel Duchamp, and the cause was attributed to the growth process. A century later, mainstream medicine thought that the pains were caused by a mild case of rheumatic fever.
**Correlation ratio** Correlation ratio: In statistics, the correlation ratio is a measure of the curvilinear relationship between the statistical dispersion within individual categories and the dispersion across the whole population or sample. The measure is defined as the ratio of two standard deviations representing these types of variation. The context here is the same as that of the intraclass correlation coefficient, whose value is the square of the correlation ratio. Definition: Suppose each observation is y_xi, where x indicates the category that observation is in and i is the label of the particular observation. Let n_x be the number of observations in category x, and let ȳ_x = (∑_i y_xi)/n_x and ȳ = (∑_x n_x ȳ_x)/(∑_x n_x), where ȳ_x is the mean of category x and ȳ is the mean of the whole population. The correlation ratio η (eta) is defined so as to satisfy η² = ∑_x n_x (ȳ_x − ȳ)² / ∑_x,i (y_xi − ȳ)², which can be written as η² = σ_ȳ²/σ_y², where σ_ȳ² = (∑_x n_x (ȳ_x − ȳ)²)/n and σ_y² = (∑_x,i (y_xi − ȳ)²)/n, with n = ∑_x n_x the total number of observations, i.e. the weighted variance of the category means divided by the variance of all samples. Definition: If the relationship between values of x and values of ȳ_x is linear (which is certainly true when there are only two possibilities for x) this will give the same result as the square of Pearson's correlation coefficient; otherwise the correlation ratio will be larger in magnitude. It can therefore be used for judging non-linear relationships. Range: The correlation ratio η takes values between 0 and 1. The limit η = 0 represents the special case of no dispersion among the means of the different categories, while η = 1 refers to no dispersion within the respective categories. η is undefined when all data points of the complete population take the same value. Example: Suppose there is a distribution of test scores in three topics (categories): Algebra: 45, 70, 29, 15 and 21 (5 scores) Geometry: 40, 20, 30 and 42 (4 scores) Statistics: 65, 95, 80, 70, 85 and 73 (6 scores). Then the subject averages are 36, 33 and 78, with an overall average of 52. The sums of squares of the differences from the subject averages are 1952 for Algebra, 308 for Geometry and 600 for Statistics, adding to 2860. The overall sum of squares of the differences from the overall average is 9640. The difference of 6780 between these is also the weighted sum of the squares of the differences between the subject averages and the overall average: 5(36 − 52)² + 4(33 − 52)² + 6(78 − 52)² = 6780. Example: This gives η² = 6780/9640 = 0.7033…, suggesting that most of the overall dispersion is a result of differences between topics, rather than within topics. Taking the square root gives η = √(6780/9640) = 0.8386…. Example: For η = 1 the overall sample dispersion is purely due to dispersion among the categories and not at all due to dispersion within the individual categories. For quick comprehension simply imagine all Algebra, Geometry, and Statistics scores being the same respectively, e.g. 5 times 36, 4 times 33, 6 times 78. The limit η = 0 refers to the case without dispersion among the categories contributing to the overall dispersion. The trivial requirement for this extreme is that all category means are the same. Pearson v. Fisher: The correlation ratio was introduced by Karl Pearson as part of analysis of variance. Ronald Fisher commented: "As a descriptive statistic the utility of the correlation ratio is extremely limited.
It will be noticed that the number of degrees of freedom in the numerator of η2 depends on the number of the arrays" to which Egon Pearson (Karl's son) responded by saying "Again, a long-established method such as the use of the correlation ratio [§45 The "Correlation Ratio" η] is passed over in a few words without adequate description, which is perhaps hardly fair to the student who is given no opportunity of judging its scope for himself."
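The worked example above is easy to check numerically. The following minimal Python sketch (the helper name is illustrative) reproduces η² = 6780/9640 and η ≈ 0.8386 from the three sets of scores.

```python
# Recomputing the correlation ratio for the test-score example above.

def correlation_ratio(categories):
    """categories: mapping of category name -> list of observations."""
    all_scores = [y for ys in categories.values() for y in ys]
    grand_mean = sum(all_scores) / len(all_scores)
    # Weighted squared deviations of the category means from the grand mean.
    between = sum(len(ys) * (sum(ys) / len(ys) - grand_mean) ** 2
                  for ys in categories.values())
    # Squared deviations of every observation from the grand mean.
    total = sum((y - grand_mean) ** 2 for y in all_scores)
    return (between / total) ** 0.5

scores = {
    "Algebra":    [45, 70, 29, 15, 21],
    "Geometry":   [40, 20, 30, 42],
    "Statistics": [65, 95, 80, 70, 85, 73],
}
print(correlation_ratio(scores))  # ~0.8386 (eta^2 = 6780/9640 ~ 0.7033)
```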
**Olympus Air** Olympus Air: The Olympus Air A01 is a Micro Four Thirds system interchangeable lens camera with a 16 MP sensor, which is operated through Wi-Fi from a smartphone that the camera can be clipped onto. Olympus has announced that there will be an open source application programming interface released with the camera.
**Apple Worm** Apple Worm: The Apple Worm is a computer program, written for the Apple II computer and especially for its 6502 microprocessor, which performs dynamic self-relocation. The source code of the Apple Worm is the first program printed in its entirety in Scientific American. The Apple Worm was designed and developed by James R. Hauser and William R. Buckley. Other example Apple Worm programs are described in the cover story of the November 1986 issue of Call_A.P.P.L.E. Magazine. Because the Apple Worm performs dynamic self-relocation within the one main memory of one computer, it does not constitute a computer virus; the name "worm" is an apt if somewhat inaccurate description. Although the analogous behavior of copying code between memories is exactly the act performed by a computer virus, the virus has other characteristics not present in the worm. Such programs do not necessarily cause collateral damage to the computing systems upon which their instructions execute; there is no reliance upon a vector to ensure subsequent execution. This extends to the computer virus; it need not be destructive in order to effect its communication between computational environments. Programs: A typical computer program manipulates data which is external to the corporeal representation of the computer program. In programmer-ese, this means the code and data spaces are kept separate. Programs which manipulate data internal to their corporeal representation, such as that held in the code space, are self-relational; in part at least, their function is to maintain their function. In this sense, a dynamic self-relocator is a self-referential system, as defined by Douglas R. Hofstadter. Other examples: The instruction set of the PDP-11 computer includes an instruction for moving data which, when constructed in a particular form, causes itself to be moved from higher addresses to lower addresses; the form includes an automatic decrement of the instruction pointer register. Hence, when this instruction includes autodecrement of the instruction pointer, it behaves as a dynamic self-relocator. A more current example of a self-relocating program is an adaptation of the Apple Worm for the Intel 80x86 microprocessor and its derivatives, such as the Pentium, and corresponding AMD microprocessors.
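The original Apple Worm is 6502 machine code, which is not reproduced here; the following toy Python sketch merely simulates the behavior described above: a block of "instructions" that repeatedly copies itself one cell higher in a simulated memory, leaving a trail behind it. All names are hypothetical.

```python
# Toy simulation of dynamic self-relocation (not the original 6502 code).

MEM_SIZE = 16
memory = [None] * MEM_SIZE

program = ["copy_self", "jump_to_copy"]   # stand-ins for machine instructions
base = 0
memory[base:base + len(program)] = program

def relocate(base):
    """Copy the program image one cell higher and resume execution there."""
    size = len(program)
    # Copy from the end backward so the overlapping move is safe, mirroring
    # the care a real relocator must take with the order of its moves.
    for offset in reversed(range(size)):
        memory[base + 1 + offset] = memory[base + offset]
    return base + 1

while base + len(program) < MEM_SIZE:
    base = relocate(base)

# The image has crawled to the top of memory; stale copies of its first
# cell remain behind, like the trail of a worm.
print(base, memory)
```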
**Lactucarium** Lactucarium: Lactucarium is the milky fluid secreted by several species of lettuce, especially Lactuca virosa, usually from the base of the stems. It is known as lettuce opium because of its sedative and analgesic properties. It has also been reported to promote a mild sensation of euphoria. Because it is a latex, lactucarium physically resembles opium, in that it is excreted as a white fluid and can be reduced to a thick smokable solid. History: "Lettuce opium" was used by the ancient Egyptians, and was introduced as a drug in the United States as early as 1799. The drug was prescribed and studied extensively in Poland during the nineteenth century, and was viewed as a weaker alternative to opium that lacked its side-effects (notably, it was not highly addictive) and was in some cases preferable. However, early efforts to isolate an active alkaloid were unsuccessful. It is described and standardized in the 1898 United States Pharmacopoeia and 1911 British Pharmaceutical Codex for use in lozenges, tinctures, and syrups as a sedative for irritable cough or as a mild hypnotic (sleeping aid) for insomnia. The standard definition of lactucarium in these codices required its production from Lactuca virosa, but it was recognized that smaller quantities of lactucarium could be produced in a similar way from Lactuca sativa and Lactuca canadensis var. elongata, and even that lettuce-opium obtained from Lactuca serriola or Lactuca quercina was of superior quality. In the twentieth century, two major studies found commercial lactucarium to be without effect. In 1944, Fulton concluded, "Modern medicine considers its sleep producing qualities a superstition, its therapeutic action doubtful or nil." Another study of the time identified active bitter principles lactucin and lactucopicrin, but noted that these compounds from the fresh latex were unstable and did not remain in commercial preparations of lactucarium. Accordingly, lettuce opium fell from favor, until publications of the hippie movement began to promote it in the mid-1970s as a legal drug producing euphoria, sometimes compounded with catnip or damiana. More recent work has confirmed that lactucin and lactucopicrin do have analgesic and sedative properties. The seeds of lettuce have also been used to relieve pain. Lettuce seed was listed as an anaesthetic in Avicenna's The Canon of Medicine, which served as an authoritative medical textbook from soon after AD 1000 until the seventeenth century. Contemporary use: Although lactucarium has faded from general use as a pain reliever, it remains available, sometimes promoted as a legal psychotropic. The seed of ordinary lettuce, Lactuca sativa, is still used in Avicenna's native Iran as a folk medicine. Chemical constituents: The chemical constituents of lactucarium that have been investigated for biological activity include lactucin and its derivatives lactucopicrin and 11β,13-dihydrolactucin. Lactucin and lactucopicrin were found to have analgesic effects comparable to those of ibuprofen, and sedative activity in measurements of spontaneous movement in mice. Some effects have also been credited to a trace of hyoscyamine in Lactuca virosa, but the alkaloid was undetectable in standard lactucarium. A crude extract of the seeds was shown to have analgesic and anti-inflammatory effects in standard formalin and carrageenan tests of laboratory rats. It was not toxic to the rats at a dose of 6 grams per kilogram. Lactuca virosa contains flavonoids, coumarins, and N-methyl-β-phenethylamine.
A variety of other chemical compounds have been isolated from L. virosa. One of the compounds, lactucin, is an adenosine receptor agonist in vitro, while another, lactucopicrin, has been shown to act as an acetylcholinesterase inhibitor in vitro. Chemical constituents: Lactuca floridana was found to contain 11β,13-Dihydro-lactucin-8-O-acetate hemihydrate. Formulations: Lactucarium was used unmodified in lozenges, 30–60 milligrams (0.5 to 1 grain), sometimes mixed with borax. However, it was found to be more efficient to formulate the drug in a cough syrup (Syrupus Lactucarii, U.S.P.) containing net 5% lactucarium, 22% glycerin, 5% alcohol, and 5% orange-flower water in syrup.
**Address Management System** Address Management System: Address Management System (AMS) is the United States Postal Service master database of deliverable addresses. Address-checking tools using AMS provide address standardization, as well as city/state and ZIP Code lookup features. Business mailers use the USPS Address Management System database to standardize addresses by correcting errors in street addresses and city names and to return the correct ZIP Codes. City/state lookup services use AMS to provide the city and state corresponding to any given ZIP Code. AMS is also a general term describing a technological solution for managing street addressing.
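As a toy illustration of the city/state lookup feature described above, here is a minimal Python sketch. The data and function name are hypothetical; the real AMS is a USPS-maintained database, not an in-memory table.

```python
# Hypothetical miniature of a city/state lookup keyed by ZIP Code.

ZIP_TO_CITY_STATE = {
    "10001": ("NEW YORK", "NY"),
    "60601": ("CHICAGO", "IL"),
    "94105": ("SAN FRANCISCO", "CA"),
}

def city_state_lookup(zip_code):
    """Return the (city, state) on file for a ZIP Code, or None if unknown."""
    return ZIP_TO_CITY_STATE.get(zip_code)

print(city_state_lookup("60601"))  # ('CHICAGO', 'IL')
```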
**Offscreen** Offscreen: The terms off-screen, off-camera, and off-stage refer to fictional events in theatre, television, or film which are not seen on stage or in frame, but are merely heard by the audience, or described (or implied) by the characters or narrator. Off-screen action often leaves much to the audience's imagination. As a narrative mode and stylistic device, it may be used for a number of dramatic effects. It may also be used to save time in storytelling, to circumvent technical or financial constraints of a production, or to meet content rating standards. Uses: In ancient Greek drama, events were often recounted to the audience by a narrator, rather than being depicted on the stage. Offscreen voice-over narration continues to be a common tool for conveying information authoritatively. Charlie Chaplin made use of offscreen action to humorous effect. In a deleted scene in Shoulder Arms (1918), Chaplin's character is berated by an abusive wife who is never seen on camera; her presence is merely implied by household objects hurled in Chaplin's direction. In The Kid (1921), his character The Tramp is asked the name of the baby he recently found abandoned. He quickly ducks into a nearby building, emerges seconds later smoothing the child's blankets, and announces, "John" – implying that he had not even determined the child's sex, much less given it a name, until that moment. In City Lights (1931), Chaplin's Tramp is preparing for a boxing match. He asks another boxer a question which the audience is not privy to. He then follows the man's direction off screen, before returning moments later and asking the man to help him remove his boxing gloves – the implication being that he was going to the bathroom. Later, it is shown that he was merely looking for a water fountain. In the horror genre, placing action offstage or offscreen often serves to heighten the dramatic force of a scene. The Grand Guignol theatre in Paris made much use of this technique; in 1901's Au téléphone, the violence is presented at the remove of a telephone connection. In 1931's Dracula, director Tod Browning uses offscreen action to avoid showing scenes of murder, and obscures the action of Dracula rising from his grave. While this served to meet Motion Picture Production Code standards, which dictated that "brutal killings are not to be shown in detail", Browning's offscreen action also maintains the macabre mood of the film. The choice of what the audience is shown in place of the elided action can also contribute to the sense of horror through its symbolic value. In Dr. Jekyll and Mr. Hyde (1931), during the murder of Ivy Pierson, director Rouben Mamoulian focuses his camera on a statuette of Psyche Revived by Cupid's Kiss, which acts as an ironic commentary on the action. Offscreen action is often used in sex scenes, with the camera panning from the beginnings of a romantic encounter to a symbolic replacement object, such as a roaring fireplace, a lit candle at first tall and then shorter to show the passage of time, or (in parody) a train entering a tunnel. Uses: Offscreen action may also be used when a scene would otherwise require costly sets, locations, makeup, or special effects to present convincingly.
**Runs produced** Runs produced: Runs produced is a baseball statistic that can help estimate the number of runs a hitter contributes to his team. The formula adds together the player's runs and runs batted in, and then subtracts the player's home runs. RP = R + RBI − HR Home runs are subtracted to compensate for the batter getting credit for both one run and at least one RBI when hitting a home run. Unlike runs created, runs produced is a teammate-dependent stat in that it includes runs and RBIs, which are affected by which batters bat near a player in the batting order. Also, subtracting home runs seems logical from an individual perspective, but on a team level the formula still double-counts runs that are not home runs. To counteract the double-counting, some have suggested an alternate formula, which is the average of a player's runs scored and runs batted in. Runs produced: RP = (R + RBI)/2 Here, when a player scores a run, he shares the credit with the batter who drove him in, so both are credited with half a run produced. The same is true for an RBI, where credit is shared between the batter and runner. In the case of a home run, the batter is responsible for both the run scored and the RBI, so the runs produced are (1 + 1)/2 = 1, as expected.
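Both formulas are simple enough to state directly in code. A minimal sketch, using an illustrative stat line:

```python
# The two runs-produced formulas, side by side.

def runs_produced(runs, rbi, home_runs):
    """Classic definition: RP = R + RBI - HR."""
    return runs + rbi - home_runs

def runs_produced_shared(runs, rbi):
    """Alternate definition: each run's credit is split between the scorer
    and the batter who drove him in, so RP = (R + RBI) / 2."""
    return (runs + rbi) / 2

# A hypothetical 100-run, 90-RBI, 30-home-run season:
print(runs_produced(100, 90, 30))     # 160
print(runs_produced_shared(100, 90))  # 95.0
```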
**UniFLEX** UniFLEX: UniFLEX is a Unix-like operating system developed by Technical Systems Consultants (TSC) for the Motorola 6809 family which allowed multitasking and multiprocessing. It was released for Motorola 6809 based computers with DMA-capable 8" floppy drives and extended memory addressing hardware (software-controlled 4 KiB paging of up to 768 KiB RAM). Examples included machines from SWTPC, Gimix and Goupil (France). On SWTPC machines, UniFLEX also supported a 20 MB, 14" hard drive (OEM'd from Century Data Systems) in 1979. Later on, it also supported larger 14" drives (up to 80 MB), 8" hard drives, and 5-1/4" floppies. In 1982 other machines also supported the first widely available 5-1/4" hard disks using the ST506 interface, such as the 5 MB BASF 6182 and the removable SyQuest SQ306RD of the same capacity. UniFLEX: Due to the limited address space of the 6809 (64 kB) and hardware limitations, the main memory space for the UniFLEX kernel as well as for any running process had to be smaller than 56 kB of code plus data (processes could be up to 64K minus 512 bytes). This was achieved by writing the kernel and most user space code entirely in assembly language, and by removing a few classic Unix features, such as group permissions for files. Otherwise, UniFLEX was very similar to Unix Version 7, though some command names were slightly different. There was no technical reason for the renaming apart from achieving some level of command-level compatibility with its single-user sibling FLEX. By simply restoring the Unix style names, a considerable degree of "Unix Look & Feel" could be established, though due to memory limitations the command line interpreter (shell) was less capable than the Bourne Shell known from Unix Version 7. Memory management included swapping to a dedicated portion of the system disk (even on floppies), but only whole processes could be swapped in and out, not individual pages. Swapping was therefore a very big hit on system responsiveness, so memory had to be sized appropriately. However, UniFLEX was an extremely memory-efficient operating system. Machines with less than a megabyte of RAM serving a dozen asynchronous terminals were not uncommon and worked surprisingly well. UniFLEX: TSC never bundled a C compiler with UniFLEX for the 6809, though they produced one. But in the early 1980s C language implementations became available as third-party products (the "McCosh" and "Introl" compilers). Using such a C compiler could establish source-level compatibility with Unix Version 7, i.e., a number of Unix tools and applications could be ported to UniFLEX, if size allowed: Unix on a PDP-11 limited executables to 64 kB of code and another 64 kB of data, while the UniFLEX limitation was approximately 56 kB for both code and data together. UniFLEX: Not much application software was available for UniFLEX. Ports of the Dynacalc spreadsheet and the Stylograph word processor from the FLEX operating system existed, but only very few copies were sold. In the mid 1980s a successor version for the Motorola 68000 was announced. Though it removed the pressing space limitations, it was not commercially successful because it had to compete with source-code ports of original Unix. The source code for UniFLEX and supporting software is available on the Internet. UniFLEX: In the Netherlands, UniFLEX users ported a fair number of Unix utilities to UniFLEX.
They also modified some kernel code to give foreground processes preference over background processes when accessing the disk, which brought a major improvement in user experience. One of the TSC staff, Dan Vanada, later started his own company, "Scintillex Software". Its products included utilities that allowed transfer of data between UniFLEX and MS-DOS and vice versa, as well as DOS format utilities and a code patch utility.
**Insufflation** Insufflation: In religious and magical practice, insufflation and exsufflation are ritual acts of blowing, breathing, hissing, or puffing that signify variously expulsion or renunciation of evil or of the devil (the Evil One), or infilling or blessing with good (especially, in religious use, with the Spirit or grace of God). Insufflation: In historical Christian practice, such blowing appears most prominently in the liturgy, and is connected almost exclusively with baptism and other ceremonies of Christian initiation, achieving its greatest popularity during periods in which such ceremonies were given a prophylactic or exorcistic significance, and were viewed as essential to the defeat of the devil or to the removal of the taint of original sin. Ritual blowing occurs in the liturgies of catechumenate and baptism from a very early period and survives into the modern Roman Catholic, Greek Orthodox, Maronite, and Coptic rites. Catholic liturgy post-Vatican II (the so-called novus ordo 1969) has largely done away with insufflation, except in a special rite for the consecration of chrism on Maundy Thursday. Protestant liturgies typically abandoned it very early on. The Tridentine Catholic liturgy retained both an insufflation of the baptismal water and (like the present-day Orthodox and Maronite rites) an exsufflation of the candidate for baptism, right up to the 1960s: [THE INSUFFLATION] He breathes thrice upon the waters in the form of a cross, saying: Do You with Your mouth bless these pure waters: that besides their natural virtue of cleansing the body, they may also be effectual for purifying the soul. Insufflation: THE EXSUFFLATION. The priest breathes three times on the child in the form of a cross, saying: Go out of him...you unclean spirit and give place to the Holy Spirit, the Paraclete. Insufflation vs. exsufflation: From an early period, the act had two distinct but not always distinguishable meanings: it signified on the one hand the derisive repudiation or exorcism of the devil; and, on the other, purification and consecration by and inspiration with the Holy Spirit. The former is technically "exsufflation" ("blowing out") and the latter "insufflation" ("blowing in"), but ancient and medieval texts (followed by modern scholarship) make no consistent distinction in usage. For example, the texts use not only Latin insufflare ('blow in') and exsufflare ('blow out'), or their Greek or vernacular equivalents, but also the simplex sufflare ('blow'), halare ('breathe'), inspirare, exspirare, etc. Typical is the 8th-century Libellus de mysterio baptismatis of Magnus of Sens, one of a number of responses to a questionnaire about baptism circulated by Charlemagne. In discussing insufflation as a means of exorcising catechumens, Magnus combines a variety of mostly exsufflation-like functions: "Those who are to be baptised are insufflated by the priest of God, so that the Prince of Sinners [i.e. the devil] may be put to flight from out of them, and that entry for the Lord Christ might be prepared, and that by his insufflation they might be made worthy to receive the Holy Spirit." This double role appears as early as Cyril of Jerusalem's 4th-century Mystagogic Catacheses; as Edward Yarnold notes, "Cyril attributes both negative and positive effects [to insufflation]. … The rite of breathing on the [baptismal] candidate has the negative effect of blowing away the devil (exsufflation) and the positive effect of breathing in grace (insufflation)."
History: Early period What might neutrally be called "sufflation" is found in some of the earliest liturgies dealing with the protracted process of initiation known as the "catechumenate," which saw its heyday in the 4th and 5th centuries. The earliest extant liturgical use is possibly that of the Apostolic Tradition attributed to Hippolytus of Rome, from the 3rd or 4th century, and therefore contemporary with Cyril in the east: Those who are to be baptized should … be gathered in one place. … And [the bishop] should lay his hands on them and exorcize all alien spirits, that they may flee out of them and never return into them. And when he has finished exorcizing them, he shall breathe on their faces; and when he has signed their foreheads, ears, and noses, he shall raise them up. History: Distribution, geographical and functional The practice entered the baptismal liturgy proper only as the catechumenate, rendered vestigial by the growth of routine infant baptism, was absorbed into the rite of baptism. Both exsufflation and insufflation are well established by the time of Augustine and in later centuries are found widely. By the Western high Middle Ages of the 12th century, sufflation was geographically widespread, and had been applied not only to sufflating catechumens and baptizands, but also to exorcism of readmitted heretics; to admission of adult converts to the catechumenate; to renunciation of the devil on the part of catechumens; to consecration and/or exorcism of the baptismal font and water; to consecration or exorcism of ashes; and to the consecration of the chrism or holy oil. History: Medieval period Most of these variations persist in one branch or another of the hybrid Romano-Germanic rite that can be traced from 5th-century Rome through the western Middle Ages to the Council of Trent, and beyond that into modern (Tridentine) Roman Catholicism. As the 'national' rites such as the Ambrosian tradition in northern Italy and the Spanish Mozarabic rite faded away or were absorbed into international practice, it was this hybrid Roman-Gallican standard that came to dominate western Christendom, including Anglo-Saxon and medieval England, from the time of Charlemagne, and partly through his doing, through the high and late Middle Ages and into the modern period. Roman practice around the year 500 is reflected in a letter by a somewhat mysterious John the Deacon to a correspondent named Senarius. The letter discusses the exsufflation of catechumens at length. The Stowe Missal, Irish in origin but largely Gallican in form, contains a prebaptismal sufflation of unclear significance. The other Gallican rites are largely devoid of sufflation, though the so-called Missale Gothicum contains a triple exsufflation of baptismal water, and a prebaptismal insufflation of catechumens is found in the hybrid Bobbio Missal and the 10th-century Fulda sacramentary, alongside the more common baptismal exsufflation. The 11th-century North-Italian baptismal ritual in the Ambrosian Library MS. T.27.Sup. makes heavy use of the practice, requiring both insufflation and triple exsufflation of the baptismal candidates in modum crucis, and insufflation of the font as well. The "Hadrianum" version of the Gregorian Sacramentary, sent to Charlemagne from Rome and augmented probably by Benedict of Aniane, contains an insufflation of the baptismal font, as does the mid-10th-century Ordo Romanus L, the basis of the later Roman pontifical. 
Ordo Romanus L also contains a triple exsufflation of the candidates for baptism, immediately preceding the baptism itself. Most of the numerous Carolingian expositions of baptism treat sufflation to some extent. One anonymous 9th-century catechism is unusual in distinguishing explicitly between the exsufflation of catechumens and the insufflation of baptismal water, but most of the tracts and florilegia, when they treat both, do so without referring one to the other; most confine themselves to exsufflation and are usually content to quote extracts from authorities, especially Isidore and Alcuin. Particularly popular was Isidore's lapidary remark in the Etymologies to the effect that it is not the human being ("God's creature") that is exsufflated, but the Prince of Sinners to whom that person is subjected by being born in sin, a remark that echoed Augustine's arguments against the Pelagians to the effect that it was not the human infant (God's image) that was attacked in sufflation, but the infant's possessor, the devil. Particularly influential was Alcuin's brief treatment of the subject, the so-called Primo paganus, which in turn depended heavily on John the Deacon. The Primo paganus formed the basis of Charlemagne's famous circular questionnaire on baptism, part of his effort to harmonize liturgical practice across his empire; and many of the seventeen extant direct or indirect responses to the questionnaire echo Alcuin, making the process a little circular and the texts a little repetitious. The burden of Alcuin's remarks, in fact, appears above in the quotation from the Libellus of Magnus of Sens, one of the respondents. The questionnaire assumed that exsufflation of or on the part of the candidate for baptism was generally practiced — it merely asks what meaning is attached to the practice: "Concerning the renunciation of Satan and all his works and pomps, what is the renunciation? and what are the works of the devil and his pomps? why is he breathed upon? (cur exsufflatur?) why is he exorcised?" Most of the respondents answered that it was so that, with the devil sent fleeing, the entry of the Holy Spirit might be prepared for. History: In England On the other side of the Channel, in Anglo-Saxon England, sufflation is mentioned in Bishop Wulfstan's collection of Carolingian baptismal expositions, the Incipit de baptisma, and in the two vernacular (Old English) homilies based on it, the Quando volueris and the Sermo de baptismate. The Incipit de baptisma reads: "On his face let the sign of the cross be made by exsufflation, so that, the devil having been put to flight, entry for our Lord Christ might be prepared." Among English liturgical texts proper, the 10th-century Leofric Pontifical (and Sacramentary) dictates an insufflation of baptizands, a triple insufflation of the baptismal water, and an 'exhalation' of holy oil. In the 11th century, the Salisbury Pontifical (BL Cotton MS Tiberius C.1) and the Pontifical of Thomas of Canterbury require insufflation of the font; the Missal of Robert of Jumièges (Canterbury) has an erased rubric where it may have done likewise, as well as having an illegible rubric where it probably directed the exsufflation of catechumens, and retaining the old ordo ad caticuminum ex pagano faciendum, complete with its sufflation ceremony; and an English Ordo Romanus (BL Cotton MS Vitellius E.12) contains a triple exsufflation of baptizands.
Various 12th-century texts include signing and triple exsufflation of the holy oil (Sarum), triple exsufflation of baptizands (the Ely, Magdalene, and Winton Pontificals), and insufflation of the font in modum crucis (Ely and Magdalene, followed by most later texts). Such are the origins of the late medieval sufflation rites, which were in turn retained in regularized form in post-Tridentine Catholicism. History: Sufflation in Protestantism Sufflation did not last long in any of the churches arising from the magisterial or radical reformations. Martin Luther's first attempt at a baptismal liturgy, the Tauffbuchlin (Taufbüchlein) of 1523 (reprinted 1524 and 1525) did retain many ceremonies from the late Medieval ritual as it was known in Germany, including a triple exsufflation of baptizands. But in an epilogue, Luther listed this ceremony among the adiaphora — i.e., the inessential features that added nothing to the meaning of the sacrament: "The least importance attaches to these external things, namely breathing under the eyes, signing with the cross, placing salt in the mouth, putting spittle and clay on the ears and nose, anointing with oil the breast and shoulders, and signing the top of the head with chrism, vesting in the christening robe, and giving a burning candle into the hand, and whatever else … men have added to embellish baptism. For … they are not the kind of devices that the devil shuns." The Lutheran Strasbourg Taufbüchlein of June 1524, composed by Diobald Schwartz, assistant to Cathedral preacher Martin Zell, on the basis of the medieval rite used in Strasbourg combined with elements of Luther's 1523 rite, also retains baptismal exsufflation; so does Andreas Osiander in Nuremberg, in the same year. But thereafter the practice vanished from Lutheranism, and indeed from Protestantism generally. Luther's revised edition of 1526 and its successors omit exsufflation altogether, as do the Luther-influenced early reformed rites of England (Thomas Cranmer's Prayer Book of 1549) and Sweden (the Manual of Olavus Petri), despite the former's conservative basis in the medieval Sarum ritual and the latter's strong interest in exorcism as an essential part of the baptismal ritual. Similarly in the Swiss Reformation (the Zwinglian/Reformed tradition), only the very earliest rites retain sufflation, namely the ceremony published by Leo Jud, pastor of St. Peter's in Zurich, in the same year (1523) as Luther's first baptismal manual. History: Sufflation in Protestant-Roman Catholic debate Though sufflation does not appear in Protestant practice, it definitely appears in Protestant polemic, where it is usually treated as an un-Scriptural and superstitious (i.e., in the Protestant view, a typically Roman Catholic) practice, and even one reeking of enchantment or witchcraft. It appears as such, for example in the work of Henry More (the 'Cambridge Platonist') on evil. His argument essentially reverses that of Augustine. Augustine had said to the Pelagians (to paraphrase): "you see that we exorcize and exsufflate infants before baptising them; therefore they must be tainted with sin and possessed by the devil since birth."
More replies, in effect, "Infants cannot be devil-possessed sinners; therefore, ceremonial exorcism and exsufflation is presumptuous, frightening, and ridiculous," in a word "the most gross and fundamental Superstitions, that look like Magick or Sorcery": "The conjuring the Devil also out of the Infant that is to be baptized would seem a frightful thing to the Infant himself, if he understood in what an ill plight the Priest supposes him, while he makes three Exsufflations upon his face, and uses an Exorcistical form for the ejecting of the foul Fiend. … And it is much if something might not appear affrightful to the Women in this approaching darkness. For though it be a gay thing for the Priest to be thought to have so much power over the Stygian Fiend, as to Exorcize him out of the Infant; yet it may be a sad consideration with some melancholick women laden with Superstition, to think they are never brought to bed, but they are delivered of a Devil and Child at once." Sufflation appears in Roman Catholic anti-Protestant polemic, as well. The relative antiquity of the practice, and its strong endorsement by the Protestants' favorite Father, Augustine, made it a natural element in Catholic arguments that contrasted the Protestant with the ancient and Apostolic church. A true church, according to Roman Catholic apologists, would be: "A Church that held the exorcismes exsufflations and renunciations, which are made in baptisme, for sacred Ceremonies, and of Apostolicall tradition.... A Church which in the Ceremonies of baptisme, vsed oyle, salte, waxe, lights, exorcismes, the signe of the Cross, the word Epheta and other thinges that accompanie it; to testifie ... by exorcismes, that baptisme puts vs out of the Diuells possession." History: This was argued on the grounds that some of these ceremonies were demonstrably ancient, and all of them might be.
But in later times there was no end of Lights, Exorcisms, Exsufflations, and many other Extravagancies of Jewish, or Heathen Original ... for there is nothing like these in the Writings of the Apostles, but they are all plainly contain'd in the Books of the Gentiles, and was the Substance of their Worship." It was said to be a human invention, imposed by the arbitrary whim of a tyrannical prelate against the primitive Gospel freedom of the church: "[Some bishop] ... taking it into his head that there ought to be a trine-immersion in baptism; another the signation of the cross; another an unction with oil; another milk and honey, and imposition of hands immediately after it; another insufflation or breathing upon the person's face to exorcise the Devil... Thus, I say, that inundation of abominable corruptions, which at present overwhelms both the Greek and Romish Churches, gradually came in at this very breech which you are now zealously maintaining, namely, the Bishop's Power to decree rites and ceremonies in the Church." To all of which, Roman Catholic apologists replied that insufflation was not only ancient and Apostolic, but had been practiced by Christ himself: "When he [Christ] had said this he breathed upon them, and said to them, Receive the Holy Ghost...." When the Pastors of our Church use the Insufflation or Breathing upon any, for the like mystical Signification, you cry aloud, Superstition, Superstition, an apish mimical action, &c." Prospects Though liturgical sufflation is almost gone, at least from the Western churches, its revival is not inconceivable. Liturgical renewal movements always seem to look to the 'classic' catechumenate of the 4th and 5th centuries for inspiration. Insufflation has indeed been re-introduced into the Catholic "new catechumenate." But many ceremonies dating from that or the medieval period have been re-imported even into Protestant rites during the last couple of decades. Perhaps even more likely is a revival in the context of the growth of the Roman Catholic Church in Africa and in Asia, where locally and culturally meaningful ceremonies have often revolutionized practice, and where the exorcistic function of baptism has taken on a new vitality. For example, a pure insufflation is apparently practiced in the Philippine Independent Church, and Spinks mentions a pre-baptismal ceremony used by the Christian Workers' Fellowship of Sri Lanka, in which the candidates are struck with a cane and their faces are breathed upon. It is not clear whether the latter represents a revival of historical sufflation, or a wholly new ceremony derived from local custom. Significance and associations: There were at least three kinds of association that particularly influenced how liturgical sufflation came to be understood: Biblical antecedents; liturgical setting; and extra-liturgical (cultural) analogs. Significance and associations: Biblical antecedents Three Biblical passages recur repeatedly with reference to insufflation narrowly defined, all of them referring to some kind of life-giving divine breath. The first and most commonly cited is Genesis 2:7 (echoed by Wisdom 15:11 and Job 33:4), in which God first creates man and then breathes into him the breath of life, in order to give him (as the passage was later interpreted) a human soul. The second passage, Ezekiel 37:9, reinterprets the Genesis passage prophetically, in foreseeing God resurrecting the dead bones of exiled Israel by means of his life-giving breath. 
And finally, in John 20:22, Christ is represented as conveying the Paraclete to his disciples, and so initiating the commissioned church, by breathing on them, here too, very possibly, with implicit reference to the original creation. The two passages were connected explicitly in later Christian exegesis: the same breath that created man re-created him. Significance and associations: "[Insufflation] signifies, To blow into, Gen. 2. 7. This sheweth mans soul not to be of the earth, as his body was, but of nothing, by the insufflation of God, and so differing from the spirit of beasts, Eccl. 3. 21. This word is used also, when Christ to make men new creatures, inspired his Apostles with the holy Ghost, Joh. 20. 21." "The Lord God, saith the Text, formed man of the dust of the ground, and breathed into his Nostrils the breath of life, and man became a living Soul. His Body made of Earth, but his Soul the Breath of God. … We must not understand it grosly; for so Breath is not attributable unto God, who is a simple and perfect Spirit; but … as a figurative expression of God's communicating unto Man that inward Principle, whereby he lives and acts, not only in common with, but in a degree above other Animals. … The Learned P. Fagius takes notice of three things in the Text of Moses, which do conclude the Immortality of the Soul of Man. I. Insufflatio illa Dei: This Inspiration from God spoken of: For he that breaths into another, contributes unto him aliquid de suo somewhat of his own: And therefore, saith he, when our B. Saviour would communicate his Spirit to his Disciples, he did it with Insufflation, breathing on them, thereby to signifie, se Divinum & de suo quiddam illis contribuere [i.e., that he was himself divine and was infusing something of his own into them]." The associations with creation, rebirth, initiation, and revivification created by these passages of Scripture suited insufflation for a role in baptism as it has been most commonly regarded: as figuring the waters of creation (over which the Spirit brooded); as figuring the womb of rebirth; and as figuring (in Saint Paul's metaphor) the tomb, into which the Christian joins Christ in descending, and from which the Christian likewise joins Christ in ascending, dead to the old life but made alive again in Christ. There are also Biblical antecedents for exsufflation, properly speaking, that is, exorcistic blowing, especially the numerous Old Testament passages in which "the breath of God" is the vehicle or symbol not of life but of death and destruction — an expression of the wrath of God: "by the breath of God they perish / and by the blast of his anger they are consumed" (Job 4:9, RSV). The same power is attributed metaphorically to Christ: "The lawless one will be revealed, and the Lord Jesus will slay him with the breath of his mouth" (2 Thessalonians 2:8, RSV). Even less obvious passages could be associated with liturgical exsufflation. Jesse of Amiens, for example, interprets Psalm 34 (Vulg. 35):5 as descriptive of the fate of exsufflated devils: "Let them be like chaff before the wind, with the angel of the Lord driving them on!"
And the apocryphal Acts of Thomas describes a baptismal ceremony which, though it does not explicitly contain a breathing ceremony, may imply one, "Let the gift come by which, breathing upon thine enemies, thou didst make them draw back and fall headlong, and let it dwell in this oil, over which we name thy holy name." God's breath can be fiery, consuming all it touches: "I will blow upon you with the fire of my wrath" (Ezekiel 21:31, RSV). Some of the interpretations of exsufflation may reflect this. Cyril of Jerusalem, for example, when he discusses exsufflation in his catechetical sermons, interprets the liturgical practice in terms of fire: "The breathing of the saints and the invocation of the name of God, like fiercest flame, scorch and drive out evil spirits." Fire remains a theme in later liturgical exorcisms, for devils, as Nicetas is reported to have said, "are purged by exorcisms as by fire": "we come against you, devil, with spiritual words and fiery speech; we ignite the hiding places in which you are concealed." Liturgical context: More importantly, perhaps, fire is physically and symbolically associated with sufflation because of the traditional placement of baptism within the Paschal vigil — a setting heavy with symbolism of light and fire: the blessing of the Paschal candle, the lighting of the "new fire," and the singing of the Exultet and the Lumen Christi. The intimate connection between divine breath and divine fire appears in its most visually arresting form during the benediction of the font, in which, according to most orders, the candle is dipped in the font while the priest declares the power of the Holy Spirit to have descended into the water: the sufflation of the font in most cases directly precedes or accompanies the immersion of the candle. Their close association can again be illustrated from Wulfstan's baptismal homilies: "By the breath that the priest breathes into the font when he blesses it, the devil is straightway driven out from it. And when the priest dips the consecrated candle in the water, then that water forthwith becomes imbued with the Holy Ghost." Similar considerations bind sufflation closely to imagery of light and darkness, specifically of the movement of the baptizand from the kingdom of darkness into the kingdom of light (a very common theme), and to the sign of the cross (a very common action), among others that could be mentioned. John the Deacon uses light-dark imagery to explain exsufflation in exorcism as a transition: "The exsufflated person is exorcised so that ... having been delivered from the power of darkness, he might be translated into the kingdom ... of God." Significance and associations: So also Augustine ("The church exsufflates and exorcises [infants] that the power of darkness might be cast out from them"), and Isidore ("The power of the devil is ... exsufflated in them, so that ... being delivered from the power of darkness, [they] might be translated unto the kingdom of their Lord"). Significance and associations: And as regards signation (the sign of the cross), in Western texts from as early as the Gelasian Sacramentary, the one gesture almost always precedes (or precedes and follows) the other, and their significance is often complementary if not identical. In Raban Maur's discussion of the baptismal liturgy, for example, the exsufflation is said to expel the devil, the signing to keep him from coming back. The two signs are frequently combined, the blowing done in the form of a cross, e.g.
in the Syriac Rite described by James of Edessa, in the modern Coptic rite, in the late 9th-century Ordo Romanus XXXI, in Wulfstan's Anglo-Saxon homilies and their Continental sources, in the 10th-century Ambrosian rites for catechumen and font, in the 11th-century North Italian catechumenal rites, in the 12th- through 15th-century English pontificals, in the Sarum Missal, and in the 13th-century Roman pontifical. Extra-liturgical (hagiographic and magical) use: Patristic period: There are hints in some of the Church Fathers that Christians had a habit of breathing (or hissing) at evil spirits as a recognized act of revulsion or repulsion, even apart from the ceremonies of the church. Tertullian is perhaps the best witness. He seems to be talking about an extra-liturgical casting out of demons by means of exsufflation and signing when he declares that gods rejected by Christians are driven from the bodies of men "by our touch and by our breath," and are thus "carried away by the thought and vision of the fire [of judgment]." He is talking about an ordinary gesture of aversion when he asks a Christian incense-dealer (regarded as hypocritical because he sells incense for polytheistic altars), "with what mouth, I ask, will he spit and blow before the fuming altars for which he himself provided? with what constancy will he [thus] exorcise his foster children?" And his remarks to his wife about the dangers of mixed marriage suggest that exsufflation was a distinctively Christian practice: "[If you marry again, to a non-Christian,] shall you escape notice when you sign your bed or your body? when you blow away some impurity? When even by night you rise to pray?" If such a custom did exist, it would clarify certain remarks by other Fathers, which might otherwise seem merely metaphorical. Eusebius, for example, says of the saints that they were men "who though they only breathed and spoke, were able to scatter the counsels of evil demons." Irenaeus describes the right response to Gnostic doctrine as "reviling" (καταφυσησαντας; literally exsufflantes). Cyril of Jerusalem, speaking of resisting temptation, not of baptism, says that "the mere breathing of the exorcist becomes as a fire to that unseen foe." And Augustine's remarks about blowing on images of the emperor suggest that the significance of the gesture was well enough established to be actionable: "Of the great crime of lese majesty ... is he held guilty, according to the laws of this world, who blows upon an image ... of the emperor." Even as late as Bede, we may suspect that "exsufflate" in the sense of "revile" or "cast off" may be a living metaphor. Hagiography: The extremely influential Life of Saint Martin by Sulpicius Severus seems to have set in motion a hagiographic tradition in which saints cast out demons or repel tempting devils by blowing at them. Of Saint Pachomius, for example, it is said that "defending his brow with the sign of the cross, he blew upon [the demon] and immediately he fled … ; blowing upon him, he said, 'depart from me, devil.'" And of Saint Goswin that "a demon stood before Saint Goswin saying 'surely you see that I am Christ …' and … therefore Saint Goswin exsufflated vigorously, saying 'depart foe …,' and immediately … the devil vanished." Saint Justina is reported to have similarly unmasked a series of increasingly subtle and powerful demons, finally melting the prince of demons himself: "blowing upon the devil, she immediately melted him like wax and … felt herself freed from all temptation."
And Saint Felix is said to have destroyed idols and uprooted sacred trees by like means. The breath of the saints was credited with healing, as well as exorcistic, powers from an early period. Gregory of Nyssa says of Gregory Thaumaturgus ('Gregory the Wonder-worker') that he needed to resort to "no finicking and laborious" magic, but that "there sufficed, for both the casting out of demons and the healing of bodily ailments, the breath of his mouth." Similar powers are attributed to the Irish saints: kindling lamps, curing dumbness. This theme, too, persists in later hagiographic and quasi-hagiographic texts, appearing, for example, in the Estoire del saint graal as the agency by which a madman is miraculously restored. Among English texts, Felix's Life of Saint Guthlac relates that in order to give relief to a boy afflicted by madness, the saint "washed him in the water of the sacred font and, breathing into his face the breath of healing [or 'spirit of salvation'], drove away from him all the power of the evil spirit," illustrating the difficulty of distinguishing healing from exorcism in an era in which madness was attributed to demonic possession. The miracle that Bishop John performed, according to Bede, on behalf of Herebald is another example, since it involved a sufflation that was seemingly exorcistic, catechetical, and curative simultaneously. Magic and folk medicine: Tertullian remarked to his wife that, in the eyes of a non-believer, Christian practices would invite the question: "will you not seem to be doing magic?" Celsus (according to Origen) reports the use of exsufflation by Egyptian magicians. Plotinus seems to attack its use by Roman ones. One of Lucian's tall tales mentions a Chaldean pest-control sorcerer who causes toads and snakes to vanish by blowing on them. Magic and folk medicine: The Gospels likewise present Jesus himself as a supernatural healer, curing the blind, the lame, lepers, and many others; in one instance he used his spittle to heal the eyes of a man blind from birth. In Syria, ceremonial breathing became formalized as part of the rite of visitation of the sick: Ephraem Syrus advises that "if medicine fails you when you are sick, the 'visitors' will help, will pray for health, and one of them will breathe in your mouth, the other will sign you [with the sign of the cross]." Whether originally Christian or derived from pagan practice, similar methods of healing have been reported continuing into modern times: in Westphalia, the healing of a wound by triple signing and triple cruciform sufflation, or by exsufflation accompanied by a rhyming charm; and in Holland the alleviation of toothache by similar means. According to Drechsler, "Illnesses were blown away by the breath. If a child had bumped himself, one would blow three times on the place and it would 'fly away.'" Burns, and conditions that in some fashion resemble burns, such as fevers, boils, sore throats and rashes, are naturally the most common objects of blowing among modern folk-remedies, for example the Shetland cure that requires blowing on a burn three times while reciting the charm "Here come I to cure a burnt sore. / If the dead knew what the living endure, / The burnt sore would burn no more."
But everything from jaundice, convulsions, and colic to bad luck and evil spells can apparently be alleviated by a bit of blowing. Wolters points out that exorcistic blowing was still (in 1935) found in the custom of blowing over bread that is about to be eaten. Moreover, a Syrian blows over his child to avert the evil eye, some still blow three times over a strange spoon before using it, and in Alaska the medicine man blows into the nose and mouth of a patient to drive out the demon of disease. Magic and folk medicine: In one American example of superstition clearly derived from liturgical use, it is said that if at the baptism of a baby one turns at the door and blows three times, one can successfully prevent the devil from ever coming between the baby and the altar.
**Cortical patterning** Cortical patterning: Cortical patterning is a field of developmental neuroscience which aims to determine how the various functional areas of the cerebral cortex are generated, what size and shape they will be, and how their spatial pattern across the surface of the cortex is specified. Early brain lesion studies indicated that different parts of the cortex served different cognitive functions, such as visual, somatosensory, and motor functions, a picture famously synthesized by Brodmann in 1909. Today the field supports the idea of a 'protomap', a molecular pre-pattern of the cortical areas during early embryonic stages. The protomap is a feature of the cortical ventricular zone, which contains the primary stem cells of the cortex, known as radial glial cells. A system of signaling centers, positioned strategically at the midline and edges of the cortex, produces secreted signaling proteins that establish concentration gradients in the cortical primordium. This provides positional information for each stem cell, and regulates proliferation, neurogenesis, and areal identity. After the initial establishment of areal identity, axons from the developing thalamus arrive at their correct cortical areal destination through the process of axon guidance and begin to form synapses. Many activity-dependent processes are then thought to play important roles in the maturation of each area.
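The positional-information logic of the protomap can be illustrated with a deliberately simple toy model (an illustration only, not a model from the cortical-patterning literature; the decay length, thresholds, and area names are invented): a signaling center at one edge of a one-dimensional sheet of progenitors secretes a protein whose concentration decays with distance, and each cell reads the local concentration against thresholds to adopt an areal identity.

```python
import numpy as np

# Toy gradient model: a signaling center sits at x = 0 and its secreted
# protein decays exponentially across a 1-D sheet of progenitor cells.
positions = np.linspace(0.0, 1.0, 11)      # normalized distance from the source
concentration = np.exp(-positions / 0.3)   # invented decay length of 0.3

def areal_identity(c: float) -> str:
    # Invented thresholds: each cell compares the local concentration
    # against cutoffs to pick one of three hypothetical areas.
    if c > 0.6:
        return "area A (near the signaling center)"
    if c > 0.2:
        return "area B"
    return "area C (far from the source)"

for x, c in zip(positions, concentration):
    print(f"x = {x:.1f}  c = {c:.2f}  ->  {areal_identity(c)}")
```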
**Principle of faunal succession** Principle of faunal succession: The principle of faunal succession, also known as the law of faunal succession, is based on the observation that sedimentary rock strata contain fossilized flora and fauna, and that these fossils succeed each other vertically in a specific, reliable order that can be identified over wide horizontal distances. A fossilized Neanderthal bone (less than 500,000 years old) will never be found in the same stratum as a fossilized Megalosaurus (about 160 million years old), for example, because Neanderthals and megalosaurs lived during different geological periods, separated by millions of years. This allows strata to be identified and dated by the fossils found within. Principle of faunal succession: This principle, which received its name from the English geologist William Smith, is of great importance in determining the relative age of rocks and strata. The fossil content of rocks, together with the law of superposition, helps to determine the time sequence in which sedimentary rocks were laid down. Principle of faunal succession: Evolution explains the observed faunal and floral succession preserved in rocks. Faunal succession was documented by Smith in England during the first decade of the 19th century, and concurrently in France by Cuvier (with the assistance of the mineralogist Alexandre Brongniart). Archaic biological features and organisms are succeeded in the fossil record by more modern versions. For instance, paleontologists investigating the evolution of birds predicted that feathers would first be seen in primitive forms on flightless predecessor organisms such as feathered dinosaurs. This is precisely what has been discovered in the fossil record: simple feathers, incapable of supporting flight, are succeeded by increasingly large and complex feathers. In practice, the most useful diagnostic species are those with the fastest rate of species turnover and the widest distribution; their study is termed biostratigraphy, the science of dating rocks by using the fossils contained within them. In Cenozoic strata, fossilized tests of foraminifera are often used to determine faunal succession on a refined scale, each biostratigraphic unit (biozone) being a geological stratum that is defined on the basis of its characteristic fossil taxa. An outline microfaunal zonal scheme based on both foraminifera and ostracoda was compiled by M. B. Hart (1972). Principle of faunal succession: Earlier fossil life forms are simpler than more recent forms, and more recent fossil forms are more similar to living forms (principle of faunal succession).
**Oligogalacturonate-specific porin** Oligogalacturonate-specific porin: Oligogalacturonate-specific porins (KdgM) are a family of bacterial outer membrane proteins from Dickeya dadantii. The phytopathogenic Gram-negative bacterium D. dadantii secretes pectinases, which are able to degrade the pectic polymers of plant cell walls, and uses the degradation products as a carbon source for growth. Synthesis of KdgM is strongly induced in the presence of pectic derivatives. KdgM behaves like a voltage-dependent porin that is slightly selective for anions and that exhibits fast block in the presence of trigalacturonate. KdgM appears to be monomeric.
**SIM Application Toolkit** SIM Application Toolkit: SIM Application Toolkit (STK) is a standard of the GSM system which enables the subscriber identity module (SIM card) to initiate actions which can be used for various value-added services. Similar standards exist for other network and card systems, with the USIM Application Toolkit (USAT) for USIMs used by newer-generation networks being an example. A more general name for this class of Java Card-based applications running on UICC cards is the Card Application Toolkit (CAT). The SIM Application Toolkit consists of a set of commands programmed into the SIM which define how the SIM should interact directly with the outside world and initiate actions independently of the handset and the network. This enables the SIM to build up an interactive exchange between a network application and the end user, and to access, or control access to, the network. The SIM also gives commands to the handset, such as displaying menus and/or asking for user input. STK has been deployed by many mobile operators around the world for many applications, often where a menu-based approach is required, such as mobile banking and content browsing. Designed as a single application environment, the STK can be started during the initial power-up of the SIM card and is especially suited to low-level applications with simple user interfaces. In GSM networks, the SIM Application Toolkit is defined by the GSM 11.14 standard released in 2001. SIM Application Toolkit: From release 4 onwards, GSM 11.14 was replaced by 3GPP TS 31.111, which also includes the specifications of the USIM Application Toolkit for 3G/4G networks. Advantages: Some manufacturers claim that STK enables higher levels of security through identity verification and encryption, which are necessary for secure electronic commerce. STK has been deployed on the largest number of mobile devices. Limitations: On Android handsets, software updates delivered over the GSM network may install the SIM Toolkit application automatically along with the new software, regardless of automatic-install settings. Changing the applications and menus stored on the SIM is difficult after the customer takes delivery of the SIM, and the toolkit is sometimes flagged as surveillance software. Limitations: To deliver updates, either the SIM must be returned and exchanged for a new one (which can be costly and inconvenient) or the application updates must be delivered over-the-air (OTA) using specialized, optional SIM features. As of October 2010, mobile network operators can, for example, deliver updated STK application menus by sending a secure SMS to handsets that include an S@T (SIMalliance Toolbox) compliant wireless internet browser (WIB). When using a SIM card compliant with the Bearer Independent Protocol (BIP) in a BIP-compliant handset, the updates can be delivered very quickly as well (depending upon the network connectivity available to and supported by the handset, i.e. GPRS/3G speed). It might also be possible to change the menu of STK applications based on the Wireless Internet Gateway (WIG) specification. The update limitations hinder the number and frequency of STK application deployments. STK has essentially no support for multimedia, only basic pictures. The STK technology has limited independent development support available. If a mobile phone does not support the SIM Application Toolkit, users may not be able to use the service or network correctly; issues with several mobile network operators have been noticed on smartphones that don't support STK, such as the Nokia N900.
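To give a concrete flavour of the "commands programmed into the SIM", the sketch below assembles a proactive SELECT ITEM command (a menu of the kind described above) as BER-TLV in Python. This is a rough illustration in the spirit of GSM 11.14 / ETSI TS 102 223, not production code: the tag values are quoted from the specification as best understood and should be verified against it, and the menu title and items are invented.

```python
# Sketch: building a proactive SELECT ITEM (menu) command as BER-TLV.
# Tag values (assumed, per GSM 11.14 / TS 102 223): 0xD0 proactive command,
# 0x81 command details, 0x82 device identities, 0x85 alpha identifier,
# 0x8F item. 0x24 is the SELECT ITEM command type.

def tlv(tag: int, value: bytes) -> bytes:
    assert len(value) < 0x80          # short-form lengths only, for brevity
    return bytes([tag, len(value)]) + value

items = {1: "Balance", 2: "Top up"}   # invented menu entries

body = (
    tlv(0x81, bytes([0x01, 0x24, 0x00]))   # command number 1, SELECT ITEM
    + tlv(0x82, bytes([0x81, 0x82]))       # from SIM (0x81) to terminal (0x82)
    + tlv(0x85, "Bank menu".encode())      # menu title (alpha identifier)
    + b"".join(tlv(0x8F, bytes([i]) + t.encode()) for i, t in items.items())
)
proactive_command = tlv(0xD0, body)
print(proactive_command.hex())
```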
In newer networks: USIM Application Toolkit (USAT) is the equivalent of STK for 3G networks. USAT takes advantage of the multi-application environment of 3G devices by not activating until a specific application has been selected, unlike STK, which is activated at startup. Some functions are card-related rather than application-related.
**OS/VS2** OS/VS2: Operating System/Virtual Storage 2 (OS/VS2) is the successor operating system to OS/360 MVT in the OS/360 family. SVS refers to OS/VS2 Release 1; MVS refers to OS/VS2 Release 2 and later.
**Radeon** Radeon: Radeon is a brand of computer products, including graphics processing units, random-access memory, RAM disk software, and solid-state drives, produced by Radeon Technologies Group, a division of AMD. The brand was launched in 2000 by ATI Technologies, which was acquired by AMD in 2006 for US$5.4 billion. Radeon Graphics: Radeon Graphics is the successor to the Rage line. Three different families of microarchitectures can be roughly distinguished: the fixed-pipeline family, and the unified shader model families of TeraScale and Graphics Core Next. ATI/AMD have developed different technologies, such as TruForm, HyperMemory, HyperZ, XGP, Eyefinity for multi-monitor setups, PowerPlay for power-saving, CrossFire (for multi-GPU setups), and Hybrid Graphics. A range of SIP blocks is also to be found on certain models in the Radeon product line: Unified Video Decoder, Video Coding Engine and TrueAudio. Radeon Graphics: The brand was previously only known as "ATI Radeon" until August 2010, when it was renamed to increase AMD's brand awareness on a global scale. Products up to and including the HD 5000 series are branded as ATI Radeon, while the HD 6000 series and beyond use the new AMD Radeon branding. On 11 September 2015, AMD's GPU business was split into a separate unit known as Radeon Technologies Group, with Raja Koduri as Senior Vice President and chief architect. Radeon Graphics: Radeon Graphics card brands: AMD does not distribute Radeon cards directly to consumers (though some exceptions can be found). Instead, it sells Radeon GPUs to third-party manufacturers, who build and sell the Radeon-based video cards to the OEM and retail channels. Manufacturers of the Radeon cards—some of whom also make motherboards—include ASRock, Asus, Biostar, Club 3D, Diamond, Force3D, Gainward, Gigabyte, HIS, MSI, PowerColor, Sapphire, VisionTek, and XFX. Graphics processor generations: Early generations were identified with a number and major/minor alphabetic prefix. Later generations were assigned code names. New or heavily redesigned architectures have a prefix of R (e.g., R300 or R600) while slight modifications are indicated by the RV prefix (e.g., RV370 or RV635). The first derivative architecture, RV200, did not follow the scheme used by later parts. Fixed-pipeline family R100/RV200 The Radeon, first introduced in 2000, was ATI's first graphics processor to be fully DirectX 7 compliant. R100 brought with it large gains in bandwidth and fill-rate efficiency through the new HyperZ technology. The RV200 was a die-shrink of the former R100 with some core logic tweaks for clockspeed, introduced in 2002. The only release in this generation was the Radeon 7500, which introduced little in the way of new features but offered substantial performance improvements over its predecessors. R200 ATI's second generation Radeon included a sophisticated pixel shader architecture. This chipset implemented Microsoft's pixel shader 1.4 specification for the first time. Its performance relative to competitors was widely perceived as weak, and subsequent revisions of this generation were cancelled in order to focus on development of the next generation. R300/R350 The R300 was the first GPU to fully support Microsoft's DirectX 9.0 technology upon its release in 2002. It incorporated fully programmable pixel and vertex shaders. About a year later, the architecture was revised to allow for higher frequencies, more efficient memory access, and several other improvements in the R350 family.
A budget line of RV350 products was based on this refreshed design, with some elements disabled or removed. Models using the new PCI Express interface were introduced in 2004. Using 110 nm and 130 nm manufacturing technologies under the X300 and X600 names, respectively, the RV370 and RV380 graphics processors were used extensively by consumer PC manufacturers. R420 While heavily based upon the previous generation, this line included extensions to the Shader Model 2 feature set. Shader Model 2b, the specification ATI and Microsoft defined with this generation, offered somewhat more shader program flexibility. R520 ATI's DirectX 9.0c series of graphics cards, with complete Shader Model 3.0 support. Launched in October 2005, this series brought a number of enhancements, including the floating-point render target technology necessary for HDR rendering with anti-aliasing. TeraScale-family R600 ATI's first series of GPUs to replace the old fixed pipeline and implement a unified shader model. Subsequent revisions tuned the design for higher performance and energy efficiency, resulting in the ATI Mobility Radeon HD series for mobile computers. Graphics processor generations: R700 Based on the R600 architecture. Mostly a bolstered R600 with many more stream processors, along with improvements to power consumption and GDDR5 support for the high-end RV770 and RV740 (HD 4770) chips. It arrived in late June 2008. The HD 4850 and HD 4870 have 800 stream processors and GDDR3 and GDDR5 memory, respectively. The 4890 was a refresh of the 4870 with the same number of stream processors yet higher clock rates due to refinements. The 4870X2 has 1600 stream processors and GDDR5 memory on an effective 512-bit memory bus with 230.4 GB/s of video memory bandwidth available. Graphics processor generations: Evergreen The series was launched on 23 September 2009. It featured a 40 nm fabrication process for the entire product line (only the HD 4770 (RV740) was built on this process previously), with more stream cores and compatibility with the next major version of the DirectX API, DirectX 11, which launched on 22 October 2009 along with Microsoft Windows 7. The Rxxx/RVxxx codename scheme was scrapped entirely. The initial launch consisted of only the 5870 and 5850 models. ATI released beta drivers that introduced full OpenGL 4.0 support on all variants of this series in March 2010. Graphics processor generations: Northern Islands This is the first series to be marketed solely under the "AMD" brand. It features a 3rd-generation 40 nm design, rebalancing the existing architecture with redesigned shaders to give it better performance. It was released first on 22 October 2010, in the form of the 6850 and 6870. 3D output is enabled with HDMI 1.4a and DisplayPort 1.2 outputs. Graphics processor generations: Graphics Core Next-family Southern Islands "Southern Islands" was the first series to feature the new compute microarchitecture known as Graphics Core Next (GCN). GCN was used in the higher-end cards, while the VLIW5 architecture utilized in the previous generation was used in the lower-end, OEM products (the later Radeon HD 7790, however, uses GCN 2). The first products in the series were released by AMD on 9 January 2012. Graphics processor generations: Sea Islands The "Sea Islands" were OEM rebadges of the 7000 series, with only three products, code-named Oland, available for general retail. The series, just like the "Southern Islands", used a mixture of VLIW5 models and GCN models for its desktop products.
Graphics processor generations: Volcanic Islands "Volcanic Islands" GPUs were introduced with the AMD Radeon Rx 200 Series, and were first released in late 2013. The Radeon Rx 200 line is mainly based on AMD's GCN architecture, with the lower-end, OEM cards still using VLIW5. The majority of desktop products use GCN 1, while the R9 290X/290 and R7 260X/260 use GCN 2, and only the R9 285 uses the newer GCN 3. Graphics processor generations: Caribbean Islands GPUs codenamed "Caribbean Islands" were introduced with the AMD Radeon Rx 300 Series, released in 2015. This series was the first to solely use GCN-based models, ranging from GCN 1st to GCN 3rd Gen, including the GCN 3-based Fiji-architecture models named Fury X, Fury, Nano and the Radeon Pro Duo. Arctic Islands GPUs codenamed "Arctic Islands" were first introduced with the Radeon RX 400 Series in June 2016 with the announcement of the RX 480. These cards were the first to use the new Polaris chips, which implement GCN 4th Gen on the 14 nm fab process. The RX 500 Series released in April 2017 also uses Polaris chips. Vega RDNA-family RDNA 1 On 27 May 2019, at COMPUTEX 2019, AMD announced the new 'RDNA' graphics micro-architecture, which is to succeed the Graphics Core Next micro-architecture. This is the basis for the Radeon RX 5700-series graphics cards, the first to be built under the codename 'Navi'. These cards feature GDDR6 SGRAM and support for PCI Express 4.0. Graphics processor generations: RDNA 2 On 5 March 2020, AMD publicly announced its plan to release a "refresh" of the RDNA micro-architecture. Dubbed the RDNA 2 architecture, it was stated to succeed the first-gen RDNA micro-architecture and was initially scheduled for release in Q4 2020. RDNA 2 was confirmed as the graphics microarchitecture featured in the Xbox Series X and Series S consoles from Microsoft, and the PlayStation 5 from Sony, with proprietary tweaks and different GPU configurations in each system's implementation. Graphics processor generations: AMD unveiled the Radeon RX 6000 series, its next-gen RDNA 2 graphics cards, at an online event on 28 October 2020. The lineup consists of the RX 6800, RX 6800 XT and RX 6900 XT. The RX 6800 and 6800 XT launched on 18 November 2020, with the RX 6900 XT released on 8 December 2020. Further variants include the Radeon RX 6700 (XT) series based on Navi 22, launched on 18 March 2021; the Radeon RX 6600 (XT) series based on Navi 23, with the RX 6600 XT launched on 11 August 2021 and the RX 6600 on 13 October 2021; and the Radeon RX 6500 (XT), launched on 19 January 2022. Graphics processor generations: API overview Some generations vary from their predecessors predominantly due to architectural improvements, while others were adapted primarily to new manufacturing processes with fewer functional changes. The table below summarizes the APIs supported in each Radeon generation. Also see AMD FireStream and AMD FirePro branded products. The following table shows the graphics and compute APIs supported across AMD GPU micro-architectures. Note that a branding series might include older-generation chips. Feature overview The following table shows features of AMD/ATI's GPUs (see also: List of AMD graphics processing units). Graphics device drivers: AMD's proprietary graphics "Radeon Software" (formerly Catalyst) On 24 November 2015, AMD released a new version of their graphics driver following the formation of the Radeon Technologies Group (RTG) to provide extensive software support for their graphics cards.
This driver, labelled Radeon Software Crimson Edition, overhauls the UI with Qt, resulting in better responsiveness from a design and system perspective. It includes a new interface featuring a game manager, clocking tools, and sections for different technologies. Unofficial modifications such as Omega drivers and DNA drivers were available. These drivers typically consist of mixtures of various driver file versions with some registry variables altered and are advertised as offering superior performance or image quality. They are, of course, unsupported, and as such, are not guaranteed to function correctly. Some of them also provide modified system files for hardware enthusiasts to run specific graphics cards outside of their specifications. Graphics device drivers: On operating systems Radeon Software is being developed for Microsoft Windows and Linux. As of January 2019, other operating systems are not officially supported. This may be different for the AMD FirePro brand, which is based on identical hardware but features OpenGL-certified graphics device drivers. Graphics device drivers: ATI previously offered driver updates for their retail and integrated Macintosh video cards and chipsets. ATI stopped support for Mac OS 9 after the Radeon R200 cards, making the last officially supported card the Radeon 9250. The Radeon R100 cards up to the Radeon 7200 can still be used with even older classic Mac OS versions such as System 7, although not all features are taken advantage of by the older operating system. Ever since ATI's acquisition by AMD, ATI no longer supplies or supports drivers for classic Mac OS or macOS. macOS drivers can be downloaded from Apple's support website, while classic Mac OS drivers can be obtained from 3rd-party websites that host the older drivers for users to download. ATI used to provide a preference panel for use in macOS called ATI Displays, which can be used with both retail and OEM versions of its cards. Though it gives more control over advanced features of the graphics chipset, ATI Displays has limited functionality compared to Catalyst for Windows or Linux. Graphics device drivers: Third-party free and open-source "Radeon" The free and open-source driver for the Direct Rendering Infrastructure has been under constant development by the Linux kernel developers, by 3rd-party programming enthusiasts, and by AMD employees. It is composed of five parts: the Linux kernel DRM component (this part received dynamic re-clocking support in Linux kernel version 3.12, and its performance has become comparable to that of AMD Catalyst); the Linux kernel KMS driver component, basically the device driver for the display controller; the user-space component libDRM; the user-space component in Mesa 3D (currently most of these components are written conforming to the Gallium3D specifications); and a fifth part described below. Graphics device drivers: all drivers in Mesa 3D with version 10.x (last 10.6.7) are, as of September 2014, limited to OpenGL version 3.3 and OpenGL ES 3.0; all drivers in Mesa 3D with version 11.x (last 11.2.2) are, as of May 2016, limited to OpenGL version 4.1 and OpenGL ES 3.0 or 3.1 (11.2+); all drivers in Mesa 3D with version 12.x (June 2016) can support OpenGL version 4.3; all drivers in Mesa 3D with version 13.0.x (November 2016) can support OpenGL 4.4 and, unofficially, 4.5.
Graphics device drivers: all drivers in Mesa 3D with version 17.0.x (January 2017) can support OpenGL 4.5 and OpenGL ES 3.2. For actual hardware support under the different Mesa versions, see glxinfo: AMD R600/700 since Mesa 10.1: OpenGL 3.3+, OpenGL ES 3.0+ (+: some features of higher levels, depending on the Mesa version); AMD R800/900 (Evergreen, Northern Islands): OpenGL 4.1+ (Mesa 13.0+), OpenGL ES 3.0+ (Mesa 10.3+); AMD GCN (Southern/Sea Islands and newer): OpenGL 4.5+ (Mesa 17.0+), OpenGL ES 3.2+ (Mesa 18.0+), Vulkan 1.0 (Mesa 17.0+), Vulkan 1.1 (GCN 2nd Gen+, Mesa 18.1+). The fifth part is a special and distinct 2D graphics device driver for X.Org Server, which is finally about to be replaced by Glamor. OpenCL with GalliumCompute (previously Clover) is not fully developed for 1.0 and 1.1, and covers only parts of 1.2; some OpenCL conformance tests were failed for 1.0 and 1.1, and most for 1.2. ROCm is developed by AMD and is open source. It fully supports OpenCL 1.2 with the OpenCL 2.0 language. Only CPUs or GCN hardware with PCIe 3.0 are supported, so GCN 3rd Gen or higher is fully usable for OpenCL 1.2 software. Graphics device drivers: Supported features The free and open-source driver supports many of the features available in Radeon-branded cards and APUs, such as multi-monitor or hybrid graphics. Linux The free and open-source drivers are primarily developed on Linux and for Linux. Other operating systems Being entirely free and open-source software, the free and open-source drivers can be ported to any existing operating system; whether they have been, and to what extent, depends entirely on the manpower available. Available support shall be referenced here. Graphics device drivers: FreeBSD adopted DRI, and since Mesa 3D is not programmed exclusively for Linux, it should have near-identical support. MorphOS supports 2D and 3D acceleration for Radeon R100, R200 and R300 chipsets. AmigaOS 4 supports Radeon R100, R200, R300, R520 (X1000 Series), R700 (HD 4000 Series), HD 5000 (Evergreen) series, HD 6000 (Northern Islands) series and HD 7000 (Southern Islands) series. The RadeonHD AmigaOS 4 driver has been developed by Hans de Ruiter, funded and owned by A-EON Technology Ltd. The older R100 and R200 "ATIRadeon" driver for AmigaOS, originally developed by Forefront Technologies, was acquired by A-EON Technology Ltd in 2015. Graphics device drivers: In the past ATI provided hardware and technical documentation to the Haiku Project to produce drivers with full 2D and video in/out support on older Radeon chipsets (up to R500) for Haiku. A new Radeon HD driver was developed with the unofficial and indirect guidance of AMD open source engineers and currently exists in recent Haiku versions. The new Radeon HD driver supports native mode setting on R600 through Southern Islands GPUs. Embedded GPU products: AMD (and its predecessor ATI) have released a series of embedded GPUs targeted toward medical, entertainment, and display devices. Radeon Memory: In August 2011, AMD expanded the Radeon name to include random access memory modules under the AMD Memory line. The initial releases included 3 types of 2GiB DDR3 SDRAM modules: Entertainment (1333 MHz, CL9 9-9), UltraPro Gaming (1600 MHz, CL11 11-11) and Enterprise (specs to be determined). On 2013-05-08, AMD announced the release of Radeon RG2133 Gamer Series Memory. Radeon R9 2400 Gamer Series Memory was released on 2014-01-16. Radeon Memory: Production Dataram Corporation is manufacturing RAM for AMD.
Radeon RAMDisk: On 2012-09-06, Dataram Corporation announced it had entered into a formal agreement with AMD to develop an AMD-branded version of Dataram's RAMDisk software under the name Radeon RAMDisk, targeting gaming enthusiasts seeking dramatic improvements in game load times for an enhanced gaming experience. The freeware version of the Radeon RAMDisk software supports Windows Vista and later with a minimum of 4GiB of memory, and supports a maximum 4GiB RAM disk (6GiB if AMD Radeon Value, Entertainment, Performance Edition or Products are installed, and Radeon RAMDisk is activated between 2012-10-10 and 2013-10-10). The retail version supports RAM disk sizes between 5MiB and 64GiB. Radeon RAMDisk: Version history Version 4.1 was released on 2013-05-08. Production On 2014-04-02, Dataram Corporation announced it had signed an agreement with Elysium Europe Ltd. to expand sales penetration in Europe, the Middle East and Africa. Under this agreement, Elysium is authorized to sell AMD Radeon RAMDisk software. Elysium is focusing on etailers, retailers, system builders and distributors. Radeon SSD: AMD planned to enter the solid-state drive market with the introduction of R7 models powered by the Indilinx Barefoot 3 controller and Toshiba 19 nm MLC flash memory, initially available in 120GB, 240GB and 480GB capacities. The R7 Series SSD was released on 2014-08-09 and included Toshiba's A19 MLC NAND flash memory and the Indilinx Barefoot 3 M00 controller, the same components as in the OCZ Vector 150 SSD.
**Flange** Flange: A flange is a protruded ridge, lip or rim, either external or internal, that serves to increase strength (as the flange of an iron beam such as an I-beam or a T-beam); for easy attachment/transfer of contact force with another object (as the flange on the end of a pipe, steam cylinder, etc., or on the lens mount of a camera); or for stabilizing and guiding the movements of a machine or its parts (as the inside flange of a rail car or tram wheel, which keeps the wheels from running off the rails). Flanges are often attached using bolts in the pattern of a bolt circle. Plumbing or piping: A flange can also be a plate or ring that forms a rim at the end of a pipe when fastened to the pipe (for example, a closet flange). A blind flange is a plate for covering or closing the end of a pipe. A flange joint is a connection of pipes, where the connecting pieces have flanges by which the parts are bolted together. Plumbing or piping: Although the word 'flange' generally refers to the actual raised rim or lip of a fitting, many flanged plumbing fittings are themselves known as flanges. Common flanges used in plumbing are the Surrey flange or Danzey flange, York flange, Sussex flange and Essex flange. Surrey and York flanges fit to the top of the hot water tank, allowing all the water to be taken without disturbance to the tank. They are often used to ensure an even flow of water to showers. An Essex flange requires a hole to be drilled in the side of the tank. There is also a Warix flange, which is the same as a York flange but with the shower output on the top of the flange and the vent on the side. The York and Warix flanges have female adapters so that they fit onto a male tank, whereas the Surrey flange connects to a female tank. A closet flange provides the mount for a toilet. Pipe flanges: Piping components can be bolted together between flanges. Flanges are used to connect pipes with each other, to valves, to fittings, and to specialty items such as strainers and pressure vessels. A cover plate can be connected to create a "blind flange". Flanges are joined by bolting, and sealing is often completed with the use of gaskets or other methods. Mechanical means to mitigate the effects of leaks, like spray guards or specific spray flanges, may be included. Industries where flammable, volatile, toxic or corrosive substances are being processed have a greater need for special protection at flanged connections. Pipe flanges: Flange guards can provide that added level of protection to ensure safety. There are many different flange standards to be found worldwide. To allow easy functionality and interchangeability, these are designed to have standardised dimensions. Common world standards include ASA/ASME (USA), PN/DIN (European), BS10 (British/Australian), and JIS/KS (Japanese/Korean). In the USA, the standard is ASME B16.5 (ANSI stopped publishing B16.5 in 1996). ASME B16.5 covers flanges up to 24 inches in size and up to a pressure rating of Class 2500. Flanges larger than 24 inches are covered in ASME B16.47. Pipe flanges: In most cases, standards are interchangeable, as most local standards have been aligned to ISO standards; however, some local standards still differ. For example, an ASME flange will not mate against an ISO flange. Further, many of the flanges in each standard are divided into "pressure classes", allowing flanges to be capable of taking different pressure ratings. Again these are not generally interchangeable (e.g.
an ASME 150 will not mate with an ASME 300). These pressure classes also have differing pressure and temperature ratings for different materials. Unique pressure classes for piping can also be developed for a process plant or power generating station; these may be specific to the corporation, engineering procurement and construction (EPC) contractor, or the process plant owner. The ASME pressure classes for flat-face flanges are Class 125 and Class 250. The classes for ring-joint, tongue and groove, and raised-face flanges are Class 150, Class 300, Class 400 (unusual), Class 600, Class 900, Class 1500, and Class 2500. The flange faces are also made to standardized dimensions and are typically "flat face", "raised face", "tongue and groove", or "ring joint" styles, although other obscure styles are possible. Pipe flanges: Flange designs are available as "weld neck", "slip-on", "lap joint", "socket weld", "threaded", and also "blind". Other types: Threaded Flanges (also known as Screwed Flanges) have threads in their bore and are used to connect pipes with matching threads. They are easy to fit but are not suitable for high-temperature or high-pressure applications. Socket-Weld Flanges have a female socket for low-pressure and low-temperature applications; the pipe is fitted into the female socket in the flange. Lap-Joint Flanges are made in two components, a stub-end and a loose backing flange, and are used for applications that require frequent dismantling without wear from repeated modification. Slip-on Flanges have a bore matching the outer diameter of the pipe; the pipe is passed through the hole and fillet welded from both sides. They are suitable for low-pressure and low-temperature applications and are generally forged with a hub. Large sizes are used to connect big-bore pipes with storage tanks. Blind Flanges are used as a termination point for a piping system; they have a blank face with a bolt circle so they can be fitted to the end of a pipe run. Spectacle Blind Flanges (SB) are applied to systems that require regular separation and are made from two attached discs: one is a ring and the other a solid plate. Tongue & Groove Flanges consist of two flanges; one face has a ring machined onto it, and the other contains a matching depression. These are commonly found on pump covers and valve bonnets. Long Weld Neck Flanges are made with a circular fitting and a bulging rim around the circumference, are similar to standard Welding Neck flanges but have a particularly long neck, and are generally used in processing plants. Raised Face Flanges are high-pressure flanges with the gasket surface raised above the bolting circle face. They are widely used in process plant applications because of their increased pressure containment capability. Flat-Face Flanges are frequently used where the mating flange or flanged fitting is made from a casting, and have a gasket surface in the same plane as the bolting circle face. Ring Type Joint Flanges (RTJ) are made by putting grooves on their faces, and when bolts are tightened, they seal by compressing the gasket into the grooves. They are typically used for high-pressure and high-temperature services. Orifice Flanges are used to measure the flow rate of liquids or gases in the pipeline; they are essentially weld neck flanges with extra machining to hold an orifice plate. Other types: ASME standards (U.S.): Pipe flanges are made to standards called out by ASME B16.5, ASME B16.47, and MSS SP-44.
They are typically made from forged materials and have machined surfaces. ASME B16.5 refers to nominal pipe sizes (NPS) from 1⁄2" to 24". B16.47 covers NPSs from 26" to 60". Each specification further delineates flanges into pressure classes: 150, 300, 400, 600, 900, 1500 and 2500 for B16.5, while B16.47 delineates its flanges into pressure classes 75, 150, 300, 400, 600, 900. However, these classes do not correspond to maximum pressures in psi. Instead, the maximum pressure depends on the material of the flange and the temperature. For example, the maximum pressure for a Class 150 flange is 285 psi, and for a Class 300 flange it is 740 psi (both for ASTM A105 carbon steel and temperatures below 100°F); a sketch illustrating class selection from such ratings appears at the end of this entry. Other types: The gasket type and bolt type are generally specified by the standard(s); however, sometimes the standards refer to the ASME Boiler and Pressure Vessel Code (B&PVC) for details (see ASME Code Section VIII Division 1 – Appendix 2). These flanges are recognized by ASME Pipe Codes such as ASME B31.1 Power Piping, and ASME B31.3 Process Piping. Other types: Materials for flanges are usually under ASME designation: SA-105 (Specification for Carbon Steel Forgings for Piping Applications), SA-266 (Specification for Carbon Steel Forgings for Pressure Vessel Components), or SA-182 (Specification for Forged or Rolled Alloy-Steel Pipe Flanges, Forged Fittings, and Valves and Parts for High-Temperature Service). In addition, there are many "industry standard" flanges that in some circumstances may be used on ASME work. Other types: The product range includes SORF, SOFF, BLRF, BLFF, WNRF (XS, XXS, STD and Schedule 20, 40, 80), WNFF (XS, XXS, STD and Schedule 20, 40, 80), SWRF (XS and STD), SWFF (XS and STD), Threaded RF, Threaded FF and LJ, with sizes from 1/2" to 16". The bolting material used for flange connections is stud bolts mated with two nuts (and washers when required). In petrochemical industries, ASTM A193 B7 and ASTM A193 B16 stud bolts are used, as these have high tensile strength. Other types: European dimensions (EN / DIN): Most countries in Europe mainly install flanges according to standard DIN EN 1092-1 (forged stainless or steel flanges). Similar to the ASME flange standard, the EN 1092-1 standard has the basic flange forms, such as weld neck flange, blind flange, lapped flange, threaded flange (thread ISO 7-1 instead of NPT), weld-on collar, pressed collars, and adapter flanges such as the flange coupling GD press fittings. The different forms of flanges within EN 1092-1 (European Norm/Euronorm) are indicated within the flange name through the type. Similar to ASME flanges, EN 1092-1 steel and stainless flanges have several different versions of raised or non-raised faces, with the seal faces likewise indicated by form. Other countries: Flanges in the rest of the world are manufactured according to the ISO standards for materials, pressure ratings, etc., to which local standards including DIN, BS, and others have been aligned. Compact flanges: As the size of a compact flange increases it becomes relatively increasingly heavy and complex, resulting in high procurement, installation and maintenance costs. Large flange diameters in particular are difficult to work with, and inevitably require more space and have a more challenging handling and installation procedure, particularly on remote installations such as oil rigs. The design of the flange face includes two independent seals.
The first seal is created by application of seal seating stress at the flange heel, but it is not straightforward to ensure the function of this seal. Theoretically, the heel contact will be maintained for pressure values up to 1.8 times the flange rating at room temperature. Theoretically, the flange also remains in contact along its outer circumference at the flange faces for all allowable load levels that it is designed for. Compact flanges: The main seal is the IX seal ring. The seal ring force is provided by the elastic stored energy in the stressed seal ring. Any heel leakage will give internal pressure acting on the inside of the seal ring, intensifying the sealing action. This, however, requires the IX ring to be retained in its theoretical location in the ring groove, which is difficult to ensure and verify during installation. Compact flanges: The design aims at preventing exposure to oxygen and other corrosive agents, and thus prevents corrosion of the flange faces, the stressed length of the bolts, and the seal ring. This, however, depends on the outer dust rim remaining in satisfactory contact and on the inside fluid not being corrosive should it leak into the bolt circle void. Compact flanges: Applications of compact flanges: The initial cost of the theoretically higher-performance compact flange is inevitably higher than that of a regular flange, due to the closer tolerances and significantly more sophisticated design and installation requirements. By way of example, compact flanges are often used across the following applications: subsea oil and gas or risers, cold work and cryogenics, gas injection, high temperature, and nuclear applications. Train wheels: Trains and trams stay on their tracks primarily due to the conical geometry of their wheels. They also have a flange on one side to keep the wheels, and hence the train, running on the rails when the limits of the geometry-based alignment are reached, either due to some emergency or defect, or simply because the curve radius is so small that conicity-based self-steering is no longer effective. Vacuum flanges: A vacuum flange is a flange at the end of a tube used to connect vacuum chambers, tubing and vacuum pumps to each other. Microwave: In microwave telecommunications, a flange is a type of cable joint which allows different types of waveguide to connect. Several different microwave RF flange types exist, such as CAR, CBR, OPC, PAR, PBJ, PBR, PDR, UAR, UBR, UDR, icp and UPX. Ski boots: Ski boots use flanges at the toe or heel to connect to the binding of the ski. The size and shape of flanges on alpine skiing boots is standardized in ISO 5355. Traditional telemark and cross-country boots use the 75 mm Nordic Norm, but the toe flange is informally known as the "duckbill". New cross-country bindings eliminate the flange entirely and use a steel bar embedded within the sole instead.
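Using the Class 150 and Class 300 ratings quoted earlier (ASTM A105 carbon steel, below 100°F), class selection can be sketched in a few lines of Python. This is a toy illustration, not engineering software: only the two ratings given in the text are tabulated, and any real selection must use the full ASME B16.5 pressure-temperature tables.

```python
# Toy sketch: pick the lightest ASME class whose rating covers a design
# pressure. Only the two ratings quoted in the text (ASTM A105 carbon
# steel, below 100 F) are included; see ASME B16.5 for the full tables.
RATINGS_PSI = {150: 285, 300: 740}

def select_class(design_pressure_psi: float) -> int:
    for cls in sorted(RATINGS_PSI):
        if design_pressure_psi <= RATINGS_PSI[cls]:
            return cls
    raise ValueError("above tabulated ratings; consult ASME B16.5")

print(select_class(300))   # -> 300, since 300 psi exceeds the Class 150 rating
```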
**Pöschl–Teller potential** Pöschl–Teller potential: In mathematical physics, a Pöschl–Teller potential, named after the physicists Herta Pöschl (credited as G. Pöschl) and Edward Teller, is a special class of potentials for which the one-dimensional Schrödinger equation can be solved in terms of special functions. Definition: In its symmetric form, the potential is explicitly given by $V(x) = -\frac{\lambda(\lambda+1)}{2}\operatorname{sech}^2(x)$, and the solutions of the time-independent Schrödinger equation $-\tfrac{1}{2}\psi''(x) + V(x)\psi(x) = E\psi(x)$ with this potential can be found by virtue of the substitution $u = \tanh(x)$, which yields $\left[(1-u^2)\,\psi'(u)\right]' + \lambda(\lambda+1)\,\psi(u) + \frac{2E}{1-u^2}\,\psi(u) = 0$. Thus the solutions $\psi(u)$ are just the associated Legendre functions $P_\lambda^\mu(\tanh(x))$ with $E = -\frac{\mu^2}{2}$, where $\lambda = 1, 2, 3, \dots$ and $\mu = 1, 2, \dots, \lambda-1, \lambda$. Moreover, eigenvalues and scattering data can be explicitly computed. In the special case of integer $\lambda$, the potential is reflectionless, and such potentials also arise as the N-soliton solutions of the Korteweg–de Vries equation. The more general form of the potential is given by $V(x) = -\frac{\lambda(\lambda+1)}{2}\operatorname{sech}^2(x) - \frac{\nu(\nu+1)}{2}\operatorname{csch}^2(x)$. Rosen–Morse potential: A related potential, the Rosen–Morse potential, is obtained by introducing an additional term proportional to $\tanh x$.
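The spectrum can be checked numerically. The sketch below (a rough illustration; the grid resolution and box size are arbitrary choices) diagonalizes a central-finite-difference discretization of the Hamiltonian for $\lambda = 2$; the two lowest eigenvalues should come out close to the analytic bound states $E = -\mu^2/2$, i.e. $-2$ and $-1/2$.

```python
import numpy as np

lam = 2                              # integer lambda: the reflectionless case
N, L = 1200, 15.0                    # grid points and half-width of the box
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

V = -0.5 * lam * (lam + 1) / np.cosh(x) ** 2

# Kinetic term -1/2 d^2/dx^2 via central finite differences (Dirichlet walls).
T = -0.5 * (np.diag(np.full(N, -2.0))
            + np.diag(np.ones(N - 1), 1)
            + np.diag(np.ones(N - 1), -1)) / dx ** 2

E = np.linalg.eigvalsh(T + np.diag(V))[:lam]
print(E)                             # expect approximately [-2.0, -0.5]
```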
**Cefamandole** Cefamandole: Cefamandole (INN, also known as cephamandole) is a second-generation broad-spectrum cephalosporin antibiotic. The clinically used form of cefamandole is the formate ester cefamandole nafate, a prodrug which is administered parenterally. Cefamandole is no longer available in the United States. Cefamandole: The chemical structure of cefamandole, like that of several other cephalosporins, contains an N-methylthiotetrazole (NMTT or 1-MTT) side chain. As the antibiotic is broken down in the body, it releases free NMTT, which can cause hypoprothrombinemia (likely due to inhibition of the enzyme vitamin K epoxide reductase; vitamin K supplementation is recommended during therapy) and a reaction with ethanol similar to that produced by disulfiram (Antabuse), due to inhibition of aldehyde dehydrogenase. Cefamandole: Cefamandole has a broad spectrum of activity and can be used to treat bacterial infections of the skin, bones and joints, urinary tract, and lower respiratory tract. The following represents cefamandole MIC susceptibility data for a few medically significant microorganisms: Escherichia coli: 0.12 – 400 μg/ml; Haemophilus influenzae: 0.06 – >16 μg/ml; Staphylococcus aureus: 0.1 – 12.5 μg/ml. CO2 is generated during the normal reconstitution of cefamandole and ceftazidime, potentially resulting in an explosive-like reaction in syringes.
**Concurrent MetateM** Concurrent MetateM: Concurrent MetateM is a multi-agent language in which each agent is programmed using a set of (augmented) temporal logic specifications of the behaviour it should exhibit. These specifications are executed directly to generate the behaviour of the agent. As a result, there is no risk of invalidating the logic, as with systems where a logical specification must first be translated to a lower-level implementation. Concurrent MetateM: The root of the MetateM concept is Gabbay's separation theorem: any arbitrary temporal logic formula can be rewritten in a logically equivalent past → future form. Execution proceeds by a process of continually matching rules against a history, and firing those rules whose antecedents are satisfied. Any instantiated future-time consequents become commitments which must subsequently be satisfied, iteratively generating a model for the formula made up of the program rules. Temporal Connectives: The temporal connectives of Concurrent MetateM can be divided into two categories, as follows: Strict past time connectives: '●' (weak last), '◎' (strong last), '◆' (was), '■' (heretofore), 'S' (since), and 'Z' (zince, or weak since). Present and future time connectives: '◯' (next), '◇' (sometime), '□' (always), 'U' (until), and 'W' (unless). The connectives {◎, ●, ◆, ■, ◯, ◇, □} are unary; the remainder are binary. Strict past time connectives: Weak last: ●ρ is satisfied now if ρ was true at the previous moment in time. If ●ρ is interpreted at the beginning of time, it is satisfied despite there being no actual previous moment; hence "weak" last. Strong last: ◎ρ is satisfied now if ρ was true at the previous moment in time. If ◎ρ is interpreted at the beginning of time, it is not satisfied, because there is no actual previous moment; hence "strong" last. Was: ◆ρ is satisfied now if ρ was true at some previous moment in time. Heretofore: ■ρ is satisfied now if ρ was true at every previous moment in time. Since: ρSψ is satisfied now if ψ is true at some previous moment and ρ is true at every moment after that moment. Zince, or weak since: ρZψ is satisfied now if (ψ is true at some previous moment and ρ is true at every moment after that moment) OR ψ has never happened in the past. Present and future time connectives: Next: ◯ρ is satisfied now if ρ is true at the next moment in time. Sometime: ◇ρ is satisfied now if ρ is true now or at some future moment in time. Always: □ρ is satisfied now if ρ is true now and at every future moment in time. Until: ρUψ is satisfied now if ψ is true at some future moment and ρ is true at every moment prior. Unless: ρWψ is satisfied now if (ψ is true at some future moment and ρ is true at every moment prior) OR ψ never happens in the future.
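As an illustration of the past-time semantics just listed (a minimal sketch, not MetateM's actual execution machinery; the atoms and the history are invented), the strict past-time connectives can be evaluated over a finite history as follows, where history[i] is the set of atoms true at moment i and formulas are checked at moment now:

```python
def weak_last(p, history, now):    # weak last: true at the previous moment,
    return now == 0 or p in history[now - 1]   # or vacuously at the start of time

def strong_last(p, history, now):  # strong last: a previous moment must exist
    return now > 0 and p in history[now - 1]

def was(p, history, now):          # was: true at some strictly earlier moment
    return any(p in history[i] for i in range(now))

def heretofore(p, history, now):   # heretofore: true at every earlier moment
    return all(p in history[i] for i in range(now))

def since(p, q, history, now):     # p S q: q held at some earlier moment k, and
    return any(q in history[k]     # p held at every moment strictly after k
               and all(p in history[j] for j in range(k + 1, now))
               for k in range(now))

def zince(p, q, history, now):     # p Z q: weak since - also true if q never held
    return since(p, q, history, now) or not was(q, history, now)

history = [{"request"}, {"busy"}, {"busy"}, {"grant"}]
print(since("busy", "request", history, now=3))  # True: request at 0, busy at 1, 2
```

A MetateM interpreter would couple such checks to rule firing: whenever a rule's past-time antecedent evaluates to true against the history, its future-time consequent becomes a commitment to be satisfied.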
**Buy on board** Buy on board: In commercial aviation, buy on board (BoB) is a system in which in-flight food or beverages are not included in the ticket price but are purchased on board or ordered in advance as an optional extra during or after the booking process. Some airlines, including almost all low-cost carriers and a handful of flag carriers (e.g. Czech Airlines), have buy-on-board food and beverages as part of their ancillary revenue generation. United States: As the airline market in the United States became deregulated, airlines began to compete by price. Airline ticket prices began to decrease, and airlines began to charge extra for services that had been included in the airfare. Starting in 2003, many United States air carriers began eliminating free meal services in economy classes on North American flights and replacing them with buy on board services. In the 2000s US Airways (now part of American Airlines) briefly charged for soft drinks but then reversed course. By 2009, many US carriers had established buy on board as part of an à la carte pricing movement. Around that year, US carriers began using celebrity-named and brand-name products to make their buy on board offerings generate more revenue. Continental Airlines, the last large United States carrier to offer free meals on all domestic flights, announced in March 2010 that it would begin a buy on board program in fall 2010 and end many of its free meal programs on domestic flights. Jeff Green of Businessweek described the end of Continental's program as an "end of an era." In the United States, passengers increasingly began to bring their own food on board to avoid paying for buy on board. As of 2016, Hawaiian Airlines remains the last U.S. legacy airline to offer free meals on board, but all of its flights are to/from Hawaii. Southwest Airlines is the only mainland U.S. airline without a buy-on-board program as of 2016. Today, all three major U.S. airlines offer free snacks in economy on board their flights, in addition to their buy-on-board menus. Europe: In Europe, the general increase in the number of tourists that fly, and deregulation which enabled low-price carriers, have caused stiffer price competition. Low-cost carriers, such as Ryanair, which charges for all food or drink, have forced traditional airlines to lower their costs. As of 2017 only 6 of the 21 most popular airlines in Europe offered complimentary inflight food and drink. KLM, Lufthansa, Swiss, Austrian, Air France, ITA Airways and TAP Portugal all continued to offer free snacks and beverages on their short-haul flights.
**HD 10180 c** HD 10180 c: HD 10180 c is an exoplanet approximately 130 light-years away in the constellation Hydrus. It was discovered in 2010 using the radial velocity method. With a minimum mass comparable to that of Neptune, it belongs to the class of planets known as hot Neptunes. Dynamical simulations suggest that if the mass gradient were any more than a factor of two, the system would not be stable. HD 10180 c: While planet c does not exist in any mean motion resonances, both planets with adjacent orbits (b and d) share near resonances with c.
**Fluctuating asymmetry** Fluctuating asymmetry: Fluctuating asymmetry (FA) is a form of biological asymmetry, along with anti-symmetry and directional asymmetry. Fluctuating asymmetry refers to small, random deviations away from perfect bilateral symmetry. This deviation from perfection is thought to reflect the genetic and environmental pressures experienced throughout development, with greater pressures resulting in higher levels of asymmetry. Examples of FA in the human body include unequal sizes (asymmetry) of bilateral features in the face and body, such as left and right eyes, ears, wrists, breasts, testicles, and thighs. Fluctuating asymmetry: Research has exposed multiple factors that are associated with FA. As measuring FA can indicate developmental stability, it can also suggest the genetic fitness of an individual. This can further have an effect on mate attraction and sexual selection, as less asymmetry reflects greater developmental stability and subsequent fitness. Human physical health is also associated with FA. For example, young men with greater FA report more medical conditions than those with lower levels of FA. Multiple other factors can be linked to FA, such as intelligence and personality traits. Measurement: Fluctuating asymmetry (FA) can be measured as the mean absolute difference between the left and right sides of a trait: mean FA = mean of |left − right|. Measurement: The closer the mean value is to zero, the lower the level of FA, indicating more symmetrical features. Taking many measurements of multiple traits per individual increases the accuracy in determining that individual's developmental stability. However, these traits must be chosen carefully, as different traits are affected by different selection pressures. This equation can further be used to study the distribution of asymmetries at the population level, to distinguish between traits that show FA, directional asymmetry, and anti-symmetry. The distribution of FA around a mean point of zero suggests that FA is not an adaptive trait, where symmetry is ideal. Directional asymmetry of traits can be distinguished by measurements significantly biased towards traits being larger on either the left or the right side, for example human testicles (where the right is more commonly larger) or handedness (85% of people are right-handed, 15% left-handed). Anti-symmetry can be distinguished by bimodal distributions, due to some adaptive functions. Causes: Fluctuating asymmetry (FA) is often considered to be the product of developmental stress and instability, caused by both genetic and environmental stressors. The notion that FA is a result of genetic and environmental factors is supported by Waddington's notion of canalisation, which implies that FA is a measure of the genome's ability to successfully buffer development to achieve a normal phenotype under imperfect environmental conditions. Various factors causing developmental instability and FA include infections, mutations, and toxins. Causes: Genetic factors Research on twins suggests that there are genetic influences on FA, and increased levels of mutations and perturbations are also linked to greater asymmetry. FA may also result from a lack of genetic immunity to diseases, as those with higher FA show less effective immune responses. This is further supported by evidence showing an association between FA and the number of respiratory infections experienced by an individual, such that those with higher levels of FA experience more infections.
An increased prevalence of parasites and diseases is also seen in individuals with greater levels of FA. However, the research in this field is predominantly correlational, so caution must be taken when inferring causation. For example, rather than a lack of immunity causing FA, FA may weaken the immune responses of an organism, or there may be another factor involved. Causes: There is some speculation that inbreeding contributes towards FA. One study on ants demonstrated that, although inbred individuals show more asymmetry in observed bilateral traits, the differences were not significant. Furthermore, ant colonies created by an inbreeding queen do not show significantly higher FA than those produced by a non-inbreeding queen. Causes: Environmental factors Multiple sources provide information on environmental factors that are correlated with FA. A meta-analysis of related studies suggests that FA is an appropriate marker of environmental stress during development. Some evidence suggests that poverty and lack of food during development may contribute to greater levels of FA. Infectious diseases can also lead to FA, as studies have repeatedly shown that those with higher FA report more infections. Alternatively, this association between levels of FA and infections may be due to a lack of immunity to diseases, as mentioned earlier (see 'Genetic factors'). Fluctuating asymmetry in human males is also seen to positively correlate with levels of oxidative stress. This process occurs when an organism creates an excess of reactive oxygen species (ROS) relative to ROS-neutralising antioxidants. Oxidative stress may mediate the association seen between high FA and infection counts during development. Toxins and poisons are considered to increase FA. Pregnancy sickness is argued to be an adaptation for avoiding toxins during foetal development. Research has reported that when a mother has no sickness, or a sickness that extends beyond week 12 of gestation, the offspring shows higher FA (as demonstrated by measuring thigh circumferences). This suggests that when a mother fails to expel environmental toxins, this creates stress and developmental instability for the foetus, later leading to increased asymmetry in that individual. Greater exposure to pollution may also be a fundamental cause of FA. Research on skull characteristics of Baltic grey seals (Halichoerus grypus) demonstrated that those born after 1960 (marking an increase in environmental pollution) had increased levels of asymmetry. Also, shrews (Crocidura russula) from more polluted areas show higher levels of asymmetry. Radioactive contamination may also increase FA levels, as mice (Apodemus flavicollis) living closer to the failed Chernobyl reactor show greater asymmetry. Developmental stability: Developmental stability is achieved when an organism is able to withstand genetic and environmental stress and display the bilaterally symmetrical traits determined by its developmentally programmed phenotype. To measure an individual's developmental stability, the FA measurements of 10 traits are added together, including ear width, elbows, ankles, wrists, feet, and the lengths of ears and fingers. This is achieved by: (L − R)trait 1 + (L − R)trait 2 + … + (L − R)trait 10. This provides a good overall measure of body FA, as every individual has some features that are not perfectly symmetrical.
Developmental stability: Common environmental pressures leading to lower developmental stability include exposure to toxins, poisons and infectious diseases, low food quality and malnutrition. Genetic pressures include spontaneous new mutations and "bad genes" (genes that once had adaptive functions but are being removed through evolutionary selection). A large fluctuating asymmetry (FA) and a low developmental stability suggest that an organism is unable to develop according to the ideal state of bilateral symmetry. The energy required for bilaterally symmetrical development is extremely high, making fully perfect bilateral symmetry functionally nonexistent in natural organisms. Energy is invested in survival in spite of the genetic and environmental pressures before being invested in bilaterally symmetrical traits. Research has also revealed links between FA and depression, genetic or environmental stress, and measures of mate quality for sexual selection. Health: Susceptibility to diseases Research has linked higher levels of fluctuating asymmetry (FA) to poorer outcomes in some domains of physical health in humans. For example, one study found that individuals with higher levels of FA report a higher number of medical conditions than those with lower levels of FA. However, they did not experience worse outcomes in areas such as systolic blood pressure or cholesterol levels. Higher levels of FA have also been linked to higher body mass index (BMI) in women, and lower BMI in men. Research has shown that both men and women with higher levels of FA, both facial and bodily, report a higher number of respiratory infections and a higher number of days ill, compared to men and women with lower levels of FA. In men, higher levels of FA have been linked to lower levels of physical attractiveness and higher levels of oxidative stress, regardless of smoking or levels of toxin exposure. There is no gender difference in disease susceptibility as a function of body FA. A large-scale review of the human and non-human literature by Møller found that higher levels of fluctuating asymmetry were linked to increased vulnerability to parasites, and also to lower levels of immunity to disease. A large-scale longitudinal study in Britain found that facial FA was not associated with poorer health over the course of childhood, which was interpreted as suggesting smaller effects of FA in Western societies with generally low levels of FA. A review of the relationship between various attractiveness features and health in Western societies produced similar results, finding that symmetry was not related to health in either sex, but was related to attractiveness in males. Health: Health-risk behaviours It has been suggested that individuals with lower levels of FA may engage in more biologically costly behaviours such as recreational drug use and risky body modifications such as piercings and tattoos. These ideas have been proposed in the context of Zahavi's handicap principle, which argues that highly costly behaviours or traits serve as signals of an organism's genetic quality. The relationship between FA and behaviours with high health risks has received mixed support. Individuals with body piercings and tattoos (which increase the risk of blood-borne infections) have been shown to have lower levels of FA, but individuals with lower FA do not engage in any more recreational drug use than those with higher FA levels.
Health: Mental health in humans Higher levels of FA have been linked to higher levels of some mental health difficulties. For instance, it has been shown that, among university students, higher FA is associated with higher levels of schizotypy. Depression scores have been found to be higher in men, but not women, with higher levels of FA. One study by Shackelford and Larsen found that men and women with higher facial asymmetry reported more physiological complaints than those with lower facial asymmetry, and that both men and women with higher asymmetry experienced higher levels of psychological distress overall. For example, men with higher facial asymmetry experienced higher levels of depression compared to men with lower facial asymmetry. Fluctuating asymmetry has also been studied in relation to psychopathy. One study looking at offenders and non-offenders found that, although offenders had higher levels of FA overall, psychopathic offenders had lower levels of FA compared to offenders who did not meet the criteria for psychopathy. Additionally, offenders with the highest levels of psychopathy were found to have similar levels of FA to non-offenders. Health: Other health issues in humans Research has also linked FA to conditions such as lower back pain, although the evidence is mixed. While one study found no notable link between pelvic asymmetry and lower back pain, other studies have found pelvic asymmetry (as well as FA in other traits not directly related to pelvic function) to be higher in patients experiencing lower back pain, and higher levels of FA have also been linked to congenital spinal problems. Studies have also shown increased levels of FA of ear length in individuals with non-syndromic cleft lip and/or cleft palate. Health: Physical fitness in humans In addition to general health and susceptibility to disease, research has also studied the link between FA and physical fitness. Research has found that lower levels of lower-body FA are associated with faster running speeds in Jamaican sprinters, and individuals with greater body asymmetry have been shown to move more asymmetrically while running, although they do not experience higher metabolic costs than more symmetrical individuals. It has also been shown that children with lower levels of lower-body FA have faster sprinting speeds and are more willing to sprint when followed up in adulthood. Health: Health in non-human populations The relationship between FA, health and susceptibility to disease has also been studied in non-human animals. For example, studies have found that higher levels of facial asymmetry are associated with poorer overall health in female rhesus macaques (Macaca mulatta), and that higher FA is also linked to more health issues in chimpanzees (Pan troglodytes). The link between FA and health has also been investigated in non-primates. In three gazelle species (Gazella cuvieri; Gazella dama; Gazella dorcas), for instance, FA has been linked to a range of blood parameters associated with health in mammals, although the specific relevance of these blood parameters for these gazelle species was not examined. It has also been found that, among Iberian red deer (Cervus elaphus hispanicus), higher FA was slightly negatively related to both antler size and overall body mass (traits thought to indicate overall condition).
Antlers more involved in fighting were found to be more symmetrical than those not involved, and antler asymmetry at reproductive age was lower than in development or at post-reproductive age. FA and health outcomes have also been examined within insect populations. For instance, it has been found that Mediterranean field crickets (Gryllus bimaculatus) with higher levels of FA in three hind-limb traits have lower encapsulation rates, but do not differ from low-FA crickets in lytic activity (both are measures of immunocompetence). While research on the relationship between FA and longevity is sparse in humans, some studies using non-human populations have suggested an association between the symmetry of an organism and its lifespan. For instance, it has been found that flies whose wing veins showed more bilateral symmetry live longer than less symmetrical flies. This difference was greatest for male flies. In sexual selection: Mate attraction Symmetry has been shown to affect physical attractiveness. Those with lower levels of fluctuating asymmetry (FA) are often rated as more attractive, and various studies have supported this. The relationship between FA and mate attraction has been studied in both males and females. As FA reflects developmental stability and quality, it has been suggested that people with low FA are perceived as more attractive because low FA signals traits such as health and intelligence. Research has shown that the female partners of men with lower levels of FA experience a higher number of copulatory orgasms, compared to the female partners of males with higher levels of FA. Other studies have also found that the voices of men and women with low fluctuating asymmetry are rated as more attractive, suggesting that voice may be indicative of developmental stability. Research has shown attractiveness ratings of men's scent are negatively correlated with FA, but FA is unrelated to attractiveness ratings for women's scent, and women's preferences for the scent of more symmetric men appear limited to the most fertile phases of the menstrual cycle. However, research has failed to find changes in women's preferences for low FA across the menstrual cycle when assessing pictures of faces, as opposed to scents. Facial symmetry has been positively correlated with higher occurrences of mating. Also, one study used 3-D scans of male and female bodies, and showed videos of these scans to a group of individuals who rated the bodies on attractiveness. It was found that, for both males and females, lower levels of FA were associated with higher attractiveness ratings. It was also found that sex-typical joint configurations were rated as more attractive and linked to lower FA in men, but not women. Men with higher FA have been shown to have higher levels of oxidative stress and lower levels of attractiveness. Research has also provided evidence that FA is linked to extra-pair copulation, as women have been shown to prefer men with lower levels of FA as extra-pair partners. However, the literature is mixed regarding the relationship between attractiveness and FA. For example, in one study, altering images of faces in a way that reduced asymmetry led observers to rate such faces as less, rather than more, attractive. Research by Van Dongen also found FA to be unrelated to attractiveness, physical strength and level of masculinity in both men and women.
In sexual selection: Sexual selection in non-human animals Many non-human animals have been shown to be able to distinguish between potential partners based upon levels of FA. As with humans, lower levels of FA are seen in the most reproductively successful members of a species. For instance, FA of male forewing length seems to play an important role in successful mating for many insect species, such as dark-winged damselflies and Japanese scorpionflies. In the dark-winged damselfly (Calopteryx maculata), successfully mating males showed significantly lower levels of FA in their forewings than unsuccessful males, while for Japanese scorpionflies, FA levels are a good predictor of the outcome of fights between males, in that more symmetrical males won significantly more fights. Other animals show similar patterns; for example, in many butterfly species, males with lower levels of FA tended to live longer and flew more actively, allowing them greater reproductive success. Also, female swallows have been shown to prefer longer and more symmetrical tails as a cue for mate choice; the males with longer and more symmetrical tails therefore show higher levels of reproductive success with more attractive females. In red deer, sexual selection has affected antler development, in that larger and more symmetrical antlers are favoured in males at prime mating age. However, some evidence for the effects of sexual selection on FA levels has been inconsistent, suggesting that the relationship between FA and sexual selection may be more complex than originally thought. For instance, in the lekking black grouse and red junglefowl, no correlations were found between FA and mating success. Furthermore, when paradise whydahs' tails were manipulated to be more or less symmetrical, females showed no preference for more symmetrical tails (though they did show a preference for longer tails). Other associated factors: Intelligence Through research, fluctuating asymmetry (FA) has been found to correlate negatively with measurements of human traits such as working memory and intelligence, such that individuals showing greater asymmetry have lower IQ scores. As FA is linked with both intelligence and facial attractiveness, it is possible that our perceptions of attractiveness have evolved based upon developmental quality, which includes traits such as intelligence and health. However, some literature shows no such correlations between FA and intelligence. A meta-analysis of the research covering this topic demonstrated that whilst published studies largely report negative correlations, unpublished studies often find no association between FA and intelligence. Other associated factors: Personality Research into FA suggests that there may be some correlation with specific personality factors, in particular the Big Five personality traits. From a general view, one would expect someone who is more symmetrical (usually meaning greater attractiveness) to be high on agreeableness, conscientiousness, extraversion and openness, and low on neuroticism. One of the most consistent findings reported is that low FA is positively associated with measures of extraversion, suggesting that more symmetrical people tend to be more extraverted than less symmetrical individuals, particularly for symmetry within the face. A correlation has also been reported between FA and human social dominance.
However, research has proven less consistent for other personality factors, with some studies finding weak correlations between low FA and conscientiousness and openness to experience, and others finding no significant differences between those with high or low FA. Other associated factors: Antisocial behaviours Some studies suggest a link between FA and aggression, but the evidence is mixed. In humans, criminal offenders show greater FA than non-offenders. However, other studies report that human males with higher FA show less physical aggression and less anger. Females show no association between FA and physical aggression, but some research has suggested that older female adolescents with higher facial FA are less hostile. The type of aggression being studied may account for the mixed evidence seen here. For example, one study found that females with higher FA demonstrated higher levels of reactive aggression in response to high levels of provocation, whereas high-FA males showed more reactive aggression under low levels of provocation. Research is also mixed in other animals. In Japanese scorpionflies (Panorpa nipponensis and Panorpa ochraceopennis), FA differences between members of the same sex competing for food predict the outcome of contests and aggression better than body size or ownership of food. Furthermore, cannibalistic laying hens (Gallus gallus domesticus) demonstrate more asymmetry than normal hens. However, this link between FA and aggression in hens is questionable, as victimised hens also showed greater asymmetry. Furthermore, hen eggs prenatally injected with excess serotonin (5-HT) produced hens that exhibited more FA at 18 weeks of age but displayed less aggressive behaviour. It is suggested that stress introduced during early embryonic stages via certain factors (such as excess serotonin) may create developmental instability, causing phenotypic and behavioural variations (such as increased or decreased aggression). Other associated factors: Aging In old age, facial symmetry has been associated with better cognitive aging, as lower levels of FA have been associated with higher intelligence and more efficient information processing in older men. However, it has been found that risk of mortality cannot be predicted accurately from levels of FA in photographs of older adults. Other factors: FA has also been shown to predict atypical asymmetry of the brain. Research has also shown that growth rates after birth positively correlate with FA. For example, increased FA has been found in people who were obese.
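To make the indices in the Measurement and Developmental stability sections concrete, here is a minimal Python sketch; the trait names and numbers are hypothetical. Absolute differences are used throughout, since summing signed (L − R) terms would let opposite-sided asymmetries cancel out.

```python
# Minimal sketch of the FA measures described above: per-trait mean |L - R|
# across a sample, and a per-individual composite summed over measured traits
# (ear width, wrists, ankles, etc.). All data below are invented.

def mean_fa(pairs):
    """Mean absolute left-right difference for one trait across individuals."""
    return sum(abs(left - right) for left, right in pairs) / len(pairs)

def composite_fa(individual):
    """Sum of |L - R| over one individual's measured traits."""
    return sum(abs(left - right) for left, right in individual.values())

ear_width_mm = [(31.0, 31.4), (29.8, 29.7), (30.5, 30.5)]
print(round(mean_fa(ear_width_mm), 3))  # 0.167 -- close to zero, low FA

person = {"ear_width": (31.0, 31.4), "wrist": (52.1, 51.8), "ankle": (70.2, 70.2)}
print(round(composite_fa(person), 3))   # 0.7
```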
**SULT1A2** SULT1A2: Sulfotransferase 1A2 is an enzyme that in humans is encoded by the SULT1A2 gene. Sulfotransferase enzymes catalyze the sulfate conjugation of many hormones, neurotransmitters, drugs, and xenobiotic compounds. These cytosolic enzymes are different in their tissue distributions and substrate specificities. The gene structure (number and length of exons) is similar among family members. This gene encodes one of two phenol sulfotransferases with thermostable enzyme activity. Two alternatively spliced variants that encode the same protein have been described.
**Illicit minor** Illicit minor: Illicit minor is a formal fallacy committed in a categorical syllogism that is invalid because its minor term is undistributed in the minor premise but distributed in the conclusion. This fallacy has the following argument form: All A are B. All A are C. Therefore, all C are B. Example: All cats are felines. All cats are mammals. Illicit minor: Therefore, all mammals are felines. The minor term here is mammal, which is not distributed in the minor premise "All cats are mammals", because this premise only asserts something about some mammals (namely, that they are cats). However, in the conclusion "All mammals are felines", mammal is distributed (the conclusion speaks of all mammals being felines). The conclusion is shown to be false by any mammal that is not a feline; for example, a dog. Example: Pie is good. Illicit minor: Pie is unhealthy. Thus, all good things are unhealthy.
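The invalidity of this form can be checked mechanically by modelling categories as sets, as in the following minimal Python sketch (the set members are illustrative).

```python
# "All X are Y" modelled as the subset test X <= Y on Python sets.

cats = {"tabby", "siamese"}
felines = {"tabby", "siamese", "lion"}
mammals = {"tabby", "siamese", "dog"}

assert cats <= felines       # major premise: all cats are felines
assert cats <= mammals       # minor premise: all cats are mammals
print(mammals <= felines)    # conclusion "all mammals are felines": False
print(mammals - felines)     # {'dog'}: the counterexample named in the article
```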
**Blown idiophone** Blown idiophone: A blown idiophone is one of the categories of musical instruments found in the Hornbostel-Sachs system of musical instrument classification. These idiophones produce sound when stimulated by moving air; for example, the aeolsklavier features blown sticks, while the piano chanteur features blown plaques. The group is divided into the following two sub-categories (see: List of idiophones by Hornbostel–Sachs number): Blown sticks (141): 141.1 Individual blown sticks; 141.2 Sets of blown sticks (e.g. the aeolsklavier). Blown plaques (142): 142.1 Individual blown plaques; 142.2 Sets of blown plaques.
**60S ribosomal protein L19** 60S ribosomal protein L19: 60S ribosomal protein L19 is a protein that in humans is encoded by the RPL19 gene. Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 60S subunit. The protein belongs to the L19E family of ribosomal proteins. It is located in the cytoplasm. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome.
**Domain inventory pattern** Domain inventory pattern: Domain Inventory is a design pattern, applied within the service-orientation design paradigm, whose application enables creating pools of services corresponding to different segments of the enterprise, instead of a single enterprise-wide pool of services. This design pattern is usually applied when it is not possible to create a single inventory of services for the whole of the enterprise following the same design standards across its different segments. The Domain Inventory design pattern by Thomas Erl asks, "How can services be delivered to maximize recomposition when enterprise-wide standardization is not possible?" Rationale: As per the guidelines of the Enterprise Inventory design pattern, it is beneficial to create a single inventory that spans the whole of the enterprise, as it results in services that are more standardized, interoperable and easily composable. However, there may be situations when a single enterprise-wide inventory cannot be created. This could be for a number of reasons, including: management issues, e.g. who will own the services and who will be responsible for their maintenance; the organization being spread across different geographic locations. Rationale: different segments of the organization being supported by different IT departments using different technologies; some segments of the organization not being ready for a transition towards service-orientation; a pilot project needing to be undertaken just to ascertain the effectiveness of SOA; the difficulty, under the guidelines of the Standardized Service Contract principle, of creating standardized data models across the enterprise. Rationale: cultural issues, e.g. IT managers unwilling to give up the control they have over the way different projects are developed. Considering the above-mentioned factors, it is more practical to build smaller groups of services, whereby the scope of each group relates to a well-defined domain boundary within the enterprise. This is exactly what is advocated by the Domain Inventory design pattern. By limiting the scope of a service inventory, it becomes easier to develop and manage a group of related services. Usage: In order to apply this design pattern, a well-defined boundary needs to be established inside the enterprise that would usually correspond to a particular business area of the enterprise, for example the sales department or the customer services department. It is important that any domains created relate to the business domains, as this helps to keep the service inventory in sync with the business models as they evolve over time. Having established a well-defined boundary, the next step is to create a set of design standards that regulate the extent to which the service-orientation design principles are applied, along with any other related conventions, rules and restrictions, e.g. how to create the data models or how to name the service functions. With these design standards in place, a standardized set of services can be developed that is specifically attuned to work within the limitations of the respective organizational segment. As the services are standardized, they can be easily composed without the need for any bridging mechanisms.
Considerations: If the established boundary of a domain does not correspond to an actual business domain, then it may prove difficult to maintain such an inventory of services because of the managerial cross-over. Each domain inventory corresponds to a specific set of standards that may differ from those of the other domain inventories. As a result, when it comes to composing a solution out of services that belong to different domain inventories, some sort of transformation mechanism may be required in order for messages to be sent between the different service inventories. For example, services within domain inventory A may be using XML schemas that are less granular than the schemas used by the services belonging to domain inventory B. Design patterns like Data Model Transformation, Data Format Transformation and Protocol Bridging can be applied in order to address the different transformation requirements (see the sketch below). Another important factor is that, as different domain inventories are built by different project teams, there is a higher chance of developing services with duplicate functionality, as each team is unaware of the requirements of the other business processes being automated.
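As an illustration of the transformation requirement described above, here is a minimal Python sketch of a data model transformation between two hypothetical domain inventories, where inventory A uses a coarse-grained customer model and inventory B a finer-grained one. All field names are invented for the example.

```python
# Sketch of a data model transformation between domain inventories A and B.
# Field names are hypothetical; real deployments would typically transform
# XML or JSON messages at an intermediary rather than in application code.

def transform_a_to_b(message_a: dict) -> dict:
    """Map inventory A's customer model onto inventory B's customer model."""
    first, _, last = message_a["customerName"].partition(" ")
    return {
        "givenName": first,
        "familyName": last,
        "accountId": message_a["customerId"],
    }

msg_from_a = {"customerId": "C-1001", "customerName": "Ada Lovelace"}
print(transform_a_to_b(msg_from_a))
# {'givenName': 'Ada', 'familyName': 'Lovelace', 'accountId': 'C-1001'}
```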
**Hughes–Ingold symbol** Hughes–Ingold symbol: A Hughes–Ingold symbol describes various details of the reaction mechanism and overall result of a chemical reaction. For example, an SN2 reaction is a substitution reaction ("S") by a nucleophilic process ("N") that is bimolecular ("2" molecular entities involved) in its rate-determining step. By contrast, an E2 reaction is an elimination reaction, an SE2 reaction involves electrophilic substitution, and an SN1 reaction is unimolecular. The system is named for British chemists Edward D. Hughes and Christopher Kelk Ingold.
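As an illustration of how the symbols encode reaction type, process, and molecularity, the following Python sketch decomposes the simple forms named above. The parsing rules and function name are illustrative assumptions, covering only symbols of the SN1/SN2/SE2/E2 shape.

```python
# Decomposing simple Hughes-Ingold symbols into their three components.
import re

KIND = {"S": "substitution", "E": "elimination"}
PROCESS = {"N": "nucleophilic", "E": "electrophilic"}

def parse_hughes_ingold(symbol: str) -> dict:
    m = re.fullmatch(r"([SE])([NE]?)([12])", symbol)
    if not m:
        raise ValueError(f"unrecognised symbol: {symbol}")
    kind, process, order = m.groups()
    return {
        "reaction": KIND[kind],
        "process": PROCESS[process] if process else None,
        # molecular entities involved in the rate-determining step
        "molecularity": int(order),
    }

print(parse_hughes_ingold("SN2"))
# {'reaction': 'substitution', 'process': 'nucleophilic', 'molecularity': 2}
print(parse_hughes_ingold("E2"))
# {'reaction': 'elimination', 'process': None, 'molecularity': 2}
```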
**SARS-CoV-2 Kappa variant** SARS-CoV-2 Kappa variant: The Kappa variant is a variant of SARS-CoV-2, the virus that causes COVID-19. It is one of the three sublineages of Pango lineage B.1.617. The SARS-CoV-2 Kappa variant is also known as lineage B.1.617.1 and was first detected in India in December 2020. By the end of March 2021, the Kappa sub-variant accounted for more than half of the sequences being submitted from India. On 1 April 2021, it was designated a Variant Under Investigation (VUI-21APR-01) by Public Health England. Mutations: The Kappa variant has three notable alterations in its amino-acid sequences, all of which are in the virus's spike protein code. The three notable substitutions are L452R, E484Q and P681R. L452R is the substitution at position 452, a leucine-to-arginine substitution. This exchange confers stronger affinity of the spike protein for the ACE2 receptor, along with decreased recognition capability of the immune system. E484Q is the substitution at position 484, a glutamic acid-to-glutamine substitution. This alteration confers on the variant stronger binding potential to angiotensin-converting enzyme 2, as well as a better ability to evade hosts' immune systems. Mutations: P681R is the substitution at position 681, a proline-to-arginine substitution. The European Centre for Disease Prevention and Control (ECDC) also lists a fourth spike mutation of interest: D614G, a substitution at position 614 of aspartic acid by glycine. Other variants which carry the D614G mutation include the Beta and Delta variants, and the mutation is associated with increased infectivity. The two other mutations, which are found closer to either end of the spike region, are T95I and Q1071H. History: International detection The Kappa variant was first identified in India in December 2020. By 11 May 2021, the WHO Weekly Epidemiological Update had reported detections of the subvariant in 34 countries; by 25 May 2021, the number had risen to 41. As of 19 May 2021, the United Kingdom had detected a total of 418 confirmed cases of the SARS-CoV-2 Kappa variant. On 6 June 2021, a cluster of 60 cases identified in the Australian city of Melbourne was linked to the Kappa variant. According to GISAID, as of July 2021 India had submitted more genetic samples of the Kappa variant than any other country. History: Community transmission A Public Health England technical briefing paper of 22 April 2021 reported that 119 cases of the sub-variant had been identified in England, with a concentration of cases in the London area and the regions of the North West and East of England. Of the 119 cases, 94 had an established link to travel and 22 were still under investigation, but the remaining 3 cases were identified as not having any known link to travel. On 2 June, the Guardian reported that at least 1 in 10 of the cases in the outbreak in the Australian state of Victoria were due to contact with strangers and that community transmission was involved in clusters of the Kappa variant. However, infectious disease expert Professor Greg Dore said that the Kappa variant was behaving "the same as we've seen before" in relation to other variants in Australia. Vaccine efficacy: Vaccines are effective against the Kappa variant, albeit to a lower extent than against the original strain.
History: A study conducted by Oxford University in June 2021 reported that the Oxford-AstraZeneca vaccine and the Pfizer-BioNTech vaccine were effective against the Kappa and Delta variants, suggesting that the current vaccines offer protection against these variants, although with slight reductions in neutralization. Covaxin was also found to be effective against the Kappa variant (B.1.617.1), as for other variants. The Moderna COVID-19 vaccine was likewise found to be effective against the Kappa variant, albeit with a 3.3-3.4-fold reduction in neutralization.
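The substitution notation used throughout this article (reference amino acid, spike position, replacement amino acid) can be unpacked mechanically. The sketch below is illustrative only; the one-letter amino-acid codes are standard, but the function and its output format are assumptions, not a standard bioinformatics API.

```python
# Reading spike-substitution codes such as L452R (leucine at position 452
# replaced by arginine). Only the amino acids mentioned above are mapped.
import re

AMINO = {"L": "leucine", "R": "arginine", "E": "glutamic acid",
         "Q": "glutamine", "P": "proline", "D": "aspartic acid",
         "G": "glycine", "T": "threonine", "I": "isoleucine", "H": "histidine"}

def parse_substitution(code: str) -> str:
    ref, pos, alt = re.fullmatch(r"([A-Z])(\d+)([A-Z])", code).groups()
    return f"{AMINO[ref]}-to-{AMINO[alt]} substitution at position {pos}"

for mutation in ["L452R", "E484Q", "P681R", "D614G"]:
    print(mutation, "->", parse_substitution(mutation))
# e.g. L452R -> leucine-to-arginine substitution at position 452
```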
**Unary operation** Unary operation: In mathematics, a unary operation is an operation with only one operand, i.e. a single input. This is in contrast to binary operations, which use two operands. An example is any function f : A → A, where A is a set; the function f is a unary operation on A. Common notations are prefix notation (e.g. ¬, −), postfix notation (e.g. factorial n!), functional notation (e.g. sin x or sin(x)), and superscripts (e.g. transpose AT). Other notations exist as well; for example, in the case of the square root, a horizontal bar extending the square root sign over the argument can indicate the extent of the argument. Examples: Absolute value. Obtaining the absolute value of a number is a unary operation. This function is defined as |n| = n if n ≥ 0, and |n| = −n if n < 0, where |n| is the absolute value of n. Negation. This is used to find the negative value of a single number; it is technically not a unary operation, as −n is just a short form of 0 − n. Here are some examples: −(3) = −3 and −(−3) = 3. Unary negative and positive. As unary operations have only one operand, they are evaluated before other operations containing them. Here is an example using negation: 3 − −2. Here, the first '−' represents the binary subtraction operation, while the second '−' represents the unary negation of the 2 (or '−2' could be taken to mean the integer −2). Therefore, the expression is equal to 3 − (−2) = 5. Technically there is also a unary + operation, but it is not needed, since an unsigned value is assumed to be positive: +2 = 2. The unary + operation does not change the sign of a negative operand: +(−2) = −2. In this case a unary negation is needed to change the sign: −(−2) = +2. Trigonometry. In trigonometry the trigonometric functions, such as sin, cos, and tan, can be seen as unary operations, because it is possible to provide only one term as input for these functions and retrieve a result; by contrast, binary operations, such as addition, require two different terms to compute a result. Examples: Examples from programming languages.
JavaScript: the unary operators are increment (++x, x++), decrement (--x, x--), positive (+x), negative (-x), ones' complement (~x) and logical negation (!x).
C family of languages: the unary operators are increment (++x, x++), decrement (--x, x--), address (&x), indirection (*x), positive (+x), negative (-x), ones' complement (~x), logical negation (!x), sizeof (sizeof x, sizeof(type-name)) and cast ((type-name) cast-expression).
Unix shell (Bash): '$' is a unary operator when used for parameter expansion, replacing the name of a variable by its (sometimes modified) value, e.g. simple expansion $x or complex expansion ${#x}.
PowerShell: increment (++$x, $x++), decrement (--$x, $x--), positive (+$x), negative (-$x), logical negation (!$x), invoke in current scope (.$x), invoke in new scope (&$x), cast ([type-name] cast-expression), cast (+$x) and array (,$array).
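Python, though not listed above, has the same unary operators and the same evaluation order, and can serve as a quick runnable illustration of the points made in this article.

```python
# Python's unary operations. Unary minus binds more tightly than binary
# subtraction, so 3 - -2 parses as 3 - (-2), as discussed above.

n = -3
print(abs(n))    # absolute value: 3
print(-n)        # unary negation: 3
print(+n)        # unary plus leaves the sign unchanged: -3
print(~5)        # bitwise (ones') complement: -6
print(not True)  # logical negation: False
print(3 - -2)    # binary minus applied to a unary-negated operand: 5

import math
print(math.factorial(4))  # postfix n! expressed as a function: 24
print(math.sin(0.0))      # a trigonometric function as a unary operation: 0.0
```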
**Pseudo-Cushing's syndrome** Pseudo-Cushing's syndrome: Pseudo-Cushing's syndrome or non-neoplastic hypercortisolism is a medical condition in which patients display the signs, symptoms, and abnormal cortisol levels seen in Cushing's syndrome. However, pseudo-Cushing's syndrome is not caused by a problem with the hypothalamic-pituitary-adrenal axis, as Cushing's is; it is mainly an idiopathic condition, although a cushingoid appearance is sometimes linked to excessive alcohol consumption. Elevated levels of total cortisol can also be due to estrogen found in oral contraceptive pills that contain a mixture of estrogen and progesterone. Estrogen can cause an increase in cortisol-binding globulin and thereby cause the total cortisol level to be elevated. Diagnosis: Levels of cortisol and ACTH are both elevated; 24-hour urinary cortisol levels are elevated; the dexamethasone suppression test and late night salivary cortisol (LNSC) are used; loss of diurnal variation in cortisol levels is seen only in true Cushing's syndrome; high mean corpuscular volume and gamma-glutamyl transferase may be clues to alcoholism; polycystic ovarian syndrome (PCOS) should be ruled out, as it may have similar symptoms. Differential diagnosis: Differentiation from Cushing's is difficult, but several tools exist to aid in the diagnosis. Alternative causes of Cushing's should be excluded with imaging of the lungs, adrenal glands, and pituitary gland; these often appear normal in Cushing's. In the alcoholic patient with pseudo-Cushing's, admission to hospital (and avoidance of alcohol) will result in normal midnight cortisol levels within five days, excluding Cushing's. Another cause of Cushing's syndrome is adrenocortical carcinoma, a rare form of cancer with an incidence of 1-2 per million people annually. About 60% of these cancers produce hormones, with cortisol being the most frequent. Most patients present in an advanced disease state and the outcome is dismal. Prognosis: Blood results and symptoms normalise rapidly on cessation of drinking or remission of depression.
**1 µm process** 1 µm process: The 1 μm process (1 micrometre process) is a level of MOSFET semiconductor process technology that was commercialized around the 1984-1986 timeframe by leading semiconductor companies like NTT, NEC, Intel and IBM. It was the first process at which CMOS was common (as opposed to NMOS). The earliest MOSFET with a 1 μm NMOS channel length was fabricated by a research team led by Robert H. Dennard, Hwa-Nien Yu and F.H. Gaensslen at the IBM T.J. Watson Research Center in 1974. Products featuring the 1.0 μm manufacturing process: NTT introduced the 1 μm process for its DRAM memory chips, including its 64k chip in 1979 and 256k chip in 1980. NEC's 1 Mbit DRAM memory chip was manufactured with the 1 μm process in 1984. The Intel 80386 CPU, launched in 1985, was manufactured using this process; Intel used the process in its CHMOS III-E technology.
**Barr body** Barr body: A Barr body (named after its discoverer, Murray Barr) or X-chromatin is an inactive X chromosome. In species with XY sex-determination (including humans), females typically have two X chromosomes, and one is rendered inactive in a process called lyonization. Errors in chromosome separation can also result in male and female individuals with extra X chromosomes. The Lyon hypothesis states that in cells with multiple X chromosomes, all but one are inactivated early in mammalian embryonic development. The X chromosomes that become inactivated are chosen randomly, except in marsupials and in some extra-embryonic tissues of some placental mammals, in which the X chromosome from the sperm is always deactivated. In humans with euploidy, a genotypical female (46,XX karyotype) has one Barr body per somatic cell nucleus, while a genotypical male (46,XY) has none. The Barr body can be seen in the interphase nucleus as a darkly staining small mass in contact with the nuclear membrane. Barr bodies can be seen in neutrophils at the rim of the nucleus. Barr body: In humans with more than one X chromosome, the number of Barr bodies visible at interphase is always one fewer than the total number of X chromosomes. For example, people with Klinefelter syndrome (47,XXY) have a single Barr body, and people with a 47,XXX karyotype have two Barr bodies. Mechanism: Someone with two X chromosomes (such as most human females) has only one Barr body per somatic cell, while someone with one X chromosome (such as most human males) has none. Mechanism: Mammalian X-chromosome inactivation is initiated from the X inactivation centre, or Xic, usually found near the centromere. The centre contains twelve genes, seven of which code for proteins and five for untranslated RNAs, of which only two are known to play an active role in the X inactivation process, Xist and Tsix. The centre also appears to be important in chromosome counting: ensuring that random inactivation only takes place when two or more X chromosomes are present. The provision of an extra artificial Xic in early embryogenesis can induce inactivation of the single X found in male cells. The roles of Xist and Tsix appear to be antagonistic. The loss of Tsix expression on the future inactive X chromosome results in an increase in levels of Xist around the Xic. Meanwhile, on the future active X, Tsix levels are maintained; thus the levels of Xist remain low. This shift allows Xist to begin coating the future inactive chromosome, spreading out from the Xic. In non-random inactivation this choice appears to be fixed, and current evidence suggests that the maternally inherited gene may be imprinted. Variations in Xi frequency have been reported with age, pregnancy, the use of oral contraceptives, fluctuations in the menstrual cycle and neoplasia. It is thought that this constitutes the mechanism of choice, and allows downstream processes to establish the compact state of the Barr body. These changes include histone modifications, such as histone H3 methylation (i.e. H3K27me3 by PRC2, which is recruited by Xist) and histone H2A ubiquitination, as well as direct modification of the DNA itself, via the methylation of CpG sites. These changes help inactivate gene expression on the inactive X chromosome and bring about its compaction to form the Barr body. Mechanism: Reactivation of a Barr body is also possible, and has been seen in breast cancer patients.
One study showed that the frequency of Barr bodies in breast carcinoma was significantly lower than in healthy controls, indicating reactivation of these once-inactivated X chromosomes.
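The counting rule described above (one Barr body fewer than the number of X chromosomes) is simple enough to state as code; the following Python sketch is purely illustrative.

```python
# Barr bodies per somatic cell nucleus: one fewer than the X-chromosome count.

def barr_bodies(sex_chromosomes: str) -> int:
    """Count Barr bodies from a sex-chromosome string such as 'XX' or 'XXY'."""
    x_count = sex_chromosomes.upper().count("X")
    return max(x_count - 1, 0)

for k in ["XY", "XX", "XXY", "XXX"]:  # male, female, Klinefelter, 47,XXX
    print(k, barr_bodies(k))          # 0, 1, 1, 2
```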
**Fence** Fence: A fence is a structure that encloses an area, typically outdoors, and is usually constructed from posts that are connected by boards, wire, rails or netting. A fence differs from a wall in not having a solid foundation along its whole length. Alternatives to fencing include a ditch (sometimes filled with water, forming a moat). Types: By function Agricultural fencing, to keep livestock in and/or predators out Blast fence, a safety device that redirects the high-energy exhaust from a jet engine Sound barrier or acoustic fencing, to reduce noise pollution Crowd control barrier Privacy fencing, to provide privacy and security Temporary fencing, to provide safety, security, and to direct movement wherever temporary access control is required, especially on building and construction sites Perimeter fencing, to prevent trespassing or theft and/or to keep children and pets from wandering away. Types: Decorative fencing, to enhance the appearance of a property, garden or other landscaping Boundary fencing, to demarcate a piece of real property Newt fencing, amphibian fencing, drift fencing or turtle fence, a low fence of plastic sheeting or similar materials to restrict the movement of amphibians or reptiles Pest-exclusion fence Pet fence, an underground fence for pet containment Pool fence Snow fence School fence. A balustrade or railing is a fence to prevent people from falling over an edge, most commonly found on a stairway, landing, or balcony. Railing systems and balustrades are also used along roofs, bridges, cliffs, pits, and bodies of water. Types: Fences are also used to limit intrusion attempts into a property by malicious intruders. In support of these barriers, sophisticated technologies can be applied to the fence itself to strengthen the defence of the territory and reduce the risk. The elements that reinforce perimeter protection are: detectors; a peripheral alarm control unit; means of deterrence; means for communicating information remotely; and a remote alarm receiving unit. By construction Brushwood fencing, a fence made using wires on either side of brushwood, to compact the brushwood material together. Types: Chain-link fencing, wire fencing made of wires woven together Close boarded fencing, a strong and robust fence constructed from mortised posts, arris rails and vertical feather edge boards Expanding fence or trellis, a folding structure made from wood or metal on the scissor-like pantograph principle, sometimes only as a temporary barrier Ha-ha (or sunken fence) Hedge, including: Cactus fence Hedgerows of intertwined, living shrubs (constructed by hedge laying) Live fencing, the use of live woody species for fences Turf mounds in semiarid grasslands such as the western United States or Russian steppes Hurdle fencing, made from moveable sections Pale fence, or "post-and-rail" fence, composed of pales (vertical posts embedded in the ground, with their exposed end typically tapered to shed water and prevent rot from moisture entering end-grain wood) joined by horizontal rails, characteristically in two or three courses. Types: Palisade, or stakewall, made of vertical pales placed side by side with one end embedded in the ground and the other typically sharpened, to provide protection; characteristically two courses of waler are added on the interior side to reinforce the wall.
Picket fences, generally a waist-high, painted, partially decorative fence Roundpole fences, similar to post-and-rail fencing but with more closely spaced rails, typical of Scandinavia and other areas rich in raw timber Slate fence, a type of palisade made of vertical slabs of slate wired together, commonly used in parts of Wales Split-rail fence, made of timber, often laid in a zig-zag pattern, particularly in newly settled parts of the United States and Canada Vaccary fence (named from Latin vacca, cow), for restraining cattle, made of thin slabs of stone placed upright, found in various places in the north of the UK where suitable stone is available Vinyl fencing Solid fences, including: Dry-stone wall or rock fence, often agricultural Stockade fence, a solid fence composed of contiguous or very closely spaced round or half-round posts, or stakes, typically pointed at the top; a scaled-down version of a palisade wall made of logs, most commonly used for privacy Wattle fencing, of split branches woven between stakes. Types: Wire fences Smooth wire fence Barbed wire fence Electric fence Woven wire fencing, many designs, from fine chicken wire to heavy mesh "sheep fence" or "ring fence" Welded wire mesh fence Wood-panel fencing, whereby finished wood planks are arranged to make large solid panels, which are then suspended between posts, making an almost completely solid wall-like barrier, usually as a decorative perimeter. Types: Wrought iron fencing, also known as ornamental iron Legal issues: In most developed areas the use of fencing is regulated, variously in commercial, residential, and agricultural areas. Height, material, setback, and aesthetic issues are among the considerations subject to regulation. Required use The following types of areas or facilities often are required by law to be fenced in, for safety and security reasons: Facilities with open high-voltage equipment (transformer stations, mast radiators); transformer stations are usually surrounded with barbed-wire fences, while around mast radiators, wooden fences are used to avoid the problem of eddy currents Railway lines (in the United Kingdom) Fixed machinery with dangerous mobile parts (for example at merry-go-rounds in amusement parks) Explosive factories and quarry stores Most industrial plants Airfields and airports Military areas Prisons Construction sites Zoos and wildlife parks Pastures containing male breeding animals, notably bulls and stallions. Legal issues: Open-air areas that charge an entry fee Amusement equipment which may pose danger for passers-by Swimming pools and spas History Servitudes are legal arrangements of land use arising out of private agreements. Under the feudal system, most land in England was cultivated in common fields, where peasants were allocated strips of arable land that were used to support the needs of the local village or manor. By the sixteenth century the growth of population and prosperity provided incentives for landowners to use their land in more profitable ways, dispossessing the peasantry. Common fields were aggregated and enclosed by large and enterprising farmers, either through negotiation among one another or by lease from the landlord, to maximize the productivity of the available land and contain livestock. Fences redefined the means by which land is used, resulting in the modern law of servitudes. Legal issues: In the United States, the earliest settlers claimed land by simply fencing it in.
Later, as the American government formed, unsettled land became technically owned by the government, and programs to register land ownership developed, usually making raw land available at low prices or for free if the owner improved the property, including by the construction of fences. However, the remaining vast tracts of unsettled land were often used as a commons or, in the American West, "open range". As habitat degradation due to overgrazing developed and a tragedy-of-the-commons situation arose, common areas began either to be allocated to individual landowners via mechanisms such as the Homestead Act and Desert Land Act and fenced in or, if kept in public hands, leased to individual users for limited purposes, with fences built to separate tracts of public and private land. Legal issues: United Kingdom Generally Ownership of a fence on a boundary varies. The last relevant original title deed(s) and a completed seller's property information form may document which side has to put up and has installed any fence respectively: the first using "T" marks/symbols (the side with the "T" denotes the owner); the latter by a ticked box to the best of the last owner's belief, with no duty, as the conventionally agreed conveyancing process stresses, to make any detailed, protracted enquiry. Commonly the mesh or panelling is in mid-position. Otherwise it tends to be on the non-owner's side, so the fence owner can access the posts when repairs are needed, but this is not a legal requirement. Where estate planners wish to entrench privacy, a close-boarded fence or equivalent well-maintained hedge of a minimum height may be stipulated by deed. Beyond a standard height, planning permission is necessary. Legal issues: The hedge and ditch ownership presumption Where a rural fence or hedge has (or in some cases had) an adjacent ditch, the ditch is normally in the same ownership as the hedge or fence, with the ownership boundary being the edge of the ditch furthest from the fence or hedge. The principle of this rule is that an owner digging a boundary ditch will normally dig it up to the very edge of their land, and must then pile the spoil on their own side of the ditch to avoid trespassing on their neighbour. They may then erect a fence or hedge on the spoil, leaving the ditch on its far side. Exceptions exist in law, for example where a plot of land derives from subdivision of a larger one along the centre line of a previously existing ditch or other feature, particularly where reinforced by historic parcel numbers with acreages beneath (which were used to tally up a total for administrative units, not to confirm the actual size of holdings), a rare instance where Ordnance Survey maps often provide more than circumstantial evidence, namely as to which feature is to be considered the boundary. Legal issues: Fencing of livestock On private land in the United Kingdom, it is the landowner's responsibility to fence their livestock in. Conversely, for common land, it is the surrounding landowners' duty to fence the common's livestock out, as in large parts of the New Forest. Large commons with roaming livestock have been greatly reduced by 18th- and 19th-century Acts for the enclosure of commons covering most local units, with most remaining such land in the UK's National Parks. Legal issues: United States Distinctly different land ownership and fencing patterns arose in the eastern and western United States.
Original fence laws on the east coast were based on the British common law system, and a rapidly increasing population soon resulted in laws requiring livestock to be fenced in. In the west, land ownership patterns and policies reflected a strong influence of Spanish law and tradition, and the vast land area involved made extensive fencing impractical until mandated by a growing population and conflicts between landowners. The "open range" tradition of requiring landowners to fence out unwanted livestock was dominant in most of the rural west until very late in the 20th century, and even today a few isolated regions of the west still have open range statutes on the books. More recently, fences are generally constructed on the surveyed property line as precisely as possible. Today, across the nation, each state is free to develop its own laws regarding fences. In many cases, for both rural and urban property owners, the laws were designed to require adjacent landowners to share the responsibility for maintaining a common boundary fenceline. Today, however, only 22 states have retained that provision.
Legal issues: Some U.S. states, including Texas, Illinois, Missouri, and North Carolina, have enacted laws establishing that purple paint markings on fences (or trees) are the legal equivalent of "No Trespassing" signs. The laws are meant to spare landowners, particularly in rural areas, from having to continually replace printed signs that often end up being stolen or obliterated by the elements.
Cultural value of fences: The practical value of fences and the metaphorical significance of the fence, both positive and negative, have been used extensively throughout Western culture. A few examples include:
"Good fences make good neighbors." – a proverb quoted by Robert Frost in the poem "Mending Wall"
"A good neighbour is a fellow who smiles at you over the back fence, but doesn't climb over it." – Arthur Baer
"There is something about jumping a horse over a fence, something that makes you feel good. Perhaps it's the risk, the gamble. In any event it's a thing I need." – William Faulkner
"Fear is the highest fence." – Dudley Nichols
"To be fenced in is to be withheld." – Kurt Tippett
"What have they done to the earth? / What have they done to our fair sister? / Ravaged and plundered / and ripped her / and bit her / stuck her with knives / in the side of the dawn / and tied her with fences / and dragged her down." – Jim Morrison of The Doors
"Don't Fence Me In" – Cole Porter
"You shall build a turtle fence." – Peter Hoekstra
"A woman's dress should be like a barbed-wire fence: serving its purpose without obstructing the view." – Sophia Loren
**Death growl** Death growl: A death growl, or simply growl, is an extended vocal technique usually employed in extreme styles of music, particularly in death metal and other extreme subgenres of heavy metal music. Death growl vocals are sometimes criticized for their "ugliness", but their unintelligibility contributes to death metal's abrasive style and often dark and obscene subject matter.
Definition: Death metal, in particular, is associated with growled vocals; it tends to be lyrically and thematically darker and more morbid than other forms of metal, and features vocals that attempt to evoke chaos, death, and misery by being "usually very deep, guttural, and unintelligible." Natalie Purcell notes, "Although the vast majority of death metal bands use very low, beast-like, almost indiscernible growls as vocals, many also have high and screechy or operatic vocals, or simply deep and forcefully-sung vocals." Sociologist Deena Weinstein has noted of death metal: "Vocalists in this style have a distinctive sound, growling and snarling rather than singing the words, and making ample use of the voice distortion box."
Terminology and technique: Death growls are also known as death metal vocals, brutal vocals, guttural vocals, death grunts, growled vocals, low-pitched vocals, low growls, unclean vocals, harsh vocals, vocal fry, glottal fry, false cord vocals, death cord vocals and, disparagingly, as "Cookie Monster vocals". To be done properly, death growls require traditional clean/melodic vocal techniques.
Terminology and technique: "To appreciate the music, fans first had to accept a merciless sonic signature: guttural vocals that were little more than a menacing, sub-audible growl. James Hetfield's thrash metal rasp was harsh in contrast to Rob Halford's heavy metal high notes, but creatures like Glen Benton of Deicide tore out their larynxes to summon images of decaying corpses and giant catastrophic horrors." "Singing harsh death metal vocals may seem like it's just a bunch of screaming and shouting, but it's actually a technique that takes a lot of practice to master. You can learn to perform the harsh vocals of death metal by properly warming up your vocal cords so you don't damage them and learning how to breathe and sing from your diaphragm while you add guttural growls to your vocals." In June 2007, Radboud University Nijmegen Medical Centre in the Netherlands reported that, because of the increased popularity of growling in the region, several patients who had used improper growling techniques were being treated for edema and polyps on the vocal folds. The low, raspy, aggressive pitch of Lemmy Kilmister of Motörhead was not unlike the death growl and may be thought of as a precursor to the current style.
**Copper cladding** Copper cladding: There are four main techniques used today in the UK and mainland Europe for copper cladding a building:
Seamed cladding (typically 0.7 mm thick copper sheet on the facade): maximum 600 mm by 4000 mm seam centres.
Shingle cladding (typically made from 0.7 mm thick copper sheet): maximum 600 mm by 4000 mm seam centres.
Slot-in panels (typically made from 1.0 mm thick copper sheet): maximum 350 mm wide for 1.0 mm sheet, by a nominal 4 m length.
Cassettes (typically made from 1.0 mm up to 1.5 mm thick copper sheet): the largest-format cladding elements, requiring more subframing; can be 900 mm by a nominal 4000 mm length.
When selecting the size of a cladding element, take wind loadings into account, and also consider the standard sizes in which the sheet (or coil) pre-material is available, to minimise material wastage through off-cuts; this helps to reduce costs.
Copper cladding: The choice of which system to use depends on the aesthetic effect required, and building geometry can also influence the choice. Copper cladding is very durable, lightweight compared with other materials and techniques, and at the end of the building's life is also 100% recyclable. Depending on metal prices, copper may be a very cost-effective cladding and roofing material. With good building design, materials choice and craftsmanship, copper roofing or facade cladding may be cheaper than slates or concrete tiles, especially when one takes into account the lasting colour, durability, and maintenance-free, lightweight nature of the cladding. Because the UK code of practice for "hard metal" cladding (as opposed to lead cladding) is quite old – CP143: Part 12 (1970) – the major manufacturers have to provide detailed technical advice and information for architects, designers and builders, and cultivate skilled installers with years of experience to draw on. Typically, an installer of hard metal roofing and cladding must spend around 8–10 years on the job to build up respectable experience on a work site.
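The trade-off between element size and pre-material size lends itself to a quick calculation. Below is a minimal illustrative sketch, in Python, of how one might compare a candidate panel width against standard coil widths to estimate off-cut wastage; the coil widths in STANDARD_COIL_WIDTHS_MM and the example panel widths are hypothetical placeholders, not manufacturers' data.

```python
# Hypothetical sketch: estimating off-cut wastage when slitting cladding
# panels from standard copper coil. All dimensions are illustrative
# assumptions, not industry figures.

STANDARD_COIL_WIDTHS_MM = [600, 670, 1000]  # assumed available coil widths

def wastage_fraction(panel_width_mm: float, coil_width_mm: float) -> float:
    """Fraction of the coil width lost as off-cut when slitting panels of
    the given width from the coil. Returns 1.0 if no panel fits at all."""
    panels_per_coil = int(coil_width_mm // panel_width_mm)
    if panels_per_coil == 0:
        return 1.0
    used = panels_per_coil * panel_width_mm
    return (coil_width_mm - used) / coil_width_mm

def best_coil(panel_width_mm: float) -> tuple[float, float]:
    """Pick the standard coil width that minimises off-cut wastage."""
    return min(
        ((w, wastage_fraction(panel_width_mm, w)) for w in STANDARD_COIL_WIDTHS_MM),
        key=lambda pair: pair[1],
    )

if __name__ == "__main__":
    for panel in (350, 450, 600):  # e.g. 350 mm is the slot-in panel maximum
        coil, waste = best_coil(panel)
        print(f"{panel} mm panel -> {coil} mm coil, {waste:.0%} off-cut waste")
```

Run as-is, the sketch reports, for example, that a 350 mm panel slit from an assumed 1000 mm coil leaves 30% waste, which is the kind of comparison the sizing advice above is driving at.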
**Confectionery** Confectionery: Confectionery is the art of making confections, which are food items that are rich in sugar and carbohydrates. Exact definitions are difficult. In general, however, confectionery is divided into two broad and somewhat overlapping categories: bakers' confections and sugar confections. The occupation of confectioner encompasses the categories of cooking performed by both the French patissier (pastry chef) and the confiseur (sugar worker). Bakers' confectionery, also called flour confections, includes principally sweet pastries, cakes, and similar baked goods. Bakers' confectionery excludes everyday breads, and thus is a subset of the products produced by a baker.
Confectionery: Sugar confectionery includes candies (also called sweets, short for sweetmeats, in many English-speaking countries), candied nuts, chocolates, chewing gum, bubble gum, pastillage, and other confections that are made primarily of sugar. In some cases, chocolate confections (confections made of chocolate) are treated as a separate category, as are sugar-free versions of sugar confections. The words candy (Canada & US), sweets (UK, Ireland, and others), and lollies (Australia and New Zealand) are common words for some of the most popular varieties of sugar confectionery.
Confectionery: The confectionery industry also includes specialized training schools and extensive historical records. Traditional confectionery goes back to ancient times and continued to be eaten through the Middle Ages and into the modern era.
History: Before sugar was readily available in the ancient western world, confectionery was based on honey. Honey was used in Ancient China, Ancient India, Ancient Egypt, Ancient Greece and Ancient Rome to coat fruits and flowers to preserve them or to create sweetmeats. Between the 6th and 4th centuries BC, the Persians, followed by the Greeks, made contact with the Indian subcontinent and its "reeds that produce honey without bees". They adopted and then spread sugar and sugarcane agriculture. Sugarcane is indigenous to the tropical Indian subcontinent and Southeast Asia. In the early history of sugar usage in Europe, it was initially the apothecary who had the most important role in the production of sugar-based preparations. Medieval European physicians learned the medicinal uses of the material from the Arabs and Byzantine Greeks. One Middle Eastern remedy for rheums and fevers was little twisted sticks of pulled sugar, called in Arabic al fänäd or al pänäd. These became known in England as alphenics, or more commonly as penidia, penids, pennet or pan sugar. They were the precursors of barley sugar and modern cough drops. In 1390, the Earl of Derby paid "two shillings for two pounds of penydes." As the non-medicinal applications of sugar developed, the comfitmaker, or confectioner, gradually came into being as a separate trade. In the late medieval period the words confyt, comfect or cumfitt were generic terms for all kinds of sweetmeats made from fruits, roots, or flowers preserved with sugar. By the 16th century, a cumfit was more specifically a seed, nut or small piece of spice enclosed in a round or ovoid mass of sugar. The production of comfits was a core skill of the early confectioner, who was known more commonly in 16th- and 17th-century England as a comfitmaker. Reflecting their original medicinal purpose, however, comfits were also produced by apothecaries, and directions on how to make them appear in dispensatories as well as cookery texts.
An early medieval Latin name for an apothecary was confectionarius, and it was in this sort of sugar work that the activities of the two trades overlapped and that the word "confectionery" originated. In the cuisine of the Late Ottoman Empire, diverse cosmopolitan cultural influences were reflected in published recipes such as European-style molded jellies flavored with cordials. In Europe, Ottoman confections, especially "lumps of delight" (Turkish delight), became very fashionable among European and British high society. An important study of Ottoman confectionery called Conditorei des Orients was published by the royal confectioner Friedrich Unger in 1838. The first confectioner's shop in Manchester, England, was opened by Elizabeth Raffald, who had worked for six years in domestic service as a housekeeper.
Sweetening agents: Confections are defined by the presence of sweeteners. These are usually sugars, but it is possible to buy sugar-free candies, such as sugar-free peppermints. The most common sweetener for home cooking is table sugar, which is chemically a disaccharide containing both glucose and fructose. Hydrolysis of sucrose gives a mixture called invert sugar, which is sweeter and is also a common commercial ingredient. Finally, confections, especially commercial ones, are sweetened by a variety of syrups obtained by hydrolysis of starch. These sweeteners include all types of corn syrup.
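As a worked illustration of the inversion just mentioned (standard sugar chemistry, not a formula taken from any source cited in this article), hydrolysis splits the disaccharide sucrose into its two constituent monosaccharides:

\[
\underbrace{\mathrm{C_{12}H_{22}O_{11}}}_{\text{sucrose}} + \mathrm{H_2O} \longrightarrow \underbrace{\mathrm{C_6H_{12}O_6}}_{\text{glucose}} + \underbrace{\mathrm{C_6H_{12}O_6}}_{\text{fructose}}
\]

The resulting equimolar mixture of glucose and fructose is the invert sugar referred to above; it tastes sweeter than sucrose largely because fructose itself is sweeter than sucrose.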
Bakers' confectionery: Bakers' confectionery includes sweet baked goods, especially those that are served for the dessert course. Bakers' confections are sweet foods that feature flour as a main ingredient and are baked. Major categories include cakes, sweet pastries, doughnuts, scones, and cookies. In the Middle East and Asia, flour-based confections predominate.
Bakers' confectionery: The definition of which foods are "confectionery" and which are "bread" can vary across cultures and laws. In Ireland, the definition of "bread" as a "staple food" for tax purposes requires that the sugar or fat content be no more than 2% of the weight of the flour, so some products sold as bread in the US would be treated as confectionery there.
Bakers' confectionery: Types. Cakes have a somewhat bread-like texture, and many earlier cakes, such as the centuries-old stollen (fruit cake), or the even older king cake, were rich yeast breads. The variety of styles and presentations extends from simple to elaborate. Major categories include butter cakes, tortes, and foam cakes. Confusingly, some confections that have the word cake in their names, such as cheesecake, are not technically cakes, while others, such as Boston cream pie, are cakes despite seeming to be named something else.
Bakers' confectionery: Pastry is a large and diverse category of baked goods, united by the flour-based doughs used as the base for the product. These doughs are not always sweet, and the sweetness may come from the sugar, fruit, chocolate, cream, or other fillings that are added to the finished confection. Pastries can be elaborately decorated, or they can be plain dough. Doughnuts may be fried or baked. Scones and related sweet quick breads, such as bannock, are similar to baking-powder biscuits and, in sweeter, less traditional interpretations, can seem like a cupcake.
Sugar confectionery: Sugar confections include sweet, sugar-based foods, which are usually eaten as snack food. This includes sugar candies, chocolates, candied fruits and nuts, chewing gum, and sometimes ice cream. In some cases, chocolate confections are treated as a separate category, as are sugar-free versions of sugar confections. Different dialects of English use regional terms for sugar confections: in Britain, Ireland, and some Commonwealth countries, sweets (the Scottish Gaelic word suiteis is a derivative). Candy is used specifically for rock candy and occasionally for (brittle) boiled sweets. Lollies are boiled sweets fixed on sticks.
Sugar confectionery: In Australia and New Zealand, lollies. Chewy and chuddy are Australian slang for chewing gum.
Sugar confectionery: In North America, candy, although this term generally refers to a specific range of confectionery and does not include some items of sugar confectionery (e.g. ice cream). Sweet is occasionally used, as well as treat. In the US, a chocolate-coated candy bar (e.g. Snickers) would be called a candy bar; in Britain it would more likely be called a chocolate bar than, unspecifically, a sweet.
Sugar confectionery: Classification. The United Nations' International Standard Industrial Classification of All Economic Activities (ISIC) scheme (revision 4) classifies both chocolate and sugar confectionery as ISIC 1073, which includes the manufacture of chocolate and chocolate confectionery; sugar confectionery proper (caramels, cachous, nougats, fondant, white chocolate); chewing gum; the preserving of fruit, nuts and fruit peels; and the making of confectionery lozenges and pastilles. In the European Union, the Statistical Classification of Economic Activities in the European Community (NACE) scheme (revision 2) matches the UN classification, under code number 10.82.
Sugar confectionery: In the United States, the North American Industry Classification System (NAICS 2012) splits sugar confectionery across three categories: national industry code 311340 for all non-chocolate confectionery manufacturing, 311351 for chocolate and confectionery manufacturing from cacao beans, and 311352 for confectionery manufacturing from purchased chocolate. Ice cream and sorbet are classified with dairy products under ISIC 1050, NACE 10.52, and NAICS 311520.
Sugar confectionery: Examples. Sugar confectionery items include candies, lollipops, candy bars, chocolate, cotton candy, and other sweet items of snack food. Some of the categories and types of sugar confectionery include the following:
Chocolates: Bite-sized confectioneries generally made with chocolate, considered different from a candy bar made of chocolate.
Divinity: A nougat-like confectionery based on egg whites with chopped nuts.
Dodol: A toffee-like delicacy popular in Indonesia, Malaysia, and the Philippines.
Dragée: Sugar-coated almonds and other types of sugar-panned candies.
Fudge: Made by boiling milk and sugar to the soft-ball stage. In the US, it tends to be chocolate-flavored.
Halvah: Confectionery based on tahini, a paste made from ground sesame seeds.
Hard candy: Based on sugars cooked to the hard-crack stage. Examples include lollipops, jawbreakers (or gobstoppers), lemon drops, peppermint drops and disks, candy canes, rock candy, etc. Also included are types often mixed with nuts, such as brittle, which is similar to chikki.
Ice cream: Frozen, flavored cream, often containing small pieces of chocolate, fruits and/or nuts.
Jelly candies: Including those based on sugar and starch, pectin, gum, or gelatin, such as Turkish delight (lokum), jelly beans, gumdrops, jujubes, gummies, etc.
Liquorice: Containing extract of the liquorice root, this candy is chewier and more resilient than gums or gelatin candies.
An example is liquorice allsorts; liquorice has a similar taste to star anise.
Marshmallow: For example, circus peanuts.
Marzipan: An almond-based confection, doughy in consistency.
Mithai: A generic term for confectionery in the Indian subcontinent, typically made from dairy products and/or some form of flour. Sugar or molasses are used as sweeteners.
Persipan: Similar to marzipan, but made with peach or apricot kernels instead of almonds.
Pastillage: A thick sugar paste made with gelatin, water, and confectioner's sugar, similar to gum paste, which is moulded into shapes that then harden.
Tablet: A crumbly milk-based candy, in both soft and hard varieties, based on sugars cooked to the soft-ball stage. It comes in several forms, such as wafers and heart shapes. Not to be confused with tableting, a method of candy production.
Taffy (British: chews): A sugar confection that is folded many times above 120 °F (50 °C), incorporating air bubbles, thus reducing its density and making it opaque.
Toffee: A confection made by caramelizing sugar or molasses along with butter. Toffee has a glossy surface and textures ranging from soft and sticky to a hard, brittle material. Its brown color and smoky taste arise from the caramelization of the sugars.
Sugar confectionery: Storage and shelf life. Shelf life is largely determined by the amount of water present in the candy and the storage conditions. High-sugar candies, such as boiled candies, can have a shelf life of many years if kept covered in a dry environment. Spoilage of low-moisture candies tends to involve a loss of shape, color, texture, and flavor, rather than the growth of dangerous microbes. Impermeable packaging can reduce spoilage due to storage conditions.
Sugar confectionery: Candies spoil more quickly if they have different amounts of water in different parts of the candy (for example, a candy that combines marshmallow and nougat), or if they are stored in high-moisture environments. This process is due to the effects of water activity, which results in the transfer of unwanted water from a high-moisture environment into a low-moisture candy, rendering it rubbery, or the loss of desirable water from a high-moisture candy into a dry environment, rendering the candy dry and brittle.
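Water activity, mentioned above, has a precise definition in food science (a standard formula, not one drawn from the sources cited in this article): it is the ratio of the water vapour pressure exerted by the food to the vapour pressure of pure water at the same temperature,

\[
a_w = \frac{p_{\mathrm{food}}}{p_{\mathrm{pure\ water}}},
\]

and moisture migrates from the component or environment with the higher a_w to the one with the lower a_w until the two values equalise, which is exactly the unwanted transfer described above.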
Sugar confectionery: Another factor, affecting only non-crystalline amorphous candies, is the glass transition process. This can cause amorphous candies to lose their intended texture.
Cultural roles: Both bakers' and sugar confections are used to offer hospitality to guests.
Cultural roles: Confections are used to mark celebrations or events, such as a wedding cake, a birthday cake, or Halloween sweets. The chocolate company Cadbury (under the guidance of Richard Cadbury) was the first to commercialize the connection between romance and confectionery, producing a heart-shaped box of chocolates for Valentine's Day in 1868. Tourists commonly eat confections as part of their travels. The indulgence in rich, sugary foods is seen as a special treat, and choosing local specialties is popular. For example, visitors to Vienna eat Sachertorte and visitors to seaside resorts in the UK eat Blackpool rock candy. Transportable confections like fudge and tablet may be purchased as souvenirs.
Nutrition: Generally, confections are low in micronutrients and protein but high in calories. They may be fat-free foods, although some confections, especially fried doughs and chocolate, are high-fat foods. Many confections are considered empty calories and ultra-processed food. Specially formulated chocolate has been manufactured in the past for military use as a high-density source of food energy. Many sugar confections, especially caramel-coated popcorn and the different kinds of sugar candy, are defined in US law as foods of minimal nutritional value.
Risks: Contaminants and coloring agents in confectionery can be particularly harmful to children. In the US, confectionery contaminants such as lead have therefore been restricted to 1 ppm; there is no specific maximum in the EU. Candy colorants, particularly yellow colorants such as E102 Tartrazine, E104 Quinoline Yellow WS and E110 Sunset Yellow FCF, are subject to many restrictions around the world. Tartrazine, for example, can cause allergic and asthmatic reactions and was once banned in Austria, Germany, and Norway. Some countries, such as the UK, have asked the food industry to phase out the use of these colorants, especially for products marketed to children.
**Alireza Mashaghi** Alireza Mashaghi: Alireza Mashaghi is a physician-scientist and biophysicist at Leiden University. He is known for his contributions to single-molecule analysis of chaperone-assisted protein folding, molecular topology, and medical systems biophysics and bioengineering. He is a leading advocate for interdisciplinary research and education in medicine and pharmaceutical sciences.
Alireza Mashaghi: Mashaghi made the first observation of direct chaperone involvement during the folding of a protein, using a single-molecule force spectroscopy method. This work, published in Nature, solved a long-standing puzzle in biology. In 2017, he reported a new model for the function of the chaperone DnaK and made a discovery that, according to Ans Hekkenberg, "overturns the decades-old textbook model of action for a protein that is central for many processes in living cells". He and his co-workers found that the chaperone DnaK can recognise natively folded protein parts and thereby promotes protein folding directly. Inspired by single-molecule analysis of biopolymers, Mashaghi and his team developed a topology framework, termed circuit topology, which enabled the study of folded molecular chains beyond what knot theory can offer (a minimal illustrative sketch of the idea appears at the end of this article). The approach allows for topological barcoding of proteins and cellular genomes for medical applications. Mashaghi has also contributed to other areas in biophysics and bioengineering, including membrane biophysics, membrane-based lab-on-a-chip biosensing, and organ-on-a-chip technology. In particular, the Mashaghi team was one of the first to introduce organ-chip technology to the field of virology. His team engineered the first chip-based disease model for Ebola hemorrhagic shock syndrome, and later extended the applicability of the platform to various viral haemorrhagic syndromes. Ebola and similar viruses pathologically alter the mechanics of human cells, which is recapitulated in organ chip models. Moreover, the Mashaghi team developed optical tweezers and acoustic force spectroscopy based assays to probe such mechanical alterations at the single-cell level. Mashaghi is also active in interdisciplinary research in ophthalmology, immunopathology and medicine. His main contributions have been in the areas of ocular inflammation and immunomodulation. In 2017, he and his co-workers at Harvard developed an immunotherapy strategy to improve the survival of high-risk cornea grafts. Together with his co-workers, he contributed to the use of stem cell technology and omics technology in ophthalmology and medicine. Mashaghi and his co-workers were among the first to use stem cells to reprogram innate immune cells, including neutrophils and macrophages. Additionally, his lab was the first to measure human macrophage mechanics and the macrophage metabolome using single-cell approaches. Finally, in their research, Mashaghi and his co-workers are linking statistical physics and medical diagnostics; this link between physics and medicine may allow for early and efficient diagnosis of certain diseases. During his academic career, Mashaghi has been affiliated with various institutions including Harvard University, Leiden University, Massachusetts Institute of Technology, Delft University of Technology, ETH Zurich, the Max Planck Institutes, and AMOLF. Mashaghi has published more than 100 papers in peer-reviewed scientific journals, including several papers in Nature and Nature specialty journals. He has worked and co-authored with Cees Dekker, Anthony A. Hyman, Colin Adams, Erica Flapan, Donald E.
Ingber, Huib Bakker, Reza Dana, and Petra Schwille. He serves on the editorial boards of several journals, including Nano Research.
Alireza Mashaghi: In 2018, Mashaghi was named "Discoverer of the Year" by Leiden University. He is the recipient of several awards, including an honorarium from the American Chemical Society.
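The sketch promised above: in circuit topology, as commonly described, a folded chain is characterised by how its intramolecular contacts are arranged pairwise, each pair being in series, parallel (nested), or cross configuration. The following Python sketch of that classification is illustrative only; the function name and example contacts are hypothetical, and this is not code from Mashaghi's group.

```python
# Illustrative sketch of circuit topology's basic pairwise contact relations.
# A contact is a pair (i, j) of residue indices along the chain with i < j.
# Any two contacts are in series (S), parallel/nested (P), or cross (X)
# arrangement; concerted cases (shared residues) are ignored for simplicity.

def pairwise_relation(c1: tuple[int, int], c2: tuple[int, int]) -> str:
    """Classify two chain contacts as 'S' (series), 'P' (parallel), or 'X' (cross)."""
    (i1, j1), (i2, j2) = sorted([tuple(sorted(c1)), tuple(sorted(c2))])
    if j1 <= i2:        # intervals do not overlap -> series
        return "S"
    if j2 <= j1:        # second interval nested inside the first -> parallel
        return "P"
    return "X"          # intervals interleave -> cross

# Toy fold with three contacts, purely for demonstration.
contacts = [(1, 10), (3, 6), (8, 14)]
for a in range(len(contacts)):
    for b in range(a + 1, len(contacts)):
        print(contacts[a], contacts[b], "->", pairwise_relation(contacts[a], contacts[b]))
```

On the toy fold, (1, 10) and (3, 6) come out parallel, (1, 10) and (8, 14) cross, and (3, 6) and (8, 14) are in series; a string of such relations is the kind of "topological barcode" the text refers to.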
**BIT Numerical Mathematics** BIT Numerical Mathematics: BIT Numerical Mathematics is a quarterly peer-reviewed mathematics journal that covers research in numerical analysis. It was established in 1961 by Carl Erik Fröberg and is published by Springer Science+Business Media. The name "BIT" is a reverse acronym of Tidskrift för Informationsbehandling (Swedish: Journal of Information Processing). Previous editors-in-chief have been Carl Erik Fröberg (1961-1992), Åke Björck (1993-2002), Axel Ruhe (2003-2015), and Lars Eldén (2016). As of 2021 the editor-in-chief is Gunilla Kreiss.
BIT Numerical Mathematics: Peter Naur served as a member of the editorial board between 1960 and 1993, and Germund Dahlquist between 1962 and 1991.
Abstracting and indexing: According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.663.
**Self-experimentation in medicine** Self-experimentation in medicine: Self-experimentation refers to scientific experimentation in which the experimenter conducts the experiment on themself. Often this means that the designer, operator, subject, analyst, and user or reporter of the experiment are all the same person. Self-experimentation has a long and well-documented history in medicine which continues to the present. Some of these experiments have been very valuable, shedding new and often unexpected light on different areas of medicine.
Self-experimentation in medicine: There are many motivations for self-experiment. These include the wish to get results quickly and avoid the need for a formal organisational structure, the ethical stance of taking the same risk as one's volunteers, or simply a desire to do good for humanity. Other ethical issues include whether a researcher should self-experiment when another volunteer would not stand to gain the same benefit as the researcher, and whether informed consent can truly be given by a volunteer from outside a research program.
Self-experimentation in medicine: A number of distinguished scientists have undertaken self-experimentation, including at least five Nobel laureates; in several cases, the prize was awarded for findings the self-experimentation made possible. Many experiments were dangerous; various people exposed themselves to pathogenic, toxic or radioactive materials. Some self-experimenters, like Jesse Lazear and Daniel Alcides Carrión, died in the course of their research. Notable examples of self-researchers occur in many fields: infectious disease (Jesse Lazear: yellow fever; Max von Pettenkofer: cholera), vaccine research and development (Daniel Zagury: AIDS; Tim Friede: snakebite), cancer (Nicholas Senn, Jean-Louis-Marc Alibert), blood (Karl Landsteiner, William J. Harrington), and pharmacology (Albert Hofmann, among many others). Research has not been limited to disease and drugs: John Stapp tested the limits of human deceleration, Humphry Davy breathed nitrous oxide, and Nicholas Senn pumped hydrogen into his gastrointestinal tract to test the utility of the method for diagnosing perforations.
Definition: There is no formal definition of what constitutes self-experimentation. A strict definition might limit it to cases where there is a single-subject experiment and the experimenter performs the procedure on themself. A looser definition might include cases where the experimenters put themselves amongst the volunteers for the experiment. According to S. C. Gandevia of the University of New South Wales, who was looking at the question from the perspective of ethics, it is only self-experiment if the would-be self-experimenter would be named as an author on any subsequent published paper. That is, the person who would receive the academic credit for the experiment must also be its subject.: 44
Motivations: There are many reasons experimenters decide to self-test, but amongst the most fundamental is the ethical principle that the experimenter should not subject the participants in the experiment to any procedure they would not be willing to undertake themselves. This idea was first codified in the Nuremberg Code in 1947, which resulted from the Nuremberg trials of Nazi doctors accused of murdering and torturing victims in valueless experiments. Several of these doctors were hanged.
Point five of the Nuremberg Code requires that no experiment should be conducted that is dangerous to the subjects unless the experimenters themselves also take part. The Nuremberg Code has influenced medical experiment codes of practice around the world, as has the exposure of experiments that have since failed to follow it, such as the notorious Tuskegee syphilis experiment.: xv–xvii  Critics of self-experimenters point to other, less savoury motivations, such as simple self-aggrandisement.: xiii  Some scientists have resorted to self-experiment to avoid the "red tape" of seeking permission from the relevant ethics committee of their institution. Werner Forssmann was so determined to proceed with his self-experiment that he continued with it even after permission had been denied. He was twice dismissed for this activity, but the importance of his work was eventually recognised with a Nobel Prize. Some researchers apparently even believe that self-experimentation is not permitted. However, this is not true, at least in the United States, where the same rules apply regardless of who the subject of the experiment is.: xv, xx  Self-experimentation is also criticised for the risk that over-enthusiastic researchers, eager to prove a point, may not record the results accurately. Against this, supporters of self-experiment argue that medically trained persons are in a better position to understand and record symptoms, and that self-experiment usually occurs at the very early stage of a program, before volunteers have been recruited.: xiv  A wish to commit suicide is sometimes offered as a reason for self-experimentation. However, Lawrence K. Altman, author of Who Goes First?: The Story of Self-experimentation in Medicine, while acknowledging that this may sometimes occur, after extensive research could find only one verified case of attempted suicide by self-experimentation. This was Nobel Prize winner Élie Metchnikoff, who, in 1881, suffering from depression, injected himself with relapsing fever. This was his second suicide attempt, but according to his wife, Olga, he chose this method of death so that it would be of benefit to medicine. Metchnikoff survived, however, and in 1892 also self-experimented with cholera, though this is not thought to have been a suicide attempt.: 311–312  Perhaps the noblest motivation is the simple altruistic desire to do something of benefit to humanity regardless of the risks. There most certainly are risks, as Jesse Lazear found to his cost when he died of yellow fever after deliberately infecting himself. Max von Pettenkofer, after ingesting cholera bacteria, said: Even if I had deceived myself and the experiment endangered my life, I would have looked Death quietly in the eye, for mine would have been no foolish or cowardly suicide; I would have died in the service of science like a soldier on the field of honor.: 25  According to Ian Kerridge, professor of bioethics at the University of Sydney, the most common reason for undertaking self-experimentation is not so much anything noble, but rather "an insatiable scientific curiosity and a need to participate closely in their own research".
Ethics: As already mentioned, it is an ethical principle that the researcher should not inflict on volunteers anything that the researcher would not be willing to do to him- or herself, but the researcher is not always a suitable, or even possible, subject for the experiment.
For instance, the researcher may be the wrong gender if the research is into hormone treatment for women, or may be too old or too young. The ethical question for researchers is whether they would agree to the experiment if they were in the same position as the volunteers.: 311  Another issue that can lead researchers not to take part is whether the researcher would stand to gain any benefit from participating in the experiment. It is an ethical principle that volunteers must stand to gain some benefit from the research, even if that is only a remote future possibility of treatment being found for a disease that they have only a small chance of contracting. Tests on experimental drugs are sometimes conducted on sufferers of an untreatable condition. If the researcher does not have that condition, then there can be no possible benefit to them personally. For instance, Ronald C. Desrosiers, in responding to why he did not test on himself an AIDS vaccine he was developing, said that he was not at risk of AIDS so could not possibly benefit.: xx  Against that, the early stages of testing a new drug are usually focused merely on the safety of the substance, rather than any benefits it may have. Healthy individuals are required for this stage, not volunteers suffering from the target condition, so if the researcher is healthy, he or she is a potential candidate for testing.: xiv  An issue peculiar to AIDS vaccine research is that the test will leave HIV antibodies in the volunteers' blood, causing them to test HIV-positive even if they have never been in contact with an HIV carrier. This could cause a number of social problems for the volunteers (including any self-testers), such as issues with life insurance.: xix  The ethics of informed consent is relevant to self-experimentation. Informed consent is the principle that the volunteers in the experiment should fully understand the procedure that is going to take place, be aware of all the risks involved, and give their consent to taking part in the experiment beforehand. The principle of informed consent was first enacted in the U.S. Army's research into yellow fever in Cuba in 1901. However, there was no general or official guidance at this time.: 43  That remained the case until the yellow fever program was referenced in the drafting of the Nuremberg Code.: xvi, 157  This was further developed by the World Medical Association in the Declaration of Helsinki in 1964, which has since become the foundation for ethics committees' guidelines.: 43–44  Some researchers believe that experimental research is too complex for the general public ever to be able to give proper informed consent. One such researcher is Eugene G. Laforet, who believed that researchers taking part in the experiment themselves is more valuable to the volunteers than a legal consent form. Another is the 1977 Nobel Prize winner Rosalyn S. Yalow, who said, "In our laboratory we always used ourselves because we are the only ones who can give truly informed consent.": 313–314  On the other side of the coin, there is the possibility that members of a research team may be coerced into participating by peer pressure.: 46  The question of who should be first to try the procedure in a new experiment is an ethical one. However, according to Altman, it is not a question that can successfully be legislated. A law requiring self-testing would force researchers to take risks that may sometimes be inappropriate.
A code forbidding it might inhibit valuable discoveries.: 314  Self-experimentation has a role in medical education. Although no longer encouraged, in former times it was perfectly standard to expect medical students to try for themselves the drugs they were going to be prescribing. Charles-Édouard Brown-Séquard, whose own self-experiments led him to the concept of what are now called hormones, was a nineteenth-century proponent of the practice:: 314–315  I will suggest that you should study upon yourselves the effects of the most valuable remedies. I well believe that you will never know fully the action of certain remedies, if you have not ascertained, on your own person, what effects they produce on the brain, the eye, the ear, the nerves, the muscles, and the principal viscera.
Value: Self-experimentation has value in rapidly obtaining the first results. In some cases, such as with Forssmann's experiments done in defiance of official permission, results may be obtained that would never otherwise have come to light. However, self-experiment lacks the statistical validity of a larger experiment. It is not possible to generalise from an experiment on a single person. For instance, a single successful blood transfusion does not indicate, as we now know from the work of Karl Landsteiner, that all such transfusions between any two random people will also be successful. Likewise, a single failure does not absolutely prove that a procedure is worthless. Psychological issues such as confirmation bias and the placebo effect are unavoidable in a single-person self-experiment, where it is not possible to put scientific controls in place. Such concerns do not apply so much if the self-experimenter is just one of many volunteers (as long as the self-experimenter is not also responsible for recording the results), but his or her presence still has value. As noted above, this can reassure the other participants. It also acts as a check on the experimenter when considering whether the experiment is ethical or dangerous.: 314 : 206
Notable examples: Anaesthesia. Dentist Horace Wells made multiple experiments with nitrous oxide, diethyl ether, and chloroform while trying to determine their uses as anaesthetics. The first, conducted in 1844, consisted of having his assistant John Riggs dose him with nitrous oxide and then extract one of his teeth. His later self-experimentation with ether and chloroform took place in 1848, and he eventually became addicted to chloroform through excessive use. He inhaled chloroform as an anaesthetic shortly before committing suicide on January 24, 1848. Lidocaine, the first amino amide–type local anaesthetic, was first synthesized under the name xylocaine by Swedish chemist Nils Löfgren in 1943. His colleague Bengt Lundqvist performed the first injection anaesthesia experiments on himself.
Notable examples: Asthma. Roger Altounyan developed the use of sodium cromoglycate as a remedy for asthma, based on khella, a traditional Middle Eastern remedy, through experiments on himself.
Notable examples: Blood. ABO blood group system: Dr. Karl Landsteiner's discovery of the ABO blood group system in 1900 was based on an analysis of blood samples from six members of his laboratory staff, including himself.: 34–37  Thrombocytopenia: In the Harrington–Hollingsworth experiment in 1950, William J.
Harrington performed an exchange blood transfusion between himself and a thrombocytopenic patient, discovering the immune basis of idiopathic thrombocytopenic purpura and providing evidence for the existence of autoimmunity.
Notable examples: Cancer. In 1901, Nicholas Senn investigated whether cancer was contagious. He surgically inserted under his skin a piece of cancerous lymph node from a patient with cancer of the lip. After two weeks, the transplant started to fade, and Senn concluded that cancer is not contagious.: 287 : 203  Much earlier, in 1808, Jean-Louis-Marc Alibert injected himself with a discharge from breast cancer. The site of injection became inflamed but did not develop cancer.: 286–287 : 3  Gerhard Domagk, in 1949, injected himself with sterilised extract of human cancer in an attempt to prove that immunisation against cancer was possible.: 287–288
Infectious diseases and vaccines: COVID-19. In February 2020, Huang Jinhai, an immunologist at Tianjin University, claimed that he had taken four doses of a COVID-19 vaccine developed in his lab even before it had been tested in animals. In March 2020, the Rapid Deployment Vaccine Collaborative (also known as RaDVaC) developed, produced, and published technical specifications for a modular, intranasal COVID-19 vaccine. Numerous scientists working directly and indirectly on the group's vaccine development also began self-experimentation using the project's multiple vaccine candidates. In March 2020, Hans-Georg Rammensee, professor of immunology at the University of Tübingen and co-founder of CureVac, began testing a COVID-19 vaccine on himself. In May 2020, Alexander Gintsburg, director of the Gamaleya Research Institute of Epidemiology and Microbiology, announced that several vaccine specialists had begun self-experimentation with the Sputnik V COVID-19 vaccine.
Notable examples: AIDS vaccine. Daniel Zagury, in 1986, was the first to test his proposed AIDS vaccine.: 26  Bartonellosis: Daniel Alcides Carrión, in 1885, infected himself from the pus in the purple wart (verruga peruana) of a female patient. Carrión developed an acute form of bartonellosis now known as Carrion's disease or Oroya fever. This is a rare disease found only in Peru and certain other parts of South America. He kept detailed notes of his condition and succeeded in showing through this self-experiment that the chronic and acute forms were the same disease. He died from the disease after several weeks. A student who had assisted Carrión in carrying out this work was arrested and charged with murder, but later released.
Notable examples: Cholera. Max von Pettenkofer, in October 1892, drank bouillon deliberately infected with a large dose of cholera bacteria. Pettenkofer was attempting to disprove the theory of Robert Koch that the disease was caused by the bacterium Vibrio cholerae alone. Pettenkofer also took bicarbonate of soda to counter a claim by Koch that stomach acid killed the bacteria. Pettenkofer escaped with mild symptoms and claimed success, but the modern view is that he did indeed have cholera, luckily just a mild case, and possibly had some immunity from a previous episode.: 24–26  Dysentery: S. O. Levinson, with H. J. Shaugnessy and others, between 1942 and 1947 injected themselves with a vaccine against dysentery. The vaccine had previously been tested on mice, which had all died within minutes, and the effect on humans was completely unknown.
The experimenters survived but suffered strong side effects.: 139
Gastritis and peptic ulcers: Helicobacter pylori. In 1984 a Western Australian scientist, Dr Barry Marshall, discovered the link between Helicobacter pylori (at that time known as Campylobacter pylori) and gastritis. This was based on a series of self-experiments that involved gastroscopy and biopsy, ingestion of H. pylori, repeat gastroscopy and biopsy, and subsequent treatment with tinidazole. Self-experimentation was his only option: ethical rules forbade him from administering H. pylori to any other person. In 2005, Marshall and his long-time collaborator Robin Warren were awarded the Nobel Prize in Physiology or Medicine "for their discovery of the bacterium Helicobacter pylori and its role in gastritis and peptic ulcer disease".
Notable examples: Marshall's experiment debunked the long-held belief of the medical profession that stress was the cause of gastritis. This cleared the way for the development of antibiotic treatments for gastritis and peptic ulcers and a new line of research into the likely role of H. pylori in stomach cancer.: x  Campylobacter jejuni: Marshall's investigation was preceded by David A. Robinson who, in 1980, ingested Campylobacter jejuni, a bacterium found in cow's milk, to investigate whether gastritis could be caused by drinking milk infected with C. jejuni. Robinson became sick as a result. Robinson needed to do a human experiment because the alternative, testing on cows, was not viable, as infected cows frequently do not become ill.: 33  Staphylococcus: Gail Monroe Dack (1901–1976), a former president of the American Society for Microbiology, gave himself food poisoning by eating cake tainted with Staphylococcus.: 56–57  Syphilis: Constantin Levaditi (1874–1953) injected himself with spirochaetes from rabbits suffering from syphilis but did not contract the disease himself.: 138–139  Yellow fever: In Cuba, U.S. Army doctors from Walter Reed's research team infected themselves with yellow fever, including James Carroll, Aristides Agramonte, and, most notably, Jesse Lazear, who died from yellow fever complications in 1900. These efforts ultimately resulted in proof of the mosquito-borne nature of yellow fever transmission and saved countless lives. Stubbins Ffirth had investigated the contagious nature of the disease at the end of the 18th century.: 137  There was an unsuccessful campaign to award a Nobel Prize to Reed's team. Lazear, in any event, could not have been awarded the prize because it is never given posthumously. However, a Nobel Prize was awarded in 1951 to a later yellow fever researcher and self-experimenter, Max Theiler, who developed the first yellow fever vaccine and was the first to try it.: 156–157  Trachoma: Anatolii Al'bertovich Shatkin, in 1961, injected trachoma virus into the conjunctival sac of his eye and rapidly developed trachoma. He did not begin treatment of the condition for 26 days.: 205  Schistosomiasis: In July 1944, physician-researcher Claude Barlow ingested over 200 schistosome worms to carry them back to the United States from Egypt, to study whether domestic snails could become infected and introduce the disease into the United States. Attempts to send infected snails, the intermediate host, by mail had been unsuccessful. He refused treatment, despite being desperately ill by December, so as not to lose the eggs for further study. He finally passed 4,630 eggs in his semen and 200 eggs in his urine. The U.S. government decided not to use the eggs, so his self-sacrifice was to no avail.
It was November 1945 before he finally cleared all the parasites, after treatment with tartar emetic.
Notable examples: Non-infectious diseases. Anaemia: William Bosworth Castle, in 1926, ate minced raw beef every morning, regurgitated it an hour later, and then fed it to his patients suffering from pernicious anaemia.: 258–263  Castle was testing his theory that there was an intrinsic factor produced in a normal stomach that hugely increased the uptake of the extrinsic factor (now identified as vitamin B12), lack of which leads to pernicious anaemia. Beef is a good source of B12, but patients did not respond to beef alone. Castle reasoned that they lacked production of intrinsic factor and that he could provide it from his own stomach. While Castle was not the recipient of this treatment, his story is included in Who Goes First?: The Story of Self-experimentation in Medicine, and he is considered a self-experimenter by the author.: 258–263  Hyperthyroidism: Elliott Cutler (1888–1947) took sufficient thyroid extract to give himself hyperthyroidism, enabling him to study the effect of the condition on kidney function.: 57  Scurvy: In London in June 1769, William Stark aimed to find the cause of scurvy with a series of dietary experiments on himself. He devised a series of 24 dietary experiments and kept accurate measures of temperature and weather conditions, the weights of all food and water he consumed, and the weight of all daily excretions. He started with a basic diet of bread and water and became 'dull and listless'. When he recovered, he resumed experimenting by adding various foods, one at a time: olive oil, milk, roast goose, and others. After two months, he had symptoms of scurvy. By November 1769 he was living on nothing but honey puddings and Cheshire cheese. He was considering testing fresh fruits and vegetables when he died in February 1770.
Notable examples: Drugs. Cocaine: In 1936, Edwin Katskee took a very large dose of cocaine. He attempted to write notes on his office wall, but these became increasingly illegible as the experiment proceeded. Katskee was found dead the next morning.: 313–325  Disulfiram: In 1945, during the German occupation of Denmark, Erik Jacobsen and Jens Hald at the Danish drug company Medicinalco (which had a group of enthusiastic self-experimenters that called itself the "Death Battalion") were exploring the possible use of disulfiram to treat intestinal parasites. In the course of testing it on themselves, they accidentally discovered its effects when alcohol is ingested, which several years later led to the drug called Antabuse.: 98–105  Furan: Chauncey D. Leake, in 1930, took furan as a possible substitute for aspirin, but it just gave him a splitting headache and painful urination that lasted three days.: 137–138  Grapefruit juice: David G. Bailey, in 1989, was researching the effects of drinking alcohol while taking the then experimental drug felodipine. It was usual in this kind of research to mix the alcohol with orange juice, but Bailey did not like the taste of this drink, so he used grapefruit juice instead. Bailey found that there was three times more felodipine in his blood, and in that of his fellow researchers, than had been reported by other scientists using orange juice. It was later found that grapefruit juice suppresses an enzyme responsible for breaking down a large number of different drugs.: x–xi  Ibuprofen: As part of the team that developed ibuprofen in the 1960s, Stewart Adams initially tested it on a hangover.
Notable examples: Psychoactive drugs. Friedrich Sertürner isolated morphine from opium in 1804; morphine was the first alkaloid ever isolated from any plant. Sertürner wanted to prove his findings to his colleagues with a public experiment on himself and three friends. Jacques-Joseph Moreau published his study "Du Hachisch et de l'aliénation mentale" in 1845. He self-experimented with hashish and observed its varying effects on other people. Moreau insisted that researchers should self-experiment to gain an understanding of the altered states of consciousness produced by psychoactive substances.
Notable examples: Psychopharmacologist Arthur Heffter isolated mescaline from the peyote cactus in 1897 and investigated its effects by comparing peyote and mescaline on himself. Albert Hofmann discovered the psychedelic properties of LSD in 1943 by accidentally absorbing it and later intentionally ingesting it to verify that the effects were caused by LSD. He was also the first to isolate psilocybin from psilocybin mushrooms and self-experimented with it to prove it to be the active principle behind the mushrooms' psychoactive effects.
Notable examples: Timothy Leary took LSD and was a well-known proponent of the social use of the drug in the 1960s.: 138  Alexander Shulgin synthesized and self-experimented with a variety of psychoactive drugs, notably MDMA. He developed a system known as the Shulgin Rating Scale for his research group to use during self-experimentation with psychedelics.
Notable examples: Gases. Hydrogen: Around 1886, Nicholas Senn pumped nearly six litres of hydrogen through his anus. Senn used a rubber balloon holding four US gallons connected to a rubber tube inserted in the anus. An assistant sealed the tube by squeezing the anus against it. The hydrogen was introduced by squeezing the balloon while monitoring the pressure on a manometer. Senn had previously carried out this experiment on dogs, to the point of rupturing the intestine. Senn was a pioneer of using this technique to determine whether the bullet in a gunshot wound had penetrated the intestinal tract. In experiments on gunshot wounds to dogs, Senn verified that the gas escaping from the wound was hydrogen by setting light to it. Reports that Senn used helium in this experiment: 203  are almost certainly erroneous: helium was first detected on Earth in 1882, but it was not isolated until 1895, and extractable reserves were not found until 1903.
Notable examples: Synthetic gases. Humphry Davy self-experimented with the breathing of several different gases, most notably nitrous oxide.
Notable examples: Genes. Self-experimentation with gene therapies has been reported. Every gene therapy has a unique risk of harm, including the risk associated with the gene delivery method (i.e., the particular viral vector or form of transfection) that is used and the risk associated with a specific genetic modification. Examples of potential risks for some gene therapies include tissue damage and an immune response to foreign DNA, among many others.
Notable examples: Pain. Thomas Lewis and Jonas Kellgren studied pain in the 1930s. To do this, they injected hypertonic saline into various parts of their own bodies.: 45  In 1983, entomologist Justin O. Schmidt released a paper detailing what he called the Schmidt sting pain index, based on his own personal reactions to the stings of various insects of the order Hymenoptera, rating them on a range from 0 to 4. His revised 1990 paper covered 78 such species.
Notable examples: Physical experiments. Hanging: In the early 1900s Nicolae Minovici, a professor of forensic science in Bucharest, undertook a series of experiments into hanging. At first he put the noose around his neck while lying down and had an assistant put tension on the rope. He then moved on to full suspension by the neck. Finally, he attempted suspension with a slipping hangman's knot, but the pain was too great for him to continue. He could not swallow for a month. Minovici was determined to surpass a record set by Dr. Fleichmann of Erlangen, who, in 1832, self-asphyxiated for two minutes. However, Minovici could not get close to this and disbelieved Fleichmann.: 318–320  Minovici and Fleichmann are not the only ones to have self-experimented with strangulation. Graeme Hammond, a doctor in New York, tried it in 1882. Francis Bacon described an even earlier occasion, in 1623, when the self-experimenter stepped off a stool with a rope around his neck but was unable to regain his footing on the stool without assistance.: 316–317  Rapid acceleration: John Paul Stapp, in 1954, sat in a rocket sled fired along rails in a series of steadily more violent tests. Speeds reached 631 mph, almost the speed of sound; this remains the speed record for a manned rail vehicle. At the end of the track the sled hit a trough of water that brought it to a rapid stop in around 1.4 seconds. In the most severe test, Stapp underwent an acceleration of 20 g as the rocket engine accelerated the vehicle up to speed and 46 g of deceleration (also a record) as the vehicle was brought to a stop. Stapp suffered numerous injuries in these tests (previous animal tests had shown that limbs could be broken merely by being pulled into the air stream), as well as several concussions; in the last test blood vessels burst in his eyes, bloodying them.: 352–355  These tests were carried out for the US Air Force to determine the forces that pilots could be subjected to, and to enable better restraining straps to be designed.
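A quick back-of-the-envelope check, using only the figures quoted above (a standard kinematic estimate, not a number from the cited account), shows why the 46 g value is a peak rather than an average: 631 mph is about 282 m/s, so stopping in roughly 1.4 s implies a mean deceleration of

\[
\bar{a} = \frac{\Delta v}{\Delta t} \approx \frac{282\ \mathrm{m/s}}{1.4\ \mathrm{s}} \approx 201\ \mathrm{m/s^2} \approx 21g.
\]

Because the water trough did not slow the sled uniformly, the instantaneous load could rise well above this mean, consistent with the reported 46 g maximum.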
It is not known why dogs are more susceptible to the gas than humans.: 279–280  For other self-experiments by Barcroft, see § Temperature and pressure. Snake venom Tim Friede created his own vaccine against snakebite, using pure venom injections from all four species of mambas and four cobra species to achieve high immunity. He also survived anaphylactic shock six times during the development of his vaccine. Others have also injected venom to create immunity to snake venom: Bill Haast, Harold Mierkey, Ray Hunter, Joel La Rocque, Herschel Flowers, Martin Crimmins, and Charles Tanner. Notable examples: Tetrachloroethylene and carbon tetrachloride In 1921, Maurice Crowther Hall ingested carbon tetrachloride to test its safety with a view to its possible use as a treatment for hookworm. Hall reported mild side effects. Carbon tetrachloride has since been found to cause acute liver failure. In 1925, Hall ingested tetrachloroethylene (the most common dry cleaning fluid) for the same purpose.: 102  Radioactive materials and isotopes Gary Earl Leinbach, in 1972, swallowed radioactive iodine and a knife in a tube for a biopsy. Leinbach was investigating a new way of diagnosing steatorrhea.: 138  Kenneth Gordon Scott, in 1949, inhaled aerosols of plutonium and uranium.: 203  Heavy water In 1935, pharmacologist Klaus Hansen drank heavy water to determine its effects on living beings. After his first dose yielded no ill effects, he began taking increasing doses on a daily basis. A follow-up report released a year later confirmed that he was in good health, and he lived to the age of 75. Notable examples: Surgical and psychological procedures Cardiac catheterization Clinical application of cardiac catheterization began with Werner Forssmann in the 1930s, who inserted a catheter into the brachial vein of his own forearm, guided it fluoroscopically into his right atrium, and took an X-ray picture of it. Forssmann did this procedure without permission. He obtained the assistance of a nurse by deceiving her into believing that she was to be the subject of the experiment. He tied down her arms while inserting the catheter into his own arm, only releasing her once it was too late to intervene and he needed her assistance. Forssmann was twice fired for carrying out these self-experiments, but shared the Nobel Prize in Physiology or Medicine in 1956 for this achievement. Cardiac catheterization is now a routine procedure in heart surgery.: 51  Self-surgery There have been several cases of surgeons operating on themselves, but most often it has been in the nature of an emergency rather than experiment. Such a case was that of Leonid Rogozov, who was obliged to remove his own appendix in 1961 while stranded in Antarctica in winter. However, the first surgeon to carry out this self-operation, Evan O'Neill Kane in 1921, did so with an element of experiment. Although Kane's operation was necessary, it was not necessary for him to perform it on himself, so that choice was in itself experimental. More than that, Kane wished to experience the operation under local anaesthetic before trying the procedure on his patients. Kane advocated a reduction in the use of general anaesthetic by surgeons. In 2023, Michael Raduga, a Russian lucid dreaming researcher, performed self-neurosurgery that included trepanation, electrode implantation, and electrical stimulation of the motor cortex. Notable examples: Sensory deprivation John C.
Lilly developed the first sensory deprivation tanks and self-experimented with them, intending to study the origin of consciousness and its relation to the brain by creating an environment that isolates an individual from external stimulation. Notable examples: Temperature and pressure Joseph Barcroft, in 1920, spent six days in a sealed glass chamber to investigate respiration at altitude. The partial pressure of oxygen was initially 163 mmHg, falling to 84 mmHg (equivalent to an altitude of 18,000 ft) as the experiment progressed. Barcroft was attempting to disprove a theory of John Scott Haldane that the lungs actively secrete oxygen into the blood (rather than just through the process of passive diffusion) under conditions of low oxygen partial pressure. Barcroft suffered from severe hypoxia. At the end of the experiment, part of Barcroft's left radial artery was removed for investigation.: 274–279  In 1931, Barcroft subjected himself to freezing temperatures while naked. Towards the end of the experiment he showed signs of the final stages of hypothermia. He was thought to be close to death and had to be rescued by colleagues.: 321–322  Neural implant Kevin Warwick had an array of 100 electrodes fired into the median nerve fibres of his left arm. With this in place, over a 3-month period, he conducted a number of experiments linking his nervous system with the internet. Notable examples: Neural adaptation to immobilization Nico Dosenbach wore a pink cast over his (unbroken) right arm for two weeks in order to examine how brain circuits controlling movement are affected by immobilizing illnesses or injuries. He underwent a 30-minute resting-state fMRI scan daily and identified a previously unknown pattern of pulses in the rs-fMRI signal in the motor regions controlling the disused limb.
**Reception** Reception: Reception is the act of receiving something, such as art, experience, information, people, products, or vehicles. It may refer to: Astrology: Reception (astrology), when a planet is located in a sign ruled by another planet Mutual reception, when two planets are in each other's signs of rulership Events and rites: Reception, a formal party, where the guests are "received" (welcomed) by the hosts and guests of honor Wedding reception, where the guests are "received" (welcomed) by the hosts and guests of honor Rite of Reception, see Reception into the full communion of the Catholic Church Films: Reception (film), a 2011 short film The Reception (film), a 2005 film The Reception (1989 film), a 1989 Canadian film directed by Robert Morin Law: Doctrine of reception, in English law Jurisprudential reception, a legal theory Reception statute, a statutory law adopted as a former British colony becomes independent Other uses: Reception (gridiron football), a play where the ball is received (caught) by a player on the thrower's team Reception (school), in England, Wales and South Australia, the first year of primary school A desk or area where a receptionist serves as the initial contact person to visitors In telecommunications, the action of an electronic receiver, such as for radio or remote control Television reception Reception theory, a version of reader-response literary theory, also referred to as audience reception
**Accompaniment** Accompaniment: Accompaniment is the musical part which provides the rhythmic and/or harmonic support for the melody or main themes of a song or instrumental piece. There are many different styles and types of accompaniment in different genres and styles of music. In homophonic music, the main accompaniment approach used in popular music, a clear vocal melody is supported by subordinate chords. In popular music and traditional music, the accompaniment parts typically provide the "beat" for the music and outline the chord progression of the song or instrumental piece. The accompaniment for a vocal melody or instrumental solo can be played by a single musician playing an instrument such as piano, pipe organ, or guitar. While any instrument can in theory be used as an accompaniment instrument, keyboard and guitar-family instruments tend to be used if there is only a single instrument, as these instruments can play chords and basslines simultaneously (chords and a bassline are easier to play simultaneously on keyboard instruments, but a fingerpicking guitarist can play chords and a bassline simultaneously on guitar). A solo singer can accompany themself by playing guitar or piano while they sing, and in some rare cases, a solo singer can even accompany themself just using their voice and body (e.g., Bobby McFerrin). Accompaniment: Alternatively, the accompaniment to a vocal melody or instrumental solo can be provided by a musical ensemble, ranging in size from a duo (e.g., cello and piano; guitar and double bass; synthesizer and percussion); a trio (e.g., a rock power trio of electric guitar, electric bass and drum kit; an organ trio); a quartet (e.g., a string quartet in Classical music can accompany a solo singer; a rock band or rhythm section in rock and pop; a jazz quartet in jazz); all the way to larger ensembles, such as concert bands, Big Bands (in jazz), pit orchestras in musical theatre; and orchestras, which, in addition to playing symphonies, can also provide accompaniment to a concerto solo instrumentalist or to solo singers in opera. With choral music, the accompaniment to a vocal solo can be provided by other singers in the choir, who sing harmony parts or countermelodies. Accompaniment parts range from so simple that a beginner can play them (e.g., simple three-note triad chords in a traditional folk song) to so complex that only an advanced player or singer can perform them (e.g., the piano parts in Schubert's Lieder art songs from the 19th century or vocal parts from a Renaissance music motet). Definition: An accompanist is a musician who plays an accompaniment part. Accompanists often play keyboard instruments (e.g., piano, pipe organ, synthesizer) or, in folk music and traditional styles, a guitar. While sight-reading (the ability to play a notated piece of music without preparing it) is important for many types of musicians, it is essential for professional accompanists. In auditions for musical theater and orchestras, an accompanist will often have to sight read music. Definition: A number of classical pianists have found success as accompanists rather than soloists; arguably the best known example is Gerald Moore, well known as a Lieder accompanist. 
In some American schools, the term collaborative piano is used, and hence the title "collaborative pianist" (or collaborative artist) is replacing the title accompanist, because in many art songs and contemporary classical music songs, the piano part is complex and demands an advanced level of musicianship and technique. The term accompanist also refers to a musician (typically a pianist) who plays for singers, dancers, and other performers at an audition or rehearsal, but who does not necessarily participate in the ensemble that plays for the final performance (which might be an orchestra or a big band). Definition: Accompaniment figure An accompaniment figure is a musical gesture used repeatedly in an accompaniment, such as: Alberti bass and other arpeggio figures Ostinato figures (repeated lines) or, in popular music, riffs. Notated accompaniment may be indicated obbligato (obliged) or ad libitum (at one's pleasure). Dialogue accompaniment Dialogue accompaniment is a form of call and response in which the lead and accompaniment alternate, the accompaniment playing during the rests of the lead and providing a drone or silence during the main melody or vocal. Notation and improvisation: The accompaniment instrumentalists and/or singers can be provided with a fully notated accompaniment part written or printed on sheet music. This is the norm in Classical music and in most large ensemble writing (e.g., orchestra, pit orchestra, choir). In popular music and traditional music, the accompaniment instrumentalists often improvise their accompaniment, either based on a lead sheet or chord chart which indicates the chords used in the song or piece (e.g., C Major, d minor, G7, or Nashville Numbers or Roman numerals, such as I, ii, V7, etc.) or by "playing by ear". To achieve a stylistically correct sound, the accompaniment pattern should recall or imitate the original version, using similar rhythms and patterns. Chord-playing musicians (e.g., those playing guitar, piano, Hammond organ, etc.) can improvise chords, "fill-in" melodic lines and solos from the chord chart. It is rare for chords to be fully written out in music notation in pop and traditional music. Some guitarists, bassists and other stringed instrumentalists read accompaniment parts using tablature (or "tab"), a notation system which shows the musician where on the instrument to play the notes. Drummers can play accompaniment by following the lead sheet, a sheet music part in music notation, or by playing by ear. In pop and traditional music, bass players, who may play upright bass, electric bass, or another instrument such as bass synth depending on the style of music, are usually expected to be able to improvise a bassline from a chord chart or learn the song from a recording. In some cases, an arranger or composer may give a bassist a bass part that is fully written out in music notation. In some arranged music parts, there is a mix of written-out accompaniment and improvisation. For example, in a big band bass part, the introduction and melody ("head") to a tune may have a fully notated bassline, but then for the improvised solos, the arranger may just write out chord symbols (e.g., Bb G7/c min F7), with the expectation that the bassist improvise a walking bass part.
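Since reading a Roman-numeral chord chart is a mechanical mapping from scale degrees to chords, a short sketch can make the idea concrete. The following Python snippet is illustrative only: the function name, the flat-based note spellings, and the omission of the diminished quality of the vii chord are simplifying assumptions of the sketch, not a music-notation standard.

```python
# Diatonic triads of a major key, as used when reading Roman-numeral chord
# charts (uppercase numeral = major chord, lowercase = minor, "7" = seventh).
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]          # semitones above the tonic
NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F",
              "Gb", "G", "Ab", "A", "Bb", "B"]      # flat spellings only
DEGREES = {"I": 0, "ii": 1, "iii": 2, "IV": 3, "V": 4, "vi": 5, "vii": 6}

def chord_for(numeral, key="C"):
    """Resolve a Roman numeral (e.g., 'ii' or 'V7') to a chord symbol in a key.
    Simplification: vii is rendered as minor rather than diminished."""
    seventh = numeral.endswith("7")
    degree = DEGREES[numeral.rstrip("7")]
    root = (NOTE_NAMES.index(key) + MAJOR_SCALE_STEPS[degree]) % 12
    quality = "" if numeral[0].isupper() else "m"
    return NOTE_NAMES[root] + quality + ("7" if seventh else "")

# The common ii-V7-I progression in C major resolves to: ['Dm', 'G7', 'C']
print([chord_for(n) for n in ("ii", "V7", "I")])
```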
**Calculus (medicine)** Calculus (medicine): A calculus (pl.: calculi), often called a stone, is a concretion of material, usually mineral salts, that forms in an organ or duct of the body. Formation of calculi is known as lithiasis. Stones can cause a number of medical conditions. Some common principles (below) apply to stones at any location, but for specifics see the particular stone type in question. Calculi are not to be confused with gastroliths. Types: Calculi in the urinary system are called urinary calculi and include kidney stones (also called renal calculi or nephroliths) and bladder stones (also called vesical calculi or cystoliths). They can have any of several compositions, including mixed. Principal compositions include oxalate and urate. Calculi of the gallbladder and bile ducts are called gallstones and are primarily developed from bile salts and cholesterol derivatives. Calculi in the nasal passages (rhinoliths) are rare. Calculi in the gastrointestinal tract (enteroliths) can be enormous. Individual enteroliths weighing many pounds have been reported in horses. Calculi in the stomach are called gastric calculi (not to be confused with gastroliths, which are exogenous in nature). Calculi in the salivary glands are called salivary calculi (sialoliths). Calculi in the tonsils are called tonsillar calculi (tonsilloliths). Calculi in the veins are called venous calculi (phleboliths). Calculi in the skin, such as in sweat glands, are not common but occasionally occur. Calculi in the navel are called omphaloliths. Calculi are usually asymptomatic, and large calculi may have required many years to grow to their large size. Cause: Stones may form from an underlying abnormal excess of the mineral, e.g., elevated levels of calcium (hypercalcaemia) that may cause kidney stones, or dietary factors for gallstones; or from local conditions at the site in question that promote their formation, e.g., local bacterial action (in kidney stones) or slower fluid flow rates, a possible explanation for the majority of salivary duct calculi occurring in the submandibular salivary gland. Enteroliths are a type of calculus found in the intestines of animals (mostly ruminants) and humans, and may be composed of inorganic or organic constituents. Cause: Bezoars are lumps of indigestible material in the stomach and/or intestines; most commonly, they consist of hair (in which case they are also known as hairballs). A bezoar may form the nidus of an enterolith. In kidney stones, calcium oxalate is the most common mineral type (see Nephrolithiasis). Uric acid is the second most common mineral type, but an in vitro study showed that uric acid stones and crystals can promote the formation of calcium oxalate stones.
Pathophysiology: Stones can cause disease by several mechanisms: Irritation of nearby tissues, causing pain, swelling, and inflammation Obstruction of an opening or duct, interfering with normal flow and disrupting the function of the organ in question Predisposition to infection (often due to disruption of normal flow) A number of important medical conditions are caused by stones: Nephrolithiasis (kidney stones) Can cause hydronephrosis (swollen kidneys) and kidney failure Can predispose to pyelonephritis (kidney infections) Can progress to urolithiasis Urolithiasis (urinary bladder stones) Can progress to bladder outlet obstruction Cholelithiasis (gallstones) Can predispose to cholecystitis (gall bladder infections) and ascending cholangitis (biliary tree infection) Can progress to choledocholithiasis (gallstones in the bile duct) and gallstone pancreatitis (inflammation of the pancreas) Gastric calculi can cause colic, obstruction, torsion, and necrosis. Diagnosis: Diagnostic workup varies by the stone type, but in general: Clinical history and physical examination Imaging studies Some stone types (mainly those with substantial calcium content) can be detected on X-ray and CT scan Many stone types can be detected by ultrasound Factors contributing to stone formation (as under Cause above) are often tested: Laboratory testing can give levels of relevant substances in blood or urine Some stones can be directly recovered (at surgery, or when they leave the body spontaneously) and sent to a laboratory for analysis of content Treatment: Modification of predisposing factors can sometimes slow or reverse stone formation. Treatment varies by stone type, but, in general: Healthy diet and exercise (promotes flow of energy and nutrition) Drinking fluids (water and electrolytes such as lemon juice or diluted vinegar, e.g., in pickles, salad dressings, sauces, soups, and shrub cocktails) Surgery (lithotomy) Medication / antibiotics Extracorporeal shock wave lithotripsy (ESWL) for removal of calculi History: The earliest operation for curing stones is given in the Sushruta Samhita (6th century BCE). The operation involved exposure and going up through the floor of the bladder. The care of this disease was forbidden to physicians who had taken the Hippocratic Oath, because there was a high probability of intraoperative and postoperative surgical complications such as infection or bleeding, and because in ancient cultures medicine and surgery were two different professions, so physicians would not perform surgery. Etymology: The word comes from Latin calculus "small stone", from calx "limestone, lime", probably related to Greek χάλιξ chalix "small stone, pebble, rubble", which many trace to a Proto-Indo-European root for "split, break up". Calculus was a term used for various kinds of stones. In the 18th century it came to be used for accidental or incidental mineral buildups in human and animal bodies, like kidney stones and minerals on teeth.
**Claw-free graph** Claw-free graph: In graph theory, an area of mathematics, a claw-free graph is a graph that does not have a claw as an induced subgraph. Claw-free graph: A claw is another name for the complete bipartite graph K1,3 (that is, a star graph comprising three edges, three leaves, and a central vertex). A claw-free graph is a graph in which no induced subgraph is a claw; i.e., no set of four vertices induces exactly three edges meeting at one of the four vertices. Equivalently, a claw-free graph is a graph in which the neighborhood of any vertex is the complement of a triangle-free graph. Claw-free graph: Claw-free graphs were initially studied as a generalization of line graphs, and gained additional motivation through three key discoveries about them: the fact that all claw-free connected graphs of even order have perfect matchings, the discovery of polynomial time algorithms for finding maximum independent sets in claw-free graphs, and the characterization of claw-free perfect graphs. They are the subject of hundreds of mathematical research papers and several surveys. Examples: The line graph L(G) of any graph G is claw-free; L(G) has a vertex for every edge of G, and vertices are adjacent in L(G) whenever the corresponding edges share an endpoint in G. A line graph L(G) cannot contain a claw, because if three edges e1, e2, and e3 in G all share endpoints with another edge e4 then by the pigeonhole principle at least two of e1, e2, and e3 must share one of those endpoints with each other. Line graphs may be characterized in terms of nine forbidden subgraphs; the claw is the simplest of these nine graphs. This characterization provided the initial motivation for studying claw-free graphs. Examples: The de Bruijn graphs (graphs whose vertices represent n-bit binary strings for some n, and whose edges represent (n − 1)-bit overlaps between two strings) are claw-free. One way to show this is via the construction of the de Bruijn graph for n-bit strings as the line graph of the de Bruijn graph for (n − 1)-bit strings. The complement of any triangle-free graph is claw-free. These graphs include as a special case any complete graph. Proper interval graphs, the interval graphs formed as intersection graphs of families of intervals in which no interval contains another interval, are claw-free, because four properly intersecting intervals cannot intersect in the pattern of a claw. The same is true more generally for proper circular-arc graphs. The Moser spindle, a seven-vertex graph used to provide a lower bound for the chromatic number of the plane, is claw-free. Examples: The graphs of several polyhedra and polytopes are claw-free, including the graph of the tetrahedron and more generally of any simplex (a complete graph), the graph of the octahedron and more generally of any cross polytope (isomorphic to the cocktail party graph formed by removing a perfect matching from a complete graph), the graph of the regular icosahedron, and the graph of the 16-cell. Examples: The Schläfli graph, a strongly regular graph with 27 vertices, is claw-free. Recognition: It is straightforward to verify that a given graph with n vertices and m edges is claw-free in time O(n^4), by testing each 4-tuple of vertices to determine whether they induce a claw. With more efficiency, and greater complication, one can test whether a graph is claw-free by checking, for each vertex of the graph, that the complement graph of its neighbors does not contain a triangle.
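The neighborhood-based test just described is easy to express directly. Below is a minimal Python sketch (the function name and the adjacency-dictionary representation are assumptions of the sketch): a claw centered at v is exactly a triple of pairwise non-adjacent neighbors of v, i.e., a triangle in the complement of v's neighborhood.

```python
from itertools import combinations

def is_claw_free(adj):
    """adj: dict mapping each vertex to the set of its neighbors.
    A claw centered at v is a set of three pairwise non-adjacent
    neighbors of v, so we search each neighborhood for such a triple."""
    for v, nbrs in adj.items():
        for a, b, c in combinations(nbrs, 3):
            if b not in adj[a] and c not in adj[a] and c not in adj[b]:
                return False  # a, b, c are the leaves of a claw centered at v
    return True

# The star K1,3 itself is the smallest graph that fails the test:
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(is_claw_free(star))  # False
```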
A graph contains a triangle if and only if the cube of its adjacency matrix contains a nonzero diagonal element, so finding a triangle may be performed in the same asymptotic time bound as n × n matrix multiplication. Therefore, using the Coppersmith–Winograd algorithm, the total time for this claw-free recognition algorithm would be O(n^3.376). Recognition: Kloks, Kratsch & Müller (2000) observe that in any claw-free graph, each vertex has at most 2√m neighbors; for otherwise, by Turán's theorem, the neighbors of the vertex would not have enough remaining edges to form the complement of a triangle-free graph. This observation allows the check of each neighborhood in the fast matrix multiplication based algorithm outlined above to be performed in the same asymptotic time bound as 2√m × 2√m matrix multiplication, or faster for vertices with even lower degrees. The worst case for this algorithm occurs when Ω(√m) vertices have Ω(√m) neighbors each, and the remaining vertices have few neighbors, so its total time is O(m^(3.376/2)) = O(m^1.688). Enumeration: Because claw-free graphs include complements of triangle-free graphs, the number of claw-free graphs on n vertices grows at least as quickly as the number of triangle-free graphs, exponentially in the square of n. Enumeration: The numbers of connected claw-free graphs on n nodes, for n = 1, 2, ... are 1, 1, 2, 5, 14, 50, 191, 881, 4494, 26389, 184749, ... (sequence A022562 in the OEIS). If the graphs are allowed to be disconnected, the numbers of graphs are even larger: they are 1, 2, 4, 10, 26, 85, 302, 1285, 6170, ... (sequence A086991 in the OEIS). A technique of Palmer, Read & Robinson (2002) allows the number of claw-free cubic graphs to be counted very efficiently, which is unusual for graph enumeration problems. Matchings: Sumner (1974) and, independently, Las Vergnas (1975) proved that every claw-free connected graph with an even number of vertices has a perfect matching. That is, there exists a set of edges in the graph such that each vertex is an endpoint of exactly one of the matched edges. The special case of this result for line graphs implies that, in any graph with an even number of edges, one can partition the edges into paths of length two. Perfect matchings may be used to provide another characterization of the claw-free graphs: they are exactly the graphs in which every connected induced subgraph of even order has a perfect matching. Sumner's proof shows, more strongly, that in any connected claw-free graph one can find a pair of adjacent vertices the removal of which leaves the remaining graph connected. To show this, Sumner finds a pair of vertices u and v that are as far apart as possible in the graph, and chooses w to be a neighbor of v that is as far from u as possible; as he shows, neither v nor w can lie on any shortest path from any other node to u, so the removal of v and w leaves the remaining graph connected. Repeatedly removing matched pairs of vertices in this way forms a perfect matching in the given claw-free graph. Matchings: The same proof idea holds more generally if u is any vertex, v is any vertex that is maximally far from u, and w is any neighbor of v that is maximally far from u. Further, the removal of v and w from the graph does not change any of the other distances from u. Therefore, the process of forming a matching by finding and removing pairs vw that are maximally far from u may be performed by a single postorder traversal of a breadth first search tree of the graph, rooted at u, in linear time.
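Sumner's removal argument translates almost line for line into code. The sketch below is a simplified quadratic-time version that re-runs a breadth-first search for each removed pair rather than using the single linear-time traversal described above; the names are illustrative, and it assumes a connected claw-free graph with an even number of vertices, as the theorem requires.

```python
from collections import deque

def _bfs_distances(adj, root):
    """Distances from root in the current graph (vertex -> neighbor set)."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def sumner_perfect_matching(adj):
    """Perfect matching in a connected claw-free graph of even order.
    Repeatedly match a vertex v maximally far from a root u with a
    neighbor w of v maximally far from u; by Sumner's argument the
    remaining graph stays connected after removing the pair."""
    adj = {v: set(ns) for v, ns in adj.items()}   # local mutable copy
    matching = []
    while adj:
        u = next(iter(adj))
        dist = _bfs_distances(adj, u)
        v = max(dist, key=dist.get)                # farthest from u
        w = max(adj[v], key=lambda x: dist[x])     # v's farthest neighbor
        matching.append((v, w))
        for x in (v, w):                           # delete the matched pair
            for y in adj.pop(x):
                if y in adj:
                    adj[y].discard(x)
    return matching

# A 4-cycle is claw-free with even order; any output is a valid matching.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(sumner_perfect_matching(c4))
```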
Chrobak, Naor & Novick (1989) provide an alternative linear-time algorithm based on depth-first search, as well as efficient parallel algorithms for the same problem. Matchings: Faudree, Flandrin & Ryjáček (1997) list several related results, including the following: (r − 1)-connected K1,r-free graphs of even order have perfect matchings for any r ≥ 2; claw-free graphs of odd order with at most one degree-one vertex may be partitioned into an odd cycle and a matching; for any k that is at most half the minimum degree of a claw-free graph in which either k or the number of vertices is even, the graph has a k-factor; and, if a claw-free graph is (2k + 1)-connected, then any k-edge matching can be extended to a perfect matching. Independent sets: An independent set in a line graph corresponds to a matching in its underlying graph, a set of edges no two of which share an endpoint. The blossom algorithm of Edmonds (1965) finds a maximum matching in any graph in polynomial time, which is equivalent to computing a maximum independent set in line graphs. This has been independently extended to an algorithm for all claw-free graphs by Sbihi (1980) and Minty (1980). Both approaches use the observation that in claw-free graphs, no vertex can have more than two neighbors in an independent set, and so the symmetric difference of two independent sets must induce a subgraph of degree at most two; that is, it is a union of paths and cycles. In particular, if I is a non-maximum independent set, it differs from any maximum independent set by even cycles and so-called augmenting paths: induced paths which alternate between vertices not in I and vertices in I, and for which both endpoints have only one neighbor in I. As the symmetric difference of I with any augmenting path gives a larger independent set, the task thus reduces to searching for augmenting paths until no more can be found, analogously to algorithms for finding maximum matchings. Independent sets: Sbihi's algorithm recreates the blossom contraction step of Edmonds' algorithm and adds a similar, but more complicated, clique contraction step. Minty's approach is to transform the problem instance into an auxiliary line graph and use Edmonds' algorithm directly to find the augmenting paths. After a correction by Nakamura & Tamura (2001), Minty's result may also be used to solve in polynomial time the more general problem of finding in claw-free graphs an independent set of maximum weight. Generalizations of these results to wider classes of graphs are also known. Independent sets: By showing a novel structure theorem, Faenza, Oriolo & Stauffer (2011) gave a cubic time algorithm, which also works in the weighted setting. Coloring, cliques, and domination: A perfect graph is a graph in which the chromatic number and the size of the maximum clique are equal, and in which this equality persists in every induced subgraph. It is now known (by the strong perfect graph theorem) that perfect graphs may be characterized as the graphs that do not have as induced subgraphs either an odd cycle of length five or more (an odd hole) or the complement of such a cycle (an odd antihole). However, for many years this remained an unsolved conjecture, only proven for special subclasses of graphs. One of these subclasses was the family of claw-free graphs: it was discovered by several authors that claw-free graphs without odd holes and odd antiholes are perfect. Perfect claw-free graphs may be recognized in polynomial time.
In a perfect claw-free graph, the neighborhood of any vertex forms the complement of a bipartite graph. It is possible to color perfect claw-free graphs, or to find maximum cliques in them, in polynomial time. Coloring, cliques, and domination: In general, it is NP-hard to find the largest clique in a claw-free graph. It is also NP-hard to find an optimal coloring of the graph, because (via line graphs) this problem generalizes the NP-hard problem of computing the chromatic index of a graph. For the same reason, it is NP-hard to find a coloring that achieves an approximation ratio better than 4/3. However, an approximation ratio of two can be achieved by a greedy coloring algorithm, because the chromatic number of a claw-free graph is greater than half its maximum degree. A generalization of the edge list coloring conjecture states that, for claw-free graphs, the list chromatic number equals the chromatic number; these two numbers can be far apart in other kinds of graphs. The claw-free graphs are χ-bounded, meaning that every claw-free graph of large chromatic number contains a large clique. More strongly, it follows from Ramsey's theorem that every claw-free graph of large maximum degree contains a large clique, of size roughly proportional to the square root of the degree. For connected claw-free graphs that include at least one three-vertex independent set, a stronger relation between chromatic number and clique size is possible: in these graphs, there exists a clique of size at least half the chromatic number. Although not every claw-free graph is perfect, claw-free graphs satisfy another property, related to perfection. A graph is called domination perfect if it has a minimum dominating set that is independent, and if the same property holds in all of its induced subgraphs. Claw-free graphs have this property. To see this, let D be a dominating set in a claw-free graph, and suppose that v and w are two adjacent vertices in D; then the set of vertices dominated by v but not by w must be a clique (else v would be the center of a claw). If every vertex in this clique is already dominated by at least one other member of D, then v can be removed, producing a smaller dominating set; otherwise, v can be replaced by one of the undominated vertices in its clique, producing a dominating set with fewer adjacencies. By repeating this replacement process one eventually reaches a dominating set no larger than D, so in particular when the starting set D is a minimum dominating set this process forms an equally small independent dominating set. Despite this domination perfectness property, it is NP-hard to determine the size of the minimum dominating set in a claw-free graph. However, in contrast to the situation for more general classes of graphs, finding the minimum dominating set or the minimum connected dominating set in a claw-free graph is fixed-parameter tractable: it can be solved in time bounded by a polynomial in the size of the graph multiplied by an exponential function of the dominating set size. Structure: Chudnovsky & Seymour (2005) overview a series of papers in which they prove a structure theory for claw-free graphs, analogous to the graph structure theorem for minor-closed graph families proven by Robertson and Seymour, and to the structure theory for perfect graphs that Chudnovsky, Seymour and their co-authors used to prove the strong perfect graph theorem.
The theory is too complex to describe in detail here, but to give a flavor of it, it suffices to outline two of their results. First, for a special subclass of claw-free graphs which they call quasi-line graphs (equivalently, locally co-bipartite graphs), they state that every such graph has one of two forms: A fuzzy circular interval graph, a class of graphs represented geometrically by points and arcs on a circle, generalizing proper circular arc graphs. Structure: A graph constructed from a multigraph by replacing each edge by a fuzzy linear interval graph. This generalizes the construction of a line graph, in which every edge of the multigraph is replaced by a vertex. Fuzzy linear interval graphs are constructed in the same way as fuzzy circular interval graphs, but on a line rather than on a circle. Chudnovsky and Seymour classify arbitrary connected claw-free graphs into one of the following: Six specific subclasses of claw-free graphs. Three of these are line graphs, proper circular arc graphs, and the induced subgraphs of an icosahedron; the other three involve additional definitions. Structure: Graphs formed in four simple ways from smaller claw-free graphs. Structure: Antiprismatic graphs, a class of dense graphs defined as the claw-free graphs in which every four vertices induce a subgraph with at least two edges. Much of the work in their structure theory involves a further analysis of antiprismatic graphs. The Schläfli graph, a claw-free strongly regular graph with parameters srg(27,16,10,8), plays an important role in this part of the analysis. This structure theory has led to new advances in polyhedral combinatorics and new bounds on the chromatic number of claw-free graphs, as well as to new fixed-parameter-tractable algorithms for dominating sets in claw-free graphs.
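In connection with the dominating-set algorithms just mentioned, the domination-perfectness argument from the earlier paragraph is itself effectively an algorithm. The following Python sketch is illustrative only (the function name and representation are assumptions, and termination relies on the claw-freeness of the input, per the argument above); it converts a dominating set into an independent dominating set of no greater size.

```python
def independent_dominating_set(adj, D):
    """Turn a dominating set D of a claw-free graph (vertex -> neighbor set)
    into an independent dominating set no larger than D, following the
    replacement argument: pick adjacent v, w in D; the vertices dominated
    only by v form a clique, so either drop v or swap it for one of them.
    On claw-free inputs each step reduces adjacencies within D, so the
    loop terminates."""
    D = set(D)
    while True:
        pair = next(((v, w) for v in D for w in adj[v] if w in D), None)
        if pair is None:
            return D          # no two members of D are adjacent
        v, _w = pair
        rest = D - {v}
        covered = set().union(*({u} | adj[u] for u in rest))
        private = ({v} | adj[v]) - covered   # dominated by v alone
        D = rest                             # drop v ...
        if private:
            D.add(next(iter(private)))       # ... or swap in an uncovered vertex
```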
**Nystagmus** Nystagmus: Nystagmus is a condition of involuntary (or voluntary, in some cases) eye movement. People can be born with it but more commonly acquire it in infancy or later in life. In many cases it may result in reduced or limited vision. In normal eyesight, while the head rotates about an axis, distant visual images are sustained by rotating the eyes in the opposite direction about the respective axis. The semicircular canals in the vestibule of the ear sense angular acceleration and send signals to the nuclei for eye movement in the brain. From here, a signal is relayed to the extraocular muscles to allow one's gaze to fix on an object as the head moves. Nystagmus occurs when the semicircular canals are stimulated (e.g., by means of the caloric test, or by disease) while the head is stationary. The direction of ocular movement is related to the semicircular canal that is being stimulated. There are two key forms of nystagmus: pathological and physiological, with variations within each type. Nystagmus may be caused by congenital disorder or sleep deprivation, acquired or central nervous system disorders, toxicity, pharmaceutical drugs, alcohol, or rotational movement. Previously considered untreatable, in recent years several drugs have been identified for treatment of nystagmus. Nystagmus is also occasionally associated with vertigo. Causes: The cause of pathological nystagmus may be congenital, idiopathic, or secondary to a pre-existing neurological disorder. It also may be induced temporarily by disorientation (such as on roller coaster rides or when a person has been spinning in circles) or by some drugs (alcohol, lidocaine, and other central nervous system depressants, inhalant drugs, stimulants, psychedelics, and dissociative drugs). Early-onset nystagmus Early-onset nystagmus occurs more frequently than acquired nystagmus. It can be insular or accompany other disorders (such as micro-ophthalmic anomalies or Down syndrome). Early-onset nystagmus itself is usually mild and non-progressive. The affected persons are usually unaware of their spontaneous eye movements, but vision can be impaired depending on the severity of the eye movements. Causes: Types of early-onset nystagmus include the following, along with some of their causes: Infantile: Albinism Aniridia Bilateral congenital cataract Bilateral optic nerve hypoplasia Idiopathic Leber's congenital amaurosis Optic nerve or macular disease Persistent tunica vasculosa lentis Rod monochromatism Visual-motor syndrome of functional monophthalmus Latent nystagmus Noonan syndrome Nystagmus blockage syndrome X-linked infantile nystagmus is associated with mutations of the gene FRMD7, which is located on the X chromosome. Infantile nystagmus is also associated with two X-linked eye diseases known as complete congenital stationary night blindness (CSNB) and incomplete CSNB (iCSNB or CSNB-2), which are caused by mutations of one of two genes located on the X chromosome. In CSNB, mutations are found in NYX (nyctalopin). CSNB-2 involves mutations of CACNA1F, a voltage-gated calcium channel that, when mutated, does not conduct ions. Causes: Acquired nystagmus Nystagmus that occurs later in childhood or in adulthood is called acquired nystagmus. The cause is often unknown, or idiopathic, and thus referred to as idiopathic nystagmus. Other common causes include diseases and disorders of the central nervous system, metabolic disorders, and alcohol and drug toxicity. In the elderly, stroke is the most common cause.
Causes: General diseases and conditions Some of the diseases that present nystagmus as a pathological sign or symptom are as follows: Aniridia Benign paroxysmal positional vertigo Toxicity or intoxication, metabolic disorders, and combinations thereof Sources of toxicity that could lead to nystagmus: Thiamine deficiency Risk factors for thiamine deficiency, or beriberi, in turn include a diet of mostly white rice, as well as alcoholism, dialysis, chronic diarrhea, and taking high doses of diuretics. Rarely it may be due to a genetic condition that results in difficulties absorbing thiamine found in food. Wernicke encephalopathy and Korsakoff syndrome are forms of dry beriberi. Causes: Central nervous system (CNS) diseases and disorders With central nervous system disorders, such as a cerebellar problem, the nystagmus can be in any direction, including horizontal. Purely vertical nystagmus usually originates in the central nervous system, but it is also an adverse effect commonly seen in high phenytoin toxicity. Other sources of toxicity may also result in nystagmus. Other causes Non-physiological Trochlear nerve malfunction Vestibular pathology (Ménière's disease, SCDS (superior canal dehiscence syndrome), BPPV, vestibular neuritis) Exposure to strong magnetic fields (as in MRI machines) Long-term exposure to low light conditions or darkness, called miner's nystagmus after 19th-century coal miners who developed nystagmus from working in the dark. Causes: A slightly different form of nystagmus may be produced voluntarily by some people. Diagnosis: Nystagmus is highly noticeable but rarely recognized. Nystagmus can be clinically investigated by using a number of non-invasive standard tests. The simplest one is the caloric reflex test, in which one ear canal is irrigated with warm or cold water or air. The temperature gradient provokes the stimulation of the horizontal semicircular canal and the consequent nystagmus. Nystagmus is very commonly present with Chiari malformation.
Without the use of objective recording techniques, it may be very difficult to distinguish among these conditions. Diagnosis: In medicine, the presence of nystagmus can be benign, or it can indicate an underlying visual or neurological problem. Diagnosis: Pathologic nystagmus Pathological nystagmus is characterized by "excessive drifts of stationary retinal images that degrades vision and may produce illusory motion of the seen world: oscillopsia (an exception is congenital nystagmus)".When nystagmus occurs without fulfilling its normal function, it is pathologic (deviating from the healthy or normal condition). Pathological nystagmus is the result of damage to one or more components of the vestibular system, including the semicircular canals, otolith organs, and the vestibulocerebellum.Pathological nystagmus generally causes a degree of vision impairment, although the severity of such impairment varies widely. Also, many blind people have nystagmus, which is one reason that some wear dark glasses. Diagnosis: Variations Central nystagmus occurs as a result of either normal or abnormal processes not related to the vestibular organ. For example, lesions of the midbrain or cerebellum can result in up- and down-beat nystagmus. Gaze induced nystagmus occurs or is exacerbated as a result of changing one's gaze toward or away from a particular side which has an affected central apparatus. Peripheral nystagmus occurs as a result of either normal or diseased functional states of the vestibular system and may combine a rotational component with vertical or horizontal eye movements and may be spontaneous, positional, or evoked. Positional nystagmus occurs when a person's head is in a specific position. An example of disease state in which this occurs is Benign paroxysmal positional vertigo (BPPV). Post rotational nystagmus occurs after an imbalance is created between a normal side and a diseased side by stimulation of the vestibular system by rapid shaking or rotation of the head. Spontaneous nystagmus is nystagmus that occurs randomly, regardless of the position of the patient's head. Physiological nystagmus Physiological nystagmus is a form of involuntary eye movement that is part of the vestibulo-ocular reflex (VOR), characterized by alternating smooth pursuit in one direction and saccadic movement in the other direction. Diagnosis: Variations The direction of nystagmus is defined by the direction of its quick phase (e.g. a right-beating nystagmus is characterized by a rightward-moving quick phase, and a left-beating nystagmus by a leftward-moving quick phase). The oscillations may occur in the vertical, horizontal or torsional planes, or in any combination. The resulting nystagmus is often named as a gross description of the movement, e.g. downbeat nystagmus, upbeat nystagmus, seesaw nystagmus, periodic alternating nystagmus. Diagnosis: These descriptive names can be misleading, however, as many were assigned historically, solely on the basis of subjective clinical examination, which is not sufficient to determine the eyes' true trajectory. Diagnosis: Optokinetic (syn. opticokinetic) nystagmus: a nystagmus induced by looking at moving visual stimuli, such as moving horizontal or vertical lines, and/or stripes. For example, if one fixates on a stripe of a rotating drum with alternating black and white, the gaze retreats to fixate on a new stripe as the drum moves. This is first a rotation with the same angular velocity, then returns in a saccade in the opposite direction. 
The process proceeds indefinitely. This is optokinetic nystagmus, and is a source for understanding the fixation reflex. Diagnosis: Postrotatory nystagmus: if one spins in a chair continuously and stops suddenly, the fast phase of nystagmus is in the opposite direction of rotation, known as the "post-rotatory nystagmus", while the slow phase is in the direction of rotation. Treatment: Congenital nystagmus has long been viewed as untreatable, but medications have been discovered that show promise in some patients. In 1980, researchers discovered that a drug called baclofen could stop periodic alternating nystagmus. Subsequently, gabapentin, an anticonvulsant, led to improvement in about half the patients who took it. Other drugs found to be effective against nystagmus in some patients include memantine, levetiracetam, 3,4-diaminopyridine (available in the US to eligible patients with downbeat nystagmus at no cost under an expanded access program), 4-aminopyridine, and acetazolamide. Several therapeutic approaches, such as contact lenses, drugs, surgery, and low vision rehabilitation have also been proposed. For example, it has been proposed that mini-telescopic eyeglasses suppress nystagmus. Surgical treatment of congenital nystagmus is aimed at improving head posture, simulating artificial divergence, or weakening the horizontal recti muscles. Clinical trials of a surgery to treat nystagmus (known as tenotomy) concluded in 2001. Tenotomy is now being performed regularly at numerous centres around the world. The surgery aims to reduce the eye oscillations, which in turn tends to improve visual acuity. Acupuncture tests have produced conflicting evidence on its beneficial effects on the symptoms of nystagmus. Benefits have been seen in treatments in which acupuncture points of the neck were used, specifically points on the sternocleidomastoid muscle. Benefits of acupuncture for treatment of nystagmus include a reduction in frequency and decreased slow phase velocities, which led to an increase in foveation duration periods both during and after treatment. By the standards of evidence-based medicine, the quality of these studies is poor (for example, Ishikawa's study had a sample size of six subjects, was unblinded, and lacked proper controls), and given high quality studies showing that acupuncture has no effect beyond placebo, the results of these studies have to be considered clinically irrelevant until higher quality studies are performed. Treatment: Physical or occupational therapy is also used to treat nystagmus. Treatment consists of learning strategies to compensate for the impaired system. A Cochrane Review on interventions for eye movement disorders due to acquired brain injury, updated in June 2017, identified three studies of pharmacological interventions for acquired nystagmus but concluded that these studies provided insufficient evidence to guide treatment choices. Epidemiology: Nystagmus is a relatively common clinical condition, affecting one in several thousand people. A survey conducted in Oxfordshire, United Kingdom found that by the age of two, one in every 670 children had manifested nystagmus. Authors of another study in the United Kingdom estimated an incidence of 24 in 10,000 (c. 0.240%), noting an apparently higher rate amongst white Europeans than in individuals of Asian origin.
Law enforcement: In the United States, testing for horizontal gaze nystagmus is one of a battery of field sobriety tests used by police officers to determine whether a suspect is driving under the influence of alcohol. The test involves observation of the suspect's pupil as it follows a moving object, noting lack of smooth pursuit, distinct and sustained nystagmus at maximum deviation, and the onset of nystagmus prior to 45 degrees. The horizontal gaze nystagmus test has been heavily criticized, and major errors have been found in its testing methodology and analysis. However, the validity of the horizontal gaze nystagmus test for use as a field sobriety test for persons with a blood alcohol level between 0.04 and 0.08 is supported by peer-reviewed studies, and it has been found to be a more accurate indication of blood alcohol content than other standard field sobriety tests. Media about the condition: My Dancing Eyes, a documentary by filmmaker Matt Morris, had participants explain what it is like to live with the eye condition, and was released for free. It was featured on NBN News and ABC Radio Newcastle in Australia. Scottish filmmaker Mitchell McKechnie, who has congenital nystagmus, often uses the unique perspective the condition offers in his films.
**Death in Jainism** Death in Jainism: According to Jainism, Ātman (soul) is eternal and never dies. According to the Tattvartha Sutra, which is a compendium of Jain principles, the function of matter (pudgala) is to contribute to the pleasure, suffering, life, and death of living beings. Types of Deaths: According to Jain texts, there are 17 different types of death: Avici-marana Avadhimarana Atyantika-marana Vasaharta-marana Valana-marana Antahsalya-marana Tadhava-marana Bala-marana or Akama marana Pandita-marana or Sakama marana Balpandita-marana Chadmastha-marana Kevali-marana Vaihayasa-marana Guddhapristha-marana Bhaktapratyakhyana-marana Inginta-marana Padopagamana-marana Akama Marana & Sakama Marana Out of all 17 types of Marana, two are considered important: Akama Marana, which refers to someone who has an attachment to life and does not want to die, but dies when his life is over. Therefore, he has died helplessly and not of his own accord. According to Jainism, this person is often one who is willingly or unwillingly ignorant of the concepts of rebirth, other worlds, and liberation of the soul. Types of Deaths: Sakama Marana, which refers to someone who is not afraid of death and who accepts it willingly and at ease. They understand that there is no way to avoid death and that it is a natural process. Sakama Marana can be further divided into 4 types. These are Samadhi marana, anasana, santharo, and sallekhana.
**Representational difference analysis** Representational difference analysis: Representational difference analysis (RDA) is a technique used in biological research to find sequence differences in two genomic or cDNA samples. Genomes or cDNA sequences from two samples (e.g., a cancer sample and a normal sample) are PCR amplified and the differences analyzed using subtractive DNA hybridization. This technology has been further enhanced through the development of representation oligonucleotide microarray analysis (ROMA), which uses array technology to perform such analyses. This method may also be adapted to detect DNA methylation differences, as seen in methylation-sensitive representational difference analysis (MS-RDA). Theory: This method relies on PCR to differentially amplify non-homologous DNA regions between digested fragments of two nearly identical DNA species, called 'driver' and 'tester' DNA. Typically, tester DNA contains a sequence of interest that is non-homologous to driver DNA. When the two species are mixed, driver DNA is added in excess relative to the tester. During PCR, double-stranded fragments first denature at ~95 °C and then re-anneal when subjected to the annealing temperature. Since driver and tester sequences are nearly identical, the excess of driver DNA fragments will anneal to homologous DNA fragments from the tester species. This blocks PCR amplification, and there is no increase in homologous fragments. However, fragments that are different between the two species will not anneal to a complementary counterpart and will be amplified by PCR. As more cycles of RDA are performed, the pool of unique sequence fragment copies will grow faster than fragments found in both species.
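To see why repeated cycles enrich the unique fragments, consider a deliberately oversimplified model in which a tester fragment with a homolog in the (excess) driver is completely blocked, while a unique fragment doubles every cycle. The Python sketch below is a toy illustration under those assumptions (the names are made up for the example), not a model of real hybridization kinetics.

```python
def rda_cycles(tester, driver_set, cycles=10):
    """Toy model of subtractive amplification.
    tester: dict mapping fragment id -> copy number in the tester pool.
    driver_set: set of fragment ids present in the excess driver DNA.
    Fragments with a driver homolog re-anneal with driver and are not
    amplified; fragments unique to the tester double each cycle."""
    counts = dict(tester)
    for _ in range(cycles):
        for frag in counts:
            if frag not in driver_set:   # no driver homolog: exponential growth
                counts[frag] *= 2
            # homologous fragments stay blocked by driver annealing
    return counts

# After 10 cycles, the unique fragment is enriched ~2^10-fold over shared ones:
print(rda_cycles({"shared_A": 100, "unique_X": 100}, {"shared_A"}))
```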
**Swedish Whist** Swedish Whist: Swedish whist (Swedish: svensk whist), also called Fyrmanswhist ("Four-hand Whist") or, regionally, just whist, is a Swedish trick-taking card game. Knowing four-player whist is useful for playing other card games because it was the prototype for trick-taking games. History: The game emerged in the 1950s in Sweden, but first appeared in the literature in 1967. It may be a derivative of the classic Swedish game of Priffe. Swedish Whist was very popular in Sweden in the 1970s and 1980s. Description: Swedish whist is played by four players in teams of two using a standard 52-card pack, typically of the 'Modern Swedish' pattern. Cards rank in their natural order, aces high. The first dealer is chosen by lot and then rotates after each deal. The dealer deals all the cards, one by one. Players examine their hands and decide whether to play 'red' or 'black', i.e., whether they want to take as many tricks as possible (red) or as few as possible (black). Players indicate their choice by placing a red card (of the suit of hearts or diamonds) or a black card (of the suit of clubs or spades) at the bottom of their cards, bearing in mind that all players may see this card during the bidding. Forehand begins the bidding by showing his bottom card. If it is black, this is the equivalent of calling "pass" (pass) and the next player reveals his bottom card. If all four players show a black card, they play a black or 'null game' (nollspel), in which the aim is to lose tricks. As soon as a player reveals a red card, it is the equivalent of announcing "play" (spel). The bidding ends and a normal game is played, whereby teams aim to win tricks. Once the bidding is over, forehand leads to the first trick. Players must follow suit if able. When one side has announced they will play (red game), they get one point for each trick over six that they take. If they lose (i.e., take fewer than seven tricks), their opponents score double for each trick in excess of six. In a null game (black game), where neither side has announced an intention to win, the winners must take fewer than seven tricks. The winning side then gets one point for each trick under seven. Description: Game is usually 13 points. Although the rules are simple, the game requires good memory, strategy and skill.
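The scoring rules above fit in a few lines of code. The following Python sketch is illustrative (the function name and the convention of scoring from one side's trick count are assumptions of the sketch):

```python
def score_deal(red_game, side_tricks):
    """Score one deal of Swedish Whist for a 'side' holding `side_tricks`
    of the 13 tricks: the announcing side in a red game, or either side
    in a null game. Returns (side_points, other_side_points)."""
    other_tricks = 13 - side_tricks
    if red_game:
        if side_tricks > 6:
            return side_tricks - 6, 0        # 1 point per trick over six
        return 0, 2 * (other_tricks - 6)     # opponents score double
    # Null game: the side taking fewer than seven tricks wins,
    # scoring 1 point per trick under seven.
    if side_tricks < 7:
        return 7 - side_tricks, 0
    return 0, 7 - other_tricks

# Announcing side takes 8 tricks in a red game: 2 points.
# Announcing side takes only 5: opponents took 8, scoring 2 * 2 = 4.
print(score_deal(True, 8), score_deal(True, 5))
```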
**Supergroup (biology)** Supergroup (biology): A supergroup, in evolutionary biology, is a large group of organisms that share one common ancestor and have important defining characteristics. It is an informal, mostly arbitrary rank in biological taxonomy that is often greater than phylum or kingdom, although some supergroups are also treated as phyla. Eukaryotic supergroups: Since the 2000s, the eukaryotic tree of life (abbreviated as eToL) has been divided into 5–8 major groupings called 'supergroups'. These groupings were established following the principle that only monophyletic groups should be accepted as ranks, as an alternative to the use of the paraphyletic kingdom Protista. In the early days of the eToL six traditional supergroups were considered: Amoebozoa, Opisthokonta, "Excavata", Archaeplastida, "Chromalveolata" and Rhizaria. Since then, the eToL has been rearranged profoundly, and most of these groups were found to be paraphyletic or lacked defining morphological characteristics that unite their members, which makes the 'supergroup' label more arbitrary. Eukaryotic supergroups: Currently, the addition of many lineages of newly discovered protists (such as Telonemia, Picozoa, Hemimastigophora, Rigifilida...) and the use of phylogenomic analyses have brought a new, more accurate supergroup model. These are the current supergroups of eukaryotes: TSAR, constituted by Telonemia and the SAR clade (Stramenopiles, Alveolata and Rhizaria). It is estimated to occupy up to half of all eukaryotic diversity, since it includes multiple major groups such as diatoms, dinoflagellates, seaweeds, ciliates, foraminiferans, radiolarians, and the apicomplexan and oomycete parasites. It essentially contains the majority of "Chromalveolata". Eukaryotic supergroups: Haptista (also treated as a phylum), previously in "Chromalveolata", comprising the haptophyte algae and centrohelids. Cryptista (also treated as a phylum), previously in "Chromalveolata", comprising the cryptomonads, katablepharids and the enigmatic Palpitomonas. Archaeplastida (also treated as a kingdom), constituted by the lineages that acquired chloroplasts through primary endosymbiosis: Chloroplastida (green algae and land plants), Rhodophyta, Glaucophyta and Rhodelphis. Amorphea, composed of the Amoebozoa and the Opisthokonta (animals, fungi and related protists). They are related to the breviates and the apusomonads, and together form the clade Obazoa. CRuMs, composed of the free-living protozoan groups Collodictyonidae, Rigifilida and Mantamonas. Discoba, constituted by Discicristata (Euglenozoa and Heterolobosea), Jakobida and Tsukubamonas. It is the biggest remaining clade of the "Excavates". Metamonada, previously part of the "Excavates", entirely containing anaerobic protists. Hemimastigophora, previously an orphan clade but recently brought into Diaphoretickes. Eukaryotic supergroups: Provora, the most recent supergroup, containing the previously orphan Ancoracysta. Many orphan groups of free-living protozoa remain left behind, unable to be added to a supergroup, such as: Picozoa (possibly belongs to Archaeplastida with limited certainty), Malawimonadida (thought to be related to Metamonada), Ancyromonadida, Breviatea, Apusomonadida, etc. [Cladogram: a possible modern topology of the eToL, with supergroups labeled in bold.] Prokaryotic supergroups: The term 'supergroup' is used in phylogenetic studies of bacteria, in a more specific sense than within eukaryotes.
As of recently, it is very commonly used for naming clades within the genus Wolbachia.
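One way to make the topology sentence concrete is as a nested data structure. The sketch below uses only the parent/child relations stated above, plus the standard placement of TSAR, Haptista, Cryptista and Archaeplastida within Diaphoretickes; the deeper branching order between supergroups is deliberately left unresolved, and the class and field names are illustrative assumptions.

```java
import java.util.List;
import java.util.Map;

/** Rough rendering of the eukaryote supergroup topology described above.
 *  Branching order beyond these parent/child relations is left unresolved. */
public final class EukaryoteSupergroups {

    static final Map<String, List<String>> TOPOLOGY = Map.of(
        "Diaphoretickes", List.of("TSAR", "Haptista", "Cryptista",
                                  "Archaeplastida", "Hemimastigophora"),
        "TSAR",           List.of("Telonemia", "Stramenopiles", "Alveolata", "Rhizaria"),
        "Amorphea",       List.of("Amoebozoa", "Obazoa"),
        "Obazoa",         List.of("Opisthokonta", "Breviatea", "Apusomonadida"),
        "CRuMs",          List.of("Collodictyonidae", "Rigifilida", "Mantamonas"),
        "Discoba",        List.of("Discicristata", "Jakobida", "Tsukubamonas"),
        "Metamonada",     List.of(),   // anaerobic "excavates"
        "Provora",        List.of());  // most recently described supergroup

    public static void main(String[] args) {
        TOPOLOGY.forEach((group, members) ->
            System.out.println(group + (members.isEmpty() ? "" : " -> " + members)));
    }
}
```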
**Sebelipase alfa** Sebelipase alfa: Sebelipase alfa, sold under the brand name Kanuma, is a recombinant form of the enzyme lysosomal acid lipase (LAL) that is used as a medication for the treatment of lysosomal acid lipase deficiency (LAL-D). It is administered via intravenous infusion. It was approved for medical use in the European Union and in the United States in 2015. Medical uses: Sebelipase alfa is indicated for long-term enzyme replacement therapy (ERT) in people of all ages with lysosomal acid lipase (LAL) deficiency. History: Sebelipase alfa was developed by Synageva, which became part of Alexion Pharmaceuticals in 2015. For its production, chickens are genetically modified to produce the recombinant form of LAL (rhLAL) in their egg white. After extraction and purification, the protein becomes available as the medication. On 8 December 2015 the FDA announced that its approval came from two centers: the Center for Drug Evaluation and Research (CDER) approved the human therapeutic application of the medication, while the Center for Veterinary Medicine (CVM) approved the application for a recombinant DNA construct in genetically engineered chickens to produce rhLAL in their egg whites. At the time it gained FDA approval, Kanuma was the first and only drug manufactured in chicken eggs and intended for use in humans. Sebelipase alfa is an orphan drug; its effectiveness was published after a phase 3 trial in 2015. LAL deficiency affects fewer than 0.2 in 10,000 people in the EU.
**Internalnet** Internalnet: An internalnet is a computer network composed of devices inside and on the human body. Such a system could be used to link nanochondria, bionic implants, wearable computers, and other devices.
**Pocket schedule** Pocket schedule: A pocket schedule or fixtures card is a small card or foldable paper guide which typically fits in a wallet or pocket. A typical pocket schedule may contain a full season match schedule for the team, including dates, match times, and opponents. It may also include venue information, contact information for purchasing tickets, promotional events, major sponsor advertising, and radio and television broadcast data. Pocket schedule: It is most often used for marketing purposes by sports clubs or their sponsors, who often pay for the production and printing of the schedules. Pocket schedules for ice hockey date to at least the early 1900s, and those for baseball to at least 1903. They are part of sports memorabilia that are difficult to find in the secondary market because so many are damaged.
**Historically informed performance** Historically informed performance: Historically informed performance (also referred to as period performance, authentic performance, or HIP) is an approach to the performance of classical music which aims to be faithful to the approach, manner and style of the musical era in which a work was originally conceived. Historically informed performance: It is based on two key aspects: the application of the stylistic and technical aspects of performance, known as performance practice; and the use of period instruments, which may be reproductions of historical instruments that were in use at the time of the original composition and which usually have different timbre and temperament from their modern equivalents. A further area of study, that of changing listener expectations, is increasingly under investigation. Given that no sound recordings exist of music before the late 19th century, historically informed performance is largely derived from musicological analysis of texts. Historical treatises, pedagogic tutor books, and concert critiques, as well as additional historical evidence, are all used to gain insight into the performance practice of a historic era. Extant recordings (cylinders, discs, and reproducing piano rolls) from the 1890s onwards have enabled scholars of 19th-century Romanticism to gain a uniquely detailed understanding of this style, although significant questions remain. In all eras, HIP performers will normally use scholarly or urtext editions of a musical score as a basic template, while additionally applying a range of contemporaneous stylistic practices, including rhythmic alterations and ornamentation of many kinds. Historically informed performance was principally developed in a number of Western countries in the mid to late 20th century, ironically as a modernist response to the modernist break with earlier performance traditions. Initially concerned with the performance of Medieval, Renaissance, and Baroque music, HIP now encompasses music from the Classical and Romantic eras. HIP has been a crucial part of the early music revival movement of the 20th and 21st centuries, and has begun to affect the theatrical stage, for instance in the production of Baroque opera, where historically informed approaches to acting and scenery are also used. Some critics contest the methodology of the HIP movement, contending that its selection of practices and aesthetics is a product of the 20th century and that it is ultimately impossible to know what performances of an earlier time sounded like. Naturally, the older the style and repertoire, the greater the cultural distance and the possibility of misunderstanding the evidence. For this reason, the term "historically informed" is now preferred to "authentic", as it acknowledges the limitations of academic understanding rather than implying absolute accuracy in recreating historical performance style, or, worse, a moralising tone. Early instruments: The choice of musical instruments is an important part of the principle of historically informed performance. Musical instruments have evolved over time, and instruments that were in use in earlier periods of history are often quite different from their modern equivalents. Many other instruments have fallen out of use, having been replaced by newer tools for creating music. For example, prior to the emergence of the modern violin, other bowed stringed instruments such as the rebec or the viol were in common use.
The existence of ancient instruments in museum collections has helped musicologists to understand how the different design, tuning and tone of instruments may have affected earlier performance practice. As well as being research tools, historic instruments have an active role in the practice of historically informed performance. Modern instrumentalists who aim to recreate a historic sound often use modern reproductions of period instruments (and occasionally original instruments) on the basis that this will deliver a musical performance thought to be historically faithful to the original work, as the original composer would have heard it. For example, a modern music ensemble staging a performance of music by Johann Sebastian Bach may play reproduction Baroque violins instead of modern instruments in an attempt to create the sound of an 18th-century Baroque orchestra. This has led to the revival of musical instruments that had entirely fallen out of use, and to a reconsideration of the role and structure of instruments also used in current practice. Orchestras and ensembles that are noted for their use of period instruments in performances include the Taverner Consort and Players (directed by Andrew Parrott), the Academy of Ancient Music (Christopher Hogwood), the Concentus Musicus Wien (Nikolaus Harnoncourt), The English Concert (Trevor Pinnock), the Hanover Band (Roy Goodman), the English Baroque Soloists (Sir John Eliot Gardiner), Musica Antiqua Köln (Reinhard Goebel), the Amsterdam Baroque Orchestra & Choir (Ton Koopman), Les Arts Florissants (William Christie), La Petite Bande (Sigiswald Kuijken), La Chapelle Royale (Philippe Herreweghe), the Australian Brandenburg Orchestra (Paul Dyer), and the Freiburger Barockorchester (Gottfried von der Goltz). As the scope of historically informed performance has expanded to encompass the works of the Romantic era, the specific sound of 19th-century instruments has increasingly been recognised in the HIP movement, and period-instrument orchestras such as Gardiner's Orchestre Révolutionnaire et Romantique have emerged. Early instruments: Harpsichord A variety of once obsolete keyboard instruments such as the clavichord and the harpsichord have been revived, as they have particular importance in the performance of Early music. Before the evolution of the symphony orchestra led by a conductor, Renaissance and Baroque orchestras were commonly directed from the harpsichord; the director would lead by playing continuo, which would provide a steady harmonic structure upon which the other instrumentalists would embellish their parts. Many religious works of the era made similar use of the pipe organ, often in combination with a harpsichord. Historically informed performances frequently make use of keyboard-led ensemble playing. Early instruments: Composers such as François Couperin, Domenico Scarlatti, Girolamo Frescobaldi, and Johann Sebastian Bach wrote for the harpsichord, clavichord, and organ. Among the foremost modern players of the harpsichord are Scott Ross, Alan Curtis, William Christie, Christopher Hogwood, Robert Hill, Igor Kipnis, Ton Koopman, Wanda Landowska, Gustav Leonhardt, Trevor Pinnock, Skip Sempé, Andreas Staier, Colin Tilney, and Christophe Rousset. Early instruments: Fortepiano During the second half of the 18th century, the harpsichord was gradually replaced by the earliest pianos.
As the harpsichord went out of fashion, many were destroyed; indeed, the Paris Conservatory is notorious for having used harpsichords for firewood during the French Revolution and Napoleonic times. Although the names were originally interchangeable, 'fortepiano' is now used to indicate the earlier, smaller style of piano, with the more familiar 'pianoforte' used to describe the larger instruments approaching modern designs from around 1830. In the 20th and 21st centuries, the fortepiano has enjoyed a revival as a result of the trend for historically informed performance, with the works of Haydn, Mozart, Beethoven and Schubert now often played on the fortepiano. Increasingly, the early to mid 19th-century pianos of Pleyel, Érard, Streicher and others are being used to recreate the soundscape of Romantic composers such as Chopin, Liszt and Brahms. Early instruments: Many keyboard players who specialise in the harpsichord also specialise in the fortepiano and other period instruments. Keyboardists renowned for their fortepiano playing include Ronald Brautigam, Ingrid Haebler, Robert Levin, Malcolm Bilson and Tobias Koch. Viol A vast quantity of music for viols, for both ensemble and solo performance, was written by composers of the Renaissance and Baroque eras, including Diego Ortiz, Claudio Monteverdi, William Byrd, William Lawes, Henry Purcell, Monsieur de Sainte-Colombe, J.S. Bach, Georg Philipp Telemann, Marin Marais, Antoine Forqueray, and Carl Frederick Abel. Early instruments: From largest to smallest, the viol family consists of: the violone (two sizes: a contrabass an octave below the bass, and a great bass a fourth or fifth above); the bass viol (about the size of a cello); the tenor viol (about the size of a guitar); the alto viol (slightly smaller than the tenor); the treble or descant viol (about the size of a viola); and the pardessus de viole (about the size of a violin). Among the foremost modern players of the viols are Paolo Pandolfo, Wieland Kuijken, Jordi Savall, John Hsu, and Vittorio Ghielmi. There are many modern viol consorts. Early instruments: Recorder Recorders in multiple sizes (contrabass, bass, tenor, alto, soprano, sopranino, and the even smaller kleine sopranino or garklein) are often played today in consorts of mixed size. Handel and Telemann, among others, wrote solo works for the recorder. Arnold Dolmetsch did much to revive the recorder as a serious concert instrument, reconstructing a "consort of recorders (descant, treble, tenor and bass) all at low pitch and based on historical originals". Recorder players often start off as flautists and then transition to focusing on the recorder. Some famous recorder players include Frans Brüggen (also a renowned conductor of the period instrument movement), Michala Petri, Ashley Solomon and Giovanni Antonini. Handel and Telemann were also keen and skilled recorder players, the former of whom composed several sets of sonatas for recorder and basso continuo, although these are often better known in the flute transcriptions made during the mid-20th century. Singing: As with instrumental technique, the approach to historically informed performance practice for singers has been shaped by musicological research and academic debate. In particular, there was debate around the use of the technique of vibrato at the height of the Early music revival, and many advocates of HIP aimed to eliminate vibrato in favour of the "pure" sound of straight-tone singing.
The difference in style may be demonstrated by the sound of a boy treble in contrast to the sound of a grand opera singer such as Maria Callas. Certain historic vocal techniques have gained in popularity, such as the trillo, a tremolo-like repetition of a single note that was used for ornamental effect in the early Baroque era. Academic understanding of these expressive devices is often subjective, however, as many vocal techniques discussed by treatise writers in the 17th and 18th centuries have different meanings depending on the author. Despite the fashion for straight tone, many prominent Early music singers make use of a subtle, gentle form of vibrato to add expression to their performance. A few of the singers who have contributed to the historically informed performance movement are Emma Kirkby, Max van Egmond, Julianne Baird, Nigel Rogers, and David Thomas. Singing: The resurgence of interest in Early music, particularly in sacred Renaissance polyphony and Baroque opera, has driven a revival of the countertenor voice. High-voice male singers are often cast in preference to female contraltos in HIP opera productions, partly as a substitute for castrato singers. Alfred Deller is considered to have been a pioneer of the modern revival of countertenor singing. Leading contemporary performers include James Bowman, David Daniels, Derek Lee Ragin, Andreas Scholl, Michael Chance, Jakub Józef Orliński, Daniel Taylor, Brian Asawa, Yoshikazu Mera, and Philippe Jaroussky. Layout: Standard practice concerning the layout of a group of performers, for example in a choir or an orchestra, has changed over time. Determining a historically appropriate layout of singers and instruments on a performance stage may be informed by historical research. In addition to documentary evidence, musicologists may also turn to iconographic evidence (contemporary paintings and drawings of performing musicians) as a primary source for historic information. Pictorial sources may reveal various practices such as the size of an ensemble; the position of various types of instruments; their position in relation to a choir or keyboard instrument; the position or absence of a conductor; whether the performers are seated or standing; and the performance space (such as a concert hall, palace chamber, domestic house, church, or outdoors). The German theorist Johann Mattheson, in a 1739 treatise, states that the singers should stand in front of the instrumentalists. Three main layouts are documented: a circle (Renaissance); the choir in front of the instruments (17th–19th century); and singers and instruments next to each other in the choir loft. Recovering early performance practices: Interpreting musical notation Some familiar difficulties are as follows: Early composers often wrote using the same symbols as today, yet with different, often context-dependent, meanings. For example, what is written as an appoggiatura is often meant to be longer or shorter than the notated length, and even in scores as late as the 19th century there is disagreement over the meaning (dynamic and/or agogic) of hairpins. Recovering early performance practices: The notation may be partial. For example, the note durations may be omitted altogether, as in unmeasured preludes, pieces written without rhythm or metre indications. Even when the notation is comprehensive, non-notated changes are usually required, such as rhythmic shaping of passagework, pauses between sections, or additional arpeggiation of chords. Cuts and repetitions were common.
The music may be written using alternative, non-modern notations, such as tablature. Some tablature notations are only partially decoded, such as the notation in the harp manuscript by Robert ap Huw. The reference pitch of earlier music cannot generally be interpreted as designating the same pitch used today. Various tuning systems (temperaments) are used; composers generally assumed the player would choose the temperament and did not indicate it in the score. In most ensemble music up to the early Baroque, the actual musical instruments to be used are not indicated in the score, and must be partially or totally chosen by the performers. A well-discussed example can be found in Monteverdi's L'Orfeo, where the indications on which instruments to use are partial and limited to critical sections only. Issues of pronunciation, which impact musical accents, carry over to church Latin, the language in which a large amount of early vocal music was written. The reason is that Latin was customarily pronounced using the speech sounds and patterns of the local vernacular language. Mechanical music Some information about how music sounded in the past can be obtained from contemporary mechanical instruments. For instance, the Dutch Museum Speelklok owns an 18th-century mechanical organ whose music programme was composed and supervised by Joseph Haydn. Tuning and pitch Until the modern era, different tuning references have been used in different venues. The Baroque oboist Bruce Haynes extensively investigated surviving wind instruments and even documented a case of violinists having to retune by a minor third to play at neighboring churches. Recovering early performance practices: Iconographic evidence The research of musicologists often overlaps with the work of art historians; by examining paintings and drawings of performing musicians contemporary to a particular musical era, academics can infer details about the performance practice of the day. In addition to showing the layout of an orchestra or ensemble, a work of art may reveal detail about contemporary playing techniques, for example the manner of holding a bow or a wind player's embouchure. However, just as an art historian must evaluate a work of art, a scholar of musicology must also assess the musical evidence of a painting or illustration in its historical context, taking into consideration the potential cultural and political motivations of the artist and allowing for artistic license. An historic image of musicians may present an idealised or even fictional account of musical instruments, and there is as much a risk that it may give rise to a historically misinformed performance. Issues: Opinions on how artistic and academic motivations should translate into musical performance vary. Though championing the need to attempt to understand a composer's intentions in their historical context, Ralph Kirkpatrick highlights the risk of using historical exoterism to hide technical incompetence: "too often historical authenticity can be used as a means of escape from any potentially disquieting observance of esthetic values, and from the assumption of any genuine artistic responsibility.
The abdication of esthetic values and artistic responsibilities can confer a certain illusion of simplicity on what the passage of history has presented to us, bleached as white as bones on the sands of time". Early music scholar Beverly Jerold has questioned the string technique of historically informed musicians, citing accounts of Baroque-era concert-goers describing nearly the opposite practice. Similar criticism has been leveled at the practices of historically informed vocalists. Issues: Some proponents of the Early music revival have distanced themselves from the terminology of "authentic performance". The conductor John Eliot Gardiner has expressed the view that the term can be "misleading", and has stated, "My enthusiasm for period instruments is not antiquarian or in pursuit of a spurious and unattainable authenticity, but just simply as a refreshing alternative to the standard, monochrome qualities of the symphony orchestra." Daniel Leech-Wilkinson concedes that much of HIP practice is based on invention: "Historical research may provide us with instruments, and sometimes even quite detailed information on how to use them; but the gap between such evidence and a sounding performance is still so great that it can be bridged only by a large amount of musicianship and invention. Exactly how much is required can easily be forgotten, precisely because the exercise of musical invention is so automatic to the performer." Leech-Wilkinson concludes that performance styles in early music "have as much to do with current taste as with accurate reproduction." This is probably over-pessimistic. More recently, Andrew Snedden has suggested that HIP reconstructions are on firmer ground when approached in the context of a cultural exegesis of the era, examining not merely how musicians played but why they played as they did, and what cultural meaning is embedded in the music. Issues: In the conclusion of his study of early twentieth-century orchestral recordings, Robert Philip states that the notion that "what sounds tasteful now probably sounded tasteful in earlier periods" is a fundamental but flawed assumption behind much of the historical performance movement. Having spent the entire book examining rhythm, vibrato, and portamento, Philip states that this fallacy causes adherents of historical performance to select arbitrarily what they find acceptable and to ignore evidence of performance practice that goes against modern taste. Reception: In his book The Aesthetics of Music, the British philosopher Roger Scruton wrote that "the effect [of HIP] has frequently been to cocoon the past in a wad of phoney scholarship, to elevate musicology over music, and to confine Bach and his contemporaries to an acoustic time-warp. The tired feeling which so many 'authentic' performances induce can be compared to the atmosphere of a modern museum.... [The works of early composers] are arranged behind the glass of authenticity, staring bleakly from the other side of an impassable screen". A number of scholars see the HIP movement essentially as a 20th-century invention. Writing about the periodical Early Music (one of the leading periodicals about historically informed performance), Peter Hill noted "All the articles in Early Music noted in varying ways the (perhaps fatal) flaw in the 'authenticity' position.
This is that the attempt to understand the past in terms of the past is—paradoxically—an absolutely contemporary phenomenon." One of the more skeptical voices about the historically informed performance movement has been Richard Taruskin. His thesis is that the practice of unearthing supposedly historically informed practices is actually a 20th-century practice influenced by modernism, and that, ultimately, we can never know what music sounded like or how it was played in previous centuries. "What we had been accustomed to regard as historically authentic performances, I began to see, represented neither any determinable historical prototype nor any coherent revival of practices coeval with the repertories they addressed. Rather, they embodied a whole wish list of modern(ist) values, validated in the academy and the marketplace alike by an eclectic, opportunistic reading of historical evidence." "'Historical' performers who aim 'to get to the truth'...by using period instruments and reviving lost playing techniques actually pick and choose from history's wares. And they do so in a manner that says more about the values of the late twentieth century than about those of any earlier era." In her book The Imaginary Museum of Musical Works: An Essay in the Philosophy of Music, Lydia Goehr discusses the aims and fallacies of both proponents and critics of the HIP movement. She claims that the HIP movement itself came about during the latter half of the 19th century as a reaction to the way modern techniques were being imposed upon music of earlier times. Thus performers were concerned with achieving an "authentic" manner of performing music, an ideal that carries implications for all those involved with music. She distills the late 20th-century arguments into two points of view: achieving either fidelity to the conditions of performance, or fidelity to the musical work. She succinctly summarizes the critics' arguments (for example, that the movement is anachronistic, selectively imputing current performance ideas on early music), but then concludes that what the HIP movement has to offer is a different manner of looking at and listening to music: "It keeps our eyes open to the possibility of producing music in new ways under the regulation of new ideals. It keeps our eyes open to the inherently critical and revisable nature of our regulative concepts. Most importantly, it helps us overcome that deep‐rooted desire to hold the most dangerous of beliefs, that we have at any time got our practices absolutely right." What is clear is that a narrowly musicological approach to stylistic reconstruction is both modernist in culture and inauthentic as a living performance, an approach termed 'deadly theatre' by Peter Brook.
**Cirrus duplicatus** Cirrus duplicatus: Cirrus duplicatus is a variety of cirrus cloud. The name cirrus duplicatus is derived from Latin, meaning "double". The duplicatus variety of cirrus clouds occurs when there are at least two layers of cirrus clouds. Most of the time, occurrences of cirrus fibratus and cirrus uncinus are in the duplicatus form. Like stratus clouds, cirrus clouds are often seen in the duplicatus form.
**3Com Audrey** 3Com Audrey: The 3Com Ergo Audrey is a discontinued internet appliance from 3Com. It was released to the public on October 17, 2000, for US$499 as the only device in the company's "Ergo" initiative to be sold. Once connected to an appropriate provider, users could access the internet, send and receive e-mail, play audio and video, and synchronize with up to two Palm OS-based devices. 3Com Audrey: Audrey was the brainchild of Don Fotsch (formerly of Apple Computer and U.S. Robotics) and Ray Winninger. The two had a vision for a family of appliances, each designed for a specific room in the house; the brand Ergo was meant to convey that intent, as in "it's in the kitchen, ergo it's designed that way". There were plans to serve other rooms in the house as well. They considered the kitchen to be the heart of the home and the control room for the home manager, and Fotsch coined the phrase "Internet Snacking" to describe the lightweight web browsing done in this environment. The name Audrey was given to this first product in honor of Audrey Hepburn; it was meant to deliver the elegance that she exuded. The project codename was "Kojak", after the Telly Savalas character, and the follow-on product targeted for the family room was code-named "Mannix". 3Com Audrey: 3Com discontinued the product on June 1, 2001, in the wake of the dot-com crash, after only seven and a half months on the market. Only 3Com direct customers received full refunds for the product and accessories; customers who had bought Audrey devices through other vendors were not offered refunds and were never even notified about them. The remaining Audrey hardware was liquidated and embraced by the hardware hacker community. Hardware: The Audrey is a touchscreen device with a passive-matrix LCD and came equipped with a stylus. All applications were touch-enabled. Since the standard infrared keyboard was only needed for typing tasks, it could be hung out of the way on the rear of the unit. The stylus was placed in a receptacle on the top of the screen, with an LED that flashed when email arrived. Buttons on the right side of the screen were used to access the web browser, email application, and calendar, and a wheel knob at the bottom selected different "channels" of push content. Hardware: The 3Com Audrey is powered by a 200 MHz Geode GX1 CPU, with 16 MB of flash ROM and 32 MB of RAM. It measures 9 × 11.8 × 3.0 inches (22.86 × 29.97 × 7.62 cm) and weighs 4.1 pounds (1.86 kg). It runs the QNX operating system. The Audrey is equipped with a modem, two USB ports, and a CompactFlash socket; a USB Ethernet adapter was commonly used by broadband subscribers. Hardware: The Audrey was available in such shades as "linen" (off-white), "meadow" (green), "ocean" (blue), "slate" (grey), and "sunshine" (light yellow). Hacking: After the demise of official support, the Audrey drew the attention of computer enthusiasts. They quickly discovered an exploit to launch a pterm session. Using privilege escalation techniques, the root password in the passwd file could be edited, opening the box to further experimentation. Hacking: Many of the tools for the QNX operating system development platform were quickly adapted for use on the Audrey, including an updated web browser (Voyager), an MP3 player, a digital rotating photo frame, and other applications. The CompactFlash slot was also investigated. Although it could not be used for storage expansion, the Audrey was set up to flash its operating system from the slot.
Soon, a variety of replacement OS images were distributed among enthusiasts. As the device could utilize an optional Ethernet connection, it was an easy task to mount a remote disk drive served up by a neighboring desktop system, thus allowing for virtually unlimited storage capability. Similar devices: Devices similar to the Audrey included the i-Opener, the Virgin Webplayer and the Gateway Touch Pad.
**On30 gauge** On30 gauge: On30 (also called On21⁄2, O16.5 and Oe) gauge is the modelling of narrow gauge railways in O scale on HO (16.5 mm / 0.65 in) gauge track, at a ratio of 1:48 by American and Australian model railroaders, 1:43.5 by British and French model railroaders, and 1:45 by Continental European model railroaders (excluding France). Definitions: On30 On30 uses the American O scale of 1⁄4 inch to the foot (ratio 1:48) to operate trains on HO gauge (16.5 mm / 0.65 in) track. The 30 indicates the scale/gauge combination is used to model 2 ft 6 in (762 mm) narrow gauge prototypes, although it is often used to model 2 ft (610 mm) and 3 ft (914 mm) gauge prototypes as well. This scale/gauge combination is sometimes referred to as On21⁄2. Definitions: O16.5 O16.5 (sometimes O-16.5) in the United Kingdom is a model railway scale/gauge combination of 7 mm to the foot. This is the same scale as British O scale (1:43.5 ratio) running on 16.5 mm (0.65 in) gauge track, which is also used by OO gauge model railways. It thus represents a prototype gauge of just over 2 ft 4 in (711 mm) (e.g. the Snailbeach District Railways), although it is also widely used to model 2 ft (610 mm) (e.g. the Ffestiniog Railway) and 2 ft 3 in (686 mm) (e.g. the Tal-y-llyn Railway) gauge UK prototypes. Definitions: 0e 0e (sometimes Oe) is the Continental European notation for 0 scale using 16.5 mm (0.65 in) track. In France and a few other countries 0 scale uses a ratio of 1:43.5; in Germany and many other European countries it uses a ratio of 1:45. The prototypes represented include, for example, the German 750 mm (2 ft 5+1⁄2 in) gauge, the Austrian 760 mm (2 ft 5+15⁄16 in) gauge, and the 800 mm (2 ft 7+1⁄2 in) gauge rack railways of Switzerland. (A worked example of this scale/gauge arithmetic appears after the Remarks section below.) Development: United States In the United States modelling in On30 dates back to the 1950s, using HO gauge wheels and locomotive chassis. The scale was popularised to some extent in the 1960s and 1970s by the writings of modellers such as Gordon North. An On30 layout, the Venago Valley (built by Bill Livingston), was featured in the June 1971 issue of Railroad Model Craftsman magazine. However, as there are very few prototype 2 ft 6 in (762 mm) gauge railways in the United States, it remained very much a minority modelling area, especially when compared with modelling in On2 and On3. Development: In 1998 Bachmann Industries introduced a model of a 2-6-0 steam locomotive in this scale for the Christmas village market. This model, being very inexpensive, was quickly adopted by modellers. Other manufacturers followed Bachmann into this market, and Bachmann also introduced a number of other models. On30 is now regarded as the fastest growing segment of the model railroading market in the United States. Several other companies have produced mass market models for the US market, including Mountain Model Imports (MMI), who produced die-cast K series DRGW models (also available in On3); Broadway Limited, who produced a 2-8-0 and a "galloping goose"; the San Juan Car Company, who produce kits and RTR plastic wagons; and Accucraft/AMS, who produce brass engines and plastic rolling stock. Many US modellers can be broadly cast into one of two groups. The first group are freelance modelers, not modelling any specific prototype. These modelers are adept at taking HO gauge models and modifying them with new cabs and other features into models without prototypes.
A common saying in this group of modelers is that they model with "no standards", a reaction to the highly accurate modelling known as "rivet counting" found in some other sections of the hobby. Development: The second group model prototype American narrow gauge railroads, ranging from mining and logging companies through to large shortline railways such as the 2 ft (610 mm) Sandy River and Rangeley Lakes and the 3 ft (914 mm) Denver and Rio Grande Western Railroad. Modelers following these prototypes often choose On30 over the more accurate On2 and On3 gauges, citing the lower cost of models and the ready availability of ready-to-run models by Bachmann, Accucraft Trains and Broadway Limited. Detail parts and kits for specific models can be adapted easily from models manufactured for On3 modellers and others. Development: Britain In Britain O16.5 modelling also began in the 1950s, using modified proprietary OO scale models. A number of small companies now supply kits for locomotives made of materials such as brass and white metal, as well as rolling stock kits. British modellers have also had the advantage of Peco flexible track and turnouts, which have become popular throughout the On30 modelling world. The 7mm Narrow Gauge Association supports the hobby and publishes a magazine, "Narrow Lines". Most modellers attempt to accurately model one of the many 2 ft (610 mm), 2 ft 3 in (686 mm) and 2 ft 6 in (762 mm) gauge railways that were found throughout Britain, although European, American and even railways from Britain's colonial empire have become popular. The first items in the PECO O-16.5 range of track and rolling stock kits appeared early in 1978, with other items released over the next two to three years to become a large part of the range available today. Development: For accurate modelling of two-foot gauge railways, such as the Ffestiniog or the Lynton and Barnstaple, O14 has been employed; although there is some limited commercial support for this scale/gauge, it is mostly of industrial prototypes. Development: Continental Europe The first known 0e gauge model railway is probably the rack railway of the Basel Model Railway Club (MCB; German: Modelleisenbahnclub Basel) in Switzerland, which can be traced back to 1938, just a few years after Trix and Märklin independently launched 00 gauge products with a model gauge of 16.5 mm. In 1957, an early 0e gauge model railway line, modelled on the Mariazell Railway, was built on the club layout of the St. Pölten Railway Model Club (German: Eisenbahnmodellbauklub St. Pölten) in Austria, based on experience with 0e gauge railway models built in the years after 1948. The narrow gauge rack railway in Basel, the narrow gauge railway in St. Pölten, and the later Billerbahn were initially assigned to gauge 00, the Billerbahn as late as 1956; the terms '0 scale' and 'gauge 0e' did not exist at that time. Development: A similar pattern of small manufacturers producing kits is followed in Continental Europe. However, until a few years ago the German firm Fleischmann produced ready-to-run models of small German and Austrian locomotives and associated rolling stock in 0e gauge at a scale of about 1:40, in a line called "Magic Train". Billerbahn had already produced its field railway models from 1948 to 1977, and Märklin also produced a small range of 3-rail 0e gauge models between 1970 and 1972 under the "MINEX" range name. All three ranges were aimed at children but had appeal to serious modelers too.
Development: Australia Surveys at modelling conventions in Australia have found that the majority of all narrow gauge modellers in that country model in On30 at 1:48 scale. An early pioneer was Rick Richardson, with his Vulcan Vale model railway. Recent examples include 'Steam in the Bush', based in the Blue Mountains, who produce a range of On30 'craftsman' style kits. Many modelers choose to model the 2 ft 6 in gauge railways in Victoria, such as Puffing Billy, and a number of kits and ready-to-run models have been produced for that prototype. A ready-to-run die-cast and plastic model of the NA Puffing Billy locomotive was released by Haskell in 2014, and they have subsequently also released NQR open wagons. The scale is also popular with modelers of timber logging tramways and the Queensland sugar cane tramways, as well as freelance modelers. Ixion produced a "Coffee Pot" (a 3 ft 6 in gauge prototype) for this market. A small number of models have also been produced in 7 mm:1 ft scale, mostly of New South Wales prototypes. Development: Japan On30 is also modeled in Japan, where it is used to represent the 2 ft 6 in (762 mm) narrow gauge railways, such as the Kiso Forest Railway, that were once quite common in that country. Several brass locomotive kits have been produced. Summary: [Table: the most popular narrow gauges in O scale.] Remarks: Model railroaders whose layouts and rolling stock follow American, and usually British, standards use the letter O rather than the numeral 0 to designate the scale in English-language publications. In British and sometimes French-speaking countries, for narrow gauge the model gauge in mm is written after the scale designation: for example OO9 or OO6.5 in the United Kingdom, and H09 in place of H0e or H06.5 in place of H0f in France.
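The scale/gauge relationships in the Definitions section reduce to one multiplication: the prototype gauge represented equals the model track gauge times the scale denominator. A minimal sketch (class and method names are assumptions for illustration):

```java
/** Converts a model track gauge to the prototype gauge it represents. */
public final class NarrowGaugeMath {

    /** Prototype gauge in mm = model gauge in mm * scale denominator. */
    static double prototypeGaugeMm(double modelGaugeMm, double scaleDenominator) {
        return modelGaugeMm * scaleDenominator;
    }

    public static void main(String[] args) {
        double track = 16.5; // HO/OO track gauge in mm, shared by On30, O16.5 and 0e
        // On30 (1:48): 16.5 * 48 = 792 mm, close to the nominal 2 ft 6 in (762 mm)
        System.out.printf("On30  -> %.1f mm%n", prototypeGaugeMm(track, 48));
        // O16.5 (1:43.5): 717.75 mm, "just over 2 ft 4 in (711 mm)" as stated above
        System.out.printf("O16.5 -> %.2f mm%n", prototypeGaugeMm(track, 43.5));
        // 0e (1:45): 742.5 mm, near the 750/760 mm Continental prototypes
        System.out.printf("0e    -> %.1f mm%n", prototypeGaugeMm(track, 45));
    }
}
```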
**RET proto-oncogene** RET proto-oncogene: The RET proto-oncogene encodes a receptor tyrosine kinase for members of the glial cell line-derived neurotrophic factor (GDNF) family of extracellular signalling molecules. RET loss-of-function mutations are associated with the development of Hirschsprung's disease, while gain-of-function mutations are associated with the development of various types of human cancer, including medullary thyroid carcinoma, multiple endocrine neoplasias type 2A and 2B, pheochromocytoma and parathyroid hyperplasia. Structure: RET is an abbreviation for "rearranged during transfection", as the DNA sequence of this gene was originally found to be rearranged within a 3T3 fibroblast cell line following its transfection with DNA taken from human lymphoma cells. Structure: The human gene RET is localized to chromosome 10 (10q11.2) and contains 21 exons. The natural alternative splicing of the RET gene results in the production of three different isoforms of the protein RET. RET51, RET43 and RET9 contain 51, 43 and 9 amino acids in their C-terminal tails respectively. The biological roles of isoforms RET51 and RET9 are the most well studied in vivo, as these are the most common isoforms in which RET occurs. Structure: Common to each isoform is a domain structure. Each protein is divided into three domains: an N-terminal extracellular domain with four cadherin-like repeats and a cysteine-rich region, a hydrophobic transmembrane domain, and a cytoplasmic tyrosine kinase domain, which is split by an insertion of 27 amino acids. Within the cytoplasmic tyrosine kinase domain there are 16 tyrosines (Tyrs) in RET9 and 18 in RET51; Tyr1090 and Tyr1096 are present only in the RET51 isoform. The extracellular domain of RET contains nine N-glycosylation sites. The fully glycosylated RET protein is reported to have a molecular weight of 170 kDa, although it is not clear to which isoform this molecular weight relates. Kinase activation: RET is the receptor for GDNF-family ligands (GFLs). In order to activate RET, GFLs first need to form a complex with a glycosylphosphatidylinositol (GPI)-anchored co-receptor. The co-receptors themselves are classified as members of the GDNF receptor-α (GFRα) protein family. Different members of the GFRα family (GFRα1, GFRα2, GFRα3, GFRα4) exhibit specific binding activity for specific GFLs. Kinase activation: Upon GFL–GFRα complex formation, the complex brings together two molecules of RET, triggering trans-autophosphorylation of specific tyrosine residues within the tyrosine kinase domain of each RET molecule. Tyr900 and Tyr905 within the activation loop (A-loop) of the kinase domain have been shown by mass spectrometry to be autophosphorylation sites. Phosphorylation of Tyr905 stabilizes the active conformation of the kinase, which, in turn, results in the autophosphorylation of other tyrosine residues mainly located in the C-terminal tail region of the molecule. Kinase activation: A crystal structure of this region is available in the Protein Data Bank under code 2IVT. Kinase activation: The structure is that of a dimer formed between two protein molecules, each spanning amino acids 703–1012 of RET and covering its intracellular tyrosine kinase domain; part of the activation loop of one molecule is unresolved in the structure.
Kinase activation: Phosphorylation of Tyr981, and of the additional tyrosines Tyr1015, Tyr1062 and Tyr1096 not covered by the above structure, has been shown to be important to the initiation of intracellular signal transduction processes. Role of RET signalling during development: Mice deficient in GDNF, GFRα1 or the RET protein itself exhibit severe defects in kidney and enteric nervous system development. This implicates RET signal transduction as key to the development of normal kidneys and the enteric nervous system. Clinical relevance: At least 26 disease-causing mutations in this gene have been discovered. Activating point mutations in RET can give rise to the hereditary cancer syndrome known as multiple endocrine neoplasia type 2 (MEN 2). There are three subtypes based on clinical presentation: MEN 2A, MEN 2B, and familial medullary thyroid carcinoma (FMTC). There is a high degree of correlation between the position of the point mutation and the phenotype of the disease. Clinical relevance: Chromosomal rearrangements that generate a fusion gene, resulting in the juxtaposition of the C-terminal region of the RET protein with an N-terminal portion of another protein, can also lead to constitutive activation of the RET kinase. These types of rearrangements are primarily associated with papillary thyroid carcinoma (PTC), where they represent 10–20% of cases, and non-small cell lung cancer (NSCLC), where they represent 2% of cases. Several fusion partners have been described in the literature; the most common ones across both cancer types include KIF5B, CCDC6 and NCOA4. Clinical relevance: While older multikinase inhibitors such as cabozantinib or vandetanib showed modest efficacy in targeting RET-driven malignancies, newer selective inhibitors (such as selpercatinib and pralsetinib) have shown significant activity against both mutations and fusions. The results of the LIBRETTO-001 trial studying selpercatinib showed a progression-free survival of 17.5 months in previously treated RET-positive NSCLC, and 22 months for RET-positive thyroid cancers, which prompted an FDA approval for both these indications in May 2020. Several other selective RET inhibitors are under development, including TPX-0046, a macrocyclic inhibitor of RET and Src intended to inhibit mutations that confer resistance to current inhibitors. Disease database: The RET gene variant database at the University of Utah identifies (as of November 2014) 166 mutations that are implicated in MEN2. Interactions: RET proto-oncogene has been shown to interact with: DOK1, DOK5, GDNF family receptor alpha 1, GRB10, GRB7, Grb2, SHC1, and STAT3.
**Isobutyrylfentanyl** Isobutyrylfentanyl: Isobutyrylfentanyl is an opioid analgesic that is an analog of fentanyl and has been sold online as a designer drug. It is believed to be of around the same potency as butyrfentanyl but has been less widely distributed on illicit markets, though it was one of the earliest of the "new wave" of fentanyl derivatives to appear, and was reported in Europe for the first time in December 2012. Side effects: Side effects of fentanyl analogs are similar to those of fentanyl itself, and include itching, nausea and potentially serious respiratory depression, which can be life-threatening. Fentanyl analogs have killed hundreds of people throughout Europe and the former Soviet republics since the most recent resurgence in use began in Estonia in the early 2000s, and novel derivatives continue to appear. A new wave of fentanyl analogues and associated deaths began around 2014 in the US and has continued to grow in prevalence; especially since 2016, these drugs have been responsible for hundreds of overdose deaths every week. Legal status: Isobutyrylfentanyl has been a Schedule I controlled drug in the US since 1 February 2018.
**Parafacial zone** Parafacial zone: The parafacial zone (PZ) is a brain structure located in the brainstem, within the medulla oblongata, believed to be heavily responsible for non-rapid eye movement (non-REM) sleep regulation, specifically for inducing slow-wave sleep. It is one of several GABAergic sleep-promoting nuclei in the brain, which also include the ventrolateral preoptic area of the hypothalamus, the nucleus accumbens core (specifically, the medium spiny neurons of the D2 type which co-express adenosine A2A receptors), and a GABAergic nucleus in the lateral hypothalamus which co-releases melanin-concentrating hormone. Function and location: The parafacial zone promotes slow-wave sleep by inhibiting the glutamatergic parabrachial nucleus (a component of the ascending reticular activating system that mediates wakefulness and arousal) via the release of the inhibitory neurotransmitter GABA onto those neurons. Optogenetic activation of GABAergic PZ neurons induces cortical slow-wave activity and slow-wave sleep in awake animals. When GABAergic transmission from the PZ was genetically disrupted in mice, the mice were observed to go through periods of significantly longer, sustained wakefulness. PZ neurons are also believed to be sleep-active, as they express c-Fos after sleep but not after wakefulness. The parafacial zone is located within the medulla oblongata, lateral and dorsal to the facial nerve. It overlaps with the alpha part of the parvocellular reticular formation (PCRt), which is thought to govern states of consciousness as well as have some control over sleep-wake sensory signals and mechanisms. However, PZ and PCRt activity are believed to be of separate nature. Inputs: The parafacial zone receives inputs mainly from three areas: the hypothalamus, the midbrain, and the pons and medulla. From the hypothalamus, the PZ receives inputs from the hypothalamic area, the zona incerta, and the parasubthalamic nucleus; while the functions of the zona incerta and parasubthalamic nucleus remain largely unknown, several of their functions have been proposed to deal with action selection and limbic-motor integration. Inputs: From the midbrain, the PZ receives input from the substantia nigra pars reticulata and the deep mesencephalic nucleus. These brain structures are believed to deal heavily with movement, as well as reward and unconscious reflex; the pars reticulata in particular has been documented to project almost exclusively GABAergic inhibitory neurons. From the pons and medulla, the PZ receives input from the intermediate reticular nucleus and the medial vestibular nucleus (parvocellular), areas that are thought to be involved in expiration and respiratory rhythm generation. Inputs: In summary: from the hypothalamus, the hypothalamic area, zona incerta and parasubthalamic nucleus; from the midbrain, the substantia nigra pars reticulata and the deep mesencephalic nucleus. Outputs: PZ neurons project to the medial parabrachial nucleus, a wake-promoting neuron cluster that is part of the ascending reticular activating system. Thirty-four other nuclei also share strong reciprocal projections with PZ GABAergic neurons, including various nuclei of the stria terminalis, the lateral hypothalamic area, the substantia nigra, the zona incerta, and the central amygdaloid nucleus. These strong reciprocal projections suggest feedback control and the ability to regulate specific functions.
In summary, the main output targets are the parabrachial nucleus (part of the ascending reticular activating system), the stria terminalis, the lateral hypothalamic area, the substantia nigra, the zona incerta, and the central amygdaloid nucleus.
**Alcohol dehydrogenase, iron containing 1** Alcohol dehydrogenase, iron containing 1: Alcohol dehydrogenase, iron containing 1 is a protein that in humans is encoded by the ADHFE1 gene. Function: The ADHFE1 gene encodes hydroxyacid-oxoacid transhydrogenase (EC 1.1.99.24), which is responsible for the oxidation of 4-hydroxybutyrate in mammalian tissues.
**Journal of Research in Personality** Journal of Research in Personality: The Journal of Research in Personality is a peer-reviewed academic journal covering the field of personality psychology, published by Elsevier and edited by Zlatan Krizan. It publishes articles including experimental and descriptive research on issues in the field of personality and related fields, including genetic, physiological, motivational, learning, perceptual, cognitive, and social processes. Studies of both normal and abnormal psychology, as well as of animal personality, have been published.
**Matter collineation** Matter collineation: A matter collineation (sometimes matter symmetry, and abbreviated to MC) is a vector field $X$ that satisfies the condition $$\mathcal{L}_X T_{ab} = 0,$$ where $T_{ab}$ are the energy–momentum tensor components. The intimate relation between geometry and physics may be highlighted here, as the vector field $X$ is regarded as preserving certain physical quantities along its flow lines, this being true for any two observers. In connection with this, it may be shown that every Killing vector field is a matter collineation (by the Einstein field equations (EFE), with or without cosmological constant). Thus, given a solution of the EFE, a vector field that preserves the metric necessarily preserves the corresponding energy–momentum tensor. When the energy–momentum tensor represents a perfect fluid, every Killing vector field preserves the energy density, pressure and the fluid flow vector field. When the energy–momentum tensor represents an electromagnetic field, a Killing vector field does not necessarily preserve the electric and magnetic fields.
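The claim that every Killing vector field is a matter collineation follows in one line from the field equations. A short sketch in standard notation, with $\kappa$ the Einstein gravitational constant and $\Lambda$ the cosmological constant:

```latex
% A Killing vector field X preserves the metric:
\mathcal{L}_X g_{ab} = 0 .
% The Einstein field equations relate geometry to matter:
G_{ab} + \Lambda g_{ab} = \kappa \, T_{ab} .
% Since the Einstein tensor G_{ab} is constructed from g_{ab} alone,
% \mathcal{L}_X g_{ab} = 0 implies \mathcal{L}_X G_{ab} = 0, hence
\kappa \, \mathcal{L}_X T_{ab}
    = \mathcal{L}_X G_{ab} + \Lambda \, \mathcal{L}_X g_{ab}
    = 0 ,
% i.e. X is a matter collineation. The converse does not hold in general.
```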
**Flyting** Flyting: Flyting or fliting (Classical Gaelic: immarbág; Irish: iomarbháigh; lit. "counter-boasting") is a contest consisting of the exchange of insults between two parties, often conducted in verse. Etymology: The word flyting comes from the Old English verb flītan, meaning 'to quarrel', made into a gerund with the suffix -ing. Attested from around 1200 in the general sense of a verbal quarrel, it is first found as a technical literary term in Scotland in the sixteenth century. The first written Scots example is William Dunbar's The Flyting of Dunbar and Kennedie, written in the late fifteenth century. Description: Flyting is a ritual, poetic exchange of insults practiced mainly between the 5th and 16th centuries. Examples of flyting are found throughout Scots, Ancient, Medieval and Modern Celtic, Old English, Middle English and Norse literature, involving both historical and mythological figures. The exchanges would become extremely provocative, often involving accusations of cowardice or sexual perversion. Description: Norse literature contains stories of the gods flyting. For example, in Lokasenna the god Loki insults the other gods in the hall of Ægir, and in the poem Hárbarðsljóð, Hárbarðr (generally considered to be Odin in disguise) engages in flyting with Thor. In the confrontation of Beowulf and Unferð in the poem Beowulf, flytings were used as either a prelude to battle or as a form of combat in their own right. In Anglo-Saxon England, flyting would take place in a feasting hall. The winner would be decided by the reactions of those watching the exchange; the winner would drink a large cup of beer or mead in victory, then invite the loser to drink as well. The 13th-century poem The Owl and the Nightingale and Geoffrey Chaucer's Parlement of Foules contain elements of flyting. Description: Flyting became public entertainment in Scotland in the 15th and 16th centuries, when makars would engage in verbal contests of provocative, often sexual and scatological but highly poetic abuse. Flyting was permitted despite the fact that the penalty for profanities in public was a fine of 20 shillings (over £300 in 2023 prices) for a lord, or a whipping for a servant. James IV and James V encouraged "court flyting" between poets for their entertainment and occasionally engaged with them. The Flyting of Dunbar and Kennedie records a contest between William Dunbar and Walter Kennedy in front of James IV, which includes the earliest recorded use of the word shit as a personal insult. In 1536 the poet Sir David Lyndsay composed a ribald 60-line flyte to James V after the King demanded a response to a flyte. Description: Flytings appear in several of William Shakespeare's plays; Margaret Galway analysed 13 comic flytings and several other ritual exchanges in the tragedies. Flytings also appear in Nicholas Udall's Ralph Roister Doister and John Still's Gammer Gurton's Needle from the same era. Description: While flyting died out in Scottish writing after the Middle Ages, it continued for writers of Celtic background. Robert Burns parodied flyting in his poem "To a Louse", and James Joyce's poem "The Holy Office" is a curse upon society by a bard. Joyce played with the traditional two-character exchange by making one of the characters represent society as a whole.
Similar practices: Hilary Mackie has detected in the Iliad a consistent differentiation between representations in Greek of Achaean and Trojan speech, where Achaeans repeatedly engage in public, ritualized abuse: "Achaeans are proficient at blame, while Trojans perform praise poetry." Taunting songs are present in Inuit culture, among many others. Flyting can also be found in Arabic poetry in a popular form called naqā’iḍ, as well as in the competitive verses of Japanese haikai. Similar practices: Echoes of the genre continue into modern poetry. Hugh MacDiarmid's poem A Drunk Man Looks at the Thistle, for example, has many passages of flyting in which the poet's opponent is, in effect, the rest of humanity. Similar practices: Flyting is similar in both form and function to the modern practice of freestyle battles between rappers and the historic practice of the Dozens, a verbal-combat game representing a synthesis of flyting and its Early Modern English descendants with comparable African verbal-combat games such as Ikocha Nkocha. In the Finnish epic Kalevala, the hero Väinämöinen uses the similar practice of kilpalaulanta (duel singing) to defeat his opponent Joukahainen. Modern portrayals: In "The Roaring Trumpet", part of Harold Shea's introduction to the Norse gods is a flyting between Heimdall and Loki in which Heimdall says, "All insults are untrue. I state facts." The climactic scene in Rick Riordan's novel The Ship of the Dead consists of a flyting between the protagonist Magnus Chase and the Norse god Loki. In the Monkey Island video game series, insults are often integral to duels such as sword fighting and arm wrestling. In Assassin's Creed: Valhalla, in which the protagonist is a Viking, players can engage in flyting with various non-playable characters for prestige and other rewards. Some see battle rap, a subculture of hip hop music, as a modern expression of flyting, providing a platform for two individuals to poetically insult each other.
**IKVM.NET** IKVM.NET: IKVM.NET is an implementation of Java for Mono and the Microsoft .NET Framework. IKVM is free software, distributed under the permissive zlib license. The original developer, Jeroen Frijters, discontinued work on IKVM in 2015. In 2018, Windward Studios forked IKVM.NET to continue development of the open-source code base. In 2022 Jerome Haltom and others picked up the work under a new GitHub organization and completed .NET Core support. Components: IKVM.NET includes the following components: a Java virtual machine (JVM) implemented in .NET; a .NET implementation of the Java class libraries; a tool that translates Java bytecode (JAR files) to .NET IL (DLLs or EXE files); and tools that enable Java and .NET interoperability. IKVM.NET can run compiled Java code (bytecode) directly on Microsoft .NET or Mono: the bytecode is converted on the fly to CIL and executed. By contrast, J# is Java syntax on the .NET Framework, whereas IKVM.NET is effectively a Java framework running on top of the .NET Framework. Jeroen Frijters was the main contributor to IKVM.NET; he is Technical Director of Sumatra Software, based in the Netherlands. Name: The "IKVM" portion of the name is a play on "JVM", in which the author "just took the two letters adjacent to the J". Status: IKVM 8 implements Java 8. The IKVM organization also maintains IKVM.Maven.Sdk, an extension to the .NET PackageReference system that allows direct references to, and transpiling of, Maven artifacts. IKVM.Maven.Sdk is also available on NuGet.org.
**TARBP1** TARBP1: Probable methyltransferase TARBP1 is an enzyme that in humans is encoded by the TARBP1 gene. HIV-1, the causative agent of acquired immunodeficiency syndrome (AIDS), contains an RNA genome that produces a chromosomally integrated DNA during the replicative cycle. Activation of HIV-1 gene expression by the transactivator Tat is dependent on an RNA regulatory element (TAR) located downstream of the transcription initiation site. This element forms a stable stem-loop structure and can be bound either by the protein encoded by this gene or by RNA polymerase II. This protein may act to disengage RNA polymerase II from TAR during transcriptional elongation. Alternatively spliced transcripts of this gene may exist, but their full-length natures have not been determined.
**Cellulitis** Cellulitis: Cellulitis is usually a bacterial infection involving the inner layers of the skin. It specifically affects the dermis and subcutaneous fat. Signs and symptoms include an area of redness which increases in size over a few days. The borders of the area of redness are generally not sharp, and the skin may be swollen. While the redness often turns white when pressure is applied, this is not always the case. The area of infection is usually painful. Lymphatic vessels may occasionally be involved, and the person may have a fever and feel tired. The legs and face are the most common sites involved, although cellulitis can occur on any part of the body. The leg is typically affected following a break in the skin. Other risk factors include obesity, leg swelling, and old age. For facial infections, a break in the skin beforehand is not usually the case. The bacteria most commonly involved are streptococci and Staphylococcus aureus. In contrast to cellulitis, erysipelas is a bacterial infection involving the more superficial layers of the skin; it presents with an area of redness with well-defined edges and is more often associated with a fever. The diagnosis is usually based on the presenting signs and symptoms, as culturing the bacteria involved is rarely possible. Before making a diagnosis, more serious infections, such as an underlying bone infection or necrotizing fasciitis, should be ruled out. Treatment is typically with antibiotics taken by mouth, such as cephalexin, amoxicillin or cloxacillin. Those who are allergic to penicillin may be prescribed erythromycin or clindamycin instead. When methicillin-resistant S. aureus (MRSA) is a concern, as when pus is present or the person has previously had an MRSA infection, doxycycline or trimethoprim/sulfamethoxazole may be recommended in addition. Elevating the infected area may be useful, as may pain killers. Potential complications include abscess formation. Around 95% of people are better after 7 to 10 days of treatment. Those with diabetes, however, often have worse outcomes. Cellulitis occurred in about 21.2 million people in 2015. In the United States about 2 of every 1,000 people per year have a case affecting the lower leg. Cellulitis in 2015 resulted in about 16,900 deaths worldwide. In the United Kingdom, cellulitis was the reason for 1.6% of admissions to a hospital. Signs and symptoms: The typical signs and symptoms of cellulitis are an area that is red, hot, and painful. Complications: Potential complications may include abscess formation, fasciitis, and sepsis. Causes: Cellulitis is usually, but not always, caused by bacteria that enter and infect the tissue through breaks in the skin. Group A Streptococcus and Staphylococcus are the most common causes of the infection and may be found on the skin as normal flora in healthy individuals. About 80% of cases of Ludwig's angina, or cellulitis of the submandibular space, are caused by dental infections. Mixed infections, due to both aerobes and anaerobes, are commonly associated with this type of cellulitis.
Typically, this includes alpha-hemolytic streptococci, staphylococci, and Bacteroides groups. Predisposing conditions for cellulitis include an insect or spider bite, blistering, an animal bite, tattoos, a pruritic (itchy) skin rash, recent surgery, athlete's foot, dry skin, eczema, injecting drugs (especially subcutaneous or intramuscular injection, or where an attempted intravenous injection "misses" or blows the vein), pregnancy, diabetes, and obesity, which can affect circulation, as well as burns and boils, although debate exists as to whether minor foot lesions contribute. Occurrences of cellulitis may also be associated with the rare condition hidradenitis suppurativa or dissecting cellulitis. The appearance of the skin assists a doctor in determining a diagnosis. A doctor may also suggest blood tests, a wound culture, or other tests to help rule out a blood clot deep in the veins of the legs. Cellulitis in the lower leg is characterized by signs and symptoms similar to those of a deep vein thrombosis, such as warmth, pain, and swelling (inflammation). Causes: Reddened skin or rash may signal a deeper, more serious infection of the inner layers of skin. Once below the skin, the bacteria can spread rapidly, entering the lymph nodes and the bloodstream and spreading throughout the body. This can result in influenza-like symptoms, with a high temperature and sweating, or feeling very cold with shaking, as the affected person cannot get warm. In rare cases, the infection can spread to the deep layer of tissue called the fascial lining. Necrotizing fasciitis, also called by the media "flesh-eating bacteria", is an example of a deep-layer infection and is a medical emergency. Causes: Risk factors: The elderly and those with a weakened immune system are especially vulnerable to contracting cellulitis. Diabetics are more susceptible to cellulitis than the general population because of impairment of the immune system; they are especially prone to cellulitis in the feet, because the disease causes impairment of blood circulation in the legs, leading to diabetic foot or foot ulcers. Poor control of blood glucose levels allows bacteria to grow more rapidly in the affected tissue and facilitates rapid progression if the infection enters the bloodstream. Neural degeneration in diabetes means these ulcers may not be painful and thus often become infected. Those who have had poliomyelitis are also prone because of circulatory problems, especially in the legs. Immunosuppressive drugs, and other illnesses or infections that weaken the immune system, are also factors that make infection more likely. Chickenpox and shingles often result in blisters that break open, providing a gap in the skin through which bacteria can enter. Lymphedema, which causes swelling on the arms and/or legs, can also put an individual at risk. Causes: Diseases that affect blood circulation in the legs and feet, such as chronic venous insufficiency and varicose veins, are also risk factors for cellulitis. Cellulitis is also common among dense populations sharing hygiene facilities and common living quarters, such as military installations, college dormitories, nursing homes, oil platforms, and homeless shelters. Diagnosis: Cellulitis is most often a clinical diagnosis, readily identified in many people by history and physical examination alone, with rapidly spreading areas of cutaneous swelling, redness, and heat, occasionally associated with inflammation of regional lymph nodes.
While classically distinguished as a separate entity from erysipelas by spreading more deeply to involve the subcutaneous tissues, many clinicians may classify erysipelas as cellulitis. Both are often treated similarly, but cellulitis associated with furuncles, carbuncles, or abscesses is usually caused by S. aureus, which may affect treatment decisions, especially antibiotic selection. Skin aspiration of nonpurulent cellulitis, usually caused by streptococcal organisms, is rarely helpful for diagnosis, and blood cultures are positive in fewer than 5% of all cases. It is important to evaluate for a co-existent abscess, as this finding usually requires surgical drainage as opposed to antibiotic therapy alone. Physicians' clinical assessment for abscess may be limited, especially in cases with extensive overlying induration, but bedside ultrasonography performed by an experienced practitioner readily discriminates between abscess and cellulitis and may change management in up to 56% of cases. Ultrasound for abscess identification may also be indicated in cases of antibiotic failure. On ultrasound, cellulitis has a characteristic "cobblestoned" appearance, indicative of subcutaneous edema, without the defined hypoechoic, heterogeneous fluid collection that would indicate an abscess. Diagnosis: Differential diagnosis: Other conditions that may mimic cellulitis include deep vein thrombosis, which can be diagnosed with a compression leg ultrasound, and stasis dermatitis, which is inflammation of the skin from poor blood flow. Signs of a more severe infection, such as necrotizing fasciitis or gas gangrene, that would require prompt surgical intervention include purple bullae, skin sloughing, subcutaneous edema, and systemic toxicity. Misdiagnosis can occur in up to 30% of people with suspected lower-extremity cellulitis, leading to 50,000 to 130,000 unnecessary hospitalizations and $195 to $515 million in avoidable healthcare spending annually in the United States. Evaluation by dermatologists for cases of suspected cellulitis has been shown to reduce misdiagnosis rates and improve patient outcomes. Associated musculoskeletal findings are sometimes reported. When dissecting cellulitis occurs with acne conglobata, hidradenitis suppurativa, and pilonidal cysts, the syndrome is referred to as the follicular occlusion triad or tetrad. Lyme disease can be misdiagnosed as cellulitis: the characteristic bullseye rash does not always appear in Lyme disease (the rash may lack a central or ring-like clearing, or may not appear at all). Factors supporting Lyme disease include recent outdoor activity where Lyme is common and a rash at an unusual site for cellulitis, such as the armpit, groin, or behind the knee. Lyme disease can also result in long-term neurologic complications. The standard treatment for cellulitis, cephalexin, is not useful in Lyme disease. When it is unclear which of the two is present, the IDSA recommends treatment with cefuroxime axetil or amoxicillin/clavulanic acid, as these are effective against both infections. Prevention: In those who have previously had cellulitis, the use of antibiotics may help prevent future episodes. This is recommended by CREST for those who have had more than two episodes. A 2017 meta-analysis found a benefit of preventative antibiotics for recurrent cellulitis in the lower limbs, but the preventative effects appear to diminish after stopping antibiotic therapy.
Treatment: Antibiotics are usually prescribed, with the agent selected based on the suspected organism and the presence or absence of purulence, although the best treatment choice is unclear. If an abscess is also present, surgical drainage is usually indicated, with antibiotics often prescribed for co-existent cellulitis, especially if extensive. Pain relief is also often prescribed, but excessive pain should always be investigated, as it is a symptom of necrotizing fasciitis. Elevation of the affected area is often recommended. Steroids may speed recovery in those on antibiotics. Treatment: Antibiotics: Antibiotic choices depend on regional availability, but a penicillinase-resistant semisynthetic penicillin or a first-generation cephalosporin is currently recommended for cellulitis without abscess. A course of antibiotics is not effective in between 6 and 37% of cases. Epidemiology: Cellulitis in 2015 resulted in about 16,900 deaths worldwide, up from 12,600 in 2005. Cellulitis is a common global health burden, with more than 650,000 admissions per year in the United States alone. In the United States, an estimated 14.5 million cases of cellulitis annually account for $3.7 billion in ambulatory care costs alone. In the majority of cellulitis cases no organism can be cultured, so the causative bacteria remain unknown. In the 15% of cellulitis cases in which organisms are identified, most are due to β-hemolytic Streptococcus and Staphylococcus aureus. Other animals: Horses may acquire cellulitis, usually secondarily to a wound (which can be extremely small and superficial) or to a deep-tissue infection, such as an abscess or an infected bone, tendon sheath or joint. Cellulitis from a superficial wound usually creates less lameness (grade 1–2 of 5) than that caused by septic arthritis (grade 4–5). The horse exhibits inflammatory edema: hot, painful swelling. This swelling differs from stocking up in that it affects only one leg rather than appearing symmetrically in two or four legs. The swelling begins near the source of infection and eventually spreads distally down the leg. Treatment includes cleaning the wound and caring for it properly, the administration of NSAIDs such as phenylbutazone, cold hosing, applying a sweat wrap or a poultice, and mild exercise.
**RozoFS** RozoFS: RozoFS is a free-software distributed file system, licensed under the GNU GPL v2. RozoFS uses erasure coding for redundancy. Design: Rozo provides an open-source POSIX file system built on a distributed architecture similar to Google File System, Lustre or Ceph. Rozo's specificity lies in the way data is stored: the data is translated into several chunks using the Mojette transform and distributed across storage devices in such a way that it can be retrieved even if several pieces are unavailable; conversely, individual chunks are meaningless on their own. Redundancy schemes based on coding techniques like the one used by RozoFS achieve significant storage savings compared with simple replication. The file system comprises three components. The exports server (metadata server) manages the location (layout) of chunks (managing capacity load balancing with respect to high availability), file access and the namespace (hierarchy); multiple replicated metadata servers are used to provide failover. The exports server is a user-space daemon; the metadata are stored synchronously on an ordinary file system (the underlying file system must support extended attributes). Design: The storage servers (chunk servers) store the chunks; each storage server is also a user-space daemon that relies on the underlying local file system to manage the actual storage. Clients talk to both the exports server and the storage servers and are responsible for the data transformation; they mount the file system into user space via FUSE.
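The storage saving from coding rather than replication is easiest to see in a toy example. The sketch below is illustrative only: it uses plain XOR parity, not the Mojette transform that RozoFS actually employs, and the function names are invented for the example.

```python
# Toy erasure coding with a single XOR parity chunk: any one of the
# stored chunks (data or parity) may be lost and the data still recovered.
# RozoFS derives its chunks with the Mojette transform instead, which
# tolerates the loss of several chunks, but the principle is the same.

def encode(data_chunks):
    """Return the data chunks plus one XOR parity chunk."""
    parity = bytes(len(data_chunks[0]))
    for chunk in data_chunks:
        parity = bytes(a ^ b for a, b in zip(parity, chunk))
    return data_chunks + [parity]

def recover(chunks):
    """Rebuild a single missing chunk (marked None) from the survivors."""
    missing = chunks.index(None)
    size = len(next(c for c in chunks if c is not None))
    rebuilt = bytes(size)
    for i, chunk in enumerate(chunks):
        if i != missing:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, chunk))
    chunks[missing] = rebuilt
    return chunks[:-1]                      # drop the parity chunk

data = [b"roz0", b"fs__", b"demo"]
stored = encode(data)                       # 4 chunks on 4 storage servers
stored[1] = None                            # one storage server goes down
assert recover(stored) == data              # the file is still readable
```

Here the overhead is one extra chunk for three chunks of data (33%), whereas tolerating the same failure with replication would cost a full second copy (100%).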
**Yorkfield** Yorkfield: Yorkfield is the code name for some Intel processors sold as Core 2 Quad and Xeon. In Intel's tick–tock cycle, the 2007/2008 "tick" was the Penryn microarchitecture, the shrink of the Core microarchitecture to 45 nanometers as CPUID model 23; Yorkfield replaced Kentsfield, the previous model. Yorkfield: Like its predecessor, Yorkfield multi-chip modules come in two sizes. The smaller version is equipped with 6 MB of L2 cache and is commonly called Yorkfield-6M; the larger version is equipped with 12 MB of L2 cache. The mobile version of Yorkfield is Penryn-QC, and the dual-socket server version is Harpertown. The MP server Dunnington chip is a more distant relative, based on a different chip but using the same 45 nm Core microarchitecture. The Wolfdale desktop processor is a dual-core version of Yorkfield. The successors to Yorkfield are the Nehalem-based Lynnfield and Bloomfield. Variants: Yorkfield: Yorkfield (codename for the Core 2 Quad Q9x5x series and Xeon X33x0 series) features a dual-die quad-core design with two unified 6 MB L2 caches; the product code is 80569. These processors also feature a 1333 MT/s FSB and are compatible with the Bearlake chipset. They were released in late March 2008, beginning with the Q9300 and Q9450. Yorkfield CPUs had been expected in January 2008, but the release was delayed to March 15, 2008. Initially this delay was attributed to an error found in the Yorkfield chip, but later reports claimed that the delay was necessary to ensure compatibility with the 4-layer printed circuit boards used by many mainstream motherboards. At the Intel Developer Forum 2007, a Yorkfield processor was compared with a Kentsfield processor. Variants: Yorkfield-6M: Yorkfield-6M parts (product code 80580) are similar to Yorkfield but are made from two Wolfdale-3M-like cores, so they have a total of 6 MB of L2 cache, with each 3 MB cache shared by two cores. They are used in Core 2 Quad Q8xxx processors with 4 MB of cache enabled, and in Core 2 Quad Q9xxx and Xeon X3320/X3330 processors with all 6 MB enabled. Unlike the Q9xxx, Q8xxx processors initially had no support for Intel VT, but later versions all have VT enabled. Variants: Yorkfield XE: On November 11, 2007, Intel released the first Yorkfield XE processor, the Core 2 Extreme QX9650. It was the first Intel desktop processor to use 45 nm technology and high-k metal gates. Yorkfield XE features a dual-die quad-core design with two unified level-two (L2) caches of 6 MB each, a 1333 MT/s FSB, and a clock rate of 3 GHz. The processor incorporates SSE4.1 instructions and has a total of 820 million transistors on two 107 mm² dies. Variants: The QX9650 and QX9770 are both labeled with product code 80569, like Yorkfield, while the QX9775, being made for dual Socket 771 mainboards, uses product code 80574, like the closely related Xeon X5482 "Harpertown". Variants: Yorkfield CL: The OEM-only Xeon X33x3 processors, with an 80 W TDP and product code 80584, are made for Socket 771 like Harpertown but are supported only in single-socket configurations. Like the dual-core Wolfdale-CL processor, these would work in regular Socket 775 mainboards after modification but are typically used in blade servers that otherwise require DP server processors such as Wolfdale-DP or Harpertown. Successor: Yorkfield was replaced by the 45 nm Nehalem processor.
**No-limits apnea** No-limits apnea: No-limits apnea is an AIDA International discipline of competitive freediving (also known as competitive apnea) in which the freediver descends and ascends by the method of his or her choice. Often a heavy metal bar or "sled", grasped by the diver, descends along a fixed line to great depths. The most common ascent assistance is an inflatable lifting bag or a vest with inflatable compartments, which rises rapidly to the surface. The dives may be performed head-first or feet-first. No-limits apnea: This form of diving is considered extremely dangerous by diving professionals, and no-limits apnea has claimed the lives of several divers. Challenges: The three main differences between freediving disciplines that involve diving to depth and those that take place at the surface are that the dive cannot be interrupted, that there are periods during which physical work is performed, and that the diver is subject to the direct effects of pressure. Records: The current no-limits world record holder is Herbert Nitsch, with a depth of 214 metres (702 ft) set on 9 June 2007 in Spetses, Greece. In a subsequent dive on 6 June 2012 in Santorini, Greece, attempting to break his own record, he reached 253.2 metres (831 ft) but suffered severe decompression sickness immediately afterwards and subsequently retired from competitive events.
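As a rough illustration of the "direct effects of pressure" mentioned under Challenges, the absolute pressure at Nitsch's 214 m record depth can be estimated with the standard hydrostatic formula (assuming a seawater density of about 1025 kg/m³ and g = 9.81 m/s²):

```latex
P = P_0 + \rho g h
  \approx 101\,325\ \mathrm{Pa}
    + (1025\ \mathrm{kg/m^3})(9.81\ \mathrm{m/s^2})(214\ \mathrm{m})
  \approx 2.25\ \mathrm{MPa} \approx 22\ \mathrm{atm}
```

By Boyle's law, an idealized gas space such as the lungs would be compressed to roughly 1/22 of its surface volume at that depth, which is one reason depth disciplines differ so sharply from surface disciplines.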
**Enhanced other networks** Enhanced other networks: Enhanced other networks (EON) is a radio feature used to deliver traffic information to enabled devices; it is a component of the European Radio Data System (RDS). Using the TP (traffic programme) and TA (traffic announcement) signals, an enabled device interrupts the current media stream (radio, CD, etc.) and plays the traffic message.
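The interrupt-and-resume behaviour can be sketched as simple receiver logic. This is a hypothetical illustration, not a real RDS decoder API: the Receiver class, its method, and the source labels are invented; only the meaning of the TP/TA flags comes from RDS.

```python
# Hypothetical sketch of EON-style traffic interruption in a receiver.
# TP/TA mirror the RDS traffic-programme and traffic-announcement flags.

class Receiver:
    def __init__(self):
        self.current_source = "CD"
        self.saved_source = None

    def on_rds_update(self, tp: bool, ta: bool):
        """Called whenever decoded RDS data (own or EON-linked) changes."""
        if tp and ta and self.saved_source is None:
            # A traffic announcement is starting on a linked network:
            # remember what was playing and switch over.
            self.saved_source = self.current_source
            self.current_source = "traffic announcement"
        elif not ta and self.saved_source is not None:
            # Announcement finished: restore the original source.
            self.current_source = self.saved_source
            self.saved_source = None

rx = Receiver()
rx.on_rds_update(tp=True, ta=True)   # traffic message starts
assert rx.current_source == "traffic announcement"
rx.on_rds_update(tp=True, ta=False)  # traffic message ends
assert rx.current_source == "CD"
```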
**SQL** SQL: Structured Query Language (SQL) (pronounced "S-Q-L", or sometimes "sequel" for historical reasons) is a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables. SQL: Introduced in the 1970s, SQL offered two main advantages over older read–write APIs such as ISAM or VSAM. Firstly, it introduced the concept of accessing many records with one single command. Secondly, it eliminated the need to specify how to reach a record, i.e., with or without an index. SQL: Originally based upon relational algebra and tuple relational calculus, SQL consists of many types of statements, which may be informally classed as sublanguages, commonly: a data query language (DQL), a data definition language (DDL), a data control language (DCL), and a data manipulation language (DML). The scope of SQL includes data query, data manipulation (insert, update, and delete), data definition (schema creation and modification), and data access control. Although SQL is essentially a declarative language (4GL), it also includes procedural elements. SQL: SQL was one of the first commercial languages to use Edgar F. Codd's relational model. The model was described in his influential 1970 paper, "A Relational Model of Data for Large Shared Data Banks". Despite not entirely adhering to the relational model as described by Codd, SQL became the most widely used database language. SQL became a standard of the American National Standards Institute (ANSI) in 1986 and of the International Organization for Standardization (ISO) in 1987. Since then, the standard has been revised multiple times to include a larger set of features and incorporate common extensions. Despite the existence of standards, virtually no implementation adheres to the standard fully, and most SQL code requires at least some changes before being ported to a different database system. History: SQL was initially developed at IBM by Donald D. Chamberlin and Raymond F. Boyce after they learned about the relational model from Edgar F. Codd in the early 1970s. This version, initially called SEQUEL (Structured English QUEry Language), was designed to manipulate and retrieve data stored in IBM's original quasirelational database management system, System R, which a group at the IBM San Jose Research Laboratory had developed during the 1970s. Chamberlin and Boyce's first attempt at a relational database language was SQUARE (Specifying Queries in A Relational Environment), but it was difficult to use due to its subscript/superscript notation. After moving to the San Jose Research Laboratory in 1973, they began work on a sequel to SQUARE. The original name SEQUEL, widely regarded as a pun on QUEL, the query language of Ingres, was later changed to SQL (dropping the vowels) because "SEQUEL" was a trademark of the UK-based Hawker Siddeley Dynamics Engineering Limited company. The label SQL later became the acronym for Structured Query Language. History: After testing SQL at customer test sites to determine the usefulness and practicality of the system, IBM began developing commercial products based on its System R prototype, including System/38, SQL/DS, and IBM Db2, which were commercially available in 1979, 1981, and 1983, respectively. In the late 1970s, Relational Software, Inc.
(now Oracle Corporation) saw the potential of the concepts described by Codd, Chamberlin, and Boyce, and developed its own SQL-based RDBMS with aspirations of selling it to the U.S. Navy, the Central Intelligence Agency, and other U.S. government agencies. In June 1979, Relational Software introduced one of the first commercially available implementations of SQL, Oracle V2 (Version 2) for VAX computers. History: By 1986, the ANSI and ISO standards groups had officially adopted the standard "Database Language SQL" language definition. New versions of the standard were published in 1989, 1992, 1996, 1999, 2003, 2006, 2008, 2011, 2016 and, most recently, 2023. Syntax: The SQL language is subdivided into several language elements (a worked example appears below), including: clauses, which are constituent components of statements and queries and are in some cases optional; expressions, which can produce either scalar values or tables consisting of columns and rows of data; predicates, which specify conditions that can be evaluated to SQL three-valued logic (3VL) (true/false/unknown) or Boolean truth values, and which are used to limit the effects of statements and queries or to change program flow; queries, which retrieve data based on specific criteria and are an important element of SQL; and statements, which may have a persistent effect on schemata and data, or may control transactions, program flow, connections, sessions, or diagnostics. Syntax: SQL statements also include the semicolon (";") statement terminator. Though not required on every platform, it is defined as a standard part of the SQL grammar. Insignificant whitespace is generally ignored in SQL statements and queries, making it easier to format SQL code for readability. Procedural extensions: SQL is designed for a specific purpose: to query data contained in a relational database. SQL is a set-based, declarative programming language, not an imperative programming language like C or BASIC. However, extensions to Standard SQL add procedural programming language functionality, such as control-of-flow constructs. Procedural extensions: In addition to the standard SQL/PSM extensions and proprietary SQL extensions, procedural and object-oriented programmability is available on many SQL platforms via DBMS integration with other languages. The SQL standard defines SQL/JRT extensions (SQL Routines and Types for the Java Programming Language) to support Java code in SQL databases. Microsoft SQL Server 2005 uses the SQLCLR (SQL Server Common Language Runtime) to host managed .NET assemblies in the database, while prior versions of SQL Server were restricted to unmanaged extended stored procedures, primarily written in C. PostgreSQL lets users write functions in a wide variety of languages, including Perl, Python, Tcl, JavaScript (PL/V8) and C. Interoperability and standardization: Overview: SQL implementations are incompatible between vendors and do not necessarily completely follow standards. In particular, date and time syntax, string concatenation, NULLs, and comparison case sensitivity vary from vendor to vendor. PostgreSQL and Mimer SQL strive for standards compliance, though PostgreSQL does not adhere to the standard in all cases. For example, the folding of unquoted names to lower case in PostgreSQL is incompatible with the SQL standard, which says that unquoted names should be folded to upper case: Foo should thus be equivalent to FOO, not foo, according to the standard.
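To make the language elements above concrete, here is a minimal sketch using Python's bundled sqlite3 module (chosen only because it ships with the standard library; the table and its contents are invented for illustration). The comments map pieces of the statement to the elements named under Syntax.

```python
import sqlite3

# In-memory database; the "users" table and its rows are invented
# purely to illustrate the language elements described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER);")
conn.executemany("INSERT INTO users VALUES (?, ?);",
                 [("Ada", 36), ("Grace", 45), ("Alan", 41)])

# A single declarative statement retrieves many records at once:
#   SELECT / FROM / WHERE / ORDER BY  -> clauses
#   name || ' (' || age || ')'        -> an expression producing a scalar value
#   age >= 40                         -> a predicate (evaluates to 3VL truth values)
#   the trailing semicolon            -> the standard statement terminator
query = """
    SELECT name || ' (' || age || ')' AS label
    FROM users
    WHERE age >= 40
    ORDER BY age;
"""
for (label,) in conn.execute(query):
    print(label)   # prints: Alan (41)  then  Grace (45)
```

Note that nothing in the query says how to find the matching rows; whether an index is used is left entirely to the database engine, which is exactly the second advantage over ISAM/VSAM-style APIs described above.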
Interoperability and standardization: Popular implementations of SQL commonly omit support for basic features of Standard SQL, such as the DATE or TIME data types. The most obvious such examples, and incidentally the most popular commercial and proprietary SQL DBMSs, are Oracle (whose DATE behaves as DATETIME and which lacks a TIME type) and MS SQL Server (before the 2008 version). As a result, SQL code can rarely be ported between database systems without modifications. Interoperability and standardization: Reasons for incompatibility: Several reasons account for this lack of portability between database systems. The complexity and size of the SQL standard mean that most implementers do not support the entire standard. The standard does not specify database behavior in several important areas (e.g., indices, file storage), leaving implementations to decide how to behave. The SQL standard precisely specifies the syntax that a conforming database system must implement; however, the standard's specification of the semantics of language constructs is less well-defined, leading to ambiguity. Many database vendors have large existing customer bases; where a newer version of the SQL standard conflicts with the prior behavior of the vendor's database, the vendor may be unwilling to break backward compatibility. Little commercial incentive exists for vendors to make changing database suppliers easier (see vendor lock-in). Users evaluating database software tend to place other factors, such as performance, higher in their priorities than standards conformance. Standardization history: SQL was adopted as a standard by ANSI in 1986 as SQL-86 and by ISO in 1987. It is maintained by ISO/IEC JTC 1, Information technology, Subcommittee SC 32, Data management and interchange. Interoperability and standardization: Until 1996, the National Institute of Standards and Technology (NIST) data-management standards program certified SQL DBMS compliance with the SQL standard; vendors now self-certify the compliance of their products. The original standard declared that the official pronunciation for "SQL" was the initialism "ess cue el". Regardless, many English-speaking database professionals (including Donald Chamberlin himself) use the acronym-like pronunciation "sequel", mirroring the language's prerelease development name, SEQUEL. The SQL standard has gone through a number of revisions. Current standard: The standard is commonly denoted by the pattern ISO/IEC 9075-n:yyyy Part n: title or, as a shortcut, ISO/IEC 9075. Interested parties may purchase the standards documents from ISO, IEC, or ANSI; some old drafts are freely available. ISO/IEC 9075 is complemented by ISO/IEC 13249, SQL Multimedia and Application Packages, and by some Technical Reports. Interoperability and standardization: Anatomy of the SQL standard: The SQL standard is divided into 11 parts, with gaps in the numbering due to the withdrawal of outdated parts. Extensions to the SQL standard: ISO/IEC 9075 is complemented by ISO/IEC 13249, SQL Multimedia and Application Packages, a closely related but separate standard developed by the same committee. It defines interfaces and packages based on SQL, with the aim of unified access to typical database applications like text, pictures, data mining, or spatial data. Technical reports: ISO/IEC 9075 is also accompanied by a series of Technical Reports, published as ISO/IEC TR 19075. These Technical Reports explain the justification for and usage of some features of SQL, giving examples where appropriate.
The Technical Reports are non-normative; if there is any discrepancy from 9075, the text in 9075 holds. Alternatives: A distinction should be made between alternatives to SQL as a language and alternatives to the relational model itself. Various relational alternatives to the SQL language have been proposed; see navigational database and NoSQL for alternatives to the relational model. Distributed SQL processing: Distributed Relational Database Architecture (DRDA) was designed by a workgroup within IBM from 1988 to 1994. DRDA enables network-connected relational databases to cooperate to fulfill SQL requests. An interactive user or program can issue SQL statements to a local RDB and receive tables of data and status indicators in reply from remote RDBs. SQL statements can also be compiled and stored in remote RDBs as packages and then invoked by package name. This is important for the efficient operation of application programs that issue complex, high-frequency queries; it is especially important when the tables to be accessed are located in remote systems. Distributed SQL processing: The messages, protocols, and structural components of DRDA are defined by the Distributed Data Management Architecture. Distributed SQL processing à la DRDA is distinct from contemporary distributed SQL databases. Criticisms: Design: SQL deviates in several ways from its theoretical foundation, the relational model and its tuple calculus. In that model, a table is a set of tuples, while in SQL, tables and query results are lists of rows: the same row may occur multiple times, and the order of rows can be employed in queries (e.g., in the LIMIT clause). Criticisms: Critics argue that SQL should be replaced with a language that returns strictly to the original foundation: for example, see The Third Manifesto by Hugh Darwen and C. J. Date (2006, ISBN 0-321-39942-0). Criticisms: Orthogonality and completeness: Early specifications did not support major features such as primary keys; result sets could not be named, and subqueries had not been defined. These were added in 1992. The lack of sum types has been described as a roadblock to full use of SQL's user-defined types; JSON support, for example, needed to be added by a new standard in 2016. Criticisms: Null: The concept of Null is the subject of some debate. The Null marker indicates the absence of a value and is distinct from a value of 0 for an integer column or an empty string for a text column. Nulls give rise to SQL's three-valued logic, a concrete implementation of general 3-valued logic (see the sketch at the end of this entry). Duplicates: Another popular criticism is that SQL allows duplicate rows, making integration with languages such as Python, whose data types cannot easily represent such multisets accurately, more difficult in terms of parsing and modularity. This is usually avoided by declaring a primary key, or a unique constraint, on one or more columns that uniquely identify a row in the table. Impedance mismatch: In a sense similar to the object–relational impedance mismatch, a mismatch occurs between the declarative SQL language and the procedural languages in which SQL is typically embedded. SQL data types: The SQL standard defines three kinds of data types (chapter 4.1.1 of SQL/Foundation): predefined data types, constructed types, and user-defined types. Constructed types are one of ARRAY, MULTISET, REF(erence), or ROW.
User-defined types are comparable to classes in object-oriented languages, with their own constructors, observers, mutators, methods, inheritance, overloading, overriding, interfaces, and so on. Predefined data types are intrinsically supported by the implementation.
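The Null and duplicate-row behaviour described under Criticisms can be observed directly. Here is a minimal sketch, again using Python's bundled sqlite3 module with an invented single-column table: comparisons with NULL evaluate to unknown, so neither x = NULL nor x <> NULL selects the NULL row, and a table without a key stores duplicates happily.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER);")            # no primary key
conn.executemany("INSERT INTO t VALUES (?);", [(1,), (1,), (None,)])

# Duplicates: SQL tables are multisets of rows, not sets.
print(conn.execute("SELECT COUNT(*) FROM t WHERE x = 1;").fetchone())     # (2,)

# Three-valued logic: any comparison with NULL is 'unknown', and WHERE
# keeps only rows whose predicate is true, so neither query sees the NULL row.
print(conn.execute("SELECT COUNT(*) FROM t WHERE x = NULL;").fetchone())  # (0,)
print(conn.execute("SELECT COUNT(*) FROM t WHERE x <> NULL;").fetchone()) # (0,)

# The Null marker has to be tested with IS NULL instead.
print(conn.execute("SELECT COUNT(*) FROM t WHERE x IS NULL;").fetchone()) # (1,)
```

Declaring the column as x INTEGER PRIMARY KEY would reject the second insert of 1 and thereby restore set semantics for the table.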