text stringlengths 60 353k | source stringclasses 2 values |
|---|---|
**Priest (tool)**
Priest (tool):
A priest (poacher's, game warden's or angler's "priest"), sometimes called a fish bat, or "persuader" is a tool for killing game or fish.
The name "priest" comes from the notion of administering the "last rites" to the fish or game. Anglers often use priests to quickly kill fish.
Description:
Priests usually come in the form of a heavy metal head attached to a metal or wooden stick. The small baton is a blunt instrument used for quickly killing fish or game. Early versions are made of lignum vitae (Latin for "wood of life"), the densest hardwood. One example is described as "Lead filled head. Brass ring to handle. With large Head for dispatching Game. Size overall 14 inches long".
In culture:
Identified as a "keeper's priest" the tool is a featured murder weapon in Series 12 of the BBC's Dalziel and Pascoe, Episodes 2 and 3, "Under Dark Stars", which left a round bruised mark on impact.
Used as the murder weapon of convenience in Series 8 of the BBC's Father Brown, Episode 7, "The River Corrupted," thereby framing the owner of the tool.
Tommy "Fishpriest" Barth (portrayed by Ethan Hawk) the main character in the "Fishpriest" podcast series, is nicknamed after his weapon of choice. "Poacher's Priest" is the name of a 2023 novel by Samuel Mills. In the story, the protagonist Odilio Brimble uses a priest to club a salmon in front of his horrified daughter. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Quadray coordinates**
Quadray coordinates:
Quadray coordinates, also known as caltrop, tetray or Chakovian coordinates, were developed by Darrel Jarmusch and others, as another take on simplicial coordinates, a coordinate system using a simplex or tetrahedron as its basis polyhedron.
Geometric definition:
The four basis (but not necessarily unit) vectors stem from the center of a regular tetrahedron and go to its four corners. Their coordinate addresses are (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0) and (0, 0, 0, 1) respectively. These may be positively scaled without rotation (e.g. negation) and linearly combined to span conventional XYZ space, with at least one of the four coordinates unneeded (set to zero).
Pedagogical significance:
A typical application might set the edges of the basis tetrahedron as unit. The tetrahedron itself may also be defined as the unit of volume (see below).
The four quadrays may be linearly combined to provide integer coordinates for the inverse tetrahedron (0,1,1,1), (1,0,1,1), (1,1,0,1), (1,1,1,0), and for the cube, octahedron, rhombic dodecahedron and cuboctahedron of volumes 3, 4, 6 and 20 respectively, given the starting tetrahedron of unit volume.
Pedagogical significance:
For example, given A, B, C, D as (1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1) respectively, the vertices of an octahedron with the same edge length and volume four would be A + B, A + C, A + D, B + C, B + D, C + D or all eight permutations of {1,1,0,0}. The 12 permutations of {2,1,1,0} define the vertices of the volume 20 cuboctahedron centered at (0,0,0,0). These vectors point from any given sphere to its 12 surrounding neighbors in the cubic close packing (CCP), equivalently the IVM (isotropic vector matrix) in Synergetics. Therefore CCP ball centers all have non-negative integer coordinates.
Pedagogical significance:
If one now calls this volume "4D" as in "four-dimensional" or "four-directional" we have primed the pump for an understanding of R. Buckminster Fuller's "4D geometry," or Synergetics. In this American transcendentalist philosophy, the regular tetrahedron of edges one, as defined by four intertangent uni-radius balls, is taken as unit of volume. A set of familiar convex polyhedra, termed "the concentric hierarchy" is nested around it, per the above table, such that the cube has volume 3, the octahedron volume 4, rhombic dodecahedron volume 6, and cuboctahedron volume 20. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Wolfgang Straub**
Wolfgang Straub:
Wolfgang Straub (born 1969, in Waiblingen) is a Swiss lawyer and photographer.
Photographic works:
His series of still lifes ‘Le dictionnaire des analphabètes’ deal with visual evidence of the paradoxical.His 'Enchanted Gardens' series deals with conveying an emotional content by means of altering forms of expression.Straub's works are present in several public and private collections.
Photographic publications:
Enchanted Gardens, Wyss Bern/Museum Franz Gertsch Burgdorf 2010, ISBN 978-3-033-02263-8 and ISBN 978-3-7285-2010-4
Solo exhibitions:
2009 Museum Franz Gertsch, Burgdorf 2010 Leica Gallery Switzerland, Biel | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Types of cheese**
Types of cheese:
There are many different types of cheese. Cheeses can be grouped or classified according to criteria such as length of fermentation, texture, methods of production, fat content, animal milk, and country or region of origin. The method most commonly and traditionally used is based on moisture content, which is then further narrowed down by fat content and curing or ripening methods. The criteria may either be used singly or in combination, with no single method being universally used.The combination of types produces around 51 different varieties recognized by the International Dairy Federation, over 400 identified by Walter and Hargrove, over 500 by Burkhalter, and over 1,000 by Sandine and Elliker. Some attempts have been made to rationalise the classification of cheese; a scheme was proposed by Pieter Walstra that uses the primary and secondary starter combined with moisture content, and Walter and Hargrove suggested classifying by production methods. This last scheme results in 18 types, which are then further grouped by moisture content.
Fresh and whey cheeses:
The main factor in categorizing these cheeses is age. Fresh cheeses without additional preservatives can spoil in a matter of days.For these simplest cheeses, milk is curdled and drained, with little other processing. Examples include cottage cheese, cream cheese, curd cheese, farmer cheese, caș, chhena, fromage blanc, queso fresco, paneer, fresh goat's milk chèvre, Breingen-Tortoille, Irish Mellieriem Rochers and Belgian Mellieriem Rochers. Such cheeses are often soft and spreadable, with a mild flavour.Whey cheeses are fresh cheeses made from whey, a by-product from the process of producing other cheeses which would otherwise be discarded. Corsican brocciu, Italian ricotta, Romanian urda, Greek mizithra, Croatian skuta, Cypriot anari cheese, Himalayan chhurpi and Norwegian Brunost are examples. Brocciu is mostly eaten fresh, and is as such a major ingredient in Corsican cuisine, but it can also be found in an aged form.Some fresh cheeses such as fromage blanc and fromage frais (the latter differing from the former in that it contains live cultures) are commonly sold and consumed as desserts.
Stretched curd cheeses:
Stretched curd, for which the Italian term pasta filata is often used, is a group of cheeses where the hot curd is stretched, today normally mechanically, producing various effects. Many traditional pasta filata cheeses such as the Italian mozzarella and halloumi from the Eastern Mediterranean also fall into the fresh cheese category. Fresh curds are stretched and kneaded in hot water to form a ball of mozzarella, which in southern Italy is usually eaten within a few hours of being made. Stored in brine, it can easily be shipped, and it is known worldwide for its use on pizza. But not all stretch-curd cheeses are fresh; the Italian provolone, Ragusano, caciocavallo and many others are hard or semi-hard, and aged. Oaxaca cheese from Mexico is semi-hard, but not aged. Like the pressed cooked cheeses (below), all these are made using thermophilic lactic fermentation starters. Many of the various types of string cheese are made this way.
Cooked pressed cheeses:
Swiss-type cheeses, also known as Alpine cheeses, are a group of hard or semi-hard cheeses with a distinct character, whose origins lie in the Alps of Europe, although they are now eaten and imitated in most cheesemaking parts of the world. They are classified as "cooked", meaning made using thermophilic lactic fermentation starters, incubating the curd with a period at a high temperature of 45°C or more. Since they are later pressed to expel excess moisture, the group are also described as "'cooked pressed cheeses'", fromages à pâte pressée cuite in French. Their distinct character arose from the requirements of cheese made in the summer on high Alpine grasslands (alpage in French), and then transported with the cows down to the valleys in the winter, in the historic culture of Alpine transhumance. Traditionally the cheeses were made in large rounds or "wheels" with a hard rind, and were robust enough for both keeping and transporting.The best known cheeses of the type, all made from cow's milk, include the Swiss Emmental, Gruyère and Appenzeller, as well as the French Beaufort and Comté (from the Jura Mountains, near the Alps). Both countries have many other traditional varieties, as do the Alpine regions of Austria (Alpkäse) and Italy (Asiago), though these have not achieved the same degree of intercontinental fame. Most global modern production is industrial, and usually made in rectangular blocks, and by wrapping in plastic no rind is allowed to form. Historical production was all with "raw" milk, although the periods of high heat in making largely controlled unwelcome bacteria, but modern production may use thermized or pasteurized milk.The general eating characteristics of the Alpine cheeses are a firm but still elastic texture, flavor that is not sharp, acidic or salty, but rather nutty and buttery. When melted, which they often are in cooking, they are "gooey", and "slick, stretchy and runny".Another related group of cooked pressed cheeses is the very hard Italian "grana" cheeses; the best known are Parmesan and Grana Padano. Although their origins lie in the flat and (originally) swampy Po Valley, they share the broad Alpine cheesemaking process, and began after local monasteries initiated drainage programmes from the 11th century onwards. These were Benedictine and Cistercian monasteries, both with sister-houses benefiting from Alpine cheesemaking. They seem to have borrowed their techniques from them, but produced very different cheeses, using much more salt, and less heating, which suited the local availability of materials.
Moisture: soft to hard:
Categorizing cheeses by moisture content or firmness is a common but inexact practice. The lines between soft, semi-soft, semi-hard and hard are arbitrary, and many types of cheese are made in softer or firmer variants. The factor that controls cheese hardness is moisture content, which depends on the pressure with which it is packed into molds, and upon aging time.
Moisture: soft to hard:
Soft cheese Cream cheeses are not matured. Brie and Neufchâtel are soft-type cheeses that mature for no more than a month. Neufchâtel can be sold after 10 days of maturation.
Semi-soft cheese Semi-soft cheeses, and the sub-group Monastery cheeses, have a high moisture content and tend to be mild-tasting. Well-known varieties include Havarti, Munster, Port Salut and Butterkäse.
Moisture: soft to hard:
Medium-hard cheese Cheeses that range in texture from semi-soft to firm include Swiss-style cheeses such as Emmental and Gruyère. The same bacteria that give such cheeses their eyes also contribute to their aromatic and sharp flavours. Other semi-soft to firm cheeses include Gouda, Edam, Jarlsberg, Cantal, and Kashkaval/Cașcaval. Cheeses of this type are ideal for melting and are often served on toast for quick snacks or simple meals.
Moisture: soft to hard:
Semi-hard cheese Harder cheeses have a lower moisture content than softer cheeses. They are generally packed into molds under more pressure and aged for a longer time than the soft cheeses. Cheeses that are classified as semi-hard to hard include the familiar Cheddar, originating in the village of Cheddar in England but now used as a generic term for this style of cheese, of which varieties are imitated worldwide and are marketed by strength or the length of time they have been aged.
Moisture: soft to hard:
Cheddar is one of a family of semi-hard or hard cheeses (including Cheshire and Gloucester), whose curd is cut, gently heated, piled, and stirred before being pressed into forms. Colby and Monterey Jack are similar but milder cheeses; their curd is rinsed before it is pressed, washing away some acidity and calcium. A similar curd-washing takes place when making the Dutch cheeses Edam and Gouda.
Moisture: soft to hard:
Hard cheese Hard cheeses—grating cheeses such as Grana Padano, Parmesan or Pecorino—are quite firmly packed into large forms and aged for months or years.
Source of milk:
Some cheeses are categorized by the source of the milk used to produce them or by the added fat content of the milk from which they are produced. While most of the world's commercially available cheese is made from cow's milk, many parts of the world also produce cheese from goats and sheep. Examples include Roquefort (produced in France) and Pecorino (produced in Italy) from ewe's milk. One farm in Sweden also produces cheese from moose's milk. Sometimes cheeses marketed under the same name are made from milk of different species—feta cheeses, for example, are made from sheep's milk in Greece. Pule cheese are made from Balkan donkey milk and goat's milk (produced in Serbia).
Source of milk:
Double cream cheeses are soft cheeses of cows' milk enriched with cream so that their fat in dry matter (FDM or FiDM) content is 60–75%; triple cream cheeses are enriched to at least 75%.
Mold:
There are three main categories of cheese in which the presence of mold is an important feature: soft-ripened cheeses, washed-rind cheeses and blue cheeses.
Mold:
Soft-ripened Soft-ripened cheeses begin firm and rather chalky in texture, but are aged from the exterior inwards by exposing them to mold. The mold may be a velvety bloom of P. camemberti that forms a flexible white crust and contributes to the smooth, runny, or gooey textures and more intense flavours of these aged cheeses. Brie and Camembert, the most famous of these cheeses, are made by allowing white mold to grow on the outside of a soft cheese for a few days or weeks. Goat's milk cheeses are often treated in a similar manner, sometimes with white molds (Chèvre-Boîte) and sometimes with blue.
Mold:
Washed-rind Washed-rind cheeses are soft in character and ripen inwards like those with white molds; however, they are treated differently. Washed-rind cheeses are periodically cured in a solution of saltwater brine or mold-bearing agents that may include beer, wine, brandy and spices, making their surfaces amenable to a class of bacteria (Brevibacterium linens, the reddish-orange smear bacteria) that impart pungent odors and distinctive flavours and produce a firm, flavourful rind around the cheese. Washed-rind cheeses can be soft (Limburger), semi-hard, or hard (Appenzeller). The same bacteria can also have some effect on cheeses that are simply ripened in humid conditions, like Camembert. The process requires regular washings, particularly in the early stages of production, making it quite labor-intensive compared to other methods of cheese production.
Mold:
Smear-ripened S-rind cheeses are also smear-ripened with solutions of bacteria or fungi (most commonly Brevibacterium linens, Debaryomyces hansenii or Geotrichum candidum), which usually gives them a stronger flavor as the cheese matures. In some cases, older cheeses are smeared on young cheeses to transfer the microorganisms. Many, but not all, of these cheeses have a distinctive pinkish or orange coloring of the exterior. Unlike with other washed-rind cheeses, the washing is done to ensure uniform growth of desired bacteria or fungi and to prevent the growth of undesired molds. Examples of smear-ripened cheeses include Munster and Port Salut.
Mold:
Blue So-called blue cheese is created by inoculating a cheese with Penicillium roqueforti or Penicillium glaucum. This is done while the cheese is still in the form of loosely pressed curds, and may be further enhanced by piercing a ripening block of cheese with skewers in an atmosphere in which the mold is prevalent. The mold grows within the cheese as it ages. These cheeses have distinct blue veins, which gives them their name and, often, assertive flavours. The molds range from pale green to dark blue, and may be accompanied by white and crusty brown molds. Their texture can be soft or firm. Some of the most renowned cheeses in this type include Roquefort, Gorgonzola and Stilton.
Granular:
Granular cheese is a type of cheese produced by repeatedly stirring and draining a mixture of curd and whey. It can refer to a wide variety of cheeses, including the grana cheeses such as Parmigiano-Reggiano.
Brined:
Brined or pickled cheese is matured in a solution of brine in an airtight or semi-permeable container. This process gives the cheese good stability, inhibiting bacterial growth even in hot environments. Brined cheeses may be soft or hard, varying in moisture content, and in color and flavor, according to the type of milk used. All will be rindless, and generally taste clean, salty and acidic when fresh, developing some piquancy when aged, and most will be white. Varieties of brined cheese include bryndza, feta, halloumi, sirene, and telemea. Brined cheese is the main type of cheese produced and eaten in the Middle East and Mediterranean areas.
Processed:
Processed cheese is made from traditional cheese and emulsifying salts, often with the addition of milk, more salt, preservatives, and food coloring. Its texture is consistent, and it melts smoothly. It is sold packaged and either pre-sliced or unsliced, in several varieties. Some are sold as sausage-like logs and chipolatas (mostly in Germany and the US), and some are molded into the shape of animals and objects. It is also available as "Easy Cheese", a product distributed by Mondelez International, that is packaged in aerosol cans and available in some countries.Some, if not most, varieties of processed cheese are made using a combination of real cheese waste (which is steam-cleaned, boiled and further processed), whey powders, and various mixtures of vegetable oils, palm oils or fats. Some processed-cheese slices contain as little as two to six percent cheese; some have smoke flavours added. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Multiplex (juggling)**
Multiplex (juggling):
Multiplexing is a juggling trick or form of toss juggling where more than one ball is in the hand at the time of the throw. The opposite, a squeeze catch, is when more than one ball is caught in the hand simultaneously on the same beat. If a multiplex throw were time-reversed, it would be a squeeze catch.
Terminology:
Number of props Multiplex throws are given different names depending on the number of balls used, for example a one-ball throw (with one ball held) would be called a uniplex, a two-ball throw would be called a duplex, and a three-ball throw, a triplex. A four and a five-ball throw would be called a quadruplex and a quintuplex, respectively.
Throw types Multiplex throws are generally grouped into different categories: Stack, Split, Cut, and Slice.
Stacked multiplex throws involve throwing both balls from one hand and catching them both in the same or other hand.
Split multiplex throws, as the name suggests, involve throwing both balls from one hand, "splitting" them in the air, and catching them in separate hands.
Cut multiplex throws involve throwing both balls to the same or other hand like a stacked multiplex but in a staggered fashion so the bottom ball of the duplex is caught, and re-thrown before the top ball is caught. These are used in the Shower Explosion family of multiplex tricks.
Terminology:
Sliced multiplex throws involve throwing both balls with one ball going directly to the opposite hand as a pass. This throw is usually made with the catching hand directly above the throwing hand so that when the throw is made, one ball goes straight up into the catching hand, with little to no air time, while the remaining ball is caught on a later beat.
Terminology:
In the case of triplexes, a split can result in one or two balls being caught in the opposite hand. An 'inside' split triplex denotes one ball being caught in the opposite hand, due to the single ball being on the inside of the triplex, and an 'outside' split triplex denotes two balls being caught in the opposite hand.
A cut and split multiplex can be combined in a triplex and this is referred to as a cut-split triplex indicating that both types of throw are involved.
Terminology:
Air position Multiplex throws can be further classified depending upon the position of the balls in the air after the throw is made. In the case of duplexes, two balls side by side is a horizontal duplex, and two balls one above the other is a vertical duplex. This terminology can be applied to either the stack, split or cut duplex types of throw. So, a vertical stacked duplex refers to two balls being thrown together, one above the other in the air, and caught together in the same or other hand. A horizontal split duplex refers to two balls being thrown together, side by side in the air, and caught in separate hands. In the case of triplexes, three balls one above the other is a vertical triplex and three balls in a triangle pattern is a triangle triplex.
Notation:
Siteswap notation is a way of writing down a key feature of juggling patterns: the order in which the balls are thrown. Multiplex throws are notated inside square brackets [ ]. For example, two balls held in the hand for a beat is notated [22] while [54] represents a split duplex where one ball is rethrown five beats later and the other ball four beats later.
Notation:
When working out the average of a multiplex siteswap, to determine the number of balls in the pattern, the throws inside the brackets are added together but treated as one throw. So, [43]23 = [4+3]+2+3 = 12. 12 / 3 (throws) = a 4-ball pattern.
It is possible to combine two siteswaps to make a new trick. The three-ball siteswap '423' and the two-ball siteswap '330' combined give the five-ball siteswap [43][32]3. Since siteswaps can be rotated, 330 can also be read as '033' and '303' and thus, when combined with 423, give the five-ball siteswaps 4[32][33] and [43]2[33] respectively.
If two siteswaps of differing lengths are combined, for instance 423 and 31, the length of the new siteswap can be determined by multiplying the two lengths together. Using the aforementioned siteswaps as an example, 423 (length 3) and 31 (length 2) give the siteswap [43][21][33][41][32][31] (length 6).
Styles:
Claymotion Claymotion is a style of multiplex juggling that was developed by British juggler Richard Clay in the early 1990s and was first given the name 'Claymotion' by Erica Kelch-Slesnick in 1997. Claymotion juggling is a sub-category of multiplex juggling that has a start-stop rhythm to it, not unlike cigar box juggling, so there are times when there are no balls in the air. Emphasis is placed on the graceful movements of the arms and so throws are typically low and controlled. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Life support (aviation)**
Life support (aviation):
Life support, or aircrew life support, in aviation, is the field centered on, and related technologies used in, ensuring the safety of aircrew, particularly military aviation. This includes safety equipment capable of helping them survive in the case of a crash, accident, or malfunction.
Life support functions and technology are also prominent in the field of human spaceflight. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Audio Interchange File Format**
Audio Interchange File Format:
Audio Interchange File Format (AIFF) is an audio file format standard used for storing sound data for personal computers and other electronic audio devices. The format was developed by Apple Inc. in 1988 based on Electronic Arts' Interchange File Format (IFF, widely used on Amiga systems) and is most commonly used on Apple Macintosh computer systems.
Audio Interchange File Format:
The audio data in most AIFF files is uncompressed pulse-code modulation (PCM). This type of AIFF file uses much more disk space than lossy formats like MP3—about 10 MB for one minute of stereo audio at a sample rate of 44.1 kHz and a bit depth of 16 bits. There is also a compressed variant of AIFF known as AIFF-C or AIFC, with various defined compression codecs.
Audio Interchange File Format:
In addition to audio data, AIFF can include loop point data and the musical note of a sample, for use by hardware samplers and musical applications.
The file extension for the standard AIFF format is .aiff or .aif. For the compressed variants it is supposed to be .aifc, but .aiff or .aif are accepted as well by audio applications supporting the format.
AIFF on macOS:
With the development of the OS X operating system now known as macOS, Apple created a new type of AIFF which is, in effect, an alternative little-endian byte order format.Because the AIFF architecture has no provision for alternative byte order, Apple used the existing AIFF-C compression architecture, and created a "pseudo-compressed" codec called sowt (twos spelled backwards). The only difference between a standard AIFF file and an AIFF-C/sowt file is the byte order; there is no compression involved at all.Apple uses this new little-endian AIFF type as its standard on macOS. When a file is imported to or exported from iTunes in "AIFF" format, it is actually AIFF-C/sowt that is being used. When audio from an audio CD is imported by dragging to the macOS Desktop, the resulting file is also an AIFF-C/sowt. In all cases, Apple refers to the files simply as "AIFF", and uses the ".aiff" extension.
AIFF on macOS:
For the vast majority of users this technical situation is completely unnoticeable and irrelevant. The sound quality of standard AIFF and AIFF-C/sowt are identical, and the data can be converted back and forth without loss. Users of older audio applications, however, may find that an AIFF-C/sowt file will not play, or will prompt the user to convert the format on opening, or will play as static.
AIFF on macOS:
All traditional AIFF and AIFF-C files continue to work normally on macOS, and many third-party audio applications as well as hardware continue to use the standard AIFF big-endian byte order.
AIFF Apple Loops:
Apple has also created another recent extension to the AIFF format in the form of Apple Loops used by GarageBand and Logic Pro, which allows the inclusion of data for pitch and tempo shifting by an application in the more common variety, and MIDI-sequence data and references to GarageBand playback instruments in another variety.
Apple Loops use either the .aiff (or .aif) or .caf extension regardless of type.
Data format:
An AIFF file is divided into a number of chunks. Each chunk is identified by a chunk ID more broadly referred to as FourCC.
Types of chunks found in AIFF files: Common Chunk (required) Sound Data Chunk (required) Marker Chunk Instrument Chunk Comment Chunk Name Chunk Author Chunk Copyright Chunk Annotation Chunk Audio Recording Chunk MIDI Data Chunk Application Chunk ID3 Chunk
Metadata:
AIFF files can store metadata in Name, Author, Comment, Annotation, and Copyright chunks. An ID3v2 tag chunk can also be embedded in AIFF files, as well as an Application Chunk with Extensible Metadata Platform (XMP) data in it.
Common compression types:
AIFF supports only uncompressed PCM data. AIFF-C also supports compressed audio formats, which can be specified in the "COMM" chunk. The compression type is "NONE" for PCM audio data. The compression type is accompanied by a printable name. Common compression types and names include, but are not limited to: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fault tree analysis**
Fault tree analysis:
Fault tree analysis (FTA) is a type of failure analysis in which an undesired state of a system is examined. This analysis method is mainly used in safety engineering and reliability engineering to understand how systems can fail, to identify the best ways to reduce risk and to determine (or get a feeling for) event rates of a safety accident or a particular system level (functional) failure. FTA is used in the aerospace, nuclear power, chemical and process, pharmaceutical, petrochemical and other high-hazard industries; but is also used in fields as diverse as risk factor identification relating to social service system failure. FTA is also used in software engineering for debugging purposes and is closely related to cause-elimination technique used to detect bugs.
Fault tree analysis:
In aerospace, the more general term "system failure condition" is used for the "undesired state" / top event of the fault tree. These conditions are classified by the severity of their effects. The most severe conditions require the most extensive fault tree analysis. These system failure conditions and their classification are often previously determined in the functional hazard analysis.
Usage:
Fault tree analysis can be used to: understand the logic leading to the top event / undesired state.
show compliance with the (input) system safety / reliability requirements.
prioritize the contributors leading to the top event- creating the critical equipment/parts/events lists for different importance measures monitor and control the safety performance of the complex system (e.g., is a particular aircraft safe to fly when fuel valve x malfunctions? For how long is it allowed to fly with the valve malfunction?).
minimize and optimize resources.
assist in designing a system. The FTA can be used as a design tool that helps to create (output / lower level) requirements.
function as a diagnostic tool to identify and correct causes of the top event. It can help with the creation of diagnostic manuals / processes.
History:
Fault tree analysis (FTA) was originally developed in 1962 at Bell Laboratories by H.A. Watson, under a U.S. Air Force Ballistics Systems Division contract to evaluate the Minuteman I Intercontinental Ballistic Missile (ICBM) Launch Control System. The use of fault trees has since gained widespread support and is often used as a failure analysis tool by reliability experts. Following the first published use of FTA in the 1962 Minuteman I Launch Control Safety Study, Boeing and AVCO expanded use of FTA to the entire Minuteman II system in 1963–1964. FTA received extensive coverage at a 1965 System Safety Symposium in Seattle sponsored by Boeing and the University of Washington. Boeing began using FTA for civil aircraft design around 1966.Subsequently, within the U.S. military, application of FTA for use with fuses was explored by Picatinny Arsenal in the 1960s and 1970s. In 1976 the U.S. Army Materiel Command incorporated FTA into an Engineering Design Handbook on Design for Reliability. The Reliability Analysis Center at Rome Laboratory and its successor organizations now with the Defense Technical Information Center (Reliability Information Analysis Center, and now Defense Systems Information Analysis Center) has published documents on FTA and reliability block diagrams since the 1960s. MIL-HDBK-338B provides a more recent reference.In 1970, the U.S. Federal Aviation Administration (FAA) published a change to 14 CFR 25.1309 airworthiness regulations for transport category aircraft in the Federal Register at 35 FR 5665 (1970-04-08). This change adopted failure probability criteria for aircraft systems and equipment and led to widespread use of FTA in civil aviation. In 1998, the FAA published Order 8040.4, establishing risk management policy including hazard analysis in a range of critical activities beyond aircraft certification, including air traffic control and modernization of the U.S. National Airspace System. This led to the publication of the FAA System Safety Handbook, which describes the use of FTA in various types of formal hazard analysis.Early in the Apollo program the question was asked about the probability of successfully sending astronauts to the moon and returning them safely to Earth. A risk, or reliability, calculation of some sort was performed and the result was a mission success probability that was unacceptably low. This result discouraged NASA from further quantitative risk or reliability analysis until after the Challenger accident in 1986. Instead, NASA decided to rely on the use of failure modes and effects analysis (FMEA) and other qualitative methods for system safety assessments. After the Challenger accident, the importance of probabilistic risk assessment (PRA) and FTA in systems risk and reliability analysis was realized and its use at NASA has begun to grow and now FTA is considered as one of the most important system reliability and safety analysis techniques.Within the nuclear power industry, the U.S. Nuclear Regulatory Commission began using PRA methods including FTA in 1975, and significantly expanded PRA research following the 1979 incident at Three Mile Island. This eventually led to the 1981 publication of the NRC Fault Tree Handbook NUREG–0492, and mandatory use of PRA under the NRC's regulatory authority.
History:
Following process industry disasters such as the 1984 Bhopal disaster and 1988 Piper Alpha explosion, in 1992 the United States Department of Labor Occupational Safety and Health Administration (OSHA) published in the Federal Register at 57 FR 6356 (1992-02-24) its Process Safety Management (PSM) standard in 19 CFR 1910.119. OSHA PSM recognizes FTA as an acceptable method for process hazard analysis (PHA).
History:
Today FTA is widely used in system safety and reliability engineering, and in all major fields of engineering.
Methodology:
FTA methodology is described in several industry and government standards, including NRC NUREG–0492 for the nuclear power industry, an aerospace-oriented revision to NUREG–0492 for use by NASA, SAE ARP4761 for civil aerospace, MIL–HDBK–338 for military systems, IEC standard IEC 61025 is intended for cross-industry use and has been adopted as European Norm EN 61025.
Any sufficiently complex system is subject to failure as a result of one or more subsystems failing. The likelihood of failure, however, can often be reduced through improved system design. Fault tree analysis maps the relationship between faults, subsystems, and redundant safety design elements by creating a logic diagram of the overall system.
Methodology:
The undesired outcome is taken as the root ('top event') of a tree of logic. For instance, the undesired outcome of a metal stamping press operation being considered might be a human appendage being stamped. Working backward from this top event it might be determined that there are two ways this could happen: during normal operation or during maintenance operation. This condition is a logical OR. Considering the branch of the hazard occurring during normal operation, perhaps it is determined that there are two ways this could happen: the press cycles and harms the operator, or the press cycles and harms another person. This is another logical OR. A design improvement can be made by requiring the operator to press two separate buttons to cycle the machine—this is a safety feature in the form of a logical AND. The button may have an intrinsic failure rate—this becomes a fault stimulus that can be analyzed. When fault trees are labeled with actual numbers for failure probabilities, computer programs can calculate failure probabilities from fault trees. When a specific event is found to have more than one effect event, i.e. it has impact on several subsystems, it is called a common cause or common mode. Graphically speaking, it means this event will appear at several locations in the tree. Common causes introduce dependency relations between events. The probability computations of a tree which contains some common causes are much more complicated than regular trees where all events are considered as independent. Not all software tools available on the market provide such capability.
Methodology:
The tree is usually written out using conventional logic gate symbols. A cut set is a combination of events, typically component failures, causing the top event. If no event can be removed from a cut set without failing to cause the top event, then it is called a minimal cut set.
Methodology:
Some industries use both fault trees and event trees (see Probabilistic Risk Assessment). An event tree starts from an undesired initiator (loss of critical supply, component failure etc.) and follows possible further system events through to a series of final consequences. As each new event is considered, a new node on the tree is added with a split of probabilities of taking either branch. The probabilities of a range of 'top events' arising from the initial event can then be seen.
Methodology:
Classic programs include the Electric Power Research Institute's (EPRI) CAFTA software, which is used by many of the US nuclear power plants and by a majority of US and international aerospace manufacturers, and the Idaho National Laboratory's SAPHIRE, which is used by the U.S. Government to evaluate the safety and reliability of nuclear reactors, the Space Shuttle, and the International Space Station. Outside the US, the software RiskSpectrum is a popular tool for fault tree and event tree analysis, and is licensed for use at more than 60% of the world's nuclear power plants for probabilistic safety assessment. Professional-grade free software is also widely available; SCRAM is an open-source tool that implements the Open-PSA Model Exchange Format open standard for probabilistic safety assessment applications.
Graphic symbols:
The basic symbols used in FTA are grouped as events, gates, and transfer symbols. Minor variations may be used in FTA software.
Graphic symbols:
Event symbols Event symbols are used for primary events and intermediate events. Primary events are not further developed on the fault tree. Intermediate events are found at the output of a gate. The event symbols are shown below: The primary event symbols are typically used as follows: Basic event – failure or error in a system component or element (example: switch stuck in open position) External event – normally expected to occur (not of itself a fault) Undeveloped event – an event about which insufficient information is available, or which is of no consequence Conditioning event – conditions that restrict or affect logic gates (example: mode of operation in effect)An intermediate event gate can be used immediately above a primary event to provide more room to type the event description.
Graphic symbols:
FTA is a top-to-bottom approach.
Gate symbols Gate symbols describe the relationship between input and output events. The symbols are derived from Boolean logic symbols: The gates work as follows: OR gate – the output occurs if any input occurs.
AND gate – the output occurs only if all inputs occur (inputs are independent from the source).
Exclusive OR gate – the output occurs if exactly one input occurs.
Priority AND gate – the output occurs if the inputs occur in a specific sequence specified by a conditioning event.
Inhibit gate – the output occurs if the input occurs under an enabling condition specified by a conditioning event.
Transfer symbols Transfer symbols are used to connect the inputs and outputs of related fault trees, such as the fault tree of a subsystem to its system. NASA prepared a complete document about FTA through practical incidents.
Basic mathematical foundation:
Events in a fault tree are associated with statistical probabilities or Poisson-Exponentially distributed constant rates. For example, component failures may typically occur at some constant failure rate λ (a constant hazard function). In this simplest case, failure probability depends on the rate λ and the exposure time t: P=1−e−λt where: P≈λt if 0.001 A fault tree is often normalized to a given time interval, such as a flight hour or an average mission time. Event probabilities depend on the relationship of the event hazard function to this interval.
Basic mathematical foundation:
Unlike conventional logic gate diagrams in which inputs and outputs hold the binary values of TRUE (1) or FALSE (0), the gates in a fault tree output probabilities related to the set operations of Boolean logic. The probability of a gate's output event depends on the input event probabilities.
Basic mathematical foundation:
An AND gate represents a combination of independent events. That is, the probability of any input event to an AND gate is unaffected by any other input event to the same gate. In set theoretic terms, this is equivalent to the intersection of the input event sets, and the probability of the AND gate output is given by: P (A and B) = P (A ∩ B) = P(A) P(B)An OR gate, on the other hand, corresponds to set union: P (A or B) = P (A ∪ B) = P(A) + P(B) - P (A ∩ B)Since failure probabilities on fault trees tend to be small (less than .01), P (A ∩ B) usually becomes a very small error term, and the output of an OR gate may be conservatively approximated by using an assumption that the inputs are mutually exclusive events: P (A or B) ≈ P(A) + P(B), P (A ∩ B) ≈ 0An exclusive OR gate with two inputs represents the probability that one or the other input, but not both, occurs: P (A xor B) = P(A) + P(B) - 2P (A ∩ B)Again, since P (A ∩ B) usually becomes a very small error term, the exclusive OR gate has limited value in a fault tree.
Basic mathematical foundation:
Quite often, Poisson-Exponentially distributed rates are used to quantify a fault tree instead of probabilities. Rates are often modeled as constant in time while probability is a function of time. Poisson-Exponential events are modelled as infinitely short so no two events can overlap. An OR gate is the superposition (addition of rates) of the two input failure frequencies or failure rates which are modeled as Poisson point processes. The output of an AND gate is calculated using the unavailability (Q1) of one event thinning the Poisson point process of the other event (λ2). The unavailability (Q2) of the other event then thins the Poisson point process of the first event (λ1). The two resulting Poisson point processes are superimposed according to the following equations.
Basic mathematical foundation:
The output of an AND gate is the combination of independent input events 1 and 2 to the AND gate: Failure Frequency = λ1Q2 + λ2Q1 where Q = 1 - eλt ≈ λt if λt < 0.001 Failure Frequency ≈ λ1λ2t2 + λ2λ1t1 if λ1t1 < 0.001 and λ2t2 < 0.001In a fault tree, unavailability (Q) may be defined as the unavailability of safe operation and may not refer to the unavailability of the system operation depending on how the fault tree was structured. The input terms to the fault tree must be carefully defined.
Analysis:
Many different approaches can be used to model a FTA, but the most common and popular way can be summarized in a few steps. A single fault tree is used to analyze one and only one undesired event, which may be subsequently fed into another fault tree as a basic event. Though the nature of the undesired event may vary dramatically, a FTA follows the same procedure for any undesired event; be it a delay of 0.25 ms for the generation of electrical power, an undetected cargo bay fire, or the random, unintended launch of an ICBM.
Analysis:
FTA analysis involves five steps: Define the undesired event to study.
Analysis:
Definition of the undesired event can be very hard to uncover, although some of the events are very easy and obvious to observe. An engineer with a wide knowledge of the design of the system is the best person to help define and number the undesired events. Undesired events are used then to make FTAs. Each FTA is limited to one undesired event.
Analysis:
Obtain an understanding of the system.
Analysis:
Once the undesired event is selected, all causes with probabilities of affecting the undesired event of 0 or more are studied and analyzed. Getting exact numbers for the probabilities leading to the event is usually impossible for the reason that it may be very costly and time-consuming to do so. Computer software is used to study probabilities; this may lead to less costly system analysis. System analysts can help with understanding the overall system. System designers have full knowledge of the system and this knowledge is very important for not missing any cause affecting the undesired event. For the selected event all causes are then numbered and sequenced in the order of occurrence and then are used for the next step which is drawing or constructing the fault tree.
Analysis:
Construct the fault tree.
After selecting the undesired event and having analyzed the system so that we know all the causing effects (and if possible their probabilities) we can now construct the fault tree. Fault tree is based on AND and OR gates which define the major characteristics of the fault tree.
Evaluate the fault tree.
Analysis:
After the fault tree has been assembled for a specific undesired event, it is evaluated and analyzed for any possible improvement or in other words study the risk management and find ways for system improvement. A wide range of qualitative and quantitative analysis methods can be applied. This step is as an introduction for the final step which will be to control the hazards identified. In short, in this step we identify all possible hazards affecting the system in a direct or indirect way.
Analysis:
Control the hazards identified.
This step is very specific and differs largely from one system to another, but the main point will always be that after identifying the hazards all possible methods are pursued to decrease the probability of occurrence.
Comparison with other analytical methods:
FTA is a deductive, top-down method aimed at analyzing the effects of initiating faults and events on a complex system. This contrasts with failure mode and effects analysis (FMEA), which is an inductive, bottom-up analysis method aimed at analyzing the effects of single component or function failures on equipment or subsystems. FTA is very good at showing how resistant a system is to single or multiple initiating faults. It is not good at finding all possible initiating faults. FMEA is good at exhaustively cataloging initiating faults, and identifying their local effects. It is not good at examining multiple failures or their effects at a system level. FTA considers external events, FMEA does not. In civil aerospace the usual practice is to perform both FTA and FMEA, with a failure mode effects summary (FMES) as the interface between FMEA and FTA.
Comparison with other analytical methods:
Alternatives to FTA include dependence diagram (DD), also known as reliability block diagram (RBD) and Markov analysis. A dependence diagram is equivalent to a success tree analysis (STA), the logical inverse of an FTA, and depicts the system using paths instead of gates. DD and STA produce probability of success (i.e., avoiding a top event) rather than probability of a top event. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Eugeroic**
Eugeroic:
Eugeroics (originally "eugrégorique" or "eugregoric"), also known as wakefulness-promoting agents and wakefulness-promoting drugs, are a class of drugs that promote wakefulness and alertness. They are medically indicated for the treatment of certain sleep disorders including excessive daytime sleepiness (EDS) in narcolepsy or obstructive sleep apnea (OSA). Eugeroics are also often prescribed off-label for the treatment of EDS in idiopathic hypersomnia. In contrast to classical psychostimulants, such as methylphenidate and amphetamine, which are also used in the treatment of these disorders, eugeroics typically do not produce euphoria, and, consequently, have a lower addictive potential.Modafinil and armodafinil are each thought to act as selective, weak, atypical dopamine reuptake inhibitors (DRI), whereas adrafinil acts as a prodrug for modafinil. Other eugeroics include solriamfetol, which acts as a norepinephrine–dopamine reuptake inhibitor (NDRI), and pitolisant, which acts as a histamine 3 (H3) receptor antagonist/inverse agonist.
Examples:
Marketed Armodafinil (Nuvigil) Modafinil (Provigil) Pitolisant (Wakix) Solriamfetol (Sunosi) Discontinued Adrafinil Never marketed Flmodafinil (CRL-40,940) Fluorafinil (CRL-40,941) Fluorenol Methylbisfluoromodafinil 2-Phenyl-3-aminobutane In development Selective orexin receptor agonists (two are currently under development by Takeda, danavorexton and TAK-994) CE-123 is under patent by Red Bull. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Plane partition**
Plane partition:
In mathematics and especially in combinatorics, a plane partition is a two-dimensional array of nonnegative integers πi,j (with positive integer indices i and j) that is nonincreasing in both indices. This means that πi,j≥πi,j+1 and πi,j≥πi+1,j for all i and j.Moreover, only finitely many of the πi,j may be nonzero. Plane partitions are a generalization of partitions of an integer.
Plane partition:
A plane partition may be represented visually by the placement of a stack of πi,j unit cubes above the point (i, j) in the plane, giving a three-dimensional solid as shown in the picture. The image has matrix form 4432143113211 Plane partitions are also often described by the positions of the unit cubes. From this point of view, a plane partition can be defined as a finite subset P of positive integer lattice points (i, j, k) in N3 , such that if (r, s, t) lies in P and if (i,j,k) satisfies 1≤i≤r , 1≤j≤s , and 1≤k≤t , then (i, j, k) also lies in P The sum of a plane partition is n=∑i,jπi,j.
Plane partition:
The sum describes the number of cubes of which the plane partition consists. Much interest in plane partitions concerns the enumeration of plane partitions in various classes. The number of plane partitions with sum n is denoted by PL(n). For example, there are six plane partitions with sum 3 32111121111111 so PL(3) = 6.
Plane partitions may be classified by how symmetric they are. Many symmetric classes of plane partitions are enumerated by simple product formulas.
Generating function of plane partitions:
The generating function for PL(n) is PL 13 24 x5+⋯ (sequence A000219 in the OEIS).It is sometimes referred to as the MacMahon function, as it was discovered by Percy A. MacMahon.
Generating function of plane partitions:
This formula may be viewed as the 2-dimensional analogue of Euler's product formula for the number of integer partitions of n. There is no analogous formula known for partitions in higher dimensions (i.e., for solid partitions). The asymptotics for plane partitions were first calculated by E. M. Wright. One obtains, for large n , that PL 36 12 25 36 exp (3ζ(3)1/3(n2)2/3+ζ′(−1)).
Generating function of plane partitions:
Evaluating numerically yields ln PL 2.00945 0.69444 ln 1.4631.
Generating function of plane partitions:
Plane partitions in a box Around 1896, MacMahon set up the generating function of plane partitions that are subsets of the r×s×t box B(r,s,t)={(i,j,k)|1≤i≤r,1≤j≤s,1≤k≤t} in his first paper on plane partitions. The formula is given by A proof of this formula can be found in the book Combinatory Analysis written by MacMahon. MacMahon also mentions the generating functions of plane partitions. The formula for the generating function can be written in an alternative way, which is given by Multiplying each component by 1−q1−q , and setting q = 1 in the formulas above yields that the total number N1(r,s,t) of plane partitions that fit in the r×s×t box B(r,s,t) is equal to the following product formula: The planar case (when t = 1) yields the binomial coefficients: B(r,s,1)=(r+sr).
Special plane partitions:
Special plane partitions include symmetric, cyclic and self-complementary plane partitions, and combinations of these properties.
In the subsequent sections, the enumeration of special sub-classes of plane partitions inside a box are considered.
These articles use the notation Ni(r,s,t) for the number of such plane partitions, where r, s, and t are the dimensions of the box under consideration, and i is the index for the case being considered.
Special plane partitions:
Action of S2, S3 and C3 on plane partitions S2 is the group of permutations acting on the first two coordinates of a point. This group contains the identity, which sends (i, j, k) to itself, and the transposition (i, j, k) → (j, i, k). The number of elements in an orbit η is denoted by |η| . B/S2 denotes the set of orbits of elements of B under the action of S2 . The height of an element (i, j, k) is defined by The height increases by one for each step away from the back right corner. For example, the corner position (1, 1, 1) has height 1 and ht(2, 1, 1) = 2. The height of an orbit is defined to be the height of any element in the orbit. This notation of the height differs from the notation of Ian G. Macdonald.There is a natural action of the permutation group S3 on a Ferrers diagram of a plane partition—this corresponds to simultaneously permuting the three coordinates of all nodes. This generalizes the conjugation operation for integer partitions. The action of S3 can generate new plane partitions starting from a given plane partition. Below there are shown six plane partitions of 4 that are generated by the S3 action. Only the exchange of the first two coordinates is manifest in the representation given below.
Special plane partitions:
313121121111111111 C3 is called the group of cyclic permutations and consists of and (i,j,k)→(k,i,j).
Special plane partitions:
Symmetric plane partitions A plane partition π is called symmetric if πi,j = πj,i for all i, j. In other words, a plane partition is symmetric if (i,j,k)∈B(r,s,t) if and only if (j,i,k)∈B(r,s,t) . Plane partitions of this type are symmetric with respect to the plane x = y. Below is an example of a symmetric plane partition and its visualisation.
Special plane partitions:
43321332132212111 In 1898, MacMahon formulated his conjecture about the generating function for symmetric plane partitions which are subsets of B(r,r,t) . This conjecture is called The MacMahon conjecture. The generating function is given by Macdonald pointed out that Percy A. MacMahon's conjecture reduces to ∑π∈B(r,r,t)/S2q|π|=∏η∈B(r,r,t)/S21−q|η|(1+ht(η))1−q|η|ht(η) In 1972 Edward A. Bender and Donald E. Knuth conjectured a simple closed form for the generating function for plane partition which have at most r rows and strict decrease along the rows. George Andrews showed that the conjecture of Bender and Knuth and the MacMahon conjecture are equivalent. MacMahon's conjecture was proven almost simultaneously by George Andrews in 1977 and later Ian G. Macdonald presented an alternative proof. When setting q = 1 yields the counting function N2(r,r,t) which is given by N2(r,r,t)=∏i=1r2i+t−12i−1∏1≤i<j≤ri+j+t−1i+j−1 For a proof of the case q = 1 please refer to George Andrews' paper MacMahon's conjecture on symmetric plane partitions.
Special plane partitions:
Cyclically symmetric plane partitions π is called cyclically symmetric, if the i-th row of π is conjugate to the i-th column for all i. The i-th row is regarded as an ordinary partition. The conjugate of a partition π is the partition whose diagram is the transpose of partition π . In other words, the plane partition is cyclically symmetric if whenever (i,j,k)∈B(r,s,t) then (k, i, j) and (j, k, i) also belong to B(r,s,t) . Below an example of a cyclically symmetric plane partition and its visualization is given.
Special plane partitions:
65543364331643114221311111 Macdonald's conjecture provides a formula for calculating the number of cyclically symmetric plane partitions for a given integer r. This conjecture is called The Macdonald conjecture. The generating function for cyclically symmetric plane partitions which are subsets of B(r,r,r) is given by ∑π∈B(r,r,r)/C3q|π|=∏η∈B(r,r,r)/C31−q|η|(1+ht(η))1−q|η|ht(η) This equation can also be written in another way ∏η∈B(r,r,r)/C31−q|η|(1+ht(η))1−q|η|ht(η)=∏i=1r[1−q3i−11−q3i−2∏j=ir1−q3(r+i+j−1)1−q3(2i+j−1)] In 1979, Andrews proved Macdonald's conjecture for the case q = 1 as the "weak" Macdonald conjecture. Three years later William. H. Mills, David Robbins and Howard Rumsey proved the general case of Macdonald's conjecture in their paper Proof of the Macdonald conjecture. The formula for N3(r,r,r) is given by the "weak" Macdonald conjecture N3(r,r,r)=∏i=1r[3i−13i−2∏j=iri+j+r−12i+j−1] Totally symmetric plane partitions A totally symmetric plane partition π is a plane partition which is symmetric and cyclically symmetric. This means that the diagram is symmetric at all three diagonal planes, or in other words that if (i,j,k)∈B(r,s,t) then all six permutations of (i, j, k) are also in B(r,s,t) . Below an example of a matrix for a totally symmetric plane partition is given. The picture shows the visualisation of the matrix.
Special plane partitions:
54431433143213111 Macdonald found the total number of totally symmetric plane partitions that are subsets of B(r,r,r) . The formula is given by N4(r,r,r)=∏η∈B(r,r,r)/S31+ht(η)ht(η) In 1995 John R. Stembridge first proved the formula for N4(r,r,r) and later in 2005 it was proven by George Andrews, Peter Paule, and Carsten Schneider. Around 1983 Andrews and Robbins independently stated an explicit product formula for the orbit-counting generating function for totally symmetric plane partitions. This formula already alluded to in George E. Andrews' paper Totally symmetric plane partitions which was published 1980. The conjecture is called The q-TSPP conjecture and it is given by: Let S3 be the symmetric group. The orbit counting function for totally symmetric plane partitions that fit inside B(r,r,r) is given by the formula ∑π∈B(r,r,r)/S3q|π|=∏η∈B(r,r,r)/S31−q1+ht(η)1−qht(η)=∏1≤i≤j≤k≤r1−qi+j+k−11−qi+j+k−2.
Special plane partitions:
This conjecture was proved in 2011 by Christoph Koutschan, Manuel Kauers and Doron Zeilberger.
Self-complementary plane partitions If πi,j+πr−i+1,s−j+1=t for all 1≤i≤r , 1≤j≤s , then the plane partition is called self-complementary. It is necessary that the product r⋅s⋅t is even. Below an example of a self-complementary symmetric plane partition and its visualisation is given.
Special plane partitions:
443214222321 Richard P. Stanley conjectured formulas for the total number of self-complementary plane partitions N5(r,s,t) . According to Stanley, Robbins also formulated formulas for the total number of self-complementary plane partitions in a different but equivalent form. The total number of self-complementary plane partitions that are subsets of B(r,s,t) is given by N5(2r,2s,2t)=N1(r,s,t)2 N5(2r+1,2s,2t)=N1(r,s,t)N1(r+1,s,t) N5(2r+1,2s+1,2t)=N1(r+1,s,t)N1(r,s+1,t) It is necessary that the product of r,s and t is even. A proof can be found in the paper Symmetries of Plane Partitions which was written by Stanley. The proof works with Schur functions ssr(x) . Stanley's proof of the ordinary enumeration of self-complementary plane partitions yields the q-analogue by substituting xi=qi for i=1,…,n . This is a special case of Stanley's hook-content formula. The generating function for self-complementary plane partitions is given by sγα(q,q2,…,qn)=qγα(α+1)/2∏i=1α∏j=0γ−11−qi+n−α+j1−qi+j Substituting this formula in for B(2r,2s,2t) for B(2r,2s+1,2t) for B(2r+1,2s,2t+1) supplies the desired q-analogue case.
Special plane partitions:
Cyclically symmetric self-complementary plane partitions A plane partition π is called cyclically symmetric self-complementary if it is cyclically symmetric and self-complementary. The figure presents a cyclically symmetric self-complementary plane partition and the according matrix is below.
Special plane partitions:
4441332132113 In a private communication with Stanley, Robbins conjectured that the total number of cyclically symmetric self-complementary plane partitions is given by N6(2r,2r,2r) . The total number of cyclically symmetric self-complementary plane partitions is given by N6(2r,2r,2r)=Dr2 Dr is the number of r×r alternating sign matrices. A formula for Dr is given by Dr=∏j=0r−1(3j+1)!(r+j)! Greg Kuperberg proved the formula for N6(r,r,r) in 1994.
Special plane partitions:
Totally symmetric self-complementary plane partitions A totally symmetric self-complementary plane partition is a plane partition that is both totally symmetric and self-complementary. For instance, the matrix below is such a plane partition; it is visualised in the accompanying picture.
6665536553316553315331153311311 The formula N7(r,r,r) was conjectured by William H. Mills, Robbins and Howard Rumsey in their work Self-Complementary Totally Symmetric Plane Partitions. The total number of totally symmetric self-complementary plane partitions is given by N7(2r,2r,2r)=Dr Andrews proves this formula in 1994 in his paper Plane Partitions V: The TSSCPP Conjecture. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Aircraft station**
Aircraft station:
An aircraft station (also aircraft radio station) is – according to Article 1.83 of the International Telecommunication Union's (ITU) ITU Radio Regulations (RR) – defined as "A mobile radio station in the aeronautical mobile service, other than survival craft station, located on board an aircraft".
Each station shall be classified by the service in which it operates permanently or temporarily.
See also Selection of UHF/VHF aircraft stations | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vikki Abrahams**
Vikki Abrahams:
Vikki Martyne Abrahams is an English–American reproductive immunologist. She is a full professor of obstetrics, gynecology and reproductive sciences at the Yale School of Medicine. Her research focuses on understanding the role of innate immune toll-like receptor and NOD-like receptor family members in placental and maternal-fetal immune responses.
Early life and education:
Abrahams earned her Bachelor of Science degree and PhD from the University College London.
Career:
Abrahams came to the United States for her postdoctoral work at Dartmouth Medical School and Yale University in the field of reproductive immunology before accepting a faculty position in 2004. In her role as an assistant professor of obstetrics and gynecology, Abrahams co-authored a study which found that there was a specific defence mechanism used by the immune system which was imitated by cancer cells in order to fight off the effects of cancer drugs like paclitaxel. She was later awarded a one-year grant of $73,284 from the Lupus Research Alliance for her project titled "Effect of Antiphospholipid Antibodies on Trophoblast Function in Pregnancy." In 2010, Abrahams continued her research into pregnancy complications using a three-year grant from the American Heart Association to advance her work. She also co-authored another study which uncovered how hormone progesterone act to prevent preterm birth.Abrahams research focuses on understanding the role of innate immune toll-like receptor and NOD-like receptor family members in placental and maternal-fetal immune responses. In 2014, she was the senior author on a study exploring whether an anti-malaria drug could be used to treat obstetrical antiphospholipid syndrome. Two years later, she was the recipient of the 2016 Novel Research Grant from the Lupus Research Institute to conduct innovative work in lupus. She used this grant to lead a study identifying how the Zika virus infects the placenta.In 2019, Abrahams was the recipient of the annual American Society for Reproductive Immunology Award as someone "who has made outstanding contributions to the area of reproductive immunology." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ethmoid sinus**
Ethmoid sinus:
The ethmoid sinuses or ethmoid air cells of the ethmoid bone are one of the four paired paranasal sinuses. Unlike the other three pairs of paranasal sinuses which consist of one or two large cavities, the ethmoidal sinuses entail a number of small air-filled cavities ("air cells"). The cells are located within the lateral mass (labyrinth) of each ethmoid bone and are variable in both size and number. The cells are grouped into anterior, middle, and posterior groups; the groups differ in their drainage modalities, though all ultimately drain into either the superior or the middle nasal meatus of the lateral wall of the nasal cavity.
Structure:
The ethmoid air cells consist of numerous thin-walled cavities in the ethmoidal labyrinth that represent invaginations of the mucous membrane of the nasal wall into the ethmoid bone. They are situated between the superior parts of the nasal cavities and the orbits, and are separated from these cavities by thin bony lamellae.There are 5-15 air cells in either ethmoid bone in the adult, with a combined volume of 2-3mL.
Structure:
Drainage The anterior ethmoidal cells drain (directly or indirectly) into the middle nasal meatus by way of the ethmoidal infundibulum.
The posterior ethmoidal cells drain directly into the middle nasal meatus.The posterior ethmoidal cells drain directly into the superior nasal meatus at the sphenoethmoidal recess; sometimes, one or more opens into the sphenoidal sinus.
Haller cells Haller cells are infraorbital ethmoidal air cells lateral to the lamina papyracea. These may arise from the anterior or posterior ethmoidal sinuses.
Structure:
Lamellae The ethmoidal labyrinth is divided by multiple obliquely oriented, parallel lamellae. The first lamellae is equivalent to the uncinate process of ethmoid bone, the second corresponds the ethmoid bulla, and the third is the basal lamella, and the fourth is equivalent to the superior nasal concha.The anterior and posterior ethmoid cells are separated by the basal lamella (also known as the ground lamella). It is one of the bony divisions of the ethmoid bone and is mostly contained inside the ethmoid labyrinth. The basal lamella is continuous medially with the bony middle nasal concha. Anteriorly, it vertically inserts into the ethmoid crest; the middle part attaches obliquely into the orbital lamina of ethmoid bone (lamina papyricea) while the posterior part attaches into the orbital lamina horizontally.
Structure:
Innervation The ethmoidal air cells receive sensory innervation from the anterior and the posterior ethmoidal nerve (which are ultimately derived from the ophthalmic branch (CN V1) of the trigeminal nerve (CN V)), and the orbital branches of the pterygopalatine ganglion, which carry the postganglionic parasympathetic nerve fibers for mucous secretion from the facial nerve.
Development The ethmoidal cells (sinuses) and maxillary sinuses are present at birth. At birth, 3-4 air cells are present, with the number increasing to 5-15 by adulthood.
Clinical significance:
Acute ethmoiditis in childhood and ethmoidal carcinoma may spread superiorly causing meningitis and cerebrospinal fluid leakage or it may spread laterally into the orbit causing proptosis and diplopia. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Momo (software)**
Momo (software):
Momo (Chinese: 陌陌; pinyin: mò mò) is a free social search and instant messaging mobile app. The app allows users to chat with nearby friends and strangers. Momo provides users with free instant messaging services through Wifi, 3G and 4G. The client software is available for Android, iOS, and Windows Phone.Momo officially began operations in July 2011, and a month later, launched the first version of the app for iOS. Momo filed for a NASDAQ IPO on November 7, 2014 and was listed in December 2014.
History:
Founding and incorporation Tang Yan, Zhang Sichuan, Lei Xiaoliang, Yong Li, and Li Zhiwei co-founded Beijing Momo Technology Co., Ltd. in July 2011.
Prior to founding the company, Tang Yan worked as editor and then editor-in-chief at NetEase. In October 2014, Tang was named by Fortune Magazine as one of its "40 Under 40," a list of the most powerful business elites under the age of 40.
The other co-founders all have prior experience with major Chinese Internet companies. In order to facilitate foreign investments, Momo’s co-founders incorporated a holding company called Momo Technology Company Limited in the British Virgin Islands in November 2011. In July 2014, Momo Technology Company Limited was renamed to Momo Inc. and re-domiciled to the Cayman Islands.
In December 2011, Momo established Momo Technology HK Company Limited (Momo HK) as a wholly owned subsidiary in Hong Kong. In March 2012, Momo HK established Beijing Momo Information Technology Co., Ltd.(Beijing Momo IT), a wholly owned People’s Republic of China subsidiary. In May 2013, Beijing Momo established Chengdu Momo Technology Co., Ltd.(Chengdu Momo), as a wholly owned subsidiary.
History:
Growth In December 2011, Momo announced reaching half a million users. Three months later, the number of Momo users reached 2 million. Momo reached 10 million users on its first anniversary in August 2012. In October 2012, Momo surpassed 15 million users. In 2014, App Annie reported that Momo was the number 2 non-game app of 2013 in terms of revenue.
History:
In February 2014, TechNode reported that Momo had announced reaching 100 million registered users. Momo executives also claimed they had reached 40 million monthly active users (MAU).
According to Momo, in June 2014, total registered users and MAU reached 148 million and 52.4 million respectively.
China Internet Watch reported more conservative estimates. In the months of August and September 2014, Momo had 51.279 and 52.101 million MAU. While Momo’s MAU grew, Wechat and QQ both lost MAU within the same time frame. Momo's prospectus reported 60.2 million MAU in September 2014.
History:
Financing Momo reportedly raised USD 2.5 million in Series A financing. Angel investor, PurpleSky Capital (ZiHui ChuangTou), and Matrix Hong Kong led this round of financing. However, Momo's Form F-1 filed with the SEC reports that USD 5 million was raised in this round of financing. Momo Inc. completed its Series B financing in October 2012. This round of financing was led by two institution investors and received $100 million valuation. China Renaissance Partners acted as the exclusive financial advisor. There was much speculation as to whether or not Chinese e-commerce giant, Alibaba Group, was involved in this round of financing. Momo’s registration statement verifies this claim. In total, Momo raised approximately USD 40 million.
History:
In October 2013, raised USD 45 million in Series C financing. Matrix Hong Kong, Gothic Partners, L.P., PJF Acorn I Trust, Gansett Partners, L.L.C., PH momo investment Ltd., Tenzing Holding 2011 Ltd., Alibaba Investment Limited, and DST Team Fund Limited were all issued and sold Series C preferred shares.
In May 2014, Momo raised USD 211.8 million in Series D financing. Momo sold Series D preferred shares to Sequoia Capital China Investment Holdco II, Ltd., Sequoia Capital China GF Holdco III-A, Ltd., SC China Growth III Co-Investment 2014-A, L.P., Rich Moon Limited, and Tiger Global Eight Holdings.
Product and services:
Momo’s mobile application is available on Android, iOS, and Windows platforms. It enables users to establish and expand their social relationships based on similar locations and interests. Some features of the application include subsections like: Nearby Users, Groups, Message Board, Topics, and Nearby Events. Users can send multimedia instant messages as well as play single and multiplayer games within the app’s platform. Users also make a Facebook-like profile and are encouraged to include as much information as possible. Momo execs claim that this allows their software to create more accurate matches with nearby strangers. Momo is claimed to "sift through the clutter of mobile Internet users to find personalized matches for its users".
Product and services:
Momo offers users paid membership subscriptions. A membership will cost around USD 2 a month, or less if a user commits to a longer term of use. Benefits of a paid membership include: VIP logos, advanced search options, discounts in the emoticon store, higher limits on maximum users in a group, and the ability to see a list of recent visitors to a user’s profile page. As of September 30, 2014, there was 2.3 million paid subscriptions.
Product and services:
Like many other instant messaging services, Momo has integrated mobile games into their platform to monetize off their large user base. Third parties develop games, and revenues from in-game purchases are shared between Momo and the developers.
In August 2014, Momo launched Dao Dian Tong, a marketing tool for local merchants. Through Dao Dian Tong, local businesses and merchants can construct profile pages that allow Momo users to find them with the Momo’s LBS. Members can see the businesses just as they would see other Momo users.
Momo plans to further monetize user traffic by referring users from the Momo platform to e-commerce companies. Alibaba was specifically mentioned in Momo’s Form F-1.
Corporate affairs and cultures:
Anti-plagiarism In December 2012, Momo made an official announcement to accuse Sina Corp of copycatting straight from all the features of Momo Group. However, Sina Corp did not give its formal response.
Statement made by NetEase On December 10, 2014, NetEase released a statement accusing that Tang Yan has professional ethic issues, business ethics issues, and has been detained due to personal affairs by the local police in 2007.
Public opinions:
Hook-ups and homeless dogs On April 27, 2012, Mike Sui, a mixed-race comedian and performer in China, first posted his "12 Beijingers" viral video which attracted nearly 5.17 million hits. In this video, one character mentions Momo, for the first time calling it a magical tool to get laid (Chinese: 约炮神器; pinyin: yuē pào shén qì ). Momo has spent millions of dollars to reverse the image of Momo as a one-night stand app. Momo, through its Weibo account, continues to engage the online community through various campaigns. Momo’s latest online campaign focused on supporting the homeless cats and dogs of China.
Public opinions:
Relationships Although Momo is widely considered as a social media application, there are claims that meetings on Momo resulted in marriage. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rectangle**
Rectangle:
In Euclidean plane geometry, a rectangle is a quadrilateral with four right angles. It can also be defined as: an equiangular quadrilateral, since equiangular means that all of its angles are equal (360°/4 = 90°); or a parallelogram containing a right angle. A rectangle with four sides of equal length is a square. The term "oblong" is occasionally used to refer to a non-square rectangle. A rectangle with vertices ABCD would be denoted as ABCD.
Rectangle:
The word rectangle comes from the Latin rectangulus, which is a combination of rectus (as an adjective, right, proper) and angulus (angle).
Rectangle:
A crossed rectangle is a crossed (self-intersecting) quadrilateral which consists of two opposite sides of a rectangle along with the two diagonals (therefore only two sides are parallel). It is a special case of an antiparallelogram, and its angles are not right angles and not all equal, though opposite angles are equal. Other geometries, such as spherical, elliptic, and hyperbolic, have so-called rectangles with opposite sides equal in length and equal angles that are not right angles.
Rectangle:
Rectangles are involved in many tiling problems, such as tiling the plane by rectangles or tiling a rectangle by polygons.
Characterizations:
A convex quadrilateral is a rectangle if and only if it is any one of the following: a parallelogram with at least one right angle a parallelogram with diagonals of equal length a parallelogram ABCD where triangles ABD and DCA are congruent an equiangular quadrilateral a quadrilateral with four right angles a quadrilateral where the two diagonals are equal in length and bisect each other a convex quadrilateral with successive sides a, b, c, d whose area is 14(a+c)(b+d) .: fn.1 a convex quadrilateral with successive sides a, b, c, d whose area is 12(a2+c2)(b2+d2).
Classification:
Traditional hierarchy A rectangle is a special case of a parallelogram in which each pair of adjacent sides is perpendicular.
A parallelogram is a special case of a trapezium (known as a trapezoid in North America) in which both pairs of opposite sides are parallel and equal in length.
A trapezium is a convex quadrilateral which has at least one pair of parallel opposite sides.
A convex quadrilateral is Simple: The boundary does not cross itself.
Star-shaped: The whole interior is visible from a single point, without crossing any edge.
Classification:
Alternative hierarchy De Villiers defines a rectangle more generally as any quadrilateral with axes of symmetry through each pair of opposite sides. This definition includes both right-angled rectangles and crossed rectangles. Each has an axis of symmetry parallel to and equidistant from a pair of opposite sides, and another which is the perpendicular bisector of those sides, but, in the case of the crossed rectangle, the first axis is not an axis of symmetry for either side that it bisects.
Classification:
Quadrilaterals with two axes of symmetry, each through a pair of opposite sides, belong to the larger class of quadrilaterals with at least one axis of symmetry through a pair of opposite sides. These quadrilaterals comprise isosceles trapezia and crossed isosceles trapezia (crossed quadrilaterals with the same vertex arrangement as isosceles trapezia).
Properties:
Symmetry A rectangle is cyclic: all corners lie on a single circle.
It is equiangular: all its corner angles are equal (each of 90 degrees).
It is isogonal or vertex-transitive: all corners lie within the same symmetry orbit.
It has two lines of reflectional symmetry and rotational symmetry of order 2 (through 180°).
Rectangle-rhombus duality The dual polygon of a rectangle is a rhombus, as shown in the table below.
The figure formed by joining, in order, the midpoints of the sides of a rectangle is a rhombus and vice versa.
Miscellaneous A rectangle is a rectilinear polygon: its sides meet at right angles.
A rectangle in the plane can be defined by five independent degrees of freedom consisting, for example, of three for position (comprising two of translation and one of rotation), one for shape (aspect ratio), and one for overall size (area).
Two rectangles, neither of which will fit inside the other, are said to be incomparable.
Formulae:
If a rectangle has length ℓ and width w it has area A=ℓw it has perimeter P=2ℓ+2w=2(ℓ+w) each diagonal has length d=ℓ2+w2 and when ℓ=w , the rectangle is a square.
Theorems:
The isoperimetric theorem for rectangles states that among all rectangles of a given perimeter, the square has the largest area.
The midpoints of the sides of any quadrilateral with perpendicular diagonals form a rectangle.
A parallelogram with equal diagonals is a rectangle.
The Japanese theorem for cyclic quadrilaterals states that the incentres of the four triangles determined by the vertices of a cyclic quadrilateral taken three at a time form a rectangle.
The British flag theorem states that with vertices denoted A, B, C, and D, for any point P on the same plane of a rectangle: (AP)2+(CP)2=(BP)2+(DP)2.
For every convex body C in the plane, we can inscribe a rectangle r in C such that a homothetic copy R of r is circumscribed about C and the positive homothety ratio is at most 2 and 0.5 × Area Area × Area (r)
Crossed rectangles:
A crossed quadrilateral (self-intersecting) consists of two opposite sides of a non-self-intersecting quadrilateral along with the two diagonals. Similarly, a crossed rectangle is a crossed quadrilateral which consists of two opposite sides of a rectangle along with the two diagonals. It has the same vertex arrangement as the rectangle. It appears as two identical triangles with a common vertex, but the geometric intersection is not considered a vertex.
Crossed rectangles:
A crossed quadrilateral is sometimes likened to a bow tie or butterfly, sometimes called an "angular eight". A three-dimensional rectangular wire frame that is twisted can take the shape of a bow tie.
The interior of a crossed rectangle can have a polygon density of ±1 in each triangle, dependent upon the winding orientation as clockwise or counterclockwise.
A crossed rectangle may be considered equiangular if right and left turns are allowed. As with any crossed quadrilateral, the sum of its interior angles is 720°, allowing for internal angles to appear on the outside and exceed 180°.A rectangle and a crossed rectangle are quadrilaterals with the following properties in common: Opposite sides are equal in length.
The two diagonals are equal in length.
It has two lines of reflectional symmetry and rotational symmetry of order 2 (through 180°).
Other rectangles:
In spherical geometry, a spherical rectangle is a figure whose four edges are great circle arcs which meet at equal angles greater than 90°. Opposite arcs are equal in length. The surface of a sphere in Euclidean solid geometry is a non-Euclidean surface in the sense of elliptic geometry. Spherical geometry is the simplest form of elliptic geometry.
In elliptic geometry, an elliptic rectangle is a figure in the elliptic plane whose four edges are elliptic arcs which meet at equal angles greater than 90°. Opposite arcs are equal in length.
In hyperbolic geometry, a hyperbolic rectangle is a figure in the hyperbolic plane whose four edges are hyperbolic arcs which meet at equal angles less than 90°. Opposite arcs are equal in length.
Tessellations:
The rectangle is used in many periodic tessellation patterns, in brickwork, for example, these tilings:
Squared, perfect, and other tiled rectangles:
A rectangle tiled by squares, rectangles, or triangles is said to be a "squared", "rectangled", or "triangulated" (or "triangled") rectangle respectively. The tiled rectangle is perfect if the tiles are similar and finite in number and no two tiles are the same size. If two such tiles are the same size, the tiling is imperfect. In a perfect (or imperfect) triangled rectangle the triangles must be right triangles. A database of all known perfect rectangles, perfect squares and related shapes can be found at squaring.net. The lowest number of squares need for a perfect tiling of a rectangle is 9 and the lowest number needed for a perfect tilling a square is 21, found in 1978 by computer search.A rectangle has commensurable sides if and only if it is tileable by a finite number of unequal squares. The same is true if the tiles are unequal isosceles right triangles.
Squared, perfect, and other tiled rectangles:
The tilings of rectangles by other tiles which have attracted the most attention are those by congruent non-rectangular polyominoes, allowing all rotations and reflections. There are also tilings by congruent polyaboloes.
Unicode:
U+25AC ▬ BLACK RECTANGLE U+25AD ▭ WHITE RECTANGLE U+25AE ▮ BLACK VERTICAL RECTANGLE U+25AF ▯ WHITE VERTICAL RECTANGLE | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hydrochlorothiazide**
Hydrochlorothiazide:
Hydrochlorothiazide, sold under the brand name Hydrodiuril among others, is a diuretic medication used to treat hypertension and swelling due to fluid build-up. Other uses include treating diabetes insipidus and renal tubular acidosis and to decrease the risk of kidney stones in those with a high calcium level in the urine. Hydrochlorothiazide is taken by mouth and may be combined with other blood pressure medications as a single pill to increase effectiveness. Hydrochlorothiazide is a thiazide medication which inhibits reabsorption of sodium and chloride ions from the distal convoluted tubules of the kidneys, causing a natriuresis. This initially increases urine volume and lowers blood volume. It is believed to reduce peripheral vascular resistance.Potential side effects include poor kidney function, electrolyte imbalances, including low blood potassium, and, less commonly, low blood sodium, gout, high blood sugar, and feeling lightheaded with standing.Two companies, Merck & Co. and Ciba Specialty Chemicals, state they discovered the medication which became commercially available in 1959. It is on the World Health Organization's List of Essential Medicines. It is available as a generic drug and is relatively affordable. In 2020, it was the eleventh most commonly prescribed medication in the United States, with more than 41 million prescriptions.
Medical uses:
Hydrochlorothiazide is used for the treatment of hypertension, congestive heart failure, symptomatic edema, diabetes insipidus, renal tubular acidosis. It is also used for the prevention of kidney stones in those who have high levels of calcium in their urine.Multiple studies suggest hydrochlorothiazide could be used as initial monotherapy in people with primary hypertension; however, the decision should be weighed against the consequence of long-term adverse metabolic abnormalities. Doses of hydrochlorothiazide of 50 mg or less over four years reduced mortality and development of cardiovascular diseases better than high-dose hydrochlorothiazide (50 mg or more) and beta-blockers. A 2019 review supported equivalence between drug classes for initiating monotherapy in hypertension, although thiazide or thiazide-like diuretics showed better primary effectiveness and safety profiles than angiotensin-converting enzyme inhibitors and non-dihydropyridine calcium channel blockers.Low doses (50 mg or less) of hydrochlorothiazide as first‐line therapy for hypertension were found to reduce total mortality and cardiovascular disease events over a four-year study. Hydrochlorothiazide appears be more effective than chlorthalidone in preventing heart attacks and strokes. Hydrochlorothiazide is less potent but may be more effective than chlorthalidone in reducing blood pressure. More robust studies are required to confirm which drug is superior in reducing cardiovascular events. Side effect profile for both drugs appear similar and are dose dependent.Hydrochlorothiazide is also sometimes used to prevent osteopenia and treat hypoparathyroidism, hypercalciuria, Dent's disease, and Ménière's disease.
Medical uses:
A low level of evidence, predominantly from observational studies, suggests that thiazide diuretics have a modest beneficial effect on bone mineral density and are associated with a decreased fracture risk when compared with people not taking thiazides. Thiazides decrease mineral bone loss by promoting calcium retention in the kidney, and by directly stimulating osteoblast differentiation and bone mineral formation.The combination of fixed-dose preparation such as losartan/hydrochlorothiazide has added advantages of a more potent antihypertensive effect with additional antihypertensive efficacy at the dose of 100 mg/25 mg when compared to monotherapy.
Adverse effects:
Hypokalemia, or low blood levels of potassium are an occasional side effect. It can be usually prevented by potassium supplements or by combining hydrochlorothiazide with a potassium-sparing diuretic Other disturbances in the levels of serum electrolytes, including hypomagnesemia (low magnesium), hyponatremia (low sodium), and hypercalcemia (high calcium) Hyperuricemia (high levels of uric acid in the blood). All thiazide diuretics including hydrochlorothiazide can inhibit excretion of uric acid by the kidneys, thereby increasing serum concentrations of uric acid. This may increase the incidence of gout in doses of ≥ 25 mg per day and in more susceptible patients such as male gender of <60 years old.
Adverse effects:
Hyperglycemia, high blood sugar Hyperlipidemia, high cholesterol and triglycerides Headache Nausea/vomiting Photosensitivity Weight gain PancreatitisPackage inserts contain vague and inconsistent data surrounding the use of thiazide diuretics in patients with allergies to sulfa drugs, with little evidence to support these statements. A retrospective cohort study conducted by Strom et al. concluded that there is an increased risk of an allergic reaction occurring in patients with a predisposition to allergic reactions in general rather than cross reactivity from structural components of the sulfonamide-based drug. Prescribers should examine the evidence carefully and assess each patient individually, paying particular attention to their prior history of sulfonamide hypersensitivity rather than relying on drug monograph information.There is an increased risk of non-melanoma skin cancer. In August 2020, the Australian Therapeutic Goods Administration required the Product Information (PI) and Consumer Medicine Information (CMI) for medicines containing hydrochlorothiazide to be updated to include details about an increased risk of non-melanoma skin cancer. In August 2020, the U.S. Food and Drug Administration (FDA) updated the drug label about an increased risk of non-melanoma skin cancer (basal cell skin cancer or squamous cell skin cancer).
Society and culture:
Brand names Hydrochlorothiazide is available as a generic drug under a large number of brand names, including Apo-Hydro, Aquazide, BPZide, Dichlotride, Esidrex, Hydrochlorot, Hydrodiuril, HydroSaluric, Hypothiazid, Microzide, Oretic and many others.To reduce pill burden and in order to reduce side effects, hydrochlorothiazide is often used in fixed-dose combinations with many other classes of antihypertensive drugs such as: ACE inhibitors — e.g. Prinzide or Zestoretic (with lisinopril), Co-Renitec (with enalapril), Capozide (with captopril), Accuretic (with quinapril), Monopril HCT (with fosinopril), Lotensin HCT (with benazepril), etc.
Society and culture:
Angiotensin receptor blockers — e.g. Hyzaar (with losartan), Co-Diovan or Diovan HCT (with valsartan), Teveten Plus (with eprosartan), Avalide or CoAprovel (with irbesartan), Atacand HCT or Atacand Plus (with candesartan), etc.
Beta blockers — e.g. Ziac or Lodoz (with bisoprolol), Nebilet Plus or Nebilet HCT (with nebivolol), Dutoprol or Lopressor HCT (with metoprolol), etc.
Direct renin inhibitors — e.g. Co-Rasilez or Tekturna HCT (with aliskiren) Potassium sparing diuretics: Dyazide and Maxzide triamterene Sport Use of hydrochlorothiazide is prohibited by the World Anti-Doping Agency for its ability to mask the use of performance-enhancing drugs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hot stamping**
Hot stamping:
Hot stamping or foil stamping is a printing method of relief printing in which pre-dried ink or foils are transferred to a surface at high temperatures. The method has diversified since its rise to prominence in the 19th century to include a variety of processes. After the 1970s, hot stamping became one of the most important methods of decoration on the surface of plastic products.
Process:
In a hot stamping machine, a die is mounted and heated, with the product to be stamped placed beneath it. A metallized or painted roll-leaf carrier is inserted between the two, and the die presses down through it. The dry paint or foil used is impressed into the surface of the product. The dye-stamping process itself is non-polluting because the materials involved are dry. Pressure and heat cause the relevant sections of the foil to become detached from the carrier material and become bonded with the printing surface.
Tools:
Along with foil stamping machines, among the commonly used tools in hot stamping are dies and foil. Dies may be made of metal or silicone rubber, and they may be shaped directly or cast. They can carry high levels of detail to be transferred to the surface and may be shaped to accommodate irregularities in the surface.
Tools:
Foils are multilayered coatings that transfer to the surface of the product. Non-metallic foils consist of an adherence base, a color layer, and a release layer. Metallic foils replace the color layer with a layer of chrome or vacuum-metallized aluminum. Metallic foil construction has a metal-like sheen and is available in different metal shades such as gold, silver, bronze, and copper. Pigment foil does not have a metallic sheen but may be glossy or matte. Holographic foil paper includes a 3-dimensional image to provide a distinctive appearance to specific areas of a digitally printed application. Printing is often done on leather or paper.
Tools:
Different hot stamping machines serve different purposes, but the most common hot stamping machines are simple up-and-down presses. Three of the most common brands are Kwikprint, Kingsley, and Howard. However, for more industrial applications Kluge and Heidelberg presses are more commonly used.
History:
In the 19th century, hot stamping became a popular method of applying gold tooling or embossing in book printing on leather and paper. The first patent for hot stamping was recorded in Germany by Ernst Oeser in 1892.From the 1950s onward, the method became a popular means of marking plastic . Hot Stamping technology for plastic is used for electric components (TV frames, audio components, refrigerators etc.), cosmetic containers (lipstick, cream, mascara, shampoo bottle etc.), automobile parts (interior and exterior materials).As of 1998, it was one of the most commonly used methods of security printing.Foil stamping can be used to make Radio-frequency identification (RFID) tags, although screen printing is faster and cheaper. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PAQR6**
PAQR6:
Membrane progesterone receptor delta (mPRδ), or progestin and adipoQ receptor 6 (PAQR6), is a protein that in humans is encoded by the PAQR6 gene. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sofía Calero**
Sofía Calero:
Sofía Calero Diaz is a Spanish chemist who is a professor and Vice Dean of the Department of Applied Physics and Science Education at the Eindhoven University of Technology. Her research considers computational modelling of functional materials for applications in renewable energy. She was awarded the Spanish Royal Society of Chemistry Award for Scientific Excellence in 2018.
Early life and education:
Calero was born in Spain. She attended the Complutense University of Madrid and remained there for her doctoral research, which considered the thermodynamic properties of liquids of complex molecules. She moved to the University of Amsterdam as a Marie Curie Fellow.
Research and career:
Calero moved to the Pablo de Olavide University (UPO) in 2003, where she was named a Ramon y Cajal Fellow, and established a group who studied nanomaterials for applications in technologies. She was part of the European Cooperation in Science and Technology (COST) action on Computational materials sciences for efficient water splitting with nanocrystals from abundant elements. The programme looked to developed highly efficient energy converting devices (e.g. electrochemical cells for water-splitting). Calero was responsible for the development of Monte Carlo simulations of the project, and contributed to the software RASPA. RASPA is a simulation package that allows the study of adsorption and diffusion in nanoporous systems.Calero moved to the Eindhoven University of Technology in 2020. She was made Vice Dean of the Department of Applied Physics and Science Education. She focuses on the simulation of materials for renewable energies and the development of force field to predict the functional properties of materials.
Awards and honours:
2005 Marie Curie Award for Excellence 2011 European Research Council Consolidator Award 2018 Spanish Royal Society of Chemistry Award for Scientific Excellence 2020 TU/e Irene Curie Grant
Selected publications:
David Dubbeldam; Sofía Calero; Donald E. Ellis; Randall Q. Snurr (26 February 2015). "RASPA: molecular simulation software for adsorption and diffusion in flexible nanoporous materials". Molecular Simulation. 42 (2): 81–101. doi:10.1080/08927022.2015.1010082. ISSN 0892-7022. S2CID 53077055. Wikidata Q60395799.
D. Dubbeldam; S. Calero; T. J. H. Vlugt; R. Krishna; T. L. M. Maesen; B. Smit (August 2004). "United Atom Force Field for Alkanes in Nanoporous Materials". The Journal of Physical Chemistry B. 108 (33): 12301–12313. doi:10.1021/JP0376727. ISSN 1520-6106. S2CID 14390433. Wikidata Q59281748.
Sofia Calero; David Dubbeldam; Rajamani Krishna; Berend Smit; Thijs J H Vlugt; Joeri Denayer; Johan A. Martens; Theo L M Maesen (1 September 2004). "Understanding the role of sodium during adsorption: a force field for alkanes in sodium-exchanged faujasites". Journal of the American Chemical Society. 126 (36): 11377–11386. doi:10.1021/JA0476056. ISSN 0002-7863. PMID 15355121. Wikidata Q51618932. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Interactive video**
Interactive video:
The term interactive video usually refers to a technique used to blend interaction and linear film or video.
History:
In 1962, Steve Russell, a student at the Massachusetts Institute of Technology (MIT), created Spacewar!, the world's first interactive computer game.In 1967, the first interactive film, The Cinema Machine, was released. While watching this film, the audience in the cinema theatre would choose one of two scenes during the plot fork. Switching between scenes was done manually by the projectionist.
History:
In 1972, Philips introduced the first laser disc (LD). Laser disc technology allowed for playback of any video chapter, making interactive video possible.In 1983, Sega released Astron Belt, the first interactive arcade game on LD. Also released in 1983 was Cinematronics's LD animated Dragon's Lair.
During the 1990s, several interactive Video CD formats were available such as CD-i (Compact Disc-Interactive) and Digital Video Interactive (DVI). Since 2000, the LD format has been superseded by the DVD format.
In 2008, YouTube added an interactive annotation feature to videos. This feature was disabled in 2019.Netflix started releasing interactive animations in 2016.TikTok announces support for interactive effects in 2021.
Interactive video:
Interactive video (also known as "IV") is a form of digital video that supports user interaction. Interactive videos provide the viewer the ability to click, on a desktop, or touch on mobile devices within the video for an action to occur. These clickable areas, or "hotspots," can perform an action when clicked or touched. For example, the video may display additional information, jump to a different part of the video or another video, or may change the storyline.
"Hotspots" In Video:
One popular use of interactive video technology is to add clickable points or 'hotspots' to the video. These hotspots allow the viewer to learn more about a particular object, product, or person in the video. A hotspot can trigger content to appear within the video such as text, images, videos or additional web content can be set within an iframe.
"Customizable" online interactive videos:
Customizable videos allow the user to adjust some variables and then play a video customised to the user's particular preferences. However the user does not actually interact with the video while it is playing. Recent examples of this form of video include: Miss Helga—customizable video ad for Volkswagen Golf created by Crispin Porter + Bogusky and The Barbarian Group.
Ave a Word—customizable video ad for Mini created by Glue London - Silver Cannes Lion 2006.
Electric Feel—customizable music video for the so-titled song by the band MGMT.
"Conversational" online interactive videos:
Conversational videos allow the user to interact with a video in a turn-based manner, almost as though the user was having a simple conversation with the characters in the video. Recent examples include: Subservient Chicken - a "conversational" interactive video ad for Burger King created by Crispin Porter + Bogusky and The Barbarian Group Cannes Grand Prix 2005.
A Conversation with Sir Ian - Interactive video interview with Sir Ian McKellen on Shakespeare. Created for the National Theatre by Martin Percy. BAFTA nominee 2007.
A conversation with Shimon Peres the Israeli President, created by interactive media and tech company Eko.
An interactive interview with John Hamm before he hosted the 2013 ESPY Awards, also created by Eko.
"Exploratory" online interactive videos:
Exploratory videos allow the user to move through a space or look at an object such as an artwork from multiple angles, almost as though the user was looking at the object in real life. The object or space is depicted using video loops, not still, creating a more "live" feel. Recent examples include: The BT Series - Interactive video exploration of the works of Tracey Emin, Anthony Gormley and Rachel Whiteread. Created for the Tate Gallery by Martin Percy. Webby Nominee 2006 and Honoree 2007.
"Exploratory" online interactive videos:
Tate Tracks - Interactive video exploration of various works, allowing the user to listen to music while looking at art. Created for the Tate Gallery by Martin Percy. Part of integrated campaign winning Cannes Gold Lion 2007.
Interactive video in early computer games:
The term interactive video or interactive movie sometimes refers to a nowadays uncommon technique used to create computer games or interactive narratives. Instead of 3D computer graphics an interactive image flow is created using premade video clips, often produced by overlaying computer-generated material with 12-inch videodisc images (where the setup is known as "level III" interactive video, to distinguish it from "level I" or videodisc-only, and "level II" requiring specially made videodisc players that support handheld-remote-based interactivity without using an external computer setup). The clips can be animation like in the video game Dragon's Lair or live action video like in the video game Night Trap. Compared to other computer graphics techniques interactive video tends to emphasize the looks and movement of interactive characters instead of interactivity.
Interactive video in YouTube:
In 2008 YouTube added Video Annotations as an interactive layer of clickable speech-bubble, text-boxes and spotlights. Users may add interactive annotations to their videos and by that a new trend of interactive videos arose, including choose-your-own-adventure video series, online video games using YouTube videos, spot-the-difference-game videos, animal-dubbing and more.
In 2009 YouTube added a community aspect to its Video Annotations feature by allowing video owners to invite their friends and community to add annotations to their movies.
Around 2010 YouTube released the interactive takeovers, certain channels had the opportunity to integrate an iFrame experience enabling them to include interactive videos. Some of the most successful takeovers were done by brands such as Samsung, Tipp-Ex or Chrome.YouTube discontinued the use of annotations on January 15, 2019.
Interactive video in advertising:
In 2014, video marketing platform Innovid was awarded a U.S. patent for interactive video technology.In 2017, the interactive video agency Adways created a specific format called InContent that enables to add interactive ads on a live stream for Roland-Garros.
Interactive video art:
Contemporary interactive video artists like Miroslaw Rogala, Greyworld, Raymond Salvatore Harmon, Lee Wells, Camille Utterback, Scott Snibbe, and Alex Horn have extended the form of interactive video through the dialog of gesture and the participatory involvement of both active and passive viewers. Perpetual art machine is a video art portal and interactive video installation that integrates over 1000 international video artists into a single interactive large scale video experience.
Interactive video in VJing:
Technically VJing is also about creating a stream of video interactively. this involves the user/operator to mix video clips, runtime plugins, and FX to the music's mood, bpm, and vibe.
Interactive video in research:
The human-computer interaction (HCI) research community as well as the multimedia research community have published several works on video interaction tools. A survey is provided in | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Entoglossus**
Entoglossus:
The entoglossus is an elongated bony process in lizards and birds that projects from the body of the hyoid apparatus into the tongue. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Welding**
Welding:
Welding is a fabrication process that joins materials, usually metals or thermoplastics, by using high heat to melt the parts together and allowing them to cool, causing fusion. Welding is distinct from lower temperature techniques such as brazing and soldering, which do not melt the base metal (parent metal).
Welding:
In addition to melting the base metal, a filler material is typically added to the joint to form a pool of molten material (the weld pool) that cools to form a joint that, based on weld configuration (butt, full penetration, fillet, etc.), can be stronger than the base material. Pressure may also be used in conjunction with heat or by itself to produce a weld. Welding also requires a form of shield to protect the filler metals or melted metals from being contaminated or oxidized.
Welding:
Many different energy sources can be used for welding, including a gas flame (chemical), an electric arc (electrical), a laser, an electron beam, friction, and ultrasound. While often an industrial process, welding may be performed in many different environments, including in open air, under water, and in outer space. Welding is a hazardous undertaking and precautions are required to avoid burns, electric shock, vision damage, inhalation of poisonous gases and fumes, and exposure to intense ultraviolet radiation.
Welding:
Until the end of the 19th century, the only welding process was forge welding, which blacksmiths had used for millennia to join iron and steel by heating and hammering. Arc welding and oxy-fuel welding were among the first processes to develop late in the century, and electric resistance welding followed soon after. Welding technology advanced quickly during the early 20th century as world wars drove the demand for reliable and inexpensive joining methods. Following the wars, several modern welding techniques were developed, including manual methods like shielded metal arc welding, now one of the most popular welding methods, as well as semi-automatic and automatic processes such as gas metal arc welding, submerged arc welding, flux-cored arc welding and electroslag welding. Developments continued with the invention of laser beam welding, electron beam welding, magnetic pulse welding, and friction stir welding in the latter half of the century. Today, as the science continues to advance, robot welding is commonplace in industrial settings, and researchers continue to develop new welding methods and gain greater understanding of weld quality.
Etymology:
The term weld is of English origin, with roots from Scandinavia. It is often confused with the Old English word weald, meaning 'a forested area', but this word eventually morphed into the modern version, wild. The Old English word for welding iron was samod ('to bring together') or samodwellung ('to bring together hot', with hot more relating to red-hot or a swelling rage; in contrast to samodfæst, 'to bind together with rope or fasteners'). The term weld is derived from the Middle English verb well (wæll; plural/present tense: wælle) or welling (wællen), meaning 'to heat' (to the maximum temperature possible); 'to bring to a boil'. The modern word was probably derived from the past-tense participle welled (wællende), with the addition of d for this purpose being common in the Germanic languages of the Angles and Saxons. It was first recorded in English in 1590, from a version of the Christian Bible that was originally translated into English by John Wycliffe in the fourteenth century. The original version, from Isaiah 2:4, reads, "...thei shul bete togidere their swerdes into shares..." (they shall beat together their swords into plowshares), while the 1590 version was changed to, "...thei shullen welle togidere her swerdes in-to scharris..." (they shall weld together their swords into plowshares), suggesting this particular use of the word probably became popular in English sometime between these periods.The word is derived from the Old Swedish word valla, meaning 'to boil'. Sweden was a large exporter of iron during the Middle Ages, and many other European languages used different words but with the same meaning to refer to welding iron, such as the Illyrian (Greek) variti ('to boil'), Turkish kaynamak ('to boil'), Grison (Swiss) bulgir ('to boil'), or the Lettish (Latvian) sawdrit ('to weld or solder', derived from wdrit, 'to boil'). In Swedish, however, the word only referred to joining metals when combined with the word for iron (järn), as in valla järn (literally: 'to boil iron'). The word possibly entered English from the Swedish iron trade, or possibly was imported with the thousands of Viking settlements that arrived in England before and during the Viking Age, as more than half of the most common English words in everyday use are Scandinavian in origin.
History:
The history of joining metals goes back several millennia. The earliest examples of this come from the Bronze and Iron Ages in Europe and the Middle East. The ancient Greek historian Herodotus states in The Histories of the 5th century BC that Glaucus of Chios "was the man who single-handedly invented iron welding". Welding was used in the construction of the Iron pillar of Delhi, erected in Delhi, India about 310 AD and weighing 5.4 metric tons.The Middle Ages brought advances in forge welding, in which blacksmiths pounded heated metal repeatedly until bonding occurred. In 1540, Vannoccio Biringuccio published De la pirotechnia, which includes descriptions of the forging operation. Renaissance craftsmen were skilled in the process, and the industry continued to grow during the following centuries.In 1800, Sir Humphry Davy discovered the short-pulse electrical arc and presented his results in 1801. In 1802, Russian scientist Vasily Petrov created the continuous electric arc, and subsequently published "News of Galvanic-Voltaic Experiments" in 1803, in which he described experiments carried out in 1802. Of great importance in this work was the description of a stable arc discharge and the indication of its possible use for many applications, one being melting metals. In 1808, Davy, who was unaware of Petrov's work, rediscovered the continuous electric arc. In 1881–82 inventors Nikolai Benardos (Russian) and Stanisław Olszewski (Polish) created the first electric arc welding method known as carbon arc welding using carbon electrodes. The advances in arc welding continued with the invention of metal electrodes in the late 1800s by a Russian, Nikolai Slavyanov (1888), and an American, C. L. Coffin (1890). Around 1900, A. P. Strohmenger released a coated metal electrode in Britain, which gave a more stable arc. In 1905, Russian scientist Vladimir Mitkevich proposed using a three-phase electric arc for welding. Alternating current welding was invented by C. J. Holslag in 1919, but did not become popular for another decade.Resistance welding was also developed during the final decades of the 19th century, with the first patents going to Elihu Thomson in 1885, who produced further advances over the next 15 years. Thermite welding was invented in 1893, and around that time another process, oxyfuel welding, became well established. Acetylene was discovered in 1836 by Edmund Davy, but its use was not practical in welding until about 1900, when a suitable torch was developed. At first, oxyfuel welding was one of the more popular welding methods due to its portability and relatively low cost. As the 20th century progressed, however, it fell out of favor for industrial applications. It was largely replaced with arc welding, as advances in metal coverings (known as flux) were made. Flux covering the electrode primarily shields the base material from impurities, but also stabilizes the arc and can add alloying components to the weld metal.
History:
World War I caused a major surge in the use of welding, with the various military powers attempting to determine which of the several new welding processes would be best. The British primarily used arc welding, even constructing a ship, the "Fullagar" with an entirely welded hull.: 142 Arc welding was first applied to aircraft during the war as well, as some German airplane fuselages were constructed using the process. Also noteworthy is the first welded road bridge in the world, the Maurzyce Bridge in Poland (1928).
History:
During the 1920s, significant advances were made in welding technology, including the introduction of automatic welding in 1920, in which electrode wire was fed continuously. Shielding gas became a subject receiving much attention, as scientists attempted to protect welds from the effects of oxygen and nitrogen in the atmosphere. Porosity and brittleness were the primary problems, and the solutions that developed included the use of hydrogen, argon, and helium as welding atmospheres. During the following decade, further advances allowed for the welding of reactive metals like aluminum and magnesium. This in conjunction with developments in automatic welding, alternating current, and fluxes fed a major expansion of arc welding during the 1930s and then during World War II. In 1930, the first all-welded merchant vessel, M/S Carolinian, was launched.
History:
During the middle of the century, many new welding methods were invented. In 1930, Kyle Taylor was responsible for the release of stud welding, which soon became popular in shipbuilding and construction. Submerged arc welding was invented the same year and continues to be popular today. In 1932 a Russian, Konstantin Khrenov eventually implemented the first underwater electric arc welding. Gas tungsten arc welding, after decades of development, was finally perfected in 1941, and gas metal arc welding followed in 1948, allowing for fast welding of non-ferrous materials but requiring expensive shielding gases. Shielded metal arc welding was developed during the 1950s, using a flux-coated consumable electrode, and it quickly became the most popular metal arc welding process. In 1957, the flux-cored arc welding process debuted, in which the self-shielded wire electrode could be used with automatic equipment, resulting in greatly increased welding speeds, and that same year, plasma arc welding was invented by Robert Gage. Electroslag welding was introduced in 1958, and it was followed by its cousin, electrogas welding, in 1961. In 1953, the Soviet scientist N. F. Kazakov proposed the diffusion bonding method.Other recent developments in welding include the 1958 breakthrough of electron beam welding, making deep and narrow welding possible through the concentrated heat source. Following the invention of the laser in 1960, laser beam welding debuted several decades later, and has proved to be especially useful in high-speed, automated welding. Magnetic pulse welding (MPW) has been industrially used since 1967. Friction stir welding was invented in 1991 by Wayne Thomas at The Welding Institute (TWI, UK) and found high-quality applications all over the world. All of these four new processes continue to be quite expensive due to the high cost of the necessary equipment, and this has limited their applications.
Processes:
Gas welding The most common gas welding process is oxyfuel welding, also known as oxyacetylene welding. It is one of the oldest and most versatile welding processes, but in recent years it has become less popular in industrial applications. It is still widely used for welding pipes and tubes, as well as repair work.The equipment is relatively inexpensive and simple, generally employing the combustion of acetylene in oxygen to produce a welding flame temperature of about 3100 °C (5600 °F). The flame, since it is less concentrated than an electric arc, causes slower weld cooling, which can lead to greater residual stresses and weld distortion, though it eases the welding of high alloy steels. A similar process, generally called oxyfuel cutting, is used to cut metals.
Processes:
Arc welding These processes use a welding power supply to create and maintain an electric arc between an electrode and the base material to melt metals at the welding point. They can use either direct current (DC) or alternating current (AC), and consumable or non-consumable electrodes. The welding region is sometimes protected by some type of inert or semi-inert gas, known as a shielding gas, and filler material is sometimes used as well.
Processes:
Arc welding processes One of the most common types of arc welding is shielded metal arc welding (SMAW); it is also known as manual metal arc welding (MMAW) or stick welding. Electric current is used to strike an arc between the base material and consumable electrode rod, which is made of filler material (typical steel) and is covered with a flux that protects the weld area from oxidation and contamination by producing carbon dioxide (CO2) gas during the welding process. The electrode core itself acts as filler material, making a separate filler unnecessary.
Processes:
The process is versatile and can be performed with relatively inexpensive equipment, making it well suited to shop jobs and field work. An operator can become reasonably proficient with a modest amount of training and can achieve mastery with experience. Weld times are rather slow, since the consumable electrodes must be frequently replaced and because slag, the residue from the flux, must be chipped away after welding. Furthermore, the process is generally limited to welding ferrous materials, though special electrodes have made possible the welding of cast iron, stainless steel, aluminum, and other metals.
Processes:
Gas metal arc welding (GMAW), also known as metal inert gas or MIG welding, is a semi-automatic or automatic process that uses a continuous wire feed as an electrode and an inert or semi-inert gas mixture to protect the weld from contamination. Since the electrode is continuous, welding speeds are greater for GMAW than for SMAW.A related process, flux-cored arc welding (FCAW), uses similar equipment but uses wire consisting of a steel electrode surrounding a powder fill material. This cored wire is more expensive than the standard solid wire and can generate fumes and/or slag, but it permits even higher welding speed and greater metal penetration.Gas tungsten arc welding (GTAW), or tungsten inert gas (TIG) welding, is a manual welding process that uses a non-consumable tungsten electrode, an inert or semi-inert gas mixture, and a separate filler material. Especially useful for welding thin materials, this method is characterized by a stable arc and high-quality welds, but it requires significant operator skill and can only be accomplished at relatively low speeds.GTAW can be used on nearly all weldable metals, though it is most often applied to stainless steel and light metals. It is often used when quality welds are extremely important, such as in bicycle, aircraft and naval applications. A related process, plasma arc welding, also uses a tungsten electrode but uses plasma gas to make the arc. The arc is more concentrated than the GTAW arc, making transverse control more critical and thus generally restricting the technique to a mechanized process. Because of its stable current, the method can be used on a wider range of material thicknesses than can the GTAW process and it is much faster. It can be applied to all of the same materials as GTAW except magnesium, and automated welding of stainless steel is one important application of the process. A variation of the process is plasma cutting, an efficient steel cutting process.Submerged arc welding (SAW) is a high-productivity welding method in which the arc is struck beneath a covering layer of flux. This increases arc quality since contaminants in the atmosphere are blocked by the flux. The slag that forms on the weld generally comes off by itself, and combined with the use of a continuous wire feed, the weld deposition rate is high. Working conditions are much improved over other arc welding processes, since the flux hides the arc and almost no smoke is produced. The process is commonly used in industry, especially for large products and in the manufacture of welded pressure vessels. Other arc welding processes include atomic hydrogen welding, electroslag welding (ESW), electrogas welding, and stud arc welding. ESW is a highly productive, single-pass welding process for thicker materials between 1 inch (25 mm) and 12 inches (300 mm) in a vertical or close to vertical position.
Processes:
Arc welding power supplies To supply the electrical power necessary for arc welding processes, a variety of different power supplies can be used. The most common welding power supplies are constant current power supplies and constant voltage power supplies. In arc welding, the length of the arc is directly related to the voltage, and the amount of heat input is related to the current. Constant current power supplies are most often used for manual welding processes such as gas tungsten arc welding and shielded metal arc welding, because they maintain a relatively constant current even as the voltage varies. This is important because in manual welding, it can be difficult to hold the electrode perfectly steady, and as a result, the arc length and thus voltage tend to fluctuate. Constant voltage power supplies hold the voltage constant and vary the current, and as a result, are most often used for automated welding processes such as gas metal arc welding, flux-cored arc welding, and submerged arc welding. In these processes, arc length is kept constant, since any fluctuation in the distance between the wire and the base material is quickly rectified by a large change in current. For example, if the wire and the base material get too close, the current will rapidly increase, which in turn causes the heat to increase and the tip of the wire to melt, returning it to its original separation distance.The type of current used plays an important role in arc welding. Consumable electrode processes such as shielded metal arc welding and gas metal arc welding generally use direct current, but the electrode can be charged either positively or negatively. In welding, the positively charged anode will have a greater heat concentration, and as a result, changing the polarity of the electrode affects weld properties. If the electrode is positively charged, the base metal will be hotter, increasing weld penetration and welding speed. Alternatively, a negatively charged electrode results in more shallow welds. Non-consumable electrode processes, such as gas tungsten arc welding, can use either type of direct current, as well as alternating current. However, with direct current, because the electrode only creates the arc and does not provide filler material, a positively charged electrode causes shallow welds, while a negatively charged electrode makes deeper welds. Alternating current rapidly moves between these two, resulting in medium-penetration welds. One disadvantage of AC, the fact that the arc must be re-ignited after every zero crossings, has been addressed with the invention of special power units that produce a square wave pattern instead of the normal sine wave, making rapid zero crossings possible and minimizing the effects of the problem.
Processes:
Resistance welding Resistance welding involves the generation of heat by passing current through the resistance caused by the contact between two or more metal surfaces. Small pools of molten metal are formed at the weld area as high current (1,000–100,000 A) is passed through the metal. In general, resistance welding methods are efficient and cause little pollution, but their applications are somewhat limited and the equipment cost can be high.
Processes:
Spot welding is a popular resistance welding method used to join overlapping metal sheets of up to 3 mm thick. Two electrodes are simultaneously used to clamp the metal sheets together and to pass current through the sheets. The advantages of the method include efficient energy use, limited workpiece deformation, high production rates, easy automation, and no required filler materials. Weld strength is significantly lower than with other welding methods, making the process suitable for only certain applications. It is used extensively in the automotive industry—ordinary cars can have several thousand spot welds made by industrial robots. A specialized process called shot welding, can be used to spot weld stainless steel.Like spot welding, seam welding relies on two electrodes to apply pressure and current to join metal sheets. However, instead of pointed electrodes, wheel-shaped electrodes roll along and often feed the workpiece, making it possible to make long continuous welds. In the past, this process was used in the manufacture of beverage cans, but now its uses are more limited. Other resistance welding methods include butt welding, flash welding, projection welding, and upset welding.
Processes:
Energy beam welding Energy beam welding methods, namely laser beam welding and electron beam welding, are relatively new processes that have become quite popular in high production applications. The two processes are quite similar, differing most notably in their source of power. Laser beam welding employs a highly focused laser beam, while electron beam welding is done in a vacuum and uses an electron beam. Both have a very high energy density, making deep weld penetration possible and minimizing the size of the weld area. Both processes are extremely fast, and are easily automated, making them highly productive. The primary disadvantages are their very high equipment costs (though these are decreasing) and a susceptibility to thermal cracking. Developments in this area include laser-hybrid welding, which uses principles from both laser beam welding and arc welding for even better weld properties, laser cladding, and x-ray welding.
Processes:
Solid-state welding Like the first welding process, forge welding, some modern welding methods do not involve the melting of the materials being joined. One of the most popular, ultrasonic welding, is used to connect thin sheets or wires made of metal or thermoplastic by vibrating them at high frequency and under high pressure. The equipment and methods involved are similar to that of resistance welding, but instead of electric current, vibration provides energy input. Welding metals with this process does not involve melting the materials; instead, the weld is formed by introducing mechanical vibrations horizontally under pressure. When welding plastics, the materials should have similar melting temperatures, and the vibrations are introduced vertically. Ultrasonic welding is commonly used for making electrical connections out of aluminum or copper, and it is also a very common polymer welding process.Another common process, explosion welding, involves the joining of materials by pushing them together under extremely high pressure. The energy from the impact plasticizes the materials, forming a weld, even though only a limited amount of heat is generated. The process is commonly used for welding dissimilar materials, including bonding aluminum to carbon steel in ship hulls and stainless steel or titanium to carbon steel in petrochemical pressure vessels.Other solid-state welding processes include friction welding (including friction stir welding and friction stir spot welding), magnetic pulse welding, co-extrusion welding, cold welding, diffusion bonding, exothermic welding, high frequency welding, hot pressure welding, induction welding, and roll bonding.
Geometry:
Welds can be geometrically prepared in many different ways. The five basic types of weld joints are the butt joint, lap joint, corner joint, edge joint, and T-joint (a variant of this last is the cruciform joint). Other variations exist as well—for example, double-V preparation joints are characterized by the two pieces of material each tapering to a single center point at one-half their height. Single-U and double-U preparation joints are also fairly common—instead of having straight edges like the single-V and double-V preparation joints, they are curved, forming the shape of a U. Lap joints are also commonly more than two pieces thick—depending on the process used and the thickness of the material, many pieces can be welded together in a lap joint geometry.Many welding processes require the use of a particular joint design; for example, resistance spot welding, laser beam welding, and electron beam welding are most frequently performed on lap joints. Other welding methods, like shielded metal arc welding, are extremely versatile and can weld virtually any type of joint. Some processes can also be used to make multipass welds, in which one weld is allowed to cool, and then another weld is performed on top of it. This allows for the welding of thick sections arranged in a single-V preparation joint, for example.
Geometry:
After welding, a number of distinct regions can be identified in the weld area. The weld itself is called the fusion zone—more specifically, it is where the filler metal was laid during the welding process. The properties of the fusion zone depend primarily on the filler metal used, and its compatibility with the base materials. It is surrounded by the heat-affected zone, the area that had its microstructure and properties altered by the weld. These properties depend on the base material's behavior when subjected to heat. The metal in this area is often weaker than both the base material and the fusion zone, and is also where residual stresses are found.
Quality:
Many distinct factors influence the strength of welds and the material around them, including the welding method, the amount and concentration of energy input, the weldability of the base material, filler material, and flux material, the design of the joint, and the interactions between all these factors.For example, the factor of welding position influences weld quality, that welding codes & specifications may require testing—both welding procedures and welders—using specified welding positions: 1G (flat), 2G (horizontal), 3G (vertical), 4G (overhead), 5G (horizontal fixed pipe), or 6G (inclined fixed pipe).
Quality:
To test the quality of a weld, either destructive or nondestructive testing methods are commonly used to verify that welds are free of defects, have acceptable levels of residual stresses and distortion, and have acceptable heat-affected zone (HAZ) properties. Types of welding defects include cracks, distortion, gas inclusions (porosity), non-metallic inclusions, lack of fusion, incomplete penetration, lamellar tearing, and undercutting.
Quality:
The metalworking industry has instituted codes and specifications to guide welders, weld inspectors, engineers, managers, and property owners in proper welding technique, design of welds, how to judge the quality of welding procedure specification, how to judge the skill of the person performing the weld, and how to ensure the quality of a welding job. Methods such as visual inspection, radiography, ultrasonic testing, phased-array ultrasonics, dye penetrant inspection, magnetic particle inspection, or industrial computed tomography can help with detection and analysis of certain defects.
Quality:
Heat-affected zone The heat-affected zone (HAZ) is a ring surrounding the weld in which the temperature of the welding process, combined with the stresses of uneven heating and cooling, alters the heat-treatment properties of the alloy. The effects of welding on the material surrounding the weld can be detrimental—depending on the materials used and the heat input of the welding process used, the HAZ can be of varying size and strength. The thermal diffusivity of the base material plays a large role—if the diffusivity is high, the material cooling rate is high and the HAZ is relatively small. Conversely, a low diffusivity leads to slower cooling and a larger HAZ. The amount of heat injected by the welding process plays an important role as well, as processes like oxyacetylene welding have an unconcentrated heat input and increase the size of the HAZ. Processes like laser beam welding give a highly concentrated, limited amount of heat, resulting in a small HAZ. Arc welding falls between these two extremes, with the individual processes varying somewhat in heat input. To calculate the heat input for arc welding procedures, the following formula can be used: 60 1000 )×Efficiency where Q = heat input (kJ/mm), V = voltage (V), I = current (A), and S = welding speed (mm/min). The efficiency is dependent on the welding process used, with shielded metal arc welding having a value of 0.75, gas metal arc welding and submerged arc welding, 0.9, and gas tungsten arc welding, 0.8. Methods of alleviating the stresses and brittleness created in the HAZ include stress relieving and tempering.One major defect concerning the HAZ would be cracking at the toes , due to the rapid expansion (heating) and contraction (cooling) the material may not have the ability to withstand the stress and could cause cracking, one method the control these stress would be to control the heating and cooling rate, such as pre-heating and post- heating Lifetime extension with after treatment methods The durability and life of dynamically loaded, welded steel structures is determined in many cases by the welds, in particular the weld transitions. Through selective treatment of the transitions by grinding (abrasive cutting), shot peening, High-frequency impact treatment, Ultrasonic impact treatment, etc. the durability of many designs increases significantly.
Metallurgy:
Most solids used are engineering materials consisting of crystalline solids in which the atoms or ions are arranged in a repetitive geometric pattern which is known as a lattice structure. The only exception is material that is made from glass which is a combination of a supercooled liquid and polymers which are aggregates of large organic molecules.Crystalline solids cohesion is obtained by a metallic or chemical bond that is formed between the constituent atoms. Chemical bonds can be grouped into two types consisting of ionic and covalent. To form an ionic bond, either a valence or bonding electron separates from one atom and becomes attached to another atom to form oppositely charged ions. The bonding in the static position is when the ions occupy an equilibrium position where the resulting force between them is zero. When the ions are exerted in tension force, the inter-ionic spacing increases creating an electrostatic attractive force, while a repulsing force under compressive force between the atomic nuclei is dominant.Covalent bonding takes place when one of the constituent atoms loses one or more electrons, with the other atom gaining the electrons, resulting in an electron cloud that is shared by the molecule as a whole. In both ionic and covalent bonding the location of the ions and electrons are constrained relative to each other, thereby resulting in the bond being characteristically brittle.Metallic bonding can be classified as a type of covalent bonding for which the constituent atoms are of the same type and do not combine with one another to form a chemical bond. Atoms will lose an electron(s) forming an array of positive ions. These electrons are shared by the lattice which makes the electron cluster mobile, as the electrons are free to move as well as the ions. For this, it gives metals their relatively high thermal and electrical conductivity as well as being characteristically ductile.Three of the most commonly used crystal lattice structures in metals are the body-centred cubic, face-centred cubic and close-packed hexagonal. Ferritic steel has a body-centred cubic structure and austenitic steel, non-ferrous metals like aluminium, copper and nickel have the face-centred cubic structure.Ductility is an important factor in ensuring the integrity of structures by enabling them to sustain local stress concentrations without fracture. In addition, structures are required to be of an acceptable strength, which is related to a material's yield strength. In general, as the yield strength of a material increases, there is a corresponding reduction in fracture toughness.A reduction in fracture toughness may also be attributed to the embrittlement effect of impurities, or for body-centred cubic metals, from a reduction in temperature. Metals and in particular steels have a transitional temperature range where above this range the metal has acceptable notch-ductility while below this range the material becomes brittle. Within the range, the materials behavior is unpredictable. The reduction in fracture toughness is accompanied by a change in the fracture appearance. When above the transition, the fracture is primarily due to micro-void coalescence, which results in the fracture appearing fibrous. When the temperatures falls the fracture will show signs of cleavage facets. These two appearances are visible by the naked eye. Brittle fracture in steel plates may appear as chevron markings under the microscope. These arrow-like ridges on the crack surface point towards the origin of the fracture.Fracture toughness is measured using a notched and pre-cracked rectangular specimen, of which the dimensions are specified in standards, for example ASTM E23. There are other means of estimating or measuring fracture toughness by the following: The Charpy impact test per ASTM A370; The crack-tip opening displacement (CTOD) test per BS 7448–1; The J integral test per ASTM E1820; The Pellini drop-weight test per ASTM E208.
Unusual conditions:
While many welding applications are done in controlled environments such as factories and repair shops, some welding processes are commonly used in a wide variety of conditions, such as open air, underwater, and vacuums (such as space). In open-air applications, such as construction and outdoors repair, shielded metal arc welding is the most common process. Processes that employ inert gases to protect the weld cannot be readily used in such situations, because unpredictable atmospheric movements can result in a faulty weld. Shielded metal arc welding is also often used in underwater welding in the construction and repair of ships, offshore platforms, and pipelines, but others, such as flux cored arc welding and gas tungsten arc welding, are also common. Welding in space is also possible—it was first attempted in 1969 by Russian cosmonauts during the Soyuz 6 mission, when they performed experiments to test shielded metal arc welding, plasma arc welding, and electron beam welding in a depressurized environment. Further testing of these methods was done in the following decades, and today researchers continue to develop methods for using other welding processes in space, such as laser beam welding, resistance welding, and friction welding. Advances in these areas may be useful for future endeavours similar to the construction of the International Space Station, which could rely on welding for joining in space the parts that were manufactured on Earth.
Safety issues:
Welding can be dangerous and unhealthy if the proper precautions are not taken. However, using new technology and proper protection greatly reduces risks of injury and death associated with welding.Since many common welding procedures involve an open electric arc or flame, the risk of burns and fire is significant; this is why it is classified as a hot work process. To prevent injury, welders wear personal protective equipment in the form of heavy leather gloves and protective long-sleeve jackets to avoid exposure to extreme heat and flames. Synthetic clothing such as polyester should not be worn since it may burn, causing injury. Additionally, the brightness of the weld area leads to a condition called arc eye or flash burns in which ultraviolet light causes inflammation of the cornea and can burn the retinas of the eyes. Goggles and welding helmets with dark UV-filtering face plates are worn to prevent this exposure. Since the 2000s, some helmets have included a face plate which instantly darkens upon exposure to the intense UV light. To protect bystanders, the welding area is often surrounded with translucent welding curtains. These curtains, made of a polyvinyl chloride plastic film, shield people outside the welding area from the UV light of the electric arc, but cannot replace the filter glass used in helmets.
Safety issues:
Welders are often exposed to dangerous gases and particulate matter. Processes like flux-cored arc welding and shielded metal arc welding produce smoke containing particles of various types of oxides. The size of the particles in question tends to influence the toxicity of the fumes, with smaller particles presenting a greater danger. This is because smaller particles have the ability to cross the blood–brain barrier. Fumes and gases, such as carbon dioxide, ozone, and fumes containing heavy metals, can be dangerous to welders lacking proper ventilation and training. Exposure to manganese welding fumes, for example, even at low levels (<0.2 mg/m3), may lead to neurological problems or to damage to the lungs, liver, kidneys, or central nervous system. Nano particles can become trapped in the alveolar macrophages of the lungs and induce pulmonary fibrosis. The use of compressed gases and flames in many welding processes poses an explosion and fire risk. Some common precautions include limiting the amount of oxygen in the air, and keeping combustible materials away from the workplace.
Costs and trends:
As an industrial process, the cost of welding plays a crucial role in manufacturing decisions. Many different variables affect the total cost, including equipment cost, labor cost, material cost, and energy cost. Depending on the process, equipment cost can vary, from inexpensive for methods like shielded metal arc welding and oxyfuel welding, to extremely expensive for methods like laser beam welding and electron beam welding. Because of their high cost, they are only used in high production operations. Similarly, because automation and robots increase equipment costs, they are only implemented when high production is necessary. Labor cost depends on the deposition rate (the rate of welding), the hourly wage, and the total operation time, including time spent fitting, welding, and handling the part. The cost of materials includes the cost of the base and filler material, and the cost of shielding gases. Finally, energy cost depends on arc time and welding power demand.For manual welding methods, labor costs generally make up the vast majority of the total cost. As a result, many cost-saving measures are focused on minimizing operation time. To do this, welding procedures with high deposition rates can be selected, and weld parameters can be fine-tuned to increase welding speed. Mechanization and automation are often implemented to reduce labor costs, but this frequently increases the cost of equipment and creates additional setup time. Material costs tend to increase when special properties are necessary, and energy costs normally do not amount to more than several percent of the total welding cost.In recent years, in order to minimize labor costs in high production manufacturing, industrial welding has become increasingly more automated, most notably with the use of robots in resistance spot welding (especially in the automotive industry) and in arc welding. In robot welding, mechanized devices both hold the material and perform the weld and at first, spot welding was its most common application, but robotic arc welding increases in popularity as technology advances. Other key areas of research and development include the welding of dissimilar materials (such as steel and aluminum, for example) and new welding processes, such as friction stir, magnetic pulse, conductive heat seam, and laser-hybrid welding. Furthermore, progress is desired in making more specialized methods like laser beam welding practical for more applications, such as in the aerospace and automotive industries. Researchers also hope to better understand the often unpredictable properties of welds, especially microstructure, residual stresses, and a weld's tendency to crack or deform.The trend of accelerating the speed at which welds are performed in the steel erection industry comes at a risk to the integrity of the connection. Without proper fusion to the base materials provided by sufficient arc time on the weld, a project inspector cannot ensure the effective diameter of the puddle weld therefore he or she cannot guarantee the published load capacities unless they witness the actual installation. This method of puddle welding is common in the United States and Canada for attaching steel sheets to bar joist and structural steel members. Regional agencies are responsible for ensuring the proper installation of puddle welding on steel construction sites. Currently there is no standard or weld procedure which can ensure the published holding capacity of any unwitnessed connection, but this is under review by the American Welding Society.
Glass and plastic welding:
Glasses and certain types of plastics are commonly welded materials. Unlike metals, which have a specific melting point, glasses and plastics have a melting range, called the glass transition. When heating the solid material past the glass-transition temperature (Tg) into this range, it will generally become softer and more pliable. When it crosses through the range, above the glass-melting temperature (Tm), it will become a very thick, sluggish, viscous liquid, slowly decreasing in viscosity as temperature increases. Typically, this viscous liquid will have very little surface tension compared to metals, becoming a sticky, taffy to honey-like consistency, so welding can usually take place by simply pressing two melted surfaces together. The two liquids will generally mix and join at first contact. Upon cooling through the glass transition, the welded piece will solidify as one solid piece of amorphous material.
Glass and plastic welding:
Glass welding Glass welding is a common practice during glassblowing. It is used very often in the construction of lighting, neon signs, flashtubes, scientific equipment, and the manufacture of dishes and other glassware. It is also used during glass casting for joining the halves of glass molds, making items such as bottles and jars. Welding glass is accomplished by heating the glass through the glass transition, turning it into a thick, formable, liquid mass. Heating is usually done with a gas or oxy-gas torch, or a furnace, because the temperatures for melting glass are often quite high. This temperature may vary, depending on the type of glass. For example, lead glass becomes a weldable liquid at around 1,600 °F (870 °C), and can be welded with a simple propane torch. On the other hand, quartz glass (fused silica) must be heated to over 3,000 °F (1,650 °C), but quickly loses its viscosity and formability if overheated, so an oxyhydrogen torch must be used. Sometimes a tube may be attached to the glass, allowing it to be blown into various shapes, such as bulbs, bottles, or tubes. When two pieces of liquid glass are pressed together, they will usually weld very readily. Welding a handle onto a pitcher can usually be done with relative ease. However, when welding a tube to another tube, a combination of blowing and suction, and pressing and pulling is used to ensure a good seal, to shape the glass, and to keep the surface tension from closing the tube in on itself. Sometimes a filler rod may be used, but usually not.
Glass and plastic welding:
Because glass is very brittle in its solid state, it is often prone to cracking upon heating and cooling, especially if the heating and cooling are uneven. This is because the brittleness of glass does not allow for uneven thermal expansion. Glass that has been welded will usually need to be cooled very slowly and evenly through the glass transition, in a process called annealing, to relieve any internal stresses created by a temperature gradient.
Glass and plastic welding:
There are many types of glass, and it is most common to weld using the same types. Different glasses often have different rates of thermal expansion, which can cause them to crack upon cooling when they contract differently. For instance, quartz has very low thermal expansion, while soda-lime glass has very high thermal expansion. When welding different glasses to each other, it is usually important to closely match their coefficients of thermal expansion, to ensure that cracking does not occur. Also, some glasses will simply not mix with others, so welding between certain types may not be possible.
Glass and plastic welding:
Glass can also be welded to metals and ceramics, although with metals the process is usually more adhesion to the surface of the metal rather than a commingling of the two materials. However, certain glasses will typically bond only to certain metals. For example, lead glass bonds readily to copper or molybdenum, but not to aluminum. Tungsten electrodes are often used in lighting but will not bond to quartz glass, so the tungsten is often wetted with molten borosilicate glass, which bonds to both tungsten and quartz. However, care must be taken to ensure that all materials have similar coefficients of thermal expansion to prevent cracking both when the object cools and when it is heated again. Special alloys are often used for this purpose, ensuring that the coefficients of expansion match, and sometimes thin, metallic coatings may be applied to a metal to create a good bond with the glass.
Glass and plastic welding:
Plastic welding Plastics are generally divided into two categories, which are "thermosets" and "thermoplastics." A thermoset is a plastic in which a chemical reaction sets the molecular bonds after first forming the plastic, and then the bonds cannot be broken again without degrading the plastic. Thermosets cannot be melted, therefore, once a thermoset has set it is impossible to weld it. Examples of thermosets include epoxies, silicone, vulcanized rubber, polyester, and polyurethane.
Glass and plastic welding:
Thermoplastics, by contrast, form long molecular chains, which are often coiled or intertwined, forming an amorphous structure without any long-range, crystalline order. Some thermoplastics may be fully amorphous, while others have a partially crystalline/partially amorphous structure. Both amorphous and semicrystalline thermoplastics have a glass transition, above which welding can occur, but semicrystallines also have a specific melting point which is above the glass transition. Above this melting point, the viscous liquid will become a free-flowing liquid (see rheological weldability for thermoplastics). Examples of thermoplastics include polyethylene, polypropylene, polystyrene, polyvinylchloride (PVC), and fluoroplastics like Teflon and Spectralon.
Glass and plastic welding:
Welding thermoplastic is very similar to welding glass. The plastic first must be cleaned and then heated through the glass transition, turning the weld-interface into a thick, viscous liquid. Two heated interfaces can then be pressed together, allowing the molecules to mix through intermolecular diffusion, joining them as one. Then the plastic is cooled through the glass transition, allowing the weld to solidify. A filler rod may often be used for certain types of joints. The main differences between welding glass and plastic are the types of heating methods, the much lower melting temperatures, and the fact that plastics will burn if overheated. Many different methods have been devised for heating plastic to a weldable temperature without burning it. Ovens or electric heating tools can be used to melt the plastic. Ultrasonic, laser, or friction heating are other methods. Resistive metals may be implanted in the plastic, which respond to induction heating. Some plastics will begin to burn at temperatures lower than their glass transition, so welding can be performed by blowing a heated, inert gas onto the plastic, melting it while, at the same time, shielding it from oxygen.Many thermoplastics can also be welded using chemical solvents. When placed in contact with the plastic, the solvent will begin to soften it, bringing the surface into a thick, liquid solution. When two melted surfaces are pressed together, the molecules in the solution mix, joining them as one. Because the solvent can permeate the plastic, the solvent evaporates out through the surface of the plastic, causing the weld to drop out of solution and solidify. A common use for solvent welding is for joining PVC or ABS (acrylonitrile butadiene styrene) pipes during plumbing, or for welding styrene and polystyrene plastics in the construction of models. Solvent welding is especially effective on plastics like PVC which burn at or below their glass transition, but may be ineffective on plastics like Teflon or polyethylene that are resistant to chemical decomposition. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Correctness (computer science)**
Correctness (computer science):
In theoretical computer science, an algorithm is correct with respect to a specification if it behaves as specified. Best explored is functional correctness, which refers to the input-output behavior of the algorithm (i.e., for each input it produces an output satisfying the specification).Within the latter notion, partial correctness, requiring that if an answer is returned it will be correct, is distinguished from total correctness, which additionally requires that an answer is eventually returned, i.e. the algorithm terminates. Correspondingly, to prove a program's total correctness, it is sufficient to prove its partial correctness, and its termination. The latter kind of proof (termination proof) can never be fully automated, since the halting problem is undecidable.
Correctness (computer science):
For example, successively searching through integers 1, 2, 3, … to see if we can find an example of some phenomenon—say an odd perfect number—it is quite easy to write a partially correct program (see box). But to say this program is totally correct would be to assert something currently not known in number theory.
A proof would have to be a mathematical proof, assuming both the algorithm and specification are given formally. In particular it is not expected to be a correctness assertion for a given program implementing the algorithm on a given machine. That would involve such considerations as limitations on computer memory.
A deep result in proof theory, the Curry–Howard correspondence, states that a proof of functional correctness in constructive logic corresponds to a certain program in the lambda calculus. Converting a proof in this way is called program extraction.
Hoare logic is a specific formal system for reasoning rigorously about the correctness of computer programs. It uses axiomatic techniques to define programming language semantics and argue about the correctness of programs through assertions known as Hoare triples.
Correctness (computer science):
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to limited understanding of the principles of software. The difficulty in software testing stems from the complexity of software: we can not completely test a program with moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can be used as a generic metric as well. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bird feeder**
Bird feeder:
A birdfeeder, bird table, or tray feeder is a device placed outdoors to supply bird food to birds (bird feeding). The success of a bird feeder in attracting birds depends upon its placement and the kinds of foods offered, as different species have different preferences.
Most bird feeders supply seeds or bird food, such as millet, sunflower (oil and striped), safflower, nyjer seed, and rapeseed or canola seed to seed-eating birds.
Bird feeders often are used for birdwatching and many people keep webcams trained on feeders where birds often congregate, with some even living just near the bird feeder.
Types of feeders:
Seed feeders Seed feeders are the most common type of feeders. They can vary in design from tubes to hoppers and trays. Sunflower seeds or mixed seeds are popular for use in these feeders and will attract many songbirds such as cardinals, finches, and chickadees. Black oil sunflower seeds are especially popular with bird enthusiasts. The outer shell of the black oil sunflower seeds are thinner and easier to crack than other types of sunflower seeds. In addition, the kernel is larger than the striped or white sunflower seeds. Black oil sunflower seeds also contain a large amount of fat; therefore they are especially good to use in the winter. Most bird feeders are designed to dispense sunflower-sized foods, but there are specialty "finch feeders" with smaller openings to dispense the tiny Guizotia abyssinica (Niger seed), which is a favorite of smaller finches.
Types of feeders:
Seed feeders are mainly squirrel proof, tube-like or hopper. Due to the need of keeping squirrels away from the bird food, manufacturers have created different defense mechanisms that may deter squirrels from getting close to the seed. Some seed feeders come with weight sensitive technology which shuts off the access to the seed ports whenever a heavy weight is detected (as most squirrels are heavier than birds). Birds can still feed as they weigh less and the ports remain open under their weight. Other seed feeders are designed to be mounted on poles as it is believed that squirrels reach seed feeders more easily from trees than from poles. The simplest type of squirrel proof feeder is a tube-like feeder surrounded by a metal cage. These feeders also offer protection from larger and more aggressive birds. Tube seed feeders are primarily made of clear plastic tubes with plastic or metal caps, bases and perches. Hopper bird feeders look like a house and attract a wide range of birds such as finches, cardinals, blue jays, sparrows and titmice.
Types of feeders:
Hummingbird feeders Hummingbird feeders, rather than dispensing seed, supply liquid nourishment to hummingbirds in the form of a sugar solution. The solution is normally 4 parts water to 1 part white sugar. Only pure refined white cane or beet sugar should be used, according to experts: Brown, turbinado, or raw sugar must not be used because they contain levels of iron that could be lethal.
Types of feeders:
Honey must not be used, because it promotes dangerous fungal growth.
Types of feeders:
The nectar should be changed every 3–5 days.Hummingbird feeders usually have red accents or red glass to help attract hummingbirds. The sugar mixture is sometimes colored with red food coloring to attract birds, though this is not necessary if the feeder itself is red, and may actually be harmful to the birds. Yeasts tend to grow in hummingbird feeders and spoil the solution, so they must be refreshed frequently and kept very clean to avoid harm to the birds. See the article on hummingbirds for more details. Ants and other insects are also attracted to hummingbird nectar. Smearing petroleum jelly on the stem or cap of the feeder (away from the perch or flower part where the bird may come into contact with it) may prevent the ants from crawling to the feeder. When placing a hummingbird feeder, the feeder is best suited 15 to 20 feet from windows; 10 to 15 feet from the nearest cover, like shrubs or bushes; and in an open area that receives partial sun, so that hummingbirds can move from nectar source to nectar source.Hummingbird top-fill feeders are popular among bird lovers because they are easy to fill and clean and also because they do not need to be turned upright which means that there are less chances that the nectar is spilled. The sports bottle top-fill hummingbird feeders have the design of a sports bottle, with a mechanism that works similarly to such a bottle. With this type of feeder, one has to push down the plastic container in order to close the nectar reservoir and then to unscrew the cap and pour the nectar. After the cap is replaced, the body of the nectar reservoir can be pulled up. This type of bird feeder has the advantage that the feeder does not need to be turned upside down to be refilled and which results in less nectar wasted by spilling. The traditional top-fill hummingbird feeders are one of the most popular types. There is also a plunger type of top-filling hummingbird feeder which comes with a small plunger in the container that creates the vacuum seal when the lid is tightened and the nectar will start flowing only when the lid is sealed correctly to the feeding ports.The bottom-fill hummingbird feeders include a traditional bottom-fill feeder and several variations of it. The traditional ones are filled from an opening at the bottom of the nectar container but many manufacturers have come up with improved variations of the traditional style of feeder, to make feeding birds easier and with less nectar wasted. Some bottom-fill feeders come with a funnel-like opening at the bottom of the container, through which the feeder is filled. Other bottom-filled hummingbird feeders can be attached to one's window to provide a close-up of the birds.
Types of feeders:
In 1932, W. R. Sullivan invented a hummingbird feeder designed to prevent other birds or insects from drinking from it, which he produced and sold locally around Kerrville, Texas.
Types of feeders:
Oriole feeders Oriole feeders, which are traditionally colored orange, also supply such artificial nectar and are designed to serve New World orioles, which have an unusually shaped beak and tongue. These orioles and some other birds also will come to fruit foods, such as grape jelly, or half an orange on a peg. Hummingbirds will also feed from Oriole feeders.
Types of feeders:
Oriole feeders usually have nectar containers made of glass or plastic, which are designed to attract the orioles. Oriole feeders should be cleaned at least once a week and even more often when the temperatures are higher. Oriole feeders also come in top fill, bottom fill and dish-like designs.
Types of feeders:
Suet feeders A suet feeder is typically a metal cage-like construction with a plastic coating that contains a cake or block of suet to feed woodpeckers, flickers, nuthatches, and many other species of insect eaters. Suet logs are also very common. These wooden logs have holes drilled out for suet to be inserted. Suet is high in fat which helps to keep birds warm and nourished during the cold winter. Suet cakes consist of sunflower seeds and wheat or oat flakes mixed with suet, pork fat, or coconut oil.
Types of feeders:
Other Birds housed in wired or glass cages can be fed with electronic bird feeders. The electronic bird feeders are capable of storing bird food for days and even weeks, depending on the feeder type and automatically replenish the dish once it is empty.
Types of feeders:
Providing a varied array of tastes and feeding venues will result in less competition for food and dining spots for birds, just as well-planned and maintained gardens provide many plants which supply different types of seeds and nectars. A shallow bird bath can attract as many birds as a feeder but it must be safe from cats, kept clean, and refreshed frequently with clean water to avoid mosquitoes. The birdbath should be placed where a frightened bird can fly up easily to an overhanging limb or resting place if disturbed or attacked.
Squirrels:
Squirrels may also help themselves to the contents of bird feeders, often not merely feeding, but carrying away the food to their hoard. There are various anti-squirrel techniques and devices available to thwart attempts by squirrels to raid bird feeders. Several manufacturers produce feeders with perches that collapse under the weight of anything heavier than a bird, or that use battery power to shock an intruder lightly or spin the perching area to fling it off. Caged feeders are often designed so that squirrels cannot reach the seed inside, but birds can easily fly through the cage's holes. A UK company, The Nuttery, held the original patent on this cage-within-a-cage design. Caged feeders are best to keep out gray squirrels. Chipmunks and red squirrels can usually enter caged feeders. Hot pepper in bird seed and suet has also been shown to be effective against squirrels without harming birds, as birds are not sensitive to capsaicin oleoresin, but mammals experience a strong burning sensation when exposed to it.The placement of a bird feeder can also prevent squirrels from accessing the seed. In addition, baffles can be used that prevent squirrels from gaining their footing above feeders. Below feeders, baffles can prevent squirrels from climbing any further, however squirrels are very agile and acrobatic and often find a way to overcome devices of any nature.
Negative impacts:
Feeding wild birds does carry potential risks. Birds may contract and spread diseases like salmonellosis by gathering at feeders; poorly maintained feeding and watering stations may also cause illness. Birds at feeders risk predation by cats and other animals, or may incur injury by flying into windows. Steps should be taken to reduce the risks to birds, such as: regular disinfecting of feeders and watering stations, ensuring feed has not become moldy or rancid, and proper positioning of feeders to reduce crowding and window collisions. Birds are less likely to fly into windows that have a wooden lattice. Collisions with windows can also be reduced by using window decals.Depending on the feeder design and the type of feed used, species such as the house sparrow can dominate the use of the feeder. As a result, the house sparrow population can become inflated locally where feeders are used. In North America, where the house sparrow is an invasive species, competition from house sparrows can exclude the indigenous bluebirds from available nest sites as well as attack indigenous birds.The use of bird feeders has been claimed to cause many other environmental problems; some of these were highlighted in a 2002 front-page article in The Wall Street Journal, which provoked responses nationwide from bird enthusiasts and scientists who refuted the article's arguments.Prior to the publication of the Wall Street Journal article, Canadian ornithologist Jason Rogers also wrote about the environmental problems associated with the use of bird feeders in the journal Alberta Naturalist. In this article, Rogers explains how the use of bird feeders is inherently fraught with negative impacts and risks such as fostering dependency, altering natural distribution, density, and migration patterns, interfering with ecological processes, causing malnutrition, facilitating the spread of disease, and increasing the risk of death from cats, pesticides, hitting windows, and other causes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Electro-pneumatic action**
Electro-pneumatic action:
The electro-pneumatic action is a control system by the mean of air pressure for pipe organs, whereby air pressure, controlled by an electric current and operated by the keys of an organ console, opens and closes valves within wind chests, allowing the pipes to speak. This system also allows the console to be physically detached from the organ itself. The only connection was via an electrical cable from the console to the relay, with some early organ consoles utilizing a separate wind supply to operate combination pistons.
Invention:
Although early experiments with Barker lever, tubular-pneumatic and electro-pneumatic actions date as far back as the 1850s, credit for a feasible design is generally given to the English organist and inventor, Robert Hope-Jones. He overcame the difficulties inherent in earlier designs by including a rotating centrifugal air blower and replacing banks of batteries with a DC generator, which provided electrical power to the organ. This allowed the construction of new pipe organs without any physical linkages whatsoever. Previous organs used tracker action, which requires a mechanical linkage between the console and the organ windchests, or tubular-pneumatic action, which linked the console and windchests with a large bundle of lead tubing.
Operation:
When an organ key is depressed, an electric circuit is completed by means of a switch connected to that key. This causes a low-voltage current to flow through a cable to the windchest, upon which a rank, or multiple ranks of pipes are set. Within the chest, a small electro-magnet associated with the key that is pressed becomes energized. This causes a very small valve to open. This, in turn, allows wind pressure to activate a bellows or "pneumatic" which operates a larger valve. This valve causes a change of air pressure within a channel that leads to all pipes of that note. A separate "stop action" system is used to control the admittance of air or "wind" into the pipes of the rank or ranks selected by the organist's selection of stops, while other ranks are "stopped" from playing. The stop action can also be an electro-pneumatic action, or may be another type of action This pneumatically assisted valve action is in contrast to a direct electric action in which each pipe's valve is opened directly by an electric solenoid which is attached to the valve.
Advantages and disadvantages:
The console of an organ which uses either type of electric action is connected to the other mechanisms by an electrical cable. This makes it possible for the console to be placed in any desirable location. It also permits the console to be movable, or to be installed on a "lift", as was the practice with theater organs.
While many consider tracker action organs to be more sensitive to the player's control, others find some tracker organs heavy to play and tubular-pneumatic organs to be sluggish, and so prefer electro-pneumatic or direct electric actions.
Advantages and disadvantages:
An electro-pneumatic action requires less current to operate than a direct electric action. This causes less demand on switch contacts. An organ using electro-pneumatic action was more reliable in operation than early direct electric organs until improvements were made in direct electric components.A disadvantage of an electro-pneumatic organ is its use of large quantities of thin perishable leather, usually lambskin. This requires an extensive "re-leathering" of the windchests every twenty-five to forty years depending upon the quality of the material used, the atmospheric conditions and the use of the organ.Like tracker and tubular action, electro-pneumatic action—when employing the commonly used pitman-style windchests—is less flexible in operation than direct electric action. When electro-pneumatic action uses unit windchests (as does the electro-pneumatic action constructed by organ builder Schoenstein & Co.), then it works similarly to direct electric action, in which each rank operates independently, allowing "unification", where each individual rank on a windchest can be played at various octave ranges.
Advantages and disadvantages:
A drawback to older electric action organs was the large amount of wiring required for operation. With each stop tab and key being wired, the transmission cable could easily contain several hundred wires. The great number of wires required between the keyboards, the banks of relays and the organ itself, with each solenoid requiring its own signal wire, made the situation worse, especially if a wire was broken (this was particularly true with consoles located on lifts and/or turntables), which made tracing the break very difficult.
Advantages and disadvantages:
These problems increased with the size of the instrument, and it would not be unusual for a particular organ to contain over a hundred miles of wiring. The largest pipe organ in the world, the Boardwalk Hall Auditorium Organ, is said to contain more than 137,500 miles (221,300 km) of wire. Modern electronic switching has largely overcome these physical problems.
Modern methods:
In the years after the advent of the transistor, and later, integrated circuits and microprocessors, miles of wiring and electro-pneumatic relays have given way to electronic and computerized control and relay systems, which have made the control of pipe organs much more efficient. But for its time, the electro-pneumatic action was considered a great success, and even today modernized versions of this action are used in many new pipe organs, especially in the United States and the United Kingdom. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SciShow**
SciShow:
SciShow is a collection of YouTube channels that focuses on science news. The program is hosted by Hank Green along with a rotating cast of co-hosts. SciShow was launched as an original channel.
The series has been consistently releasing new material since it was created in 2012.
Since its launch, three additional channels have been launched under the SciShow brand: SciShow Space, SciShow Psych, and SciShow Kids.
History and funding:
The channel was launched as an "original channel", which meant that YouTube funded the channel. The show's initial grant was projected to expire in 2014, and in response, on September 12, 2013 SciShow joined the viewer-funding site Subbable, created in part by Green.In 2014, the channel landed a national advertisement deal with YouTube. The educational program was featured on platforms such as billboards and television commercials as a result. Green details that the advertisements had a positive effect on SciShow, stating, "My Twitter exploded, our followers and subscribers exploded."After Patreon acquired Subbable, the channel switched over to Patreon where it continues to receive support in exchange for various perks. SciShow currently has over four thousand patrons.
Production and hosting:
Though Green hosts the majority of episodes, the show has alternate hosts; Michael Aranda has been with the show since its inception, and Olivia Gordon of the Missoula Insectarium joined in June 2016. Gordon left SciShow in August 2020, and was replaced by ethnobotanist Rose Bear Don't Walk. Prior to her move to Chicago, Emily Graslie of The Brain Scoop also occasionally hosted on the channel. There have also been guest appearances by Lindsey Doe, who hosts Sexplanations, another channel launched by Green; and by longtime SciShow staffer Stefan Chin, who since 2017 has been a regular host. SciShow has grown since its 2012 launch; it now employs a full editorial, production, and operations staff.SciShow Space has three rotating hosts: Hank Green, Reid Reimers, and Caitlin Hofmeister. Similarly, SciShow Psych rotates hosting between Hank Green, Brit Garner, and Anthony Brown. SciShow Kids is primarily hosted by Jessi Knudsen Castañeda, host of Animal Wonders Montana.
Content:
Several different scientific fields are covered by SciShow, including chemistry, physics, biology, zoology, geology, geography, entomology, botany, meteorology, astronomy, medicine, psychology, anthropology, math and computer science. The videos on SciShow have a vast variety of different topics, such as nutrition, and "science superlatives". As of April 2020, SciShow has released over 2250 videos.A spin-off channel, SciShow Space, launched in April 2014 to specialize in space topics. A second spin-off, SciShow Kids, launched in March 2015 to specialize in delivering science topics to children. Kids went on hiatus in late 2018, returning in April 2020. A third spinoff channel was announced in February 2017, SciShow Psych, which debuted in March 2017, specializing in psychology and neuroscience. A podcast, SciShow Tangents, was launched in November 2018; it features entertaining exchanges of scientific facts among many of the shows' staffers, and is directed at a mature audience.
Podcast:
In November 2018, a co-branded podcast titled SciShow Tangents was launched as a co-production with WNYC Studios. It consists of a panel format where Hank Green, Ceri Riley, Stefan Chin, and Sam Schultz share facts about science on a weekly theme; each episode has multiple segments, several of which are competitive. In late 2020, the podcast ceased its association with WNYC Studios, and continues as an independently produced entity. The podcast is a restructured and reimagined continuation of their previous podcast, Holy Fucking Science, which ran from January 2017 to March 2018.
Reception:
As SciShow has amassed a large following, the channel has been featured on several media outlets.In October 2014 the channel surpassed two million subscribers, and over 210 million video views.
As of September 2021, the channel has over 6.7 million subscribers and over 1.4 billion total views.In 2017, SciShow won a Webby Award in the People's Voice category. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mexsana**
Mexsana:
Mexsana is an antiseptic medicated powder. It is used to relieve itching and chafing, to protect against perspiration odor and discomfort, while also keeping the skin's pH balanced. The product is also used to treat severely chapped skin, minor burns, and other minor skin irritations. Currently Mexsana medicated powder is produced by Merck Sharp & Dohme (MSD) Laboratories.
Not to be confused with "Mexana" medicated powder, sold in the Dominican Republic.
Active Ingredients: Zinc oxide, kaolin, benzethonium chloride.
Also contains: Camphor, eucalyptus oil, fragrance, lemon oil.
Process:
The product's elaboration process stands out for the strictness in its safety measures and also in the precise measures for weighting its components. The powder, mineral of less hardness in the Mohs scale, is the source that gives the name to the generic product used for personal hygienic purposes, and forms around 85% of the composition of the brand.
Initially, packages filled with huge amounts of this material, arranged in sacks of 25 kilograms each, are placed in warehouses that technically isolate the daily contamination, due to a sophisticated air flow mechanism.
In popular culture:
A can of Mexsana brand heat powder can be found on the back of the Greatest Hits album of Mötley Crüe.
Products available in Colombia:
Currently the brand has both powder and spray products available in the Colombian market (Made by Bayer Andina): Powder Talcos Mexsana Lady Mexsana Mexsana Avena Sprays Mexsana Pies Lady Mexsana Antibacterial Mexsana Pies Antiperspirant Mexsana Pies Ultra Mexsana Pies Avena | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Methyl dimethyldithiocarbamate**
Methyl dimethyldithiocarbamate:
Methyl dimethyldithiocarbamate is the organosulfur compound with the formula (CH3)2NC(S)SCH3. It is the one of simplest dithiocarbamic esters. It is a white volatile solid that is poorly soluble in water but soluble in many organic solvents. It was once used as a pesticide.
Methyl dimethyldithiocarbamate can be prepared by methylation of salts of dimethyldithiocarbamate: (CH3)2NCS2Na + (CH3O)2SO2 → (CH3)2NC(S)SCH3 + Na[CH3OSO3]It can also be prepared by the reaction of a tetramethylthiuram disulfide with methyl Grignard reagents: [(CH3)2NC(S)S]2 + CH3MgBr → (CH3)2NC(S)SCH3 + (CH3)2NCS2MgBr | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Model predictive control**
Model predictive control:
Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is the fact that it allows the current timeslot to be optimized, while keeping future timeslots in account. This is achieved by optimizing a finite time-horizon, but only implementing the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). Also MPC has the ability to anticipate future events and can take control actions accordingly. PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.Generalized predictive control (GPC) and dynamic matrix control (DMC) are classical examples of MPC.
Overview:
The models used in MPC are generally intended to represent the behavior of complex and simple dynamical systems. The additional complexity of the MPC control algorithm is not generally needed to provide adequate control of simple systems, which are often controlled well by generic PID controllers. Common dynamic characteristics that are difficult for PID controllers include large time delays and high-order dynamics.
Overview:
MPC models predict the change in the dependent variables of the modeled system that will be caused by changes in the independent variables. In a chemical process, independent variables that can be adjusted by the controller are often either the setpoints of regulatory PID controllers (pressure, flow, temperature, etc.) or the final control element (valves, dampers, etc.). Independent variables that cannot be adjusted by the controller are used as disturbances. Dependent variables in these processes are other measurements that represent either control objectives or process constraints.
Overview:
MPC uses the current plant measurements, the current dynamic state of the process, the MPC models, and the process variable targets and limits to calculate future changes in the dependent variables. These changes are calculated to hold the dependent variables close to target while honoring constraints on both independent and dependent variables. The MPC typically sends out only the first change in each independent variable to be implemented, and repeats the calculation when the next change is required.
Overview:
While many real processes are not linear, they can often be considered to be approximately linear over a small operating range. Linear MPC approaches are used in the majority of applications with the feedback mechanism of the MPC compensating for prediction errors due to structural mismatch between the model and the process. In model predictive controllers that consist only of linear models, the superposition principle of linear algebra enables the effect of changes in multiple independent variables to be added together to predict the response of the dependent variables. This simplifies the control problem to a series of direct matrix algebra calculations that are fast and robust.
Overview:
When linear models are not sufficiently accurate to represent the real process nonlinearities, several approaches can be used. In some cases, the process variables can be transformed before and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled with nonlinear MPC that uses a nonlinear model directly in the control application. The nonlinear model may be in the form of an empirical data fit (e.g. artificial neural networks) or a high-fidelity dynamic model based on fundamental mass and energy balances. The nonlinear model may be linearized to derive a Kalman filter or specify a model for linear MPC.
Overview:
An algorithmic study by El-Gherwi, Budman, and El Kamel shows that utilizing a dual-mode approach can provide significant reduction in online computations while maintaining comparative performance to a non-altered implementation. The proposed algorithm solves N convex optimization problems in parallel based on exchange of information among controllers.
Overview:
Theory behind MPC MPC is based on iterative, finite-horizon optimization of a plant model. At time t the current plant state is sampled and a cost minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future: [t,t+T] . Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and find (via the solution of Euler–Lagrange equations) a cost-minimizing control strategy until time t+T . Only the first step of the control strategy is implemented, then the plant state is sampled again and the calculations are repeated starting from the new current state, yielding a new control and new predicted state path. The prediction horizon keeps being shifted forward and for this reason MPC is also called receding horizon control. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler–Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method.
Overview:
Principles of MPC Model predictive control is a multivariable control algorithm that uses: an internal dynamic model of the process a cost function J over the receding horizon an optimization algorithm minimizing the cost function J using the control input uAn example of a quadratic cost function for optimization is given by: J=∑i=1Nwxi(ri−xi)2+∑i=1MwuiΔui2 without violating constraints (low/high limits) with xi : i th controlled variable (e.g. measured temperature) ri : i th reference variable (e.g. required temperature) ui : i th manipulated variable (e.g. control valve) wxi : weighting coefficient reflecting the relative importance of xi wui : weighting coefficient penalizing relative big changes in ui etc.
Nonlinear MPC:
Nonlinear model predictive control, or NMPC, is a variant of model predictive control that is characterized by the use of nonlinear system models in the prediction. As in linear MPC, NMPC requires the iterative solution of optimal control problems on a finite prediction horizon. While these problems are convex in linear MPC, in nonlinear MPC they are not necessarily convex anymore. This poses challenges for both NMPC stability theory and numerical solution.The numerical solution of the NMPC optimal control problems is typically based on direct optimal control methods using Newton-type optimization schemes, in one of the variants: direct single shooting, direct multiple shooting methods, or direct collocation. NMPC algorithms typically exploit the fact that consecutive optimal control problems are similar to each other. This allows to initialize the Newton-type solution procedure efficiently by a suitably shifted guess from the previously computed optimal solution, saving considerable amounts of computation time. The similarity of subsequent problems is even further exploited by path following algorithms (or "real-time iterations") that never attempt to iterate any optimization problem to convergence, but instead only take a few iterations towards the solution of the most current NMPC problem, before proceeding to the next one, which is suitably initialized; see, e.g.,.. Another promising candidate for the nonlinear optimization problem is to use a randomized optimization method. Optimum solutions are found by generating random samples that satisfy the constraints in the solution space and finding the optimum one based on cost function. While NMPC applications have in the past been mostly used in the process and chemical industries with comparatively slow sampling rates, NMPC is being increasingly applied, with advancements in controller hardware and computational algorithms, e.g., preconditioning, to applications with high sampling rates, e.g., in the automotive industry, or even when the states are distributed in space (Distributed parameter systems). As an application in aerospace, recently, NMPC has been used to track optimal terrain-following/avoidance trajectories in real-time.
Explicit MPC:
Explicit MPC (eMPC) allows fast evaluation of the control law for some systems, in stark contrast to the online MPC. Explicit MPC is based on the parametric programming technique, where the solution to the MPC control problem formulated as optimization problem is pre-computed offline. This offline solution, i.e., the control law, is often in the form of a piecewise affine function (PWA), hence the eMPC controller stores the coefficients of the PWA for each a subset (control region) of the state space, where the PWA is constant, as well as coefficients of some parametric representations of all the regions. Every region turns out to geometrically be a convex polytope for linear MPC, commonly parameterized by coefficients for its faces, requiring quantization accuracy analysis. Obtaining the optimal control action is then reduced to first determining the region containing the current state and second a mere evaluation of PWA using the PWA coefficients stored for all regions. If the total number of the regions is small, the implementation of the eMPC does not require significant computational resources (compared to the online MPC) and is uniquely suited to control systems with fast dynamics. A serious drawback of eMPC is exponential growth of the total number of the control regions with respect to some key parameters of the controlled system, e.g., the number of states, thus dramatically increasing controller memory requirements and making the first step of PWA evaluation, i.e. searching for the current control region, computationally expensive.
Robust MPC:
Robust variants of model predictive control are able to account for set bounded disturbance while still ensuring state constraints are met. Some of the main approaches to robust MPC are given below.
Robust MPC:
Min-max MPC. In this formulation, the optimization is performed with respect to all possible evolutions of the disturbance. This is the optimal solution to linear robust control problems, however it carries a high computational cost. The basic idea behind the min/max MPC approach is to modify the on-line "min" optimization to a "min-max" problem, minimizing the worst case of the objective function, maximized over all possible plants from the uncertainty set.
Robust MPC:
Constraint Tightening MPC. Here the state constraints are enlarged by a given margin so that a trajectory can be guaranteed to be found under any evolution of disturbance.
Robust MPC:
Tube MPC. This uses an independent nominal model of the system, and uses a feedback controller to ensure the actual state converges to the nominal state. The amount of separation required from the state constraints is determined by the robust positively invariant (RPI) set, which is the set of all possible state deviations that may be introduced by disturbance with the feedback controller.
Robust MPC:
Multi-stage MPC. This uses a scenario-tree formulation by approximating the uncertainty space with a set of samples and the approach is non-conservative because it takes into account that the measurement information is available at every time stage in the prediction and the decisions at every stage can be different and can act as recourse to counteract the effects of uncertainties. The drawback of the approach however is that the size of the problem grows exponentially with the number of uncertainties and the prediction horizon.
Robust MPC:
Tube-enhanced multi-stage MPC. This approach synergizes multi-stage MPC and tube-based MPC. It provides high degrees of freedom to choose the desired trade-off between optimality and simplicity by the classification of uncertainties and the choice of control laws in the predictions.
Commercially available MPC software:
Commercial MPC packages are available and typically contain tools for model identification and analysis, controller design and tuning, as well as controller performance evaluation.
A survey of commercially available packages has been provided by S.J. Qin and T.A. Badgwell in Control Engineering Practice 11 (2003) 733–764.
MPC vs. LQR:
Model predictive control and linear-quadratic regulators are both expressions of optimal control, with different schemes of setting up optimisation costs.
While a model predictive controller often looks at fixed length, often graduatingly weighted sets of error functions, the linear-quadratic regulator looks at all linear system inputs and provides the transfer function that will reduce the total error across the frequency spectrum, trading off state error against input frequency.
Due to these fundamental differences, LQR has better global stability properties, but MPC often has more locally optimal[?] and complex performance.
MPC vs. LQR:
The main differences between MPC and LQR are that LQR optimizes across the entire time window (horizon) whereas MPC optimizes in a receding time window, and that with MPC a new solution is computed often whereas LQR uses the same single (optimal) solution for the whole time horizon. Therefore, MPC typically solves the optimization problem in a smaller time window than the whole horizon and hence may obtain a suboptimal solution. However, because MPC makes no assumptions about linearity, it can handle hard constraints as well as migration of a nonlinear system away from its linearized operating point, both of which are major drawbacks to LQR.
MPC vs. LQR:
This means that LQR can become weak when operating away from stable fixed points. MPC can chart a path between these fixed points, but convergence of a solution is not guaranteed, especially if thought as to the convexity and complexity of the problem space has been neglected. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Orthopedic oncologist**
Orthopedic oncologist:
An orthopedic (orthopaedic) oncologist is a physician and surgeon who specializes in the diagnoses and treatment of primary benign and malignant tumors of the bones.
Education:
An orthopedic oncologist in the United States must complete 4 years of medical school. Following graduation from medical school, the completion of an orthopedic surgical residency (medicine) is required. This residency program is typically 5 years in length and focuses on general orthopedic surgical techniques for common orthopedic injuries. As the residency progresses, the level of injury, disease and trauma treated by the resident becomes increasingly complex. By completion of the residency program, the orthopedic surgeon should be able to competently diagnose and treat a variety of injury and trauma to the bony structures of the body. At this point, most orthopedic physicians become attending doctors specializing in general Orthopedic surgery. However, aspiring orthopedic surgeons who wish to sub-specialize in orthopedic oncology must complete an additional phase to their training known as a fellowship (medicine). A fellowship in orthopedic oncology general lasts an additional one to two years following the completion of the residency. During this time, the physician will learn in depth about the pathology and treatment of various forms of primary benign and malignant neoplasms of the bones and bony structures of the human body (any cancer which has originated from the bone, as opposed to cancers which originated from other organs and have secondarily spread, or metastasized, to the bones, which is much more common; these specialists deal mostly with primary bone tumors). The physician will study directly under an experienced attending orthopedic oncologist with one-on-one mentoring. The fellowship is designed to be an intense immersion into a complex medical topic.
Specializations:
Due to the relative rarity of primary bone tumor in relation to other forms of cancer, there are fewer than two hundred orthopedic oncologists practicing around the United States, nearly all of whom work in major urban teaching hospitals. While general orthopedic surgeons may be qualified to perform surgical intervention on these tumors, it is advisable when confronted with primary malignancy of the bone to seek out the treatment of an orthopedic oncologist, due to their increased knowledge and experience dealing with these rare and very serious tumors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cantharidic acid**
Cantharidic acid:
Cantharidic acid is a selective inhibitor of PP2A (protein phosphatase 2) and PP1 (protein phosphatase 1).It is the hydrolysis product of cantharidin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glossary of rugby union terms**
Glossary of rugby union terms:
Rugby union is a team sport played between two teams of fifteen players.This is a general glossary of the terminology used in the sport of rugby union. Where words in a sentence are also defined elsewhere in this article, they appear in italics.
0-9:
22 The 22 m line, marking 22 metres (72 ft) from the tryline.89 An "89" or eight-nine move is a phase following a scrum, in which the number 8 picks up the ball and transfers it to number 9 (scrum-half).99 The "99" call was a policy of simultaneous retaliation by the 1974 British Lions tour to South Africa, (the 99 comes from the British emergency services telephone number which is 999). The tour was marred by on-pitch violence, which the match officials did not adequately control and the relative absence of cameras compared to the modern game made citing and punishment after the fact unlikely. The Lions' captain, Willie John McBride (Ireland), therefore instigated a policy of "one in, all in" - that is, when one Lions player retaliated, all other Lions were expected to join in the melee or hit the nearest Springbok. By doing so, the referee would be unable to identify any single instigator, and so would be left with the choice of sending off all or none of the team. In this respect, the "99" call was extremely successful - no Lions player was sent off during the tour.
A:
Accidental offside see OffsideAdvantage "Advantage" is the period of time after an infringement in which the non-offending side have the opportunity to gain sufficient territory or tactical opportunity to negate the need to stop the game due to the infringement. The referee will signal advantage with his arm out horizontally toward the non-infringing team. If no tactical or territorial advantage is gained, the referee will whistle and give the decision that had been delayed. If sufficient advantage is gained, the referee will call "advantage over", and play will continue. The Advantage Law allows the game to flow more freely, and not stop for every minor infringement, giving no incentive for a player to commit "intentional" fouls.An example of the application of advantage would be if Team A knocked the ball on (technical offence, conceding a scrum) but a Team B player picked the ball up and made a run forwards before being tackled.Advantage line Also called the gain line. It is an imaginary line drawn across the centre of the pitch when there is a breakdown in open play, such as a ruck, maul or scrum. Advancing across the gain line represents a gain in territory.Alickadoo A non-player associated with a rugby game or club, especially a committee member or administrative official. May perform various off-field roles, particularly on a match day.Ankle tap An ankle-tap or tap-tackle is a form of tackle. It is used when the player carrying the ball is running at speed and a defending player is approaching from behind. Even if the defender is not able to get close enough to the ball-carrier to wrap his arms around him in a conventional tackle, he may still be able to dive at the other player's feet and, with outstretched arm, deliver a tap or hook to the player's foot (or feet) causing the player to stumble.
B:
Ball back If the ball enters touch, then play is restarted by a line-out at the point where the ball left the field of play. The exception to this is if the ball is kicked into touch on the full. In this case, a line-out is taken at the point from where the ball was kicked, and not from where it entered touch.
B:
Ball back is waived in certain circumstances: If the kicking player is inside his own 22m line when he receives and then kicks the ball. If the player receives the ball outside the 22 then retreats back into the 22 and kicks into touch on the full it is a lineout at the nearest point on the touchline from where the ball was kicked.
B:
If a side elects to kick a penalty into touch.Bill Australian name given to the (William Webb Ellis) Webb Ellis Cup.Black dot A mark in the centre of the crossbar connecting the goal posts, usually black in colour, used as an aid to kickers with their aiming. A player scoring a try in the centre of the goal line, that is beneath the posts, can be said to have scored "under the black dot".Blindside The narrow side of the pitch in relation to a scrum or a breakdown in play; it is the opposite of openside. The blindside flanker is expected to cover the opposing team blindside at scrum and breakdown.The Blindside of the scrum is the side that the opposing scrum half is feeding the ball from. The Open side is the other side, 'Open' in this situation, means unimpeded.Blitz defence The blitz defence is a defensive technique similar to the defence used in rugby league. It relies on the whole defensive line moving forward towards their marked man as one, as soon as the ball leaves the base of a ruck or maul. The charge is usually led by the inside centre.The idea of this technique is to prevent the attacking team gaining any ground by tackling them behind the gain line and forcing interceptions and charged down kicks. However, the defending team can be vulnerable to chip kicks and any player breaking the defensive line will have much space to play because the defence are running the other way and must stop, turn and chase.Blood bin It is also called blood replacement or blood sub(stitution). A player who has a visible bleeding injury may be replaced for up to fifteen minutes (running time, not game time), during which he or she may receive first-aid treatment to stop the flow of blood and dress the wound. The player may then return to the pitch to continue playing.Bonus points Bonus points are a method of deciding table points from a rugby union match. It was implemented in order to encourage attacking play throughout a match, to discourage repetitive goal-kicking, and to reward teams for "coming close" in losing efforts.
B:
Under the standard system, 1 bonus point is awarded for scoring 4 (or more) tries and 1 bonus point for losing by 7 points (or fewer).
B:
The French professional league replaces the four-try bonus point with a point for a win in which the winning team scores at least 3 more tries than its opponent. In addition, the "losing" bonus point requires a margin of defeat of 5 or fewer points.Australia's National Rugby Championship uses the same system as France, except that a losing bonus point is awarded if the margin of defeat is 8 points or fewer.Box-kick This is a kick taken from behind a scrum, normally by the scrum-half, in which he turns away from the scrum facing the touchline, and kicks the ball back over the scrum into the clear "box" of space behind the opposition to allow his own team to chase through and regain the ball in undefended territory.Breakdown The breakdown is a colloquial term for the short period of open play immediately after a tackle and before and during the ensuing ruck. During this time teams compete for possession of the ball, initially with their hands and then using feet in the ruck. Most referees will call "ruck" or "hands away" as soon as the ruck is formed. Most infringements take place at the breakdown, owing to the greater variety of possible offences at a breakdown, for example handling in the ruck, killing the ball, offside at the ruck, and so on.
C:
Caution A player who deliberately or repeatedly infringes the laws is cautioned, and shown a yellow card. A cautioned player is suspended from playing for ten minutes.Cavalry Charge Typically during a penalty kick or free kick, the attacking players form a line behind their kicker. When signaled, they charge forward. The kicker then tap-kicks the ball and passes to one of the players behind. This move is explicitly forbidden under 10.4(p) and the penalty is a penalty kick.Centre They are the players wearing shirts numbers 12 & 13. They are divided into inside and outside centre. The inside centre is also known as the second five-eighths in New Zealand.Charge-down When a player makes a defensive clearance kick, but it hits an opponent who has run towards him in an attempt to block it, it is known as a charge-down. These can be good try-scoring opportunities.Choke tackle Tackle in which the tackler tries to keep the ball carrier on his feet and push him backwards before taking him to the ground. This is harder to execute but gives the tackler's side a gain in territory.Cibi The cibi is a Fijian war dance performed by the Fiji national team before each of their international matches.Colts Men's youth rugby players, generally with an upper age limit of 21 years, may be referred to as colts. In some instances the age limit may be as high as 23, but this is uncommon.A competition for players below the age of 19, for example, may be known as Under-19 Colts. The term is usually not applied to secondary school teams; most colts players have completed their high school years.Conversion If a team scores a try, they have an opportunity to "convert" it for two further points by kicking the ball through the goal - that is, between the posts and above the crossbar. The kick is taken at any point on the field of play in line with the point that the ball was grounded for the try, parallel to the touch-lines. So it is advantageous to score a try nearer to the posts as it is easier to convert it.The kick can be either a drop kick or a place kick in the 15-man game. However, in sevens, all conversions must be drop kicks.Counter rucking If a team (usually the team that took the ball into contact) has secured the ball at a ruck, and the other team manage to force them off the ball and secure possession themselves, the defending team are said to have "counter-rucked"Crash ball It is an attacking tactic where a player receives a pass at pace and runs directly at the opposition's defensive line. The crash ball runner attempts to commit two or more opposing players to the tackle, then attempts to make the ball available to teammates by off-loading in the tackle or recycling the ball quickly from the ruck.By committing players to the tackle, the crash ball runner creates holes in the opposition's defense, thereby creating attacking opportunities for teammates.Crash tackle Another name for the crash ball as mentioned above.Cross-Field Kick A kick which goes from one side of the field to the other and is kicked very high, usually resulting in an aerial battle between an attacker and defender to catch it. This is usually used near the defending team's try-line, often with the catch happening in the in-goal area itself. Most often used when the kicker knows the referee is playing advantage and his team will get a penalty if the kick fails - this is because the kick itself is very risky and likely to result in an interception.
D:
Drift defence The drift defence is a defensive technique that forces the attacking side into an ever-shrinking pocket near to the touchline. It operates by the defensive side moving forward and diagonally following the path of the attacking side's ball movements. If used successfully the ball will usually end up in the attacking winger's hands near the line of touch. This player would then find themselves surrounded on one side by a defending outside centre, with the opposing winger opposite and the touchline on his other side. This will prevent a cut-back and allows the Touchline to act as a 16th player. Its disadvantage is that if the attacking team are strong enough to break through the pocket tackle the defending team will have no players spare to cover a breakout.Drop goal A drop goal is scored when a player kicks the ball from hand through the opposition's goal, but the ball must touch the ground between being dropped and kicked. It is worth three points.The team awarded a free kick cannot score a dropped goal until the ball next becomes dead, or until an opponent has played or touched it, or has tackled the ball carrier. This restriction applies also to a scrum taken instead of a free kick.Drop kick A drop kick is when a player kicks the ball from hand and the ball touches the ground between being dropped and kicked. If a drop kick goes through a goal then it results in a drop goal.Dummy pass An offensive ruse, where the ball carrier moves as if to pass the ball to a teammate, but then continues to run with the ball himself; the objective is to trick defenders into marking the would-be pass receiver, creating a gap for the ball carrier to run into. If it is successful, the player is said to have "sold the dummy".Dummy runner Another offensive tactic; a player on the attacking team runs towards the opposition as if running onto a pass, only for the ball to be passed to another player, carried on by the ball carrier or kicked forwards. As with a dummy pass, this tactic draws defenders away from the ball and creates space for the attacking teamDump tackle It is a tackling technique. The tackler wraps his arms around the ball carrier's thighs and lifts him a short distance in the air before forcibly driving him to the ground. The tackler must go to ground with the ball carrier for the tackle to be legal. This technique is useful to completely stop the opponent in his tracks. A dump tackle that drops the ball carrier on his head or neck is known as a spear tackle, and will almost invariably concede a penalty and possibly result in a caution for the tackler. In rugby union, World Rugby has ruled that a dangerous tackle of this type, sometimes also called a tip tackle, should be punished with a straight red card.
E:
Eightman, Eighth-man Alternative name for the Number 8
F:
Five metre scrum, Scrum-five When a scrum offence is committed within 5 m of either try line, or in the in-goal area, the referee will award a scrum on the five metre line; this is to prevent all but the most brutal packs from driving the ball over the try line within the scrum.Fend or "hand off" Fending is the action by the ball carrier of repelling a tackler using his arm. For the action to be legal, the ball carrier's arm must be straight before contact is made; a shove or "straight-arm smash", where the arm is extended immediately before contact or on contact, is illegal and classed as dangerous play.First XV or First fifteen The preferred starting line-up of a team – more colloquially, the senior team of any club.Flanker Also known as breakaways or wing forwards. They are the players wearing shirts numbers 6 & 7. They are the players with the fewest set responsibilities. The player should have all round attributes: speed, strength, fitness, tackling and handling skills. Flankers are always involved in the game, as they are the real ball winners at the breakdown, especially the number 7. The two flankers do not usually bind to the scrum in a fixed position. Instead, the openside flanker will attach to the scrum on whichever side is further from the nearer touchline, while the blindside flanker attaches himself to the scrum on the side closer to the touchline.Fly half or five-eighth Referred to by a number of different names, including first five-eighth in New Zealand, this player wears shirt number 10. They are the back-line player most likely to be passed the ball from the scrum-half or half-back, and therefore make many key tactical decisions during a game. Often this player is also the goal kicker as the position requires excellent kicking skills.Forward pass It is called a throw-forward in the laws of the game.A forward pass occurs when the ball fails to travel backwards in a pass. If the ball is not thrown or passed forward but it bounces forward after hitting a player or the ground, it is not a throw-forward.If the referee deems it accidental, this results in a scrum to the opposing team; however deliberate forward passes result in the award of a penalty.Foul play Foul play is defined as the deliberate infringement of the laws of the game.Fourth official A fourth official is one who controls replacements and substitutes. He may also substitute for referee or touch judge in case of injury to either of them.Free-kick Also called short arm penalty. This is a lesser form of the penalty, usually awarded to a team for a technical offence committed by the opposing side such as numbers at the line-out or time wasting at a scrum. A free kick is also awarded for calling a mark.A team cannot kick for goal and the normal 22m rule applies for kicking for position from a free kick. A Free Kick is signalled by the referee with a bent arm raised in the air.Full-back They are the player wearing jersey number 15. They act as the last line of defence against running attacks by the opposing three-quarter backs. The full-back is expected to field high kicks from the opposition, and reply with a superior kick or a counter-attack. The full back is sometimes the specialist goal-kicker in a team, taking penalty and conversion kicks.Full house Scoring a try, conversion, penalty and drop goal in the same match.
G:
Gain line The gain line is an imaginary line drawn across the centre of the pitch when there is a breakdown in open play, such as a ruck, maul or scrum. Advancing across the gain line represents a gain in territory.Garryowen A Garryowen, or up-and-under kick, is a high short punt onto or behind the defending team.Goal A goal is scored when a player kicks the ball through the plane bounded by the two uprights and above the crossbar. A drop goal or penalty goal count for 3 points and conversions count for two.Goal from mark Goal from mark is an antiquated method of scoring. It occurred when a player "marked" and scored a goal from there. In the modern game, a goal cannot be scored from a free kick, but in the past the reward for scoring a "goal from mark" (which is a difficult kick to play) was three or four points. Occasionally referred to as a field goal.Goal line, Tryline Two solid, straight white lines (one at each end) stretching across the entire width of the pitch passing directly through the goal posts which defines the boundary between the "field of play" and the "in-goal". As the goal line is defined as part of the "in-goal", attacking players can score tries by placing the ball with downward pressure onto the goal line itself. The base of the goal posts and post protectors are also defined to be part of the goal line.The goal line is often referred to as the "try line" though that term does not appear in the Laws of the Game.
G:
Goose Step A goose step, made famous by Australian David Campese but now performed by many players, is a running technique where the player slows down and takes a small 'hop' into the air before sprinting off - sometimes in a different direction - upon landing. Its purpose is to confuse the defender, who is often unable to predict the sudden change of pace and direction.Group of death Is an informal sobriquet used to describe a situation that often occurs during the group stage of a tournament, where either (1) any team in the group could qualify and any team could be eliminated, or (2) more teams have a legitimate chance to advance to the next stage than allowed by the tournament structure.Typically, a group of death will see an unusual match-up of heavyweight sides, due to a quirk in the seeding system.Grubber kick It is a type of kick that makes the ball roll and tumble across the ground, producing irregular bounces making it hard for the defending team to pick up the ball without causing a knock-on. It gives the ball both high and low bounce and on occasions, the ball can sit up in a perfect catching position.
H:
Half-back Can refer to either the scrum-half or fly-half, but in New Zealand is exclusively used to describe the scrum-half.Haka The haka is a traditional Māori dance performed by New Zealand national teams, most famously the All Blacks, prior to international matches. It serves as a challenge to the opposing team.Hand-off Handing off (also called fend) is the action by the ball carrier of repelling a tackler using his arm. For the action to be legal, the ball carrier's arm must be straight before contact is made; a shove or "straight-arm smash", where the arm is extended immediately before contact or on contact, is illegal and classed as dangerous play.High tackle A high tackle (or head-high tackle) is a form of tackle where the tackler grasps the ball carrier above the line of the shoulders (most commonly around the neck or at the line of the chin and jaw). Executed violently or at speed, a high tackle is potentially dangerous, so are often not just sanctioned with a penalty, but also a yellow or red card.Hooker Hookers traditionally wear the number 2 shirt. The hooker is the player who is in the centre position of the front row of the scrum and who uses his/her feet to 'hook' the ball back. Due to the pressure put on the body by the scrum and the requirement to use both arms to bind to other players (and hence having no free arm to use to support or deflect bodyweight) it is considered to be one of the most dangerous positions to play.Hookers normally throw the ball in at line-outs, partly because they are normally the shortest of the forwards, but more often because they are the most skillful of the forwards.Hospital pass Any pass that is made which has the inevitable, unavoidable consequence of the receiver being tackled. This is because the receiver has already been marked and the opposing player is bearing down on the receiver so rapidly that, as soon as the ball is caught, the opposing player smashes into the receiver. Generally made in times of panic or when there is no one else available, it is called the hospital pass because of the inevitability of a hard tackle.
I:
Interception The gaining of possession by running forward from the defensive line and taking a pass meant for a member of the opposition. The result is similar to the result of a line break, and has a good chance of leading to a try.
K:
Kick-off A coin is tossed and the winning captain either chooses which direction his team shall play, or elects to take the kick that starts the game. Both halves of the match are started with a drop kick from the centre-point of the halfway line. The kick must cross the opposition's 10-metre line, unless played by a member of the receiving team. The opposition are not allowed to encroach beyond the 10-metre line until the ball is kicked.If the ball does not travel 10 metres, goes straight into touch, or goes over the dead ball line at the end of the pitch, the opposing team may accept the kick, have the ball kicked off again, or have a scrum at the centre.After a score, the game is restarted from the same place under the same restrictions, with the conceding team drop-kicking the ball to the scoring team. However, in sevens, the scoring team kicks off.Kick tennis Style of play characterised by both teams repeatedly kicking from hand to the opposition, rather than running at the opposition and risking a turnover. So-called because the ball moves back and forth like in a tennis match. It is considered boring to watch and also referred to as aerial ping-pong.Knock-on Also called knock-forward. A knock-on occurs when the ball accidentally moves forward after coming into contact with the upper body of a player, and then touches either the ground or another player. It results in a scrum with the put-in to the opposition. If the ball is intentionally knocked forward it is deemed a deliberate knock-on; the opposition is rewarded with a penalty and the offending player is given a yellow card and sent to the sin bin.
L:
Latcher/Latching on A latcher is a player who binds himself to the ballcarrier in open play in order to add his power and weight to an attempt to break the line and gain yards. If the defence is able to stop the ballcarrier and hold him up, a maul usually forms. However latching on does not automatically create a maul.Late tackle A late tackle is a tackle executed on a player who has already passed or kicked away the ball. As it is illegal to tackle a player who does not have the ball, late tackles are penalty offences (referees allow a short margin of error where the tackler was already committed to the tackle) and if severe or reckless may result in yellow or red Cards.If a late tackle occurs after a kick and a penalty is awarded, the non-offending team has the option of taking the penalty where the ball landed.
L:
Line break Action by which the player with the ball gets through the opponent's defensive line without being tackled. If there is insufficient cover, or the player has support, line breaks can often lead to tries.Line-out A minimum of two players line up parallel with each other one metre apart between the five-metre and 15-metre lines. Usually, the hooker of the team in possession throws the ball in while his opposite number [may] stand in between the touchline and the five-metre line.All players not involved in the lineout, except the receiver (usually the scrum-half) must retire 10 metres.The ball must be thrown in straight down the middle of the lineout and the hooker must not cross into the field of play while throwing in. If the throw is not straight then the throw is given to the opposition or a scrum awarded.Jumpers can be lifted by their teammates below the waist, but the opposition's jumpers must not be obstructed, barged or pulled down.Line-out code It is a coded piece of information, used to communicate intentions about a line-out within one team in a match without giving information away to the other team. The advantage in line-out comes from knowing in advance how the throw will be made.Line speed The speed with which a blitz defence closes down the opposing team. A high line speed will make it difficult for the opposition to cross the gain line.Lock Locks or second-row are the players wearing shirts number 4 & 5. Locks are very tall, athletic and have an excellent standing jump along with good strength. This makes them the primary targets at line-outs. They also make good ball carriers, bashing holes in the defence around the ruck and maul. They also have to push in the rucks and mauls.
L:
Loose head The loose head prop is the player who takes the left hand position on the front row of the scrum. A loose head prop traditionally wears the number 1 shirt.As the loose head has considerable potential freedom of movement compared to other front row players, the loose head can attempt to play various illegal techniques to divert the push of the opposing pack and is often able to illegally interfere with the ball in the scrum using his free arm.
M:
Mark A mark is the place where the game will restart after a stoppage, such as where a scrum-offence or penalty offence occurred, or on the touchline where the ball went out of play (or where the ball was kicked in the case of ball-back). Marks are generally defined by the referee, or the touch judge when the ball leaves play by the touchline.Marks can also be defined by a defending player who executes a clean catch (catches the ball before it bounces or touches another player) of a ball kicked by an attacking player if the defender is standing within his/her own 22-metre zone or in-goal. To "call a mark", the player shouts "Mark!" as he/she catches the ball. The referee then awards that player a free kick which must be taken by that specific player. (If, for whatever reason, that player cannot take the kick, a scrum is awarded instead.) If the player is simply a poor kicker he/she is likely to take a 'Tap Kick' and immediately pass the ball to the fly-half or full back who will generally deliver a clearance kick.Marks can be called when the ball is cleanly caught following a kick by the opposition for any type of kick except a kick off or restart after a score. It is legal, though very unusual, to call a mark from a clean catch of a penalty kick.Maul When a ball carrier is held up (without being tackled) by both an opposing player and a player from his own team, a maul is then considered formed.The offside line becomes the last foot of the last man on each side of the maul. Players can join in only from behind that teammate. Anyone who comes in from the sides will be penalised by the referee. Hands are allowed to be used in the maul. If either team deliberately collapses the maul then that side will be penalised by the referee. (Note that from August 1, 2008, the IRB is conducting a global trial of a modification of this Law which will allow players to deliberately collapse a maul providing the collapse is achieved by pulling from above the waist.)If the ball does not come out in a timely fashion, the referee will award a scrum to the team that did not take the ball into the maul.Mauls can only exist in the field of play. Play that looks like a maul can exist within the in-goal but restrictions on entry to the maul and the need to bind on to a team member do not apply.Medical joker A player signed by a professional club as an injury replacement. The term is directly borrowed from the French joker médical and is most commonly associated with France's top league; that country has long allowed such signings.Mismatch Situation where a back is one-on-one with a forward. This favours the attacking side, as often forwards are too slow to stop backs, and backs are too small to stop forwards.Mulligrubber The mulligrubber kick is a style of kicking. A mulligrubber is directed towards the ground and forced to bounce. Often used in situations where either the ball needs to be placed in a specific position (i.e. on the try line) or to intentionally stop the opponent from being able to catch the ball on the full.
N:
North (to go north, to head north, etc.)In the days prior to professionalism in rugby union, players would often convert to rugby league — which was a paid sport — thereby becoming ineligible to play rugby union again. In Wales and (to a lesser extent) England, the term "to go north" referred to this change of code, an allusion to the popularity of rugby league in the north of England.Not straight Referee's call when a line-out throw or the feeding of the ball into the scrum is unfairly towards the team in possession, preventing any contest for the ball. It is punished by resetting the set piece and giving control of the ball to the opposition.Number 8 They are the players wearing shirts number 8. It is the only position that is known only by the shirt number. Number eights must have a good tactical awareness in order to coordinate scrums and ruck moves with the scrum-half. If the ball is at his feet at the back of a scrum, ruck or maul, it is normally the number eight's decision whether to pass the ball out or drive the breakdown on in order to make ground.
O:
Obstruction Offence whereby a player deliberately impedes an opponent who does not have the ball.Off-load pass (offload) A short pass made by a player being tackled before he reaches the ground, usually by turning to face a teammate and tossing the ball into the air for a teammate to catch.Offside A player is offside when he/she is forward of the relevant offside line i.e. between the relevant offside line and the opposing team's dead ball line.In a match, most players will be offside several times but they only become liable for penalty if they do not act to attempt to become onside (which generally means retreat downfield) or attempt to interfere with play.In open play, only the ball carrier's team (or the team that last carried or deliberately touched the ball) is bound by offside - the offside line for them is the ball. (Note every player who passes the ball backwards is offside and must attempt to retire.)An accidental offside may occur (and be penalised) if the ball carrier accidentally collides with one of his own team's players while that player is attempting to retire behind the ball (or is otherwise in an offside position).Onside A player is onside whenever he or she is behind the relevant offside line for the particular phase of play. Players who are onside take an active part in playing the game.Previously offside players may be "put onside" by the actions of other players (for example, in a kick ahead in open play, players in the kicker's team in front of the kick are offside but can be put onside by the kicker or any other team member who was onside at the time of the kick running up the pitch past them). So that players can be confident they are now onside and can take an active part in the game, the referee may shout "Onside" or "All Onside".On the full If the ball is kicked into touch without first bouncing inside the field of play it is termed as ball is kicked into touch on the full. The line-out is then taken from where the ball was kicked, except in situations where it was kicked from inside the 22.Openside The broad side of the pitch in relation to a scrum or a breakdown in play. The openside flanker is expected to cover the opposing team openside at scrum and breakdown. It is the opposite of blindside.Overlap Situation where there are more attacking players (typically backs) on one side of the field than there are defending players. An overlap can be used to manufacture a try by forcing the defenders into tackles and offloading to teammates until the defenders have run out.
P:
Pack The pack is another name used for the forwards players, particularly when they are bound for a scrum.Passing A pass is to transfer a ball to a teammate by throwing it. Passes in rugby must not travel forwards. There are different varieties of pass, including the flat, direct spin pass; the short, close-quarters pop pass; and the floated pass - a long pass which an advancing player can run onto at pace.Penalty Penalties are awarded for serious infringements like dangerous play, offside and handling the ball on the ground in a ruck. Penalties are signalled by the referee with a straight arm raised in the air. Players can also receive red and yellow cards, as in Association football.The offending team must retire 10 metres (or to their goal line if closer) for both penalties and free kicks. A team can either kick for goal, tap and run the ball, take a scrum or kick directly into touch with the resulting line-out awarded to them.Penalty kick If a side commits a penalty infringement the opposition can take the option of a place kick at goal from where the infringement occurred (or, if the offence occurred when a player was in the process of kicking the ball, the non-offending team can opt to take the kick from where the ball landed which may be more advantageous). This is called a penalty kick. If successful, it is worth three points.Penalty try A penalty try awarded if the referee believes a team illegally prevented a try from probably being scored. As of 2018, penalty tries score an immediate seven points, with no conversion having to be taken. Generally a penalty try is awarded when the try-preventing offence cannot be easily attributed to a single individual, such as when a team repeatedly deliberately collapses a scrum near its own tryline. When the prevention of the try is due to an individual, a yellow card is a more common punishment.Phase A phase is the time a ball is in play between breakdowns. For example, first phase would be winning the ball at the lineout and passing to a centre who is tackled. Second phase would be winning the ball back from the ensuing breakdown and attacking again.Pitch The official name of a rugby playing field. Dimensions are 100 m long by 70 m wide.Place kick The place kick is a kicking style commonly used when kicking for goal. It typically involves placing the ball on the ground. To keep the ball in position, a mound of sand or plastic tee is sometimes used.Pop pass A very short pass.Professional foul A professional foul is a deliberate act of foul play, usually to prevent an opponent scoring. It is punishable by a yellow card.Prop They are the players wearing shirts number 1 & 3. The role of both the props is to support the hooker in the scrum and to provide support for the jumpers in the line-out. The props provide the main power in the push forward in the scrum. For this reason they need to be exceptionally big and strong.
R:
Red card In International matches, red cards are shown by the referee to players who have been ordered off the pitch, which results in the player being removed from the game without being replaced. This usually occurs when a player is guilty of serious foul play, or violent conduct or for committing two offences resulting in cautions (yellow cards).Red cards are also commonly used in non-international matches in precisely the same manner as in International matches but there is no regulation requiring their use. (i.e. in a domestic match, a referee may dismiss a player without actually displaying a red card.)Red zone This is a term most commonly used by coaches to describe the area of the pitch between the try line and around 22 metres out, in which it is most likely a try may be scored or conceded.Restart The kick taken from the centre line after the opposition have scored points.Round the corner kicking A style of place-kicking in which the kicker, instead of facing directly toward the goal-posts, approaches the ball from an angle and swings his kicking leg in an arc. It was first credited to Wilf Wooler in the 1930s.Ruck A ruck is formed when the ball is on the ground and two opposing players meet over the ball. The offside line becomes the last foot of the last man on each side of the ruck and players compete for the ball by attempting to drive one another from the area and to 'ruck' the ball backwards with their feet.Rucks commonly form soon after tackles, but can form anywhere in the field of play where the ball is on the ground.Handling the ball while it is in the vicinity of a ruck is a penalty offence. (Though modern practice allows a player on the ground to support the ball with his/her hands and for the player who is acting as scrum half to 'dig' for the ball once possession has been secured.)If the ball remains contested and does not come out of a ruck after about five seconds, the referee will award a scrum to the team he considers to have been moving forward in the ruck.
S:
Scrum The eight forwards from each team bind together and push against each other. The scrum-half from the team that has been awarded possession feeds the ball into the centre of the scrum from the side most advantageous for his hooker (which is typically the side of loose head prop).The ball must be fed straight down the middle of the tunnel and the hookers must not contest for the ball until it is put in. If they do, a free-kick is awarded for "foot up".The scrum is taken again if the ball comes straight out of the tunnel or if it collapses. If the scrum wheels (rotates) due to pushing more than 90 degrees the scrum is reformed and awarded to the other side. Pulling in an attempt to unbalance the other side or to assist in rotating the scrum is a Penalty Offence.Scrum half Also known as a half-back (especially in New Zealand), they are the players traditionally wearing shirt number 9. Scrum halves form the all-important link between the forwards and the backs. They are relatively small but with a high degree of vision, the ability to react to situations very quickly, and good handling skills.They are often the first tackler in defence and are behind every scrum, maul or ruck to get the ball out and maintain movement. They put the ball into the scrum and collect it afterwards. Scrum halves generally also act as "receiver" in the line-out to catch the ball knocked down by the forwards. (The receiver is a member of the line-out and so stands within 10 metres of it and may join the line once the ball is thrown.)Selector A person who is delegated with the task of choosing players for a team. Typically the term is used in the context of team selection for a national, county, state or provincial representative side, where the selector, or "selection panel", act under the authority of the relevant national or provincial administrative body.Set piece Collective term for the scrum, line-out and sometimes the restart.Shoeing At the breakdown a ruck commonly forms over the players involved in the tackle.Where players who are on the ground on the opposition side of the ruck do not move away quickly enough, players on their feet may be tempted to "help" them move by pushing them away with their boots.This potentially dangerous act is illegal and if done deliberately (or recklessly) may result in penalties and yellow or red cards.Short arm penalty See free kickSin bin The notional area where a player must remain for a minimum of ten minutes after being shown a yellow card. In high level games, the sin bin is monitored by the fourth official.Sipi Tau Sipi Tau is a Tongan war dance performed by the Tonga national team before each of their international matches.Siva Tau Siva Tau is a Samoan war dance performed by the Samoa national team before each of their international matches.Spear tackle A spear tackle is a dangerous tackle in which a player is picked up by the tackler and turned so that they are upside down. The tackler then drops or drives the player into the ground often head, neck or shoulder first.Spear tackles are particularly dangerous and have caused serious injury, including spinal damage, dislocations and broken bones in the shoulder or neck. On rare occasion, even death can occur.Spear tackles are taken very seriously by the various Union disciplinary committees and can result in lengthy playing bans.Stellenbosch Laws The Stellenbosch Laws were a set of experimental laws of rugby union considered by World Rugby, then known as the International Rugby Board (IRB), from 2006 to 2008. The trials ended in late 2008, with the IRB choosing to adopt roughly half of the proposed changes.
T:
Tackle A tackle takes place when one or more opposition players [tackler(s)] grasp onto the ball carrier and succeed in bringing/pulling him/her to ground and holding them there.Once briefly held, the tackler(s) must release the tackled player who must then him/herself immediately release or attempt to pass the ball so that play can continue.Tap kick A tap kick is a type of kick used by players at penalties or free kicks to meet the regulation that requires the ball must be kicked a visible distance before a player may pass or run with it.In a tap kick, the player momentarily releases the ball from his hands and taps it with his foot or lower leg and then quickly catches it again. The player will then generally try to run forward with the ball.Tap-tackle Despite its name, a tap tackle is a not actually a tackle as the ball carrier is brought to ground by a form of trip, is not actually held on the ground and may attempt to get up and continue to run. A tap tackle is used when a defending player is unable to get close enough to the ball carrier but is able to dive at the other player's feet and, with outstretched arm, deliver a tap or hook to the player's foot (or feet) causing the player to stumble. At speed, this will often be sufficient to bring the ball-carrier down, allowing a teammate of the tackling player to retrieve the ball or provide sufficient delay for the defending team to organise a defence.Ten Metre Law The Ten Metre Law is a form of offside which is designed to prevent injury to a defending player who attempts to catch a ball that has been kicked ahead by the attacking side.In the normal Law of Offside in open play, it is possible for an offside player to be put onside by actions of the opposing team. This ability to be put onside by a member of the opposing team does not apply if the offside player was within 10 metres along the field of a defending player waiting to catch the ball and the offside player remains offside until either he/she retreats onside or is put onside by a member of their own team.Test match International rugby union matches with full (Test) status are called Test matches.Tight Head The tight head prop is the player who takes the right-hand position on the front row of the scrum. A tight head prop traditionally wears the number 3 shirt. He is named the tight head since in the scrum he will have an opposition player bind to both his left- and right-hand sides, meaning his head is unexposed to the side of the scrum, as opposed to the loose head, whose left-hand side is exposed.TMO Television match official (TMO), commonly called the video referee.Touch Touch is the area outside and including the two touch-lines which define the sides of the playing area. As the touch-lines are not part of the playing area they are part of touch. The ball, and players carrying the ball, are not considered to be in touch until they touch the floor.Touch judge The touch judge is an official who monitors the touch-line and raises a flag if the ball (or player carrying it) goes into touch. Touch judges also stand behind the posts to confirm that a goal has been scored following a penalty kick or conversion of a try.Truck and trailer A colloquial term for an accidental obstruction. "Truck and trailer" occurs when a player carrying the ball leaves a maul, along with one or more of his teammates. Once the ball carrier leaves the maul, the maul is over, and if the ball carrier's teammates are in front of the ball carrier and prevent defending players from making a tackle, the defending team will be awarded a scrum. If the incident of truck and trailer is judged to be deliberate or the latest in a series of similar infringements, a penalty may be awarded instead.
T:
Try This is the primary method of scoring. A try is worth five points. It is scored when a player places the ball on the ground with downward pressure in the in-goal area between (and including) the goal-line and up to but not including dead ball line of the opposition's half. (As the goal posts and post protectors are also part of the goal-line, touching the ball down against the base of these is also a try.)There is no such thing as an "own try". If you touch the ball down in your own in-goal area, it results in a goal-line dropout or a five-metre scrum.Tunnel When a scrum is formed, the gap between the legs of the three players from each team who form the 'front row' is called the 'tunnel'.Turnover When a team concedes possession of the ball, particularly at the breakdown, they are said to have turned the ball over to the other team. This can happen due to defending players stealing the ball from an isolated attacker, counter rucking, a knock on, an intercepted pass or the ball not emerging from a maul (wherein the referee awards the scrum feed to the opposing team).Twenty two metre drop-out A drop kick is taken from behind the 22m line if a team touches down in its own in-goal area but did not carry the ball over the try line, or if the ball is kicked over the dead ball line from any other play other than the kick-off.The ball only needs to cross the line, but if it goes directly into touch a scrum is awarded to the receiving team at the centre-point of the 22m line.
U:
Uncontested scrum Scrum in which, due to absence of key specialist forwards through injuries or yellow cards, the safety of the scrum cannot be guaranteed. In an uncontested scrum, the players form a scrum but the two teams do not push against each other or compete for possession.Up and under An up and under, or a Garryowen kick, is a high, short punt onto or behind the defending team.Use it or lose it If a maul stops moving forward the referee will often shout "use it or lose it" to the team in possession of the ball. This means they must pass the ball within a five-second time period. If they do not, the referee will call a scrum and the team not in possession at the beginning of the maul will be given the feed.
V:
Video Referee Also called TMO (Television Match Official). This is the official who monitors the match in television recorded matches. He is the person who could be called upon by the referee if he is unaware of the outcome of a rugby situation. A good example is a try that is obscured from view i.e. under numerous players.
W:
Wheeling A scrum that has rotated through 90 degrees or more is said to have "wheeled". The referee will order the scrum to be reset, with the ball being turned over if the attacking team is deemed to have been deliberately or repeatedly wheeling the scrum.Wing The players wearing shirts numbers 11 and 14 are the left and right wingers. Wingers must be fast runners and agile in order to evade tackles and have excellent ball-handling skills in order to pass and receive the ball at pace.
Y:
Yellow Card In International matches, a yellow card is shown to a player who has been cautioned to indicate "temporary suspension" for repeated or deliberate infringements of the rules. The offending player is sent to the “sin bin” for at least 10 minutes while his team must play a man short. A player who is temporarily suspended cannot return to the pitch until the first break in play after his/her 10-minute suspension is completed.In domestic matches, yellow cards are commonly used in exactly the same manner as in International matches but this is not required by regulation so a referee may order the temporary suspension of a player without showing a yellow card. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Collaboration Data Objects for Windows NT Server**
Collaboration Data Objects for Windows NT Server:
Collaboration Data Objects for Windows NT Server (CDONTS) is a component included with Microsoft's Windows NT and Windows 2000 server products. It facilitates creating and sending e-mail messages from within web application scripts, typically ASP pages. It is implemented as a COM component, and requires a locally installed SMTP server to handle mail delivery.
CDONTS was deprecated in Windows 2000, and removed completely in Windows Server 2003 in favour of a significantly improved interface, Collaboration Data Objects (CDOSYS). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dual cone and polar cone**
Dual cone and polar cone:
Dual cone and polar cone are closely related concepts in convex analysis, a branch of mathematics.
Dual cone:
In a vector space The dual cone C* of a subset C in a linear space X over the reals, e.g. Euclidean space Rn, with dual space X* is the set C∗={y∈X∗:⟨y,x⟩≥0∀x∈C}, where ⟨y,x⟩ is the duality pairing between X and X*, i.e. ⟨y,x⟩=y(x) C* is always a convex cone, even if C is neither convex nor a cone.
Dual cone:
In a topological vector space If X is a topological vector space over the real or complex numbers, then the dual cone of a subset C ⊆ X is the following set of continuous linear functionals on X: := Re for all x∈C} ,which is the polar of the set -C. No matter what C is, C′ will be a convex cone. If C ⊆ {0} then C′=X′ In a Hilbert space (internal dual cone) Alternatively, many authors define the dual cone in the context of a real Hilbert space (such as Rn equipped with the Euclidean inner product) to be what is sometimes called the internal dual cone.
Dual cone:
internal := {y∈X:⟨y,x⟩≥0∀x∈C}.
Using this latter definition for C*, we have that when C is a cone, the following properties hold: A non-zero vector y is in C* if and only if both of the following conditions hold:y is a normal at the origin of a hyperplane that supports C.
y and C lie on the same side of that supporting hyperplane.C* is closed and convex.
C1⊆C2 implies C2∗⊆C1∗ If C has nonempty interior, then C* is pointed, i.e. C* contains no line in its entirety.
If C is a cone and the closure of C is pointed, then C* has nonempty interior.
C** is the closure of the smallest convex cone containing C (a consequence of the hyperplane separation theorem)
Self-dual cones:
A cone C in a vector space X is said to be self-dual if X can be equipped with an inner product ⟨⋅,⋅⟩ such that the internal dual cone relative to this inner product is equal to C. Those authors who define the dual cone as the internal dual cone in a real Hilbert space usually say that a cone is self-dual if it is equal to its internal dual. This is slightly different from the above definition, which permits a change of inner product. For instance, the above definition makes a cone in Rn with ellipsoidal base self-dual, because the inner product can be changed to make the base spherical, and a cone with spherical base in Rn is equal to its internal dual.
Self-dual cones:
The nonnegative orthant of Rn and the space of all positive semidefinite matrices are self-dual, as are the cones with ellipsoidal base (often called "spherical cones", "Lorentz cones", or sometimes "ice-cream cones"). So are all cones in R3 whose base is the convex hull of a regular polygon with an odd number of vertices. A less regular example is the cone in R3 whose base is the "house": the convex hull of a square and a point outside the square forming an equilateral triangle (of the appropriate height) with one of the sides of the square.
Polar cone:
For a set C in X, the polar cone of C is the set Co={y∈X∗:⟨y,x⟩≤0∀x∈C}.
It can be seen that the polar cone is equal to the negative of the dual cone, i.e. Co = −C*.
For a closed convex cone C in X, the polar cone is equivalent to the polar set for C. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cube attack**
Cube attack:
The cube attack is a method of cryptanalysis applicable to a wide variety of symmetric-key algorithms, published by Itai Dinur and Adi Shamir in a September 2008 preprint.
Attack:
A revised version of this preprint was placed online in January 2009, and the paper has also been accepted for presentation at Eurocrypt 2009.
Attack:
A cipher is vulnerable if an output bit can be represented as a sufficiently low degree polynomial over GF(2) of key and input bits; in particular, this describes many stream ciphers based on LFSRs. DES and AES are believed to be immune to this attack. It works by summing an output bit value for all possible values of a subset of public input bits, chosen such that the resulting sum is a linear combination of secret bits; repeated application of this technique gives a set of linear relations between secret bits that can be solved to discover these bits. The authors show that if the cipher resembles a random polynomial of sufficiently low degree then such sets of public input bits will exist with high probability, and can be discovered in a precomputation phase by "black box probing" of the relationship between input and output for various choices of public and secret input bits making no use of any other information about the construction of the cipher.
Attack:
The paper presents a practical attack, which the authors have implemented and tested, on a stream cipher on which no previous known attack would be effective. Its state is a 10,000 bit LFSR with a secret dense feedback polynomial, which is filtered by an array of 1000 secret 8-bit to 1-bit S-boxes, whose input is based on secret taps into the LFSR state and whose output is XORed together. Each bit in the LFSR is initialized by a different secret dense quadratic polynomial in 10, 000 key and IV bits. The LFSR is clocked a large and secret number of times without producing any outputs, and then only the first output bit for any given IV is made available to the attacker. After a short preprocessing phase in which the attacker can query output bits for a variety of key and IV combinations, only 230 bit operations are required to discover the key for this cipher.
Attack:
The authors also claim an attack on a version of Trivium reduced to 735 initialization rounds with complexity 230, and conjecture that these techniques may extend to breaking 1100 of Trivium's 1152 initialization rounds and "maybe even the original cipher". As of December 2008 this is the best attack known against Trivium.
Attack:
The attack is, however, embroiled in two separate controversies. Firstly, Daniel J. Bernstein disputes the assertion that no previous attack on the 10,000-bit LFSR-based stream cipher existed, and claims that the attack on reduced-round Trivium "doesn't give any real reason to think that (the full) Trivium can be attacked". He claims that the Cube paper failed to cite an existing paper by Xuejia Lai detailing an attack on ciphers with small-degree polynomials, and that he believes the Cube attack to be merely a reinvention of this existing technique.
Attack:
Secondly, Dinur and Shamir credit Michael Vielhaber's "Algebraic IV Differential Attack" (AIDA) as a precursor of the Cube attack. Dinur has stated at Eurocrypt 2009 that Cube generalises and improves upon AIDA. However, Vielhaber contends that the cube attack is no more than his attack under another name.
Attack:
It is, however, acknowledged by all parties involved that Cube's use of an efficient linearity test such as the BLR test results in the new attack needing less time than AIDA, although how substantial this particular change is remains in dispute. It is not the only way in which Cube and AIDA differ. Vielhaber claims, for instance, that the linear polynomials in the key bits that are obtained during the attack will be unusually sparse. He has not yet supplied evidence of this, but claims that such evidence will appear in a forthcoming paper by himself entitled "The Algebraic IV Differential Attack: AIDA Attacking the full Trivium". (It is not clear whether this alleged sparsity applies to any ciphers other than Trivium.) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Trust fall**
Trust fall:
A trust fall is an activity in which a person deliberately falls, trusting the members of a group (spotters) to catch them. It has also at times been considered a popular team-building exercise in corporate training events.
Trust fall:
There are many variants of the trust fall. In one type, the group stands in a circle, with one person in the middle with arms folded against his chest and falls in various directions, being pushed by the group back to a standing position before falling again. In another variant, a person stands on an elevated position (such as a stage, stepping stool or tree stump) and relies on multiple people to catch them. This variant is potentially more dangerous and often leads to injuries.The trust fall was a popular activity conducted as a part of corporate team building activities. However, it fell out of favor due to the legal liabilities associated with the trust fall and the fact that it is known to cause traumatic brain injury when the catcher or catchers fail at their task. Furthermore, while the fall may establish trust in the exercise, "there is little evidence that this trust spills over into day-to-day life". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Protocadherin 19**
Protocadherin 19:
Protocadherin 19 is a protein belonging to the protocadherin family, which is part of the large cadherin superfamily of cell-adhesion proteins. The PCDH19 gene encoding the protein is located on the long arm of the X chromosome.
Clinical significance:
Mutations of the PCDH19 gene cause epilepsy-intellectual disability in females. According to a review published in 2021, PCDH19 was one of the six genes most often affected in genetic epilepsies.
History:
The PCDH19 gene that encodes the protein was first cloned in 2000 by Nagase et al. In 2008, PCDH19 was identified as the gene responsible for the development of epilepsy-intellectual disability in females, and in the years that have passed since, rare cases were found of males affected by this disease. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Austrian Centre for Electron Microscopy and Nanoanalysis**
Austrian Centre for Electron Microscopy and Nanoanalysis:
The Austrian Centre for Electron Microscopy and Nanoanalysis (short: FELMI-ZFE) is a cooperation between the Institute of Electron Microscopy and Nanoanalysis (FELMI) of the Graz University of Technology (TUG) and the Graz Centre of Electron Microscopy (ZFE), which is a member of Austrian Cooperative Research (ACR) and run by the non-profit association for the promotion of electron microscopy. It is located at the “Neue Technik Steyrergasse” campus in Graz.
Austrian Centre for Electron Microscopy and Nanoanalysis:
The FELMI-ZFE is offering both research and services, to interested partners from academia and industry, using advanced electron microscopic methods for both structural and chemical characterization.
History:
The acquisition process of the first electron microscope of the Graz University of Technology was started by a donation from industry in 1949. The next year a research group, headed by Fritz Grasenick, was established. Finally, the first electron microscope (“Übermikroskop UEM100” by Siemens & Halske) was bought in 1951 and the opening ceremony was attended by Ernst Ruska, Werner Glaser and Otto Wolf. Despite the Graz University of Technology providing rooms and infrastructure, from the very beginning supplementary income from services research provide for industry was necessary to help cover the high operating and investment cost. Due to the larger interest in measurements and the high utilization of the instrument, the group soon need to expand its personal and look to acquire need microscopes. In order to concentrate all funding sources the non-profit association for the promotion of electron microscopy (Verein zur Förderung der Elektronenmikroskopie und Feinstrukturforschung) was founded in 1959 under the direction of the governor of Styria Josef Krainer senior. The Graz Centre of Electron Microscopy (ZFE) was attached to this non-profit. The combined institutions grew over the years, where the combined role of the head of the university institute and the head of the ZFE in one person played a crucial role in the development of a tight interconnection between fundamental research and application. In 2011 the to this date most expansive and impressive acquisition was possible, a at that time worldwide unique STEM. With the ASTEM (Austrian Scanning Transmission Electron Microscope) magnification of more than one million became possible, this enables atomic resolution. In addition, with an investment volume of 4.5 Mio. Euro, the ASTEM was one of the larges scientific infrastructure investments in Austria.
Organizational structure, Research & Services:
Approximately 50 people work at the FELMI-ZFE with the number varying somewhat due to dissertations and research projects. In addition roughly 300 scientist visit the FELMI-ZFE each year.
International collaboration There are standing collaboration with approximately 30 research institutes and 140 companies. In addition, since the establishment of the ASTEM, the FELMI-ZFE is part of ESTEEM3 (Enabling Science and Technology through European Electron Microscopy), which is a network of 14 electron microscopy institutes in Europe.
Organizational structure, Research & Services:
Research & Services Five groups work on four main topics of research: Nanoanalytic of materials Functional Nanostructuring 3D and in situ measurements Polymers and biological materials Instruments Scanning electron microscopy (SEM) Transmission electron microscopy (TEM) Infrared- and Raman-microscopy (IR/Raman) Focused-Ion-Beam-Microscopy (FIB) Atomic forces microscopy (AFM) X-ray diffraction (XRD) Sample preparation Teaching and Education In the academic year 2019/2020 approximately 600 students visited lectures and lab exercises of the Institute of Electron Microscopy and Nanoanalysis. The courses offered are on the topics of fundamental physics, material analysis, electron microscopy and nano-manufacturing. In addition apprenticeships for both lab-technician and media technology are offered. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Food group**
Food group:
A food group is a collection of foods that share similar nutritional properties or biological classifications. List of nutrition guides typically divide foods into food groups and Recommended Dietary Allowance recommend daily servings of each group for a healthy diet. In the United States for instance, USDA has described food as being in from 4 to 11 different groups.
Historical food groups:
The USDA promoted eight basic food groups prior to 1943, then seven basic food groups until 1956, then four food groups. A food pyramid was introduced in 1992, then MyPyramid in 2005, followed by MyPlate in 2011. Dietary guidelines were introduced in 2015 and slated to be rereleased every five years. The 2020 guidelines were to be released in Spring 2020.
The most common food groups:
Dairy, also called milk products and sometimes categorized with milk alternatives or meat, is typically a smaller category in nutrition guides, if present at all, and is sometimes listed apart from other food groups. Examples of dairy products include milk, butter, ghee, yogurt, cheese, cream and ice cream. The categorization of dairy as a food group with recommended daily servings has been criticized by, for example, the Harvard School of Public Health who point out that "research has shown little benefit, and considerable potential for harm, of such high dairy intakes. Moderate consumption of milk or other dairy products—one to two servings a day—is fine, and likely has some benefits for children. But it’s not essential for adults, for a host of reasons." Fruits, sometimes categorized with vegetables, include apples, oranges, bananas, berries and lemons. Fruits contain carbohydrates, mostly in the form of sugar as well as important vitamins and minerals.
The most common food groups:
Cereals and legumes, sometimes categorized as grains, is often the largest category in nutrition guides. Cereal examples include wheat, rice, oats, barley, bread and pasta. Legumes are also known as pulses and include beans, soy beans, lentils and chickpeas. Cereals are a good source of starch and are often categorized with other starchy food such as potatoes. Legumes are good source of essential amino acids as well as carbohydrates.
The most common food groups:
Meat, sometimes labelled protein and occasionally inclusive of legumes and beans, eggs, meat analogues and/or dairy, is typically a medium- to smaller-sized category in nutrition guides. Examples include chicken, fish, turkey, pork and beef.
Confections, also called sugary foods and sometimes categorized with fats and oils, is typically a very small category in nutrition guides, if present at all, and is sometimes listed apart from other food groups. Examples include candy, soft drinks, and chocolate.
Vegetables, sometimes categorized with fruit and occasionally inclusive of legumes, is typically a large category second only to grains, or sometimes equal or superior to grains, in nutrition guides. Examples include spinach, carrots, onions, and broccoli.
Water is treated in very different ways by different food guides. Some exclude the category, others list it separately from other food groups, and yet others make it the center or foundation of the guide. Water is sometimes categorized with tea, fruit juice, vegetable juice and even soup, and is typically recommended in plentiful amounts.
Uncommon food groups:
The number of "common" food groups varies depending on who is defining them. Canada's Food Guide, which has been in continual publication since 1942 and is the second most requested government document after the income tax form in Canada, recognizes only four official food groups, listing the remainder of foods as "another". Some of these "others" include: Alcoholic beverage or Alcohol is listed apart from other food groups and recommended only for certain people in moderation by Harvard's Healthy Eating Pyramid and the University of Michigan's Healing Foods Pyramid, while Italy's food pyramid includes a half-serving of wine and beer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**AMPTE-IRM**
AMPTE-IRM:
AMPTE-IRM, also called as AMPTE-Ion Release Module, was a Germany satellite designed and tasked to study the magnetosphere of Earth, being launched as part of the Explorer program. The AMPTE (Active Magnetospheric Particle Tracer Explorers) mission was designed to study the access of solar wind ions to the magnetosphere, the convective-diffusive transport and energization of magnetospheric particles, and the interactions of plasmas in space.
Mission:
The AMPTE-IRM is one of the three components of the international space mission AMPTE, which also included AMPTE-CCE (Charge Composition Explorer), designed by NASA, and AMPTE-UKS (United Kingsom Subsatellite), provided by the United Kingdom.
Spacecraft:
The program consisted of three spacecraft: the AMPTE-CCE, which measured in the magnetosphere the ions released by the AMPTE-IRM; and the AMPTE-UKS, which used thrusters to keep station near the AMPTE-IRM to provide two-point local measurements. The AMPTE-IRM provided multiple ion releases in the solar wind, the magnetosheath, and the magnetotail, with in situ diagnostics of each. The AMPTE-IRM spacecraft was spin-stabilized at 15 rpm. Its spin axis was initially in the ecliptic plane, but later it was adjusted with magnetic torqueing to be at right angles to the ecliptic. The power system was a 60 watts solar array with redundant batteries. There was a redundant S-band telemetry and telecommand system. Telemetry rates could be chosen between 1 and 8 kbps. For injection into the final orbit, the AMPTE-IRM carried its own kick stage. In addition to the ion releases, the instruments on board the spacecraft monitored the ambient, magnetosphere, but with the data acquisition confined to the passes that could be tracked in real time from Germany.
Launch:
AMPTE-IRM was launched with the two other satellites of the AMPTE program on 16 August 1984, at 16:48 UTC, from a Cape Canaveral launch pad by a Delta 3924 launch vehicle.
Experiments:
3-D Plasma Analyzer (30-channel, Electrons: 15 eV-30 keV; Ions: 20 eV/q-40 keV/q) The main instrument consisted of two symmetrical quadrispherical electrostatic analyzers to measure the three-dimensional distributions of electrons and ions, respectively, over 4-pi-sr during every satellite spin period (4 seconds). The energy range covered was 15 eV/Q to 30 keV/Q in 30 channels. The angular resolution was 22.5°. Moments of the measured distributions were directly computed on board. An additional retarding-potential analyzer measured the flux of electrons between approximately 0 and 25 eV.
Experiments:
Ion Release Experiment The experiment consisted of eight lithium and eight barium canisters, which were injected from the AMPTE-IRM in pairs by ground command and ignited 10 minutes after separation from the spacecraft. Each of these was either totally lithium or totally barium. A pair of Li/Ba canisters produced a total of 2.E25/7.E24 Li/Ba atoms, respectively, which were subsequently ionized by solar radiation. Li releases in the solar wind, which were carried out in August/September 1984, were to be followed by an artificial comet release of Ba ions in the dawnside magnetosheath and a number of Ba and Li releases in the geomagnetic tail. In situ diagnostics by AMPTE-IRM and AMPTE-UKS and optical observations of the clouds from the ground were followed by tracing of the ions in the inner magnetosphere by AMPTE-CCE.
Experiments:
Mass Separation Ion Spectrometer (MSIS) (H through Ba: 0.5 eV/q-14 keV/q) The instrument consisted of a retarding-potential analyzer entrance section and a toroidal electrostatic energy-per-charge analyzer, followed by a quadrispherical electrostatic analyzer with superimposed radial magnetic field for mass-per-charge analysis. The energy range covered was approximately 0 to 12 (or 24) keV/Q, with adequate mass resolution to separate the Li and Ba tracer ions. Up to eight different ion species could be analyzed simultaneously.
Experiments:
Plasma Wave Spectrometer (64 channel, E- and B-field, E-: 0.0-5.6 MHz; B-: 30 Hz-1.5 MHz) The instrument used a 42 m (138 ft) tip-to-tip antenna to measure electric fields from DC to 5 MHz and two boom-mounted search coil magnetometers to measure magnetic fields from 30 Hz to 1 MHz. The signals were analyzed by a very low frequency VLF/MF 16-channel spectrum analyzer, three VLF narrow-band swept-frequency receivers, a 60-channel high frequency HF stepped-frequency receiver, and an analog wide-band receiver.
Experiments:
Suprathermal Energy Ionic Charge Analyzer (H through Fe: 5-270 keV/q; electrons: 35-207 keV) The main instrument consisted of a curved plate electrostatic energy-per-charge analyzer followed by a 12 cm (4.7 in) time-of-flight telescope with a thin carbon foil at the front and a solid-state detector at the rear, which measured ion velocity and residual energy. The energy-per-charge range was 10 to 300 keV/Q. The mass resolution, delta M/M, ranged from 0.25 to 0.12. The instrument package also contained an electron sensor for the energy range 35 to 220 keV, provided by University of California, Berkeley.
Experiments:
Triaxial Fluxgate Magnetometer The instrument was a three-axis fluxgate magnetometer mounted on a 2 m (6 ft 7 in) boom. It had two switchable ranges (± 4 microtesla, and ± 60 microtesla) with resolutions of 0.12 and 1.8 nT, respectively and was read out at 32, 16, 8, or 4 vector samples per second, depending on the T/M rate. Signals from each sensor were also fed into four band pass filters with 5.5, 11, 22, and 44-Hz center frequencies and were read out up to two times per second.
End of mission:
The spacecraft became inoperational on 14 August 1986. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**NUN buffer**
NUN buffer:
NUN buffer is a solution that makes it possible to purify proteins located in the nucleus of eukaryotic cells. Although other procedures are available they result in loss of albumin D-box binding protein (DBP) which is unwanted if nuclear signal pathways are to be investigated. Therefore, a new extraction procedure was developed in 1993 to increase recovery of nonhistone proteins using a (NUN) solution containing 0.3 M NaCl, 1 M urea, and 1% nonionic detergent Nonidet P-40, which destabilize salt bridges, hydrogen bonds, and hydrophobic interactions, respectively; resulting in a disruption of interaction between proteins and DNA. By incubating nuclei in NUN buffer and centrifuging the solution, the supernatant will therefore contain nuclear proteins.
NUN buffer:
NUN buffer contains: HEPES [pH 7.6], Urea, NaCl, DDT, PIC 1 & 2, 1.1% NP-40, Sodium orthovanadate, β-glycerol phosphate and water. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Catastrophic kill**
Catastrophic kill:
A catastrophic kill, K-Kill or complete kill is damage inflicted on an armored vehicle that renders it permanently non-functional (most commonly via fire and/or an explosion).
Catastrophic kill:
Among tank crewmen it is also commonly known as a brew-up, coined from the British World War II term for lighting a fire in order to brew tea. The expression arose because British troops used an old petrol tin with holes punched in the side as a makeshift stove on which to brew their tea. The flames licking out of the holes in the side of the tin resembled a burning tank, and thus the expression was coined. Typically, a catastrophic kill results in the ignition of any fuel the vehicle may be carrying as well as the detonation (cooking off, or sympathetic detonation) of its ammunition. A catastrophic kill does not necessarily preclude the survival of the vehicle's crew, although most historical casualties in armored warfare were the result of K-kills. This type of kill is also associated with the jack-in-the-box effect, where a tank's turret is blown skyward due to the overpressure of an ammunition explosion. Some tank designs employ blow-off panels, channeling such explosions outside of the vehicle, turning an otherwise catastrophic kill into a firepower kill.
Catastrophic kill:
By contrast, the term knocked out refers to a vehicle which has been damaged to the point of inoperability and abandoned by its crew, but is not obviously beyond the point of repair. A knocked-out vehicle may, however, be later determined to be irreparable and written off. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Meat on the bone**
Meat on the bone:
Meat on the bone, also called bone-in meat is meat that is sold with some or all of the bones included in the cut or portion, i.e. meat that has not been filleted. The phrase "on the bone" can also be applied to specific types of meat, most commonly ham on the bone, and to fish. Meat or fish on the bone may be cooked and served with the bones still included or the bones may be removed at some stage in the preparation.Examples of meat on the bone include T-bone steaks, chops, spare ribs, chicken leg portions and whole chicken. Examples of fish on the bone include unfilleted plaice and some cuts of salmon.
Meat on the bone:
Meat on the bone is used in many traditional recipes.
Effect on flavor and texture:
The principal effect of cooking meat on the bone is that it alters the flavour and texture. Albumen and collagen in the bones release gelatin when boiled which adds substance to stews, stocks, soups and sauces. The bone also conducts heat within the meat so that it cooks more evenly and prevents meat drying out and shrinking during cooking.
Eating:
Consumption methods vary by size; smaller bones can be eaten whole, while larger ones can be broken or gnawed.
Some meat on the bone is most commonly eaten by picking it up, notably ribs and chicken (particularly wings and drumsticks). Others are primarily eaten by cutting off the meat, such as steaks, but possibly picking up and gnawing the bone when otherwise finished.
Smaller fish are often eaten whole, with the bones. Examples include whitebait of all sorts, anchovies, and smelt. In some cases the bone marrow may also be eaten, notably for beef or poultry (especially chicken), in the later case by the eater breaking or chewing off the end of a soft leg bone and sucking the marrow out.
Cooking:
Meat on the bone typically cooks slower than boneless meat when roasted in a joint. Individual bone-in portions such as chops also take longer to cook than their filleted equivalents.
Value for money:
Meat on the bone is quicker and easier to butcher as there is no filleting involved. Filleting is a skilled process that adds to labour and wastage costs as meat remaining on the bones after filleting is of low value (although it can be recovered). As a result, meat on the bone can be better value for money. However, relative value can be hard to judge as the bone part of the product is undesirable in many cultures, for larger bones are inedible. Various portions may contain a greater or lesser proportion of bone.
Ease of handling:
The presence of bones may make meat products more bulky, irregular in shape, and difficult to pack. Bones may make preparation and carving difficult. However, bones can sometimes be used as handles to make the meat easier to eat.
Import restrictions:
Foot-and-mouth disease (FMD) is a contagious disease affecting cloven-hoofed animals. Because FMD rarely infects humans but spreads rapidly among animals, it is a much greater threat to the agriculture industry than to human health.
FMD can be contracted by contact with infected meat, with meat on the bone representing a higher risk than filleted meat. As a result, import of meat on the bone remains more restricted than that of filleted meat in many countries.
Health issues:
Injury Meat and fish served on the bone can present a risk of accident or injury. Small, sharp fish bones are the most likely to cause injury although sharp fragments of meat bone can also cause problems. Typical injuries include bones being swallowed and becoming trapped in the throat, and bones being trapped under the tongue.Discarded bones can also present a risk of injury to pets or wild animals as some types of cooked meat bone break into sharp fragments when chewed.
Health issues:
BSE Bovine spongiform encephalopathy (BSE), also known as "mad cow disease", is a fatal brain disease affecting cattle. It is believed by most scientists that the disease may be transmitted to human beings who eat the brain or spinal cord of infected carcasses. In humans, it is known as new variant Creutzfeldt–Jakob disease (vCJD or nvCJD), and is also fatal.
Health issues:
The largest outbreak of BSE was in the United Kingdom, with several other countries affected to a lesser extent. The outbreak started in 1984, and continued into the 1990s, leading to increasing concern among governments and beef consumers as the risk to humans became known, but could not be quantified. Many countries banned or restricted the import of beef products from countries affected by BSE.
Health issues:
Animal brain and spinal cord had already been removed from the human and animal food chain when, in 1997, prion infection was also detected in the dorsal root ganglia within the spinal column of infected animals. As a result, beef on the bone was banned from sale in the UK as a precaution. This led to criticism that the government was overreacting. The European Union also considered banning beef and lamb on the bone. The UK ban lasted from December 1997 to December 1999, when it was lifted and the risk from beef on the bone declared negligible.
Use as a metaphor:
The phrase "meat on the bones" is used metaphorically to mean substance. For example, "I expect that we'll start putting some meat on the bones of regulatory reform" indicates an intention to add detail and substance to plans for regulatory reform and implies that these plans were previously only set out in broad or vague terms.
The phrase to "flesh out" relies of the same imagery in which a basic idea is likened to a skeleton or bones and the specific details of the idea to meat or flesh on that skeleton. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Survivin**
Survivin:
Survivin, also called baculoviral inhibitor of apoptosis repeat-containing 5 or BIRC5, is a protein that, in humans, is encoded by the BIRC5 gene.Survivin is a member of the inhibitor of apoptosis (IAP) family. The survivin protein functions to inhibit caspase activation, thereby leading to negative regulation of apoptosis or programmed cell death. This has been shown by disruption of survivin induction pathways leading to increase in apoptosis and decrease in tumour growth. The survivin protein is expressed highly in most human tumours and fetal tissue, but is completely absent in terminally differentiated cells. These data suggest survivin might provide a new target for cancer therapy that would discriminate between transformed and normal cells. Survivin expression is also highly regulated by the cell cycle and is only expressed in the G2-M phase. It is known that Survivin localizes to the mitotic spindle by interaction with tubulin during mitosis and may play a contributing role in regulating mitosis. The molecular mechanisms of survivin regulation are still not well understood, but regulation of survivin seems to be linked to the p53 protein. It also is a direct target gene of the Wnt pathway and is upregulated by beta-catenin.
IAP family of anti-apoptotic proteins:
Survivin is a member of the IAP family of antiapoptotic proteins. It is shown to be conserved in function across evolution as homologues of the protein are found both in vertebrates and invertebrates. The first members of the IAPs identified were from the baculovirus IAPs, Cp-IAP and Op-IAP, which bind to and inhibit caspases as a mechanism that contributes to its efficient infection and replication cycle in the host. Later, five more human IAPs that included XIAP, c-IAPl, C-IAP2, NAIP, and survivin were discovered. Survivin, like the others, was discovered by its structural homology to IAP family of proteins in human B-cell lymphoma. The human IAPs, XIAP, c-IAPl, C-IAP2 have been shown to bind to caspase-3 and -7, which are the effector caspases in the signaling pathway of apoptosis. It is not known with absolute certainty though, how the IAPs inhibit apoptosis mechanistically at the molecular level.
IAP family of anti-apoptotic proteins:
A common feature that is present in all IAPs in the presence of a BIR (Baculovirus IAP Repeat, a ~70 amino acid motif) in one to three copies. It was shown by Tamm et al. that knocking out BIR2 from XIAP was enough to cause a loss of function in terms of XIAPs ability to inhibit caspases. This gives the implication that it is within these BIR motifs that contains the anti-apoptotic function of these IAPs. Survivin's one BIR domain shows a similar sequence compared to that of XIAP's BIR domains.
Isoforms:
The single survivin gene can give rise to four different alternatively spliced transcripts: Survivin, which has a three-intron–four-exon structure in both the mouse and human.
Survivin-2B, which has an insertion of an alternative exon 2.
Survivin-Delta-Ex-3, which has exon 3 removed. The removal of exon 3 results in a frame shift that generates a unique carboxyl terminus with a new function. This new function may involve a nuclear localization signal. Moreover, a mitochondrial localization signal is also generated.
Survivin-3B, which has an insertion of an alternative exon 3.
Structure:
A structural feature common to all IAP family proteins is that they all contain at least one baculoviral IAP repeat (BIR) domain characterized by a conserved zinc-coordinating Cys/His motif at the N-terminal half of the protein.Survivin is distinguished from other IAP family members in that it has only one BIR domain. The mice and human BIR domain of survivin are very similar structurally except for two differences that may affect function variability. The human survivin also contains an elongated C-terminal helix comprising 42 amino acids. Survivin is 16.5 kDa large and is the smallest member of the IAP family.
Structure:
X-ray crystallography has shown two molecules of human survivin coming together to form a bowtie-shape dimer through a hydrophobic interface. This interface includes N-terminal residues 6-10 just before the BIR domain region and the 10 residue region connecting the BIR domain to the C-terminal helix. The structural integrity of the determined crystal structure of survivin is quite reliable, as physiological conditions were used to obtain the images.
Function:
Apoptosis Apoptosis, the process of programmed cell death, involves complex signaling pathways and cascades of molecular events. This process is needed for proper development during embryonic and fetal growth where there is destruction and reconstruction of cellular structures. In adult organisms, apoptosis is needed to maintain differentiated tissue by striking the balance between proliferation and cell death. It is known that intracellular proteases called caspases degrade the cellular contents of the cell by proteolysis upon activation of the death pathway.
Function:
Mammalian cells have two main pathways that lead to apoptosis.
Function:
1. Extrinsic pathway: Initiated by extrinsic ligands binding to death receptors on the surface of the cell. An example of this is the binding of tumour necrosis factor-alpha (TNF-alpha) to TNF-alpha receptor. An example of a TNF receptor is Fas (CD95), which recruits activator caspases like caspase-8 upon binding TNF at the cell surface. The activation of the initiator caspases then initiates a downstream cascade of events that results in the induction of effector caspases that function in apoptosis.2. Intrinsic pathway: This pathway is initiated by intracellular or environmental stimuli. It is focused on detecting the improper functioning of the mitochondria in the cell and, as a result, activates signaling pathways to commit suicide. The membrane permeability of the mitochondria increases and particular proteins are released into the cytoplasm that facilitates the activation of initiator caspases. The particular protein released from the mitochondria is cytochrome c. Cytochrome c then binds to Apaf-1 in the cytosol and results in the activation of initiator caspase-9. The activation of the initiator caspases then initiates a downstream cascade of events that results in the induction of effector caspases that function in apoptosis.One family of proteins called IAPs plays a role in regulating cell death by inhibiting the process. IAPs like survivin, inhibit apoptosis by physically binding to and inhibiting proper caspase function. The function of IAPs is evolutionarily conserved as Drosophila homologues of IAPs have been shown to be essential for cell survival.IAPs have been implicated in studies to have a regulatory effect on cell division. Yeast cells with knock-outs of certain IAP genes did not show problems associated with cell death, but showed defects in mitosis characterized by improper chromosome segregation or failed cytokinesis.Deletion of particular IAPs does not seem to have a profound effect on the cell-death pathway as there is a redundancy of function by the many IAPs that exist in a cell. They have been implicated, however, to play a role in maintaining an anti-apoptotic environment intracellularly. Changing the expression of particular IAPs has shown an increase in spontaneous cell death induction or increased sensitivity to death stimuli.
Function:
Mechanism of action Inhibition of Bax and Fas-induced apoptosis Tamm et al. have shown that survivin inhibits both Bax and Fas-induced apoptotic pathways. The experiment involved transfecting HEK 293 cells with a Bax-encoding plasmid, which resulted in an increase in apoptosis (~7 fold) as measured by DAPI staining. They then contransfected the 293 cells with Bax-encoding plasmid and survivin-encoding plasmids. They observed that cells transfected along with the survivin showed a significant decrease in apoptosis (~3 fold). A similar result also showed for cells transfected with the Fas-overexpressing plasmid. Immunoblots were performed and confirmed that survivin does not inhibit by mechanism of preventing Bax or Fas protein from being made into fully functional proteins. Therefore, survivin should be acting somewhere downstream of the Bax or Fas signaling pathway to inhibit apoptosis through these pathways.
Function:
Interaction with caspase-3 and -7 In this part of the experiment, Tamm et al. transfected 293 cells with survivin and lysed them to obtain cell lysate. The lysates were incubated with different caspase forms and survivin was immunopercipitated with anti-survivin antibody. The idea behind this is that, if survivin binds physically with the caspase it is incubated with, it will be co-precipitated along with the survivin while everything else in the lysate is washed away. The immunoprecipitates were then run on SDS-PAGE and then immunoblotted for detection of the desired caspase. If the caspase of interest was detected, it meant that it was bound to survivin in the immunoprecipitation step implicating that survivin and the particular caspase had bound beforehand. Active caspase-3 and -7 coimmunoprecipitated with survivin. The inactive proforms of caspase-3 and -7 did not bind survivin. Survivin also does not bind to active caspase-8. Caspase-3 and -7 are effector proteases whereas caspase-8 is an initiator caspase that sits more upstream in the apoptotic pathway. These results demonstrate survivin's capability to bind with particular caspases in vitro, but may not necessarily translate over to actual physiological conditions. Later, a 2001 study confirmed that human survivin tightly binds caspase-3 and -7 when expressed in E. coli.Further evidence to support the idea that survivin blocks apoptosis by directly inhibiting caspases was given by Tamm et al. 293 cells were transfected with either overexposed caspase-3 or -7 encoding plasmid and with survivin. They showed that survivin inhibited processing of these two caspases into their active forms. While survivin has been shown as mentioned above to bind to only the active forms of these caspases, it is likely here that survivin inhibits the active forms of the caspases resulting from cleaving and activating more of its own proforms. Thus, survivin acts possibly by preventing such a cascade of cleavage and activation amplification from happening resulting in decreased apoptosis.In similar manner, looking at the mitochondrial pathway of apoptosis, cytochrome c was transiently expressed in 293 cells to look at the inhibitory effects survivin had on this pathway. Although the details are not here, survivin was shown to also inhibit cytochrome c and caspase-8-induced activation of caspases.
Function:
Regulation of cytokinesis While the mechanism by which survivin may regulate cell mitosis and cytokinesis is not known, the observations made on its localization during mitosis suggests strongly that it is involved in some way in the cytokinetic process.
Function:
Proliferating Daoy cells were placed on a glass coverslip, fixed and stained with fluorescent antibodies for survivin and alpha-tubulin. Immunoflourescence using confocal microscopy was used to look at the localization of survivin and tubulin during the cell-cycle to look for any patterns of survivin expression. Survivin was absent in interphase, but present in the G2-M phase.During the different stages of mitosis, one could see that survivin follows a certain localization pattern. At prophase and metaphase, survivin is mainly nuclear in location. During prophase, as the chromatin condenses so that it is visible under the microscope, survivin starts to move to the centromeres. At prometaphase when the nuclear membrane dissociates and spindle microtubules cross over the nuclear region, survivin stays put at the centromeres. At metaphase, when the chromosomes align at the middle plate and are pulled with high tension to either pole by the kinetochore attachments, survivin then associates with the kinetochores. At anaphase as separation of the chromatids happens, the kinetochore microtubules shorten as the chromosomes move towards to the spindle poles and survivin also moves along to the midplate. Survivin thus accumulates at the midplate at telophase. Finally, survivin localizes to the midbody at the cleavage furrow.
Function:
Interaction and localization to the mitochondria It has been shown that survivin can heterodimerize individually with the two splice variants Survivin-2B and survivin-deltaEx3. Evidence of the heterodimerization of survivin splice variants with survivin was shown with co-immunoprecipitation experiments after cotransfection with the respective survivin variants with survivin. To determine the localization of exogenously expressed survivin-2B and survivin-deltaEx3, fusion constructs of the proteins were made with GFP and HcRed respectively and Daoy cells were transfected with the plasmid constructs. Survivin was also tagged with a fluorescent protein. The fusion of the survivin variants with the fluorescent molecules allows for simple detection of cellular location by fluorescence microscopy. Survivin-2B by itself, localized to both nuclear and cytoplasmic compartments whereas survivin-deltaEx3 localized only in the nucleus. The localization of the three variants (survivin, Survivin-2B, and survivin-deltaEx3) differ, however, when cotransfected together rather than individually.To see which subcellular compartments contained the survivin splice variants complexes, fluorescent antibody markers for different organelles in the cell were employed. The assumption is that, under fluorescence microscopy, if the particular survivin complex is located in that particular cell compartment, one would observe an overlap from the fluorescence given off by the tagged survivin complex and the tagged compartment as well. Different color fluorescence is used to distinguish compartment from survivin.
Function:
Endoplasmic reticulum and lyosomes: no colocalization Mitochondria and golgi: both survivin/survivin-2B and survivin/survivin-deltaEx3 colocalizeTo verify these observations, they fractionated the subcellular compartments and performed western blot analysis to definitively say that survivin complexes did indeed localize at these compartments.
Function:
Role in cancer Expression in different carcinomas Survivin is known to be expressed during fetal development and across most tumour cell types, but is rarely present in normal, non-malignant adult cells. Tamm et al. showed that survivin was expressed in all 60 different human tumour lines used in the National Cancer Institute's cancer drug-screening program, with the highest levels of expression in breast and lung cancer lines and the lowest levels in renal cancers. Knowing the relative expression levels of survivin in different tumour types may prove helpful as survivin-related therapy may be administered depending on the expression level and reliance of the tumour type on survivin for resistance to apoptosis.
Function:
As an oncogene Survivin can be regarded as an oncogene as its aberrant overexpression in most cancer cells contributes to their resistance to apoptotic stimuli and chemotherapeutic therapies, thus contributing to their ongoing survival.
Function:
Genomic instability Most human cancers have been found to have gains and losses of chromosomes that may be due to chromosomal instability (CIN). One of the things that cause CIN is the inactivation of genes that control the proper segregation of the sister chromatids during mitosis. In gaining a better understanding of survivin's function in mitotic regulation, scientists have looked into the area of genomic instability. It is known that survivin associates with microtubules of the mitotic spindle at the start of mitosis.It has been shown in the literature that knocking out survivin in cancer cells will disrupt microtubule formation and result in polyploidy as well as massive apoptosis. It has also been shown that survivin-depleted cells exit mitosis without achieving proper chromosome alignment and then reforms single tetraploid nuclei. Further evidence also suggests that survivin is needed for sustaining mitotic arrest upon encounter with mitosis problems. The evidence mentioned above implicates that survivin plays an important regulatory role both in the progression of mitosis and sustaining mitotic arrest. This seems strange, as survivin is known to be highly upregulated in most cancer cells (that usually contain chromosome instability characteristics), and its function is that which promotes proper regulation of mitosis.
Regulation by p53:
p53 inhibits survivin expression at the transcriptional level Wild-type p53 has been shown to repress survivin expression at the mRNA level. Using an adenovirus vector for wild-type p53, human ovarian cancer cell line 2774qw1 (which expresses mutant p53) was transfected. mRNA levels of survivin were analyzed by real-time quantitative PCR (RT-PCR) and showed time-dependent down regulation of survivin mRNA levels when the cells were infected with wild-type p53. A 3.6 fold decrease of survivin mRNA level was observed 16 hours after infection initiation and decreased 6.7 fold 24 hours after infection. Western blot results do show that there is indeed the p53 from the adenoviral vector was being expressed in the cells using antibody specific for p53. The expression of p53 levels indicative of its role in survivin repression shows that p53 started to be expressed 6 hours into infection and had its highest level at 16–24 hours. To further confirm that endogenous wild-type p53 is really causing the repression of survivin gene expression, the authors induced A549 (human lung cancer cell line with wild-type p53) and T47D (human breast cancer cell line with mutant p53) cells with DNA-damaging agent adriamycin to trigger the physiological p53 apoptotic response in these cancer cells and compare the survivin levels measured to the same cells without DNA damage induction. The A549 line, which intrinsically has functioning wild-type p53, showed significant reduction in survivin levels compared to non-induced cells. This same effect was not seen in T47D cells that carry mutant inactive p53.P53's normal function is to regulate genes that control apoptosis. As survivin is a known inhibitor of apoptosis, it can be implied that p53 repression of survivin is one mechanism by which cells can undergo apoptosis upon induction by apoptotic stimuli or signals. When survivin is over-expressed in the cell lines mentioned in the previous paragraph, apoptotic response from DNA-damaging agent adriamycin decreased in a dose-dependent manner. This suggests that down-regulation of survivin by p53 is important for p53-mediated apoptotic pathway to successfully result in apoptosis. It is known that a defining characteristic of most tumors is the over-expression of survivin and the complete loss of wild-type p53. The evidence put forth by Mirza et al. shows that there exists a link between survivin and p53 that can possibly explain a critical event that contributes to cancer progression.
Regulation by p53:
p53 suppression of survivin expression In order to see whether p53 re-expression in cancer cells (that have lost p53 expression) has the suppressive effect on the promoter of the survivin gene, a luciferase reporter construct was made. The isolated survivin promoter was placed upstream of the luciferase reporter gene. In a luciferase reporter assay, if the promoter is active, the luciferase gene is transcribed and translated into a product that gives off light that can measured quantitatively and, thus, represents the activity of the promoter. This construct was transfected into cancer cells that had either wild-type or mutant p53. High luciferase activity was measured in the cells with mutant p53 and significantly lower luciferase levels were measured for cells with wild-type p53.Transfection of different cell types with wild-type p53 was associated with a strong repression of the survivin promoter. Transfection with mutant p53 was not shown to strongly repress the survivin promoter. More luciferase constructs were prepared with varying degrees of deletion from the 5' end of the survivin promoter region. At one point, there was deletion that caused the survivin levels to be indifferent to the presence of the p53 over-expression plasmid, indicating that there is a specific region proximal to the transcription start site that is needed for p53 suppression of survivin. Although it has been found that two p53 binding sites are located on the survivin gene promoter, analysis using deletions and mutations has shown that these sites are not essential to transcriptional inactivation.Instead, it is observed that modification of the chromatin inside of the promoter region may be responsible for the transcriptional repression of the survivin gene. This is explained below in the epigenetic regulation section.
Regulation by p53:
Cell cycle regulation Survivin is shown to be clearly regulated by the cell cycle, as its expression is found to be dominant only in the G2/M phase. This regulation exists at the transcriptional level, as there is evidence of the presence of cell-cycle-dependent element/cell-cycle gene homology region (CDE/CHR)boxes located in the survivin promoter region. Further evidence to support this mechanism of regulation includes the evidence that surivin is poly-ubiquinated and degraded by proteasomes during interphase of the cell cycle. Moreover, survivin has been shown to localize to components of the mitotic spindle during metaphase and anaphase of mitosis. Physical association between polymerized tubulin and survivin have been shown in vitro as well. It is also shown that post-transcriptional modification of survivin involving the phosphorylation of Thr34 leads to increased protein stability in the G2/M phase of the cell cycle.It is known from Mirza et al. that repression of survivin by p53 is not a result of any cell cycle progressive regulation. The same experiment by Mirza et al. with regard to determining p53 suppression of survivin at the transcriptional level was repeated, but this time for cells arrested in different stages of the cell cycle. It was shown that, although p53 arrests the numbers of cells to different extents in different phases, the measured level of survivin mRNA and protein levels were the same across all the samples transfected with the wild-type p53. This shows that p53 acts in a cell-cycle independent manner to inhibit survivin expression.
Regulation by p53:
Epigenetic and genetic regulation As observed through the literature, survivin is found to be over-expressed across many tumour types. Scientists are not sure of the mechanism that causes this abnormal over-expression of survivin; however, p53 is downregulated in almost all cancers, so it is tempting to suggest that survivin over-expression is due to p53 inactivity. Wagner et al. investigated the possible molecular mechanism involved with the over expression of survivin in acute myeloid leukemia (AML). In their experiments, they did both an epigenetic and a genetic analysis of the survivin gene promoter region in AML patients and compared the observations to what was seen in peripheral blood mononuclear cells (PBMCs) that have been shown to express no survivin. Assuming that the molecular mechanism of survivin re-expression in cancerous cells is at the transcriptional level, the authors decided to look at particular parts of the promoter region of survivin in order to see what happens in cancer cells that does not happen in normal cells that causes such a high level of survivin to be expressed. With regards to an epigenetic mechanism of survivin gene regulation, the authors measured the methylation status of the survivin promoter, since it is accepted that methylation of genes plays an important role in carcinogenesis by silencing of certain genes or vice versa. The authors used methylation specific polymerase chain reaction with bisulfite sequencing methods to measure the promoter methylation status in AML and PBMCs and found unmethylated survivin promoters in both groups. This result shows that DNA methylation status is not an important regulator of survivin re-expression during leukemogenesis. However, De Carvalho et al. performed a DNA methylation screening and identified that DNA methylation of IRAK3 plays a key role in survivin up-regulation in different types of Cancer, suggesting that epigenetic mechanisms plays an indirect role on abnormal over-expression of survivin. With regard to genetic analysis of the survivin promoter region, the isolated DNA of AML and PBMCs were treated with bisulfite, and the survivin promoter region sequence was amplified out with PCR and sequenced to look for any particular genetic changes in the DNA sequence between the two groups. Three single-nucleotide polymorphisms (SNPs) were identified and were all present both in AML patients and in healthy donors. This result suggests that the occurrence of these SNPs in the promoter region of the survivin gene also appears to be of no importance to survivin expression. However, it has not been ruled out yet that there may be other possible epigenetic mechanisms that may be responsible for a high level of survivin expression observed in cancer cells and not in normal cells. For example, the acetylation profile of the survivin promoter region can also be looked at. Different cancer and tissue types may have slight or significant differences in the way survivin expression is regulated in the cell, and, thus, the methylation status or genetic differences in the survivin promoter may be observed to be different in different tissues. Thus, further experiments assessing the epigenetic and genetic profile of different tumour types must be investigated.
As a drug target:
Expression in cancer as a tool for cancer-directed therapy Survivin is known to be highly expressed in most tumour cell types and absent in normal cells, making it a good target for cancer therapy. The exploitation of survivin's over-active promoter in most cancer cell types allows for the delivery of therapeutics only in cancer cells and removed from normal cells.Small interfering RNA (siRNA) are synthetic antisense oligonucleotides to the mRNA of the gene of interest that works to silence the expression of a particular gene by its complementary binding. siRNAs, such as LY2181308, bound to the respective mRNA results in disruption of translation of that particular gene and thus the absence of that protein in the cell. Thus, the use of siRNAs has great potential to be a human therapeutic, as it can target and silence the expression of potentially any protein you want. A problem arises when siRNA expression in a cell cannot be controlled, allowing its constitutive expression to cause toxic side-effects. With regard to practical treatment of cancer, it is required to either deliver the siRNAs specifically into cancer cells or control the siRNA expression. Previous methods of siRNA therapy employ the use of siRNA sequences cloned into vectors under the control of constitutively active promoters. This causes a problem, as this model is non-specific to cancer cells and damages normal cells too. Knowing that survivin is over-expressed specifically in cancer cells and absent in normal cells, one can imply that the survivin promoter is active only in cancer cells. Thus, the exploitation of this difference between cancer cells and normal cells will allow appropriate therapy directed only at the cells in a patient that are harmful. In an experiment to demonstrate this idea, Trang et al. have created a cancer-specific vector expressing siRNA for green fluorescent protein (GFP) under the human survivin promoter. MCF7 breast cancer cells were cotransfected with this vector and a GFP-expressing vector as well. Their major finding was that MCF7 cells transfected with the siRNA vector for GFP under the survivin promoter had a significant reduction in GFP expression then the cells transfected with the siRNA vector under a cancer non-specific promoter. Moreover, normal non-cancerous cells transfected in the same way mentioned above showed no significant reduction in GFP expression. This is implying that, in normal cells, survivin promoter is not active, and, thus, the siRNA will not be expressed under an inactive survivin promoter.
As a drug target:
Antisense oligonucleotides targeting survivin mRNA As it is known that survivin is over-expressed in most cancers, which may be contributing to the cancer cells' resistance to apoptotic stimuli from the environment. The use of antisense survivin therapy hopes to render cancer cells susceptible to apoptosis by eliminating survivin expression in the cancer cells.Olie et al. developed different 20-mer phosphorothioate antisense oligonucleotides that target different regions in the mRNA of the survivin gene. The antisense function of the oligonucleotides allows binding to surviving mRNA and, depending on the region on which it binds, might inhibit surviving mRNA from being translated into a functional protein. Real-time PCR was used to assess the levels of mRNA present in a lung adenocarcinoma cell line A549 that overexpresses survivin. The best antisense oligonucleotide was identified that effectively down-regulated survivin mRNA levels and resulted in apoptosis of the cells. Survivin's role in cancer development in the context of a signaling pathway is its ability to inhibit activation of downstream caspase-3 and -7 from apoptosis inducing stimuli. The overexpression of survivin in tumors may serve to increase the tumors resistance to apoptosis and, thus, contribute to cell immortality even in the presence of death stimuli. In this experiment, the oligonucleotide 4003 that targets nucleotides 232-251 of survivin mRNA was found to be the most effective at down-regulating the levels of survivin mRNA in the A549 tumour line. The 4003 oligonucleotides were introduced into the tumour cells by transfection. Further experiments were then conducted on 4003. One of the additional experiments involved determining the dose-dependent effect of 4003 on the down-regulation of survivin mRNA levels. It was found that a concentration of 400 nM resulted in a maximum down-regulation of 70% of the initial survivin mRNA present. Another experiment on 4003 involved assessing any biological or cytotoxic effect 4003 down-regulation of survivin mRNA has on A549 cells using the MTT assay. The numbers of A549 cells transfected with 4003 significantly decreased with increasing concentration of 4003 compared to cells transfected either with a mismatch form of the 4003 or lipofectin control. Many physical observations that confirmed the induction of apoptosis by 4003 were made. For example, lysates of the 4003-treated cells showed increased levels of caspase-3-like protease activity; nuclei were observed to be condensed and chromatin was fragmented.
As a drug target:
Cancer immunotherapy Survivin has been a target of attention in recent years for cancer immunotherapy, as it is an antigen that is expressed mostly in cancer cells and absent in normal cells. This is because survivin is deemed to be a crucial player in tumour survival. There has been much evidence accumulated over the years that shows survivin as a strong T-cell-activating antigen, and clinical trials have already been initiated to prove its usefulness in the clinic.
As a drug target:
Activation of the adaptive immune system A. Cellular T cell response The first evidence of survivin-specific CTL recognition and killing was shown in an assay wherein cytotoxic T cells (CTLs) induced lysis of B cells transfected to present survivin peptides on its surface. The naive CD8+ T cells were primed with dendritic cells and could therefore recognize the specific peptides of survivin presented on the surface Major Histocompatibility Complex I (MHC I) molecules of the B cells.
As a drug target:
B. Humoral antibody response Taking blood samples from cancer patients, scientists have found antibodies that are specific for survivin. These antibodies were absent in the blood samples of healthy normal patients. Therefore, this shows that survivin is able to elicit a full humoral immune response. This may prove useful, as one could measure the level of survivin-specific antibodies in the patient's blood as a monitor of tumour progression. In acquiring the humoral response to tumour antigens such as survivin, CD4+ T cells are activated to induce B cells to produce antibodies directed against the particular antigens.
As a drug target:
The isolation of the antibodies specific for survivin peptides is useful, as one can look at the structure and sequence of the epitope binding groove of the antibody and, therefore, deduce possible epitopes that may fit in that particular antibody groove. Therefore, one can determine the particular peptide portion of the survivin protein that is bound most efficiently and most commonly by humoral antibodies generated against survivin. This will lead to the production of more specific survivin vaccines that contain a specific portion of the survivin protein that is known to elicit a good immune response, generate immune memory, and allow for protection from tumour development.
As a drug target:
Over-expression in tumours and metastatic tissues Xiang et al. found a new approach in inhibiting tumour growth and metastasis by simultaneously attacking both the tumour and its vasculature by a cytotoxic T cell (CTL) response against the survivin protein, which will later result in the activation of apoptosis in tumour cells.The idea and general principle behind his technique is described below. Mice were immunized with the oral vaccination and then subjected to tumour challenges by injecting them in the chest with a certain number of tumour cells and a Matrigel pre-formed extracellular matrix to hold the tumour cells together. The mice were sacrificed and the endothelium tissue was stained with a fluorescent dye that would aid in the quantification of tumour neovascularisation using a Matrigel assay. There was found to be a significant difference between the control and test groups, whereby mice given the vaccine had less angiogenesis from the tumour challenge than the control mice that were not given any of the vaccine prior to tumour challenge. In vitro assays and other tests were also performed to validate the idea of the occurrence of an actual immune response to support what they observed in the mice. For example, the spleen on the challenged mice were isolated and measured for the presence of any cytokines, and specifically activated immune cell groups that would indicative that a specific immune response did occur upon vaccination. The isolated CTLs specific for the survivin protein after vaccination of the mice were used in cytoxicity assays where mice tumour cells expressing survivin were shown to be killed upon incubation with the specific CTLs.By using an oral DNA vaccine carried in an attenuated non-virulent form of Salmonella typhimurium, which co-encoded secretory chemokine CCL21 and survivin protein in C57BL/6J mice, Xiang et al. have been able to elicit an immune response carried out by dendritic cells (DCs) and CTLs to eliminate and suppress the pulmonary metastases of non-small cell lung carcinoma. The activation of the immune response is most likely taking place in the secondary lymphoid organ called the Peyer's Patch in the small intestine where DCs take up the survivin protein by phagocytosis and present them on their surface receptors to naive CD8+ T cells (uninactivated CTL) to achieve a specific immune response targeting survivin exclusively. Activated CTLs specific for a particular antigen kill their target cells by first recognizing parts of the survivin protein expressed on MHC I (immunohistocompatability) proteins presented on the surface of tumour cells and vasculature and then releasing granules that induce the tumour cells to undergo apoptosis. The DNA vaccine contained the CCL21 secretory chemokine as a way to enhance the likelihood of eliciting the immune response by better mediating the physical interaction of the antigen-presenting DCs and the naive CD8+ T cells, resulting in a greater likelihood of immune activation.
As a drug target:
Resveratrol-mediated sensitization It has been shown by Fulda et al. that the naturally occurring compound resveratrol (a polyphenol found in grapes and red wine) can be used as a sensitizer for anticancer drug-induced apoptosis by the action of causing cell cycle arrest. This cell cycle arrest causes a dramatic decline in survivin levels in the cells, as it is known from the literature that survivin expression is highly linked with the cell cycle phase state. Thus, the decrease in survivin, which is a contributing factor to chemotherapy resistance and apoptosis induction therapies, would render the cancer cells more prone to such cancer treatments. Fulda et al. have demonstrated the benefits of resveratrol through a series of experiments. First, the authors of the paper tested the intrinsic cytotoxic effects of resveratrol. They found that it induced moderate apoptosis levels only in SHEP neuroblastoma cells. After, they tested resveratrol in combination with several different known anticancer agents. They found a consistent increase in the level of apoptosis induced by the drugs when resveratrol was also present. Moreover, they varied the order with which either the drugs or resveratrol was introduced to the cancer cells to determine whether the sequence of treatment had any important effect. It was found that the highest levels of apoptosis induction were observed when resveratrol was added prior to anticancer drug treatment. Next, the authors tested for any differential sensitivity to apoptosis linked to the phase of the cell cycle the cells were in. Analysis by flow cytometry revealed an accumulation of cells in S phase upon treatment with resveratrol. The cells were also halted in different phases of the cell cycle using special compounds and then treated with the anticancer drugs. They found that cells halted in S phase were significantly more sensitive to the cytotoxic effects of the drugs.To determine the involvement of survivin in resveratrol-mediated sensitization, the authors decided to test whether downregulation of the specific survivin protein expression would confer a similar effect on the phenotype of resveratrol-treated cells. In terms of seeing at which level resveratrol worked, they did a northern blot and found that resveratrol treatment resulted in a decrease in survivin mRNA levels, thus implying resveratrol's inhibitory action at the transcriptional level. To further see whether survivin played a key role in sensitization of the cancer cells to cytotoxic drugs, survivin antisense oligonucleotides were used to knock down any survivin mRNA, and, thus, its possibility to be translated is also eliminated. siRNAs for survivin are complements in sequence to the mRNA sequence encoding survivin. When these siRNAs for survivin are introduced into cells, they will bind to the respective complementary mRNA and, thus, prevent its translation since the mRNA is now impeded from proper physical interaction with the translational machinery. In this way, the siRNAs for survivin effectively downregulates survivin expression level in the cell. Cells treated with antisense oligonucleotides for survivin showed similar sensitization to cytotoxic drugs as cells treated with resveratrol, which offers support for the mechanism of action of resveratrol.
As a drug target:
Prostate cancer It has been observed that the development of hormone resistance in prostate cancer may be due to the upregulation of antiapoptotic genes, one of which is survivin.Zhang et al. hypothesize that, if survivin is a significant contributor to the development of hormonal therapy resistance in prostate cancer cells, targeting survivin and blocking it would enhance prostate cancer cell susceptibility to anti-androgen therapy. (Anti-androgen therapy uses drugs to eliminate the presence of androgens in the cell and cellular environment, since such androgens are known to enhance tumour immortality in prostate cancer cells.) Zhang et al. first assessed the level of survivin expression of LNCaP (an androgen-dependent prostate cancer cell line that expresses intact androgen receptors) using quantitative Western analysis and found high expression of survivin in these cells. Cells exposed to dihydrotestosterone (DHT) showed increased levels of survivin expression only and not other IAP family members. This result suggests that androgens may upregulate survivin, which contributes to the resistance to apoptosis observed in the tumour cells. Next, with the addition of flutamide (an antiandrogen) to the cells, survivin levels were observed to significantly decrease. The LNCaP cells were transduced separately with the different constructs of the survivin gene (mutant or wild-type) and subjected to flutamide treatment and assessed for the apoptosis level. Flutamide-treated survivin mutant-transduced cells were shown to significantly increase apoptosis by double that of flutamide treatment alone. On the other end, overexpression of the wild-type survivin was found to significantly reduce the apoptosis levels from flutamide treatment compared to flutamide treatment alone. Therefore, these results support the hypothesis that survivin plays a role in the anti-apoptotic nature of the LNCaP cancer cell line and that inhibiting survivin in prostate cancer cells appears to enhance the therapeutic effect of flutamide.
Interactions:
Survivin has been shown to interact with: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rotational temperature**
Rotational temperature:
The characteristic rotational temperature (θR or θrot) is commonly used in statistical thermodynamics to simplify the expression of the rotational partition function and the rotational contribution to molecular thermodynamic properties. It has units of temperature and is defined as θR=hcB¯kB=ℏ22kBI, where B¯=B/hc is the rotational constant, I is a molecular moment of inertia, h is the Planck constant, c is the speed of light, ħ = h/2π is the reduced Planck constant and kB is the Boltzmann constant.
Rotational temperature:
The physical meaning of θR is as an estimate of the temperature at which thermal energy (of the order of kBT) is comparable to the spacing between rotational energy levels (of the order of hcB). At about this temperature the population of excited rotational levels becomes important. Some typical values are given in the table. In each case the value refers to the most common isotopic species. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glaciolacustrine deposits**
Glaciolacustrine deposits:
Sediments deposited into lakes that have come from glaciers are called glaciolacustrine deposits. In some European geological traditions, the term limnoglacial is used. These lakes include ice margin lakes or other types formed from glacial erosion or deposition. Sediments in the bedload and suspended load are carried into lakes and deposited. The bedload is deposited at the lake margin while the suspended load is deposited all over the lake bed. Glaciolacustrine deposits commonly form varves, which are annually deposited layers of silt and clay, where silt is deposited during the summer, and clay during the winter.
Bedload deposits:
Sediments carried in the bedload of a stream, mostly sands and gravels, are deposited in deltas that form at the edges of lakes. These deposits will only be found near the edges of the lake.
Suspended deposits:
Sediments that are carried in the suspended load of a stream, commonly silts and clays, are transported into the lake in suspension or by currents along the lake floor. These are the principal deposits during the winter because of lack of melting of the glacier so the stream has a reduced discharge therefore carrying less coarse material. These sediments normally consist of fine-grained rhythmites that are laid down in layers known as varves or varvites. A varve represent an annual deposit of silt and clay. Sedimentation in deltas also occurs in rhythmic patterns as in the lake deposits, but they are thicker and contain coarse-grained materials instead of just silt and clay. As the varves get closer to the shoreline the clay layer will stay relatively the same thickness, but there will be an increase in thickness of the silt layer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**WS-Coordination**
WS-Coordination:
WS-Coordination is a Web Services specification developed by BEA Systems, IBM, and Microsoft and accepted by OASIS Web Services Transaction TC in its 1.2 version. It describes an extensible framework for providing protocols that coordinate the actions of distributed applications. Such coordination protocols are used to support a number of applications, including those that need to reach consistent agreement on the outcome of distributed transactions. The framework defined in this specification enables an application service to create a context needed to propagate an activity to other services and to register for coordination protocols. The framework enables existing transaction processing, workflow, and other systems for coordination to hide their proprietary protocols and to operate in a heterogeneous environment. Additionally WS-Coordination describes a definition of the structure of context and the requirements for propagating context between cooperating services. However, this specification isn't enough to coordinate transactions among web services. It only provides a coordination framework, and other specifications like WS-Atomic Transaction or WS-BusinessActivity are needed for this purpose. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ambulance chasing**
Ambulance chasing:
Ambulance chasing, also known as barratry, is a term which refers to a lawyer soliciting for clients at a disaster site. The term "ambulance chasing" comes from the stereotype of lawyers who follow ambulances to the emergency room to find clients. "Ambulance chaser" is used as a derogatory term for a personal injury lawyer.
History:
In 1881, Edward Watkin of the South Eastern Railway (England) complained about attorneys who solicited business from passengers after accidents: We had an accident, I may tell you, at Forrest-hill two years ago. Well, there was a gentleman—an attorney in the train. He went round to all the people in the train and gave them his card; and, having distributed all the cards in his card-case, he went round and expressed extreme regret to the others that he could not give them a card; but he gave them his name as ‘So and So,’ his place was in ‘Such a street,’ and the ‘No, So and So’ in the City. That was touting for business.
History:
"Now, there is a very admirable body called the 'Law Association'", Watkin added. "Why does not the Law Association take hold of cases of that kind?"
Description:
Ambulance chasing is prohibited in the United States by state rules that follow Rule 7.3 of the American Bar Association Model Rules of Professional Conduct. Some bar associations strongly enforce rules against ambulance chasing. For example, the State Bar of California dispatches investigators to large-scale disaster scenes to discourage ambulance chasers, and to catch any who attempt to solicit business from disaster victims at the scene.In the UK, Indicative Behaviour (IB) 8.5 of the Solicitors Regulation Authority Code of Conduct 2011 specifies that "approaching people in the street, at ports of entry, in hospital or at the scene of an accident" is to be taken as an indication of non-compliance with the SRA Principles.
Other uses:
The term has also been used to refer to disreputable motorsport journalists who cover racing crashes in a tabloid journalism-style with little respect for those who may have been injured or killed.In scientific literature, the term “ambulance chasing” refers to a socio-scientific phenomenon that manifests as a surge in the number of preprint papers on a particular topic. In particular, it refers to interpretive papers published quickly after a new anomalous measurement has been produced. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Multifidelity simulation**
Multifidelity simulation:
Multifidelity (or multi-fidelity) methods leverage both low- and high-fidelity data in order to maximize the accuracy of model estimates, while minimizing the cost associated with parametrization. They have been successfully used in impedance cardiography, wing-design optimization, robotic learning, computational biomechanics, and have more recently been extended to human-in-the-loop systems, such as aerospace and transportation. They include both model-based methods, where a generative model is available or can be learned, in addition to model-free methods, that include regression-based approaches, such as stacked-regression. A more general class of regression-based multi-fidelity methods are Bayesian approaches, e.g. Bayesian linear regression, Gaussian mixture models, Gaussian processes, auto-regressive Gaussian processes, or Bayesian polynomial chaos expansions.The approach used depends on the domain and properties of the data available, and is similar to the concept of metasynthesis, proposed by Judea Pearl.
Data fidelity spectrum:
The fidelity of data can vary along a spectrum between low- and high-fidelity. The next sections provide examples of data across the fidelity spectrum, while defining the benefits and limitations of each type of data.
Data fidelity spectrum:
Low fidelity data (LoFi) Low-fidelity data (LoFi) includes any data that was produced by a person or Stochastic Process that deviates from the real-world system of interest. For example, LoFi data can be produced by models of a physical system that use approximations to simulate the system, rather than modeling the system in an exhaustive manner.Moreover, in human-in-the-loop (HITL) situations the goal may be to predict the impact of technology on expert behavior within the real-world operational context. Machine learning can be used to train statistical models that predict expert behavior, provided that an adequate amount of high-fidelity (i.e., real-world) data are available or can be produced.
Data fidelity spectrum:
LoFi benefits and limitations In situations when there is not an adequate amount of high-fidelity data available to train the model, low-fidelity data can sometimes be used. For example, low-fidelity data can be acquired by using a distributed simulation platform, such as X-Plane, and requiring novice participants to operate in scenarios that are approximations of the real-world context. The benefit of using low-fidelity data is that they are relatively inexpensive to acquire, so it is possible to elicit larger amounts of data. However, the limitation is that the low-fidelity data may not be useful for predicting real-world expert (i.e., high-fidelity) performance due to differences between the low-fidelity simulation platform and the real-world context, or between novice and expert performance (e.g., due to training).
Data fidelity spectrum:
High-fidelity data (HiFi) High-fidelity data (HiFi) includes data that was produced by a person or Stochastic Process that closely matches the operational context of interest. For example, in wing design optimization, high-fidelity data uses physical models in simulation that produce results that closely match the wing in a similar real-world setting. In HITL situations, HiFi data would be produced from an operational expert acting in the technological and situational context of interest.
Data fidelity spectrum:
HiFi benefits and limitations An obvious benefit of utilizing high-fidelity data is that the estimates produced by the model should generalize well to the real-world context. However, these data are expensive in terms of both time and money, which limits the amount of data that can be obtained. The limited amount of data available can significantly impair the ability of the model to produce valid estimates.
Data fidelity spectrum:
Multifidelity methods (MfM) Multifidelity methods attempt to leverage the strengths of each data source, while overcoming the limitations. Although small to medium differences between low- and high-fidelity data are sometimes able to be overcome by multifidelity models, large differences (e.g., in KL divergence between novice and expert action distributions) can be problematic leading to decreased predictive performance when compared to models that exclusively relied on high-fidelity data.Multifidelity models enable low-fidelity data to be collected on different technology concepts to evaluate the risk associated with each concept before actually deploying the system.
Bayesian auto-regressive Gaussian processes:
In an auto-regressive model of Nt Gaussian processes (GP), each level of output fidelity, t , where a higher t denotes a higher fidelity, is modeled as a GP, zt(x) , which can be expressed in terms of the previous level's GP, zt−1(x) , a proportionality constant ρt−1 and a "difference-GP" δt(x) as follows: z1(x)=δ1(x) zt(x)=ρt−1zt−1(x)+δt(x) The scaling constant that quantifies the correlation of levels t and t−1 , and can generally depend on x .Under the assumption, that all information about a level is contained in the data corresponding to the same pivot point x at level t as well as t−1 , semi-analytical first and second moments are feasible. This assumption formally is Cov(zt(x),zt−1(x′)∣zt−1(x))=0 I.e. given a data at x on level t−1 , there is no further information about level t to extract from the data x′ on level t−1 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pointing device**
Pointing device:
A pointing device is a human interface device that allows a user to input spatial (i.e., continuous and multi-dimensional) data to a computer. CAD systems and graphical user interfaces (GUI) allow the user to control and provide data to the computer using physical gestures by moving a hand-held mouse or similar device across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer (or cursor) and other visual changes. Common gestures are point and click and drag and drop.
Pointing device:
While the most common pointing device by far is the mouse, many more devices have been developed. However, the term mouse is commonly used as a metaphor for devices that move a computer cursor.
Fitts's law can be used to predict the speed with which users can use a pointing device.
Classification:
To classify several pointing devices, a certain number of features can be considered. For example, the device's movement, controlling, positioning or resistance. The following points should provide an overview of the different classifications.
Classification:
direct vs. indirect inputIn case of a direct-input pointing device, the on-screen pointer is at the same physical position as the pointing device (e.g., finger on a touch screen, stylus on a tablet computer). An indirect-input pointing device is not at the same physical position as the pointer but translates its movement onto the screen (e.g., computer mouse, joystick, stylus on a graphics tablet).
Classification:
absolute vs. relative movementAn absolute-movement input device (e.g., stylus, finger on touch screen) provides a consistent mapping between a point in the input space (location/state of the input device) and a point in the output space (position of pointer on screen). A relative-movement input device (e.g., mouse, joystick) maps displacement in the input space to displacement in the output state. It therefore controls the relative position of the cursor compared to its initial position.
Classification:
isotonic vs. elastic vs. isometricAn isotonic pointing device is movable and measures its displacement (mouse, pen, human arm) whereas an isometric device is fixed and measures the force which acts on it (trackpoint, force-sensing touch screen). An elastic device increases its force resistance with displacement (joystick).
position control vs. rate controlA position-control input device (e.g., mouse, finger on touch screen) directly changes the absolute or relative position of the on-screen pointer.
A rate-control input device (e.g., trackpoint, joystick) changes the speed and direction of the movement of the on-screen pointer.
translation vs. rotationAnother classification is the differentiation between whether the device is physically translated or rotated.
degrees of freedomDifferent pointing devices have different degrees of freedom (DOF). A computer mouse has two degrees of freedom, namely its movement on the x- and y-axis. However the Wiimote has 6 degrees of freedom: x-, y- and z-axis for movement as well as for rotation.
possible statesAs mentioned later in this article, pointing devices have different possible states. Examples for these states are out of range, tracking or dragging.
Examples a computer mouse is an indirect, relative, isotonic, position-control, translational input device with two degrees of freedom (x, y position) and two states (tracking, dragging).
a touch screen is a direct, absolute, isometric, position-control input device with two or more degrees of freedom (x, y position and optionally pressure) and two states (out of range, dragging).
a joystick is an indirect, relative, elastic, rate-control, translational input device with two degrees of freedom (x, y angle) and two states (tracked, dragging).
a Wiimote is an indirect, relative, elastic, rate-control, translational input device with six degrees of freedom (x, y, z orientation and x, y, z position) and two or three states (tracking, dragging for orientation and position; out-of-range for position).
Buxton's taxonomy:
The following table shows a classification of pointing devices by their number of dimensions (columns) and which property is sensed (rows) introduced by Bill Buxton. The sub-rows distinguish between mechanical intermediary (i.e. stylus) (M) and touch-sensitive (T). It is rooted in the human motor/sensory system. Continuous manual input devices are categorized. Sub-columns distinguish devices that use comparable motor control for their operation. The table is based on the original graphic of Bill Buxton's work on "Taxonomies of Input".
Buxton's Three-State-Model:
This model describes different states that a pointing device can assume. The three common states as described by Buxton are out of range, tracking and dragging. Not every pointing device can switch to all states.
Fitts' Law:
Fitts's law (often cited as Fitts' law) is a predictive model of human movement primarily used in human–computer interaction and ergonomics. This scientific law predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target. Fitts's law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device.
Fitts' Law:
In other words, this means for example, that the user needs more time to click on a small button which is distant to the cursor, than he needs to click a large button near the cursor. Thereby it is generally possible to predict the speed which is needed for a selective movement to a certain target.
Mathematical formulation The common metric to calculate the average time to complete the movement is the following: MT ID log 2(2DW) where: MT is the average time to complete the movement.
a and b are constants that depend on the choice of input device and are usually determined empirically by regression analysis.
ID is the index of difficulty.
D is the distance from the starting point to the center of the target.
W is the width of the target measured along the axis of motion.
W can also be thought of as the allowed error tolerance in the final position, since the final point of the motion must fall within ±W⁄2 of the target's center.This results in the interpretation that, as mentioned before, large and close targets can be reached faster than little, distant targets.
Applying Fitts' Law in user interface design As mentioned above, the size and distance of an object influence its selection. Additionally this effects the user experience. Therefore, it is important, that Fitts' Law is considered while designing user interfaces. Below some basic principles are mentioned.
Fitts' Law:
Interactive elementsCommand buttons for example should have different sizes than non-interactive elements. Larger interactive objects are easier to select with any pointing device.Edges and cornersDue to the fact, that the cursor gets pinned on the edges and corners of a graphical user interface, those points can be accessed faster than other spots on the display.Pop-up menusThey should support immediate selection of interactive elements in order to reduce the user's "travel time".Options for selectingWithin menus like dropdown menus or top-level navigation, the distance increases the further the user goes down the list. However in pie menus, the distance to the different buttons is always the same. In addition, the target areas in pie menus are larger.Task barsTo operate a task bar, the user needs a higher level of precision, thus more time. Generally they hinder the movement through the interface.
Control-Display Gain:
The Control-Display Gain (or CD gain) describes the proportion between movements in the control space to the movements in the display space. For example, a hardware mouse moves in another speed or distance than the cursor on the screen. Even if these movements take place in two different spaces, the units for measurement have to be the same in order to be meaningful (e.g. meters instead of pixels). The CD gain refers to the scale factor of these two movements: CDgain=VDisplay/VControl The CD gain settings can be adjusted in most cases. However, a compromise has to be found: with high gains it is easier to approach a distant target, with low gains this takes longer. High gains hinder the selection of targets, whereas low gains facilitate this process. The Microsoft, macOS and X window systems have implemented mechanisms which adapt the CD gain to the user's needs. e.g. the CD gain increases when the user's movement velocity increases (historically referred to as "mouse acceleration").
Common pointing devices:
Motion-tracking pointing devices Mouse A mouse is a small handheld device pushed over a horizontal surface.
Common pointing devices:
A mouse moves the graphical pointer by being slid across a smooth surface. The conventional roller-ball mouse uses a ball to create this action: the ball is in contact with two small shafts that are set at right angles to each other. As the ball moves these shafts rotate, and the rotation is measured by sensors within the mouse. The distance and direction information from the sensors is then transmitted to the computer, and the computer moves the graphical pointer on the screen by following the movements of the mouse. Another common mouse is the optical mouse. This device is very similar to the conventional mouse but uses visible or infrared light instead of a roller-ball to detect the changes in position.
Common pointing devices:
Additionally there is the mini-mouse, which is a small egg-sized mouse for use with laptop computers; usually small enough for use on a free area of the laptop body itself, it is typically optical, includes a retractable cord and uses a USB port to save battery life.
Common pointing devices:
Trackball A trackball is a pointing device consisting of a ball housed in a socket containing sensors to detect rotation of the ball about two axis, similar to an upside-down mouse: as the user rolls the ball with a thumb, fingers, or palm the pointer on the screen will also move. Tracker balls are commonly used on CAD workstations for ease of use, where there may be no desk space on which to use a mouse. Some are able to clip onto the side of the keyboard and have buttons with the same functionality as mouse buttons. There are also wireless trackballs which offer a wider range of ergonomic positions to the user.
Common pointing devices:
Joystick Isotonic joysticks are handle sticks where the user can freely change the position of the stick, with more or less constant force.
Isometric joysticks are where the user controls the stick by varying the amount of force they push with, and the position of the stick remains more or less constant. Isometric joysticks are often cited as more difficult to use due to the lack of tactile feedback provided by an actual moving joystick.
Common pointing devices:
Pointing stick A pointing stick is a pressure-sensitive small nub used like a joystick. It is usually found on laptops embedded between the G, H, and B keys. It operates by sensing the force applied by the user. The corresponding "mouse" buttons are commonly placed just below the space bar. It is also found on mice and some desktop keyboards.
Common pointing devices:
Wii Remote The Wii Remote, also known colloquially as the Wiimote, is the primary controller for Nintendo's Wii console. A main feature of the Wii Remote is its motion sensing capability, which allows the user to interact with and manipulate items on screen via gesture recognition and pointing through the use of accelerometer and optical sensor technology.
Common pointing devices:
Finger tracking A finger tracking device tracks fingers in the 3D space or close to the surface without contact with a screen. Fingers are triangulated by technologies like stereo camera, time-of-flight and laser. Good examples of finger tracking pointing devices are LM3LABS' Ubiq'window and AirStrike Position-tracking pointing devices Graphics tablet A graphics tablet or digitizing tablet is a special tablet similar to a touchpad, but controlled with a pen or stylus that is held and used like a normal pen or pencil. The thumb usually controls the clicking via a two-way button on the top of the pen, or by tapping on the tablet's surface.
Common pointing devices:
A cursor (also called a puck) is similar to a mouse, except that it has a window with cross hairs for pinpoint placement, and it can have as many as 16 buttons. A pen (also called a stylus) looks like a simple ballpoint pen but uses an electronic head instead of ink. The tablet contains electronics that enable it to detect movement of the cursor or pen and translate the movements into digital signals that it sends to the computer." This is different from a mouse because each point on the tablet represents a point on the screen.
Common pointing devices:
Stylus A stylus is a small pen-shaped instrument that is used to input commands to a computer screen, mobile device or graphics tablet.
The stylus is the primary input device for personal digital assistants, smartphones and some handheld gaming systems such as the Nintendo DS that require accurate input, although devices featuring multi-touch finger-input with capacitive touchscreens have become more popular than stylus-driven devices in the smartphone market.
Common pointing devices:
Touchpad A touchpad or trackpad is a flat surface that can detect finger contact. It is a stationary pointing device, commonly used on laptop computers. At least one physical button normally comes with the touchpad, but the user can also generate a mouse click by tapping on the pad. Advanced features include pressure sensitivity and special gestures such as scrolling by moving one's finger along an edge.
Common pointing devices:
It uses a two-layer grid of electrodes to measure finger movement: one layer has vertical electrode strips that handle vertical movement, and the other layer has horizontal electrode strips to handle horizontal movements.
Touchscreen A touchscreen is a device embedded into the screen of the TV monitor, or system LCD monitor screens of laptop computers. Users interact with the device by physically pressing items shown on the screen, either with their fingers or some helping tool.
Several technologies can be used to detect touch. Resistive and capacitive touchscreens have conductive materials embedded in the glass and detect the position of the touch by measuring changes in electric current. Infrared controllers project a grid of infrared beams inserted into the frame surrounding the monitor screen itself, and detect where an object intercepts the beams.
Modern touchscreens could be used in conjunction with stylus pointing devices, while those powered by infrared do not require physical touch, but just recognize the movement of hand and fingers in some minimum range distance from the real screen.
Touchscreens became popular with the introduction of palmtop computers like those sold by the Palm, Inc. hardware manufacturer, some high range classes of laptop computers, mobile smartphone like HTC or the Apple iPhone, and the availability of standard touchscreen device drivers into the Symbian, Palm OS, Mac OS X, and Microsoft Windows operating systems.
Common pointing devices:
Pressure-tracking pointing devices Isometric Joystick In contrast to a 3D Joystick, the stick itself doesn't move or just moves very little and is mounted in the device chassis. To move the pointer, the user has to apply force to the stick. Typical representatives can be found on notebook's keyboards between the "G" and "H" keys. By performing pressure on the TrackPoint, the cursor moves on the display.
Other devices:
A light pen is a device similar to a touch screen, but uses a special light-sensitive pen instead of the finger, which allows for more accurate screen input. As the tip of the light pen makes contact with the screen, it sends a signal back to the computer containing the coordinates of the pixels at that point. It can be used to draw on the computer screen or make menu selections, and does not require a special touch screen because it can work with any CRT display.
Other devices:
Light gun Palm mouse – held in the palm and operated with only two buttons; the movements across the screen correspond to a feather touch, and pressure increases the speed of movement Footmouse – sometimes called a mole – a mouse variant for those who do not wish to or cannot use the hands or the head; instead, it provides footclicks Puck, similar to a mouse, but, but designed for absolute positioning rather than relative. It typically has a transparent plastic with crosshairs for precise positioning and tracing. Pucks are most commonly used for tracing in CAD/CAM/CAE work.
Other devices:
Eye tracking devices – a mouse controlled by the user's retinal movements, allowing cursor-manipulation without touch Finger-mouse – An extremely small mouse controlled by two fingers only; the user can hold it in any position Gyroscopic mouse – a gyroscope senses the movement of the mouse as it moves through the air. Users can operate a gyroscopic mouse when they have no room for a regular mouse or must give commands while standing up. This input device needs no cleaning and can have many extra buttons, in fact, some laptops doubling as TVs come with gyroscopic mice that resemble, and double as, remotes with LCD screens built in.
Other devices:
Steering wheel – can be thought of as a 1D pointing device – see also steering wheel section of game controller article Paddle – another 1D pointing device Jog dial – another 1D pointing device Yoke (aircraft) Some high-degree-of-freedom input devices 3Dconnexion – six-degree controller Discrete pointing devices directional pad – a very simple keyboard Dance pad – used to point at gross locations in space with feet Soap mouse – a handheld, position-based pointing device based on existing wireless optical mouse technology Laser pen – can be used in presentations as a pointing device | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Noodle soup**
Noodle soup:
Noodle soup refers to a variety of soups with noodles and other ingredients served in a light broth. Noodle soup is a common dish across East Asia, Southeast Asia and the Himalayan states of South Asia. Various types of noodles are used, such as rice noodles, wheat noodles and egg noodles.
Varieties:
East Asia China There are myriad noodle soup dishes originating in China, and many of these are eaten in, or adapted in various Asian countries.
Ban mian (板面) – Hakka-style, flat-shaped egg noodles in soup.
Chongqing noodles Cold noodle (冷面/冷麵) – Shanghai-style, flat noodle stirred with peanut butter sauce, soy sauce and vinegar, served cold.
Varieties:
Crossing the bridge noodles (Chinese: 过桥米线; pinyin: Guò qiáo mǐxiàn) – ingredients are placed separately on the table, then added into a bowl of hot chicken stock to be cooked and served. The ingredients are uncooked rice noodles, meat, raw eggs, vegetables and edible flowers. The stock stays warm because of a layer of oil on top of the bowl. Typical cuisine of Kunming, Yunnan Province (昆明, 云南省).
Varieties:
Lanzhou (hand-pulled) beef noodle – (兰州拉面, lanzhou lāmiàn), also called Lanzhou lāmiàn. It is made of stewed or red braised beef soup, beef broth, vegetables and Chinese noodles.
Spring noodle soup (阳春面/陽春麵 yángchūn mian) – white noodles in soup with vegetables. It is one of the most popular and simple Chinese snacks.
Wonton noodles (雲吞麵) – a Cantonese dish.
Hong Kong Cart noodle (車仔麵) – noodle soup sold with an assortment of toppings and styles by street vendors using carts.
Japan Traditional Japanese noodles in soup are served in a hot soy-dashi broth and garnished with chopped scallions. Popular toppings include tempura, tempura batter, kakiage (deep fried vegetables) or aburaage (deep-fried tofu).
Hot soba (そば) – thin brown buckwheat noodles, similar to pizzoccherri pasta but thinner and longer. Also known as Nihon-soba ("Japanese buckwheat noodles"). In Okinawa, however, soba likely refers to Okinawa soba, not buckwheat.
Udon (うどん) – thick wheat noodles served with various toppings, usually in a hot soy-dashi broth, or sometimes in a Japanese curry soup.
Varieties:
Chinese-influenced wheat noodles, served in a meat or chicken broth, have become very popular in the early 20th century.Ramen (ラーメン) – thin light yellow noodle served in hot chicken or pork broth, flavoured with soy or miso, with various toppings such as slices of pork, menma (pickled bamboo shoots), seaweed, or boiled egg. Also known as Shina-soba or Chuka-soba (both mean "Chinese soba").
Varieties:
Champon – yellow noodles of medium thickness served with a great variety of seafood and vegetable toppings in a hot chicken broth which originated in Nagasaki as a cheap food for students.
Okinawa soba (沖縄そば) – a thick wheat-flour noodle served in Okinawa, often served in a hot broth with sōki (steamed pork), kamaboko (fish cake slice), beni shōga (pickled ginger) and kōrēgusu (chilli-infused awamori). Akin to a cross between udon and ramen.
Hōtō – a popular regional dish originating from Yamanashi, Japan made by stewing flat udon noodles and vegetables in miso soup.
North Korea and South Korea Janchi guksu (잔치국수) – noodles in a light seaweed broth, served with fresh condiments (usually kimchi, thinly sliced egg, green onions, and cucumbers).
Jjamppong (짬뽕) – spicy noodle soup of Korean-Chinese origin.
Kalguksu (칼국수) – Hand-cut wheat noodles served in a seafood broth.
Makguksu (막국수) – buckwheat noodles with chilled broth.
Naengmyeon (냉면) – Korean stretchy buckwheat noodles in cold beef broth, with onions, julienned cucumber, boiled egg sliced in half, and slices of pears. This dish is popular in the humid summers of Korea.
Varieties:
Ramyeon (라면) – South Korean noodles in soup, served in food stalls, made of instant noodles with toppings added by stalls. In the 1960s, instant noodles were introduced to South Korea from Japan. Its quick and easy preparation, as well as its cheap price, ensured it quickly caught on. It is typically spicy with chili and kimchi added, amongst other ingredients.
Varieties:
Taiwan Beef noodle soup (牛肉麵) – noodles in beef soup, sometimes with a chunk of stewed beef, beef bouillon granules and dried parsley. Popular in Taiwan.
Oyster vermicelli (蚵仔麵線) – vermicelli noodles with oysters.
Varieties:
Tibet Bhakthuk (Tibetan: བག་ཐུག་, Wylie: bag thug) – flattened short noodles in beef soup, with chunks of stewed beef, dried beef strips, seaweed, daikon, potatoes and topped with green onions. Popular in Tibet as well as Bhutan and Nepal which have large populations of Tibetans. The soup is thicker and richer than thukpa due to use of dried beef strips.
Varieties:
Thukpa (Tibetan: ཐུག་པ་, Wylie: thug pa) or Thenthuk – flat strip noodles in beef soup, with chunks of stewed beef, spinach and topped with green onions. Popular in Tibet as well as Nepal and some areas of India with large Nepalese and Tibetan population.
Southeast Asia Cambodia Kuyteav (គុយទាវ, kŭytéav): – a pork broth based rice noodle soup served with ground pork, shrimp, meat balls, pork liver and garnished with fried garlic, green onions, cilantro, lime and hoisin sauce.
Varieties:
Kuyteav khor ko (គុយទាវខគោ): A rice noodle dish created from the stewed/braised beef combined with flat rice noodles. It features French influences including potatoes and carrots topped off with chives and coriander. It is eaten with bread as well.Num banhchok (នំបញ្ចុក): A popular Cambodian breakfast soup, consisting of lightly fermented rice noodles with a fish gravy made from prahok and yellow kroeung topped off with fresh mint leaves, bean sprouts, green beans, banana flowers, cucumbers and other greens. There is also a red curry version usually reserved for ceremonial occasions and wedding festivities.Nom banhchok samlar khmer (Khmer: នំបញ្ចុកសម្លរខ្មែរ, lit. ‘num banhchok with Khmer soup) often abbreviated as Nom banhchok – a rice noodle soup with a broth based on minced fish, lemongrass as well as specific Cambodian spices that make up the kroeung. In Siem Reap, the broth is prepared with coconut milk and is accompanied by a sweet and spicy tamarind sauce (ទឹកអម្ពិល, tœ̆k âmpĭl), which is not the case in Phnom Penh.Num banhchok samlar kari (នំបញ្ចុកសម្លការី, lit. ‘num banhchok with curry soup): A rice noodle dish eaten with a Khmer curry soup. The curry may be yellow (turmeric soup base) or red (chilli curry soup base) depending on the type of soup created and generally include chicken (including legs) or beef, potatoes, onions, and carrots.
Varieties:
Num banhchok Kampot (នំបញ្ចុកកំពត): A speciality of Kampot featuring a cold rice noodle salad rather than a soup base. It features cuts of spring rolls, a variety of herbs, ground nuts, pork, and fish sauce.
Varieties:
Num banhchok teuk mrech (នំបញ្ចុកទឹកម្ហេច): A speciality soup of Kampot that features a clear fish broth (that does not feature the use of prahok) cooked with chives and vegetables. It is a regional speciality not found in Phnom Penh and other parts of Cambodia where Khmer and Vietnamese varieties of num banhchok are eaten.Mee kiew (មីគាវ, mii kiəv): A Cambodian rendition of the Chinese wonton noodles. The broth is clear topped with garlic chives and the dumplings are filled with seasoned minced pork and shrimp. Variations are often served with wheat vermicelli, a mixture of rice-wheat noodles or flat rice noodles (គុយទាវមីគាវ, kŭytéav mii kiəv).
Varieties:
Laos Feu – fine white noodles in a meat broth, served with a garnish of green leaves and flavourings, typically including lime juice, vinegar, salt and sugar.
Khao piak sen - literally translates to wet rice strands. The broth is usually made from chicken simmered with galangal, lemongrass, kaffir lime leaves, and garlic cooked in oil. The fresh noodles are made of rice flour, tapioca starch, and water and cook directly in the broth, releasing starches that give khao piak sen its distinct consistency.
Khao poon - also known as Lao laksa and is a popular type of spicy Lao rice vermicelli soup. It is a long-simmered soup most often made with pounded chicken, fish, or pork and seasoned with common Lao ingredients such as fish sauce, lime leaves, galangal, garlic, shallots, Lao chillies, and perilla.
Lao khao soi is a soup made with wide rice noodles, coarsely chopped pork, tomatoes, fermented soy beans, chillies, shallots, and garlic, then topped with pork rind, bean sprouts, chopped scallions, and chopped cilantro. Though northern Laotians have a special way of preparing this dish, different versions of it can be found at Lao restaurants.
Indonesia Mi ayam – chicken noodle soup comprising a bowl of chicken stock, boiled choy sim, celery leaves, diced chicken cooked with sweet soy sauce, and fried shallots. Some variants add mushrooms and fried/boiled pangsit (wonton). Normally it is eaten with chili sauce and pickles.
Mi bakso – bakso meatballs served with yellow noodles and rice vermicelli in beef broth.
Mi celor – a noodle dish served in coconut milk soup and shrimp-based broth, specialty of Palembang city, South Sumatra.
Mi koclok – chicken noodle soup from Cirebon. It is served with cabbage, bean sprout, boiled egg, fried onion, and spring onion.
Mi kocok – (lit: "shaken noodle"), is an Indonesian beef noodle soup from Bandung, consists of noodles served in rich beef consommé soup, kikil (beef tendon), bean sprouts, and bakso (beef meatball), kaffir lime juice, and sprinkled with sliced fresh celery, scallion, and fried shallot. Some recipes might add beef tripe.
Mi kopyok – is an Indonesian noodle dish, specialty of Semarang. The dish consists of noodles served in garlic soup, slices of fried tofu, lontong, bean sprouts and crushed of kerupuk gendar, sprinkled with sliced fresh celery, and fried shallot. It served with kecap manis on top.
Mi rebus – literally "boiled noodles" in English, made of yellow egg noodles with a spicy soup gravy.
Soto ayam – spicy chicken soup with rice vermicelli. Served with hard-boiled eggs, slices of fried potatoes, celery leaves, and fried shallots. Sometimes, slices of Lontong (compressed rice roll) or "poya", a powder of mixed fried garlic with shrimp crackers or bitter sambal (orange colored) are added.
Soto mi – spicy noodle soup dish it can be made of beef, chicken, or offals such as skin, cartilage, and tendons of cow's trotters, or tripes. A combination of either noodle or rice vermicelli along with slices of tomato, boiled potato, hard-boiled egg, cabbages, peanut, bean sprout, and beef, offal, or chicken meat are added.
Malaysia and Singapore Assam laksa – rice noodles in a sour fish soup. Various toppings including shredded fish, cucumber, raw onion, pineapple, chilli and mint. There are regional variations throughout Malaysia.
Curry laksa – rice noodles in a coconut curry soup. Topped with prawns or chicken, cockles, bean sprouts, tofu puffs and sliced fish cakes. Boiled egg may be added. Served with a dollop of sambal chilli paste and Vietnamese coriander. Popular in Singapore.
Varieties:
Hae mee (虾面; pinyin: xiāmiàn), or "prawn noodles" – egg noodles served in richly flavored dark soup stock with prawns, pork slices, fish cake slices and bean sprouts topped with fried shallots and spring onion. The stock is made using dried shrimps, plucked heads of prawns, white pepper, garlic and other spices. Traditionally, small cubes of fried pork fat are added to the soup, but this is now less common due to health concerns.
Varieties:
Myanmar (Burma) Kya zan hinga (ကြာဆံဟင်းခါး) – glass noodles in a chicken consommé with mushrooms, bean curd skin, lily stems, shrimp, garlic, pepper and sometimes fish balls. For the addition of texture and flavour, it can be garnished with coriander, sliced shallots, fish sauce, chilli powder and a squeeze of lime.
Kyay oh – a popular noodle soup made with pork and egg in Burmese cuisine. Fish and chicken versions are also made as well as a "dry" version without broth.
Mohinga (မုန့်ဟင်းခါး) – said to be the national dish of Myanmar. Essentially rice noodles in a rich, spicy fish soup. Typical ingredients include fish or prawn sauce, salted fish, lemon grass, tender banana stems, ginger, garlic, pepper, onion, turmeric powder, rice flour, chickpea flour, chili and cooking oil.
On no khauk swe (အုန်းနို့ခေါက်ဆွဲ) – wheat noodles in a chicken and coconut broth. Garnished for added flavour with finely sliced shallots, crispy fried rice cracker, fish sauce, roasted chilli powder and a squeeze of lemon or lime.
Varieties:
Philippines Philippine noodle soups can be seen served in street stalls, as well as in the home. They show a distinct blend of Oriental and Western culture adjusted to suit the Philippine palate. They are normally served with condiments such as patis, soy sauce, the juice of the calamondin, as well as pepper to further adjust the flavor. Like other types of soup, they may be regarded as comfort food and are regularly associated with the cold, rainy season in the Philippines. They are normally eaten with a pair of spoon and fork, alternating between scooping the soup, and handling the noodles, and are less commonly eaten with the combination of chopsticks and a soup spoon.
Varieties:
Almondigas – From the Spanish word "albondigas", which means "meatballs". It features meatballs in a clear broth with vegetables and misua noodles.
Batchoy – a noodle soup from Iloilo garnished with pork innards, crushed pork cracklings, chopped vegetables, and topped with a raw egg.
Varieties:
Batchoy Tagalog – a dish sporting a similar name with its Iloilo counterpart. It features a broth of pork innards like liver and pancreas (lapay) as well as tampalen/tampalin fat - a flavorful pork fat from the stomach area; spiced with garlic, onions, ginger, finger chillies, chilli leaves, and pork blood. Patola (culinary luffa) is the vegetable normally used. The dish also uses misua noodles. It is normally eaten with rice instead of on its own.
Varieties:
Kinalas – a noodle soup from Bicol. It has noodles (flat rice noodles, egg noodles or lye water-soaked noodles) in a beef broth with beef strips, topped with thick gravy-like sauce, scallions and garlic, and served with a hard boiled egg.
Lomi – a noodle soup that uses egg noodles soaked in lye water, in a thick broth. The lye-soaked noodles add a distinct aftertaste to the broth. The dish has meat and vegetables in it, and the broth is thickened by stirring in a raw egg to the dish after the heat is turned off.
Varieties:
Mami – a noodle soup similar to the Chinese variety, with either a beef, pork, chicken, or wanton garnish and topped with chives. Usually thin egg noodles are used, but there are versions using flat rice noodles (ho fan). Introduced in the Philippines by Ma Mon Luk. He coined the term mami in 1950. When it comes to this food, it is akin to two famous restaurants — Ma Mon Luk and Mami King.
Varieties:
Miswa – a soup with wheat flour noodles. Chopped pork (with fat to give more flavor to the soup) is fried before the water is added. The noodles take very little time to cook, so they are added last. The dish also normally has chopped patola. "Miswa" also refers to the noodles itself.
Pancit Molo – a noodle soup that has wonton wrappers for its "noodles." It is normally made from meat broth, leafy as well as chopped vegetables, and possible wonton dumplings.
(Beef) Pares Mami – a noodle soup which combines beef broth-based mami noodle soup and pares, a spiced beef stew with a thich sauce. Pares is laid over the mami noodles and then beef broth is poured over it.
Sinanta – a noodle soup from the Cagayan Valley Region which consists of flat egg noodles, rice vermicelli, spring onions, clams and chicken. The broth is colored with annatto seeds.
Sopas – a noodle soup that has a Western influence. It usually has chicken strips and broth, chopped vegetables, and macaroni noodles. Milk is added to give it a richer flavor. The name literally means "soup".
Sotanghon – a noodle soup that features cellophane noodles, chicken and vegetables. The broth is slightly oily as garlic and onion are sauteed and chicken meat browned before the broth is added. Annatto is added to give it a distinct orange color.
Varieties:
Thailand Chinese style noodle soups in Thailand are commonly eaten at street stalls, canteens and food courts. A variety of noodles, from wide rice noodles to egg noodles, are served in a light stock made from chicken, pork or vegetables, or a mixture thereof, and often topped with either cuts of meat (popular is char siu), fish, pork or beef balls, or wontons, or combinations thereof, and sprinkled with coriander leaves. The diners adjust the flavour by themselves using sugar, nam pla (fish sauce), dried chilli and chilli in vinegar provided in jars at the table. Unlike most other Thai food, noodles are eaten with chopsticks. Both noodles and chopsticks are clear Chinese influences.
Varieties:
In addition to the Chinese style noodle soups, fermented rice noodles (khanom chin) served with a variety of curries or soup-like sauces, are also very popular in Thai cuisine.
Bami nam (Thai: บะหมี่น้ำ) – egg noodles in soup, often with minced pork, braised or roast duck, or cuts of mu daeng (char siu).
Kaeng chuet wunsen (Thai: แกงจืดวุ้นเส้น) – glass noodles in a vegetable soup, often with additional ingredients such as silken tofu, minced pork, mushrooms, and seaweed.
Khanom chin kaeng khiao wan kai (Thai: ขนมจีนแกงเขียวหวานไก่) – Thai fermented rice noodles (khanom chin) served with chicken green curry.
Khanom chin nam ngiao Thai: ขนมจีนน้ำเงี้ยว – Thai fermented rice noodles served in a soup-like sauce made from pork and tomato, crushed fried dry chillies, pork blood, dry fermented soy bean, and dried red kapok flowers.
Khao soi (Thai: ข้าวซอย) – most often egg noodles in a Thai curry soup, with deep-fried egg noodles additionally sprinkled on top; a speciality of northern Thailand.
Kuaitiao nam (Thai: ก๋วยเตี๋ยวน้ำ) – rice noodles in soup.
Nam ngiao (Thai: น้ำเงี้ยว) – a noodle soup of northern Thai cuisine and Shan cuisine with a characteristic spicy and salty flavor.
Yen tafo (Thai: เย็นตาโฟ) – the Thai version of the Chinese dish yong tau foo, it is a clear broth with very silky wide rice noodles, fish balls, sliced fried tofu, squid, and water spinach.
Bami tom yum (Thai: บะหมี่ต้มยำ) – a spicy version of Bami nam, often with other ingredients such as ground peanuts and pork entrails.
Vietnam Bánh canh – a soup made with bánh canh noodles (thick noodles, made from tapioca or tapioca/rice mixture) Bánh đa cua – a soup made with bánh đa đỏ noodles (red noodles) and crab-roe. It's a special dish of Hai Phong.
Bún bò Huế – a spicy signature noodle soup from Huế, consisting of rice vermicelli in a beef broth with beef, shrimp sauce, lemon grass, and other ingredients Bún riêu – rice vermicelli soup with freshwater crab meat, tofu and tomatoes. Congealed boiled pig blood is also sometimes used.
Cao lầu – a signature noodle dish from Hội An consisting of yellow wheat flour noodles in a small amount of broth, with various meats and herbs.
Hủ tiếu – a soup made with bánh hủ tiếu and egg noodles. This dish was brought over by the Teochew immigrants (Hoa people).
Hủ tiếu Nam Vang - a pork broth noodle soup dish that was influenced from the Cambodian noodle soup Kuyteav. It is most commonly eaten in Southern Vietnam. Mì or súp mì - yellow wheat/egg noodle soup brought over by Chinese immigrants. Mì hoành thánh is Vietnamese version of wonton noodles.
Mì Quảng – a signature noodle dish from Quảng Nam consisting of wide yellow rice noodles in a small amount of broth, with various meats and herbs.
Varieties:
Phở – white rice noodles in clear beef broth with thin cuts of beef, garnished with ingredients such as scallions, white onions, coriander leaves, ngo gai ("saw leaf herb"), and mint. Basil, lemon or lime, bean sprouts, and chili peppers are usually provided on a separate plate, which allows customers to adjust the soup's flavor as they like. Some sauces such as hoisin sauce and fish sauce are also sometimes added. Bánh đa dishes in northern Vietnam are also similar to phở.
Varieties:
South Asia Bhutan Bagthuk – flattened short noodles served with potatoes, chilli power and vegetables.
Varieties:
Nepal and Sikkim (India) Thukpa (Nepali: थुक्पा) – boiled noodles, filtered and mixed with vegetables and/or various meat items. A Tibetan influenced dish, the Nepalese version contains more spice such as chili powder, and masala. Popular in Nepal and amongst the Nepalese and Tibetan diasporas in the neighbouring Indian state of Sikkim and within the Darjeeling district of West Bengal. It is also popular amongst the people of Ladakh who have a close cultural and historical connections with Tibet.
Varieties:
North America United States Saimin – Soft wheat and egg noodles in dashi broth. A popular hybrid dish reflecting the multicultural roots of modern Hawaii. Toppings include green onion, kamaboko (fish cakes), and SPAM, char siu (Chinese roast pork), or linguiça. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DBpedia**
DBpedia:
DBpedia (from "DB" for "database") is a project aiming to extract structured content from the information created in the Wikipedia project. This structured information is made available on the World Wide Web. DBpedia allows users to semantically query relationships and properties of Wikipedia resources, including links to other related datasets.In 2008, Tim Berners-Lee described DBpedia as one of the most famous parts of the decentralized Linked Data effort.
Background:
The project was started by people at the Free University of Berlin and Leipzig University in collaboration with OpenLink Software, and is now maintained by people at the University of Mannheim and Leipzig University. The first publicly available dataset was published in 2007. The data is made available under free licences (CC BY-SA), allowing others to reuse the dataset; it doesn't however use an open data license to waive the sui generis database rights.Wikipedia articles consist mostly of free text, but also include structured information embedded in the articles, such as "infobox" tables (the pull-out panels that appear in the top right of the default view of many Wikipedia articles, or at the start of the mobile versions), categorization information, images, geo-coordinates and links to external Web pages. This structured information is extracted and put in a uniform dataset which can be queried.
Dataset:
The 2016-04 release of the DBpedia data set describes 6.0 million entities, out of which 5.2 million are classified in a consistent ontology, including 1.5 million persons, 810,000 places, 135,000 music albums, 106,000 films, 20,000 video games, 275,000 organizations, 301,000 species and 5,000 diseases. DBpedia uses the Resource Description Framework (RDF) to represent extracted information and consists of 9.5 billion RDF triples, of which 1.3 billion were extracted from the English edition of Wikipedia and 5.0 billion from other language editions.From this data set, information spread across multiple pages can be extracted. For example, book authorship can be put together from pages about the work, or the author.One of the challenges in extracting information from Wikipedia is that the same concepts can be expressed using different parameters in infobox and other templates, such as |birthplace= and |placeofbirth=. Because of this, queries about where people were born would have to search for both of these properties in order to get more complete results. As a result, the DBpedia Mapping Language has been developed to help in mapping these properties to an ontology while reducing the number of synonyms. Due to the large diversity of infoboxes and properties in use on Wikipedia, the process of developing and improving these mappings has been opened to public contributions.Version 2014 was released in September 2014. A main change since previous versions was the way abstract texts were extracted. Specifically, running a local mirror of Wikipedia and retrieving rendered abstracts from it made extracted texts considerably cleaner. Also, a new data set extracted from Wikimedia Commons was introduced.
Dataset:
As of June 2021, DBPedia contains over 850 million triples.
Examples:
DBpedia extracts factual information from Wikipedia pages, allowing users to find answers to questions where the information is spread across multiple Wikipedia articles. Data is accessed using an SQL-like query language for RDF called SPARQL. For example, if one were interested in the Japanese shōjo manga series Tokyo Mew Mew, and wanted to find the genres of other works written by its illustrator Mia Ikumi.DBpedia combines information from Wikipedia's entries on Tokyo Mew Mew, Mia Ikumi and on works such as Super Doll Licca-chan and Koi Cupid. Since DBpedia normalises information into a single database, the following query can be asked without needing to know exactly which entry carries each fragment of information, and will list related genres:
Use cases:
DBpedia has a broad scope of entities covering different areas of human knowledge. This makes it a natural hub for connecting datasets, where external datasets could link to its concepts. The DBpedia dataset is interlinked on the RDF level with various other Open Data datasets on the Web. This enables applications to enrich DBpedia data with data from these datasets. As of September 2013, there are more than 45 million interlinks between DBpedia and external datasets including: Freebase, OpenCyc, UMBEL, GeoNames, MusicBrainz, CIA World Fact Book, DBLP, Project Gutenberg, DBtune Jamendo, Eurostat, UniProt, Bio2RDF, and US Census data. The Thomson Reuters initiative OpenCalais, the Linked Open Data project of The New York Times, the Zemanta API and DBpedia Spotlight also include links to DBpedia. The BBC uses DBpedia to help organize its content. Faviki uses DBpedia for semantic tagging. Samsung also includes DBpedia in its "Knowledge Sharing Platform".
Use cases:
Such a rich source of structured cross-domain knowledge is fertile ground for Artificial Intelligence systems. DBpedia was used as one of the knowledge sources in IBM Watson's Jeopardy! winning systemAmazon provides a DBpedia Public Data Set that can be integrated into Amazon Web Services applications.Data about creators from DBpedia can be used for enriching artworks' sales observations.The crowdsourcing software company, Ushahidi, built a prototype of its software that leveraged DBpedia to perform semantic annotations on citizen-generated reports. The prototype incorporated the "YODIE" (Yet another Open Data Information Extraction system) service developed by the University of Sheffield, which uses DBpedia to perform the annotations. The goal for Ushahidi was to improve the speed and facility with which incoming reports could be validated managed.
DBpedia Spotlight:
DBpedia Spotlight is a tool for annotating mentions of DBpedia resources in text. This allows linking unstructured information sources to the Linked Open Data cloud through DBpedia. DBpedia Spotlight performs named entity extraction, including entity detection and name resolution (in other words, disambiguation). It can also be used for named entity recognition, and other information extraction tasks. DBpedia Spotlight aims to be customizable for many use cases. Instead of focusing on a few entity types, the project strives to support the annotation of all 3.5 million entities and concepts from more than 320 classes in DBpedia. The project started in June 2010 at the Web Based Systems Group at the Free University of Berlin.
DBpedia Spotlight:
DBpedia Spotlight is publicly available as a web service for testing and a Java/Scala API licensed via the Apache License. The DBpedia Spotlight distribution includes a jQuery plugin that allows developers to annotate pages anywhere on the Web by adding one line to their page. Clients are also available in Java or PHP. The tool handles various languages through its demo page and web services. Internationalization is supported for any language that has a Wikipedia edition.
Archivo ontology database:
From 2020, the DBpedia project provides a regularly updated database of web‑accessible ontologies written in the OWL ontology language. Archivo also provides a four star rating scheme for the ontologies it scrapes, based on accessibility, quality, and related fitness‑for‑use criteria. For instance, SHACL compliance for graph‑based data is evaluated when appropriate. Ontologies should also contain metadata about their characteristics and specify a public license describing their terms‑of‑use. As of June 2021 the Archivo database contains 1368 entries.
History:
DBpedia was initiated in 2007 by Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak and Zachary Ives. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mandatory eight count**
Mandatory eight count:
The mandatory eight count, also called a compulsory eight count, is a rule in boxing and kickboxing requiring the referee to give any fighter a count of eight seconds once they have been knocked down by their opponent, and before the fight is allowed to resume. Even if the fighter gets up before the count reaches eight, the referee is required to count to eight before checking if the fighter is able to continue unless they make a judgement call that the fighter cannot continue. The mandatory eight count is a part of the Unified Rules of Boxing as adopted by the Association of Boxing Commissions.
History:
The Marquess of Queensberry Rules, the base rules of boxing, defined that fighters should be given ten seconds to return to their feet after being knocked down. In 1953, the New York State Athletic Commission introduced the first mandatory eight count for all matches except championship matches. The move was done to protect boxers from unnecessary damage. Ten years later, the mandatory eight count was adopted for all matches in a regulation passed by the New York State Legislature. The mandatory eight count was first used in 1961 in a title fight for the bout between Floyd Patterson and Ingemar Johansson in Florida. Reaction to the new rule from the fighters was positive with Johansson saying "It was good that he had the eight-count" and Patterson said "The eight-count helped me, those extra few seconds gave my head a chance to clear." In 1997, the mandatory eight count was adopted by the World Kickboxing Association for professional kickboxing matches.The mandatory eight count is different from the standing eight count where referees had the power to pause the fight and start a count if he felt a fighter was in trouble at his discretion even if there was not a knockdown. The mandatory eight count is a requirement for all knockdowns. In 1998, the Association of Boxing Commissions abolished the standing eight count as it was felt that it gave an advantage to the fighter whom it was issued against. However the mandatory eight count was retained and is distinguished from the former standing eight count in the rules of professional boxing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Se-tenant (philately)**
Se-tenant (philately):
Se-tenant stamps or labels are printed from the same plate and sheet and adjoin one another, unsevered in a strip or block. They differ from each other by design, color, denomination or overprint. They may have a continuous design. The word "se-tenant" translates from French as meaning "joined together" or "holding together".There are differing ways of preparing a se-tenant sheet. One can have stamps of one design on half of the sheet and the second design on the other half. In this case, the only se-tenants would be in the center where the two halves meet. A more frequent set-up is to have pairs of differing stamps throughout the sheet. Sometimes when two different designs appear on a single pane, the stamps are arranged like a checkerboard, with the different designs alternating in each row and column horizontally and vertically. One can have a triptych, or a tête-bêche format (head to toe). Stamp booklets often contain se-tenant stamps or labels.
United States stamps:
Four of the U. S. Postmasters' Provisional stamp issues distributed between 1845 and 1847 were se-tenant productions: the Baltimore Postmaster's provisionals (two different images [5¢ and 10¢] on a sheet of twelve), the St. Louis Bears (three different images [5¢, 10¢ and 20¢] on a sheet of six), the Providence R. I. provisionals (two different images [5¢ and 10¢] on a sheet of twelve) and the Alexandria Postmaster's Provisionals (a pair of not-quite-identical 5¢ images). With the issuance of U. S. national postage stamps, which began in 1847, se-tenant production disappeared from the nation for 117 years, not introduced until the 1964 Christmas Issue, which presented images of holly, mistletoe, poinsettia and a conifer sprig in a block of four stamps. After 1967, the U. S. began offering se-tenant issues with some frequency.
United States stamps:
The US has since printed as many as 50 different stamps on a single sheet, such as in the 50 state flags, birds and flowers. Se-tenant stamps began as issues of separate designs that were simply attached to one another, but have developed to issues where the stamps are part of a larger continuous design. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Feltrim Formation**
Feltrim Formation:
The Feltrim Formation is a geologic formation in Ireland. It preserves fossils dating back to the Carboniferous period. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rum-running**
Rum-running:
Rum-running, or bootlegging, is the illegal business of smuggling alcoholic beverages where such transportation is forbidden by law. Smuggling usually takes place to circumvent taxation or prohibition laws within a particular jurisdiction. The term rum-running is more commonly applied to smuggling over water; bootlegging is applied to smuggling over land.
Rum-running:
It is believed that the term bootlegging originated during the American Civil War, when soldiers would sneak liquor into army camps by concealing pint bottles within their boots or beneath their trouser legs. Also, according to the PBS documentary Prohibition, the term bootlegging was popularized when thousands of city dwellers sold liquor from flasks they kept in their boot legs all across major cities and rural areas. The term rum-running was current by 1916, and was used during the Prohibition era in the United States (1920–1933), when ships from Bimini in the western Bahamas transported cheap Caribbean rum to Florida speakeasies. However, rum's cheapness made it a low-profit item for the rum-runners, and they soon moved on to smuggling Canadian whisky, French champagne, and English gin to major cities like New York City, Boston, and Chicago, where prices ran high. It was said that some ships carried $200,000 in contraband in a single run.
History:
It was not long after the first taxes were implemented on alcoholic beverages that someone began to smuggle alcohol. The British government had "revenue cutters" in place to stop smugglers as early as the 16th century. Pirates often made extra money running rum to heavily taxed colonies. There were times when the sale of alcohol was limited for other reasons, such as laws against sales to American Indians in the Old West and Canada West or local prohibitions like the one on Prince Edward Island between 1901 and 1948.Industrial-scale smuggling flowed both ways across the Canada–United States border at different points in the early twentieth century, largely between Windsor, Ontario and Detroit, Michigan. Although Canada never had true nationwide prohibition, the federal government gave the provinces an easy means to ban alcohol under the War Measures Act (1914), and most provinces and the Yukon Territory already had enacted prohibition locally by 1918 when a regulation issued by the federal cabinet banned the interprovincial trade and importation of liquor. National prohibition in the United States did not begin until 1920, though many states had statewide prohibition before that. For the two-year interval, enough American liquor entered Canada illegally to undermine support for prohibition in Canada, so it was slowly lifted, beginning with Quebec and Yukon in 1919 and including all the provinces but Prince Edward Island by 1930. Additionally, Canada's version of prohibition had never included a ban on the manufacture of liquor for export. Soon the black-market trade was reversed with Canadian whisky and beer flowing in large quantities to the United States. Again, this illegal international trade undermined the support for prohibition in the receiving country, and the American version ended (at the national level) in 1933.
History:
One of the most famous periods of rum-running began in the United States when Prohibition began on January 16, 1920, when the Eighteenth Amendment went into effect. This period lasted until the amendment was repealed with ratification of the Twenty-first Amendment on December 5, 1933.
At first, there was much action on the seas, but after several months, the Coast Guard began reporting decreased smuggling activity. This was the start of the Bimini–Bahamas rum trade and the introduction of Bill McCoy.
History:
With the start of prohibition, Captain McCoy began bringing rum from Bimini and the rest of the Bahamas into south Florida through Government Cut. The Coast Guard soon caught up with him, so he began to bring the illegal goods to just outside U.S. territorial waters and let smaller boats and other captains, such as Habana Joe, take the risk of bringing it to shore.
History:
The rum-running business was very good, and McCoy soon bought a Gloucester knockabout schooner named Arethusa at auction and renamed her Tomoka. He installed a larger auxiliary, mounted a concealed machine gun on her deck, and refitted the fish pens below to accommodate as much contraband as she could hold. She became one of the most famous of the rum-runners, along with his two other ships hauling mostly Irish and Canadian whiskey as well as other fine liquors and wines to ports from Maine to Florida.
History:
In the days of rum running, it was common for captains to add water to the bottles to stretch their profits or to re-label it as better goods. Often, cheap sparkling wine would become French champagne or Italian Spumante; unbranded liquor became top-of-the-line name brands. McCoy became famous for never adding water to his booze and selling only top brands. Although the phrase appears in print in 1882, this is one of several false etymologies for the origin of the term "The real McCoy".
History:
On November 15, 1923, McCoy and Tomoka encountered the U.S. Coast Guard Cutter Seneca just outside U.S. territorial waters. A boarding party attempted to board, but McCoy chased them off with the machine gun. Tomoka tried to run, but Seneca placed a shell just off her hull, and William McCoy surrendered his ship and cargo.
History:
The Rum Row McCoy is credited with the idea of bringing large boats just to the edge of the 3-mile (4.8 km) limit of U.S. jurisdiction and selling his wares there to "contact boats", local fishermen, and small boat captains. The small, quick boats could more easily outrun Coast Guard ships and could dock in any small river or eddy and transfer their cargo to a waiting truck. They were also known to load float planes and flying boats. Soon others were following suit, and the three-mile limit became known as "Rum Line" with the ships waiting called "Rum row". The Rum Line was extended to a 12-mile (19 km) limit by an act of the United States Congress on April 21, 1924, which made it harder for the smaller and less seaworthy craft to make the trip.Rum Row was not the only front for the Coast Guard. Rum-runners often made the trip through Canada via the Great Lakes and the Saint Lawrence Seaway and down the west coast to San Francisco and Los Angeles. Rum-running from Canada was also an issue, especially throughout prohibition in the early 1900s. There was a high number of distilleries in Canada, one of the most famous being Hiram Walker who developed Canadian Club Whisky. The French islands of Saint-Pierre and Miquelon, located south of Newfoundland, were an important base used by well-known smugglers, including Al Capone, Savannah Unknown, and Bill McCoy. The Gulf of Mexico also teemed with ships running from Mexico and the Bahamas to Galveston, Texas, the Louisiana swamps, and Alabama coast. By far the biggest Rum Row was in the New York/Philadelphia area off the New Jersey coast, where as many as 60 ships were seen at one time. One of the most notable New Jersey rum runners was Habana Joe, who could be seen at night running into remote areas in Raritan Bay with his flat-bottom skiff for running up on the beach, making his delivery, and speeding away.
History:
With that much competition, the suppliers often flew large banners advertising their wares and threw parties with prostitutes on board their ships to draw customers. Rum Row was completely lawless, and many crews armed themselves not against government ships but against the other rum-runners, who would sometimes sink a ship and hijack its cargo rather than make the run to Canada or the Caribbean for fresh supplies.
History:
The ships At the start, the rum-runner fleet consisted of a ragtag flotilla of fishing boats, such as the schooner Nellie J. Banks, excursion boats, and small merchant craft. As prohibition wore on, the stakes got higher and the ships became larger and more specialized. Converted fishing ships like McCoy's Tomoka waited on Rum Row and were soon joined by small motor freighters custom-built in Nova Scotia for rum running, with low, grey hulls, hidden compartments, and powerful wireless equipment. Examples include the Reo II. Specialized high-speed craft were built for the ship-to-shore runs. These high-speed boats were often luxury yachts and speedboats fitted with powerful aircraft engines, machine guns, and armor plating. Often, builders of rum-runners' ships also supplied Coast Guard vessels, such as Fred and Mirto Scopinich's Freeport Point Shipyard. Rum-runners often kept cans of used engine oil handy to pour on hot exhaust manifolds in case a screen of smoke was needed to escape the revenue ships.
History:
On the government's side, the rum chasers were an assortment of patrol boats, inshore patrol, and harbor cutters. Most of the patrol boats were of the "six-bit" variety: 75-foot craft with a top speed of about 12 knots. There was also an assortment of launches, harbor tugs, and miscellaneous small craft.
History:
The rum-runners were often faster and more maneuverable than government ships, and a rum-running captain could make several hundred thousand dollars a year. In comparison, the Commandant of the Coast Guard made just $6,000 annually, and seamen made $30/week. Because of this disparity, the rum-runners were generally willing to take bigger risks. They ran without lights at night and in fog, risking life and limb. Shores could sometimes be found littered with bottles from a rum-runner who sank after hitting a sandbar or a reef in the dark at high speed.The Coast Guard relied on hard work, reconnaissance, and big guns to get their job done. It was not uncommon for rum-runners' ships to be sold at auction shortly after a trial – ships were often sold back to the original owners. Some ships were captured three or four times before they were finally sunk or retired. In addition, the Coast Guard had other duties and often had to let a rum-runner go in order to assist a sinking vessel or handle another emergency.
History:
Rum-running in Northern Europe in the 1920s and 1930s Prohibitive alcohol laws in Finland (total ban of alcohol from 1919 to 1931), Norway (liquor above 20 per cent abv 1917–1927) and the Swedish Bratt System which heavily restricted the sale of alcohol made these three countries attractive for alcohol smuggling from abroad. The main product used for smuggling were rectified spirits produced in Central Europe (Germany, Poland, Netherlands etc.). Alcohol was legally exported on large ships as tax-free produce via ports like Hamburg, Tallinn, Kiel and particularly the Free City of Danzig. Similar to the Rum Row near the U.S. coast, these ships usually did not leave international waters and the alcohol was clandestinely loaded onto smaller boats that illegally brought it into the destination countries. Despite various efforts led by Finland to fight contraband (Helsinki Convention for the Suppression of the Contraband Traffic in Alcoholic Liquors of 1925), the smugglers managed to bypass anti-smuggling laws, e.g., through the use of flags of convenience.
Alcohol smuggling today:
For multiple reasons (including the avoidance of taxes and minimum purchase prices), alcohol smuggling is still a worldwide concern.
Alcohol smuggling today:
In the United States, the smuggling of alcohol did not end with the repeal of prohibition. In the Appalachian United States, for example, the demand for moonshine was at an all-time high in the 1920s, but an era of rampant bootlegging in dry areas continued into the 1970s. Although the well-known bootleggers of the day may no longer be in business, bootlegging still exists, even if on a smaller scale. The state of Virginia has reported that it loses up to $20 million a year from illegal whiskey smuggling.The Government of the United Kingdom fails to collect an estimated £900 million in taxes due to alcohol smuggling activities.Absinthe was smuggled into the United States until it was legalized in 2007. Cuban rum is also sometimes smuggled into the United States, circumventing the embargo in existence since 1960. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tioconazole**
Tioconazole:
Tioconazole is an antifungal medication of the imidazole class used to treat infections caused by a fungus or yeast. It is marketed under the brand names Trosyd and Gyno-Trosyd (Pfizer, now Johnson & Johnson). Tioconazole ointments serve to treat women's vaginal yeast infections. They are available in one day doses, as opposed to the 7-day treatments commonly used in the past.
Tioconazole:
Tioconazole topical (skin) preparations are also available for ringworm, jock itch, athlete's foot, and tinea versicolor or "sun fungus".
It was patented in 1975 and approved for medical use in 1982.
Side effects:
Side effects of vaginal tioconazole may include temporary burning itching, or irritation of the vagina. Vaginal swelling or redness, difficulty or burning during urination, headache, abdominal pain, and upper respiratory tract infection have been reported by people using tioconazole. These side effects may be only temporary, and do not normally interfere with the patient's comfort enough to outweigh the result.
Synthesis:
Antimycotic imidazole derivative.
A displacement reaction between 1-(2,4-dichlorophenyl)-2-(1H-imidazol-1-yl)ethanol and 2-chloro-3-(chloromethyl)thiophene is performed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hysteresis**
Hysteresis:
Hysteresis is the dependence of the state of a system on its history. For example, a magnet may have more than one possible magnetic moment in a given magnetic field, depending on how the field changed in the past. Plots of a single component of the moment often form a loop or hysteresis curve, where there are different values of one variable depending on the direction of change of another variable. This history dependence is the basis of memory in a hard disk drive and the remanence that retains a record of the Earth's magnetic field magnitude in the past. Hysteresis occurs in ferromagnetic and ferroelectric materials, as well as in the deformation of rubber bands and shape-memory alloys and many other natural phenomena. In natural systems it is often associated with irreversible thermodynamic change such as phase transitions and with internal friction; and dissipation is a common side effect.
Hysteresis:
Hysteresis can be found in physics, chemistry, engineering, biology, and economics. It is incorporated in many artificial systems: for example, in thermostats and Schmitt triggers, it prevents unwanted frequent switching.
Hysteresis can be a dynamic lag between an input and an output that disappears if the input is varied more slowly; this is known as rate-dependent hysteresis. However, phenomena such as the magnetic hysteresis loops are mainly rate-independent, which makes a durable memory possible.
Systems with hysteresis are nonlinear, and can be mathematically challenging to model. Some hysteretic models, such as the Preisach model (originally applied to ferromagnetism) and the Bouc–Wen model, attempt to capture general features of hysteresis; and there are also phenomenological models for particular phenomena such as the Jiles–Atherton model for ferromagnetism.
It is difficult to define hysteresis precisely. Isaak D. Mayergoyz wrote "..the very meaning of hysteresis varies from one area to another, from paper to paper and from author to author. As a result, a stringent mathematical definition of hysteresis is needed in order to avoid confusion and ambiguity.".
Etymology and history:
The term "hysteresis" is derived from ὑστέρησις, an Ancient Greek word meaning "deficiency" or "lagging behind". It was coined in 1881 by Sir James Alfred Ewing to describe the behaviour of magnetic materials.Some early work on describing hysteresis in mechanical systems was performed by James Clerk Maxwell. Subsequently, hysteretic models have received significant attention in the works of Ferenc Preisach (Preisach model of hysteresis), Louis Néel and Douglas Hugh Everett in connection with magnetism and absorption. A more formal mathematical theory of systems with hysteresis was developed in the 1970s by a group of Russian mathematicians led by Mark Krasnosel'skii.
Types:
Rate-dependent One type of hysteresis is a lag between input and output. An example is a sinusoidal input X(t) that results in a sinusoidal output Y(t), but with a phase lag φ: sin sin (ωt−φ).
Types:
Such behavior can occur in linear systems, and a more general form of response is Y(t)=χiX(t)+∫0∞Φd(τ)X(t−τ)dτ, where χi is the instantaneous response and Φd(τ) is the impulse response to an impulse that occurred τ time units in the past. In the frequency domain, input and output are related by a complex generalized susceptibility that can be computed from Φd ; it is mathematically equivalent to a transfer function in linear filter theory and analogue signal processing.This kind of hysteresis is often referred to as rate-dependent hysteresis. If the input is reduced to zero, the output continues to respond for a finite time. This constitutes a memory of the past, but a limited one because it disappears as the output decays to zero. The phase lag depends on the frequency of the input, and goes to zero as the frequency decreases.When rate-dependent hysteresis is due to dissipative effects like friction, it is associated with power loss.
Types:
Rate-independent Systems with rate-independent hysteresis have a persistent memory of the past that remains after the transients have died out. The future development of such a system depends on the history of states visited, but does not fade as the events recede into the past. If an input variable X(t) cycles from X0 to X1 and back again, the output Y(t) may be Y0 initially but a different value Y2 upon return. The values of Y(t) depend on the path of values that X(t) passes through but not on the speed at which it traverses the path. Many authors restrict the term hysteresis to mean only rate-independent hysteresis. Hysteresis effects can be characterized using the Preisach model and the generalized Prandtl−Ishlinskii model.
In engineering:
Control systems In control systems, hysteresis can be used to filter signals so that the output reacts less rapidly than it otherwise would by taking recent system history into account. For example, a thermostat controlling a heater may switch the heater on when the temperature drops below A, but not turn it off until the temperature rises above B. (For instance, if one wishes to maintain a temperature of 20 °C then one might set the thermostat to turn the heater on when the temperature drops to below 18 °C and off when the temperature exceeds 22 °C).
In engineering:
Similarly, a pressure switch can be designed to exhibit hysteresis, with pressure set-points substituted for temperature thresholds.
Electronic circuits Often, some amount of hysteresis is intentionally added to an electronic circuit to prevent unwanted rapid switching. This and similar techniques are used to compensate for contact bounce in switches, or noise in an electrical signal.
A Schmitt trigger is a simple electronic circuit that exhibits this property.
A latching relay uses a solenoid to actuate a ratcheting mechanism that keeps the relay closed even if power to the relay is terminated.
Some positive feedback from the output to one input of a comparator can increase the natural hysteresis (a function of its gain) it exhibits.
In engineering:
Hysteresis is essential to the workings of some memristors (circuit components which "remember" changes in the current passing through them by changing their resistance).Hysteresis can be used when connecting arrays of elements such as nanoelectronics, electrochrome cells and memory effect devices using passive matrix addressing. Shortcuts are made between adjacent components (see crosstalk) and the hysteresis helps to keep the components in a particular state while the other components change states. Thus, all rows can be addressed at the same time instead of individually.
In engineering:
In the field of audio electronics, a noise gate often implements hysteresis intentionally to prevent the gate from "chattering" when signals close to its threshold are applied.
In engineering:
User interface design A hysteresis is sometimes intentionally added to computer algorithms. The field of user interface design has borrowed the term hysteresis to refer to times when the state of the user interface intentionally lags behind the apparent user input. For example, a menu that was drawn in response to a mouse-over event may remain on-screen for a brief moment after the mouse has moved out of the trigger region and the menu region. This allows the user to move the mouse directly to an item on the menu, even if part of that direct mouse path is outside of both the trigger region and the menu region. For instance, right-clicking on the desktop in most Windows interfaces will create a menu that exhibits this behavior.
In engineering:
Aerodynamics In aerodynamics, hysteresis can be observed when decreasing the angle of attack of a wing after stall, regarding the lift and drag coefficients. The angle of attack at which the flow on top of the wing reattaches is generally lower than the angle of attack at which the flow separates during the increase of the angle of attack.
In engineering:
Backlash Moving parts within machines, such as the components of a gear train, normally have a small gap between them, to allow movement and lubrication. As a consequence of this gap, any reversal in direction of a drive part will not be passed on immediately to the driven part. This unwanted delay is normally kept as small as practicable, and is usually called backlash. The amount of backlash will increase with time as the surfaces of moving parts wear.
In mechanics:
Elastic hysteresis In the elastic hysteresis of rubber, the area in the centre of a hysteresis loop is the energy dissipated due to material internal friction.
In mechanics:
Elastic hysteresis was one of the first types of hysteresis to be examined.The effect can be demonstrated using a rubber band with weights attached to it. If the top of a rubber band is hung on a hook and small weights are attached to the bottom of the band one at a time, it will stretch and get longer. As more weights are loaded onto it, the band will continue to stretch because the force the weights are exerting on the band is increasing. When each weight is taken off, or unloaded, the band will contract as the force is reduced. As the weights are taken off, each weight that produced a specific length as it was loaded onto the band now contracts less, resulting in a slightly longer length as it is unloaded. This is because the band does not obey Hooke's law perfectly. The hysteresis loop of an idealized rubber band is shown in the figure.
In mechanics:
In terms of force, the rubber band was harder to stretch when it was being loaded than when it was being unloaded. In terms of time, when the band is unloaded, the effect (the length) lagged behind the cause (the force of the weights) because the length has not yet reached the value it had for the same weight during the loading part of the cycle. In terms of energy, more energy was required during the loading than the unloading, the excess energy being dissipated as thermal energy.
In mechanics:
Elastic hysteresis is more pronounced when the loading and unloading is done quickly than when it is done slowly. Some materials such as hard metals don't show elastic hysteresis under a moderate load, whereas other hard materials like granite and marble do. Materials such as rubber exhibit a high degree of elastic hysteresis.
In mechanics:
When the intrinsic hysteresis of rubber is being measured, the material can be considered to behave like a gas. When a rubber band is stretched it heats up, and if it is suddenly released, it cools down perceptibly. These effects correspond to a large hysteresis from the thermal exchange with the environment and a smaller hysteresis due to internal friction within the rubber. This proper, intrinsic hysteresis can be measured only if the rubber band is thermally isolated.
In mechanics:
Small vehicle suspensions using rubber (or other elastomers) can achieve the dual function of springing and damping because rubber, unlike metal springs, has pronounced hysteresis and does not return all the absorbed compression energy on the rebound. Mountain bikes have made use of elastomer suspension, as did the original Mini car.
The primary cause of rolling resistance when a body (such as a ball, tire, or wheel) rolls on a surface is hysteresis. This is attributed to the viscoelastic characteristics of the material of the rolling body.
In mechanics:
Contact angle hysteresis The contact angle formed between a liquid and solid phase will exhibit a range of contact angles that are possible. There are two common methods for measuring this range of contact angles. The first method is referred to as the tilting base method. Once a drop is dispensed on the surface with the surface level, the surface is then tilted from 0° to 90°. As the drop is tilted, the downhill side will be in a state of imminent wetting while the uphill side will be in a state of imminent dewetting. As the tilt increases the downhill contact angle will increase and represents the advancing contact angle while the uphill side will decrease; this is the receding contact angle. The values for these angles just prior to the drop releasing will typically represent the advancing and receding contact angles. The difference between these two angles is the contact angle hysteresis.
In mechanics:
The second method is often referred to as the add/remove volume method. When the maximum liquid volume is removed from the drop without the interfacial area decreasing the receding contact angle is thus measured. When volume is added to the maximum before the interfacial area increases, this is the advancing contact angle. As with the tilt method, the difference between the advancing and receding contact angles is the contact angle hysteresis. Most researchers prefer the tilt method; the add/remove method requires that a tip or needle stay embedded in the drop which can affect the accuracy of the values, especially the receding contact angle.
In mechanics:
Bubble shape hysteresis The equilibrium shapes of bubbles expanding and contracting on capillaries (blunt needles) can exhibit hysteresis depending on the relative magnitude of the maximum capillary pressure to ambient pressure, and the relative magnitude of the bubble volume at the maximum capillary pressure to the dead volume in the system. The bubble shape hysteresis is a consequence of gas compressibility, which causes the bubbles to behave differently across expansion and contraction. During expansion, bubbles undergo large non equilibrium jumps in volume, while during contraction the bubbles are more stable and undergo a relatively smaller jump in volume resulting in an asymmetry across expansion and contraction. The bubble shape hysteresis is qualitatively similar to the adsorption hysteresis, and as in the contact angle hysteresis, the interfacial properties play an important role in bubble shape hysteresis.
In mechanics:
The existence of the bubble shape hysteresis has important consequences in interfacial rheology experiments involving bubbles. As a result of the hysteresis, not all sizes of the bubbles can be formed on a capillary. Further the gas compressibility causing the hysteresis leads to unintended complications in the phase relation between the applied changes in interfacial area to the expected interfacial stresses. These difficulties can be avoided by designing experimental systems to avoid the bubble shape hysteresis.
In mechanics:
Adsorption hysteresis Hysteresis can also occur during physical adsorption processes. In this type of hysteresis, the quantity adsorbed is different when gas is being added than it is when being removed. The specific causes of adsorption hysteresis are still an active area of research, but it is linked to differences in the nucleation and evaporation mechanisms inside mesopores. These mechanisms are further complicated by effects such as cavitation and pore blocking.
In mechanics:
In physical adsorption, hysteresis is evidence of mesoporosity-indeed, the definition of mesopores (2–50 nm) is associated with the appearance (50 nm) and disappearance (2 nm) of mesoporosity in nitrogen adsorption isotherms as a function of Kelvin radius. An adsorption isotherm showing hysteresis is said to be of Type IV (for a wetting adsorbate) or Type V (for a non-wetting adsorbate), and hysteresis loops themselves are classified according to how symmetric the loop is. Adsorption hysteresis loops also have the unusual property that it is possible to scan within a hysteresis loop by reversing the direction of adsorption while on a point on the loop. The resulting scans are called "crossing," "converging," or "returning," depending on the shape of the isotherm at this point.
In mechanics:
Matric potential hysteresis The relationship between matric water potential and water content is the basis of the water retention curve. Matric potential measurements (Ψm) are converted to volumetric water content (θ) measurements based on a site or soil specific calibration curve. Hysteresis is a source of water content measurement error. Matric potential hysteresis arises from differences in wetting behaviour causing dry medium to re-wet; that is, it depends on the saturation history of the porous medium. Hysteretic behaviour means that, for example, at a matric potential (Ψm) of 5 kPa, the volumetric water content (θ) of a fine sandy soil matrix could be anything between 8% and 25%.Tensiometers are directly influenced by this type of hysteresis. Two other types of sensors used to measure soil water matric potential are also influenced by hysteresis effects within the sensor itself. Resistance blocks, both nylon and gypsum based, measure matric potential as a function of electrical resistance. The relation between the sensor's electrical resistance and sensor matric potential is hysteretic. Thermocouples measure matric potential as a function of heat dissipation. Hysteresis occurs because measured heat dissipation depends on sensor water content, and the sensor water content–matric potential relationship is hysteretic. As of 2002, only desorption curves are usually measured during calibration of soil moisture sensors. Despite the fact that it can be a source of significant error, the sensor specific effect of hysteresis is generally ignored.
In materials:
Magnetic hysteresis When an external magnetic field is applied to a ferromagnetic material such as iron, the atomic domains align themselves with it. Even when the field is removed, part of the alignment will be retained: the material has become magnetized. Once magnetized, the magnet will stay magnetized indefinitely. To demagnetize it requires heat or a magnetic field in the opposite direction. This is the effect that provides the element of memory in a hard disk drive.
In materials:
The relationship between field strength H and magnetization M is not linear in such materials. If a magnet is demagnetized (H = M = 0) and the relationship between H and M is plotted for increasing levels of field strength, M follows the initial magnetization curve. This curve increases rapidly at first and then approaches an asymptote called magnetic saturation. If the magnetic field is now reduced monotonically, M follows a different curve. At zero field strength, the magnetization is offset from the origin by an amount called the remanence. If the H-M relationship is plotted for all strengths of applied magnetic field the result is a hysteresis loop called the main loop. The width of the middle section is twice the coercivity of the material.A closer look at a magnetization curve generally reveals a series of small, random jumps in magnetization called Barkhausen jumps. This effect is due to crystallographic defects such as dislocations.Magnetic hysteresis loops are not exclusive to materials with ferromagnetic ordering. Other magnetic orderings, such as spin glass ordering, also exhibit this phenomenon.
In materials:
Physical origin The phenomenon of hysteresis in ferromagnetic materials is the result of two effects: rotation of magnetization and changes in size or number of magnetic domains. In general, the magnetization varies (in direction but not magnitude) across a magnet, but in sufficiently small magnets, it does not. In these single-domain magnets, the magnetization responds to a magnetic field by rotating. Single-domain magnets are used wherever a strong, stable magnetization is needed (for example, magnetic recording).
In materials:
Larger magnets are divided into regions called domains. Across each domain, the magnetization does not vary; but between domains are relatively thin domain walls in which the direction of magnetization rotates from the direction of one domain to another. If the magnetic field changes, the walls move, changing the relative sizes of the domains. Because the domains are not magnetized in the same direction, the magnetic moment per unit volume is smaller than it would be in a single-domain magnet; but domain walls involve rotation of only a small part of the magnetization, so it is much easier to change the magnetic moment. The magnetization can also change by addition or subtraction of domains (called nucleation and denucleation).
In materials:
Magnetic hysteresis models The most known empirical models in hysteresis are Preisach and Jiles-Atherton models. These models allow an accurate modeling of the hysteresis loop and are widely used in the industry. However, these models lose the connection with thermodynamics and the energy consistency is not ensured. A more recent model, with a more consistent thermodynamical foundation, is the vectorial incremental nonconservative consistent hysteresis (VINCH) model of Lavet et al. (2011) Applications There are a great variety of applications of the hysteresis in ferromagnets. Many of these make use of their ability to retain a memory, for example magnetic tape, hard disks, and credit cards. In these applications, hard magnets (high coercivity) like iron are desirable, such that as much energy is absorbed as possible during the write operation and the resultant magnetized information is not easily erased.
In materials:
On the other hand, magnetically soft (low coercivity) iron is used for the cores in electromagnets. The low coercivity minimizes the energy loss associated with hysteresis, as the magnetic field periodically reverses in the presence of an alternating current. The low energy loss during a hysteresis loop is the reason why soft iron is used for transformer cores and electric motors.
In materials:
Electrical hysteresis Electrical hysteresis typically occurs in ferroelectric material, where domains of polarization contribute to the total polarization. Polarization is the electrical dipole moment (either C·m−2 or C·m). The mechanism, an organization of the polarization into domains, is similar to that of magnetic hysteresis.
In materials:
Liquid–solid-phase transitions Hysteresis manifests itself in state transitions when melting temperature and freezing temperature do not agree. For example, agar melts at 85 °C (185 °F) and solidifies from 32 to 40 °C (90 to 104 °F). This is to say that once agar is melted at 85 °C, it retains a liquid state until cooled to 40 °C. Therefore, from the temperatures of 40 to 85 °C, agar can be either solid or liquid, depending on which state it was before.
In biology:
Cell biology and genetics Hysteresis in cell biology often follows bistable systems where the same input state can lead to two different, stable outputs. Where bistability can lead to digital, switch-like outputs from the continuous inputs of chemical concentrations and activities, hysteresis makes these systems more resistant to noise. These systems are often characterized by higher values of the input required to switch into a particular state as compared to the input required to stay in the state, allowing for a transition that is not continuously reversible, and thus less susceptible to noise. Cells undergoing cell division exhibit hysteresis in that it takes a higher concentration of cyclins to switch them from G2 phase into mitosis than to stay in mitosis once begun.Biochemical systems can also show hysteresis-like output when slowly varying states that are not directly monitored are involved, as in the case of the cell cycle arrest in yeast exposed to mating pheromone. Here, the duration of cell cycle arrest depends not only on the final level of input Fus3, but also on the previously achieved Fus3 levels. This effect is achieved due to the slower time scales involved in the transcription of intermediate Far1, such that the total Far1 activity reaches its equilibrium value slowly, and for transient changes in Fus3 concentration, the response of the system depends on the Far1 concentration achieved with the transient value. Experiments in this type of hysteresis benefit from the ability to change the concentration of the inputs with time. The mechanisms are often elucidated by allowing independent control of the concentration of the key intermediate, for instance, by using an inducible promoter.
In biology:
Darlington in his classic works on genetics discussed hysteresis of the chromosomes, by which he meant "failure of the external form of the chromosomes to respond immediately to the internal stresses due to changes in their molecular spiral", as they lie in a somewhat rigid medium in the limited space of the cell nucleus.
In biology:
In developmental biology, cell type diversity is regulated by long range-acting signaling molecules called morphogens that pattern uniform pools of cells in a concentration- and time-dependent manner. The morphogen sonic hedgehog (Shh), for example, acts on limb bud and neural progenitors to induce expression of a set of homeodomain-containing transcription factors to subdivide these tissues into distinct domains. It has been shown that these tissues have a 'memory' of previous exposure to Shh.
In biology:
In neural tissue, this hysteresis is regulated by a homeodomain (HD) feedback circuit that amplifies Shh signaling. In this circuit, expression of Gli transcription factors, the executors of the Shh pathway, is suppressed. Glis are processed to repressor forms (GliR) in the absence of Shh, but in the presence of Shh, a proportion of Glis are maintained as full-length proteins allowed to translocate to the nucleus, where they act as activators (GliA) of transcription. By reducing Gli expression then, the HD transcription factors reduce the total amount of Gli (GliT), so a higher proportion of GliT can be stabilized as GliA for the same concentration of Shh.
In biology:
Immunology There is some evidence that T cells exhibit hysteresis in that it takes a lower signal threshold to activate T cells that have been previously activated. Ras GTPase activation is required for downstream effector functions of activated T cells. Triggering of the T cell receptor induces high levels of Ras activation, which results in higher levels of GTP-bound (active) Ras at the cell surface. Since higher levels of active Ras have accumulated at the cell surface in T cells that have been previously stimulated by strong engagement of the T cell receptor, weaker subsequent T cell receptor signals received shortly afterwards will deliver the same level of activation due to the presence of higher levels of already activated Ras as compared to a naïve cell.
In biology:
Neuroscience The property by which some neurons do not return to their basal conditions from a stimulated condition immediately after removal of the stimulus is an example of hysteresis.
Neuropsychology Neuropsychology, in exploring the neural correlates of consciousness, interfaces with neuroscience, although the complexity of the central nervous system is a challenge to its study (that is, its operation resists easy reduction). Context-dependent memory and state-dependent memory show hysteretic aspects of neurocognition.
In biology:
Respiratory physiology Lung hysteresis is evident when observing the compliance of a lung on inspiration versus expiration. The difference in compliance (Δvolume/Δpressure) is due to the additional energy required to overcome surface tension forces during inspiration to recruit and inflate additional alveoli.The transpulmonary pressure vs Volume curve of inhalation is different from the Pressure vs Volume curve of exhalation, the difference being described as hysteresis. Lung volume at any given pressure during inhalation is less than the lung volume at any given pressure during exhalation.
In biology:
Voice and speech physiology A hysteresis effect may be observed in voicing onset versus offset. The threshold value of the subglottal pressure required to start the vocal fold vibration is lower than the threshold value at which the vibration stops, when other parameters are kept constant. In utterances of vowel-voiceless consonant-vowel sequences during speech, the intraoral pressure is lower at the voice onset of the second vowel compared to the voice offset of the first vowel, the oral airflow is lower, the transglottal pressure is larger and the glottal width is smaller.
In biology:
Ecology and epidemiology Hysteresis is a commonly encountered phenomenon in ecology and epidemiology, where the observed equilibrium of a system can not be predicted solely based on environmental variables, but also requires knowledge of the system's past history. Notable examples include the theory of spruce budworm outbreaks and behavioral-effects on disease transmission.It is commonly examined in relation to critical transitions between ecosystem or community types in which dominant competitors or entire landscapes can change in a largely irreversible fashion.
In ocean and climate science:
Complex ocean and climate models rely on the principle.
In economics:
Economic systems can exhibit hysteresis. For example, export performance is subject to strong hysteresis effects: because of the fixed transportation costs it may take a big push to start a country's exports, but once the transition is made, not much may be required to keep them going.
In economics:
When some negative shock reduces employment in a company or industry, fewer employed workers then remain. As usually the employed workers have the power to set wages, their reduced number incentivizes them to bargain for even higher wages when the economy again gets better instead of letting the wage be at the equilibrium wage level, where the supply and demand of workers would match. This causes hysteresis: the unemployment becomes permanently higher after negative shocks.
In economics:
Permanently higher unemployment The idea of hysteresis is used extensively in the area of labor economics, specifically with reference to the unemployment rate. According to theories based on hysteresis, severe economic downturns (recession) and/or persistent stagnation (slow demand growth, usually after a recession) cause unemployed individuals to lose their job skills (commonly developed on the job) or to find that their skills have become obsolete, or become demotivated, disillusioned or depressed or lose job-seeking skills. In addition, employers may use time spent in unemployment as a screening tool, i.e., to weed out less desired employees in hiring decisions. Then, in times of an economic upturn, recovery, or "boom", the affected workers will not share in the prosperity, remaining unemployed for long periods (e.g., over 52 weeks). This makes unemployment "structural", i.e., extremely difficult to reduce simply by increasing the aggregate demand for products and labor without causing increased inflation. That is, it is possible that a ratchet effect in unemployment rates exists, so a short-term rise in unemployment rates tends to persist. For example, traditional anti-inflationary policy (the use of recession to fight inflation) leads to a permanently higher "natural" rate of unemployment (more scientifically known as the NAIRU). This occurs first because inflationary expectations are "sticky" downward due to wage and price rigidities (and so adapt slowly over time rather than being approximately correct as in theories of rational expectations) and second because labor markets do not clear instantly in response to unemployment.
In economics:
The existence of hysteresis has been put forward as a possible explanation for the persistently high unemployment of many economies in the 1990s. Hysteresis has been invoked by Olivier Blanchard among others to explain the differences in long run unemployment rates between Europe and the United States. Labor market reform (usually meaning institutional change promoting more flexible wages, firing, and hiring) or strong demand-side economic growth may not therefore reduce this pool of long-term unemployed. Thus, specific targeted training programs are presented as a possible policy solution. However, the hysteresis hypothesis suggests such training programs are aided by persistently high demand for products (perhaps with incomes policies to avoid increased inflation), which reduces the transition costs out of unemployment and into paid employment easier.
Additional considerations:
Models of hysteresis Each subject that involves hysteresis has models that are specific to the subject. In addition, there are hysteretic models that capture general features of many systems with hysteresis. An example is the Preisach model of hysteresis, which represents a hysteresis nonlinearity as a linear superposition of square loops called non-ideal relays. Many complex models of hysteresis arise from the simple parallel connection, or superposition, of elementary carriers of hysteresis termed hysterons.
Additional considerations:
A simple and intuitive parametric description of various hysteresis loops may be found in the Lapshin model. Along with the smooth loops, substitution of trapezoidal, triangular or rectangular pulses instead of the harmonic functions allows piecewise-linear hysteresis loops frequently used in discrete automatics to be built in the model. There are implementations of the hysteresis loop model in Mathcad and in R programming language.The Bouc–Wen model of hysteresis is often used to describe non-linear hysteretic systems. It was introduced by Bouc and extended by Wen, who demonstrated its versatility by producing a variety of hysteretic patterns. This model is able to capture in analytical form, a range of shapes of hysteretic cycles which match the behaviour of a wide class of hysteretical systems; therefore, given its versability and mathematical tractability, the Bouc–Wen model has quickly gained popularity and has been extended and applied to a wide variety of engineering problems, including multi-degree-of-freedom (MDOF) systems, buildings, frames, bidirectional and torsional response of hysteretic systems two- and three-dimensional continua, and soil liquefaction among others. The Bouc–Wen model and its variants/extensions have been used in applications of structural control, in particular in the modeling of the behaviour of magnetorheological dampers, base isolation devices for buildings and other kinds of damping devices; it has also been used in the modelling and analysis of structures built of reinforced concrete, steel, masonry and timber.. The most important extension of Bouc-Wen Model was carried out by Baber and Noori and later by Noori and co-workers. That extended model, named, BWBN, can reproduce the complex shear pinching or slip-lock phenomenon that earlier model could not reproduce. BWBN model has been widely used in a wide spectrum of applications and have been incorporated in several software codes such as OpenSees.
Additional considerations:
Energy When hysteresis occurs with extensive and intensive variables, the work done on the system is the area under the hysteresis graph. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fuel gauge**
Fuel gauge:
In automotive and aerospace engineering, a fuel gauge is an instrument used to indicate the amount of fuel in a fuel tank. In electrical engineering, the term is used for ICs determining the current State of Charge of accumulators.
Motor vehicles:
As used in vehicles, the gauge consists of two parts: The sending unit - in the tank The indicator - on the dashboardThe sending unit usually uses a float connected to a potentiometer, typically printed ink design in a modern automobile. As the tank empties, the float drops and slides a moving contact along the resistor, increasing its resistance. In addition, when the resistance is at a certain point, it will also turn on a "low fuel" light on some vehicles.Meanwhile, the indicator unit (usually mounted on the dashboard) is measuring and displaying the amount of electric current flowing through the sending unit. When the tank level is high and maximum current is flowing, the needle points to "F" indicating a full tank. When the tank is empty and the least current is flowing, the needle points to "E" indicating an empty tank; some vehicles use the indicators "1" (for full) and "0" or "R" (for empty) instead.
Motor vehicles:
The system can be fail-safe. If an electrical fault opens, the electrical circuit causes the indicator to show the tank as being empty (theoretically provoking the driver to refill the tank) rather than full (which would allow the driver to run out of fuel with no prior notification). Corrosion or wear of the potentiometer will provide erroneous readings of fuel level. However, this system has a potential risk associated with it. An electric current is sent through the variable resistor to which a float is connected, so that the value of resistance depends on the fuel level. In most automotive fuel gauges such resistors are on the inward side of the gauge, i.e., inside the fuel tank. Sending current through such a resistor has a fire hazard and an explosion risk associated with it. These resistance sensors are also showing an increased failure rate with the incremental additions of alcohol in automotive gasoline fuel. Alcohol increases the corrosion rate at the potentiometer, as it is capable of carrying current like water. Potentiometer applications for alcohol fuel use a pulse-and-hold methodology, with a periodic signal being sent to determine fuel level decreasing the corrosion potential. Therefore, demand for another safer, non-contact method for fuel level is desired.
Motor vehicles:
Moylan arrow Since the early 1990s, many fuel gauges have included an icon with a fuel pump and an arrow, indicating the side of the vehicle on which the fuel filler is located. The use of the icon and arrow was invented in 1986 by Jim Moylan, a designer for Ford Motor Company. After he proposed the idea in April 1986, the 1989 Ford Escort and Mercury Tracer were the first vehicles to see it implemented. Other automotive companies noticed the addition and began to incorporate it into their own fuel gauges.
Aircraft:
Magnetoresistance type fuel level sensors, now becoming common in small aircraft applications, offer a potential alternative for automotive use. These fuel level sensors work similar to the potentiometer example, however a sealed detector at the float pivot determines the angular position of a magnet pair at the pivot end of the float arm. These are highly accurate, and the electronics are completely outside the fuel. The non-contact nature of these sensors address the fire and explosion hazard, and also the issues related to any fuel combinations or additives to gasoline or to any alcohol fuel mixtures. Magneto resistive sensors are suitable for all fuel or fluid combinations, including LPG and LNG. The fuel level output for these senders can be ratiometric voltage or preferable CAN bus digital. These sensors also fail-safe in that they either provide a level output or nothing.
Aircraft:
Systems that measure large fuel tanks (including underground storage tanks) may use the same electro-mechanical principle or may make use of a pressure sensor, sometimes connected to a mercury manometer.
Aircraft:
Many large transport aircraft use a different fuel gauge design principle. An aircraft may use a number (around 30 on an A320) of low voltage tubular capacitor probes where the fuel becomes the dielectric. At different fuel levels, different values of capacitance are measured and therefore the level of fuel can be determined. In early designs, the profiles and values of individual probes were chosen to compensate for fuel tank shape and aircraft pitch and roll attitudes. In more modern aircraft, the probes tend to be linear (capacitance proportional to fuel height) and the fuel computer works out how much fuel there is (slightly different on different manufacturers). This has the advantage that a faulty probe may be identified and eliminated from the fuel calculations. In total this system can be more than 99% accurate. Since most commercial aircraft only take on board fuel necessary for the intended flight (with appropriate safety margins), the system allows the fuel load to be preselected, causing the fuel delivery to be shut off when the intended load has been taken on board.
Fuel Gauge ICs:
In electronics there are different ICs available, which control the current State of Charge of accumulators. These devices are also called "Fuel Gauge". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vaccine (instrument)**
Vaccine (instrument):
Vaccine (or sometimes vaksin) are rudimentary single-note trumpets found in Haiti and, to a lesser extent, the Dominican Republic as well as Jamaica. They consist of a simple tube, usually bamboo, with a mouthpiece at one end.
They are thus also referred to as banbou or bambú, as well as bois bourrique (or bwa bourik), granboe, fututo, or boom pipe. They are not to be confused with other Haitian handmade trumpets called konè or klewon, made of a yard-long white metal tube with a flared horn, called kata.Vaccine players are known as banboulyès.
Origins:
Haitian ethnographer Jean Bernard traces the vaksin back to indigenous precolonial peoples of Haiti. However both Thompson and Holloway draw links to the single-note Bakongo bamboo trumpets called disoso, themselves originated in Mbuti hocketing music. Gillis also likens them to trumpets used in Bambara broto music along the Niger, and Jamaican Kumina.
Construction:
Traditionally, vaccine are made of a length of bamboo, hollowed-out and dried, with a node membrane pierced and wrapped with leather or bicycle inner-tube rubber to form a mouthpiece at one end. One or more segments are taken from higher or lower in the bamboo trunk to fashion vaccines; usually more than 1 m long and 5 to 7 cm in diameter. Each one is cut shorter or longer in order to produce a higher or lower tone: bas banbou is long and gives a low-pitched sound, and charlemagne banbou is short and is pitched high.McAlister explains that Afro-Hispaniolan lore involves asking the bamboo plant for its use and leaving a small payment in its place. Landies witnessed this process, which she described as follows: "the harvest of the bamboo was accompanied by an offering. [...] [It] is harvested with the permission of Simbi, a Petwo Lwa who loves water, as bamboo in the Dominican Republic grows in moist land, e.g., along rivers"On occasion, iron or plastic pipes are substituted for the bamboo.
Playing:
A typical vaccine band is composed of three to five players, usually marching abreast of each other. Players use a method called hocketing, whereby each individual blows a single tone rhythmically to create an ostinato motif together. These motifs are usually composed through a process of group improvisation.To keep rhythm, vaccine players also beat a rhythmic timeline, called kata with a long stick on the side of the tube, making the instrument both melodic and percussive.
Tuning and scale:
Within an ostinato, vaccine tones stack up in approximate third intervals to each other—creating tritones and arpeggiated diminished chords, but without a harmonic intent—with the two treble-most vaccines often tuned a semitone apart. Landies also reports other intervals between the lowest two voices. One of the vaccine serves as the tonal center of the motif.
Uses:
Most importantly, vaccines are a key component of rara orchestras. In his 1941 article, Courlander wrote that rara bands "seldom have drums and depend almost entirely on vaccines"; though both Lomax's mid-1930s and McAllister's early 1990s studies report many more instruments—mostly percussive—as part of rara orchestras.
Scholars also report vaccines used as signal horns by parties of agricultural workers, fishermen, stevedores as well as sometimes used in dances of the Congo cycle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HemoSpat**
HemoSpat:
HemoSpat is bloodstain pattern analysis software created by FORident Software in 2006. Using photos from a bloodshed incident at a crime scene, a bloodstain pattern analyst can use HemoSpat to calculate the area-of-origin of impact patterns. This information may be useful for determining position and posture of suspects and victims, sequencing of events, corroborating or refuting testimony, and for crime scene reconstruction.
HemoSpat:
The results of the analyses may be viewed in 2D within the software as top-down, side, and front views, or exported to several 3D formats for integration with point cloud or modelling software. The formats which HemoSpat exports include: AUTOCAD DXF COLLADA PLY VRML Wavefront OBJHemoSpat is capable of calculating impact pattern origins with only part of the pattern available, as well as impacts on non-orthogonal surfaces.
HemoSpat:
HemoSpat has also been used in research into what kind of information may be captured from cast-off patterns, methods of scene documentation, and in improving area-of-origin calculations. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ian Roberts (linguist)**
Ian Roberts (linguist):
Ian G. Roberts is Professor of Linguistics at the University of Cambridge and a fellow of Downing College, Cambridge. He also serves on the Advisory Council of METI (Messaging Extraterrestrial Intelligence).
He received his PhD from the University of Southern California in 1985 and taught at the Universities of Geneva (1985–1993), Bangor (1991–1996) and Stuttgart (1996–2000) before taking up his present position at Cambridge in 2000. He is a fellow of Downing College.
Professor Roberts is a generative linguist and enthusiastic adopter of Chomsky's Minimalist Program. He has published widely in the synchronic and diachronic syntax of Romance and Germanic languages and Welsh. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HP WinRunner**
HP WinRunner:
HP WinRunner software was an automated functional GUI testing tool that allowed a user to record and play back user interface (UI) interactions as test scripts.
As a functional test suite, it worked with HP QuickTest Professional and supported enterprise quality assurance. It captured, verified and replayed user interactions automatically, in order to identify defects and determine whether business processes worked as designed.
The software implemented a proprietary Test Script Language (TSL) that allowed customization and parameterization of user input.
HP WinRunner was originally written by Mercury Interactive. Mercury Interactive was subsequently acquired by Hewlett Packard (HP) in 2006. On February 15, 2008, HP Software Division announced the end of support for HP WinRunner versions 7.5, 7.6, 8.0, 8.2, 9.2—suggesting migration to HP Functional Testing software as a replacement. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sialoadhesin**
Sialoadhesin:
Sialoadhesin is a cell adhesion molecule found on the surface of macrophages. It is found in especially high amounts on macrophages of the spleen, liver, lymph node, bone marrow, colon, and lungs. Also, in patients with rheumatoid arthritis, the protein has been found in great amounts on macrophages of the affected tissues. It is defined as an I-type lectin, since it contains 17 immunoglobulin (Ig) domains (one variable domain and 16 constant domains), and thus also belongs to the immunoglobulin superfamily (IgSF). Sialoadhesin binds to certain molecules called sialic acids. During this binding process a salt bridge (protein) is formed between a highly conserved arginine residue (from the v-set domain to the 3'-sialyllactose) and the carboxylate group of the sialic acid. Since sialoadhesin binds sialic acids with its N-terminal IgV-domain, it is also a member of the SIGLEC family. Alternate names for sialoadhesin include siglec-1 and CD169 (cluster of differentiation 169).Sialoadhesin predominantly binds neutrophils, but can also bind monocytes, natural killer cells, B cells and a subset of cytotoxic T cells by interacting with sialic acid molecules in the ligands on their surfaces.Sialoadhesin (CD169) positive macrophages, along with mesenchymal stem cells and beta-adrenergic neurons, form the hematopoietic stem cell niche in the bone marrow. CD169+ macrophages mediate signaling between the various cells and seem to promote hematopoietic stem cell retention to the niche. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dubnium**
Dubnium:
Dubnium is a synthetic chemical element with the symbol Db and atomic number 105. It is highly radioactive: the most stable known isotope, dubnium-268, has a half-life of about 16 hours. This greatly limits extended research on the element.
Dubnium:
Dubnium does not occur naturally on Earth and is produced artificially. The Soviet Joint Institute for Nuclear Research (JINR) claimed the first discovery of the element in 1968, followed by the American Lawrence Berkeley Laboratory in 1970. Both teams proposed their names for the new element and used them without formal approval. The long-standing dispute was resolved in 1993 by an official investigation of the discovery claims by the Transfermium Working Group, formed by the International Union of Pure and Applied Chemistry and the International Union of Pure and Applied Physics, resulting in credit for the discovery being officially shared between both teams. The element was formally named dubnium in 1997 after the town of Dubna, the site of the JINR.
Dubnium:
Theoretical research establishes dubnium as a member of group 5 in the 6d series of transition metals, placing it under vanadium, niobium, and tantalum. Dubnium should share most properties, such as its valence electron configuration and having a dominant +5 oxidation state, with the other group 5 elements, with a few anomalies due to relativistic effects. A limited investigation of dubnium chemistry has confirmed this.
Introduction:
The heaviest atomic nuclei are created in nuclear reactions that combine two other nuclei of unequal size into one; roughly, the more unequal the two nuclei in terms of mass, the greater the possibility that the two react. The material made of the heavier nuclei is made into a target, which is then bombarded by the beam of lighter nuclei. Two nuclei can fuse into one only if they approach each other closely enough; normally, nuclei (all positively charged) repel each other due to electrostatic repulsion. The strong interaction can overcome this repulsion but only within a very short distance from a nucleus; beam nuclei are thus greatly accelerated in order to make such repulsion insignificant compared to the velocity of the beam nucleus. Coming close alone is not enough for two nuclei to fuse: when two nuclei approach each other, they usually remain together for approximately 10−20 seconds and then part ways (not necessarily in the same composition as before the reaction) rather than form a single nucleus. If fusion does occur, the temporary merger—termed a compound nucleus—is an excited state. To lose its excitation energy and reach a more stable state, a compound nucleus either fissions or ejects one or several neutrons, which carry away the energy. This occurs in approximately 10−16 seconds after the initial collision.The beam passes through the target and reaches the next chamber, the separator; if a new nucleus is produced, it is carried with this beam. In the separator, the newly produced nucleus is separated from other nuclides (that of the original beam and any other reaction products) and transferred to a surface-barrier detector, which stops the nucleus. The exact location of the upcoming impact on the detector is marked; also marked are its energy and the time of the arrival. The transfer takes about 10−6 seconds; in order to be detected, the nucleus must survive this long. The nucleus is recorded again once its decay is registered, and the location, the energy, and the time of the decay are measured.Stability of a nucleus is provided by the strong interaction. However, its range is very short; as nuclei become larger, their influence on the outermost nucleons (protons and neutrons) weakens. At the same time, the nucleus is torn apart by electrostatic repulsion between protons, as it has unlimited range. Nuclei of the heaviest elements are thus theoretically predicted and have so far been observed to primarily decay via decay modes that are caused by such repulsion: alpha decay and spontaneous fission; these modes are predominant for nuclei of superheavy elements. Alpha decays are registered by the emitted alpha particles, and the decay products are easy to determine before the actual decay; if such a decay or a series of consecutive decays produces a known nucleus, the original product of a reaction can be determined arithmetically. Spontaneous fission, however, produces various nuclei as products, so the original nuclide cannot be determined from its daughters.The information available to physicists aiming to synthesize one of the heaviest elements is thus the information collected at the detectors: location, energy, and time of arrival of a particle to the detector, and those of its decay. The physicists analyze this data and seek to conclude that it was indeed caused by a new element and could not have been caused by a different nuclide than the one claimed. Often, provided data is insufficient for a conclusion that a new element was definitely created and there is no other explanation for the observed effects; errors in interpreting data have been made.
Discovery:
Background Uranium, element 92, is the heaviest element to occur in significant quantities in nature; heavier elements can only be practically produced by synthesis. The first synthesis of a new element—neptunium, element 93—was achieved in 1940 by a team of researchers in the United States. In the following years, American scientists synthesized the elements up to mendelevium, element 101, which was synthesized in 1955. From element 102, the priority of discoveries was contested between American and Soviet physicists. Their rivalry resulted in a race for new elements and credit for their discoveries, later named the Transfermium Wars.
Discovery:
Reports The first report of the discovery of element 105 came from the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Soviet Union, in April 1968. The scientists bombarded 243Am with a beam of 22Ne ions, and reported 9.4 MeV (with a half-life of 0.1–3 seconds) and 9.7 MeV (t1/2 > 0.05 s) alpha activities followed by alpha activities similar to those of either 256103 or 257103. Based on prior theoretical predictions, the two activity lines were assigned to 261105 and 260105, respectively.
Discovery:
24395Am + 2210Ne → 265−x105 + x n (x = 4, 5)After observing the alpha decays of element 105, the researchers aimed to observe spontaneous fission (SF) of the element and study the resulting fission fragments. They published a paper in February 1970, reporting multiple examples of two such activities, with half-lives of 14 ms and 2.2±0.5 s. They assigned the former activity to 242mfAm and ascribed the latter activity to an isotope of element 105. They suggested that it was unlikely that this activity could come from a transfer reaction instead of element 105, because the yield ratio for this reaction was significantly lower than that of the 242mfAm-producing transfer reaction, in accordance with theoretical predictions. To establish that this activity was not from a (22Ne,xn) reaction, the researchers bombarded a 243Am target with 18O ions; reactions producing 256103 and 257103 showed very little SF activity (matching the established data), and the reaction producing heavier 258103 and 259103 produced no SF activity at all, in line with theoretical data. The researchers concluded that the activities observed came from SF of element 105.In April 1970, a team at Lawrence Berkeley Laboratory (LBL), in Berkeley, California, United States, claimed to have synthesized element 105 by bombarding californium-249 with nitrogen-15 ions, with an alpha activity of 9.1 MeV. To ensure this activity was not from a different reaction, the team attempted other reactions: bombarding 249Cf with 14N, Pb with 15N, and Hg with 15N. They stated no such activity was found in those reactions. The characteristics of the daughter nuclei matched those of 256103, implying that the parent nuclei were of 260105.
Discovery:
24998Cf + 157N → 260105 + 4 nThese results did not confirm the JINR findings regarding the 9.4 MeV or 9.7 MeV alpha decay of 260105, leaving only 261105 as a possibly produced isotope.JINR then attempted another experiment to create element 105, published in a report in May 1970. They claimed that they had synthesized more nuclei of element 105 and that the experiment confirmed their previous work. According to the paper, the isotope produced by JINR was probably 261105, or possibly 260105. This report included an initial chemical examination: the thermal gradient version of the gas-chromatography method was applied to demonstrate that the chloride of what had formed from the SF activity nearly matched that of niobium pentachloride, rather than hafnium tetrachloride. The team identified a 2.2-second SF activity in a volatile chloride portraying eka-tantalum properties, and inferred that the source of the SF activity must have been element 105.In June 1970, JINR made improvements on their first experiment, using a purer target and reducing the intensity of transfer reactions by installing a collimator before the catcher. This time, they were able to find 9.1 MeV alpha activities with daughter isotopes identifiable as either 256103 or 257103, implying that the original isotope was either 260105 or 261105.
Discovery:
Naming controversy JINR did not propose a name after their first report claiming synthesis of element 105, which would have been the usual practice. This led LBL to believe that JINR did not have enough experimental data to back their claim. After collecting more data, JINR proposed the name bohrium (Bo) in honor of the Danish nuclear physicist Niels Bohr, a founder of the theories of atomic structure and quantum theory; they soon changed their proposal to nielsbohrium (Ns) to avoid confusion with boron. Another proposed name was dubnium. When LBL first announced their synthesis of element 105, they proposed that the new element be named hahnium (Ha) after the German chemist Otto Hahn, the "father of nuclear chemistry", thus creating an element naming controversy.In the early 1970s, both teams reported synthesis of the next element, element 106, but did not suggest names. JINR suggested establishing an international committee to clarify the discovery criteria. This proposal was accepted in 1974 and a neutral joint group formed. Neither team showed interest in resolving the conflict through a third party, so the leading scientists of LBL—Albert Ghiorso and Glenn Seaborg—traveled to Dubna in 1975 and met with the leading scientists of JINR—Georgy Flerov, Yuri Oganessian, and others—to try to resolve the conflict internally and render the neutral joint group unnecessary; after two hours of discussions, this failed. The joint neutral group never assembled to assess the claims, and the conflict remained unresolved. In 1979, IUPAC suggested systematic element names to be used as placeholders until permanent names were established; under it, element 105 would be unnilpentium, from the Latin roots un- and nil- and the Greek root pent- (meaning "one", "zero", and "five", respectively, the digits of the atomic number). Both teams ignored it as they did not wish to weaken their outstanding claims.In 1981, the Gesellschaft für Schwerionenforschung (GSI; Society for Heavy Ion Research) in Darmstadt, Hesse, West Germany, claimed synthesis of element 107; their report came out five years after the first report from JINR but with greater precision, making a more solid claim on discovery. GSI acknowledged JINR's efforts by suggesting the name nielsbohrium for the new element. JINR did not suggest a new name for element 105, stating it was more important to determine its discoverers first.
Discovery:
In 1985, the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP) formed a Transfermium Working Group (TWG) to assess discoveries and establish final names for the controversial elements. The party held meetings with delegates from the three competing institutes; in 1990, they established criteria on recognition of an element, and in 1991, they finished the work on assessing discoveries and disbanded. These results were published in 1993. According to the report, the first definitely successful experiment was the April 1970 LBL experiment, closely followed by the June 1970 JINR experiment, so credit for the discovery of the element should be shared between the two teams.LBL said that the input from JINR was overrated in the review. They claimed JINR was only able to unambiguously demonstrate the synthesis of element 105 a year after they did. JINR and GSI endorsed the report.In 1994, IUPAC published a recommendation on naming the disputed elements. For element 105, they proposed joliotium (Jl) after the French physicist Frédéric Joliot-Curie, a contributor to the development of nuclear physics and chemistry; this name was originally proposed by the Soviet team for element 102, which by then had long been called nobelium. This recommendation was criticized by the American scientists for several reasons. Firstly, their suggestions were scrambled: the names rutherfordium and hahnium, originally suggested by Berkeley for elements 104 and 105, were respectively reassigned to elements 106 and 108. Secondly, elements 104 and 105 were given names favored by JINR, despite earlier recognition of LBL as an equal co-discoverer for both of them. Thirdly and most importantly, IUPAC rejected the name seaborgium for element 106, having just approved a rule that an element could not be named after a living person, even though the 1993 report had given the LBL team the sole credit for its discovery.In 1995, IUPAC abandoned the controversial rule and established a committee of national representatives aimed at finding a compromise. They suggested seaborgium for element 106 in exchange for the removal of all the other American proposals, except for the established name lawrencium for element 103. The equally entrenched name nobelium for element 102 was replaced by flerovium after Georgy Flerov, following the recognition by the 1993 report that that element had been first synthesized in Dubna. This was rejected by American scientists and the decision was retracted. The name flerovium was later used for element 114.In 1996, IUPAC held another meeting, reconsidered all names in hand, and accepted another set of recommendations; it was approved and published in 1997. Element 105 was named dubnium (Db), after Dubna in Russia, the location of the JINR; the American suggestions were used for elements 102, 103, 104, and 106. The name dubnium had been used for element 104 in the previous IUPAC recommendation. The American scientists "reluctantly" approved this decision. IUPAC pointed out that the Berkeley laboratory had already been recognized several times, in the naming of berkelium, californium, and americium, and that the acceptance of the names rutherfordium and seaborgium for elements 104 and 106 should be offset by recognizing JINR's contributions to the discovery of elements 104, 105, and 106.Even after 1997, LBL still sometimes used the name hahnium for element 105 in their own material, doing so as recently as 2014. However, the problem was resolved in the literature as Jens Volker Kratz, editor of Radiochimica Acta, refused to accept papers not using the 1997 IUPAC nomenclature.
Isotopes:
Dubnium, having an atomic number of 105, is a superheavy element; like all elements with such high atomic numbers, it is very unstable. The longest-lasting known isotope of dubnium, 268Db, has a half-life of around a day. No stable isotopes have been seen, and a 2012 calculation by JINR suggested that the half-lives of all dubnium isotopes would not significantly exceed a day. Dubnium can only be obtained by artificial production.The short half-life of dubnium limits experimentation. This is exacerbated by the fact that the most stable isotopes are the hardest to synthesize. Elements with a lower atomic number have stable isotopes with a lower neutron–proton ratio than those with higher atomic number, meaning that the target and beam nuclei that could be employed to create the superheavy element have fewer neutrons than needed to form these most stable isotopes. (Different techniques based on rapid neutron capture and transfer reactions are being considered as of the 2010s, but those based on the collision of a large and small nucleus still dominate research in the area.)Only a few atoms of 268Db can be produced in each experiment, and thus the measured lifetimes vary significantly during the process. As of 2022, following additional experiments performed at the JINR's Superheavy Element Factory (which started operations in 2019), the half-life of 268Db is measured to be 16+6−4 hours. The second most stable isotope, 270Db, has been produced in even smaller quantities: three atoms in total, with lifetimes of 33.4 h, 1.3 h, and 1.6 h. These two are the heaviest isotopes of dubnium to date, and both were produced as a result of decay of the heavier nuclei 288Mc and 294Ts rather than directly, because the experiments that yielded them were originally designed in Dubna for 48Ca beams. For its mass, 48Ca has by far the greatest neutron excess of all practically stable nuclei, both quantitative and relative, which correspondingly helps synthesize superheavy nuclei with more neutrons, but this gain is compensated by the decreased likelihood of fusion for high atomic numbers.
Predicted properties:
According to the periodic law, dubnium should belong to group 5, with vanadium, niobium, and tantalum. Several studies have investigated the properties of element 105 and found that they generally agreed with the predictions of the periodic law. Significant deviations may nevertheless occur, due to relativistic effects, which dramatically change physical properties on both atomic and macroscopic scales. These properties have remained challenging to measure for several reasons: the difficulties of production of superheavy atoms, the low rates of production, which only allows for microscopic scales, requirements for a radiochemistry laboratory to test the atoms, short half-lives of those atoms, and the presence of many unwanted activities apart from those of synthesis of superheavy atoms. So far, studies have only been performed on single atoms.
Predicted properties:
Atomic and physical A direct relativistic effect is that as the atomic numbers of elements increase, the innermost electrons begin to revolve faster around the nucleus as a result of an increase of electromagnetic attraction between an electron and a nucleus. Similar effects have been found for the outermost s orbitals (and p1/2 ones, though in dubnium they are not occupied): for example, the 7s orbital contracts by 25% in size and is stabilized by 2.6 eV.A more indirect effect is that the contracted s and p1/2 orbitals shield the charge of the nucleus more effectively, leaving less for the outer d and f electrons, which therefore move in larger orbitals. Dubnium is greatly affected by this: unlike the previous group 5 members, its 7s electrons are slightly more difficult to extract than its 6d electrons.
Predicted properties:
Another effect is the spin–orbit interaction, particularly spin–orbit splitting, which splits the 6d subshell—the azimuthal quantum number ℓ of a d shell is 2—into two subshells, with four of the ten orbitals having their ℓ lowered to 3/2 and six raised to 5/2. All ten energy levels are raised; four of them are lower than the other six. (The three 6d electrons normally occupy the lowest energy levels, 6d3/2.)A singly ionized atom of dubnium (Db+) should lose a 6d electron compared to a neutral atom; the doubly (Db2+) or triply (Db3+) ionized atoms of dubnium should eliminate 7s electrons, unlike its lighter homologs. Despite the changes, dubnium is still expected to have five valence electrons; 7p energy levels have not been shown to influence dubnium and its properties. As the 6d orbitals of dubnium are more destabilized than the 5d ones of tantalum, and Db3+ is expected to have two 6d, rather than 7s, electrons remaining, the resulting +3 oxidation state is expected to be unstable and even rarer than that of tantalum. The ionization potential of dubnium in its maximum +5 oxidation state should be slightly lower than that of tantalum and the ionic radius of dubnium should increase compared to tantalum; this has a significant effect on dubnium's chemistry.Atoms of dubnium in the solid state should arrange themselves in a body-centered cubic configuration, like the previous group 5 elements. The predicted density of dubnium is 21.6 g/cm3.
Predicted properties:
Chemical Computational chemistry is simplest in gas-phase chemistry, in which interactions between molecules may be ignored as negligible. Multiple authors have researched dubnium pentachloride; calculations show it to be consistent with the periodic laws by exhibiting the properties of a compound of a group 5 element. For example, the molecular orbital levels indicate that dubnium uses three 6d electron levels as expected. Compared to its tantalum analog, dubnium pentachloride is expected to show increased covalent character: a decrease in the effective charge on an atom and an increase in the overlap population (between orbitals of dubnium and chlorine).Calculations of solution chemistry indicate that the maximum oxidation state of dubnium, +5, will be more stable than those of niobium and tantalum and the +3 and +4 states will be less stable. The tendency towards hydrolysis of cations with the highest oxidation state should continue to decrease within group 5 but is still expected to be quite rapid. Complexation of dubnium is expected to follow group 5 trends in its richness. Calculations for hydroxo-chlorido- complexes have shown a reversal in the trends of complex formation and extraction of group 5 elements, with dubnium being more prone to do so than tantalum.
Experimental chemistry:
Experimental results of the chemistry of dubnium date back to 1974 and 1976. JINR researchers used a thermochromatographic system and concluded that the volatility of dubnium bromide was less than that of niobium bromide and about the same as that of hafnium bromide. It is not certain that the detected fission products confirmed that the parent was indeed element 105. These results may imply that dubnium behaves more like hafnium than niobium.The next studies on the chemistry of dubnium were conducted in 1988, in Berkeley. They examined whether the most stable oxidation state of dubnium in aqueous solution was +5. Dubnium was fumed twice and washed with concentrated nitric acid; sorption of dubnium on glass cover slips was then compared with that of the group 5 elements niobium and tantalum and the group 4 elements zirconium and hafnium produced under similar conditions. The group 5 elements are known to sorb on glass surfaces; the group 4 elements do not. Dubnium was confirmed as a group 5 member. Surprisingly, the behavior on extraction from mixed nitric and hydrofluoric acid solution into methyl isobutyl ketone differed between dubnium, tantalum, and niobium. Dubnium did not extract and its behavior resembled niobium more closely than tantalum, indicating that complexing behavior could not be predicted purely from simple extrapolations of trends within a group in the periodic table.This prompted further exploration of the chemical behavior of complexes of dubnium. Various labs jointly conducted thousands of repetitive chromatographic experiments between 1988 and 1993. All group 5 elements and protactinium were extracted from concentrated hydrochloric acid; after mixing with lower concentrations of hydrogen chloride, small amounts of hydrogen fluoride were added to start selective re-extraction. Dubnium showed behavior different from that of tantalum but similar to that of niobium and its pseudohomolog protactinium at concentrations of hydrogen chloride below 12 moles per liter. This similarity to the two elements suggested that the formed complex was either DbOX−4 or [Db(OH)2X4]−. After extraction experiments of dubnium from hydrogen bromide into diisobutyl carbinol (2,6-dimethylheptan-4-ol), a specific extractant for protactinium, with subsequent elutions with the hydrogen chloride/hydrogen fluoride mix as well as hydrogen chloride, dubnium was found to be less prone to extraction than either protactinium or niobium. This was explained as an increasing tendency to form non‐extractable complexes of multiple negative charges. Further experiments in 1992 confirmed the stability of the +5 state: Db(V) was shown to be extractable from cation‐exchange columns with α‐hydroxyisobutyrate, like the group 5 elements and protactinium; Db(III) and Db(IV) were not. In 1998 and 1999, new predictions suggested that dubnium would extract nearly as well as niobium and better than tantalum from halide solutions, which was later confirmed.The first isothermal gas chromatography experiments were performed in 1992 with 262Db (half-life 35 seconds). The volatilities for niobium and tantalum were similar within error limits, but dubnium appeared to be significantly less volatile. It was postulated that traces of oxygen in the system might have led to formation of DbOBr3, which was predicted to be less volatile than DbBr5. Later experiments in 1996 showed that group 5 chlorides were more volatile than the corresponding bromides, with the exception of tantalum, presumably due to formation of TaOCl3. Later volatility studies of chlorides of dubnium and niobium as a function of controlled partial pressures of oxygen showed that formation of oxychlorides and general volatility are dependent on concentrations of oxygen. The oxychlorides were shown to be less volatile than the chlorides.In 2004–05, researchers from Dubna and Livermore identified a new dubnium isotope, 268Db, as a fivefold alpha decay product of the newly created element 115. This new isotope proved to be long-lived enough to allow further chemical experimentation, with a half-life of over a day. In the 2004 experiment, a thin layer with dubnium was removed from the surface of the target and dissolved in aqua regia with tracers and a lanthanum carrier, from which various +3, +4, and +5 species were precipitated on adding ammonium hydroxide. The precipitate was washed and dissolved in hydrochloric acid, where it converted to nitrate form and was then dried on a film and counted. Mostly containing a +5 species, which was immediately assigned to dubnium, it also had a +4 species; based on that result, the team decided that additional chemical separation was needed. In 2005, the experiment was repeated, with the final product being hydroxide rather than nitrate precipitate, which was processed further in both Livermore (based on reverse phase chromatography) and Dubna (based on anion exchange chromatography). The +5 species was effectively isolated; dubnium appeared three times in tantalum-only fractions and never in niobium-only fractions. It was noted that these experiments were insufficient to draw conclusions about the general chemical profile of dubnium.In 2009, at the JAEA tandem accelerator in Japan, dubnium was processed in nitric and hydrofluoric acid solution, at concentrations where niobium forms NbOF−4 and tantalum forms TaF−6. Dubnium's behavior was close to that of niobium but not tantalum; it was thus deduced that dubnium formed DbOF−4. From the available information, it was concluded that dubnium often behaved like niobium, sometimes like protactinium, but rarely like tantalum.In 2021, the volatile heavy group 5 oxychlorides MOCl3 (M = Nb, Ta, Db) were experimentally studied at the JAEA tandem accelerator. The trend in volatilities was found to be NbOCl3 > TaOCl3 ≥ DbOCl3, so that dubnium behaves in line with periodic trends. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dependency grammar**
Dependency grammar:
Dependency grammar (DG) is a class of modern grammatical theories that are all based on the dependency relation (as opposed to the constituency relation of phrase structure) and that can be traced back primarily to the work of Lucien Tesnière. Dependency is the notion that linguistic units, e.g. words, are connected to each other by directed links. The (finite) verb is taken to be the structural center of clause structure. All other syntactic units (words) are either directly or indirectly connected to the verb in terms of the directed links, which are called dependencies. Dependency grammar differs from phrase structure grammar in that while it can identify phrases it tends to overlook phrasal nodes. A dependency structure is determined by the relation between a word (a head) and its dependents. Dependency structures are flatter than phrase structures in part because they lack a finite verb phrase constituent, and they are thus well suited for the analysis of languages with free word order, such as Czech or Warlpiri.
History:
The notion of dependencies between grammatical units has existed since the earliest recorded grammars, e.g. Pāṇini, and the dependency concept therefore arguably predates that of phrase structure by many centuries. Ibn Maḍāʾ, a 12th-century linguist from Córdoba, Andalusia, may have been the first grammarian to use the term dependency in the grammatical sense that we use it today. In early modern times, the dependency concept seems to have coexisted side by side with that of phrase structure, the latter having entered Latin, French, English and other grammars from the widespread study of term logic of antiquity. Dependency is also concretely present in the works of Sámuel Brassai (1800–1897), a Hungarian linguist, Franz Kern (1830-1894), a German philologist, and of Heimann Hariton Tiktin (1850–1936), a Romanian linguist.Modern dependency grammars, however, begin primarily with the work of Lucien Tesnière. Tesnière was a Frenchman, a polyglot, and a professor of linguistics at the universities in Strasbourg and Montpellier. His major work Éléments de syntaxe structurale was published posthumously in 1959 – he died in 1954. The basic approach to syntax he developed seems to have been seized upon independently by others in the 1960s and a number of other dependency-based grammars have gained prominence since those early works. DG has generated a lot of interest in Germany in both theoretical syntax and language pedagogy. In recent years, the great development surrounding dependency-based theories has come from computational linguistics and is due, in part, to the influential work that David Hays did in machine translation at the RAND Corporation in the 1950s and 1960s. Dependency-based systems are increasingly being used to parse natural language and generate tree banks. Interest in dependency grammar is growing at present, international conferences on dependency linguistics being a relatively recent development (Depling 2011, Depling 2013, Depling 2015, Depling 2017, Depling 2019).
Dependency vs. phrase structure:
Dependency is a one-to-one correspondence: for every element (e.g. word or morph) in the sentence, there is exactly one node in the structure of that sentence that corresponds to that element. The result of this one-to-one correspondence is that dependency grammars are word (or morph) grammars. All that exist are the elements and the dependencies that connect the elements into a structure. This situation should be compared with phrase structure. Phrase structure is a one-to-one-or-more correspondence, which means that, for every element in a sentence, there is one or more nodes in the structure that correspond to that element. The result of this difference is that dependency structures are minimal compared to their phrase structure counterparts, since they tend to contain many fewer nodes.
Dependency vs. phrase structure:
These trees illustrate two possible ways to render the dependency and phrase structure relations (see below). This dependency tree is an "ordered" tree, i.e. it reflects actual word order. Many dependency trees abstract away from linear order and focus just on hierarchical order, which means they do not show actual word order. This constituency (= phrase structure) tree follows the conventions of bare phrase structure (BPS), whereby the words themselves are employed as the node labels.
Dependency vs. phrase structure:
The distinction between dependency and phrase structure grammars derives in large part from the initial division of the clause. The phrase structure relation derives from an initial binary division, whereby the clause is split into a subject noun phrase (NP) and a predicate verb phrase (VP). This division is certainly present in the basic analysis of the clause that we find in the works of, for instance, Leonard Bloomfield and Noam Chomsky. Tesnière, however, argued vehemently against this binary division, preferring instead to position the verb as the root of all clause structure. Tesnière's stance was that the subject-predicate division stems from term logic and has no place in linguistics. The importance of this distinction is that if one acknowledges the initial subject-predicate division in syntax is real, then one is likely to go down the path of phrase structure grammar, while if one rejects this division, then one must consider the verb as the root of all structure, and so go down the path of dependency grammar.
Dependency grammars:
The following frameworks are dependency-based: Algebraic syntax Operator grammar Link grammar Functional generative description Lexicase Meaning–text theory Word grammar Extensible dependency grammar Universal DependenciesLink grammar is similar to dependency grammar, but link grammar does not include directionality between the linked words, and thus does not describe head-dependent relationships. Hybrid dependency/phrase structure grammar uses dependencies between words, but also includes dependencies between phrasal nodes – see for example the Quranic Arabic Dependency Treebank. The derivation trees of tree-adjoining grammar are dependency structures, although the full trees of TAG rendered in terms of phrase structure, so in this regard, it is not clear whether TAG should be viewed more as a dependency or phrase structure grammar.
Dependency grammars:
There are major differences between the grammars just listed. In this regard, the dependency relation is compatible with other major tenets of theories of grammar. Thus like phrase structure grammars, dependency grammars can be mono- or multistratal, representational or derivational, construction- or rule-based.
Representing dependencies:
There are various conventions that DGs employ to represent dependencies. The following schemata (in addition to the tree above and the trees further below) illustrate some of these conventions: The representations in (a–d) are trees, whereby the specific conventions employed in each tree vary. Solid lines are dependency edges and lightly dotted lines are projection lines. The only difference between tree (a) and tree (b) is that tree (a) employs the category class to label the nodes whereas tree (b) employs the words themselves as the node labels. Tree (c) is a reduced tree insofar as the string of words below and projection lines are deemed unnecessary and are hence omitted. Tree (d) abstracts away from linear order and reflects just hierarchical order. The arrow arcs in (e) are an alternative convention used to show dependencies and are favored by Word Grammar. The brackets in (f) are seldom used, but are nevertheless quite capable of reflecting the dependency hierarchy; dependents appear enclosed in more brackets than their heads. And finally, the indentations like those in (g) are another convention that is sometimes employed to indicate the hierarchy of words. Dependents are placed underneath their heads and indented. Like tree (d), the indentations in (g) abstract away from linear order.
Representing dependencies:
The point to these conventions is that they are just that, namely conventions. They do not influence the basic commitment to dependency as the relation that is grouping syntactic units.
Types of dependencies:
The dependency representations above (and further below) show syntactic dependencies. Indeed, most work in dependency grammar focuses on syntactic dependencies. Syntactic dependencies are, however, just one of three or four types of dependencies. Meaning–text theory, for instance, emphasizes the role of semantic and morphological dependencies in addition to syntactic dependencies. A fourth type, prosodic dependencies, can also be acknowledged. Distinguishing between these types of dependencies can be important, in part because if one fails to do so, the likelihood that semantic, morphological, and/or prosodic dependencies will be mistaken for syntactic dependencies is great. The following four subsections briefly sketch each of these dependency types. During the discussion, the existence of syntactic dependencies is taken for granted and used as an orientation point for establishing the nature of the other three dependency types.
Types of dependencies:
Semantic dependencies Semantic dependencies are understood in terms of predicates and their arguments. The arguments of a predicate are semantically dependent on that predicate. Often, semantic dependencies overlap with and point in the same direction as syntactic dependencies. At times, however, semantic dependencies can point in the opposite direction of syntactic dependencies, or they can be entirely independent of syntactic dependencies. The hierarchy of words in the following examples show standard syntactic dependencies, whereas the arrows indicate semantic dependencies: The two arguments Sam and Sally in tree (a) are dependent on the predicate likes, whereby these arguments are also syntactically dependent on likes. What this means is that the semantic and syntactic dependencies overlap and point in the same direction (down the tree). Attributive adjectives, however, are predicates that take their head noun as their argument, hence big is a predicate in tree (b) that takes bones as its one argument; the semantic dependency points up the tree and therefore runs counter to the syntactic dependency. A similar situation obtains in (c), where the preposition predicate on takes the two arguments the picture and the wall; one of these semantic dependencies points up the syntactic hierarchy, whereas the other points down it. Finally, the predicate to help in (d) takes the one argument Jim but is not directly connected to Jim in the syntactic hierarchy, which means that semantic dependency is entirely independent of the syntactic dependencies.
Types of dependencies:
Morphological dependencies Morphological dependencies obtain between words or parts of words. When a given word or part of a word influences the form of another word, then the latter is morphologically dependent on the former. Agreement and concord are therefore manifestations of morphological dependencies. Like semantic dependencies, morphological dependencies can overlap with and point in the same direction as syntactic dependencies, overlap with and point in the opposite direction of syntactic dependencies, or be entirely independent of syntactic dependencies. The arrows are now used to indicate morphological dependencies.
Types of dependencies:
The plural houses in (a) demands the plural of the demonstrative determiner, hence these appears, not this, which means there is a morphological dependency that points down the hierarchy from houses to these. The situation is reversed in (b), where the singular subject Sam demands the appearance of the agreement suffix -s on the finite verb works, which means there is a morphological dependency pointing up the hierarchy from Sam to works. The type of determiner in the German examples (c) and (d) influences the inflectional suffix that appears on the adjective alt. When the indefinite article ein is used, the strong masculine ending -er appears on the adjective. When the definite article der is used, in contrast, the weak ending -e appears on the adjective. Thus since the choice of determiner impacts the morphological form of the adjective, there is a morphological dependency pointing from the determiner to the adjective, whereby this morphological dependency is entirely independent of the syntactic dependencies. Consider further the following French sentences: The masculine subject le chien in (a) demands the masculine form of the predicative adjective blanc, whereas the feminine subject la maison demands the feminine form of this adjective. A morphological dependency that is entirely independent of the syntactic dependencies therefore points again across the syntactic hierarchy.
Types of dependencies:
Morphological dependencies play an important role in typological studies. Languages are classified as mostly head-marking (Sam work-s) or mostly dependent-marking (these houses), whereby most if not all languages contain at least some minor measure of both head and dependent marking.
Types of dependencies:
Prosodic dependencies Prosodic dependencies are acknowledged in order to accommodate the behavior of clitics. A clitic is a syntactically autonomous element that is prosodically dependent on a host. A clitic is therefore integrated into the prosody of its host, meaning that it forms a single word with its host. Prosodic dependencies exist entirely in the linear dimension (horizontal dimension), whereas standard syntactic dependencies exist in the hierarchical dimension (vertical dimension). Classic examples of clitics in English are reduced auxiliaries (e.g. -ll, -s, -ve) and the possessive marker -s. The prosodic dependencies in the following examples are indicated with the hyphen and the lack of a vertical projection line: The hyphens and lack of projection lines indicate prosodic dependencies. A hyphen that appears on the left of the clitic indicates that the clitic is prosodically dependent on the word immediately to its left (He'll, There's), whereas a hyphen that appears on the right side of the clitic (not shown here) indicates that the clitic is prosodically dependent on the word that appears immediately to its right. A given clitic is often prosodically dependent on its syntactic dependent (He'll, There's) or on its head (would've). At other times, it can depend prosodically on a word that is neither its head nor its immediate dependent (Florida's).
Types of dependencies:
Syntactic dependencies Syntactic dependencies are the focus of most work in DG, as stated above. How the presence and the direction of syntactic dependencies are determined is of course often open to debate. In this regard, it must be acknowledged that the validity of syntactic dependencies in the trees throughout this article is being taken for granted. However, these hierarchies are such that many DGs can largely support them, although there will certainly be points of disagreement. The basic question about how syntactic dependencies are discerned has proven difficult to answer definitively. One should acknowledge in this area, however, that the basic task of identifying and discerning the presence and direction of the syntactic dependencies of DGs is no easier or harder than determining the constituent groupings of phrase structure grammars. A variety of heuristics are employed to this end, basic tests for constituents being useful tools; the syntactic dependencies assumed in the trees in this article are grouping words together in a manner that most closely matches the results of standard permutation, substitution, and ellipsis tests for constituents. Etymological considerations also provide helpful clues about the direction of dependencies. A promising principle upon which to base the existence of syntactic dependencies is distribution. When one is striving to identify the root of a given phrase, the word that is most responsible for determining the distribution of that phrase as a whole is its root.
Linear order and discontinuities:
Traditionally, DGs have had a different approach to linear order (word order) than phrase structure grammars. Dependency structures are minimal compared to their phrase structure counterparts, and these minimal structures allow one to focus intently on the two ordering dimensions. Separating the vertical dimension (hierarchical order) from the horizontal dimension (linear order) is easily accomplished. This aspect of dependency structures has allowed DGs, starting with Tesnière (1959), to focus on hierarchical order in a manner that is hardly possible for phrase structure grammars. For Tesnière, linear order was secondary to hierarchical order insofar as hierarchical order preceded linear order in the mind of a speaker. The stemmas (trees) that Tesnière produced reflected this view; they abstracted away from linear order to focus almost entirely on hierarchical order. Many DGs that followed Tesnière adopted this practice, that is, they produced tree structures that reflect hierarchical order alone, e.g.
Linear order and discontinuities:
The traditional focus on hierarchical order generated the impression that DGs have little to say about linear order, and it has contributed to the view that DGs are particularly well-suited to examine languages with free word order. A negative result of this focus on hierarchical order, however, is that there is a dearth of DG explorations of particular word order phenomena, such as of standard discontinuities. Comprehensive dependency grammar accounts of topicalization, wh-fronting, scrambling, and extraposition are mostly absent from many established DG frameworks. This situation can be contrasted with phrase structure grammars, which have devoted tremendous effort to exploring these phenomena.
Linear order and discontinuities:
The nature of the dependency relation does not, however, prevent one from focusing on linear order. Dependency structures are as capable of exploring word order phenomena as phrase structures. The following trees illustrate this point; they represent one way of exploring discontinuities using dependency structures. The trees suggest the manner in which common discontinuities can be addressed. An example from German is used to illustrate a scrambling discontinuity: The a-trees on the left show projectivity violations (= crossing lines), and the b-trees on the right demonstrate one means of addressing these violations. The displaced constituent takes on a word as its head that is not its governor. The words in red mark the catena (=chain) of words that extends from the root of the displaced constituent to the governor of that constituent. Discontinuities are then explored in terms of these catenae. The limitations on topicalization, wh-fronting, scrambling, and extraposition can be explored and identified by examining the nature of the catenae involved.
Syntactic functions:
Traditionally, DGs have treated the syntactic functions (= grammatical functions, grammatical relations) as primitive. They posit an inventory of functions (e.g. subject, object, oblique, determiner, attribute, predicative, etc.). These functions can appear as labels on the dependencies in the tree structures, e.g.
Syntactic functions:
The syntactic functions in this tree are shown in green: ATTR (attribute), COMP-P (complement of preposition), COMP-TO (complement of to), DET (determiner), P-ATTR (prepositional attribute), PRED (predicative), SUBJ (subject), TO-COMP (to complement). The functions chosen and abbreviations used in the tree here are merely representative of the general stance of DGs toward the syntactic functions. The actual inventory of functions and designations employed vary from DG to DG.
Syntactic functions:
As a primitive of the theory, the status of these functions is very different from that in some phrase structure grammars. Traditionally, phrase structure grammars derive the syntactic functions from the constellation. For instance, the object is identified as the NP appearing inside finite VP, and the subject as the NP appearing outside of finite VP. Since DGs reject the existence of a finite VP constituent, they were never presented with the option to view the syntactic functions in this manner. The issue is a question of what comes first: traditionally, DGs take the syntactic functions to be primitive and they then derive the constellation from these functions, whereas phrase structure grammars traditionally take the constellation to be primitive and they then derive the syntactic functions from the constellation.
Syntactic functions:
This question about what comes first (the functions or the constellation) is not an inflexible matter. The stances of both grammar types (dependency and phrase structure) are not narrowly limited to the traditional views. Dependency and phrase structure are both fully compatible with both approaches to the syntactic functions. Indeed, monostratal systems, that are solely based on dependency or phrase structure, will likely reject the notion that the functions are derived from the constellation or that the constellation is derived from the functions. They will take both to be primitive, which means neither can be derived from the other. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Voiced labiodental plosive**
Voiced labiodental plosive:
The voiced labiodental plosive or stop is a consonant sound produced like a [b], but with the lower lip contacting the upper teeth, as in [v]. This can be represented in the IPA as ⟨b̪⟩. A separate symbol that is sometimes seen, especially in Bantu linguistics, but not recognized by the IPA, is the db ligature ⟨ȸ⟩.
The voiced labiodental plosive is not known to be phonemic in any language. However, it does occur allophonically: In the Austronesian language Sika, this sound occurs as an allophone of the labiodental flap in careful pronunciation.The XiNkuna dialect of Tsonga has affricates, [p̪͡f] (voiceless labiodental affricate) and [b̪͡v] (voiced labiodental affricate).
Features:
Features of the "voiced labiodental stop": Its manner of articulation is occlusive, which means it is produced by obstructing airflow in the vocal tract. Since the consonant is also oral, with no nasal outlet, the airflow is blocked entirely, and the consonant is a plosive.
Its place of articulation is labiodental, which means it is articulated with the lower lip and the upper teeth.
Its phonation is voiced, which means the vocal cords vibrate during the articulation.
It is an oral consonant, which means air is allowed to escape through the mouth only.
It is a central consonant, which means it is produced by directing the airstream along the center of the tongue, rather than to the sides.
The airstream mechanism is pulmonic, which means it is articulated by pushing air solely with the intercostal muscles and diaphragm, as in most sounds. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GALNT2**
GALNT2:
Polypeptide N-acetylgalactosaminyltransferase 2 is an enzyme that in humans is encoded by the GALNT2 gene.This gene encodes polypeptide N-acetylgalactosaminyltransferase 2, a member of the GalNAc-transferases family. This family transfers an N-acetyl galactosamine to the hydroxyl group of a serine or threonine residue in the first step of O-linked oligosaccharide biosynthesis. The localization site of this particular enzyme is preponderantly the trans-Golgi. Individual GalNAc-transferases have distinct activities, and initiation of O-glycosylation in a cell is regulated by a repertoire of GalNAc-transferases. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Church bell**
Church bell:
A church bell in Christian architecture is a bell which is rung in a church for a variety of religious purposes, and can be heard outside the building. Traditionally they are used to call worshippers to the church for a communal service, and to announce the fixed times of daily Christian prayer, called the canonical hours, which number seven and are contained in breviaries. They are also rung on special occasions such as a wedding, or a funeral service. In some religious traditions they are used within the liturgy of the church service to signify to people that a particular part of the service has been reached. The ringing of church bells, in the Christian tradition, is also believed to drive out demons.The traditional European church bell (see cutaway drawing) used in Christian churches worldwide consists of a cup-shaped metal resonator with a pivoted clapper hanging inside which strikes the sides when the bell is swung. It is hung within a steeple or belltower of a church or religious building, so the sound can reach a wide area. Such bells are either fixed in position ("hung dead") or hung from a pivoted beam (the "headstock") so they can swing to and fro. A rope hangs from a lever or wheel attached to the headstock, and when the bell ringer pulls on the rope the bell swings back and forth and the clapper hits the inside, sounding the bell. Bells that are hung dead are normally sounded by hitting the sound bow with a hammer or occasionally by a rope which pulls the internal clapper against the bell.
Church bell:
A church may have a single bell, or a collection of bells which are tuned to a common scale. They may be stationary and chimed, rung randomly by swinging through a small arc, or swung through a full circle to enable the high degree of control of English change ringing.
Before modern communications, church bells were a common way to call the community together for all purposes, both sacred and secular.
Uses and traditions:
Call to prayer Oriental Orthodox Christians, such as Copts and Indians, use a breviary such as the Agpeya and Shehimo to pray the canonical hours seven times a day while facing in the eastward direction; church bells are tolled, especially in monasteries, to mark these seven fixed prayer times.In Christianity, some churches ring their church bells from belltowers three times a day, at 9 am, 12 pm and 3 pm to summon the Christian faithful to recite the Lord's Prayer; the injunction to pray the Lord's prayer thrice daily was given in Didache 8, 2 f., which, in turn, was influenced by the Jewish practice of praying thrice daily found in the Old Testament, specifically in Psalm 55:17, which suggests "evening and morning and at noon", and Daniel 6:10, in which the prophet Daniel prays thrice a day. The early Christians thus came to pray the Lord's Prayer at 9 am, 12 pm and 3 pm.
Uses and traditions:
Many Catholic Christian churches ring their bells thrice a day, at 6 am, 12 pm, and 6 pm to call the faithful to recite the Angelus, a prayer recited in honour of the Incarnation of God.Some Protestant Christian Churches ring church bells during the congregational recitation of the Lord's Prayer, after the sermon, in order to alert those who are unable to be present to "unite themselves in spirit with the congregation".In many historic Christian Churches, church bells are also rung on All Hallows' Eve, as well as during the processions of Candlemas and Palm Sunday; the only time of the Christian Year when church bells are not rung include Maundy Thursday through the Easter Vigil. The Christian tradition of the ringing of church bells from a belltower is analogous to the Islamic tradition of the adhan from a minaret.
Uses and traditions:
Call to worship Most Christian denominations ring church bells to call the faithful to worship, signalling the start of a mass or service of worship.
Uses and traditions:
In the United Kingdom predominantly in the Anglican church, there is a strong tradition of change ringing on full-circle tower bells for about half an hour before a service. This originated from the early 17th century when bell ringers found that swinging a bell through a large arc gave more control over the time between successive strikes of the clapper. This culminated in ringing bells through a full circle, which let ringers easily produce different striking sequences; known as changes.
Uses and traditions:
Exorcism of demons In Christianity, the ringing of church bells is traditionally believed to drive out demons and other unclean spirits. Inscriptions on church bells relating to this purpose of church bells, as well as the purpose of serving as a call to prayer and worship, were customary, for example "the sound of this bell vanquishes tempests, repels demons, and summons men". Some churches have several bells with the justification that "the more bells a church had, the more loudly they rang, and the greater the distance over which they could be heard, the less likely it was that evil forces would trouble the parish." Funeral and memorial ringing The ringing of a church bell in the English tradition to announce a death is called a death knell. The pattern of striking depended on the person who had died; for example in the counties of Kent and Surrey in England it was customary to ring three times three strokes for a man and three times two for a woman, with a varying usage for children. The age of the deceased was then rung out. In small settlements this could effectively identify who had just died.There were three occasions surrounding a death when bells could be rung. There was the "Passing Bell" to warn of impending death, the second the Death Knell to announce the death, and the last was the "Lych Bell", or "Corpse Bell" which was rung at the funeral as the procession approached the church. This latter is known today as the Funeral toll.
Uses and traditions:
A more modern tradition where there are full-circle bells is to use "half-muffles" when sounding one bell as a tolled bell, or all the bells in change-ringing. This means a leather muffle is placed on the clapper of each bell so that there is a loud "open" strike followed by a muffled strike, which has a very sonorous and mournful effect. The tradition in the United Kingdom is that bells are only fully muffled for the death of a sovereign. A slight variant on this rule occurred in 2015 when the bones of Richard III of England were interred in Leicester Cathedral 532 years after his death.
Uses and traditions:
Sanctus bells The term "Sanctus bell" traditionally referred to a bell suspended in a bell-cot at the apex of the nave roof, over the chancel arch, or hung in the church tower, in medieval churches. This bell was rung at the singing of the Sanctus and again at the elevation of the consecrated elements, to indicate to those not present in the building that the moment of consecration had been reached. The practice and the term remain in common use in many Anglican churches.
Uses and traditions:
Within the body of a church the function of a sanctus bell can also be performed by a small hand bell or set of such bells (called altar bells) rung shortly before the consecration of the bread and wine into the Body and Blood of Christ and again when the consecrated elements are shown to the people. Sacring rings or "Gloria wheels" are commonly used in Catholic churches in Spain and its former colonies for this purpose.
Uses and traditions:
Orthodox Church In the Eastern Orthodox Church there is a long and complex history of bell ringing, with particular bells being rung in particular ways to signify different parts of the divine services, Funeral tolls, etc. This custom is particularly sophisticated in the Russian Orthodox Church. Russian bells are usually stationary, and are sounded by pulling on a rope that is attached to the clapper so that it will strike the inside of the bell.
Uses and traditions:
Victory Celebration The noon church bell tolling in Europe has a specific historical significance that has its roots in the Siege of Belgrade by the Ottomans in 1456. Initially, the bell ringing was intended as a call to prayer for the victory of the defenders of Belgrade. However, because in many European countries the news of victory arrived before the order for prayer, the ringing of the church bells was believed to be in celebration of the victory. As a result, the significance of noon bell ringing is now a commemoration of John Hunyadi's victory against the Turks.
Uses and traditions:
Other uses Clock chimes Some churches have a clock chime which uses a turret clock to broadcast the time by striking the hours and sometimes the quarters. A well-known musical striking pattern is the Westminster Quarters. This is only done when the bells are stationary, and the clock mechanism actuates hammers striking on the outside of the sound-bows of the bells. In the cases of bells which are normally swung for other ringing, there is a manual lock-out mechanism which prevents the hammers from operating whilst the bells are being rung.
Uses and traditions:
Warning In World War II in Great Britain, all church bells were silenced, to ring only to inform of an invasion by enemy troops. However this ban was lifted temporarily in 1942 by order of Winston Churchill. Starting with Easter Sunday, April 25, 1943, the Control of Noise (Defence) (No. 2) Order, 1943, allowed that church bells could be rung to summon worshippers to church on Sundays, Good Friday and Christmas Day. On May 27, 1943, all restrictions were removed.In the 2021 German floods it was reported that church bells were rung to warn inhabitants of coming floods. In Beyenburg in Wuppertal the last friar of Steinhaus Abbey rang the storm bells after other systems failed. Some church bells are being used in England for similar purposes.
Design and ringing technique:
Christian church bells have the form of a cup-shaped cast metal resonator with a flared thickened rim, and a pivoted clapper hanging from its centre inside. It is usually mounted high in a bell tower on top of the church, so it can be heard by the surrounding community. The bell is suspended from a headstock which can swing on bearings. A rope is tied to a wheel or lever on the headstock, and hangs down to the bell ringer. To ring the bell, the ringer pulls on the rope, swinging the bell. The motion causes the clapper to strike the inside of the bell rim as it swings, thereby sounding the bell. Some bells have full-circle wheels, which is used to swing the bell through a larger arc, such as in the United Kingdom where full- circle ringing is practised.
Design and ringing technique:
Bells which are not swung are "chimed", which means they are struck by an external hammer, or by a rope attached to the internal clapper, which is the tradition in Russia.
Blessing of bells:
In some churches, bells are often blessed before they are hung.
Blessing of bells:
In the Roman Catholic Church the name Baptism of Bells has been given to the ceremonial blessing of church bells, at least in France, since the eleventh century. It is derived from the washing of the bell with holy water by the bishop, before he anoints it with the "oil of the infirm" without and with chrism within; a fuming censer is placed under it and the bishop prays that these sacramentals of the Church may, at the sound of the bell, put the demons to flight, protect from storms, and call the faithful to prayer.
History:
Before the introduction of church bells into the Christian Church, different methods were used to call the worshippers: playing trumpets, hitting wooden planks, shouting, or using a courier. In AD 604, Pope Sabinian officially sanctioned the usage of bells. These tintinnabula were made from forged metal and did not have large dimensions. Larger bells were made at the end of the 7th and during the 8th century by casting metal originating from Campania. The bells consequently took the name of campana and nola from the eponymous city in the region. This would explain the apparently erroneous attribution of the origin of church bells to Paulinus of Nola in AD 400. By the early Middle Ages, church bells became common in Europe. They were first common in northern Europe, reflecting Celtic influence, especially that of Irish missionaries. Before the use of church bells, Greek monasteries would ring a flat metal plate (see semantron) to announce services. The signa and campanae used to announce services before Irish influence may have been flat plates like the semantron rather than bells. The oldest surviving circle of bells in Great Britain is housed in St Lawrence Church, Ipswich.
In literature:
The evocative sound of church bells has inspired many writers, both in poetry and prose. One example is an early poem by the English poet Letitia Elizabeth Landon entitled simply, Bells. She returned to the subject towards the end of her life in Fisher's Drawing Room Scrap Book, 1839 with The Village Bells., a poetical illustration to a picture by J. Franklin. How Soft the Music of those Village Bells.
Controversies about noise:
The sound of church bells is capable of causing noise that interrupts or prevents people from sleeping. A 2013 study from the Swiss Federal Institute of Technology in Zurich found that "An estimated 2.5-3.5 percent of the population in the Canton of Zurich experiences at least one additional awakening per night due to church bell noise." It concluded that "The number of awakenings could be reduced by more than 99 percent by, for example, suspending church bell ringing between midnight and 06 h in the morning", or by "about 75 percent (...) by reducing the sound-pressure levels of bells by 5 dB."In the Netherlands, there have been lawsuits about church bell noise pollution experienced by nearby residents. The complaints are usually, but not always, raised by new local residents (or tourists who spend the night in the neighbourhood) who are not used to the noise at night or during the day. Local residents who had been used to it for longer usually retort that the newcomers "should have known this before they moved here" and that the ringing bells "belong to the local tradition", which sometimes goes back more than a hundred years. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ludomusicology**
Ludomusicology:
Ludomusicology (also called video game music studies or video game music research) is a field of academic research and scholarly analysis focusing on video game music, understood as the music found in video games and in related contexts. It is closely related to the fields of musicology and interactive and games audio research, and game music and audio are sometimes studied as a united phenomenon. Ludomusicology is also related to the field of game studies, as music is one element of the wider video game text and some theories on video game functions are directly relevant to music.
Ludomusicology:
Whereas the overarching areas of interactive and game audio research and game studies are highly interdisciplinary (ranging from interface research, neurological research, psychology and informatics to sound studies, cultural studies and media studies), ludomusicology as a subfield has been mainly driven by musicologists (albeit with an openness to interdisciplinary inquiry). Ludomusicology not only deals with music in games and music games as its subject matter, but is also interested in the ways in which games and their music have become subjects of playful engagement themselves, e.g. within the frame of fancultural practices. Additionally and more generally, game music challenges the ways in which we think about music, and subsequently how we study it.The number of anthologies and monographs dealing with the specific subject of game music and music in game culture is steadily increasing. The ludomusicological community now organizes conferences, runs subgroups within the musicological societies, and engages in discourse with scholarly colleagues in a wide range of related fields.
History:
Academic research on video game music began in the late 1990s, and developed through the mid 2000s. Early research on the topic often involved historical studies of game music, or comparative studies of video game music and film music (see, for instance, Zach Whalen's article "Play Along – An Approach to Videogame Music" which includes both). The study of video game music is also known by some as "ludomusicology" – a portmanteau of "ludology" (the study of games and gameplay) and "musicology" (the study and analysis of music) – a term coined independently by Guillaume Laroche and Roger Moseley.A prominent figure in early video game music and audio research is Karen Collins, who is associate professor at the University of Waterloo and Canada Research Chair in Interactive Audio at the University of Waterloo Games Institute. Her monograph Game Sound: An Introduction to the History, Theory and Practice of Video Game Music and Sound Design (MIT Press 2008) is considered a seminal work in the field, and was influential in the subsequent development of video game music studies.
History:
In 2012, the Ludomusicology Research Group held its inaugural conference at the University of Oxford. This was the first conference in the world to be focused specifically on video game music. In 2014 the inaugural North American Conference on Video Game Music was held at Youngstown State University. Both conferences have since been held annually, at varying locations in Europe and North America respectively.
History:
In late 2016, the Society for the Study of Sound and Music in Games (SSSMG) was launched by the Ludomusicology Research Group in conjunction with the organisers of the North American Conference on Video Game Music and the Audio Mostly conference. The SSSMG is the first international society dedicated to the study of video game music. In September 2017, the SSSMG announced the planned launch of a new journal dedicated to video game music and sound research, the Journal of Sound and Music in Games.
Areas of enquiry:
Ludomusicology examines any and all aspects of video game music and audio, some of which are described in brief here.
Areas of enquiry:
Interactivity Interactivity is one of the major differentiating qualities of video game music, distinguishing it from other screen media music through the incorporation of player actions. In interactive gameplay, the sound and music played by the game can respond to the actions of the player within the game. Furthermore, as video games frequently feature non-linear or multi-linear timelines, their music can be similarly multi-threaded in both its form and its experience.: 3–4 Because each player takes an independently-chosen path through the game, any player's experience of a game's music will be different (considered as a whole) to any other player's experience from the same game, even though constructed from the same basic elements. Therefore, the intent and output of the composer and the music's effect on the player (which similarly varies from player to player) must both be considered during analysis.A related concept to that of interactivity is that of immersion, which considers the ability of a video game to draw the player into a deep engagement with the game's diegetic space. Isabella van Elferen has developed a model of video game music immersion called "the ALI model", which understands player immersion to be a confluence of "musical affect, musical literacy and musical interaction": Affect: "personal investment in a given situation through memory, emotion and identification" Literacy: "fluency in hearing and interpreting... music through the fact of our frequent exposure to [it]", which builds on such literacy from other musical media Interaction: the player's reactions to music, and vice versa.
Areas of enquiry:
Technology Historical studies of video game music usually incorporate examinations of game music technology. Technological limitations of audio chips in early consoles and computer systems were, in many ways, instrumental in shaping the development of both the functions and aesthetics of game music. For example, Karen Collins describes how the Television Interface Adapter in the Atari 2600 created tones that could be out of tune by up to half a semitone; this led to minimal music in games for this platform, and substantial modifications to the music of ported games.: 21–23 Similarly, Melanie Fritsch observes the relative freedoms afforded to game composers by CD audio (higher quality, though limited to 79.8 minutes) and later MP3 (similar quality to CD audio but with compression minimising length restrictions), while also noting the challenges presented by writing increasingly detailed music to accompany hundreds of hours of gameplay. Another technology that has attracted significant academic attention is iMUSE, a MIDI-based system developed by LucasArts that allowed dynamic and smooth musical transitions, which were triggered by game conditions but which would occur at musically expedient (pre-designated) positions in the soundtrack.: 51–57 Ludomusicology also investigates muso-technological practices in fields surrounding video games. For example, scholarly attention is turning to chiptune, which is the creation of music using old video game or sound chip hardware (a definition which is sometimes broadened to include the imitation of the resulting aesthetic).
Areas of enquiry:
Composition Ludomusicology examines the composition of video game music, both in relation to and as distinct from other forms of music composition. Studies of game music compositional processes are often contributed by active composers and industry practitioners (for example, Winifred Phillips's book A Composer's Guide to Game Music, and Stephen Baysted's chapter "Palimpsest, Pragmatism and the Aesthetics of Genre Transformation: Composing the Hybrid Score to Electronic Arts' Need for Speed Shift 2: Unleashed"). A significant distinction of video game music composition is the need to incorporate the player's interactivity into the compositional process, and particularly as part of the negotiation of engagement and immersion.: 35–36 Composition is also at the nexus of creative potential and technological possibility, and throughout the development of game music (and particularly in its early phases) this has had significant effects on both compositional processes and game music aesthetics.: 35 Music video games Music video games, which use music as part of a dominant gameplay mechanism, are a focus of many ludomusicological studies. The clear relationship between music and gameplay in games like the Guitar Hero and Rock Band series facilitates the study of performance and performativity within gameplay, and provokes questions of musical ideology, performance practice and multimedia pedagogy. Kiri Miller writes that music video games feature a smaller disparity between the actions of player and avatar than is usually present in video games, and that this can encourage heightened levels of physical and musical engagement. David Roesner, Anna Paisley, and Gianna Cassidy have examined these phenomena within the classroom context, observing that music video games can inspire not only musical creativity, but the positive self-perception of musicality in students; they suggest that such effects can be used to encourage student engagement and development within both musical and non-musical curricula.
Areas of enquiry:
Relationship to music in other screen media Because video games are (usually) screen-based media, there are strong links between the study of game music and the study of music in other screen-based media (like film and television). Concepts such as diegesis and acousmatics, which originate in film and film audio studies, are broadly applicable to video game music analyses, often with minimal adjustment. Furthermore, there are similarities between video game and film music techniques as varying stages throughout video game music history. For example, Neil Lerner notes a relationship between music in early/silent cinema and game music aesthetics from the 1970s onwards, on the basis of "largely nonverbal communication system[s]" and "continuous musical accompaniment". Similarly, Gregor Herzfeld compares the use of high energy music in Gran Turismo to the use of rock music in action films like The Fast and the Furious, due to associations with risky and/or exciting behavior.However, it is also well noted within ludomusicological discourse that video games are very different media to film and television due to the player's interaction. Consequently, the application of concepts like diegesis does require a nuanced approach that takes the peculiarities of the video game medium into account. For example, Collins observes that linear approaches to musical analysis (such as the observation of synchronicity between musical and visual cues) fail to address the non-linear timescales that are typical of video games.: 3–4 This forms a basis of Collins's reiteration of a warning by video game theorists against "theoretical imperialism".: 5 Research methods Several academics have written about the research methods involved in ludomusicology. One of the more comprehensive of these studies is found in Tim Summers's book Understanding Video Game Music, in which Summers describes the process of "analytical play", wherein the analyst is "deliberately subverting the game's expectations of the player's actions.... At moments when game rules are tested, the architecture is often clearest. By playing experimentally (or using 'analytical play') to investigate the musical system in the game and comparing multiple play sessions, the musical mechanics of the game programming can be divined".: 35 Summers places this analytical method alongside more conventional sources of research data, both from inside the game (e.g., programmatic and musical information, the latter of which frequently involves the application of conventional musicology and music theory to the video game music text) and from texts and communities surrounding the game.: 34–50 In her monograph Performing Bytes. Musikperformances der Computerspielkultur, Melanie Fritsch proposes an overarching ludomusicological theoretical framework, building on a subject-specific concept of performance that emphasizes the relationship between the two dimensions of performance ("ausführen" and "aufführen"). This concept is used as the initial step for developing an extended vocabulary on games and computer games, rooted in the relevant discourse of Game Studies. On that basis, a game performance theory is developed that allows to analyze gameplaying as a form of performance. In the next section an introduction to the theorization about "Music as Performance", as conducted by researchers such as Nicholas Cook, Carolyn Abbate, Philip Auslander and Christopher Small, is provided. Building on an understanding of music as a performative and playful process a terminological framework is developed that allows to analyze both games and music as playful performative practices, including questions of embodiment and socio-cultural aspects. The theoretical model is applied in six case studies to demonstrate how music as a design element in games, music games, and participatory musical practices in computer game culture can be fruitfully analyzed with this terminology.
Groups and Conferences:
Ludomusicology Research Group The Ludomusicology Research Group is an inter-university research organisation focusing on the study of music in games, music games and music in video game culture, composed of five researchers: Melanie Fritsch, Andra Ivănescu, Michiel Kamp, Tim Summers, and Mark Sweeney. Together they organise an annual international conference held in the UK or mainland Europe. The Ludo2018 held in Leipzig, Germany, in April 2018, was the biggest Ludo-conference so far, attracting more than 80 participants from all over the world. The group was originally founded by Kamp, Summers and Sweeney in August 2011, who have also edited a collection of essays based around the study of game sound entitled Ludomusicology: Approaches to Video Game Music, published in July 2016. They also edited a double special issue of The Soundtrack and initiated a new book series called Studies in Game Sound and Music in 2017. In September 2016, Tim Summers' book Understanding Video Game Music was published by Cambridge University Press. Fritsch officially joined the group in 2016. She had edited the 2nd issue of the online journal ACT - Zeitschrift für Musik und Performance, published in July 2011 that included ludomusicological contributions written by Tim Summers, Steven Reale and Jason Brame. She had been a regular at the conferences since 2012 and published several book chapters on the topic. Ivănescu joined the group in 2021. Her monograph Popular Music and the Nostalgia Video Game was published by Palgrave Macmillan in 2019. Whereas Ivănescu, Kamp, Summers, and Sweeney have a background in Musicology, Fritsch has her background in Performance Studies.
Groups and Conferences:
North American Conference on Video Game Music (NACVGM) The North American Conference on Video Game Music (NACVGM) is an international conference on video game music held annually in North America since 2014. The first conference was organised by Neil Lerner, Steven Reale, and William Gibbons.
Groups and Conferences:
Society for the Study of Sound and Music in Games (SSSMG) In late 2016 the Society for the Study of Sound and Music in Games (SSSMG) was launched by the Ludomusicology Research Group in conjunction with the organisers of the North American Conference on Video Game Music and the Audio Mostly conference. The SSSMG has the aim of bringing together both practitioners and researchers from across the globe in order to develop the field's understanding of sound and video game music and audio. Its initial focus is the use of its website as a "hub" for communication and resource centralisation, including a video game music research bibliography (a project initially begun by the Ludomusicology Research Group).In 2018, the Journal of Sound and Music in Games was launched in collaboration with University of California Press. JSMG is a specialist journal for scholars and industry practitioners of video game music and sound. While the core audience is game music scholars, the interdisciplinary nature of the field means that the journal encourages submissions from authors who identify primarily with other fields (such as game studies, computer science, educational science, performance studies etc.), as well as practitioners (game music composers, sound designers etc.). While JSMG primarily focuses on video games, studies of music and/or sound in any form of game (for example, sports, historical games predating video games, and so on) are explicitly welcome. JSMG's principal focus is original research articles, supplemented from time-to-time by a range of other content including review articles surveying important subjects, reviews of pertinent books and games, communications with responses, and interviews. The first issue is scheduled in early 2020.
Groups and Conferences:
AMS Ludomusicology Study Group The Ludomusicology Study Group of the American Musicological Society was founded in 2015 and "is dedicated to facilitating academic research on music interactive media", including holding a panel on video game music as part of the annual meetings of the Society.
Ludomusicology Society of Australia (LSA) The Ludomusicology Society of Australia was launched in April 2017, during the Ludo2017 conference in Bath, UK; it aims to "offer a centralised and local professional body nurturing game music studies for academics, people in industry and game music fans alike in the Australasian region." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Smart shop**
Smart shop:
A smart shop (or smartshop) is a retail establishment that specializes in the sale of psychoactive substances, usually including psychedelics, as well as related literature and paraphernalia. The name derives from the name "smart drugs", a class of drugs and food supplements intended to affect cognitive enhancements which are often sold in smart shops.
The rise of anonymous smart shops:
Some governments do not tolerate smart shops in any form as a part of their crime prevention. For example, the Government of the Kingdom of Sweden with its zero tolerance drug policy, does not accept physical smart shops and has shut down every known Swedish online smart shop that have been selling pure research chemicals on the visible Web. To circumvent this the usage of anonymous marketplaces through the Tor network has taken over since the establishment of Silk Road, which in contrast took the FBI two and a half years to take down for one month.
Typical products specialization:
Smart drugs Smart shops (often webshops) offer prescription-free pharmacy products such as Ritalin, Adderall, and modafinil, for example.
Typical products specialization:
Psychedelics, dissociatives, and deliriants Traditional entheogens Smart shops are best known in practice for selling whatever psychedelics, dissociatives, entactogens and deliriants local law permits. In the Netherlands, which is home to most of the smart shops in Europe, this includes Salvia divinorum, Amanita muscaria, Peyote, San Pedro cactus, Tabernanthe iboga, and various ingredients for Ayahuasca preparations. As of 1 December 2008, magic mushrooms are under stricter control in the Netherlands. Those new controls are quite controversial, because the list of banned mushrooms also contains species that have no psychoactive substances. Magic Mushroom spore prints and grow boxes are still available over the counter in the Netherlands. Psilocybin is not included in the ban and continues to be sold in smart shops nationwide in truffle form.
Typical products specialization:
The decline of designer drugs Smart shops in various countries have been known in the past to sell designer drugs: that is, synthetic substances that were not (yet) illegal. The sale of synthetic drugs not explicitly approved as food, supplements or medicines is illegal in some of them. For example, in the Netherlands it is dealt with by the relatively benign machinery of the Warenautoriteit (Commodities Authority) rather than in criminal law, as would be the case with controlled substances.
Typical products specialization:
Yet, this has made it effectively impossible to sell them in a formal retail setting, even if their production and possession is entirely legitimate. Smart shops have attempted no further marketing of synthetics since they tried to sell methylone as a "room odorizer" but were ultimately forced to pull it from their shelves in 2004, though it can still be obtained under the counter in some shops.
Typical products specialization:
Drug paraphernalia Smart shops sell many products that can be seen as complement goods to psychoactive drugs, including illegal ones. In the Netherlands, which has no drug paraphernalia laws, this is entirely legal. In particular, the sales of literature about illegal drugs or their manufacture is rarely criticized and protected by a traditional concern for free speech in local law and custom that is more pronounced than in other European nations.
Typical products specialization:
Many of the paraphernalia and complements sold in smart shops reduce, in one way or another, the harm associated with illegal drugs. For instance, reagent kits for testing the purity of ecstasy can be essential now that tablets named ecstasy can in practice contain just about anything, and often do not, in fact, contain MDMA at all. Supplements of vitamins and amino acids have been developed to mitigate specifically the damage of certain illegal drugs. Tryptophan and 5-hydroxy-tryptophan, for instance, can be used to help the body replenish serotonin levels in the brain after the use of MDMA, and vitamin supplements are appropriate for users of stimulants such as amphetamine. Vitamin B12 is depleted by recreational use of nitrous oxide, and is thereby useful.
Typical products specialization:
Smart shop is distinguished from head shops found in many countries. Head shops provide only paraphernalia, whereas smart shops usually sell at least some actual drugs. The term head shop is more common in the UK, though many British head shops sold magic mushrooms until July 2005 when the Government introduced a complete ban on magic mushrooms, putting them in the same category as heroin and crack cocaine. Many of the British head shops still sell a range of other legal highs.
Education and information:
Smart shops have become a natural source of information about the drugs they sell. They commonly provide instruction leaflets similar to the package inserts distributed with prescription drugs, which contain information on contra-indications, side effects, and the importance of set and setting. In the Netherlands, there is relatively little formal regulation of the smart shop industry, but the natural concentration of expertise about a relatively exotic range of products in combination with the realization that closer public scrutiny and regulation are always lurking in the background have caused the smart shops to organize into an industry association that, among other things, promotes the spread of information about its wares.
Legality:
The Netherlands Legally, smart shops operate under a decision of the Hoge Raad (Supreme Court) that has declared that unprepared mushrooms and cacti are not considered "preparations" of the substances they contain, and are therefore not banned under the Opium Act or international law even if their active ingredients are.
There are some shops from the Netherlands that operate as both a smart shop and a head shop on an international level. Customers are expected to accept the responsibility to inform themselves about the local laws, import and custom regulations before ordering and to certify that the import to their country of the products ordered is legal.
As of December 1, 2008, the sale of magic mushrooms was subject to tighter control in the Netherlands.
Legality:
This legal regime is markedly different from the one that applies to cannabis products. Those are formally illegal under the Opium Act and international law, which explicitly bans the plant rather than the cannabinoids in it. Cannabis products such as marijuana and hashish can be sold and possessed only pursuant to a web of executive orders more-or-less silently assented to by parliament. The sale of magic mushrooms, on the other hand, was entirely legal and subject only to the common regulation of foodstuffs by the Warenautoriteit (Commodities Authority).
Legality:
Republic of Ireland In the Republic of Ireland, there was a sharp rise in Smart shops around the Celtic Tiger era, however, given new government legislations against psychoactive substances, Smart shops that do still operate in the Republic Of Ireland have become shops for paraphernalia and growing equipment, more comparable to a Head Shop.
Legality:
UK As with UK based head shops, both paraphernalia and "legal highs" are available from these stores, such as Salvia Divinorum-based products designed to simulate illegal drug highs such as those experienced through the use of amphetamine [speed], methamphetamine, and psychedelics [psilocybin]. Magic mushrooms were available until the government closed a loophole, effectively banning the sale of raw or prepared magic mushrooms in January 2006.
Legality:
Since the passing of the Psychoactive Substances Act 2016, the sale of any chemical substance which alters or affects mental functioning in any way is illegal. This has effectively rendered the term "smartshop" obsolete in the UK.
Legality:
Portugal In Portugal, prior to March 2013, the drug laws were very liberal, and several smartshops were opened. A chain store, called Magic Mushroom, emerged as the market leader. Shops in Portugal still sell all type of herbal incense and plant feeders. In March 2013, the Portuguese Government enacted a law making it illegal to sell psychoactive drugs, thus ending the smartshop business in the country. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Composition of electronic cigarette aerosol**
Composition of electronic cigarette aerosol:
The chemical composition of the electronic cigarette aerosol varies across and within manufacturers. Limited data exists regarding their chemistry. However, researchers at Johns Hopkins University analyzed the vape clouds of popular brands such as Juul and Vuse, and found "nearly 2,000 chemicals, the vast majority of which are unidentified."The aerosol of e-cigarettes is generated when the e-liquid comes in contact with a coil heated to a temperature of roughly 100–250 °C (212–482 °F) within a chamber, which is thought to cause pyrolysis of the e-liquid and could also lead to decomposition of other liquid ingredients. The aerosol (mist) produced by an e-cigarette is commonly but inaccurately called vapor. E-cigarettes simulate the action of smoking, but without tobacco combustion. The e-cigarette aerosol looks like cigarette smoke to some extent. E-cigarettes do not produce aerosol between puffs. The e-cigarette aerosol usually contains propylene glycol, glycerin, nicotine, flavors, aroma transporters, and other substances. The levels of nicotine, tobacco-specific nitrosamines (TSNAs), aldehydes, metals, volatile organic compounds (VOCs), flavors, and tobacco alkaloids in e-cigarette aerosols vary greatly. The yield of chemicals found in the e-cigarette aerosol varies depending on, several factors, including the e-liquid contents, puffing rate, and the battery voltage.Metal parts of e-cigarettes in contact with the e-liquid can contaminate it with metals. Heavy metals and metal nanoparticles have been found in tiny amounts in the e-cigarette aerosol. Once aerosolized, the ingredients in the e-liquid go through chemical reactions that form new compounds not previously found in the liquid. Many chemicals, including carbonyl compounds such as formaldehyde, can inadvertently be produced when the nichrome wire (heating element) that touches the e-liquid is heated and chemically reacted with the liquid. Propylene glycol-containing liquids produced the most amounts of carbonyls in e-cigarette vapors, while in 2014 most e-cigarettes companies began using water and glycerin instead of propylene glycol for vapor production.Propylene glycol and glycerin are oxidized to create aldehydes that are also found in cigarette smoke when e-liquids are heated and aerosolized at a voltage higher than 3 V. Depending on the heating temperature, the carcinogens in the e-cigarette aerosol may surpass the levels of cigarette smoke. Reduced voltage e-cigarettes generate very low levels of formaldehyde. A Public Health England (PHE) report found "At normal settings, there was no or negligible formaldehyde release." However, this statement was contradicted by other researchers in a 2018 study. E-cigarettes can emit formaldehyde at high levels (between five and 15 times higher than what is reported for cigarette smoke) at moderate temperatures and under conditions that have been reported to be non-averse to users. As e-cigarette engineering evolves, the later-generation and "hotter" devices could expose users to greater amounts of carcinogens.
Background:
There is a debate on the composition, and the subsequent health burden, of tobacco smoke compared with electronic cigarette vapor. Tobacco smoke is a complex, dynamic and reactive mixture containing around 5,000 chemicals. In 2021, researchers at Johns Hopkins University analyzed the vape aerosols of popular brands such as Juul and Vuse, and found "nearly 2,000 chemicals, the vast majority of which are unidentified." E-cigarette vapor contains many of the known harmful toxicants found in traditional cigarette smoke, such as formaldehyde, cadmium, and lead, though usually at a reduced percentage.There are substances in e-cigarette vapor that are not found in tobacco smoke. Researchers are part of the conflict, with some opposing and others supporting of e-cigarette use. The public health community is divided, even polarized, over how the use of these devices will impact the tobacco epidemic. Proponents of e-cigarettes think that these devices contain merely "water vapour" in the e-cigarette aerosols, but this view is refuted by the evidence.
Particulate matter:
Pathogens E-liquid used in e-cigarettes have been found to be contaminated with fungi and bacteria. Nicotine-containing e-liquids are extracted from tobacco that may contain impurities. Tobacco-specific impurities such as cotinine, nicotine-N'-oxides (cis and trans isomers), and beta-nornicotyrine are believed to be the result of bacterial action or oxidation during the extracting of nicotine from tobacco.
Re-used vapes, and vape sharing Bacterial pneumonia.
Fungal pneumonia Viral pneumonia from vape sharing.SARS-CoV-2: Shared vaping devices are linked to COVID-19.
Particulate matter:
Chemicals E-cigarette components include a mouthpiece, a cartridge (liquid storage area), a heating element/atomizer, a microprocessor, a battery, and some of them have a LED light at the tip. They are disposable or reusable devices. Disposable ones are not rechargeable and typically cannot be refilled with a liquid. There are a diverse range of disposable and reusable devices, resulting in broad variations in their structure and their performance. Since many devices include interchangeable components, users have the ability to alter the nature of the inhaled vapor.For the majority of e-cigarettes many aspects are similar to their traditional counterparts such as giving nicotine to the user. E-cigarettes simulates the action of smoking, with a vapor that looks like cigarette smoke to some extent. E-cigarettes do not involve tobacco combustion, and they do not produce vapor between puffs. They do not produce sidestream smoke or sidestream vapor.Vapor production basically entails preprocessing, vapor generation, and postprocessing. First, the e-cigarette is activated by pressing a button or other devices switch on by an airflow sensor or other type of trigger sensor. Then, power is released to an LED, other sensors, and other parts of the device, and to a heating element or other kind of vapor generator. Subsequently, the e-liquid flows by capillary action to the heating element or other devices to the e-cigarette vapor generator. Second, the e-cigarette vapor processing entails vapor generation.The e-cigarette vapor is generated when the e-liquid is vaporized by the heating element or by other mechanical methods. The last step of vapor processing happens as the e-cigarette vapor passes through the main air passage to the user. For some advanced devices, before inhaling, the user can adjust the heating element temperature, air flow rate or other features. The liquid within the chamber of e-cigarette is heated to roughly 100-250 °C to create an aerosolized vapor. This is thought to result in pyrolysis of the e-liquid and could also lead to decomposition of other liquid ingredients. The aerosol (mist) produced by an e-cigarette is commonly but inaccurately called vapor. In physics, a vapor is a substance in the gas phase whereas an aerosol is a suspension of tiny particles of liquid, solid or both within a gas.The power output of the e-cigarette is correlated to the voltage and resistance (P = V2/R, in watts), which is one aspect that impacts the production and the amount of toxicants of e-cigarette vapors. The power generated by the heating coil is not based solely on the voltage because it also relies upon the current, and the resultant temperature of the e-liquid relies upon the power output of the heating element. The production of vapor also relies upon the boiling point of the solvent. Propylene glycol boils at 188 °C, while glycerin boils at 290 °C. The higher temperature reached by glycerin may impact the toxicants emitted by the e-cigarette. The boiling point for nicotine is 247 °C. Each e-cigarette company's designs generate different amounts of heating power.The evidence indicates that larger capacity tanks, increasing the coil temperature, and dripping configurations seem to be end user modified designs adopted by e-cigarette companies. Variable voltage e-cigarettes can raise the temperature within the device to allow users to adjust the e-cigarette vapor. No firm information is available on the temperature differences in variable voltage devices. The length of time that the e-cigarette vapor is being heated within the device also affects the e-cigarette vapor properties. When the temperature of the heating element rises, the temperature of the e-cigarette vapor in the air rises. The hotter air can support more e-liquid air density.E-cigarettes have a wide array of engineering designs. The differences in e-cigarette manufacturing materials are broad and often unknown. Concern exists over lack of quality control. E-cigarette companies often lack manufacturing standards or are non-existent. Some e-cigarettes are designed and manufactured to a high standard. The manufactured standards of e-cigarettes are not equivalent to pharmaceutical products. Improved manufacturing standards could reduce the levels of metals and other chemicals found in e-cigarette vapor. Quality control is influenced by market forces.The engineering designs typically affect the nature, number, and size of particles generated. High amounts of vapor particle deposition are believed to enter into the lungs with each puff because the particle size in e-cigarette vapors is within the respiratory range. After a puff, the inhaled vapor changes in the size distributions of particles in the lungs. This results in smaller exhaled particles. E-cigarette vapor is made up of fine and ultrafine particles of particulate matter. Vaping generates particulate matter 2.5 μm or less in diameter (PM2.5), but at notably less concentrations compared to cigarette smoke. Particle concentrations from vaping ranged from 6.6 to 85.0 μg/m3.Particle-size distributions of particulate matter from vaping differ across studies. The longer the puff duration the greater the amount of particles produced. The greater the amount of nicotine in the e-liquid the greater the amount of particles produced. Flavoring does not influence the particle emissions. The various kinds of devices such as cig-a-likes, medium-sized vaporizers, tanks, or mods may function at different voltages and temperatures. Thus, the particle size of the e-cigarette vapor can vary, due to the device used. Comparable to cigarette smoke, the particle size distribution mode of e-cigarette vapor ranged from 120 to 165 nm, with some vaping devices producing more particles than cigarette smoke.
Particulate matter:
Ingredients Exactly what the e-cigarette vapor consists of varies in composition and concentration across and within manufacturers. Limited data exists regarding their chemistry. The e-cigarette vapor usually contains propylene glycol, glycerin, nicotine, flavors, aroma transporters, and other substances. The levels of solvents and flavors are not provided on the labels of e-liquids, according to many studies.The yield of chemicals found in the e-cigarette vapor varies depending on, several factors, including the e-liquid contents, puffing rate, and the battery voltage. A 2017 review found that "Adjusting battery wattage or the inhaled airflow modifies the amount of vapor and chemical density in each puff." A high amount of e-liquid contains propylene glycol and/or glycerin.Limited but consistent data indicates that flavoring agents are at levels above the National Institute for Occupational Safety and Health safety limit. High amounts of flavoring agents have been found in e-cigarette vapors.The main chemical found in the e-cigarette vapor was propylene glycol. A 2013 study, under close to real-life conditions in an emission test chamber, using a test subject who took six forceful puffs from an e-cigarette, resulted in a high level of propylene glycol released into the air. The next greatest amount in the e-cigarette vapor was nicotine.Cig-a-likes are usually first-generation e-cigarettes, tanks are commonly second-generation e-cigarettes, tanks that let vapers adjust the voltage setting are third-generation e-cigarettes, and tanks that have the ability for sub ohm (Ω) vaping and to set temperature control limits are fourth-generation devices. Vaping nicotine using e-cigarettes differs from smoking traditional cigarettes in many ways. First-generation e-cigarettes are often designed to simulate smoking traditional cigarettes; they are low-tech vaporizers with a limited number of settings. First-generation devices usually deliver a smaller amount nicotine. Second-generation and third-generation e-cigarettes use more advanced technology; they have atomizers (i.e., heating coils that convert e-liquids into vapor) which improve nicotine dispersal and house high capacity batteries.Third-generation and fourth-generation devices represent a diverse set of products and, aesthetically, constitute the greatest departure from the traditional cigarette shape, as many are square or rectangular and feature customizable and rebuildable atomizers and batteries. Cartomizers are similar in design to atomizers; their main difference is a synthetic filler material wrapped around the heating coil. Clearomizers are now commonly available and similar to cartomizers, but they include a clear tank of a larger volume and no filler material; additionally they have a disposable head containing the coil(s) and wicks. Vaping enthusiasts often begin with a cig-a-like first-generation device and tend to move towards using a later-generation device with a larger battery.Cig-a-likes and tanks are among the most popular devices. But tanks vaporize nicotine more effectively, and there are a greater selection of flavors and levels of nicotine, and are usually used by experienced users. Under five minutes of cig-a-like vaping, blood nicotine levels can elevate to about 5 ng/ml, while under 30 minutes of using 2 mg of nicotine gum, blood nicotine levels ranged from 3–5 ng/ml. Under five minutes of using tank systems by experienced vapers, the elevation in blood nicotine level can be 3–4 times greater.Many devices lets the user use interchangeable components, which result in variations in the e-cigarette vaporized nicotine. One of the primary features of the more recent generation of devices is that they contain larger batteries and are capable of heating the liquid to a higher temperature, potentially releasing more nicotine, forming additional toxicants, and creating larger clouds of particulate matter. A 2017 review found "Many e-cig users prefer to vape at high temperatures as more aerosol is generated per puff. However, applying a high voltage to a low-resistance heating coil can easily heat e-liquids to temperatures in excess of 300 °C; temperatures sufficient to pyrolyze e-liquid components." The nicotine levels in the e-cigarette vapor greatly varies across companies. The nicotine levels in the e-cigarette vapor also varies greatly either from puff-to-puff or among devices of the same company. Nicotine intake across users using same device or liquid varies substantially. Puffing characteristics differ between smoking and vaping. Vaping typically require more 'suck' than cigarette smoking. Factors that influence the level of blood nicotine concentrations include nicotine content in a device; how well the nicotine is vapored from the liquid reservoir; and additives that may contribute to nicotine intake. Nicotine intake from vaping also relies upon the habits of the user.Other factors that influence nicotine intake include engineering designs, battery power, and vapor pH. For instance, some e-cigarettes have e-liquids that contain amounts of nicotine comparable to other companies, though the e-cigarette vapor contains far less amounts of nicotine. Puffing behavior substantially varies. New e-cigarette users tend to take shorter puffs than experienced users which may result in less nicotine intake. Among experienced users there is a wide range in puffing time. Some experienced users may not adapt to increase their puffing time. Inexperienced users vape less forcefully than experienced users.E-cigarettes share a common design, but construction variations and user alterations generate varied nicotine delivery. Lowering the heater resistance probably increases the nicotine concentration. Some 3.3 V vaping devices using low-resistance heating elements such as an ohm of 1.5, containing 36 mg/mL liquid nicotine can obtain blood nicotine levels after 10 puffs that may be higher than with traditional cigarettes. A 2015 study evaluated "a variety of factors that can influence nicotine yield and found that increasing power output from 3 to 7.5 W (an approximately 2.5-fold increase), by increasing the voltage from 3.3 to 5.2 V, led to an approximately 4- to 5-fold increase in nicotine yield." A 2015 study, using a model to approximate indoor air workplace exposure, anticipates greatly reduced exposure to nicotine from e-cigarettes than traditional cigarettes.A 2016 World Health Organization (WHO) report found "nicotine in SHA [second-hand aerosol] has been found between 10 and 115 times higher than in background air levels." A 2015 Public Health England (PHE) report concluded that e-cigarettes "release negligible levels of nicotine into ambient air". A 2016 Surgeon General of the United States report stated that the exposure to nicotine from e-cigarette vaping is not negligible and is higher than in non-smoking environments. Vaping generates more surrounding air levels of particulate matter and nicotine in indoor areas than background air levels. Extended indoor e-cigarette use in rooms that are not sufficiently ventilated could surpass occupational exposure limits to the inhaled metals.The e-cigarette vapor may also contain tiny amounts of toxicants, carcinogens, and heavy metals. The majority of toxic chemicals found in e-cigarette vapor are below 1% of the corresponding levels permissible by workplace exposure standards, but the threshold limit values for workplace exposure standards are generally much higher than levels considered satisfactory for outdoor air quality. Some chemicals from exposures to the e-cigarette vapor could be higher than workplace exposure standards. A 2018 PHE report stated that the toxicants found in e-cigarette vapor are less than 5% and the majority are less than 1% in comparison with traditional cigarettes.Although several studies have found lower levels of carcinogens in e-cigarette aerosol compared to smoke emitted by traditional cigarettes, the mainstream and second-hand e-cigarette aerosol has been found to contain at least ten chemicals that are on California's Proposition 65 list of chemicals known to cause cancer, birth defects, or other reproductive harm, including acetaldehyde, benzene, cadmium, formaldehyde, isoprene, lead, nickel, nicotine, N-Nitrosonornicotine, and toluene. Free radicals produced from frequent e-cigarette use is estimated to be greater than compared to air pollution. E-cigarette vapor can contain a range of toxicants, and since they have been be used in methods unintended by the producer such as dripping or mixing liquids, this could result in generating greater levels of toxicants."Dripping", where the liquid is dripped directly onto the atomizer, could yield a higher level of nicotine when the liquid contains nicotine, and also a higher level of chemicals may be generated from heating the other contents of the liquid, including formaldehyde. Dripping may result in higher levels of aldehydes. Considerable pyrolysis might occur during dripping. Emissions of certain compounds increased over time during use as a result of increased residues of polymerization by-products around the coil. As the devices age and get dirty, the constituents they produce may become different. Proper cleaning or more routine replacement of coils may lower emissions by preventing buildup of residual polymers.
Particulate matter:
E-liquid carrying agents Glycerin and/or propylene glycol is used in liquid vapes. Vapes for cloud-chasing usually don't contain other ingredients.
Glycerin Glycerin (often called vegetable glycerin, or VG) was long thought to be a safe option. However, the carcinogen formaldehyde is known as an impurity found in propylene glycol and glycerol vapor degradation.
Propylene glycol Propylene glycol (often referred to as PG).
Misc MCT oil Flavoring Flavoring are often added to e-liquids as well as dry smoke blends. There are currently over 7,700 e-liquid flavors available, most have not been laboratory tested for toxicity.There are numerous flavors (e.g., fruit, vanilla, caramel, coffee) of e-liquid available. There are also flavorings that resemble the taste of cigarettes.
Psychoactive substances Cannabinoids CBD is common in vape products. Vaped or smoked CBD heated to 250-300 C will partially be converted to THC. CBD is one among the most suspected ingredients involved in VAPI.Synthetic cannabinoids are increasingly offered in e-cigarette form as "c-liquid".
Particulate matter:
Nicotine E-liquids were purchased from retailers and via online for a 2013 study. The Royal College of General Practitioners stated in 2016 that "To date 42 chemicals have been detected in ENDS aerosol – though with the ENDS market being unregulated there is significant variation between devices and brands."E-liquid nicotine concentrations vary. The amount of nicotine stated on the labels of e-liquids can be very different from analyzed samples. Some e-liquids sold as nicotine-free contained nicotine, and some of them were at substantial levels. The analyzed liquids nicotine levels were between 14.8 and 87.2 mg/mL and the actual amount varied from the stated amount by as much as 50%.Possibly, 60–70% of the nicotine is vaporized. E-cigarettes without nicotine is also available. Via nicotine-containing e-cigarettes, nicotine is absorbed through the upper and lower respiratory tract. A greater amount of nicotine is possibly absorbed through oral mucosa and upper airways. The composition of the e-liquid may affect nicotine delivery. E-liquid containing glycerin and propylene glycol delivers nicotine more efficiently than a glycerin-based liquid with the same amount of nicotine. It is believed that propylene glycol vaporizes quicker than glycerin, which subsequently transports a higher amount of nicotine to the user.Vaping appears to give less nicotine per puff than cigarette smoking. Early devices typically delivered low amounts of nicotine than that of traditional cigarettes, but newer devices containing a high amount of nicotine in the liquid may deliver nicotine at amounts similar to that of traditional cigarettes. Similar to traditional cigarettes, e-cigarettes rapidly delivers nicotine to the brain. The peak concentration of nicotine delivered by e-cigarettes is comparable to that of traditional cigarettes. E-cigarettes take longer to reach peak concentration than with traditional cigarettes, but they provide nicotine to the blood quicker than nicotine inhalers. The yield of nicotine users obtain is similar to that of nicotine inhalers.Newer e-cigarette models deliver nicotine to the blood quicker than with older devices. E-cigarettes with more powerful batteries can delivery a higher level of nicotine in the e-cigarette vapor. Some research indicates that experienced e-cigarette users can obtain nicotine levels similar to that of smoking. Some vapers can obtain nicotine levels comparable to smoking, and this ability generally improves with experience. E‐cigarettes users still may be able to obtain similar blood nicotine levels compared with traditional cigarettes, particularly with experienced smokers, but it takes more time to obtain such levels.
Particulate matter:
By-products Metals and other content A 2020 systematic review found aluminum, antimony, arsenic, cadmium, cobalt, chromium, copper, iron, lead, manganese, nickel, selenium, tin, and zinc, possibly due to coil contact.Metal parts of e-cigarettes in contact with the e-liquid can contaminate it. The temperature of the atomizer can reach up to 500 °F. The atomizer contains metals and other parts where the liquid is kept, and an atomizer head is made of a wick and metal coil which heats the liquid. Due to this design, some metals are potentially found in the e-cigarette vapor. E-cigarette devices differ in the amount of metals in the e-cigarette vapor. This may be associated with the age of various cartridges, and also what is contained in the atomizers and coils.Usage behavior may contribute to variations in the specific metals and amounts of metals found in e-cigarette vapor. An atomizer made of plastics could react with e-liquid and leach plasticizers. The amounts and kinds of metals or other materials found in the e-cigarette vapor is based on the material and other manufacturing designs of the heating element. E-cigarettes devices can be made with ceramics, plastics, rubber, filament fibers, and foams, of which some can be found in the e-cigarette vapor.E-cigarette parts, including exposed wires, wire coatings, solder joints, electrical connectors, heating element material, and vitreous fiber wick material, account for the second significant source of substances, to which users may be exposed. Metal and silicate particles, some of which are at higher levels than in traditional cigarettes, have been detected in e-cigarette aerosol, resulting from degradation from the metal coil used to heat the solution. Other materials used are Pyrex glass rather than plastics and stainless steel rather than metal alloys.Metals and metal nanoparticles have been found in tiny amounts in e-cigarette vapor. Aluminum, antimony, barium, boron, cadmium, chromium, copper, iron, lanthanum, lead, magnesium, manganese, mercury, nickel, potassium, silicate, silver, sodium, strontium, tin, titanium, zinc, and zirconium have been found in e-cigarette vapor. Arsenic may leach from the device itself and may end up in the liquid, and then the e-cigarette vapor. Arsenic has been found in some e-liquids, and in e-cigarette vapor.Considerable differences in exposure to metals have been identified from the e-cigarettes tested, particularly metals such as cadmium, lead, and nickel. Poor quality first-generation e-cigarettes produce several metals in their vapor, in some cases the amounts were greater than with cigarette smoke. A 2013 study found metal particles in the e-cigarette vapor were at concentrations 10-50 times less than permitted in inhalation medicines.A 2018 study found significantly higher amounts of metals in e-cigarette vapor samples in comparison with the e-liquids before they came in contact with the customized e-cigarettes that were provided by everyday e-cigarette users. Lead and zinc were 2,000% higher and chromium, nickel, and tin were 600% higher. The e-cigarette vapor levels for nickel, chromium, lead, manganese surpassed occupational or environmental standards for at least 50% of the samples. The same study found 10% of the e-liquids tested contained arsenic and the amounts remained about the same as the e-cigarette vapor.The average amounts of exposure to cadmium from 1,200 e-cigarette puffs were found to be 2.6 times lower than the chronic Permissible Daily Exposure from inhalation medications, outlined by the US Pharmacopeia. One sample tested resulted in daily exposure 10% greater than chronic PDE from inhalation medications, while in four samples the amounts were comparable to outdoor air levels. Cadmium and lead have been found in the e-cigarette vapor at 2–3 times greater levels than with a nicotine inhaler. A 2015 study stated the amount of copper have been found to be six times greater than with cigarette smoke. A 2013 study stated the levels of nickel have been found to be 100 times higher than cigarette smoke.A 2014 study stated the levels of silver have been found to be at a greater amount than with cigarette smoke. Increased amounts of copper and zinc in vapor generated by some e-cigarettes may be the result of corrosion on the brass electrical connector as indicated in particulates of copper and zinc in e-liquid. In addition, a tin solder joint may be subjected to corrosion, which may result in increased amounts of tin in some e-liquids.Generally low levels of contaminates may include metals from the heating coils, solders, and wick. The metals nickel, chromium, and copper coated with silver have been used to make the normally thin-wired e-cigarette heating elements. The atomizers and heating coils possibly contain aluminum. They likely account for most of the aluminum in the e-cigarette vapor. The chromium used to make the atomizers and heating coils is probably the origin of the chromium. Copper is commonly used to make atomizers. Atomizers and heating coils commonly contain iron.Cadmium, lead, nickel, and silver originated from the heating element. Silicate particles may originate from the fiberglass wicks. Silicate nanoparticles have been found in vapors generated from the fiberglass wicks. Tin may originate from the e-cigarette solder joints. Nickel potentially found in the e-cigarette vapor may originate from the atomizer and heating coils. The nanoparticles can be produced by the heating element or by pyrolysis of chemicals directly touching the wire surface.Chromium, iron, tin, and nickel nanoparticles potentially found in the e-cigarette vapor can originate from the e-cigarette heating coils. Kanthal and nichrome are frequently used heating coils which may account for chromium and nickel in the e-cigarette vapor. Metals can originate from the "cartomizer" from the later-generation devices where an atomizer and cartridge are constructed into one unit. Metal and glass particles can be created and vaporized because of the heating of the liquid with glass fiber.
Particulate matter:
Solutions Metal coils coated with microporous ceramic have been developed to protect against oxidation of metals.
Comparison of levels of metals in e-cigarette aerosol Abbreviations: EC, electronic cigarette; NM, not measured.
Particulate matter:
∗The findings are a comparison between e-cigarette daily usage and the regulatory limits of chronic Permissible Daily Exposure from inhalation medications outlined by the US Pharmacopeia for cadmium, chromium, copper, lead and nickel, the Minimal Risk Level outlined by the Agency for Toxic Substances and Disease Registry for manganese and the Recommended Exposure Limit outlined by the National Institute for Occupational Safety and Health for aluminum, barium, iron, tin, titanium, zinc and zirconium, referring to a daily inhalation volume of 20 m3 air and a 10-h volume of 8.3 m3; values are in μg.
Particulate matter:
Carbonyls and other content E-cigarette makers do not fully disclose information on the chemicals that can be released or synthesized during use. The chemicals in the e-cigarette vapor can be different than with the liquid. Once vaporized, the ingredients in the e-liquid go through chemical reactions that form new compounds not previously found in the liquid. Many chemicals including carbonyl compounds such as formaldehyde, acetaldehyde, acrolein, and glyoxal can inadvertently be produced when the nichrome wire (heating element) that touches the e-liquid is heated and chemically reacted with the liquid. Acrolein and other carbonyls have been found by in e-cigarette vapors that were created by unmodified e-cigarettes, indicating that formation of these compounds could be more common than previously thought.A 2017 review found "Increasing the battery voltage from 3.3 V to 4.8 V doubles the amount of e-liquid vapourized and increases the total aldehyde generation more than threefold, with acrolein emission increasing tenfold." A 2014 study stated that "increasing the voltage from 3.2–4.8 V resulted in a 4 to >200 times increase in the formaldehyde, acetaldehyde, and acetone levels". The amount of carbonyl compounds in e-cigarette aerosols varies substantially, not only among different brands but also among different samples of the same products, from 100-fold less than tobacco to nearly equivalent values.The propylene glycol-containing liquids produced the most amounts of carbonyls in e-cigarette aerosols. Propylene glycol could turn into propylene oxide when heated and aerosolized. Glycerin may generate acrolein when heated at hotter temperatures. Some e-cigarette products had acrolein identified in the e-cigarette vapor, at greatly lower amounts than in cigarette smoke. Several e-cigarette companies have replaced glycerin and propylene glycol with ethylene glycol. In 2014, most e-cigarettes companies began to use water and glycerin as replacement for propylene glycol.In 2015, manufacturers attempted to reduce the formation of formaldehyde and metal substances of the e-cigarette vapor by producing an e-liquid in which propylene glycol is replaced by glycerin. Acetol, beta-nicotyrine, butanal, crotonaldehyde, glyceraldehyde, glycidol, glyoxal, dihydroxyacetone, dioxolanes, lactic acid, methylglyoxal, myosmine, oxalic acid, propanal, pyruvic acid, and vinyl alcohol isomers have been found in the e-cigarette vapor. Hydroxymethylfurfural and furfural have been found in the e-cigarette vapors. The amounts of furans in the e-cigarette vapors were highly associated with power of the e-cigarette and amount of sweetener.The amount of carbonyls vary greatly among different companies and within various samples of the same e-cigarettes. Oxidants and reactive oxygen species (OX/ROS) have been found in the e-cigarette vapor. OX/ROS could react with other chemicals in the e-cigarette vapor because they are highly reactive, causing alterations its chemical composition. E-cigarette vapor have been found to contain OX/ROS at about 100 times less than with cigarette smoke. A 2018 review found e-cigarette vapor containing reactive oxygen radicals seem to be similar to levels in traditional cigarettes. Glyoxal and methylglyoxal found in e-cigarette vapors are not found in cigarette smoke.
Particulate matter:
Contamination with various chemicals have been identified. Some products contained trace amounts of the drugs tadalafil and rimonabant. The amount of either of these substances that is able to transfer from liquid to vapor phase is low.The nicotine impurities in the e-liquid varies greatly across companies. The levels of toxic chemicals in e-cigarette vapor is in some cases similar to that of nicotine replacement products. Tobacco-specific nitrosamines (TSNAs) such as nicotine-derived nitrosamine ketone (NNK) and N-Nitrosonornicotine (NNN) and tobacco-specific impurities have been found in the e-cigarette vapor at very low levels, comparable to amounts found in nicotine replacement products. A 2014 study that tested 12 e-cigarette devices found that most of them contained tobacco-specific nitrosamines in the e-cigarette vapor. In contrast, the one nicotine inhaler tested did not contain tobacco-specific nitrosamines.N-Nitrosoanabasine and N'-Nitrosoanatabine have been found in the e-cigarette vapor at lower levels than cigarette smoke. Tobacco-specific nitrosamines (TSNAs), nicotine-derived nitrosamine ketone (NNK), N-Nitrosonornicotine (NNN), and N′-nitrosoanatabine have been found in the e-cigarette vapor at different levels between different devices. Since e-liquid production is not rigorously regulated, some e-liquids can have amounts of impurities higher compared to limits for pharmaceutical-grade nicotine products.m-Xylene, p-Xylene, o-Xylene, ethyl acetate, ethanol, methanol, pyridine, acetylpyrazine, 2,3,5-trimethylpyrazine, octamethylcyclotetrasiloxane, catechol, m-Cresol, and o-Cresol have been found in the e-cigarette vapor. A 2017 study found that "The maximum detected concentrations of benzene, methanol, and ethanol in the samples were higher than their authorized maximum limits as residual solvents in pharmaceutical products." Trace amounts of toluene and xylene have been found in the e-cigarette vapor.Polycyclic aromatic hydrocarbons (PAHs), aldehydes, volatile organic compounds (VOCs), phenolic compounds, flavors, tobacco alkaloids, o-Methyl benzaldehyde, 1-Methyl phenanthrene, anthracene, phenanthrene, pyrene, and cresol have been found in the e-cigarette vapor. While the cause of these differing concentrations of minor tobacco alkaloids is unknown, Lisko and colleagues (2015) speculated potential reasons may derive from the e-liquid extraction process (i.e., purification and manufacturing) used to obtain nicotine from tobacco, as well as poor quality control of e-liquid products. In some studies, small quantities of VOCs including styrene have been found in the e-cigarette vapor. A 2014 study found the amounts of PAHs were above specified safe exposure limits.Low levels of isoprene, acetic acid, 2-butanodione, acetone, propanol, and diacetin, and traces of apple oil (3-methylbutyl-3-methylbutanoate) have been found in the e-cigarette vapor. Flavoring substances from roasted coffee beans have been found in the e-cigarette vapor. The aroma chemicals acetamide and cumarine have been found in the e-cigarette vapor. Acrylonitrile and ethylbenzene have been found in the e-cigarette vapor. Benzene and 1,3-Butadiene have been found in the e-cigarette vapor at many-fold lower than in cigarette smoke.Some e-cigarettes contain diacetyl and acetaldehyde in the e-cigarette vapor. Diacetyl and acetylpropionyl have been found at greater levels in the e-cigarette vapor than is accepted by the National Institute for Occupational Safety and Health, although diacetyl and acetylpropionyl are normally found at lower levels in e-cigarettes than with traditional cigarettes. A 2018 PHE report stated that diacetyl was identified at hundreds of times in lesser amounts than found in cigarette smoke. A 2016 WHO report found that acetaldehyde from second-hand vapor was between two and eight times greater compared to background air levels.
Particulate matter:
Formaldehyde A 2016 WHO report found that formaldehyde from second-hand vapor was around 20% greater compared to background air levels. Normal usage of e-cigarettes generates very low levels of formaldehyde. Different power settings reached significant differences in the amount of formaldehyde in the e-cigarette vapor across different devices. Later-generation e-cigarette devices can create greater amounts of carcinogens. Some later-generation e-cigarettes let users increase the volume of vapor by adjusting the battery output voltage.Depending on the heating temperature, the carcinogens in the e-cigarette vapor may surpass the levels of cigarette smoke. E-cigarettes devices using higher voltage batteries can produce carcinogens including formaldehyde at levels comparable to cigarette smoke. The later-generation and "tank-style" devices with higher voltages (5.0 V) could produce formaldehyde at comparable or greater levels than in cigarette smoke.A 2015 study hypothesized from the data that at high voltage (5.0 V), a user, "vaping at a rate of 3 mL/day, would inhale 14.4 ± 3.3 mg of formaldehyde per day in formaldehyde-releasing agents." The 2015 study used a puffing machine showed that a third-generation e-cigarette turned on to the maximum setting would create levels of formaldehyde between five and 15 times greater than with cigarette smoke. A 2015 PHE report found that high levels of formaldehyde only occurred in overheated "dry-puffing", and that "dry puffs are aversive and are avoided rather than inhaled", and "At normal settings, there was no or negligible formaldehyde release."A 2018 study confirmed e-cigarettes can emit formaldehyde at high levels more than 5 times higher than what is reported for cigarette smoke) at moderate temperatures and under conditions that have been reported to be non-averse to users. But e-cigarette users may "learn" to overcome the unpleasant taste due to elevated aldehyde formation, when the nicotine craving is high enough. High voltage e-cigarettes are capable of producing large amounts of carbonyls. Reduced voltage (3.0 V) e-cigarettes had e-cigarette aerosol levels of formaldehyde and acetaldehyde roughly 13 and 807-fold less than with cigarette smoke.
Chemical analysis of e-cigarette cartridges, solutions, and aerosol:
Abbreviations: TSNA, tobacco specific nitrosoamines; LC-MS, liquid chromatography-mass spectrometry; MAO-A and B, monoamineoxidase A and B; PAH, polycyclic aromatic hydrocarbons; GS-MS, gas chromatography – mass spectrometry; ICP-MS, inductively coupled plasma – mass spectrometry; CO, carbon monoxide, VOC, volatile organic compounds; UPLC-MS, ultra-performance liquid chromatography-mass spectrometry; HPLC-DAD-MMI-MS, high performance liquid chromatography-diode array detector-multi-mode ionization-mass spectrometry.
Aldehydes in e-cigarette aerosol:
∗Abbreviations: <LOQ, below the limit of quantitation but above the limit of detection; N.D., not detected; N.T., not tested.
Tobacco-specific nitrosamines in nicotine-containing products:
∗ng/g, but not for gum and patch. ng/gum piece is for gum and ng/patch is for patch.
Comparison of levels of toxicants in e-cigarette aerosol:
Abbreviations: μg, microgram; ng, nanogram; ND, not detected.
Comparison of levels of toxicants in e-cigarette aerosol:
∗Fifteen puffs were chosen to estimate the nicotine delivery of one traditional cigarette.Each e-cigarette cartridge, which varies across manufacturers, and each cartridge produces 10 to 250 puffs of vapor. This correlates to 5 to 30 traditional cigarettes. A puff usually lasts for 3 to 4 seconds. A 2014 study found there is wide differences in daily puffs in experienced vapers, which typically varies from 120 to 225 puffs per day. From puff-to-puff e-cigarettes do not provide as much nicotine as traditional cigarettes. A 2016 review found "The nicotine contained in the aerosol from 13 puffs of an e-cigarette in which the nicotine concentration of the liquid is 18 mg per milliliter has been estimated to be similar to the amount in the smoke of a typical tobacco cigarette, which contains approximately 0.5 mg of nicotine." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Leap year starting on Tuesday**
Leap year starting on Tuesday:
A leap year starting on Tuesday is any year with 366 days (i.e. it includes 29 February) that begins on Tuesday, 1 January, and ends on Wednesday, 31 December. Its dominical letters hence are FE. The most recent year of such kind was 2008 and the next one will be 2036 in the Gregorian calendar or, likewise 2020 and 2048 in the obsolete Julian calendar.
Leap year starting on Tuesday:
Any leap year that starts on Tuesday, Friday or Saturday has only one Friday the 13th; the only one in this leap year occurs in June. Common years starting on Wednesday share this characteristic.
Applicable years:
Gregorian Calendar Leap years that begin on Tuesday, along with those starting on Wednesday, occur at a rate of approximately 14.43% (14 out of 97) of all total leap years in a 400-year cycle of the Gregorian calendar. Thus, their overall occurrence is 3.5% (14 out of 400).
Applicable years:
400 year cycle century 1: 8, 36, 64, 92 century 2: 104, 132, 160, 188 century 3: 228, 256, 284 century 4: 324, 352, 380 Julian Calendar Like all leap year types, the one starting with 1 January on a Tuesday occurs exactly once in a 28-year cycle in the Julian calendar, i.e. in 3.57% of years. As the Julian calendar repeats after 28 years that means it will also repeat after 700 years, i.e. 25 cycles. The year's position in the cycle is given by the formula ((year + 8) mod 28) + 1).
Holidays:
International Valentine's Day falls on a Thursday The leap day (February 29) falls on a Friday World Day for Grandparents and the Elderly falls on July 27 Halloween falls on a Friday Christmas Day falls on a Thursday Roman Catholic Solemnities Epiphany falls on a Sunday Candlemas falls on a Saturday Saint Joseph's Day falls on a Wednesday The Annunciation of Jesus falls on a Tuesday The Nativity of John the Baptist falls on a Tuesday The Solemnity of Saints Peter and Paul falls on a Sunday The Transfiguration of Jesus falls on a Wednesday The Assumption of Mary falls on a Friday The Exaltation of the Holy Cross falls on a Sunday All Saints' Day falls on a Saturday All Souls' Day falls on a Sunday The Feast of Christ the King falls on November 23 (or on October 26 in versions of the calendar between 1925 and 1962) The First Sunday of Advent falls on November 30 The Immaculate Conception falls on a Monday Gaudete Sunday falls on December 14 Rorate Sunday falls on December 21 Australia and New Zealand Australia Day falls on a Saturday Waitangi Day falls on a Wednesday Daylight saving ends on April 6 ANZAC Day falls on a Friday Mother's Day falls on May 11 Father's Day falls on its latest possible date, September 7 Daylight saving begins on September 28 in New Zealand and October 5 in Australia British Isles Saint David's Day falls on a Saturday Mother's Day falls on March 2, March 9, March 16, March 23 or March 30 Saint Patrick's Day falls on a Monday Daylight saving begins on March 30 Saint George's Day falls on a Wednesday Father's Day falls on its earliest possible date, June 15 Orangeman's Day falls on a Saturday Daylight saving ends on October 26 Guy Fawkes Night falls on a Wednesday Saint Andrew's Day falls on a Sunday Canada Daylight saving begins on March 9 Mother's Day falls on May 11 Victoria Day falls on May 19 Father's Day falls on its earliest possible date, June 15 Canada Day falls on a Tuesday Labour Day falls on its earliest possible date, September 1 Thanksgiving Day falls on October 13 Daylight saving ends on November 2 United States Martin Luther King Jr. Day falls on its latest possible date, January 21 President's Day falls on February 18 Daylight saving begins on March 9 Mother's Day falls on May 11 Memorial Day falls on May 26 Father's Day falls on its earliest possible date, June 15 Juneteenth falls on a Thursday Independence Day falls on a Friday Labor Day falls on its earliest possible date, September 1 Grandparents' Day falls on its earliest possible date, September 7 Columbus Day falls on October 13 Daylight saving ends on November 2 Election Day falls on November 4 Thanksgiving Day falls on November 27 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**XAP044**
XAP044:
XAP044 is a drug which acts as a potent and selective antagonist of the metabotropic glutamate receptor 7 (mGluR7). It inhibits long-term potentiation in the amygdala and inhibits responses associated with stress and anxiety in animal models, as well as being used to study the role of mGluR7 in various other processes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Prochlorperazine**
Prochlorperazine:
Prochlorperazine, formerly sold under the brand name Compazine among others, is a medication used to treat nausea, migraines, schizophrenia, psychosis and anxiety. It is a less preferred medication for anxiety. It may be taken by mouth, rectally, injection into a vein, or injection into a muscle.Common side effects include sleepiness, blurry vision, low blood pressure, and dizziness. Serious side effects may include movement disorders including tardive dyskinesia and neuroleptic malignant syndrome. Use in pregnancy and breastfeeding is generally not recommended. It is a typical antipsychotic which is believed to work by reducing the action of dopamine in the brain.Prochlorperazine was approved for medical use in the United States in 1956. It is available as a generic medication. In 2020, it was the 355th most commonly prescribed medication in the United States, with more than 600 thousand prescriptions.
Medical uses:
Vomiting Prochlorperazine is used to prevent vomiting caused by chemotherapy, radiation therapy and in the pre- and postoperative setting. A 2015 Cochrane review found no differences in efficacy among drugs commonly used for this purpose in emergency rooms.
Migraine Prochlorperazine, generally by intravenous, is used to treat migraine. Such use is recommended by The American Headache Society. A 2019 systematic review found prochlorperazine was nearly three times as likely as metoclopramide to relieve headache within 60 minutes of administration.
Labyrinthitis In the UK prochlorperazine maleate has been used for labyrinthitis, which include not only nausea and vertigo, but spatial and temporal 'jerking' and distortion.
Side effects:
Sedation is very common, and extrapyramidal side effects are common and include restlessness, dystonic reactions, pseudoparkinsonism, and akathisia; the extrapyramidal symptoms can affect 2% of people at low doses, whereas higher doses may affect as many as 40% of people.Prochlorperazine can also cause a life-threatening condition called neuroleptic malignant syndrome (NMS). Some symptoms of NMS include high fever, stiff muscles, neck muscle spasm, confusion, irregular pulse or blood pressure, fast heart rate (tachycardia), sweating, abnormal heart rhythms (arrhythmias). Research from the Veterans Administration and United States Food and Drug Administration show injection site reactions. Adverse effects are similar in children.
Side effects:
Warning The FDA approved label for prochlorperazine includes a warning for increased risk of mortality in elderly patients with dementia related psychosis.
Side effects:
Discontinuation The British National Formulary recommends a gradual withdrawal when discontinuing antipsychotics to avoid acute withdrawal syndrome or rapid relapse. Symptoms of withdrawal commonly include nausea, vomiting, and loss of appetite. Other symptoms may include restlessness, increased sweating, and trouble sleeping. Less commonly there may be a feeling of the world spinning, numbness, or muscle pains. Symptoms generally resolve after a short period of time.There is tentative evidence that discontinuation of antipsychotics can result in psychosis. It may also result in reoccurrence of the condition that is being treated. Rarely tardive dyskinesia can occur when the medication is stopped.
Pharmacology:
Prochlorperazine is thought to exert its antipsychotic effects by blocking dopamine receptors.Prochlorperazine is analogous to chlorpromazine; both of these agents antagonize dopaminergic D2 receptors in various pathways of the central nervous system. This D2 blockade results in antipsychotic, antiemetic and other effects. Hyperprolactinemia is a side effect of dopamine antagonists as blockade of D2 receptors within the tuberoinfundibular pathway results in increased plasma levels of prolactin due to increased secretion by lactotrophs in the anterior pituitary.
Pharmacology:
Following intramuscular injection, the antiemetic action is evident within 5 to 10 minutes and lasts for 3 to 4 hours. Rapid action is also noted after buccal treatment. With oral dosing, the start of action is delayed but the duration somewhat longer (approximately 6 hours).
Society and culture:
In the United Kingdom, prochlorperazine is available for the treatment of nausea caused by migraine as a tablet dissolved in the mouth, and in Australia as a tablet swallowed whole. In the UK, it is available via a prescription and as a pharmacy medicine, meaning it does not require a prescription but is only available after talking with a pharmacist.
Society and culture:
Marketing Prochlorperazine is available as tablets, suppositories, and in an injectable form.As of September 2017 it was marketed under the trade names Ametil, Antinaus, Buccastem, Bukatel, Chlormeprazine, Chloropernazine, Compazine, Compro, Daolin, Dhaperazine, Emedrotec, Emetiral, Eminorm, Lotamin, Mitil, Mormal, Nautisol, Novamin, Novomit, Proazine, Procalm, Prochlorperazin, Prochlorperazine, Prochlorpérazine, Prochlorperazinum, Prochlozine, Proclorperazina, Promat, Promin, Promtil, Roumin, Scripto-metic, Seratil, Stemetil, Steremal, Vergon, Vestil, and Volimin.It was also marketed at that time as a combination drug for humans with paracetamol as Vestil-A, as a combination drug for veterinary use, with isopropamide as Darbazine.
Research:
Alexza Pharmaceuticals studied an inhaled form of prochlorperazine for the treatment of migraine through Phase II trials under the development name AT-001; development was discontinued in 2011.
Synthesis:
The alkylation of 2-chlorophenothiazine (1) and 1-(3-Chloropropyl)-4-methylpiperazine [104-16-5] (2) in the presence of sodamide gives Prochlorperazine (3); or by alkylation of 2-Chloro-10-(3-chloropropyl)phenothiazine [2765-59-5] (4) and 1-methylpiperazine (5). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gojūshiho**
Gojūshiho:
Gojūshiho (五十四歩, lit. 54 steps) is a kata practiced in karate. Gojushiho was developed by Sokon Matsumura, one of the key founders of Okinawan martial arts and named it "Uesheishi", which literally means 54 methods in Chinese. In some styles of karate, there are two versions of this kata - Gojūshiho Shō and Gojūshiho Dai. An advantage of the two versions of the kata is to better master the difficult techniques presented therein, but not without facing some confusion, for many sequences are the same and others only slightly different. The embusen of both Gojūshiho Shō and Gojūshiho Dai are nearly identical. Gojūshiho Shō begins straight off with a wide variety of advanced techniques and, as such, is highly recommended for study. Gojūshiho Dai consists of many advanced open-handed techniques and attacks to the collar-bone.
Gojūshiho:
Gojushiho movement is quite similar with Aikido grappling technique in terms of flowing knife hand or "tate-shuto-uke" or vertical knife hand block. "Tate-shuto-uke" does not resemble other shuto uke which resemble as "block technique". Rather it was throwing technique in "aiki-jujutsu". Another "shuto" technique as "shuto-nagashi-uke" or "knife-hand-flowing-block" has become the unique characteristic of Gojushiho because of flowing movement which is not merely interpreted as "block", but "throw".
Gojūshiho:
Gojūshiho Shō and Gojūshiho Dai are two versions in Shotokan of the Shōrin-ryū kata called Useishi (54) or Gojūshiho. The oft-repeated story about the JKA having to rename the Gojushiho kata due to a tournament mix-up; and Kanazawa Hirokazu, because of his seniority, keeping the original names in his SKIF organisation is without foundation. In fact, Kanazawa is on record as saying that when he formed SKIF he changed the names of the two kata as he felt that the “sho” designation suited the smaller, more difficult, kata better. Kanazawa also mentioned that this kata was introduced into the JKA before its sibling, and this explained why the JKA decided to call it “dai” when they introduced the second Gojushiho into the syllabus.
Gojūshiho:
This kata is also practiced in Tang Soo Do and is called O Sip Sa Bo in Korean. And it is said that it also has some influences of Ng Ying Kungfu (Chinese: 五形功夫). Due to its difficulty, this kata is often reserved for advanced students, usually for those who are 6th degree black belts and above.
Gojushiho is also practiced in Goshin Kagen Goju Karate, a modified style of Goju founded by Hanshi Gerald Thomson | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Muck (gambling)**
Muck (gambling):
Mucking is the discarding of cards in card games. Depending on the game, it may be a regular part of play or it may be considered cheating.
Poker:
In poker, it most often refers to the discard pile into which players may throw their folded hands, and into which the dealer places burned cards. It also refers to when a player is folding his hand (face down) without saying anything. In fact, the hand is not folded until it reaches the muck (it can be taken back and used if the dealer did not take the hand yet). The practice of mucking cards when discarding helps to ensure that no other player can reliably determine which cards were in the folded hand.
Poker:
In poker, the term may also refer to the action that a player who has not folded may take; he can have his hand "mucked" if another player attempts to discard but one or more cards end up in the live players hand. This is why many players will place a chip or other object on their cards: it helps to prevent errant cards from entering their hand. Sometimes they are referred to as card covers, card guards or card protector.
Poker:
Mucking as a strategy In some variations of poker a player may "muck" their cards in order to reinforce a bluff while preserving their image on the table.
Other card games:
Mucking or hand mucking may also refer to a form of sleight of hand, and, if used in a card game, is cheating. A player conceals a card through sleight of hand, removing it from play so that it may later be inserted back into the game to the cheater's advantage. For example, in blackjack a cheating player might remove an ace from the table to use the next time he is dealt a ten to make a blackjack. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nine oils**
Nine oils:
In the 19th century, the nine oils was a preparation, or liniment, which was rubbed into the skin to relieve aches, such as over bruises. "Nine oils" was apparently developed in veterinary medicine, for treating horses, but later was adopted for human medical use.
Nine oils:
According to one 19th-century druggists' book, oils used in the preparation included: train oil; that is, whale oil or the oil of the blubber of another marine mammal oil of turpentine oil of bricks, the oil obtained by the distillation of pieces of brick saturated with rapeseed oil or olive oil oil of amber spirit of camphor Barbados tar, a kind of greenish petroleum found in Barbados oil of vitriol; that is, sulfuric acidHowever, it is certain that many "nine oils" preparations did not contain these ingredients, and in fact it is possible that the name "nine oils" never referred to any specific combination of compounds. The writer James Greenwood, in 1883, put these words in the mouth of the street-doctor "Dr. Quackinbosh", in his series of articles Toilers in London, by One of the Crowd, originally serialized in the Daily Telegraph: When I first started I worked Woolwich with my "miraculous Nine Oils." Men who work at heavy lifting and hauling, and are likely to get strains and ricks of the back, have a superstitious belief in the "Nine Oils." It is the same wherever you go. What are they? what, the original Nine? Blessed if I know, nor they don't know either. But that don't make any difference. I used to give 'em one – sperm oil – and call it the Nine. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kiskatom Formation**
Kiskatom Formation:
The Kiskatom Formation is a geologic formation in New York. It preserves fossils dating back to the Devonian period. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mosquito-borne disease**
Mosquito-borne disease:
Mosquito-borne diseases or mosquito-borne illnesses are diseases caused by bacteria, viruses or parasites transmitted by mosquitoes. Nearly 700 million people get a mosquito-borne illness each year resulting in over 725,000 deaths.Diseases transmitted by mosquitoes include malaria, dengue, West Nile virus, chikungunya, yellow fever, filariasis, tularemia, dirofilariasis, Japanese encephalitis, Saint Louis encephalitis, Western equine encephalitis, Eastern equine encephalitis, Venezuelan equine encephalitis, Ross River fever, Barmah Forest fever, La Crosse encephalitis, and Zika fever, as well as newly detected Keystone virus and Rift Valley fever. There is no evidence as of April 2020 that COVID-19 can be transmitted by mosquitoes, and it is extremely unlikely this could occur.
Types:
Protozoa The female mosquito of the genus Anopheles may carry the malaria parasite. Four different species of protozoa cause malaria: Plasmodium falciparum, Plasmodium malariae, Plasmodium ovale and Plasmodium vivax (see Plasmodium). Worldwide, malaria is a leading cause of premature mortality, particularly in children under the age of five, with an estimated 207 million cases and more than half a million deaths in 2012, according to the World Malaria Report 2013 published by WHO. The death toll increased to one million as of 2018 according to the American Mosquito Control Association.
Types:
Myiasis Botflies are known to parasitize humans or other mammalians, causing myiasis, and to use mosquitoes as intermediate vector agents to deposit eggs on a host. The human botfly Dermatobia hominis attaches its eggs to the underside of a mosquito, and when the mosquito takes a blood meal from a human or an animal, the body heat of the mammalian host induces hatching of the larvae.
Types:
Helminthiasis Some species of mosquito can carry the filariasis worm, a parasite that causes a disfiguring condition (often referred to as elephantiasis) characterized by a great swelling of several parts of the body; worldwide, around 40 million people are living with a filariasis disability.
Types:
Virus The viral diseases yellow fever, dengue fever, Zika fever and chikungunya are transmitted mostly by Aedes aegypti mosquitoes.Other viral diseases like epidemic polyarthritis, Rift Valley fever, Ross River fever, St. Louis encephalitis, West Nile fever, Japanese encephalitis, La Crosse encephalitis and several other encephalitic diseases are carried by several different mosquitoes. Eastern equine encephalitis (EEE) and Western equine encephalitis (WEE) occur in the United States where they cause disease in humans, horses, and some bird species. Because of the high mortality rate, EEE and WEE are regarded as two of the most serious mosquito-borne diseases in the United States. Symptoms range from mild flu-like illness to encephalitis, coma, and death.Viruses carried by arthropods such as mosquitoes or ticks are known collectively as arboviruses. West Nile virus was accidentally introduced into the United States in 1999 and by 2003 had spread to almost every state with over 3,000 cases in 2006.
Types:
Other species of Aedes as well as Culex and Culiseta are also involved in the transmission of disease.Myxomatosis is spread by biting insects, including mosquitoes.
Transmission:
A mosquito's period of feeding is often undetected; the bite only becomes apparent because of the immune reaction it provokes. When a mosquito bites a human, it injects saliva and anti-coagulants. For any given individual, with the initial bite there is no reaction but with subsequent bites the body's immune system develops antibodies and a bite becomes inflamed and itchy within 24 hours. This is the usual reaction in young children. With more bites, the sensitivity of the human immune system increases, and an itchy red hive appears in minutes where the immune response has broken capillary blood vessels and fluid has collected under the skin. This type of reaction is common in older children and adults. Some adults can become desensitized to mosquitoes and have little or no reaction to their bites, while others can become hyper-sensitive with bites causing blistering, bruising, and large inflammatory reactions, a response known as skeeter syndrome.One study found Dengue virus and Zika virus altered the skin bacteria of rats in a way that caused their body odor to be more attractive to mosquitoes.
Signs and symptoms:
Symptoms of illness are specific to the type of viral infection and vary in severity, based on the individuals infected.
Zika virus Symptoms vary in severity, from mild unnoticeable symptoms to more common symptoms like fever, rash, headache, achy muscle and joints, and conjunctivitis. Symptoms can last several days to weeks, but death resulting from this infection is rare.
Signs and symptoms:
West Nile virus, dengue fever Most people infected with the West Nile virus usually do not develop symptoms. However, some individuals can develop cases of severe fatigue, weakness, headaches, body aches, joint and muscle pain, vomiting, diarrhea, and rash, which can last for weeks or months. More serious symptoms have a greater risk of appearing in people over 60 years of age, or those with cancer, diabetes, hypertension, and kidney disease.Dengue fever is mostly characterized by high fever, headaches, joint pain, and rash. However, more severe instances can lead to hemorrhagic fever, internal bleeding, and breathing difficulty, which can be fatal.
Signs and symptoms:
Chikungunya People infected with this virus can develop sudden onset fever along with debilitating joint and muscle pain, rash, headache, nausea, and fatigue. Symptoms can last a few days or be prolonged to weeks and months. Although patients can recover completely, there have been cases in which joint pain has persisted for several months and can extend beyond that for years. Other people can develop heart complications, eye problems, and even neurological complications.
Mechanism:
Mosquitoes carrying such arboviruses stay healthy because their immune systems recognizes the virions as foreign particles and "chop off" the virus' genetic coding, rendering it inert. Human infection with a mosquito-borne virus occurs when a female mosquito bites someone while its immune system is still in the process of destroying the virus's harmful coding. It is not completely known how mosquitoes handle eukaryotic parasites to carry them without being harmed. Data has shown that the malaria parasite Plasmodium falciparum alters the mosquito vector's feeding behavior by increasing frequency of biting in infected mosquitoes, thus increasing the chance of transmitting the parasite.The mechanism of transmission of this disease starts with the injection of the parasite into the victim's blood when malaria-infected female Anopheles mosquitoes bite into a human being. The parasite uses human liver cells as hosts for maturation where it will continue to replicate and grow, moving into other areas of the body via the bloodstream. The spread of this infection cycle then continues when other mosquitoes bite the same individual. The result will cause that mosquito to ingest the parasite and allow it to transmit the Malaria disease into another person through the same mode of bite injection. Flaviviridae viruses transmissible via vectors like mosquitoes include West Nile virus and yellow fever virus, which are single stranded, positive-sense RNA viruses enveloped in a protein coat.
Mechanism:
Once inside the host's body, the virus will attach itself to a cell's surface through receptor-mediated endocytosis. This essentially means that the proteins and DNA material of the virus are ingested into the host cell. The viral RNA material will undergo several changes and processes inside the host's cell so that it can release more viral RNA that can then be replicated and assembled to infect neighboring host cells. Mosquito-borne flaviviruses also encode viral antagonists to the innate immune system in order to cause persistent infection in mosquitoes and a broad spectrum of diseases in humans. The data on transmissibility via insect vectors of hepatitis C virus, also belonging to family Flaviviridae (as well as for hepatitis B virus, belonging to family Hepadnaviridae) are inconclusive. WHO states that "There is no insect vector or animal reservoir for HCV.", while there are experimental data supporting at least the presence of [PCR]-detectable hepatitis C viral RNA in Culex mosquitoes for up to 13 days.Currently, there are no specific vaccine therapies for West Nile virus approved for humans; however, vaccines are available and some show promise for animals, as a means to intervene with the mechanism of spreading such pathogens.
Diagnosis:
Doctors can typically identify a mosquito bite by sight.A doctor will perform a physical examination and ask about medical history as well as any travel history. Be ready to give details on any international trips, including the dates you were traveling, the countries you visited and any contact you had with mosquitoes.
Dengue fever Diagnosing dengue fever can be difficult, as its symptoms often overlap with many other diseases such as malaria and typhoid fever. Laboratory tests can detect evidence of the dengue viruses, however the results often come back too late to assist in directing treatment.
Diagnosis:
West Nile virus Medical testing can confirm the presence of West Nile fever or a West Nile-related illness, such as meningitis or encephalitis. If infected, a blood test may show a rising level of antibodies to the West Nile virus. A lumbar puncture (spinal tap) is the most common way to diagnose meningitis, by analyzing the cerebrospinal fluid surrounding your brain and spinal cord. The fluid sample may show an elevated white cell count and antibodies to the West Nile virus if you were exposed. In some cases, an electroencephalography (EEG) or magnetic resonance imaging (MRI) scan can help detect brain inflammation.
Diagnosis:
Zika virus A Zika virus infection might be suspected if symptoms are present and an individual has traveled to an area with known Zika virus transmission. Zika virus can only be confirmed by a laboratory test of body fluids, such as urine or saliva, or by blood test.
Chikungunya Laboratory blood tests can identify evidence of chikungunya or other similar viruses such as dengue and Zika. Blood test may confirm the presence of IgM and IgG anti-chikungunya antibodies. IgM antibodies are highest 3 to 5 weeks after the beginning of symptoms and will continue be present for about 2 months.
Prevention:
There is a re-emergence of mosquito vectored viruses (arthropod-borne viruses) called arboviruses carried by the Aedes aegypti mosquito. Examples are the Zika virus, chikungunya virus, yellow fever and dengue fever. The re-emergence of the viruses has been at a faster rate, and over a wider geographic area, than in the past. The rapid re-emergence is due to expanding global transportation networks, the mosquito's increasing ability to adapt to urban settings, the disruption of traditional land use and the inability to control expanding mosquito populations. Like malaria, arboviruses do not have a vaccine. (The only exception is yellow fever.) Prevention is focused on reducing the adult mosquito populations, controlling mosquito larvae and protecting individuals from mosquito bites. Depending on the mosquito vector, and the affected community, a variety of prevention methods may be deployed at one time.
Prevention:
Insecticidal nets and indoor residual spraying The use of insecticide treated mosquito nets (ITNs) are at the forefront of preventing mosquito bites that cause malaria. The prevalence of ITNs in sub-Saharan Africa has grown from 3% of households to 50% of households from 2000 to 2010 with over 254 million insecticide treated nets distributed throughout sub-Saharan Africa for use against the mosquito vectors Anopheles gambiae and Anopheles funestus which carry malaria. Because the Anopheles gambiae feeds indoors (endophagic) and rests indoors after feeding (endophilic), insecticide treated nets (ITNs) interrupt the mosquito's feeding pattern. The ITNs continue to offer protection, even after there are holes in the nets, because of their excito-repellency properties which reduce the number of mosquitoes that enter the home. The World Health Organization (WHO) recommends treating ITNs with the pyrethroid class of insecticides. There is an emerging concern of mosquito resistance to insecticides used in ITNs. Twenty-seven (27) sub-Saharan African countries have reported Anopheles vector resistance to pyrethroid insecticides.Indoor spraying of insecticides is another prevention method widely used to control mosquito vectors. To help control the Aedes aegypti mosquito, homes are sprayed indoors with residual insecticide applications. Indoor residual spraying (IRS) reduces the female mosquito population and mitigates the risk of dengue virus transmission. Indoor residual spraying is completed usually once or twice a year. Mosquitoes rest on walls and ceilings after feeding and are killed by the insecticide. Indoor spraying can be combined with spraying the exterior of the building to help reduce the number of mosquito larvae and subsequently, the number of adult mosquitoes.
Prevention:
Personal protection methods There are other methods that an individual can use to protect themselves from mosquito bites. Limiting exposure to mosquitoes from dusk to dawn when the majority of mosquitoes are active and wearing long sleeves and long pants during the period mosquitoes are most active. Placing screens on windows and doors is a simple and effective means of reducing the number of mosquitoes indoors. Anticipating mosquito contact and using a topical mosquito repellant with icaridin or DEET is also recommended. Draining or covering water receptacles, both indoor and outdoors, is also a simple but effective prevention method. Removing debris and tires, cleaning drains, and cleaning gutters help larval control and reduce the number of adult mosquitoes.
Prevention:
Vaccines There is a vaccine for yellow fever which was developed in the 1930s, the yellow 17D vaccine, and it is still in use today. The initial yellow fever vaccination provides lifelong protection for most people and provides immunity within 30 days of the vaccine. Reactions to the yellow fever vaccine have included mild headache and fever, and muscle aches. There are rare cases of individuals presenting with symptoms that mirror the disease itself. The risk of complications from the vaccine are greater for individuals over 60 years of age. In addition, the vaccine is not usually administered to babies under nine months of age, pregnant women, people with allergies to egg protein, and individuals living with AIDS/HIV. The World Health Organization (WHO) reports that 105 million people have been vaccinated for yellow fever in West Africa from 2000 to 2015.To date, there are relatively few vaccines against mosquito-borne diseases, this is due to the fact that most viruses and bacteria caused by mosquitos are highly mutatable. The National Institute of Allergy and Infectious Disease (NIAID) began Phase 1 clinical trials of a new vaccine that would be nearly universal in protecting against the majority of mosquito-borne diseases.
Prevention:
Education and community involvement The arboviruses have expanded their geographic range and infected populations that had no recent community knowledge of the diseases carried by the Aedes aegypti mosquito. Education and community awareness campaigns are necessary for prevention to be effective. Communities are educated on how the disease is spread, how they can protect themselves from infection and the symptoms of infection. Community health education programs can identify and address the social/economic and cultural issues that can hinder preventative measures. Community outreach and education programs can identify which preventative measures a community is most likely to employ. Leading to a targeted prevention method that has a higher chance of success in that particular community. Community outreach and education includes engaging community health workers and local healthcare providers, local schools and community organizations to educate the public on mosquito vector control and disease prevention.
Treatments:
Yellow fever Numerous drugs have been used to treat yellow fever disease with minimal satisfaction to date. Patients with multisystem organ involvement will require critical care support such as possible hemodialysis or mechanical ventilation. Rest, fluids, and acetaminophen are also known to relieve milder symptoms of fever and muscle pain. Due to hemorrhagic complications, aspirin should be avoided. Infected individuals should avoid mosquito exposure by staying indoors or using a mosquito net.
Treatments:
Dengue fever Dengue infection's therapeutic management is simple, cost effective and successful in saving lives by adequately performing timely institutionalized interventions. Treatment options are restricted, while no effective antiviral drugs for this infection have been accessible to date. Patients in the early phase of the dengue virus may recover without hospitalization. However, ongoing clinical research is in the works to find specific anti-dengue drugs.
Treatments:
Dengue fever occurs via aedes aegypti mosquito( it acts as a vector) Zika virus Zika virus vaccine clinical trials are to be conducted and established. There are efforts being put toward advancing antiviral therapeutics against zika virus for swift control. Present day Zika virus treatment is symptomatic through antipyretics and analgesics. Currently there are no publications regarding viral drug screening. Nevertheless, therapeutics for this infection have been used.
Treatments:
Chikungunya There are no treatment modalities for acute and chronic chikungunya that currently exist. Most treatment plans use supportive and symptomatic care like analgesics for pain and anti-inflammatories for inflammation caused by arthritis. In acute stages of this virus, rest, antipyretics and analgesics are used to subside symptoms. Most use non-steroidal anti-inflammatory drugs (NSAIDs). In some cases, joint pain may resolve from treatment but stiffness remains.
Treatments:
Latest treatment The sterile insect technique (SIT) uses irradiation to sterilize insect pests before releasing them in large numbers to mate with wild females. Since they do not produce any offspring, the population, and consequently the disease incidence, is reduced over time. Used successfully for decades to combat fruit flies and livestock pests such as screwworm and tsetse flies, the technique can be adapted also for some disease-transmitting mosquito species. Pilot projects are being initiated or are under way in different parts of the world.
Epidemiology:
Mosquito-borne diseases, such as dengue fever and malaria, typically affect developing countries and areas with tropical climates. Mosquito vectors are sensitive to climate changes and tend to follow seasonal patterns. Between years there are often dramatic shifts in incidence rates. The occurrence of this phenomenon in endemic areas makes mosquito-borne viruses difficult to treat.Dengue fever is caused by infection through viruses of the family Flaviviridae. The illness is most commonly transmitted by Aedes aegypti mosquitoes in tropical and subtropical regions. Dengue virus has four different serotypes, each of which are antigenically related but have limited cross-immunity to reinfection.Although dengue fever has a global incidence of 50-100 million cases, only several hundreds of thousands of these cases are life-threatening. The geographic prevalence of the disease can be examined by the spread of Aedes aegypti. Over the last twenty years, there has been a geographic spread of the disease. Dengue incidence rates have risen sharply within urban areas which have recently become endemic hot spots for the disease. The recent spread of Dengue can also be attributed to rapid population growth, increased coagulation in urban areas, and global travel. Without sufficient vector control, the dengue virus has evolved rapidly over time, posing challenges to both government and public health officials.Malaria is caused by a protozoan called Plasmodium falciparum. P. falciparum parasites are transmitted mainly by the Anopheles gambiae complex in rural Africa. In just this area, P. falciparum infections comprise an estimated 200 million clinical cases and 1 million annual deaths. 75% of individuals affected in this region are children. As with dengue, changing environmental conditions have led to novel disease characteristics. Due to increased illness severity, treatment complications, and mortality rates, many public health officials concede that malaria patterns are rapidly transforming in Africa. Scarcity of health services, rising instances of drug resistance, and changing vector migration patterns are factors that public health officials believe contribute to malaria's dissemination.
Epidemiology:
Climate heavily affects mosquito vectors of malaria and dengue. Climate patterns influence the lifespan of mosquitos as well as the rate and frequency of reproduction. Climate change impacts have been of great interest to those studying these diseases and their vectors. Additionally, climate impacts mosquito blood feeding patterns as well as extrinsic incubation periods. Climate consistency gives researchers an ability to accurately predict annual cycling of the disease but recent climate unpredictability has eroded researchers' ability to track the disease with such precision.
Advances in biological control of arboviruses:
In many insect species, such as Drosophila melanogaster, researchers found that a natural infection with the bacteria strain Wolbachia pipientis increases the fitness of the host by increasing resistance to RNA viral infections. Robert L. Glaser and Mark A. Meola investigated Wolbachia-induced resistance to West Nile virus (WNV) in Drosophila melanogaster (fruit flies). Two groups of fruit flies were naturally infected with Wolbachia. Glaser and Meola then cured one group of fruit flies of Wolbachia using tetracycline. Both the infected group and the cured groups were then infected with WNV. Flies infected with Wolbachia were found to have a changed phenotype that caused resistance to WNV. The phenotype was found to be caused by a “dominant, maternally transmitted, cytoplasmic factor”. The WNV-resistance phenotype was then reversed by curing the fruit flies of Wolbachia. Since Wolbachia is also maternally transmitted, it was found that the WNV-resistant phenotype is directly related to the Wolbachia infection. West Nile virus is transmitted to humans and animals through the Southern house mosquito, Culex quinquefasciatus. Glaser and Meola knew vector compatibility could be reduced through Wolbachia infection due to studies done with other species of mosquitoes, mainly, Aedes aegypti. Their goal was to transfer WNV resistance to Cx. quinquefasciatus by inoculating the embryos of the mosquito with the same strain of Wolbachia that naturally occurred in the fruit flies. Upon infection, Cx. quinquefasciatus showed an increased resistance to WNV that was transferable to offspring. The ability to genetically modify mosquitoes in the lab and then have the infected mosquitoes transmit it to their offspring showed that it was possible to transmit the bacteria to wild populations to decrease human infections.In 2011, Ary Hoffmann and associates produced the first case of Wolbachia-induced arbovirus resistance in wild populations of Aedes aegypti through a small project called Eliminate Dengue: Our Challenge. This was made possible by an engineered strain of Wolbachia termed wMel that came from D. melanogaster. The transfer of wMel from D. melanogaster into field-caged populations of the mosquito Aedes aegypti induced resistance to dengue, yellow fever, and chikungunya viruses. Although other strains of Wolbachia also reduced susceptibility to dengue infection, they also put a greater demand on the fitness of Ae. aegypti. wMel was different in that it was thought to only cost the organism a small portion of its fitness. wMel-infected Ae. aegypti were released into two residential areas in the city of Cairns, Australia over a 14-week period. Hoffmann and associates, released a total of 141,600 infected adult mosquitoes in Yorkeys Knob suburb and 157,300 in Gordonvale suburb. After release, the populations were monitored for three years to record the spread of wMel. Population monitoring was gauged by measuring larvae laid in traps. At the beginning of the monitoring period but still within the release period, it was found that wMel-infected Ae. aegypti had doubled in Yorkeys Knob and increased 1.5-fold in Gordonvale. Uninfected Ae. aegypti populations were in decline. By the end of the three years, wMel-infected Ae. aegypti had stable populations of about 90%. However, these populations were isolated to the Yorkeys Knob and Gordonvale suburbs due to unsuitable habitat surrounding the neighborhoods.Although populations flourished in these areas with nearly 100% transmission, no signs of spread were noted, proving disappointing for some. Following this experiment, Tom L. Schmidt and his colleagues conducted an experiment releasing Wolbachia-infected Aedes aegypti using different site selection methods occurred in different areas of Cairns during 2013. The release sites were monitored over two years. This time the release was done in urban areas that were adjacent to adequate habitat to encourage mosquito dispersal. Over the two years, the population doubled, and spatial spread was also increased, unlike the first release, giving ample satisfactory results. By increasing the spread of the Wolbachia-infected mosquitoes, the researchers were able to establish that population of a large city was possible if the mosquitoes were given adequate habitat to spread into upon release in different local locations throughout the city. An important detail in both of these studies is that no adverse effects on public health or the natural ecosystem occurred. This made it an extremely attractive alternative to traditional insecticide methods given the increased pesticide resistance occurring from heavy use.
Advances in biological control of arboviruses:
From the success seen in Australia, the researchers were able to begin operating in more threatened portions of the world. The Eliminate Dengue program spread to 10 countries throughout Asia, Latin America, and the Western Pacific blooming into the non-profit organization, World Mosquito Program, as of September 2017. They still use the same technique of infecting wild populations of Ae. aegypti as they did in Australia, but their target diseases now include Zika, chikungunya and yellow fever as well as dengue. Although not alone in their efforts to use Wolbachia-infected mosquitoes to reduce mosquito-borne disease, the World Mosquito Program method is praised for being self-sustaining in that it causes permanent phenotype change rather than reducing mosquito populations through cytoplasmic incompatibility through male-only dispersal. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ytterbium(III) bromide**
Ytterbium(III) bromide:
Ytterbium(III) bromide (YbBr3) is an inorganic chemical compound.
Refer to the adjacent table for the main properties of Ytterbium(III) bromide.
Preparation:
Dissolving ytterbium oxide into 40% hydrobromic acid forms YbBr3·6H2O crystals. After mixing the hydrate with ammonium bromide and heating it in a vacuum, anhydrous YbBr3 can be obtained.
Yb2O3 + 6 HBr → 2 YbBr3 + 3 H2OYtterbium(III) bromide can also be prepared by directly heating ytterbium oxide and ammonium bromide. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Linearizer**
Linearizer:
In signal processing, linearizers are electronic circuits which improve the non-linear behaviour of amplifiers to increase efficiency and maximum output power.
Linearizer:
Creating circuits with the inverted behaviour to the amplifier is one way to implement this concept. These circuits counteract the non-linearities of the amplifier and minimize the distortion of the signal. This increases linear operating range up to the saturation (maximum output power) of the amplifier. Linearized amplifiers have a significantly higher efficiency with improved signal quality. There are different concepts to linearize an amplifier, including pre- and post-distortion and feedback linearization. Most commonly used is pre-distortion linearization.
Linearizer:
The function performed by the linearizer in relation to the amplifier is very similar to that of eyeglasses in relation to the eye. The eye distorts the image seen; the glasses pre-distort the image. When these two distortions are combined, the result is a clear image. For the nearsighted, the glasses must pre-distort the image one way. For the farsighted, the image must be pre-distorted in the opposite way.
Functionality of Pre-distortion Linearizers:
Non-linearities occur in amplifiers due to decreasing amplification and changing phase when operated near saturation. This behavior is commonly referred to as gain or phase compression. The pre-distortion linearizer is designed to compensate these changes. The resulting behavior is commonly referred to as gain or phase expansion.
Functionality of Pre-distortion Linearizers:
A pre-distortion linearizer works by creating signal distortion (amplitude and phase) that is the complement of the signal distortion inherent in the High-Powered Amplifier. The signal to be amplified is first passed through the linearizer, distorting the signal, with no loss in gain. The distorted signal is then fed to the High-Powered Amplifier to be amplified. The distortion inherent in the High-Powered Amplifier negates the distortion introduced by the linearizer producing a near linear transfer characteristic.
Functionality of Pre-distortion Linearizers:
Figure 1 shows the amplification (gain) contingent upon input power. The gain compression of the amplifier starts above a certain input power level (red curve).
Functionality of Pre-distortion Linearizers:
By adding a pre-distortion linearizer (blue curve) in front of the amplifier, the gain compression effect is compensated up to a certain power level (green curve). The point where the gain of the total system is starting to drop off is pushed to a higher power level thereby increasing the linear operating range. In practice, the linear output power level of an amplifier increases significantly (up to four times).
Functionality of Pre-distortion Linearizers:
The increased linear operating range is also illustrated in Figure 2 (light blue area). The chart shows the relationship between input and output power of an amplifier with and without pre-distortion linearizer. The dotted line shows the output power of the amplifier as a function of the input power on a logarithmic scale. In this illustration the compression is shown as the deviation from the ideal 45° line. The amplifier with pre-distortion linearizer (solid line) deviates from the ideal line at a much higher power level. The light blue area illustrates the improved linear operating power range gained by adding a pre-distortion linearizer.
Advantages of Pre-distortion Linearizers:
Pre-distortion linearizers operate in the small signal area and increase the DC power consumption of the system only marginally. Additional advantages can be deduced including: Small foot print, low weight.
Low cost solution compared to use of higher power amplifier in similar linear operating range.
Lower environmental impact through improved efficiency.
Retrofit possible Adjustment to different amplifier types possible Highly reliable
Typical Properties of a Pre-distortion Linearizer:
Dimensions: Dependent on design in the range of a few cm Weight: 20g to 200g depending on features Frequencies: 1 to 50 GHz with different bandwidths Expansion levels: Gain: up to 10 dB, dependent on amplifier data Phase: up to 60°
Applications:
The preferred application of linearizers is in high power amplifiers using electron tubes (traveling wave tubes, Klystron tubes, magnetron tubes) or solid state amplifiers (GaN, GaAs, Si). These systems are used in broadband voice and data transfer applications including Satellite Communication, Broadband Internet, or HD/3D television. These applications require high signal quality. The optimization of the amplifier characteristics enables the ideal use of available power and leads to energy savings of up to 50%. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Brickfielder**
Brickfielder:
The Brickfielder is a hot and dry wind in Southern Australia that develops in the country's deserts in late spring and summer, which heavily raises temperatures in the southeast coast.
Etymology:
The term name was recorded in early 19th century, which emanated from the name of Brickfield Hill, a site which was a former brickworks in the centre of Sydney CBD. The area was associated with dusty wind that conveyed clouds of reddish dust from the brickworks over the emerging city. A more frequently used term for the winds is a "burster".
Development:
The brickfielder precedes the passage of a frontal zone of a low pressure system passing by, and causes severe dust storms that often last for days and led to its naming as the winds blow up red brick dust. It blows to the coastal regions in the south from the outback, reaching the capitals of Adelaide and Melbourne to south, and Sydney to the east. The dry northwesterly desert air from the interior of Australia transports dusty clouds alongside sudden hot spells that usually surpass 38C (100F) to places that feature a relatively mild climate. The temperature might rise up by 15 to 20°C within hours.
Effects:
The northern brickfielder is almost invariably followed by a strong "southerly buster," cloudy and cool from the ocean. The two winds are due to the same cause, viz. a cyclonic system over the Australian Bight. These systems frequently extend inland as a narrow V-shaped depression (the apex northward), bringing the winds from the north on their eastern sides and from the south on their western. Hence as the narrow system passes eastward the wind suddenly changes from north to south, and the thermometer has been known to fall 15 °F (−9 °C) in twenty minutes.On the coastal plains of New South Wales, such as in Western Sydney, the Brickfielder may be exacerbated by the southeast Australian foehn. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Complex lamellar vector field**
Complex lamellar vector field:
In vector calculus, a complex lamellar vector field is a vector field which is orthogonal to a family of surfaces. In the broader context of differential geometry, complex lamellar vector fields are more often called hypersurface-orthogonal vector fields. They can be characterized in a number of different ways, many of which involve the curl. A lamellar vector field is a special case given by vector fields with zero curl.
Complex lamellar vector field:
The adjective "lamellar" derives from the noun "lamella", which means a thin layer. The lamellae to which "lamellar vector field" refers are the surfaces of constant potential, or in the complex case, the surfaces orthogonal to the vector field. This language is particularly popular with authors in rational mechanics.
Complex lamellar vector fields:
In vector calculus, a complex lamellar vector field is a vector field in three dimensions which is orthogonal to its own curl. That is, 0.
The term lamellar vector field is sometimes used as a synonym for the special case of an irrotational vector field, meaning that ∇×F=0.
Complex lamellar vector fields are precisely those that are normal to a family of surfaces. An irrotational vector field is locally the gradient of a function, and is therefore orthogonal to the family of level surfaces (the equipotential surfaces). Any vector field can be decomposed as the sum of an irrotational vector field and a complex lamellar field.
Hypersurface-orthogonal vector fields:
In greater generality, a vector field F on a pseudo-Riemannian manifold is said to be hypersurface-orthogonal if through an arbitrary point there is a smoothly embedded hypersurface which, at all of its points, is orthogonal to the vector field. By the Frobenius theorem this is equivalent to requiring that the Lie bracket of any smooth vector fields orthogonal to F is still orthogonal to F.The condition of hypersurface-orthogonality can be rephrased in terms of the differential 1-form ω which is dual to F. The previously given Lie bracket condition can be reworked to require that the exterior derivative dω, when evaluated on any two tangent vectors which are orthogonal to F, is zero. This may also be phrased as the requirement that there is a smooth 1-form whose wedge product with ω equals dω.Alternatively, this may be written as the condition that the differential 3-form ω ∧ dω is zero. This can also be phrased, in terms of the Levi-Civita connection defined by the metric, as requiring that the totally anti-symmetric part of the 3-tensor field ωi∇j ωk is zero. Using a different formulation of the Frobenius theorem, it is also equivalent to require that ω is locally expressible as λ du for some functions λ and u.In the special case of vector fields on three-dimensional Euclidean space, the hypersurface-orthogonal condition is equivalent to the complex lamellar condition, as seen by rewriting ω ∧ dω in terms of the Hodge star operator as ∗⟨ω, ∗dω⟩, with ∗dω being the 1-form dual to the curl vector field.Hypersurface-orthogonal vector fields are particularly important in general relativity, where (among other reasons) the existence of a Killing vector field which is hypersurface-orthogonal is one of the requirements of a static spacetime. In this context, hypersurface-orthogonality is sometimes called irrotationality, although this is in conflict with the standard usage in three dimensions. Another name is rotation-freeness.An even more general notion, in the language of Pfaffian systems, is that of a completely integrable 1-form ω, which amounts to the condition ω ∧ dω = 0 as given above. In this context, there is no metric and so there is no notion of "orthogonality". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Duocarmycin**
Duocarmycin:
The duocarmycins are members of a series of related natural products first isolated from Streptomyces bacteria in 1978. They are notable for their extreme cytotoxicity and thus represent a class of exceptionally potent antitumour antibiotics.
Biological activity:
As small-molecule, synthetic, DNA minor groove binding alkylating agents, duocarmycins are suitable to target solid tumors. They bind to the minor groove of DNA and alkylate the nucleobase adenine at the N3 position. The irreversible alkylation of DNA disrupts the nucleic acid architecture, which eventually leads to tumor cell death. Analogues of naturally occurring antitumour agents, such as duocarmycins, represent a new class of highly potent antineoplastic compounds.The work of Dale L. Boger and others created a better understanding of the pharmacophore and mechanism of action of the duocarmycins. This research has led to synthetic analogs including adozelesin, bizelesin, and carzelesin which progressed into clinical trials for the treatment of cancer. Similar research that Boger utilized for comparison to his results involving elimination of cancerous tumors and antigens was centered around the use of similar immunoconjugates that were introduced to cancerous colon cells. These studies related to Boger's research involving antigen-specificity that is necessary to the success of the duocarmycins as antitumor treatments.
Duocarmycin analogues vs tubulin binders:
The duocarmycin have shown activity in a variety of multi-drug resistant (MDR) models. Agents that are part of this class of duocarmycins have the potency in the low picomolar range. This makes them suitable for maximizing the cell-killing potency of antibody-drug conjugates to which they are attached.
Antibody-drug conjugates:
The DNA modifying agents such as duocarmycin are being used in the development of antibody-drug conjugate or ADCs. Scientists at The Netherlands-based Synthon (formerly Syntarga) have combined a unique linkers with duocarmycin derivatives that have a hydroxyl group which is crucial for biological activity. Using this technology scientists aim to create ADCs having an optimal therapeutic window, balancing the effect of potent cell-killing agents on tumor cells versus healthy cells.
Synthetic analogs:
The synthetic analogs of duocarmycins include adozelesin, bizelesin, and carzelesin. As members of the cyclopropylpyrroloindole family, these investigational drugs have progressed into clinical trials for the treatment of cancer.
Bizelesin Bizelesin is antineoplastic antibiotic which binds to the minor groove of DNA and induces interstrand cross-linking of DNA, thereby inhibiting DNA replication and RNA synthesis. Bizelesin also enhances p53 and p21 induction and triggers G2/M cell-cycle arrest, resulting in cell senescence without apoptosis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Amortized analysis**
Amortized analysis:
In computer science, amortized analysis is a method for analyzing a given algorithm's complexity, or how much of a resource, especially time or memory, it takes to execute. The motivation for amortized analysis is that looking at the worst-case run time can be too pessimistic. Instead, amortized analysis averages the running times of operations in a sequence over that sequence.: 306 As a conclusion: "Amortized analysis is a useful tool that complements other techniques such as worst-case and average-case analysis.": 14 For a given operation of an algorithm, certain situations (e.g., input parametrizations or data structure contents) may imply a significant cost in resources, whereas other situations may not be as costly. The amortized analysis considers both the costly and less costly operations together over the whole sequence of operations. This may include accounting for different types of input, length of the input, and other factors that affect its performance.
History:
Amortized analysis initially emerged from a method called aggregate analysis, which is now subsumed by amortized analysis. The technique was first formally introduced by Robert Tarjan in his 1985 paper Amortized Computational Complexity, which addressed the need for a more useful form of analysis than the common probabilistic methods used. Amortization was initially used for very specific types of algorithms, particularly those involving binary trees and union operations. However, it is now ubiquitous and comes into play when analyzing many other algorithms as well.
Method:
Amortized analysis requires knowledge of which series of operations are possible. This is most commonly the case with data structures, which have state that persists between operations. The basic idea is that a worst-case operation can alter the state in such a way that the worst case cannot occur again for a long time, thus "amortizing" its cost.
There are generally three methods for performing amortized analysis: the aggregate method, the accounting method, and the potential method. All of these give correct answers; the choice of which to use depends on which is most convenient for a particular situation.
Aggregate analysis determines the upper bound T(n) on the total cost of a sequence of n operations, then calculates the amortized cost to be T(n) / n.
Method:
The accounting method is a form of aggregate analysis which assigns to each operation an amortized cost which may differ from its actual cost. Early operations have an amortized cost higher than their actual cost, which accumulates a saved "credit" that pays for later operations having an amortized cost lower than their actual cost. Because the credit begins at zero, the actual cost of a sequence of operations equals the amortized cost minus the accumulated credit. Because the credit is required to be non-negative, the amortized cost is an upper bound on the actual cost. Usually, many short-running operations accumulate such credit in small increments, while rare long-running operations decrease it drastically.
Method:
The potential method is a form of the accounting method where the saved credit is computed as a function (the "potential") of the state of the data structure. The amortized cost is the immediate cost plus the change in potential.
Examples:
Dynamic array Consider a dynamic array that grows in size as more elements are added to it, such as ArrayList in Java or std::vector in C++. If we started out with a dynamic array of size 4, we could push 4 elements onto it, and each operation would take constant time. Yet pushing a fifth element onto that array would take longer as the array would have to create a new array of double the current size (8), copy the old elements onto the new array, and then add the new element. The next three push operations would similarly take constant time, and then the subsequent addition would require another slow doubling of the array size.
Examples:
In general if we consider an arbitrary number of pushes n + 1 to an array of size n, we notice that push operations take constant time except for the last one which takes Θ(n) time to perform the size doubling operation. Since there were n + 1 operations total we can take the average of this and find that pushing elements onto the dynamic array takes: nΘ(1)+Θ(n)n+1=Θ(1) , constant time.
Examples:
Queue Shown is a Ruby implementation of a Queue, a FIFO data structure: The enqueue operation just pushes an element onto the input array; this operation does not depend on the lengths of either input or output and therefore runs in constant time.
Examples:
However the dequeue operation is more complicated. If the output array already has some elements in it, then dequeue runs in constant time; otherwise, dequeue takes O(n) time to add all the elements onto the output array from the input array, where n is the current length of the input array. After copying n elements from input, we can perform n dequeue operations, each taking constant time, before the output array is empty again. Thus, we can perform a sequence of n dequeue operations in only O(n) time, which implies that the amortized time of each dequeue operation is O(1) .Alternatively, we can charge the cost of copying any item from the input array to the output array to the earlier enqueue operation for that item. This charging scheme doubles the amortized time for enqueue but reduces the amortized time for dequeue to O(1)
Common use:
In common usage, an "amortized algorithm" is one that an amortized analysis has shown to perform well.
Online algorithms commonly use amortized analysis.
Literature:
"Lecture 7: Amortized Analysis" (PDF). Carnegie Mellon University. Retrieved 14 March 2015.
Allan Borodin and Ran El-Yaniv (1998). Online Computation and Competitive Analysis. pp. 20, 141. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bristle**
Bristle:
A bristle is a stiff hair or feather (natural or artificial), either on an animal, such as a pig, a plant, or on a tool such as a brush or broom.
Synthetic types:
Synthetic materials such as nylon are also used to make bristles in items such as brooms and sweepers. Bristles are often used to make brushes for cleaning purposes, as they are strongly abrasive; common examples include the toothbrush and toilet brush. The bristle brush and the scrub brush are common household cleaning tools, often used to remove dirt or grease from pots and pans. Bristles are also used on brushes other than for cleaning, notably paintbrushes.
Synthetic types:
Bristles are distinguished as flagged (split, bushy ends) or unflagged; these are also known as flocked or unflocked bristles. In cleaning applications, flagged bristles are suited for dry cleaning (due to picking up dust better than unflagged), and unflagged suited for wet cleaning (due to flagged ends becoming dirty and matted when wet). In painting, flagged bristles yield more even application.
Natural types:
Bristles are found on pig breeds, instead of fur. Because the density is less than with fur, pigs are vulnerable to sunburn. One breed, the Tamworth pig, is endowed with a very dense bristle structure such that sunburn damage to skin is minimized. Animals named for their bristles include bristlebirds, the bristle-thighed curlew, the bristle-spined porcupine, and the Trinity bristle snail.
Natural types:
Bristles also anchor worms to the soil to help them move. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fingerspitzengefühl**
Fingerspitzengefühl:
Fingerspitzengefühl [ˈfɪŋɐˌʃpɪtsənɡəˌfyːl] is a German term, literally meaning "finger tips feeling" and meaning intuitive flair or instinct, which has been adopted by the English language as a loanword. It describes a great situational awareness, and the ability to respond most appropriately and tactfully. It can also be applied to diplomats, bearers of bad news, or to describe a superior ability to respond to an escalated situation. The term is sometimes used to describe the instinctive play of certain football players.
Social context:
In social context, Fingerspitzengefühl suggests a combination of "tact, diplomacy and a certain amount of sensitivity to the feelings of others". It is a quality that can enable a person to "negotiate tricky social situations".
In literal terms, it means a physical skill appearing to be controlled by the nerves in the extremities, as in a machinist hand lathing steel to micrometer tolerances.
Military context:
In military terminology, it is used for the stated ability of some military commanders, such as Field-Marshal Erwin Rommel, to describe "the instinctive and immediate response to battle situations", a quality needed to maintain, with great accuracy and attention to detail, an ever-changing operational and tactical situation by maintaining a mental map of the battlefield. The idiom is intended to evoke a military commander who is in such intimate communication with the battlefield that it is as though he has a fingertip on each critical point. In this sense the term is synonymous with the English expression of "keeping one's finger on the pulse", and was expressed in the 18th and 19th centuries as "having a feel for combat".
Military context:
The term is only figurative, and cannot in itself give a realistic picture of the ability being described. It is cognitively related to personal possession of multiple intelligences, notably those pertinent to visual and spatial data processing. The term suggests that in addition to any discursive processing of information that the commander may be conducting (such as mentally considering a specific plan), the commander is automatically establishing cognitive relationships between disparate pieces of information as they arrive, and is able to immediately re-synthesise their mental model of the battlefield.
Military context:
Even though there is no physical connection between the commander and his troops, other than conduits for discursive information such as radio signals, it is as if the commander had their own sensitive presence in each spot.
Military context:
One of the functions of a static map is to allow a traveler to decide upon a course of action suitable for getting from one point to another. In times of war, the terrain and the troops and weapons deployed upon it can be changed much more rapidly than cartographers can change their maps. A commander with Fingerspitzengefühl would hold such a map in their mind, and adjust it by incorporating any significant information that was received.
Military context:
Colonel Mehta Basti Ram was said to have Fingerspitzengefühl.
Related concepts:
The concept may be compared to ideas about intuition and neural net programming. The same phenomenon, but conceptualized in a radically different way, seems to be described by D.T. Suzuki in swordsmanship teaching stories recounted in his Zen and Japanese Culture, and given in analytical detail in Zen Buddhism and Psychoanalysis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flame ionization detector**
Flame ionization detector:
A flame ionization detector (FID) is a scientific instrument that measures analytes in a gas stream. It is frequently used as a detector in gas chromatography. The measurement of ion per unit time make this a mass sensitive instrument. Standalone FIDs can also be used in applications such as landfill gas monitoring, fugitive emissions monitoring and internal combustion engine emissions measurement in stationary or portable instruments.
History:
The first flame ionization detectors were developed simultaneously and independently in 1957 by McWilliam and Dewar at Imperial Chemical Industries of Australia and New Zealand (ICIANZ, see Orica history) Central Research Laboratory, Ascot Vale, Melbourne, Australia. and by Harley and Pretorius at the University of Pretoria in Pretoria, South Africa.In 1959, Perkin Elmer Corp. included a flame ionization detector in its Vapor Fractometer.
Operating principle:
The operation of the FID is based on the detection of ions formed during combustion of organic compounds in a hydrogen flame. The generation of these ions is proportional to the concentration of organic species in the sample gas stream.
Operating principle:
To detect these ions, two electrodes are used to provide a potential difference. The positive electrode acts as the nozzle head where the flame is produced. The other, negative electrode is positioned above the flame. When first designed, the negative electrode was either tear-drop shaped or angular piece of platinum. Today, the design has been modified into a tubular electrode, commonly referred to as a collector plate. The ions thus are attracted to the collector plate and upon hitting the plate, induce a current. This current is measured with a high-impedance picoammeter and fed into an integrator. The manner in which the final data is displayed is based on the computer and software. In general, a graph is displayed that has time on the x-axis and total ion on the y-axis.
Operating principle:
The current measured corresponds roughly to the proportion of reduced carbon atoms in the flame. Specifically how the ions are produced is not necessarily understood, but the response of the detector is determined by the number of carbon atoms (ions) hitting the detector per unit time. This makes the detector sensitive to the mass rather than the concentration, which is useful because the response of the detector is not greatly affected by changes in the carrier gas flow rate.
Response factor:
FID measurements are usually reported "as methane," meaning as the quantity of methane which would produce the same response. The same quantity of different chemicals produces different amounts of current, depending on the elemental composition of the chemicals. The response factor of the detector for different chemicals can be used to convert current measurements into actual amounts of each chemical.
Response factor:
Hydrocarbons generally have response factors that are equal to the number of carbon atoms in their molecule (more carbon atoms produce greater current), while oxygenates and other species that contain heteroatoms tend to have a lower response factor. Carbon monoxide and carbon dioxide are not detectable by FID.
FID measurements are often labelled "total hydrocarbons" or "total hydrocarbon content" (THC), although a more accurate name would be "total volatile hydrocarbon content" (TVHC), as hydrocarbons which have condensed out are not detected, even though they are important, for example safety when handling compressed oxygen.
Description:
The design of the flame ionization detector varies from manufacturer to manufacturer, but the principles are the same. Most commonly, the FID is attached to a gas chromatography system.
Description:
The eluent exits the gas chromatography column (A) and enters the FID detector’s oven (B). The oven is needed to make sure that as soon as the eluent exits the column, it does not come out of the gaseous phase and deposit on the interface between the column and FID. This deposition would result in loss of eluent and errors in detection. As the eluent travels up the FID, it is first mixed with the hydrogen fuel (C) and then with the oxidant (D). The eluent/fuel/oxidant mixture continues to travel up to the nozzle head where a positive bias voltage exists. This positive bias helps to repel the oxidized carbon ions created by the flame (E) pyrolyzing the eluent. The ions (F) are repelled up toward the collector plates (G) which are connected to a very sensitive ammeter, which detects the ions hitting the plates, then feeds that signal to an amplifier, integrator, and display system(H). The products of the flame are finally vented out of the detector through the exhaust port (J).
Advantages and disadvantages:
Advantages Flame ionization detectors are used very widely in gas chromatography because of a number of advantages.
Cost: Flame ionization detectors are relatively inexpensive to acquire and operate.
Low maintenance requirements: Apart from cleaning or replacing the FID jet, these detectors require little maintenance.
Rugged construction: FIDs are relatively resistant to misuse.
Linearity and detection ranges: FIDs can measure organic substance concentration at very low (10−13 g/s) and very high levels, having a linear response range of 107 g/s.
Advantages and disadvantages:
Disadvantages Flame ionization detectors cannot detect inorganic substances and some highly oxygenated or functionalized species like infrared and laser technology can. In some systems, CO and CO2 can be detected in the FID using a methanizer, which is a bed of Ni catalyst that reduces CO and CO2 to methane, which can be in turn detected by the FID. The methanizer is limited by its inability to reduce compounds other than CO and CO2 and its tendency to be poisoned by a number of chemicals commonly found in gas chromatography effluents.
Advantages and disadvantages:
Another important disadvantage is that the FID flame oxidizes all oxidizable compounds that pass through it; all hydrocarbons and oxygenates are oxidized to carbon dioxide and water and other heteroatoms are oxidized according to thermodynamics. For this reason, FIDs tend to be the last in a detector train and also cannot be used for preparatory work.
Advantages and disadvantages:
Alternative solution An improvement to the methanizer is the Polyarc reactor, which is a sequential reactor that oxidizes compounds before reducing them to methane. This method can be used to improve the response of the FID and allow for the detection of many more carbon-containing compounds. The complete conversion of compounds to methane and the now equivalent response in the detector also eliminates the need for calibrations and standards because response factors are all equivalent to those of methane. This allows for the rapid analysis of complex mixtures that contain molecules where standards are not available. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Social dining**
Social dining:
Social dining (by a group of people) is meeting either at someone's place or at restaurant to enjoy a meal together. It is a philosophy of using meals specifically as a means to connect with others: eat to socialize.
A brunch, dinner or supper party are popular examples of places to socially gather over food.Social dining differs from a dining club in the sense that it is not exclusive, but promotes an inclusive atmosphere. Friends and strangers alike can share the social dining experience.
History:
Social dining dates back to Ancient Greek cuisine when meals would be prepared for the purpose of gathering together during festivals or commemorations.
Influence of Technology:
Technology has made social dining a sharable experience through real-time updates, uploaded images and check-ins (at someone's place or at the restaurant). Conversations about meals happen between people present and then are shared with those who are connected to them afar. Websites such as Twitter, Facebook, FourSquare and Gastronaut all encourage people to discuss their dining activities in a virtual, social space. Apps can be downloaded to a user's smartphone to share updates.
Influence of Technology:
Some web-based services get people together to share a social meal, even to join local families for a social dining experience while travelling, to truly experience the local culture and culinary. There are also other social dining networks that let people do group meals at the homes of their users. Another way to experience social dining is by visiting a supper club.
Influence of Technology:
Social dining experiences can also be a source of revenue for host that enable them through different kind of website, those can be associated to airbnb business model. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Minitran**
Minitran:
Minitran is a commercial psychiatric drug (tranquilliser and antidepressant) manufactured in Greece by Adelco S.A. and sold in form of yellow-coloured sugar-coated tablets.It contains Amitriptyline hydrochloride and Perphenazine.
It is sold in the following forms: Minitran 2-10: 2 mg Perphenazine and 10 mg Amitriptyline hydrochloride in each tablet.
Minitran 2-25: 2 mg Perphenazine and 25 mg Amitriptyline hydrochloride in each tablet.
Minitran 4-10: 4 mg Perphenazine and 10 mg Amitriptyline hydrochloride in each tablet.
Minitran 4-25: 4 mg Perphenazine and 25 mg Amitriptyline hydrochloride in each tablet.Minitran is also a pharmaceutical drug for the treatment of Angina, manufactured by 3M.
Minitran:
It contains glyceryl trinitrate and is sold in patch form. It is sold in the following strengths: Minitran 5 contains 18 mg of glyceryl trinitrate and delivers 5 mg in 24 hours Minitran 10: contains 36 mg of glyceryl trinitrate and delivers 10 mg in 24 hours Minitran 15: contains 54 mg of glyceryl trinitrate and delivers 15 mg in 24 hoursIt is also marketed as Discotrine in some countries. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tick (pejorative)**
Tick (pejorative):
Tick, often also as the plural ticks, is a common term used in Germany's right-wing extremist environment to degrade and insult those who think differently, especially leftists and Punks.According to today's right-wing extremist ideology so-called ticks are seen as the main concept of the enemy and are regarded as "un-Germans ideologicall and culturally". The degradation of humans to ticks, i.e. parasites, ties in with the animal metaphors used in the language of National Socialism. The terms "pests" and "Jewish parasites" were widespread in National Socialism. Today these pest metaphors are also widely used in right-wing extremist music and can also be seen as indirect incitement for killing. Violence committed by right-wing extremists was often described as "tick slapping".In the punk or rapscene the term is used as an antonym and sometimes as a self-designation. The punk bands "Se Sichelzecken" and "ESA-Zecken" made the swearword part of their names.
Tick (pejorative):
In recent years the term tick has been made popular and has been used as a self-designation in the musical genre of tick rap. Some followers of the football club FC St. Pauli, especially in the ultra scene, also say "Wir sind Zecken" (we are ticks) in fan chants.In the aftermath of the Sea-Watch 3 affair and its intrusion into the port of Lampedusa the Italian Minister of the Interior, Matteo Salvini, insulted the German captain Carola Rackete as a "German tick" at a party celebration of the Lega in Barzago in July 2019. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dwork family**
Dwork family:
In algebraic geometry, a Dwork family is a one-parameter family of hypersurfaces depending on an integer n, studied by Bernard Dwork. Originally considered by Dwork in the context of local zeta-functions, such families have been shown to have relationships with mirror symmetry and extensions of the modularity theorem.
Definition:
The Dwork family is given by the equations x1n+x2n+⋯+xnn=−nλx1x2⋯xn, for all n≥1 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PTPRK**
PTPRK:
Receptor-type tyrosine-protein phosphatase kappa is an enzyme that in humans is encoded by the PTPRK gene. PTPRK is also known as PTPkappa and PTPκ.
Function:
The protein encoded by this gene is a member of the protein tyrosine phosphatase (PTP) family. Protein tyrosine phosphatases are protein enzymes that remove phosphate moieties from tyrosine residues on other proteins. Tyrosine kinases are enzymes that add phosphates to tyrosine residues, and are the opposing enzymes to PTPs. PTPs are known to be signaling molecules that regulate a variety of cellular processes including cell growth, differentiation, mitotic cycle, and oncogenic transformation.
Function:
The human PTPRK gene is located on the long arm of chromosome 6, a putative tumor suppressor region of the genome.
Function:
During development The same reporter construct used by Shen and colleagues, and described above was created by Skarnes et al. during a screen to identify genes important in mouse development. The transgenic mouse was created by combining a β-galactosidase (β-gal) reporter gene with a signal sequence and the transmembrane domain of the type I transmembrane protein CD4. If the transgene was incorporated into a gene with a signal sequence, β-gal activity would remain in the cytosol of the cell and therefore be active. If the reporter gene was incorporated into a gene that lacked a signal sequence, β-gal activity would be in the ER where it would lose β-gal activity. This construct inserted into the phosphatase domain of PTPkappa. Mice generated from these ES cells were viable, suggesting that PTPkappa phosphatase activity is not necessary for embryonic development.Additional studies have suggested a function for PTPkappa during nervous system development. PTPkappa promotes neurite outgrowth from embryonic cerebellar neurons, and thus may be involved in axonal extension or guidance in vivo. Neurites are extensions from neurons that can be considered the in vitro equivalent of axons and dendrites. The extension of cerebellar neurites on purified PTPkappa fusion proteins was demonstrated to require Grb2 and MEK1 activity.
Function:
In T cells PTPkappa has also been shown to regulate CD4+ positive T cell development. PTPkappa and the THEMIS gene are both deleted in the rat Long-Evans Cinnamon (LEC) strain, and are both required for the CD4+ T-cell deficiency observed in this strain of rats. Deletion of PTPkappa was shown to generate T-helper immunodeficiency in the LEC strain.By expressing a dominant negative form of PTPkappa or by using short-hairpin RNA for PTPkappa in bone-marrow derived stem cells, CD4(+) T cells development is inhibited. PTPkappa likely regulates T-cell development by positively regulating ERK1/2 phosphorylation via the regulation of MEK1/2 and c-Raf phosphorylation.
Function:
Cadherin-catenin signaling PTPkappa is localized to cell-cell contact sites, where it colocalizes and co-immunoprecipitates with β-catenin and plakoglobin/γ-catenin β-catenin may be a PTPkappa substrate. The presence of full-length PTPkappa in melanoma cells decreases the level of free-cytosolic β-catenin, which consequently reduces the level of nuclear β-catenin and reduces the expression of the β-catenin-regulated genes, cyclin D1 and c-myc. Expression of ful-length PTPkappa in melanoma cells that normally lack its expression results in reduced cell migration and cell proliferation. Because the presence of PTPkappa at the cell membrane was shown to sequester β-catenin to the plasma membrane, these data suggest that one mechanism whereby PTPkappa functions as a tumor suppressor is by regulating the intracellular localization of free-β-catenin.The intracellular fragments of PTPkappa, PΔE and PIC, are catalytically active, and can also dephosphorylate β-catenin. Tyrosine phosphorylated β-catenin translocates to the cell nucleus and activates TCF-mediated transcription to promote cell proliferation and migration. While full-length PTPkappa antagonizes TCF-mediated transcription, the PIC fragment augments it, perhaps by regulating other proteins in TCF-mediated transcription. This suggests that phosphatase activity of the PIC fragment opposes that of full-length PTPkappa.PTPkappa interacts by co-immunoprecipitation with E-cadherin, α-catenin and β-catenin in pancreatic acinar cells prior to the dissolution of adherens junctions in a rat model of pancreatitis. The authors suggest that the presence of PTPkappa at the plasma membrane in association with the cadherin/catenin complex is important for the maintenance of adherens junction in pancreatic acinar cells, much as it was suggested above in melanoma cells.
Function:
EGFR signaling Use of short interfering RNA (siRNA) of PTPkappa to reduce PTPkappa protein expression in the mammary epithelial cell line, MCF10A, resulted in increased cell proliferation. PTPkappa expression, conversely, was demonstrated to reduce cell proliferation in Chinese hamster ovary cells. The mechanism proposed to explain the influence of PTPkappa on cell proliferation is via PTPkappa dephosphorylation of the EGFR on tyrosines 1068 and 1173 directly. The reduction of PTPkappa expression in CHO cells with PTPkappa siRNA increased EGFR phosphorylation. Therefore, the hypothesis is that PTPkappa functions as a tumor suppressor gene by dephosphorylating and inactivating EGFR.In addition, glycosylation by N-acetylglucosaminyltransferase-V (GnT-V) has been shown to reduce full-length PTPkappa expression, likely via increasing its cleavage. This aberrant glycosylation has been shown to increase the phosphorylation of EGFR on tyrosine 1068, likely because of reduced plasma-membrane associated PTPkappa expression and hence reduced PTPkappa-mediated dephosphorylation of its membrane associated substrates, such as EGFR.
Structure:
PTPkappa possesses an extracellular region, a single transmembrane region, and two tandem catalytic domains, and thus represents a receptor-type PTP (RPTP). The extracellular region contains a meprin-A5 antigen-PTP mu (MAM) domain, an Ig-like domain and four fibronectin type III-like repeats. PTPkappa is a member of the R2B subfamily of RPTPs, which includes RPTPM, RPTPT, and RPTPU. PTPkappa shares most sequence similarity with PTPmu and PTPrho.
Structure:
Crystal structure analysis of the first phosphatase domain of PTPkappa demonstrates that it shares many conformational features with PTPmu, including an unhindered open conformation for the catalytically important WPD loop, and a phosphate binding loop for the active-site cysteine (Cys1083). PTPkappa exists as a monomer in solution, with the caveat that dimers of PTPkappa are observed depending on the nature of the buffer used.
Alternative splicing:
Alternative splicing of exons 16, 17a, and 20a has been described for PTPRK. Two novel forms of PTPRK were identified from mouse full-length cDNA sequences and were predicted to result in two PTPkappa splice variants: a secreted form of PTPkappa and a membrane tethered form.
Homophilic binding:
PTPkappa mediates homophilic cell-cell aggregation via its extracellular domain. PTPkappa only mediates binding between cells expressing PTPkappa (i.e. homophilic), and will not mediate cell aggregation between cells expressing PTPkappa, PTPmu or PTPrho (i.e. heterophilic).
Regulation:
Proteolysis and N-glycosylation Full-length PTPkappa protein is cleaved by furin to generate two cleaved fragments that remain associated at the plasma membrane, an extracellular (E) subunit and an intracellular phosphatase (P) subunit. In response to high cell density or calcium influx following trifluoperazine (TFP) stimulation, PTPkappa is further cleaved by ADAM 10 to yield a shed extracellular fragment and a membrane tethered intracellular fragment, PΔE. The membrane tethered PΔE fragment is further cleaved by the gamma secretase complex to yield a membrane-released fragment, PIC, that can translocate to the cellular nucleus, where it is catalytically active.Glycosylation of the extracellular domain of PTPkappa was demonstrated to occur preferentially in WiDr colon cancer cells that over-express N-acetylglucosaminyl transferase V (GnT-V). Over-expression of GnT-V in these cells increased the cleavage and shedding of PTPkappa ectodomain and increased migration of WiDr cells in transwell assays. As a result of glycosylation of PTPkappa by GnT-V, EGFR was phosphorylated on tyrosine 1068 and activated, and is likely the cause of the increased cell migration observed following PTPkappa cleavage.Shedding of PTPkappa may also be regulated by the presence of galectin-3 binding protein, as has been shown in WiDr cells. The authors suggest that the ratio of galectin-3 binding protein to galectin 3 influences the cleavage and shedding of PTPkappa, although the exact mechanism of how these proteins regulate PTPkappa cleavage was not determined.
Regulation:
By reactive oxygen species in cancer One mechanism whereby PTPkappa tyrosine phosphatase activity can be perturbed in cancer is via oxidative inhibition mediated by reactive oxygen species generated by either hydrogen peroxide in vitro or UV irradiation of skin cells in vivo. In cell-free assays, the presence of hydrogen peroxide reduces PTPkappa tyrosine phosphatase activity and increases EGFR tyrosine phosphorylation. UV-irradiation of primary human keratinocytes yields the same results, namely a reduction of PTPkappa tyrosine phosphatase activity and an increase in EGFR tyrosine phosphorylation. EGFR phosphorylation then leads to cell proliferation, suggesting that PTPkappa may function as a tumor suppressor in skin cancer in addition to melanoma.
Regulation:
Expression PTPkappa is expressed in human keratinocytes. TGFβ1 is a growth inhibitor in human keratinocytes. Stimulation of the cultured human keratinocyte cell line, HaCaT, with TGFβ1 increases the levels of PTPkappa (PTPRK) mRNA as assayed by northern blot analysis. TGFβ1 also increased PTPkappa mRNA and protein in normal and tumor mammary cell lines. HER2 overexpression reduced PTPkappa mRNA and protein expression.
Clinical significance:
Melanoma and skin cancer Expression analysis of PTPkappa mRNA in normal melanocytes and in melanoma cells and tissues demonstrated that PTPkappa is downregulated or absent 20% of the time in melanoma, suggesting that PTPkappa is a tumor suppressor gene in melanoma. A form of PTPkappa with a point mutation in the fourth fibronectin III repeat was identified to be a melanoma specific antigen recognized by CD4+ T cells in a melanoma patient with 10-year tumor-free survival after lymph node resection. This particular mutated form of PTPkappa was not identified in 10 other melanoma cell lines, and may thus represent a unique mutation in one patient.
Clinical significance:
Lymphoma PTPkappa was also identified as the putative tumor suppressor gene commonly deleted in primary central nervous system lymphomas (PCNSLs).Downregulation of PTPkappa was found to occur following Epstein-Barr Virus (EBV) infection of Hodgkin's Lymphomas cells.
Colorectal cancer Using a transposon-based genetic screen, researchers found that disruption of the PTPRK gene in gastrointestinal tract epithelium resulted in an intestinal lesion, classified as either an intraepithelial neoplasia, an adenocarcinoma or an adenoma.
Lung cancer PTPRK mRNA was shown to be significantly reduced by RT-PCR in human lung cancer-derived cell lines.
Prostate cancer PTPRK has also been shown to be downregulated in response to androgen stimulation in human LNCaP prostate cancer cells. The mechanism whereby PTPRK is downregulated is via the expression of a microRNA, miR-133b, which is upregulated in response to androgen stimulation.
Clinical significance:
Breast cancer Patients with reduced PTPRK transcript expression have shorter breast cancer survival times and are more likely to have breast cancer metastases or to die from breast cancer. In an experimental model of breast cancer, PTPRK was reduced in breast cancer cell lines with PTPRK ribozymes. In these cells, adhesion to matrigel, transwell migration, and cell growth were all increased following the reduction of PTPRK expression, again supporting a function for PTPRK as a tumor suppressor.
Clinical significance:
Glioma Assem and colleagues identified loss of heterozygosity (LOH) events in malignant glioma specimens, and identified PTPRK as a significant gene candidate in one LOH region. A significant correlation between the presence of PTPRK mutations and short patient survival time was observed. PTPRK was amplified from tumor cDNA to confirm the LOH observed. In these specimens, 6 different mutations were observed, two of which (one in each phosphatase domain) disrupted the enzymatic activity of PTPRK. Expression of wild-type PTPkappa in U87-MG and U251-MG cells resulted in a reduction in cell proliferation, migration and invasion. Expression of the variants of PTPkappa with mutations in the phosphatase domains, however, increased cell proliferation, migration and invasion, supporting a role for the involvement of the mutated variants of PTPkappa in tumorigenicity.
Clinical significance:
In development In situ hybridization localized PTPkappa mRNA to the brain, lung, skeletal muscle, heart, placenta, liver, kidney and intestines during development. PTPkappa was also found to be expressed in the developing retina, in nestin-positive radial progenitor cells and later in development, in the ganglion cell layer, inner plexiform layer and outer segments of photoreceptors. PTPkappa protein is observed in neural progenitor cells and radial glial cells of the developing mouse superior colliculus, as well.In the adult rat brain, PTPkappa mRNA is highly expressed in regions of the brain with cellular plasticity and growth, such as the olfactory bulb, the hippocampus and the cerebral cortex. PTPkappa mRNA is also observed in the adult mouse cerebellum.Using a β-galactosidase (β-gal) reporter gene inserted into the phosphatase domain of the murine PTPkappa (PTPRK) gene, Shen and colleagues determined the detailed expression pattern of endogenous PTPRK. β-gal activity was observed in many areas of the adult forebrain, including layers II and IV, and to a lesser extent in layer VI of the cortex. β-gal activity was also observed in apical dendrites of cortical pyramidal cells, the granule layer of the olfactory and accessory olfactory bulbs, the anterior hypothalamus, paraventricular nucleus, and in granule and pyramidal layers of the dentate gyrus and CA 1-3 regions of the hippocampus. In the midbrain, β-gal was observed in the subthalamic nucleus, the superior and inferior colliculi and in the red nucleus. β-gal activity was also observed in the neural retina, in the inner nuclear layer and in small ganglion cells of the ganglion cell layer.
Interactions:
PTPRK has been shown to interact with: Beta-catenin, E-cadherin (CDH-1), Epidermal growth factor receptor (EGFR), HER2, Plakoglobin, and α-catenin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Reaction wheel**
Reaction wheel:
A reaction wheel (RW) is used primarily by spacecraft for three-axis attitude control, and does not require rockets or external applicators of torque. They provide a high pointing accuracy,: 362 and are particularly useful when the spacecraft must be rotated by very small amounts, such as keeping a telescope pointed at a star.
Reaction wheel:
A reaction wheel is sometimes operated as (and referred to as) a momentum wheel, by operating it at a constant (or near-constant) rotation speed, to provide a satellite with a large amount of stored angular momentum. Doing so alters the spacecraft's rotational dynamics so that disturbance torques perpendicular to one axis of the satellite (the axis parallel to the wheel's spin axis) do not result directly in spacecraft angular motion about the same axis as the disturbance torque; instead, they result in (generally smaller) angular motion (precession) of that spacecraft axis about a perpendicular axis. This has the effect of tending to stabilize that spacecraft axis to point in a nearly-fixed direction,: 362 allowing for a less-complicated attitude control system. Satellites using this "momentum-bias" stabilization approach include SCISAT-1; by orienting the momentum wheel's axis to be parallel to the orbit-normal vector, this satellite is in a "pitch momentum bias" configuration.
Reaction wheel:
A control moment gyroscope (CMG) is a related but different type of attitude actuator, generally consisting of a momentum wheel mounted in a one-axis or two-axis gimbal.: 362 When mounted to a rigid spacecraft, applying a constant torque to the wheel using one of the gimbal motors causes the spacecraft to develop a constant angular velocity about a perpendicular axis, thus allowing control of the spacecraft's pointing direction. CMGs are generally able to produce larger sustained torques than RWs with less motor heating, and are preferentially used in larger or more-agile (or both) spacecraft, including Skylab, Mir, and the International Space Station.
Theory:
Reaction wheels are used to control the attitude of a satellite without the use of thrusters, which reduces the mass fraction needed for fuel.
Theory:
They work by equipping the spacecraft with an electric motor attached to a flywheel, which, when its rotation speed is changed, causes the spacecraft to begin to counter-rotate proportionately through conservation of angular momentum. Reaction wheels can rotate a spacecraft only around its center of mass (see torque); they are not capable of moving the spacecraft from one place to another (see translational force).
Implementation:
For three-axis control, reaction wheels must be mounted along at least three directions, with extra wheels providing redundancy to the attitude control system. A redundant mounting configuration could consist of four wheels along tetrahedral axes, or a spare wheel carried in addition to a three axis configuration.: 369 Changes in speed (in either direction) are controlled electronically by computer. The strength of the materials used in a reaction wheel determine the speed at which the wheel would come apart, and therefore how much angular momentum it can store.
Implementation:
Since the reaction wheel is a small fraction of the spacecraft's total mass, easily controlled, temporary changes in its speed result in small changes in angle. The wheels therefore permit very precise changes in a spacecraft's attitude. For this reason, reaction wheels are often used to aim spacecraft carrying cameras or telescopes.
Implementation:
Over time, reaction wheels may build up enough stored momentum to exceed the maximum speed of the wheel, called saturation, which will need to be canceled. Designers therefore supplement reaction wheel systems with other attitude control mechanisms. In the presence of a magnetic field (as in low Earth orbit), a spacecraft can employ magnetorquers (better known as torque rods) to transfer angular momentum to Earth through its planetary magnetic field.: 368 In the absence of a magnetic field, the most efficient practice is to use either high-efficiency attitude jets such as ion thrusters, or small, lightweight solar sails placed in locations away from the spacecraft's center of mass, such as on solar cell arrays or projecting masts.
Examples of spacecraft using reaction wheels:
Beresheet Beresheet was launched on a Falcon 9 rocket on 22 February 2019 1:45 UTC, with the goal of landing on the Moon. Beresheet uses the low-energy transfer technique to save fuel. Since its fourth maneuver in its elliptical orbit, to prevent shakes when the amount of liquid fuel ran low, there was a need to use a reaction wheel.
Examples of spacecraft using reaction wheels:
James Webb Space Telescope The James Webb Space Telescope has six reaction wheels built by Rockwell Collins Deutschland.
LightSail 2 LightSail 2 was launched on 25 June 2019, focused around the concept of a solar sail. LightSail 2 uses a reaction wheel system to change orientation by very small amounts, allowing it to receive different amounts of momentum from the light across the sail, resulting in a higher altitude.
Failures and mission impact:
The failure of one or more reaction wheels can cause a spacecraft to lose its ability to maintain attitude (orientation) and thus potentially cause a mission failure. Recent studies conclude that these failures can be correlated with space weather effects. These events probably caused failures by inducing electrostatic discharge in the steel ball bearings of Ithaco wheels, compromising the smoothness of the mechanism.
Failures and mission impact:
Hubble Space Telescope Two servicing missions to the Hubble Space Telescope have replaced a reaction wheel. In February 1997, the Second Servicing Mission (STS-82) replaced one after 'electrical anomalies', rather than any mechanical problem. Study of the returned mechanism provided a rare opportunity to study equipment that had undergone long-term service (seven years) in space, particularly for the effects of vacuum on lubricants. The lubricating compound was found to be in 'excellent condition'. In 2002, during Servicing Mission 3B (STS-109), astronauts from the shuttle Columbia replaced another reaction wheel. Neither of these wheels had failed and Hubble was designed with four redundant wheels, and maintained pointing ability so long as three were functional.
Failures and mission impact:
Hayabusa In 2004, during the mission of the Hayabusa spacecraft, an X-axis reaction wheel failed. The Y-axis wheel failed in 2005, causing the craft to rely on chemical thrusters to maintain attitude control.
Failures and mission impact:
Kepler From July 2012 to May 11, 2013, two out of the four reaction wheels in the Kepler telescope failed. This loss severely affected Kepler's ability to maintain a sufficiently precise orientation to continue its original mission. On August 15, 2013, engineers concluded that Kepler's reaction wheels cannot be recovered and that planet-searching using the transit method (measuring changes in star brightness caused by orbiting planets) could not continue. Although the failed reaction wheels still function, they are experiencing friction exceeding acceptable levels, and consequently hindering the ability of the telescope to properly orient itself. The Kepler telescope was returned to its "point rest state", a stable configuration that uses small amounts of thruster fuel to compensate for the failed reaction wheels, while the Kepler team considered alternative uses for Kepler that do not require the extreme accuracy in its orientation needed by the original mission. On May 16, 2014, NASA extended the Kepler mission to a new mission named K2, which uses Kepler differently, but allows it to continue searching for exoplanets. On October 30, 2018, NASA announced the end of the Kepler mission after it was determined that the fuel supply had been exhausted.
Failures and mission impact:
Dawn The NASA space probe Dawn had excess friction in one reaction wheel in June 2010. It was originally scheduled to depart Vesta and begin its two-and-a-half-year journey to Ceres on August 26, 2012; however, a problem with another of the spacecraft's reaction wheels forced Dawn to briefly delay its departure from Vesta's gravity until September 5, 2012, and it planned to use thruster jets instead of the reaction wheels during the three-year journey to Ceres. The loss of the reaction wheels limited the camera observations on the approach to Ceres.
Failures and mission impact:
Swift Observatory On the evening of Tuesday, January 18, 2022, a possible failure of one of the Swift Observatory's reaction wheels caused the mission control team to power off the suspected wheel, putting the observatory in safe mode as a precaution. This was the first time a reaction wheel failed on Swift in 17 years. Swift resumed science operations on February 17, 2022. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Leaf power**
Leaf power:
In the mathematical area of graph theory, a k-leaf power of a tree T is a graph G whose vertices are the leaves of T and whose edges connect pairs of leaves whose distance in T is at most k. That is, G is an induced subgraph of the graph power Tk , induced by the leaves of T. For a graph G constructed in this way, T is called a k-leaf root of G.
Leaf power:
A graph is a leaf power if it is a k-leaf power for some k. These graphs have applications in phylogeny, the problem of reconstructing evolutionary trees.
Related classes of graphs:
Since powers of strongly chordal graphs are strongly chordal and trees are strongly chordal, it follows that leaf powers are strongly chordal graphs. Actually, leaf powers form a proper subclass of strongly chordal graphs; a graph is a leaf power if and only if it is a fixed tolerance NeST graph and such graphs are a proper subclass of strongly chordal graphs.In Brandstädt et al. (2010) it is shown that interval graphs and the larger class of rooted directed path graphs are leaf powers. The indifference graphs are exactly the leaf powers whose underlying trees are caterpillar trees.
Related classes of graphs:
The k-leaf powers for bounded values of k have bounded clique-width, but this is not true of leaf powers with unbounded exponents.
Structure and recognition:
A graph is a 3-leaf power if and only if it is a (bull, dart, gem)-free chordal graph.
Based on this characterization and similar ones, 3-leaf powers can be recognized in linear time.Characterizations of 4-leaf powers are given by Rautenbach (2006) and Brandstädt, Le & Sritharan (2008), which also enable linear time recognition. Recognition of the 5-leaf and 6-leaf power graphs are also solved in linear time by Chang and Ko (2007) and Ducoffe (2018), respectively.
For k ≥ 7 the recognition problem of k-leaf powers was unsolved for a long time, but Lafond (2021) showed that k-leaf powers can be recognized in polynomial time for any fixed k. However, the high dependency on the parameter k makes this algorithm unsuitable for practical use.
Also, it has been proved that recognizing k-leaf powers is fixed-parameter tractable when parameterized by k and the degeneracy of the input graph. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**In-crowd algorithm**
In-crowd algorithm:
The in-crowd algorithm is a numerical method for solving basis pursuit denoising quickly; faster than any other algorithm for large, sparse problems. This algorithm is an active set method, which minimizes iteratively sub-problems of the global basis pursuit denoising: min x12‖y−Ax‖22+λ‖x‖1.
In-crowd algorithm:
where y is the observed signal, x is the sparse signal to be recovered, Ax is the expected signal under x , and λ is the regularization parameter trading off signal fidelity and simplicity. The simplicity is here measured using the sparsity of the solution x , measure through its ℓ1 -norm. The active set strategies are very efficient in this context as only few coefficient are expected to be non-zero. Thus, if they can be identified, solving the problem restricted to these coefficients yield the solution. Here, the features are greedily selected based on the absolute value of their gradient at the current estimate.
In-crowd algorithm:
Other active-set methods for the basis pursuit denoising includes BLITZ, where the selection of the active set is performed using the duality gap of the problem, and The Feature Sign Search, where the features are included based on the estimate of their sign.
Algorithm:
It consists of the following: Declare x to be 0, so the unexplained residual r=y Declare the active set I to be the empty set, and Ic to be its complement (the inactive set) Calculate the usefulness uj=|⟨rAj⟩| for each component in Ic If on Ic , no uj>λ , terminate Otherwise, add 25 components to I based on their usefulness Solve basis pursuit denoising exactly on I , and throw out any component of I whose value attains exactly 0. This problem is dense, so quadratic programming techniques work very well for this sub problem.
Algorithm:
Update r=y−Ax - n.b. can be computed in the subproblem as all elements outside of I are 0 Go to step 3.Since every time the in-crowd algorithm performs a global search it adds up to L components to the active set, it can be a factor of L faster than the best alternative algorithms when this search is computationally expensive. A theorem guarantees that the global optimum is reached in spite of the many-at-a-time nature of the in-crowd algorithm. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pulsed inductive thruster**
Pulsed inductive thruster:
A pulsed inductive thruster (PIT) is a form of ion thruster, used in spacecraft propulsion. It is a plasma propulsion engine using perpendicular electric and magnetic fields to accelerate a propellant with no electrode.
Operation:
A nozzle releases a puff of gas which spreads across a flat spiraling induction coil of wire about 1 meter across. A bank of capacitors releases a pulse of high voltage electric current of tens of kilovolts lasting 10 microseconds into the coil, generating a radial magnetic field. This induces a circular electrical field in the gas, ionizing it and causing charged particles (free electrons and ions) to revolve in the opposite direction as the original pulse of current. Because the motion of this induced current flow is perpendicular to the magnetic field, the plasma is accelerated out into space by the Lorentz force at a high exhaust velocity (10 to 100 km/s).
Advantages:
Unlike an electrostatic ion thruster which uses an electric field to accelerate only one species (positive ions), a PIT uses the Lorentz body force acting upon all charged particles within a quasi-neutral plasma. Unlike most other ion and plasma thrusters, it also requires no electrodes (which are susceptible to erosion) and its power can be scaled up simply by increasing the number of pulses per second. A 1-megawatt system would pulse 200 times per second.
Advantages:
Pulsed inductive thrusters can maintain constant specific impulse and thrust efficiency over a wide range of input power levels by adjusting the pulse rate to maintain a constant discharge energy per pulse. It has demonstrated efficiency greater than 50%.Pulsed inductive thrusters can use a wide range of gases as a propellant, such as water, hydrazine, ammonia, argon, or xenon, among many others. Due to this ability, it has been suggested to use PITs for Martian missions: an orbiter could refuel by scooping CO2 from the atmosphere of Mars, compressing the gas and liquefying it into storage tanks for the return journey or another interplanetary mission, whilst orbiting the planet.
Developments:
Early development began with fundamental proof-of-concept studies performed in the mid-1960s. NASA conducts experiments on this device since the early 1980s.
PIT Mk V, VI and VII NGST (Northrop Grumman Space Technology), as a contractor for NASA, built several experimental PITs.
Research efforts during the first period (1965–1973) were aimed at understanding the structure of an inductive current sheet and evaluating different concepts for propellant injection and preionization.
In the second period (1979–1988), the focus shifted more towards developing a true propulsion system and increasing the performance of the base design through incremental design changes, with the build of Mk I and Mk IV prototypes.
Developments:
The third period (1991-today) began with the introduction of a new PIT thruster design known as the Mk V. It evolved into the Mk VI, developed to reproduce Mk V single-shot tests, which completely characterize thruster performance. It uses an improved coil of hollow copper tube construction and an improved propellant valve, but is electrically identical to the Mk V, using the same capacitors and switches. The Mk VII (early 2000s) has the same geometry as Mk VI, but is designed for high pulse frequency and long-duration firing with a liquid-cooled coil, longer-life capacitors, and fast, high-power solid-state switches. The goal for Mk VII is to demonstrate up to 50 pulses per second at the rated efficiency and impulse bit at 200 kW of input power in a single thruster. Mk VII design is the base for the most recent NuPIT (Nuclear-electric PIT).The PIT has obtained relatively high performance in the laboratory environment, but it still requires additional advancements in switching technology and energy storage before becoming practical for high-power in-space applications, with the need for a nuclear-based onboard power source.
Developments:
FARAD FARAD, which stands for Faraday accelerator with radio-frequency assisted discharge, is a lower-power alternative to the PIT that has the potential for space operation using current technologies.In the PIT, both propellant ionization and acceleration are performed by the HV pulse of current in the induction coil, while FARAD uses a separate inductive RF discharge to preionize the propellant before it is accelerated by the current pulse. This preionization allows FARAD to operate at much lower discharge energies than the PIT (100 joules per pulse vs 4 kilojoules per pulse) and allows for a reduction in the thruster's size. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Scan tool (automotive)**
Scan tool (automotive):
An automotive scan tool (scanner) is an electronic tool used to interface with, diagnose and, sometimes, reprogram vehicle control modules.There are many types from just as many manufacturers, one of the most familiar being the Snap-On Inc. "brick", or MT2500/MTG2500. Snap-On, Hella Gutmann Solutions, OTC/SPX, Xtool india, Autel, Launch, Vetronix/Bosch and a number of other companies produce various types of scan tools, from simple code readers to highly capable bi-directional computers with programming capabilities.
Scan tool (automotive):
The scan tool is connected to the vehicle's data link connector (DLC) and, depending on the particular tool, may only read out diagnostic trouble codes or DTC's (this would be considered a "code reader") or may have more capabilities. Actual scan tools will display live data stream (inputs and outputs), have bi-directional controls (the ability to make the controllers do things outside of normal operations) and may even be able to calibrate/program modules within certain parameters. However, a typical scan tool does not have the ability to fully reprogram modules because it requires a J-2534 pass-through device and specific software.
Scan tool (automotive):
Voltas IT created a new generation diagnostic tool – OBDeleven, the device which easily connects to the car, monitors all systems, and activates new car's features. It supports Audi, Volkswagen, SEAT, Škoda, Lamborghini, and Bentley.OBD 1 vs OBD 2 the vehicle will also dictate what the scan tool is able to display. If the vehicle is equipped with OBD 1 it will have significantly less available data when compared to a vehicle equipped with OBD 2.When a vehicle detects a problem, it generates a DTC code which is a unique code that corresponds to the specific problem detected. The code is usually a combination of letters and numbers.
Scan tool (automotive):
DTC codes are read by a diagnostic tool, such as an OBD 2 scanner, which is plugged into the vehicle's diagnostic port. The tool communicates with the vehicle's onboard computer and retrieves the DTC codes. The codes are then interpreted by the mechanic or technician to determine the specific problem with the vehicle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Systematic Parasitology**
Systematic Parasitology:
Systematic Parasitology is a monthly peer-reviewed medical journal covering all aspects of the taxonomy and systematics of parasites. It was established in 1979 and is published by Springer Science+Business Media. The editor-in-chief is Aneta Kostadinova (Academy of Sciences of the Czech Republic).
Abstracting and indexing:
The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2013 impact factor of 1.035. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.