**SILC (protocol)** SILC (protocol): SILC (Secure Internet Live Conferencing protocol) is a protocol that provides secure synchronous conferencing services (very much like IRC) over the Internet. Components: The SILC protocol can be divided into three main parts: the SILC Key Exchange (SKE) protocol, the SILC Authentication protocol and the SILC Packet protocol. The SILC protocol additionally defines SILC Commands that are used to manage the SILC session. SILC provides channels (groups), nicknames, private messages, and other common features. However, SILC nicknames, in contrast to many other protocols (e.g. IRC), are not unique; a user is able to use any nickname, even if one is already in use. The real identification in the protocol is performed by a unique Client ID. The SILC protocol uses this to overcome nickname collision, a problem present in many other protocols. All messages sent in a SILC network are binary, allowing them to contain any type of data, including text, video, audio, and other multimedia data. Components: The SKE protocol is used to establish a session key and other security parameters for protecting the SILC Packet protocol. The SKE itself is based on the Diffie–Hellman key exchange algorithm (a form of asymmetric cryptography) and the exchange is protected with digital signatures. The SILC Authentication protocol is performed after successful SKE protocol execution to authenticate a client and/or a server. The authentication may be based on a passphrase or on digital signatures, and if successful gives access to the relevant SILC network. The SILC Packet protocol is intended to be a secure binary packet protocol, assuring that the content of each packet (consisting of a packet header and packet payload) is secured and authenticated. The packets are secured using algorithms based on symmetric cryptography and authenticated using a Message Authentication Code algorithm, HMAC. Components: SILC channels (groups) are protected by using symmetric channel keys. It is optionally possible to digitally sign all channel messages. It is also possible to protect messages with a privately generated channel key that has been previously agreed upon by channel members. Private messages between users in a SILC network are protected with session keys. It is, however, possible to execute the SKE protocol between two users and use the generated key to protect private messages. Private messages may optionally be digitally signed. When messages are secured with key material generated with the SKE protocol or previously agreed-upon key material (for example, passphrases), SILC provides security even if the SILC server is compromised. History: SILC was designed by Pekka Riikonen between 1996 and 1999 and first released to the public in the summer of 2000. A client and a server were written. Protocol specifications were proposed, but ultimately the request for publication was denied by the IESG in June 2004, and no RFC has been published to date. At present, there are several clients, the most advanced being the official SILC client and an irssi plugin. The SILC protocol is also integrated into the popular Pidgin instant messaging client. Other GUI clients are Silky and Colloquy. The Silky client was put on hold and abandoned on 18 July 2007, after several years of inactivity. The latest news on the Silky website was that the client was to be completely rewritten. As of 2008, three SILC protocol implementations have been written. Most SILC clients use libsilc, part of the SILC Toolkit.
The SILC Toolkit is dual-licensed and distributed under both the GNU General Public License (GPL) and the revised BSD license. Security: As described in the SILC FAQ, chats are secured through the generation of symmetric encryption keys. These keys have to be generated somewhere, and this occurs on the server. This means that chats may be compromised if the server itself is compromised; this is a version of the man-in-the-middle attack. The solution offered is for chat members to generate their own public-private keypair for asymmetric encryption. The private key is shared only among the chat members, and this is done out of band. The public key is used to encrypt messages to the channel. This approach is still open to compromise if a member's private key is compromised, or if a member shares the key with someone else without the agreement of the group. Networks: SILC follows a similar pattern to IRC, in that there is no global "SILC network" but many small independent networks consisting of one or several servers each, although it is claimed that SILC can scale better with many servers in a single network. The "original" network is called SILCNet, at the silc.silcnet.org round-robin. However, as of May 2014, it has only one active (though unstable) server out of four. Most SILC networks have shut down due to the declining popularity of SILC.
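The overall flow described above, a Diffie–Hellman exchange to establish a session key followed by HMAC authentication of each binary packet, can be illustrated with a minimal sketch. This is not the actual SILC wire format or SKE message flow; the toy group parameters and the packet framing below are assumptions for illustration only, using only the Python standard library.

```python
# Toy sketch of the SKE/packet-protocol ideas: Diffie-Hellman derives a
# shared session key, and HMAC authenticates each binary packet.
# NOT the real SILC protocol: toy group, no signatures, no encryption.
import hashlib
import hmac
import secrets

P = 2**127 - 1  # a Mersenne prime; far too small for real-world security
G = 5

def dh_keypair() -> tuple[int, int]:
    """Generate a private exponent and the matching public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = dh_keypair()   # one party (e.g. client)
b_priv, b_pub = dh_keypair()   # the other party (e.g. server)

# After exchanging public values, both sides derive the same session key.
shared = pow(b_pub, a_priv, P)
assert shared == pow(a_pub, b_priv, P)
session_key = hashlib.sha256(shared.to_bytes(16, "big")).digest()

def protect(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can verify the packet."""
    tag = hmac.new(session_key, payload, hashlib.sha256).digest()
    return payload + tag

def verify(packet: bytes) -> bytes:
    """Check the trailing tag and return the authenticated payload."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(session_key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("packet authentication failed")
    return payload

packet = protect(b"hello, channel")
assert verify(packet) == b"hello, channel"
```

In the real protocol the exchange is additionally protected with digital signatures, and the packet payload is also encrypted with a symmetric cipher; both are omitted here for brevity.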
**Edema** Edema: Edema, also spelled oedema, and also known as fluid retention, dropsy, hydropsy and swelling, is the build-up of fluid in the body's tissues. Most commonly, the legs or arms are affected. Symptoms may include skin that feels tight, heaviness in the affected area, and joint stiffness. Other symptoms depend on the underlying cause. Causes may include venous insufficiency, heart failure, kidney problems, low protein levels, liver problems, deep vein thrombosis, infections, angioedema, certain medications, and lymphedema. It may also occur in immobile patients (stroke, spinal cord injury, aging), or with temporary immobility such as prolonged sitting or standing, and during menstruation or pregnancy. The condition is more concerning if it starts suddenly, or if pain or shortness of breath is present. Treatment depends on the underlying cause. If the underlying mechanism involves sodium retention, decreased salt intake and a diuretic may be used. Elevating the legs and support stockings may be useful for edema of the legs. Older people are more commonly affected. The word is from the Greek οἴδημα oídēma meaning 'swelling'. Signs and symptoms: Specific areas An edema will occur in specific organs as part of inflammations, such as tendinitis or pancreatitis. Certain organs develop edema through tissue-specific mechanisms. Examples of edema in specific organs: Peripheral edema (“dependent” edema of the legs) is extracellular fluid accumulation in the lower extremities caused by the effects of gravity, and occurs when fluid pools in the lower parts of the body, including the feet, legs, or hands. This often occurs in immobile patients, such as paraplegics or quadriplegics, pregnant women, or in otherwise healthy people due to hypervolemia or maintaining a standing or seated posture for an extended period of time. It can occur due to diminished venous return of blood to the heart as a result of congestive heart failure or pulmonary hypertension. It can also occur in patients with increased hydrostatic venous pressure or decreased oncotic venous pressure, due to obstruction of lymphatic or venous vessels draining the lower extremity. Certain drugs (for example, amlodipine) can cause pedal edema. Signs and symptoms: Cerebral edema is extracellular fluid accumulation in the brain. It can occur in toxic or abnormal metabolic states and conditions such as systemic lupus or reduced oxygen at high altitudes. It can cause drowsiness or loss of consciousness, leading to brain herniation and death. Signs and symptoms: Pulmonary edema occurs when the pressure in blood vessels in the lung is raised because of obstruction to the removal of blood via the pulmonary veins. This is usually due to failure of the left ventricle of the heart. It can also occur in altitude sickness or on inhalation of toxic chemicals. Pulmonary edema produces shortness of breath. Pleural effusions may occur when fluid also accumulates in the pleural cavity. Signs and symptoms: Edema may also be found in the cornea of the eye with glaucoma, severe conjunctivitis, keratitis, or after surgery. Affected people may perceive coloured haloes around bright lights. Edema surrounding the eyes is called periorbital edema (puffy eyes). The periorbital tissues are most noticeably swollen immediately after waking, perhaps as a result of the gravitational redistribution of fluid in the horizontal position.
Common appearances of cutaneous edema are observed with mosquito bites, spider bites, bee stings (wheal and flare), and skin contact with certain plants such as poison ivy or western poison oak, the latter of which are termed contact dermatitis. Signs and symptoms: Another cutaneous form of edema is myxedema, which is caused by increased deposition of connective tissue. In myxedema (and a variety of other rarer conditions) edema is caused by an increased tendency of the tissue to hold water within its extracellular space. In myxedema, this is due to an increase in hydrophilic carbohydrate-rich molecules (perhaps mostly hyaluronan) deposited in the tissue matrix. Edema forms more easily in dependent areas in the elderly (sitting in chairs at home or on aeroplanes) and this is not well understood. Estrogens alter body weight in part through changes in tissue water content. There may be a variety of poorly understood situations in which transfer of water from tissue matrix to lymphatics is impaired because of changes in the hydrophilicity of the tissue or failure of the 'wicking' function of terminal lymphatic capillaries. Signs and symptoms: Myoedema is localized mounding of muscle tissue due to percussive pressure, such as flicking the relaxed muscle with the forefinger and thumb. It produces a visible, firm and non-tender mound at the point of tactile stimulus approximately 1-2 seconds after the stimulus, subsiding back to normal after 5-10 seconds. It is a sign in hypothyroid myopathy, such as Hoffmann syndrome. Signs and symptoms: In lymphedema, abnormal removal of interstitial fluid is caused by failure of the lymphatic system. This may be due to obstruction from, for example, pressure from a cancer or enlarged lymph nodes, destruction of lymph vessels by radiotherapy, or infiltration of the lymphatics by infection (such as elephantiasis). It is most commonly due to a failure of the pumping action of muscles due to immobility, most strikingly in conditions such as multiple sclerosis or paraplegia. It has been suggested that the edema that occurs in some people following use of aspirin-like cyclo-oxygenase inhibitors such as ibuprofen or indomethacin may be due to inhibition of lymph heart action. Signs and symptoms: Generalized A rise in hydrostatic pressure occurs in cardiac failure. A fall in osmotic pressure occurs in nephrotic syndrome and liver failure. Causes of edema which are generalized to the whole body can cause edema in multiple organs and peripherally. For example, severe heart failure can cause pulmonary edema, pleural effusions, ascites and peripheral edema. Such severe systemic edema is called anasarca. In rare cases, a parvovirus B19 infection may cause generalized edemas. Although a low plasma oncotic pressure is widely cited for the edema of nephrotic syndrome, most physicians note that the edema may occur before there is any significant protein in the urine (proteinuria) or fall in plasma protein level. Most forms of nephrotic syndrome are due to biochemical and structural changes in the basement membrane of capillaries in the kidney glomeruli, and these changes occur, if to a lesser degree, in the vessels of most other tissues of the body. Thus the resulting increase in permeability that leads to protein in the urine can explain the edema if all other vessels are more permeable as well. As well as the previously mentioned conditions, edemas often occur during the late stages of pregnancy in some women.
This is more common in women with a history of pulmonary problems or poor circulation, and is intensified if arthritis is already present. Women who already have arthritic problems most often have to seek medical help for pain caused by over-reactive swelling. Edemas that occur during pregnancy are usually found in the lower part of the leg, usually from the calf down. Signs and symptoms: Hydrops fetalis is a condition in a baby characterized by an accumulation of fluid in at least two body compartments. Cause: Heart The pumping force of the heart should help to keep a normal pressure within the blood vessels. But if the heart begins to fail (a condition known as congestive heart failure), the pressure changes can cause very severe water retention. In this condition water retention is mostly visible in the legs, feet and ankles, but water also collects in the lungs, where it causes a chronic cough. This condition is usually treated with diuretics; otherwise, the water retention may cause breathing problems and additional stress on the heart. Cause: Kidneys Another cause of severe water retention is kidney failure, where the kidneys are no longer able to filter fluid out of the blood and turn it into urine. Kidney disease often starts with inflammation, for instance in the case of diseases such as nephrotic syndrome or lupus. This type of water retention is usually visible in the form of swollen legs and ankles. Cause: Liver Cirrhosis (scarring) of the liver is a common cause of edema in the legs and abdominal cavity. Others Swollen legs, feet and ankles are common in late pregnancy. The problem is partly caused by the weight of the uterus on the major veins of the pelvis. It usually clears up after delivery of the baby, and is mostly not a cause for concern, though it should always be reported to a doctor. Cause: Lack of exercise is another common cause of water retention in the legs. Exercise helps the leg veins work against gravity to return blood to the heart. If blood travels too slowly and starts to pool in the leg veins, the pressure can force too much fluid out of the leg capillaries into the tissue spaces. The capillaries may break, leaving small blood marks under the skin. The veins themselves can become swollen, painful and distorted – a condition known as varicose veins. Muscle action is needed not only to keep blood flowing through the veins but also to stimulate the lymphatic system to fulfil its "overflow" function. Long-haul flights, lengthy bed-rest, immobility caused by disability and so on, are all potential causes of water retention. Even very small exercises such as rotating ankles and wiggling toes can help to reduce it. Certain medications are prone to causing water retention. These include estrogens, thereby including drugs for hormone replacement therapy or the combined oral contraceptive pill, as well as non-steroidal anti-inflammatory drugs and beta-blockers. Premenstrual water retention, causing bloating and breast tenderness, is common. Mechanism: Six factors can contribute to the formation of edema: increased hydrostatic pressure; reduced colloidal or oncotic pressure within blood vessels; increased tissue colloidal or oncotic pressure; increased blood vessel wall permeability (such as inflammation); obstruction of fluid clearance in the lymphatic system; and changes in the water-retaining properties of the tissues themselves.
Raised hydrostatic pressure often reflects retention of water and sodium by the kidneys. Generation of interstitial fluid is regulated by the forces of the Starling equation. Hydrostatic pressure within blood vessels tends to cause water to filter out into the tissue. This leads to a difference in protein concentration between blood plasma and tissue. As a result, the colloidal or oncotic pressure of the higher level of protein in the plasma tends to draw water back into the blood vessels from the tissue. Starling's equation states that the rate of leakage of fluid is determined by the difference between the two forces and also by the permeability of the vessel wall to water, which determines the rate of flow for a given force imbalance. Most water leakage occurs in capillaries or post-capillary venules, which have a semi-permeable membrane wall that allows water to pass more freely than protein. (The protein is said to be reflected, and the efficiency of reflection is given by a reflection coefficient of up to 1.) If the gaps between the cells of the vessel wall open up, then permeability to water is increased first, but as the gaps increase in size permeability to protein also increases, with a fall in the reflection coefficient. Changes in the variables in Starling's equation can contribute to the formation of edemas either by an increase in hydrostatic pressure within the blood vessel, a decrease in the oncotic pressure within the blood vessel or an increase in vessel wall permeability. The latter has two effects: it allows water to flow more freely, and it reduces the colloidal or oncotic pressure difference by allowing protein to leave the vessel more easily. Another set of vessels known as the lymphatic system acts like an "overflow" and can return much excess fluid to the bloodstream. But even the lymphatic system can be overwhelmed, and if there is simply too much fluid, or if the lymphatic system is congested, then the fluid will remain in the tissues, causing swellings in legs, ankles, feet, abdomen or any other part of the body. Diagnosis: Edema may be described as pitting edema or non-pitting edema. Pitting edema is when, after pressure is applied to a small area, the indentation persists after the release of the pressure. Peripheral pitting edema, as shown in the illustration, is the more common type, resulting from water retention. It can be caused by systemic diseases, pregnancy in some women, either directly or as a result of heart failure, or local conditions such as varicose veins, thrombophlebitis, insect bites, and dermatitis. Non-pitting edema is observed when the indentation does not persist. It is associated with such conditions as lymphedema, lipedema, and myxedema. Diagnosis: Edema caused by malnutrition defines kwashiorkor, an acute form of childhood protein-energy malnutrition characterized by edema, irritability, anorexia, ulcerating dermatoses, and an enlarged liver with fatty infiltrates. Treatment: When possible, treatment involves resolving the underlying cause. Many cases of heart or kidney disease are treated with diuretics. Treatment may also involve positioning the affected body parts to improve drainage. For example, swelling in feet or ankles may be reduced by having the person lie down in bed or sit with the feet propped up on cushions. Intermittent pneumatic compression can be used to pressurize tissue in a limb, forcing fluids (both blood and lymph) to flow out of the compressed area.
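For reference, the Starling equation invoked in the mechanism section above can be written in its standard form (the symbols used here are the conventional ones; they are not spelled out in the text itself):

```latex
% Net fluid filtration across the capillary wall (Starling equation):
%   J_v           net fluid movement out of the capillary
%   K_f           filtration coefficient (wall permeability to water x area)
%   P_c, P_i      capillary and interstitial hydrostatic pressures
%   \pi_c, \pi_i  capillary and interstitial oncotic pressures
%   \sigma        reflection coefficient for protein (up to 1, as noted above)
J_v = K_f \left[ (P_c - P_i) - \sigma (\pi_c - \pi_i) \right]
```

Raised hydrostatic pressure (P_c), lowered plasma oncotic pressure (π_c), and a falling reflection coefficient (σ) each increase J_v, matching the three routes to edema described in the paragraph above.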
**Klung Wilhelmy Science Award** Klung Wilhelmy Science Award: The Klung Wilhelmy Science Award is an annual German award in the field of science, alternating annually between the categories of chemistry and physics. It is bestowed upon outstanding younger German scientists under the age of 40. Previous award names: 1973 to 2001 – Otto-Klung-Award; 2001 to 2007 – Otto-Klung-Weberbank-Award; 2007 to 2013 – Klung-Wilhelmy-Weberbank-Award. Selection process: The prizewinners are selected by permanent committees at the Institutes of Chemistry and Biochemistry and the Department of Physics at the Free University of Berlin, with additional input from professors at other universities. Proposals and nominations by nationally and internationally renowned scientists are also taken into consideration. Self-nominations are not accepted. Selection process: The final decision on the selection recommendations is made by the following foundations: the Otto Klung Foundation at the Free University of Berlin and the Dr. Wilhelmy Foundation. The stated aim of these foundations is to strengthen the promotion of outstanding scientific achievements and to reward internationally accredited innovative approaches. Five of the previously chosen prizewinners later received the Nobel Prize. Selection process: The prize was first awarded in 1973 by the Otto Klung Foundation. Since 2007, the prize has been one of the highest privately funded scientific endowments in Germany. The annual award ceremony, which has been held in November, is open to the public. Recipients: From 1973 to 1978, the Otto Klung Foundation, acting alone to foster young academics, presented the Otto-Klung-Award as a junior researcher prize for outstanding scientific achievement to graduate students and postdoctoral students of the Free University of Berlin, Departments of Chemistry and Physics: Klaus-Peter Dinse (Physics 1973), Wolf-Dietrich Hunnius and Rolf Minkwitz (Chemistry 1974), Michael Grunze (Chemistry 1975), Günther Kerker (Physics 1976), Wolfgang Lubitz (Chemistry 1977), Andreas Gaupp (Physics 1978).
**(R)-3-amino-2-methylpropionate—pyruvate transaminase** (R)-3-amino-2-methylpropionate—pyruvate transaminase: In enzymology, a (R)-3-amino-2-methylpropionate—pyruvate transaminase (EC 2.6.1.40) is an enzyme that catalyzes the chemical reaction (R)-3-amino-2-methylpropanoate + pyruvate ⇌ 2-methyl-3-oxopropanoate + L-alanine. Thus, the two substrates of this enzyme are (R)-3-amino-2-methylpropanoate and pyruvate, whereas its two products are 2-methyl-3-oxopropanoate and L-alanine. (R)-3-amino-2-methylpropionate—pyruvate transaminase: This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is (R)-3-amino-2-methylpropanoate:pyruvate aminotransferase. Other names in common use include D-3-aminoisobutyrate-pyruvate transaminase, beta-aminoisobutyrate-pyruvate aminotransferase, D-3-aminoisobutyrate-pyruvate aminotransferase, (R)-3-amino-2-methylpropionate transaminase, and D-beta-aminoisobutyrate:pyruvate aminotransferase. Additionally, this enzyme has been reported to catalyze transamination with the L-isomer, while the D-isomer, the natural form, is inactive as a substrate. Names of related enzymes include L-3-aminoisobutyrate transaminase, beta-aminobutyric transaminase, L-3-aminoisobutyric aminotransferase, and beta-aminoisobutyrate-alpha-ketoglutarate transaminase.
**Geocell (cartography)** Geocell (cartography): In geographic information systems, a geocell (or geo-cell) is a patch on the surface of the Earth that is 1 degree of latitude by 1 degree of longitude in extent. At the equator, a geocell is approximately a 111-by-111-kilometre (69 by 69 mi) square, but the east-west dimension of geocells gradually decreases and the shape of the geocell becomes increasingly trapezoidal towards the poles. At the North and South Poles, geocells are distorted into long, thin triangles which are still approximately 111 kilometres (69 mi) in the north-south direction but with a base of just 969 metres (3,179 ft).
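Since the north-south extent of a geocell is roughly constant while the east-west extent shrinks with the cosine of latitude, the approximate dimensions are easy to compute. Below is a small sketch under a spherical-Earth assumption (the function name and the radius constant are our illustrative choices, not from the source):

```python
# Approximate the ground dimensions of a 1-degree x 1-degree geocell at a
# given latitude, assuming a spherical Earth. The north-south extent is
# nearly constant (~111 km); the east-west extent shrinks with cos(latitude),
# which is why geocells become trapezoidal and, near the poles, degenerate
# into long thin triangles.
import math

EARTH_RADIUS_KM = 6371.0
KM_PER_DEGREE = math.pi * EARTH_RADIUS_KM / 180.0  # ~111.2 km

def geocell_dimensions(lat_deg: float) -> tuple[float, float]:
    """Return (north_south_km, east_west_km); east-west width is measured
    along the parallel at lat_deg (e.g. the cell's southern edge)."""
    ns = KM_PER_DEGREE
    ew = KM_PER_DEGREE * math.cos(math.radians(lat_deg))
    return ns, ew

print(geocell_dimensions(0))    # ~(111.2, 111.2): near-square at the equator
print(geocell_dimensions(60))   # east-west width roughly halved
print(geocell_dimensions(89))   # ~1.9 km wide: a long, thin sliver
```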
**Superconductor classification** Superconductor classification: Superconductors can be classified in accordance with several criteria that depend on physical properties, current understanding, and the expense of cooling them or their material. By their magnetic properties: Type I superconductors: those having just one critical field, Hc, and changing abruptly from one state to the other when it is reached. Type II superconductors: having two critical fields, Hc1 and Hc2, being a perfect superconductor under the lower critical field (Hc1) and leaving the superconducting state completely for a normal conducting state above the upper critical field (Hc2), being in a mixed state when between the critical fields. Type-1.5 superconductors: multicomponent superconductors characterized by two or more coherence lengths. By the understanding we have about them: Conventional superconductors: those that can be fully explained with the BCS theory or related theories. By the understanding we have about them: Unconventional superconductors: those that cannot be explained using such theories, e.g. heavy fermion superconductors. This criterion is important, as the BCS theory has explained the properties of conventional superconductors since 1957, yet there have been no satisfactory theories to explain unconventional superconductors fully. In most cases, type I superconductors are conventional, but there are several exceptions such as niobium, which is both conventional and type II. By their critical temperature: Low-temperature superconductors, or LTS: those whose critical temperature is below 77 K. High-temperature superconductors, or HTS: those whose critical temperature is above 77 K. Room-temperature superconductors: those whose critical temperature is above 273 K. 77 K is used as the split to emphasize whether or not superconductivity in the materials can be achieved with liquid nitrogen (whose boiling point is 77 K), which is much more feasible than liquid helium (an alternative to achieve the temperatures needed for low-temperature superconductors). By material constituents and structure: Some pure elements, such as lead or mercury (but not all pure elements, as some never reach the superconducting phase). Some allotropes of carbon, such as fullerenes, nanotubes, or diamond. Most superconductors made of pure elements are type I (except niobium, technetium, vanadium, silicon, and the above-mentioned carbon allotropes). Alloys, such as niobium-titanium (NbTi), whose superconducting properties were discovered in 1962. Ceramics (often insulators in the normal state), which include: cuprates, i.e. copper oxides (often layered, not isotropic), among them the YBCO family, several yttrium-barium-copper oxides, especially YBa2Cu3O7, the most famous high-temperature superconductors; iron-based superconductors, including the oxypnictides; magnesium diboride (MgB2), whose critical temperature of 39 K makes it the conventional superconductor with the highest known critical temperature; non-cuprate oxides such as BKBO; palladates, i.e. palladium compounds; and others, e.g. the "metallic" compounds Hg3NbF6 and Hg3TaF6, which are both superconductors below 7 K (−266.15 °C; −447.07 °F).
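As a trivial illustration of the critical-temperature criterion above, here is a small sketch (the function name and example values are ours; the 77 K and 273 K thresholds come from the text):

```python
# Classify a superconductor by its critical temperature Tc, using 77 K
# (the boiling point of liquid nitrogen) and 273 K as the dividing lines.
def classify_by_tc(tc_kelvin: float) -> str:
    if tc_kelvin > 273:
        return "room-temperature superconductor"
    if tc_kelvin > 77:
        return "high-temperature superconductor (liquid nitrogen suffices)"
    return "low-temperature superconductor (needs e.g. liquid helium)"

print(classify_by_tc(9.2))   # niobium: low-temperature (yet type II)
print(classify_by_tc(39))    # MgB2: low-temperature, yet conventional
print(classify_by_tc(92))    # YBa2Cu3O7: high-temperature
```

Note that the magnetic (type I/II) and theoretical (conventional/unconventional) criteria are independent of this one, as the niobium and MgB2 examples show.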
**Bottle-shock** Bottle-shock: Bottle-shock or Bottle-sickness is a temporary condition of wine characterized by muted or disjointed fruit flavors. It often occurs immediately after bottling or when wines (usually fragile wines) are given an additional dose of sulfur (in the form of sulfur dioxide or sulfite solution). After a few weeks, the condition usually disappears.
**Kuratowski's closure-complement problem** Kuratowski's closure-complement problem: In point-set topology, Kuratowski's closure-complement problem asks for the largest number of distinct sets obtainable by repeatedly applying the set operations of closure and complement to a given starting subset of a topological space. The answer is 14. This result was first published by Kazimierz Kuratowski in 1922. It gained additional exposure in Kuratowski's fundamental monograph Topologie (first published in French in 1933; the first English translation appeared in 1966) before achieving fame as a textbook exercise in John L. Kelley's 1955 classic, General Topology. Proof: Letting S denote an arbitrary subset of a topological space, write kS for the closure of S and cS for the complement of S. The following three identities imply that no more than 14 distinct sets are obtainable: kkS = kS (the closure operation is idempotent); ccS = S (the complement operation is an involution); kckckckcS = kckcS (or equivalently kckckckS = kckckckccS = kckS, using identity (2)). The first two are trivial. The third follows from the identity kikiS = kiS, where iS is the interior of S, which is equal to the complement of the closure of the complement of S: iS = ckcS. (The operation ki = kckc is idempotent.) A subset realizing the maximum of 14 is called a 14-set. The space of real numbers under the usual topology contains 14-sets. Here is one example: (0,1)∪(1,2)∪{3}∪([4,5]∩Q), where (1,2) denotes an open interval and [4,5] denotes a closed interval. Let X denote this set. Then the following 14 sets are accessible: X, the set shown above. Proof: cX=(−∞,0]∪{1}∪[2,3)∪(3,4)∪((4,5)∖Q)∪(5,∞) kcX=(−∞,0]∪{1}∪[2,∞) ckcX=(0,1)∪(1,2) kckcX=[0,2] ckckcX=(−∞,0)∪(2,∞) kckckcX=(−∞,0]∪[2,∞) ckckckcX=(0,2) kX=[0,2]∪{3}∪[4,5] ckX=(−∞,0)∪(2,3)∪(3,4)∪(5,∞) kckX=(−∞,0]∪[2,4]∪[5,∞) ckckX=(0,2)∪(4,5) kckckX=[0,2]∪[4,5] ckckckX=(−∞,0)∪(2,4)∪(5,∞) Further results: Despite its origin within the context of a topological space, Kuratowski's closure-complement problem is actually more algebraic than topological. A surprising abundance of closely related problems and results have appeared since 1960, many of which have little or nothing to do with point-set topology. The closure-complement operations yield a monoid that can be used to classify topological spaces.
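The upper bound can also be checked mechanically. The sketch below (our own illustration, not from the source) treats k and c as letters and rewrites words using the three identities from the proof: kk = k, cc = identity, and the third identity in its equivalent form kckckck = kck. Enumerating all words shows that exactly 14 distinct operations survive, the identity operation included:

```python
# Enumerate words over {k, c} and reduce them with the three identities
# from the proof. Each rewrite strictly shortens the word, so reduction
# terminates; exactly 14 irreducible operations remain.
from itertools import product

RULES = [("cc", ""), ("kk", "k"), ("kckckck", "kck")]

def reduce_word(word: str) -> str:
    """Apply the rewrite rules until no rule matches."""
    changed = True
    while changed:
        changed = False
        for old, new in RULES:
            if old in word:
                word = word.replace(old, new, 1)
                changed = True
    return word

distinct = set()
for length in range(10):                      # words of length 0..9 suffice
    for letters in product("kc", repeat=length):
        distinct.add(reduce_word("".join(letters)))

print(len(distinct))                          # prints 14
print(sorted(distinct, key=len))              # '' is the identity operation
```

Applied to the set X above, these 14 reduced words produce exactly the 14 sets listed in the proof.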
**Plabutsch Formation** Plabutsch Formation: The Plabutsch Formation is a geologic formation in Austria. It preserves fossils dating back to the Devonian period.
**Super Tonks–Girardeau gas** Super Tonks–Girardeau gas: In physics, the super-Tonks–Girardeau gas represents an excited quantum gas phase with strong attractive interactions in a one-dimensional spatial geometry. Super Tonks–Girardeau gas: Usually, strongly attractive quantum gases are expected to form dense particle clusters and lose all gas-like properties. But in 2005, it was proposed by Stefano Giorgini and co-workers that there is a many-body state of attractively interacting bosons that does not decay in one-dimensional systems. If prepared in a special way, this lowest gas-like state should be stable and show new quantum mechanical properties. Super Tonks–Girardeau gas: Particles in a super-Tonks gas should be strongly correlated and show long-range order with a Luttinger liquid parameter K<1. Since each particle occupies a certain volume, the gas properties are similar to a classical gas of hard rods. Despite the mutual attraction, the single-particle wave functions separate and the bosons behave similarly to fermions with repulsive, long-range interaction. Super Tonks–Girardeau gas: To prepare the super-Tonks–Girardeau phase it is necessary to increase the repulsive interaction strength all the way through the Tonks–Girardeau regime up to infinity. Sudden switching from infinitely strong repulsive to infinitely attractive interactions stabilizes the gas against collapse and connects the ground state of the Tonks gas to the excited state of the super-Tonks gas. Experimental realization: The super-Tonks–Girardeau gas was first experimentally observed using an ultracold gas of cesium atoms. Reducing the magnitude of the attractive interactions caused the gas to become unstable to collapse into cluster-like bound states. Repulsive dipolar interactions stabilize the gas when highly magnetic dysprosium atoms are used instead. This enabled the creation of prethermal quantum many-body scar states via the topological pumping of these super-Tonks–Girardeau gases.
**Dalrymple's sign** Dalrymple's sign: Dalrymple's sign is a widened palpebral (eyelid) opening, or eyelid spasm, seen in thyrotoxicosis (as seen in Graves' disease, exophthalmic goitre and other hyperthyroid conditions), causing abnormal wideness of the palpebral fissure. As a result of the retraction of the upper eyelid, the white of the sclera is visible at the upper margin of the cornea in direct outward stare. It is named after the British ophthalmologist John Dalrymple (1803–1852). Dalrymple's sign: Other eye signs described within the symptomatology of Graves' disease are Stellwag's sign (rare blinking), Rosenbach's sign (tremor of the eyelids), and Jellinek's sign (hyperpigmentation of the eyelid).
**Portable hyperbaric bag** Portable hyperbaric bag: A portable hyperbaric bag, of which one brand is the Gamow (pronounced [ˈɡamɔf]) bag, is an inflatable pressure bag large enough to accommodate a person. The patient can be placed inside the bag, which is then sealed and inflated with a foot pump. Within minutes, the effective altitude can be decreased by 1,000 m to as much as 3,000 m (3,281 to 9,843 ft), depending on the elevation. The bag is pressurised to 14.0–29.3 kPa (105–220 mmHg); the pressure gradient is regulated by pop-off valves set to the target pressure. History: The Gamow bag was named after its inventor, Igor Gamow, son of George Gamow. Igor Gamow originally designed a predecessor to the Gamow bag called "The Bubble" to study the effect of high altitude on stamina and performance in athletes. Gamow later re-designed "The Bubble" into a bag that could be used in high-altitude wilderness. Application: It is primarily used for treating severe cases of altitude sickness, high-altitude cerebral edema, and high-altitude pulmonary edema. Like office-based hyperbaric medicine, the Gamow bag uses increased partial pressure of oxygen for therapy of hypobaric injury but has the advantage of portability for field use. Patients typically are treated in 1-hour increments and then are reevaluated.
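The "effective altitude" figure can be estimated from the bag's overpressure with the standard barometric formula. A rough sketch follows; the isothermal scale-height model and the helper names are our assumptions, and the 14 kPa overpressure is the low end of the range quoted above:

```python
# Estimate the "effective altitude" inside a portable hyperbaric bag using
# the isothermal barometric formula p(h) = p0 * exp(-h / H), scale height
# H ~ 8434 m. The bag's overpressure maps to an equivalent descent.
import math

P0 = 101.325            # sea-level pressure, kPa
SCALE_HEIGHT_M = 8434.0  # isothermal-atmosphere approximation

def pressure_at(alt_m: float) -> float:
    """Ambient pressure (kPa) at a given altitude."""
    return P0 * math.exp(-alt_m / SCALE_HEIGHT_M)

def effective_altitude(ambient_alt_m: float, overpressure_kpa: float) -> float:
    """Altitude whose ambient pressure equals the bag-interior pressure."""
    p_inside = pressure_at(ambient_alt_m) + overpressure_kpa
    return -SCALE_HEIGHT_M * math.log(p_inside / P0)

# A patient at 5,000 m in a bag pressurized 14 kPa above ambient:
print(round(effective_altitude(5000, 14.0)))  # ~3,100 m, a descent of ~1,900 m
```

A higher overpressure toward the 29.3 kPa end of the quoted range yields correspondingly larger effective descents, consistent with the 1,000-3,000 m figure in the text.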
**Blank-firing adapter** Blank-firing adapter: A blank-firing adapter or blank-firing attachment (BFA), sometimes called a blank adapter or blank attachment, is a device used in conjunction with blank ammunition for safety reasons, functional reasons or a combination of the two. Blank-firing adapters are required to allow blank ammunition to cycle the bolts of most semi-automatic and automatic firearms. A BFA can also be a safety feature designed to break up the (wooden or plastic) plugs replacing the bullet in military blanks (with the added benefit that a live round mistakenly fired will spend most of its energy smashing through the BFA, reducing both the range and damage inflicted), as well as divert the hot gases from a blank discharge out to the sides, reducing the risk of injury to the target of an aimed shot. Design: The design of the blank-firing adapter depends on the intended use. Different designs are used for different firearm actions and different user needs. For military use, BFAs are made visually obvious so that it can be seen that they are fitted. In addition to its role in cycling the action of a weapon, the BFA also prevents debris from the blank round (and in some cases, inadvertently loaded live rounds) escaping. This debris could cause injury to nearby personnel if the BFA has not been fitted or has fallen off. Use in motion picture special effects, however, requires the BFA to be hidden from casual view, so as not to disturb the illusion of live ammunition being fired, a feat usually accomplished with replacement barrels, the diameter of which is significantly reduced just in front of the chamber. Design: Gas operated and blowback firearms BFAs for blowback and gas-operated firearms are relatively simple. These weapons depend on high pressures in the chamber generated by the combustion of the propellant to push the breech block to the rear, allowing another round to be chambered and fired. If a blank round is used, there is no bullet to seal the barrel, and the combustion gases exit through the muzzle without building up enough pressure to rechamber the next round. Simple BFAs for these firearms consist of a metal plug with screw threads. The adapter is attached to the muzzle of the firearm, and may attach to or replace the muzzle brake if there is one. A means is provided to allow some of the powder gases to escape. This can be adjustable, allowing it to regulate the amount of pressure used to rechamber the next round. A drawback to the use of BFAs in gas-operated firearms is the amount of propellant residue that builds up in the barrel. Since only a limited amount of residue can escape (compared to when live ammunition is in use), the barrel can foul very quickly. Extreme care must be taken to ensure cleanliness of the barrel following BFA use to avoid damage to the weapon and injury to the operator due to barrel fouling. Design: Recoil operated firearms Since blank cartridges generate very little recoil, far less than that produced by a live round, the recoil operation mechanism is not suitable for use with blanks. BFAs used with recoil-operated firearms typically replace the locked breech of a recoil-operated firearm with a simple blowback system using a restricted barrel, similar to a gas-operated BFA.
Short recoil operated pistols, the most common type used for self-defense and by police, are typically converted with a simple barrel replacement; the replacement barrel will lack the locking lugs to lock the slide to the frame, and will be built with an adjustable restrictor to control the chamber pressure. On designs with tilting barrels (again, the majority of modern designs are like this), there may also be a provision for tilting the barrel back, simulating the unlocking of the slide – this is visible in US Patent 5,585,589 listed below. Design: One notable exception to the design of BFAs for recoil operated firearms is machine guns based on short recoil designs, such as the German MG 34 and its descendants. These designs use a muzzle booster to add energy to the recoiling parts, and BFAs for these designs simply replace the muzzle booster with one that provides far more boost with blank cartridges. The M2 heavy machine gun, while it does not use a muzzle booster normally, can use a similar muzzle-booster-derived BFA.
**Hopf algebroid** Hopf algebroid: In mathematics, in the theory of Hopf algebras, a Hopf algebroid is a generalisation of weak Hopf algebras, certain skew Hopf algebras and commutative Hopf k-algebroids. If k is a field, a commutative k-algebroid is a cogroupoid object in the category of k-algebras; the category of such is hence dual to the category of groupoid k-schemes. This commutative version was used in the 1970s in algebraic geometry and stable homotopy theory. The generalization of Hopf algebroids and the main part of their structure, associative bialgebroids, to a noncommutative base algebra was introduced by J.-H. Lu in 1996 as a result of work on groupoids in Poisson geometry (later shown to be equivalent, in a nontrivial way, to a construction of Takeuchi from the 1970s and another by Xu around the year 2000). They may be loosely thought of as Hopf algebras over a noncommutative base ring, where weak Hopf algebras become Hopf algebras over a separable algebra. It is a theorem that a Hopf algebroid satisfying a finite projectivity condition over a separable algebra is a weak Hopf algebra, and conversely a weak Hopf algebra H is a Hopf algebroid over its separable subalgebra HL. The antipode axioms were changed by G. Böhm and K. Szlachányi (J. Algebra) in 2004 for tensor categorical reasons and to accommodate examples associated to depth two Frobenius algebra extensions. Definition: The main motivation behind the definition of a Hopf algebroid is that it is a commutative algebraic representation of an algebraic stack which can be presented by affine schemes. More generally, Hopf algebroids encode the data of presheaves of groupoids on the category Aff of affine schemes. That is, if we have a groupoid object of affine schemes s,t: X1 ⇉ X0 with an identity map ι: X0 → X1 giving an embedding of objects into the arrows, we can take as our definition of a Hopf algebroid the dual object in commutative rings CRing which encodes this structure. Note that this process is essentially an application of the Yoneda lemma to the definition of groupoid schemes in the category Aff of affine schemes. Since we may want to fix a base ring, we will instead consider the category CRing_k of commutative k-algebras. Definition: Scheme-theoretic definition Algebraic objects in the definition A Hopf algebroid over a commutative ring k is a pair of k-algebras (A,Γ) in CRing_k such that their functors of points Hom_k(A,−) and Hom_k(Γ,−) encode a groupoid in Aff. If we fix B as some object in CRing_k, then Hom_k(A,B) is the set of objects in the groupoid and Hom_k(Γ,B) is the set of arrows. This translates to having maps ηL: A → Γ (left unit / source map), ηR: A → Γ (right unit / target map), Δ: Γ → Γ⊗AΓ (coproduct / composition map), ε: Γ → A (counit / identity map), and c: Γ → Γ (conjugation / inverse map), where the term on the left-hand side of each slash is the traditional name for the map of algebras giving the Hopf algebroid structure, and the term on the right-hand side is the corresponding structure on the groupoid Hom_k(−,B) that the dual map induces under the Yoneda embedding. For example, the map induced by ηL corresponds to the source map s. Axioms these maps must satisfy In addition to these maps, they satisfy a host of axioms dual to the axioms of a groupoid.
Note we will fix B as some object in CRing_k. The axioms, dual to those of a groupoid, are: ε∘ηL = ε∘ηR = Id_A, meaning the dual of the counit map acts as a two-sided identity for the objects in Hom_k(A,B); (Id_Γ⊗ε)∘Δ = (ε⊗Id_Γ)∘Δ = Id_Γ, meaning composing an arrow with the identity leaves that arrow unchanged; (Δ⊗Id_Γ)∘Δ = (Id_Γ⊗Δ)∘Δ, which corresponds to the associativity of composition of morphisms; c∘ηR = ηL and c∘ηL = ηR, which translate to inverting a morphism interchanging the source and target; and c∘c = Id_Γ, meaning the inverse of the inverse is the original map. Finally, there exist maps Γ⊗AΓ ⇉ Γ encoding that the composition of a morphism with its inverse on either side gives the identity morphism. (In the original presentation this is encoded by a commutative diagram whose dashed arrows represent the existence of these two maps, where c⋅Γ is the map c⋅Γ(γ1⊗γ2) = c(γ1)γ2 and Γ⋅c is the map Γ⋅c(γ1⊗γ2) = γ1c(γ2).) Additional structures In addition to the standard definition of a Hopf algebroid, there are also graded commutative Hopf algebroids, which are pairs of graded commutative algebras (A,Γ) with graded commutative structure maps as given above. Definition: Also, a graded Hopf algebroid (A,Γ) is said to be connected if the right and left sub-A-modules Γ0 ↪ Γ are both isomorphic to A. Another definition A left Hopf algebroid (H, R) is a left bialgebroid together with an antipode: the bialgebroid (H, R) consists of a total algebra H and a base algebra R and two mappings, an algebra homomorphism s: R → H called a source map, and an algebra anti-homomorphism t: R → H called a target map, such that the commutativity condition s(r1) t(r2) = t(r2) s(r1) is satisfied for all r1, r2 ∈ R. The axioms resemble those of a Hopf algebra but are complicated by the possibility that R is a non-commutative algebra or that its images under s and t are not in the center of H. In particular a left bialgebroid (H, R) has an R-R-bimodule structure on H which prefers the left side as follows: r1 ⋅ h ⋅ r2 = s(r1) t(r2) h for all h in H, r1, r2 ∈ R. There is a coproduct Δ: H → H ⊗R H and counit ε: H → R that make (H, R, Δ, ε) an R-coring (with axioms like those of a coalgebra, such that all mappings are R-R-bimodule homomorphisms and all tensors are over R). Additionally the bialgebroid (H, R) must satisfy Δ(ab) = Δ(a)Δ(b) for all a, b in H, and a condition to make sure this last condition makes sense: every image point Δ(a) satisfies a(1) t(r) ⊗ a(2) = a(1) ⊗ a(2) s(r) for all r in R. Also Δ(1) = 1 ⊗ 1. The counit is required to satisfy ε(1H) = 1R and the condition ε(ab) = ε(as(ε(b))) = ε(at(ε(b))). Definition: The antipode S: H → H is usually taken to be an algebra anti-automorphism satisfying conditions of exchanging the source and target maps and satisfying two axioms like the Hopf algebra antipode axioms; see the references in Lu or in Böhm-Szlachányi for a more example-category friendly, though somewhat more complicated, set of axioms for the antipode S. The latter set of axioms depends on the axioms of a right bialgebroid as well, which are a straightforward switching of left to right, and of s with t, in the axioms for a left bialgebroid given above. Examples: From algebraic topology One of the main motivating examples of a Hopf algebroid is the pair (π∗(E), E∗(E)) for a spectrum E. For example, the Hopf algebroids (MU∗, MU∗(MU)) and (BP∗, BP∗(BP)), for the spectra representing complex cobordism and Brown–Peterson homology, and truncations of them, are widely studied in algebraic topology. This is because of their use in the Adams–Novikov spectral sequence for computing the stable homotopy groups of spheres.
Examples: Hopf algebroid corepresenting the stack of formal group laws There is a Hopf algebroid which corepresents the stack of formal group laws MFG, constructed using algebraic topology. If we let MP denote the periodic complex cobordism spectrum built from MU, there is a Hopf algebroid (MP0, MP0(MP)) corepresenting the stack MFG. This means there is an isomorphism of functors between MFG(−) and the functor sending a commutative ring B to the groupoid (MP0, MP0(MP))(B). Other examples As an example of a left bialgebroid, take R to be any algebra over a field k. Let H be its algebra of linear self-mappings. Let s(r) be left multiplication by r on R; let t(r) be right multiplication by r on R. H is a left bialgebroid over R, which may be seen as follows. From the fact that H ⊗R H ≅ Homk(R ⊗ R, R) one may define a coproduct by Δ(f)(r ⊗ u) = f(ru) for each linear transformation f from R to itself and all r, u in R. Coassociativity of the coproduct follows from associativity of the product on R. A counit is given by ε(f) = f(1). The counit axioms of a coring follow from the identity element condition on multiplication in R. The reader will be amused, or at least edified, to check that (H, R) is a left bialgebroid. If R is an Azumaya algebra (in which case H is isomorphic to R ⊗ R), an antipode comes from transposing tensors, which makes H a Hopf algebroid over R. Another class of examples comes from letting R be the ground field; in this case, the Hopf algebroid (H, R) is a Hopf algebra.
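To summarize the endomorphism-algebra example just given, the structure maps of the left bialgebroid (H, R) with H = End_k(R) can be written compactly (notation as in the text):

```latex
% Left bialgebroid structure on H = \operatorname{End}_k(R):
% source and target are left and right multiplication by elements of R;
% the coproduct uses H \otimes_R H \cong \operatorname{Hom}_k(R \otimes R, R).
\begin{aligned}
  s(r)(u) &= ru, & t(r)(u) &= ur, \\
  \Delta(f)(r \otimes u) &= f(ru), & \varepsilon(f) &= f(1).
\end{aligned}
```

Coassociativity of Δ is exactly associativity of the product of R, and the counit axioms are the unit laws, which is what makes checking the bialgebroid axioms in this example a pleasant exercise.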
**Kerosene lamp** Kerosene lamp: A kerosene lamp (also known as a paraffin lamp in some countries) is a type of lighting device that uses kerosene as a fuel. Kerosene lamps have a wick or mantle as light source, protected by a glass chimney or globe; lamps may be used on a table, or hand-held lanterns may be used for portable lighting. Like oil lamps, they are useful for lighting without electricity, such as in regions without rural electrification, in electrified areas during power outages, at campsites, and on boats. There are three types of kerosene lamp: flat-wick, central-draft (tubular round wick), and mantle lamp. Kerosene lanterns meant for portable use have a flat wick and are made in dead-flame, hot-blast, and cold-blast variants. Kerosene lamp: Pressurized kerosene lamps use a gas mantle; these are known as Petromax, Tilley lamps, or Coleman lamps, among other manufacturers. They produce more light per unit of fuel than wick-type lamps, but are more complex and expensive in construction and more complex to operate. A hand-pump pressurizes air, which forces liquid fuel from a reservoir into a gas chamber. Vapor from the chamber burns, heating a mantle to incandescence and also providing heat. Kerosene lamp: Kerosene lamps are widely used for lighting in rural areas of Africa and Asia, where electricity is not distributed or is too costly. As of 2005, kerosene and other fuel-based illumination methods consume an estimated 77 billion litres (20 billion US gallons) of fuel per year, equivalent to 8.0 million gigajoules (1.3 million barrels of oil equivalent) per day. This is comparable to annual U.S. jet-fuel consumption of 76 billion litres (20 billion US gallons) per year. History: In 1813, John Tilley invented the hydro-pneumatic blowpipe. In 1818, the firm of William Henry Tilley, gas fitters, was manufacturing gas lamps in Stoke Newington. In 1846, Abraham Pineo Gesner invented a substitute for whale oil for lighting, distilled from coal. Later made from petroleum, kerosene became a popular lighting fuel. The modern and most popular version of the paraffin lamp was later constructed by the Polish inventor and pharmacist Ignacy Łukasiewicz in Lviv in 1853. It was a significant improvement over lamps designed to burn vegetable or sperm oil. History: In 1914, the Coleman Lantern pressure lamp was introduced by the Coleman Company. In 1919, the Tilley High-Pressure Gas Company started using kerosene as a fuel for lamps. Types: Flat-wick lamp A flat-wick lamp is a simple type of kerosene lamp, which burns kerosene drawn up through a wick by capillary action. If this type of lamp is broken, it can easily start a fire. A flat-wick lamp has a fuel tank (fount), with the lamp burner attached. Attached to the fuel tank, four prongs hold the glass chimney, which acts to prevent the flame from being blown out and enhances a thermally induced draft. The glass chimney needs a "throat", or slight constriction, to create the proper draft for complete combustion of the fuel; the draft carries more air (oxygen) past the flame, helping to produce a smokeless light, which is brighter than an open flame would produce. Types: The chimney also serves a more important duty. The mantle/wick holder has holes around the outer edges. When the lantern is lit and a chimney is attached, the thermally induced draft draws air through these holes and passes it over the top of the mantle, much like a household chimney. This has a cooling effect and keeps the mantle from overheating.
Without a properly installed chimney, a definite safety hazard exists. This is even more important with Aladdin lamps, which have a thinner chimney to induce a faster airflow. These precautions apply regardless of the type of lantern in use. Types: The lamp burner has a flat wick, usually made of cotton. The lower part of the wick dips into the fount and absorbs the kerosene; the top part of the wick extends out of the wick tube of the lamp burner, which includes a wick-adjustment mechanism. Adjusting how much of the wick extends above the wick tube controls the flame. The wick tube surrounds the wick and ensures that the correct amount of air reaches the lamp burner. Adjustment is usually done by means of a small knob operating a cric, which is a toothed metal sprocket bearing against the wick. If the wick is too high, and extends beyond the burner cone at the top of the wick tube, the lamp will produce smoke and soot (unburned carbon). When the lamp is lit, the kerosene that the wick has absorbed burns and produces a clear, bright, yellow flame. As the kerosene burns, capillary action in the wick draws more kerosene up from the fuel tank. All kerosene flat-wick lamps use the dead-flame burner design, where the flame is fed cold air from below, and hot air exits above. Types: This type of lamp was very widely used by railways, both on the front and rear of trains and for hand signals, due to its reliability. At a time when there were few competing light sources at night outside major towns, the limited brightness of these lamps was adequate and could be seen at sufficient distance to serve as a warning or signal. Types: Central-draft (tubular round wick) lamp A central-draft lamp, or Argand lamp, works in the same manner as the flat-wick lamp. The burner is equipped with a tall glass chimney, around 12 inches (300 mm) or taller, to provide the powerful draft this lamp requires to burn properly. The burner uses a wick, usually made of cotton, that is made of a wide, flat wick rolled into a tube, the seam of which is then stitched together to form the complete wick. The tubular wick is then mounted into a "carrier", which is a form of toothed rack that engages the gears of the wick-raising mechanism of the burner and allows the wick to be raised and lowered. The wick rides in between the inner and outer wick tubes; the inner wick tube (central draft tube) provides the "central draft", or draft that supplies air to the flame spreader. When the lamp is lit, the central draft tube supplies air to the flame spreader, which spreads out the flame into a ring of fire and allows the lamp to burn cleanly. Types: Mantle lamp A variation on the "central-draft" lamp is the mantle lamp. The mantle is a roughly pear-shaped mesh made of fabric placed over the burner. The mantle typically contains thorium or other rare-earth salts; on first use the cloth burns away, and the rare-earth salts are converted to oxides, leaving a very fragile structure, which incandesces (glows brightly) upon exposure to the heat of the burner flame. Mantle lamps are considerably brighter than flat- or round-wick lamps, produce a whiter light and generate more heat. Mantle lamps typically use fuel faster than a flat-wick lamp, but slower than a center-draft round-wick, as they depend on a small flame heating a mantle, rather than having all the light coming from the flame itself.
Types: Mantle lamps are nearly always bright enough to benefit from a lampshade, and a few mantle lamps may be enough to heat a small building in cold weather. Mantle lamps, because of the higher temperature at which they operate, do not produce much odor, except when first lit or extinguished. Like flat- and round-wick lamps, they can be adjusted for brightness; however, caution must be used, because if set too high, the lamp chimney and the mantle can become covered with black areas of soot. A lamp set too high will burn off its soot harmlessly if quickly turned down, but if not caught soon enough, the soot itself can ignite, and a "runaway lamp" condition can result. Types: All unpressurized mantle lamps are based on the Argand lamp as improved by the Clamond basket mantle. These lamps were popular from 1882 until shortly after WWII, when rural electrification made them obsolete. Aladdin Lamps is the only maker of this style of lamp today, and even they are now marketing electric fixtures that fit the old-style lamps. Types: Large fixed pressurized kerosene mantle lamps were used in lighthouse beacons for navigation of ships, being brighter and consuming less fuel than the oil lamps used before. In an early version of the gas mantle lamp, kerosene was vaporized by a secondary burner, which pressurized the kerosene tank that supplied the central draught. Like all gas mantle lamps, the only purpose of the burner is to hold the flame that heats the mantle, which is 4-5 times as bright as the wick flame itself. The Coleman Lantern is the direct descendant of this type of lamp. Types: Kerosene lantern A kerosene lantern, also known as a "barn lantern" or "hurricane lantern", is a flat-wick lamp made for portable and outdoor use. They are made of soldered or crimped-together sheet-metal stampings, with tin-plated sheet steel being the most common material, followed by brass and copper. There are three types: dead-flame, hot-blast, and cold-blast. Both hot-blast and cold-blast designs are called tubular lanterns and are safer than dead-flame lamps, as tipping over a tubular lantern cuts off the oxygen flow to the burner and will extinguish the flame within seconds. Types: The earliest portable kerosene "glass globe" lanterns, of the 1850s and 1860s, were of the dead-flame type, meaning that they had an open wick, but the airflow to the flame was strictly controlled in an upward motion by a combination of vents at the bottom of the burner and an open-topped chimney. This had the effect of removing side-to-side drafts and thus significantly reducing or even eliminating the flickering that can occur with an exposed flame. Types: Later lanterns, such as the hot-blast and cold-blast lanterns, took this airflow control even further by partially or fully enclosing the wick in a "deflector" or "burner cone" and then channeling the air to be supplied for combustion at the wick while at the same time pre-heating the air for combustion. Types: The hot-blast design, also known as a "tubular lantern" due to the metal tubes used in its construction, was invented by John H. Irwin and was patented on May 4, 1869. As noted in the patent, the "novel mode of constructing a lantern whereby the wind, instead of acting upon the flame in such a manner as to extinguish it, serves to support or sustain and prevent the extinguishment thereof."
With this improvement, wind that would normally tend to extinguish the flame of an unprotected dead-flame lantern is instead redirected, slowed, pre-heated, and supplied to the burner, where it actually supports and promotes the combustion of the fuel. Types: Later, Irwin improved upon this design by inventing and patenting his cold-blast design on May 6, 1873. This design is similar to his earlier "hot-blast" design, except that the oxygen-depleted hot combustion byproducts are redirected and prevented from recirculating back to the burner by redesigning the air intakes, so that only oxygen-rich, fresh air is drawn from the atmosphere into the lamp ("the inlets for fresh air are placed out of the ascending current of products of combustion, and said products are thereby prevented from entering [the air intake]"). The primary benefit of this design compared to the earlier "hot-blast" design was to maximize the amount of oxygen available for combustion by ensuring that only fresh air is supplied to the burner, thereby increasing the brightness and stability of the flame. Safety: Combustion Contamination of lamp fuel with even a small amount of gasoline results in a lower flash point and higher vapor pressure for the fuel, with potentially dangerous consequences. Vapors from spilled fuel may ignite; vapor trapped above liquid fuel may lead to excess pressure and fires. Kerosene lamps are still extensively used in areas without electrical lighting; the cost and dangers of combustion lighting are a continuing concern in many countries. Safety: Inhalation The World Health Organization considers kerosene to be a polluting fuel and recommends that “governments and practitioners immediately stop promoting its household use”. Kerosene smoke contains high levels of harmful particulate matter, and household use of kerosene is associated with higher risks of cancer, respiratory infections, asthma, tuberculosis, cataract, and adverse pregnancy outcomes. Performance: Flat-wick lamps have the lowest light output, center-draft round-wick lamps have 3–4 times the output of flat-wick lamps, and pressurized lamps have higher output yet; the range is from 8 to 100 lumens. A kerosene lamp producing 37 lumens for 4 hours per day consumes about 3 litres (6.3 US pt; 5.3 imp pt) of kerosene per month. (1 candlepower is equivalent to 12.57 lumens for an isotropic source.)
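The Performance figures above imply a simple burn-rate calculation; here is a small back-of-the-envelope sketch (the constants come from the text, the helper names are ours):

```python
# Back-of-the-envelope check of the Performance figures above.
LUMENS_PER_CANDLEPOWER = 12.57  # 1 CP ~ 4*pi lumens for an isotropic source

def monthly_fuel_litres(litres_per_hour: float, hours_per_day: float) -> float:
    """Fuel used per 30-day month at a given burn rate and daily use."""
    return litres_per_hour * hours_per_day * 30

# A 37-lumen lamp burning 4 h/day that uses ~3 L/month implies a burn
# rate of about 0.025 L/h:
rate = 3 / (4 * 30)
print(f"{rate:.3f} L/h")                        # ~0.025
print(monthly_fuel_litres(rate, 4))             # ~3.0 litres/month
print(f"{37 / LUMENS_PER_CANDLEPOWER:.1f} CP")  # ~2.9 candlepower
```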
**Dynamite Duke** Dynamite Duke: Dynamite Duke (Japanese: ダイナマイトデューク, Hepburn: Dainamaito Dūku) is a 1989 action arcade game developed by Seibu Kaihatsu. It was later ported to the Master System, Mega Drive/Genesis and X68000. Being a Cabal-based shooter, it can be considered a follow-up to Seibu's Empire City: 1931 and Dead Angle. The Double Dynamites: It is a version with simultaneous 2-player support. In addition, there are other changes: Life gauges are shown with visible bars, where Duke and bosses all have 11 bars of life. In the English version, it is no longer possible to refill the life bar by adding credits after completing Mission 1. With the exception of Mission 9, there are more enemies on screen, including boss battles. In the high score entry screen, a countdown timer is shown. Story: A top scientist decides to utilise a secret formula to develop his very own army of evil mutant warriors, so he can become the ruler of the world. It is up to Dynamite Duke, a man armed with a cybernetic arm and a machine gun, to foil this evil plan. Gameplay: The Arcade version has 9 stages, while the Genesis version only has 6 stages. Reception: In Japan, Game Machine listed Dynamite Duke in their October 1, 1989 issue as being the fourteenth most-successful table arcade unit of the month. Mean Machines gave the Mega Drive/Genesis version a 79%, commenting that it was only visually better than the Master System version and that it "lacks lasting appeal". Levi Buchanan of IGN rated the Genesis game a 5.0 (Meh), citing its dismal value of roughly 30 minutes of play. MegaTech magazine gave an overall score of 73 out of 100, commenting that the game "provides plenty of blasting fun and frolics" while criticizing its lack of challenge. Console XS gave the Genesis version an overall score of 77/100, praising the behind-the-shoulder gameplay perspective and the well-drawn enemies, although criticizing the gameplay for being too easy. They also reviewed the Master System version, gave it a score of 72/100, and felt the game was similar to Operation Wolf but with far superior graphics.
**3G** 3G: 3G is the third generation of wireless mobile telecommunications technology. It is the upgrade over 2G, 2.5G (GPRS) and 2.75G (EDGE, Enhanced Data Rates for GSM Evolution) networks, offering faster data transfer and better voice quality. This network was superseded by 4G, and later on by 5G. This network is based on a set of standards used for mobile devices and mobile telecommunications services and networks that comply with the International Mobile Telecommunications-2000 (IMT-2000) specifications of the International Telecommunication Union. 3G finds application in wireless voice telephony, mobile Internet access, fixed wireless Internet access, video calls and mobile TV. 3G telecommunication networks support services that provide an information transfer rate of at least 144 kbit/s. Later 3G releases, often denoted 3.5G and 3.75G, also provide mobile broadband access of several Mbit/s to smartphones and mobile modems in laptop computers. 3G: A new generation of cellular standards has appeared approximately every tenth year since 1G systems were introduced in 1979 and the early to mid-1980s. Each generation is characterized by new frequency bands, higher data rates and non-backward-compatible transmission technology. The first commercial 3G networks were introduced in mid-2001. Overview: Several telecommunications companies marketed wireless mobile Internet services as 3G, indicating that the advertised service was provided over a 3G wireless network. However, 3G services have largely been supplanted in marketing by 4G and 5G services in most areas of the world. Services advertised as 3G are required to meet IMT-2000 technical standards, including standards for reliability and speed (data transfer rates). To meet the IMT-2000 standards, a system must provide peak data rates of at least 144 kbit/s. However, many services advertised as 3G provide higher speeds than the minimum technical requirements for a 3G service. Subsequent 3G releases, denoted 3.5G and 3.75G, provided mobile broadband access of several Mbit/s for smartphones and mobile modems in laptop computers. 3G branded standards: The UMTS (Universal Mobile Telecommunications System) system, standardized by 3GPP in 2001, was used in Europe, Japan, China (with a different radio interface) and other regions predominated by GSM (Global System for Mobile Communications) 2G system infrastructure. The cell phones are typically UMTS and GSM hybrids. Several radio interfaces are offered, sharing the same infrastructure: the original and most widespread radio interface is called W-CDMA (Wideband Code Division Multiple Access). Overview: The TD-SCDMA radio interface was commercialized in 2009 and only offered in China. The latest UMTS release, HSPA+, can provide peak data rates of up to 56 Mbit/s in the downlink in theory (28 Mbit/s in existing services) and 22 Mbit/s in the uplink. Overview: The CDMA2000 system, first offered in 2002 and standardized by 3GPP2, is used especially in North America and South Korea, sharing infrastructure with the IS-95 2G standard. The cell phones are typically CDMA2000 and IS-95 hybrids. The latest release, EVDO Rev. B, offers peak rates of 14.7 Mbit/s downstream. The 3G systems and radio interfaces are based on spread spectrum radio transmission technology.
While the GSM EDGE standard ("2.9G"), DECT cordless phones and Mobile WiMAX standards formally also fulfill the IMT-2000 requirements and are approved as 3G standards by the ITU, these are typically not branded as 3G and are based on completely different technologies. Overview: The common standards complying with the IMT-2000/3G standard are: EDGE, a revision by the 3GPP organization to the older 2G GSM-based transmission methods, which utilizes the same switching nodes, base station sites, and frequencies as GPRS, but includes new base station and cellphone RF circuits. It is based on the 8PSK modulation scheme, three times as efficient as the original GMSK modulation scheme, which it supplements. EDGE is still used extensively due to its ease of upgrade from existing 2G GSM infrastructure and cell phones. Overview: EDGE combined with the GPRS 2.5G technology is called EGPRS and allows peak data rates on the order of 200 kbit/s, just like the original UMTS W-CDMA versions, and thus formally fulfills the IMT-2000 requirements for 3G systems. However, in practice, EDGE is seldom marketed as a 3G system, but rather as a 2.9G system. EDGE shows slightly better system spectral efficiency than the original UMTS and CDMA2000 systems, but it is difficult to reach much higher peak data rates due to the limited GSM spectral bandwidth of 200 kHz, and it is thus a dead end. Overview: EDGE was also a mode in the IS-136 TDMA system, no longer used. Evolved EDGE, the latest revision, has peaks of 1 Mbit/s downstream and 400 kbit/s upstream, but is not commercially used. The Universal Mobile Telecommunications System, created and revised by the 3GPP. The family is a full revision from GSM in terms of encoding methods and hardware, although some GSM sites can be retrofitted to broadcast in the UMTS/W-CDMA format. W-CDMA is the most common deployment, commonly operated on the 2,100 MHz band. A few others use the 850, 900, and 1,900 MHz bands. HSPA is an amalgamation of several upgrades to the original W-CDMA standard and offers speeds of 14.4 Mbit/s down and 5.76 Mbit/s up. HSPA is backward-compatible and uses the same frequencies as W-CDMA. HSPA+, a further revision and upgrade of HSPA, can provide theoretical peak data rates up to 168 Mbit/s in the downlink and 22 Mbit/s in the uplink, using a combination of air interface improvements as well as multi-carrier HSPA and MIMO. Technically, though, MIMO and DC-HSPA can be used without the "+" enhancements of HSPA+. The CDMA2000 system, or IS-2000, including CDMA2000 1x and CDMA2000 High Rate Packet Data (or EVDO), standardized by 3GPP2 (differing from the 3GPP), evolving from the original IS-95 CDMA system, is used especially in North America, China, India, Pakistan, Japan, South Korea, Southeast Asia, Europe, and Africa. Overview: CDMA2000 1x Rev. E has three times the voice capacity of Rev. 0, EVDO Rev. B offers downstream peak rates of 14.7 Mbit/s, and Rev. C enhanced the user experience on existing and new terminals. While DECT cordless phones and Mobile WiMAX standards formally also fulfill the IMT-2000 requirements, they are not usually considered 3G due to their rarity and unsuitability for use with mobile phones. Overview: Break-up of 3G systems The 3G (UMTS and CDMA2000) research and development projects started in 1992.
In 1999, the ITU approved five radio interfaces for IMT-2000 as part of the ITU-R M.1457 Recommendation; WiMAX was added in 2007. There are evolutionary standards (EDGE and CDMA) that are backward-compatible extensions to pre-existing 2G networks, as well as revolutionary standards that require all-new network hardware and frequency allocations. The latter group is the UMTS family, which consists of standards developed for IMT-2000, as well as the independently developed standards DECT and WiMAX, which were included because they fit the IMT-2000 definition. Cell phones typically use UMTS in combination with 2G GSM standards and bandwidths, but do not support EDGE. Overview: While EDGE fulfills the 3G specifications, most GSM/UMTS phones report EDGE ("2.75G") and UMTS ("3G") functionality. History: 3G technology was the result of research and development work carried out by the International Telecommunication Union (ITU) in the early 1980s. 3G specifications and standards were developed over fifteen years. The technical specifications were made available to the public under the name IMT-2000. The communication spectrum between 400 MHz and 3 GHz was allocated for 3G. Both governments and communication companies approved the 3G standard. The first pre-commercial 3G network was launched by NTT DoCoMo in Japan in 1998, branded FOMA. It was first available in May 2001 as a pre-release (test) of W-CDMA technology. The first commercial launch of 3G was also by NTT DoCoMo in Japan on 1 October 2001, although it was initially somewhat limited in scope; broader availability of the system was delayed by apparent concerns over its reliability. The first European pre-commercial network was a UMTS network on the Isle of Man by Manx Telecom, the operator then owned by British Telecom, and the first commercial network (also UMTS/W-CDMA) in Europe was opened for business by Telenor in December 2001, with no commercial handsets and thus no paying customers. History: The first network to go commercially live was by SK Telecom in South Korea on the CDMA-based 1xEV-DO technology in January 2002. By May 2002, the second South Korean 3G network was launched by KT on EV-DO, and thus the South Koreans were the first to see competition among 3G operators. History: The first commercial United States 3G network was by Monet Mobile Networks, on CDMA2000 1x EV-DO technology, but the network provider later shut down operations. The second 3G network operator in the US was Verizon Wireless in July 2002, also on CDMA2000 1x EV-DO. AT&T Mobility also operated a true 3G UMTS network, having completed its upgrade to HSUPA. History: The first commercial United Kingdom 3G network was started by Hutchison Telecom, which was originally behind Orange S.A. In 2003, it announced the first commercial third-generation (3G) mobile phone network in the UK. History: The first pre-commercial demonstration network in the southern hemisphere was built in Adelaide, South Australia, by m.Net Corporation in February 2002 using UMTS on 2100 MHz. This was a demonstration network for the 2002 IT World Congress. The first commercial 3G network was launched by Hutchison Telecommunications, branded as Three or "3", in June 2003. In India, on 11 December 2008, the first 3G mobile and internet services were launched by a state-owned company, Mahanagar Telephone Nigam Limited (MTNL), within the metropolitan cities of Delhi and Mumbai. After MTNL, another state-owned company, Bharat Sanchar Nigam Limited (BSNL), began deploying 3G networks country-wide.
History: Emtel launched the first 3G network in Africa. History: Adoption Japan was one of the first countries to adopt 3G, the reason being the process of 3G spectrum allocation, which in Japan was awarded without much upfront cost. The frequency spectrum was allocated in the US and Europe by auction, requiring a huge initial investment from any company wishing to provide 3G services. European companies collectively paid over 100 billion dollars in their spectrum auctions. Nepal Telecom was the first operator in southern Asia to adopt 3G service; even so, 3G was relatively slow to be adopted in Nepal. In some instances, 3G networks do not use the same radio frequencies as 2G, so mobile operators must build entirely new networks and license entirely new frequencies, especially to achieve high data transmission rates. Other countries' delays were due to the expense of upgrading transmission hardware, especially for UMTS, whose deployment required the replacement of most broadcast towers. Due to these issues and difficulties with deployment, many carriers delayed or were unable to acquire these updated capabilities. History: In December 2007, 190 3G networks were operating in 40 countries and 154 HSDPA networks were operating in 71 countries, according to the Global Mobile Suppliers Association (GSA). In Asia, Europe, Canada, and the US, telecommunication companies use W-CDMA technology, with the support of around 100 terminal designs, to operate 3G mobile networks. History: The roll-out of 3G networks was delayed by the enormous costs of additional spectrum licensing fees in some countries. The license fees in some European countries were particularly high, bolstered by government auctions of a limited number of licenses, sealed-bid auctions, and initial excitement over 3G's potential. This led to a telecoms crash that ran concurrently with similar crashes in the fibre-optic and dot-com fields. History: The 3G standard is perhaps well known because of a massive expansion of the mobile communications market post-2G and advances in the consumer mobile phone. An especially notable development during this time was the smartphone (for example, the iPhone and the Android family), combining the abilities of a PDA with a mobile phone and leading to widespread demand for mobile internet connectivity. 3G also introduced the term "mobile broadband", because its speed and capability made it a viable alternative for internet browsing, and USB modems connecting to 3G networks, and later 4G, became increasingly common. History: Market penetration By June 2007, the 200 millionth 3G subscriber had been connected, of which 10 million were in Nepal and 8.2 million in India. This 200 million is only 6.7% of the 3 billion mobile phone subscriptions worldwide. (When counting CDMA2000 1x RTT customers—max bitrate 72% of the 200 kbit/s which defines 3G—the total size of the nearly-3G subscriber base was 475 million as of June 2007, which was 15.8% of all subscribers worldwide.) In the countries where 3G was launched first – Japan and South Korea – 3G penetration is over 70%. In Europe the leading country for 3G penetration is Italy, with a third of its subscribers migrated to 3G. Other leading countries for 3G use include Nepal, the UK, Austria, Australia and Singapore, at the 32% migration level. History: According to ITU estimates, as of Q4 2012 there were 2,096 million active mobile-broadband subscribers worldwide out of a total of 6,835 million subscribers—this is just over 30%.
About half of the mobile-broadband subscriptions are for subscribers in developed nations, 934 million out of 1,600 million total, well over 50%. Note, however, that there is a distinction between a phone with mobile-broadband connectivity and a smartphone with a large display and so on: according to the ITU and informatandm.com, the US has 321 million mobile subscriptions, including 256 million that are 3G or 4G, which is both 80% of the subscriber base and 80% of the US population; yet according to ComScore, just a year earlier, in Q4 2011, only about 42% of people surveyed in the US reported that they owned a smartphone. In Japan, 3G penetration was similar, at about 81%, but smartphone ownership was lower, at about 17%. In China, there were 486.5 million 3G subscribers in June 2014, in a population of 1,385,566,537 (2013 UN estimate). History: Decline and decommissions With the increasing adoption of 4G networks across the globe, 3G use has been in decline. Several operators around the world have already shut down or are in the process of shutting down their 3G networks. In several places, 3G is being shut down while its older predecessor 2G is being kept in operation; Vodafone Europe is doing this, citing 2G's usefulness as a low-power fall-back. EE in the UK has indicated that it plans to phase out 3G by 2023, with the spectrum being used to enhance 5G capacity. In the US, Verizon planned to shut down its 3G services at the end of 2020 (later delayed to the end of 2022), while T-Mobile/Sprint planned to do so on 31 March 2022, and AT&T in February 2022. Currently, 3G around the world is declining in availability and support, and technology that depends on 3G will soon become inoperable in many places. For example, the European Union plans to ensure that member countries maintain 2G networks as a fallback, so 3G devices that are backwards compatible with 2G frequencies can continue to be used. However, in countries that plan to decommission 2G networks or have already done so, such as the United States and Singapore, devices supporting only 3G (with backwards compatibility to 2G) will soon no longer be operable. As of February 2022, less than 1% of cell phone customers in the United States used 3G; AT&T offered free replacement devices to some customers in the run-up to its shutdown. Patents: It has been estimated that there are almost 8,000 patents declared essential (FRAND) related to the 483 technical specifications which form the 3GPP and 3GPP2 standards. In 2004, twelve companies accounted for 90% of the patents (Qualcomm, Ericsson, Nokia, Motorola, Philips, NTT DoCoMo, Siemens, Mitsubishi, Fujitsu, Hitachi, InterDigital, and Matsushita). Even then, some patents essential to 3G might not have been declared by their patent holders; it is believed that Nortel and Lucent have undisclosed patents essential to these standards. Furthermore, the existing 3G Patent Platform Partnership patent pool has had little impact on FRAND protection, because it excludes the four largest owners of 3G patents. Features: Data rates The ITU has not provided a clear definition of the data rate that users can expect from 3G equipment or providers. Thus, users sold 3G service may not be able to point to a standard and say that the rates it specifies are not being met.
While stating in commentary that "it is expected that IMT-2000 will provide higher transmission rates: a minimum data rate of 2 Mbit/s for stationary or walking users, and 384 kbit/s in a moving vehicle", the ITU does not actually clearly specify minimum required rates, nor required average rates, nor what modes of the interfaces qualify as 3G, so various data rates are sold as '3G' in the market. Features: In market implementations, 3G downlink data speeds defined by telecom service providers vary depending on the underlying technology deployed: up to 384 kbit/s for UMTS (W-CDMA), up to 7.2 Mbit/s for HSPA, and a theoretical maximum of 21.1 Mbit/s for HSPA+ and 42.2 Mbit/s for DC-HSPA+ (technically 3.5G, but usually clubbed under the trade name of 3G). Compare these data speeds with those of 3.5G and 4G. Features: Security 3G networks offer greater security than their 2G predecessors. By allowing the UE (User Equipment) to authenticate the network it is attaching to, the user can be sure the network is the intended one and not an impersonator. 3G networks use the KASUMI block cipher instead of the older A5/1 stream cipher; however, a number of serious weaknesses in the KASUMI cipher have been identified. In addition to the 3G network infrastructure security, end-to-end security is offered when application frameworks such as IMS are accessed, although this is not strictly a 3G property. Features: Applications of 3G The bandwidth and location information available to 3G devices gave rise to applications not previously available to mobile phone users. It became possible to conveniently surf the internet on a 3G network on the go, and to do many other tasks that had previously been slow and difficult on 2G. Medical devices, fire alarms, and ankle monitors use this network to accomplish their designated tasks alongside mobile phone users. 3G was the first cellular communications network to be used for such a wide variety of tasks, kick-starting the widespread usage of cellular networks. Evolution: Both 3GPP and 3GPP2 are working on extensions to 3G standards that are based on an all-IP network infrastructure and use advanced wireless technologies such as MIMO. These specifications already display features characteristic of IMT-Advanced (4G), the successor of 3G. However, falling short of the bandwidth requirements for 4G (1 Gbit/s for stationary and 100 Mbit/s for mobile operation), these standards are classified as 3.9G or Pre-4G. Evolution: 3GPP plans to meet the 4G goals with LTE Advanced, whereas Qualcomm has halted UMB development in favour of the LTE family. On 14 December 2009, TeliaSonera announced in an official press release that "We are very proud to be the first operator in the world to offer our customers 4G services." With the launch of their LTE network, they initially offered pre-4G (or beyond-3G) services in Stockholm, Sweden and Oslo, Norway.
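To put the downlink figures from the Features section above in perspective, here is a rough Python sketch; the rates are the theoretical peaks quoted above (real-world throughput is lower), and the file size is an arbitrary illustration:

```python
# Back-of-the-envelope download times at the peak downlink rates quoted above.
rates_mbit_s = {
    "UMTS (W-CDMA)": 0.384,
    "HSPA": 7.2,
    "HSPA+": 21.1,
    "DC-HSPA+": 42.2,
}

file_size_mb = 100  # example download size in megabytes (illustrative)
for tech, rate in rates_mbit_s.items():
    seconds = file_size_mb * 8 / rate  # megabytes -> megabits, then divide by rate
    print(f"{tech:>13}: {seconds:8.0f} s for {file_size_mb} MB")
```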
**Sporogenesis** Sporogenesis: Sporogenesis is the production of spores in biology. The term is also used to refer to the process of reproduction via spores. Reproductive spores are formed in eukaryotic organisms, such as plants, algae and fungi, during their normal reproductive life cycle. Dormant spores are formed, for example, by certain fungi and algae, primarily in response to unfavorable growing conditions. Most eukaryotic spores are haploid and form through cell division, though some types are diploid or dikaryotic and form through cell fusion. Reproduction via spores: Reproductive spores are generally the result of cell division, most commonly meiosis (e.g. in plant sporophytes). Sporic meiosis is needed to complete the sexual life cycle of the organisms using it. Reproduction via spores: In some cases, sporogenesis occurs via mitosis (e.g. in some fungi and algae). Mitotic sporogenesis is a form of asexual reproduction. Examples are the conidial fungi Aspergillus and Penicillium, for which mitospore formation appears to be the primary mode of reproduction. Other fungi, such as ascomycetes, utilize both mitotic and meiotic spores. The red alga Polysiphonia alternates between mitotic and meiotic sporogenesis, and both processes are required to complete its complex reproductive life cycle. Reproduction via spores: In the case of dormant spores in eukaryotes, sporogenesis often occurs as a result of fertilization or karyogamy, forming a diploid spore equivalent to a zygote. Therefore, zygospores are the result of sexual reproduction. Reproduction via spores: Reproduction via spores involves the spreading of the spores by water or air. Algae and some fungi (chytrids) often use motile zoospores that can swim to new locations before developing into sessile organisms. Airborne spores are obvious in fungi, for example when they are released from puffballs. Other fungi have more active spore dispersal mechanisms. For example, the fungus Pilobolus can shoot its sporangia towards light. Plant spores designed for dispersal are also referred to as diaspores. Plant spores are most obvious in the reproduction of ferns and mosses. However, they also exist in flowering plants, where they develop hidden inside the flower. For example, the pollen grains of flowering plants develop out of microspores produced in the anthers. Reproduction via spores: Reproductive spores grow into multicellular haploid individuals or sporelings. In heterosporous organisms, two types of spores exist: microspores give rise to males and megaspores to females. In homosporous organisms, all spores look alike and grow into individuals carrying reproductive parts of both sexes. Formation of reproductive spores: Sporogenesis occurs in reproductive structures termed sporangia. The process involves sporogenous cells (sporocytes, also called spore mother cells) undergoing cell division to give rise to spores. In meiotic sporogenesis, a diploid spore mother cell within the sporangium undergoes meiosis, producing a tetrad of haploid spores. In organisms that are heterosporous, two types of spores occur: microsporangia produce male microspores, and megasporangia produce female megaspores. In megasporogenesis, often three of the four spores degenerate after meiosis, whereas in microsporogenesis all four microspores survive. Formation of reproductive spores: In gymnosperms, such as conifers, microspores are produced through meiosis from microsporocytes in microstrobili, or male cones.
In flowering plants, microspores are produced in the anthers of flowers. Each anther contains four pollen sacs, which contain the microsporocytes. After meiosis, each microspore undergoes mitotic cell division, giving rise to multicellular pollen grains (six nuclei in gymnosperms, three nuclei in flowering plants). Formation of reproductive spores: Megasporogenesis occurs in megastrobili in conifers (for example, a pine cone) and inside the ovule in the flowers of flowering plants. A megasporocyte inside a megasporangium or ovule undergoes meiosis, producing four megaspores. Only one is a functional megaspore, whereas the others remain nonfunctional or degenerate. The megaspore undergoes several mitotic divisions to develop into a female gametophyte (for example, the seven-cell/eight-nucleate embryo sac in flowering plants). Formation of reproductive spores: Mitospore formation Some fungi and algae produce mitospores through mitotic cell division within a sporangium. In fungi, such mitospores are referred to as conidia. Formation of dormant spores: Some algae and fungi form resting spores made to survive unfavorable conditions. Typically, a change in the environment from favorable to unfavorable growing conditions will trigger a switch from asexual to sexual reproduction in these organisms. The resulting spores are protected by the formation of a thick cell wall and can withstand harsh conditions such as drought or extreme temperatures. Examples are chlamydospores, teliospores, zygospores, and myxospores. Similar survival structures produced by some bacteria are known as endospores. Formation of dormant spores: Chlamydospore and teliospore formation Chlamydospores are generally multicellular, asexual structures. Teliospores are a form of chlamydospore produced through the fusion of cells or hyphae in which the nuclei of the fused cells stay separate. These nuclei undergo karyogamy and meiosis upon germination of the spore. Formation of dormant spores: Zygospore, oospore and auxospore formation Zygospores are formed in certain fungi (zygomycota, for example Rhizopus) and some algae (for example Chlamydomonas). The zygospore forms through the isogamic fusion of two cells (motile single cells in Chlamydomonas) or sexual conjugation between two hyphae (in zygomycota). Plasmogamy is followed by karyogamy; therefore, zygospores are diploid (zygotes). They undergo zygotic meiosis upon germinating. Formation of dormant spores: In oomycetes, the zygote forms through the fertilization of an egg cell with a sperm nucleus and enters a resting stage as a diploid, thick-walled oospore. The germinating oospore undergoes mitosis and gives rise to diploid hyphae, which reproduce asexually via mitotic zoospores as long as conditions are favorable. In diatoms, fertilization gives rise to a zygote termed an auxospore. Besides sexual reproduction and serving as a resting stage, the function of an auxospore is the restoration of the original cell size, as diatoms get progressively smaller during mitotic cell division. Auxospores divide by mitosis. Endospore formation The term sporogenesis can also refer to endospore formation in bacteria, which allows the cells to survive unfavorable conditions. Endospores are not reproductive structures, and their formation does not require cell fusion or division. Instead, they form through the production of an encapsulating spore coat within the spore-forming cell. Parts of the spore: The spore-bearing 'plant' has several named parts.
The structure enclosing a group of spores is called a sporangium.
**Estrogen receptor beta** Estrogen receptor beta: Estrogen receptor beta (ERβ), also known as NR3A2 (nuclear receptor subfamily 3, group A, member 2), is one of two main types of estrogen receptor—a nuclear receptor which is activated by the sex hormone estrogen. In humans ERβ is encoded by the ESR2 gene. Function: ERβ is a member of the family of estrogen receptors and the superfamily of nuclear receptor transcription factors. The gene product contains an N-terminal DNA-binding domain and a C-terminal ligand-binding domain and is localized to the nucleus, cytoplasm, and mitochondria. Upon binding to 17β-estradiol, estriol or related ligands, the encoded protein forms homodimers or heterodimers with estrogen receptor α that interact with specific DNA sequences to activate transcription. Some isoforms dominantly inhibit the activity of other estrogen receptor family members. Several alternatively spliced transcript variants of this gene have been described, but the full-length nature of some of these variants has not been fully characterized. ERβ may inhibit cell proliferation and opposes the actions of ERα in reproductive tissue. ERβ may also have an important role in the adaptive function of the lung during pregnancy. ERβ is a potent tumor suppressor and plays a crucial role in many cancer types, such as prostate cancer and ovarian cancer. Function: Mammary gland ERβ knockout mice show normal mammary gland development at puberty and are able to lactate normally. The mammary glands of adult virgin female ERβ knockout mice are indistinguishable from those of age-matched wild-type virgin female mice. This is in contrast to ERα knockout mice, in which a complete absence of mammary gland development at puberty and thereafter is observed. Administration of the selective ERβ agonist ERB-041 to immature ovariectomized female rats produced no observable effects in the mammary glands, further indicating that ERβ is non-mammotrophic. Although ERβ is not required for pubertal development of the mammary glands, it may be involved in terminal differentiation in pregnancy, and may also be necessary to maintain the organization and differentiation of mammary epithelium in adulthood. In old female ERβ knockout mice, severe cystic mammary disease that is similar in appearance to postmenopausal mastopathy develops, whereas this does not occur in aged wild-type female mice. However, ERβ knockout mice are not only deficient in ERβ signaling in the mammary glands, but also have deficient progesterone exposure due to impairment of corpora lutea formation. This complicates attribution of the preceding findings to mammary ERβ signaling. Selective ERβ agonism with diarylpropionitrile (DPN) has been found to counteract the proliferative effects in the mammary glands of selective ERα agonism with propylpyrazoletriol (PPT) in ovariectomized postmenopausal female rats. Similarly, overexpression of ERβ via lentiviral infection in mature virgin female rats decreases mammary proliferation. ERα signaling has proliferative effects in both normal breast and breast cancer cell lines, whereas ERβ has generally antiproliferative effects in such cell lines. However, ERβ has been found to have proliferative effects in some breast cell lines. Expression of ERα and ERβ in the mammary gland has been found to vary throughout the menstrual cycle and in the ovariectomized state in female rats. Whereas mammary ERα in rhesus macaques is downregulated in response to increased estradiol levels, expression of ERβ in the mammary glands is not.
Expression of ERα and ERβ in the mammary glands also differs throughout life in female mice. Mammary ERα expression is higher and mammary ERβ expression lower in younger female mice, while mammary ERα expression is lower and mammary ERβ expression higher in older female mice, as well as in parous female mice. Mammary proliferation and estrogen sensitivity are higher in young female mice than in old or parous female mice, particularly during pubertal mammary gland development. Tissue distribution: ERβ is expressed in many tissues, including the uterus, blood monocytes and tissue macrophages, colonic and pulmonary epithelial cells, and prostatic epithelium, as well as in malignant counterparts of these tissues. ERβ is also found throughout the brain, at different concentrations in different neuron clusters. ERβ is highly expressed in normal breast epithelium, although its expression declines with cancer progression. ERβ is expressed in all subtypes of breast cancer. Controversy regarding ERβ protein expression has hindered study of ERβ, but highly sensitive monoclonal antibodies have been produced and well validated to address these issues. ERβ abnormalities: ERβ function is related to various cardiovascular targets, including ATP-binding cassette transporter A1 (ABCA1) and apolipoprotein A1 (ApoA-1). Polymorphisms may affect ERβ function and lead to altered responses in postmenopausal women receiving hormone replacement therapy. Abnormalities in gene expression associated with ERβ have also been linked to autism spectrum disorder. Disease: Cardiovascular disease Mutations in ERβ have been shown to influence cardiomyocytes, the cells that comprise the largest part of the heart, and can lead to an increased risk of cardiovascular disease (CVD). There is a disparity in the prevalence of CVD between pre- and post-menopausal women, and the difference can be attributed to estrogen levels. Many forms of the ERβ receptor exist to help regulate gene expression and subsequent health in the body, and binding of 17βE2 (a naturally occurring estrogen) specifically improves cardiac metabolism. The heart uses a great deal of energy in the form of ATP to pump blood and maintain physiological requirements, and 17βE2 helps by increasing myocardial ATP levels and respiratory function. In addition, 17βE2 can alter myocardial signaling pathways and stimulate myocyte regeneration, which can aid in inhibiting myocyte cell death. The ERβ signaling pathway plays a role in both vasodilation and arterial dilation, which contributes to a healthy heart rate and a decrease in blood pressure. This regulation can increase endothelial function and arterial perfusion, both of which are important to myocyte health. Thus, alterations in this signaling pathway due to ERβ mutation could lead to myocyte cell death from physiological stress. While ERα has a more profound role in regeneration after myocyte cell death, ERβ can still help by increasing endothelial progenitor cell activation and subsequent cardiac function. Disease: Alzheimer's disease Genetic variation in ERβ is both sex- and age-dependent, and ERβ polymorphism can lead to accelerated brain aging, cognitive impairment, and the development of AD pathology. As with CVD, post-menopausal women have an increased risk of developing Alzheimer's disease (AD) due to a loss of estrogen, which affects proper aging of the hippocampus, neural survival and regeneration, and amyloid metabolism.
ERβ mRNA is highly expressed in the hippocampal formation, an area of the brain that is associated with memory. This expression contributes to increased neuronal survival and helps protect against neurodegenerative diseases such as AD. The pathology of AD is also associated with accumulation of amyloid beta peptide (Aβ). While a proper concentration of Aβ in the brain is important for healthy functioning, too much can lead to cognitive impairment. ERβ helps control Aβ levels by regulating the protein it is derived from, β-amyloid precursor protein. ERβ up-regulates insulin-degrading enzyme (IDE), which leads to β-amyloid degradation when accumulation levels begin to rise. In AD, however, lack of ERβ causes a decrease in this degradation and an increase in plaque build-up. ERβ also plays a role in regulating APOE, a risk factor for AD that redistributes lipids across cells. APOE expression in the hippocampus is specifically regulated by 17βE2, affecting learning and memory in individuals afflicted with AD. Thus, estrogen therapy via an ERβ-targeted approach could be used as a prevention method for AD, either before or at the onset of menopause. Interactions between ERα and ERβ can lead to antagonistic actions in the brain, so an ERβ-targeted approach can increase therapeutic neural responses independently of ERα. Therapeutically, ERβ could be used in both men and women to regulate plaque formation in the brain. Neuroprotective benefits: Synaptic strength and plasticity ERβ levels can dictate both synaptic strength and neuroplasticity through modifications of neural structure. Variations in endogenous estrogen levels cause changes in dendritic architecture in the hippocampus, which affects neural signaling and plasticity. Specifically, lower estrogen levels lead to decreased dendritic spines and improper signaling, inhibiting plasticity of the brain. Treatment with 17βE2 can reverse this effect, giving it the ability to modify hippocampal structure. As a result of the relationship between dendritic architecture and long-term potentiation (LTP), ERβ can enhance LTP and lead to an increase in synaptic strength. Furthermore, 17βE2 promotes neurogenesis in developing hippocampal neurons and in neurons in the subventricular zone and dentate gyrus of the adult human brain. Specifically, ERβ increases the proliferation of progenitor cells to create new neurons, and this proliferation can be increased later in life through 17βE2 treatment. Ligands: Agonists Non-selective: endogenous estrogens (e.g., estradiol, estrone, estriol, estetrol), natural estrogens (e.g., conjugated estrogens), and synthetic estrogens (e.g., ethinylestradiol, diethylstilbestrol). Selective: agonists of ERβ selective over ERα include: Antagonists Non-selective: selective estrogen receptor modulators (e.g., tamoxifen, raloxifene) and antiestrogens (e.g., fulvestrant, ICI-164384). Selective: antagonists of ERβ selective over ERα include PHTPP and (R,R)-tetrahydrochrysene ((R,R)-THC), which is in fact not selective over ERα, but rather acts as an agonist instead of an antagonist at ERα. Affinities Interactions: Estrogen receptor beta has been shown to interact with:
**Adaptive representation** Adaptive representation: Adaptive representation is an extension by Francis Heylighen of Kant's theory of knowledge. Adaptive representation: According to Kant, perception passes through the filters of the mind that observes the phenomena. On this view, there exist in the human mind invariant, a priori principles of experience. For example, one may have imprinted in the brain a Cartesian representation of space, a notion of time, color separation, and so on. This may be called "static representation". Adaptive representation: Heylighen has proposed a revision of these Kantian ideas in which these principles are no longer assumed to be invariant and necessary. Instead, alternative principles exist for the organization of experience in adaptive representations. This opens a path for new investigations in the philosophy of mind and human cognition.
**Alclad** Alclad: Alclad is a corrosion-resistant aluminium sheet formed from high-purity aluminium surface layers metallurgically bonded (rolled onto) a high-strength aluminium alloy core material. It has a melting point of about 500 °C (932 °F). Alclad is a trademark of Alcoa, but the term is also used generically. Alclad: Since the late 1920s, Alclad has been produced as an aviation-grade material, being first used by the sector in the construction of the ZMC-2 airship. The material has significantly more resistance to corrosion than most aluminium-based alloys, for only a modest increase in weight, making Alclad attractive for building various elements of aircraft, such as the fuselage, structural members, skin, and cowling. Accordingly, it became a relatively popular material for aircraft manufacturing. Details: The material was described in NACA-TN-259 of August 1927 as "a new corrosion resistant aluminium product which is markedly superior to the present strong alloys. Its use should result in greatly increased life of a structural part. Alclad is a heat-treated aluminium, copper, manganese, magnesium alloy that has the corrosion resistance of pure metal at the surface and the strength of the strong alloy underneath. Of particular importance is the thorough character of the union between the alloy and the pure aluminium. Preliminary results of salt spray tests (24 weeks of exposure) show changes in tensile strength and elongation of Alclad 17ST, when any occurred, to be so small as to be well within the limits of experimental error." In applications involving aircraft construction, Alclad has proven to have increased resistance to corrosion at the expense of increased weight when compared to sheet aluminium. As pure aluminium possesses relatively greater resistance to corrosion than the majority of aluminium alloys, it was soon recognised that a thin coating of pure aluminium over the exterior surface of those alloys would take advantage of the superior qualities of both materials. Thus, a key advantage of Alclad over most aluminium alloys is its high corrosion resistance. However, considerable care must be taken while working on an Alclad-covered exterior surface, such as while cleaning the skin of an aircraft, to avoid scarring the surface, which exposes the vulnerable alloy underneath and prematurely ages those elements. Due to its relatively shiny natural finish, it is often considered cosmetically pleasing when used for external elements, particularly during restoration efforts. Some fabrication techniques, such as welding, have been found to be unsuitable for use with Alclad. Mild cleaners with a neutral pH value and finer abrasives are recommended for cleaning and polishing Alclad surfaces. It is common for waterproof wax and other inhibitive coverings to be applied to further reduce corrosion. In the twenty-first century, research and evaluation have been underway into new coatings and application techniques. History: Alclad sheeting has become a widely used material within the aviation industry for the construction of aircraft due to its favourable qualities, such as high fatigue resistance and strength. During the first half of the twentieth century, substantial studies were conducted into the corrosion qualities of various lightweight aluminium alloys for aviation purposes. The first aircraft to be constructed from Alclad was the all-metal US Navy airship ZMC-2, which was built in 1927 at Naval Air Station Grosse Ile.
Prior to this, aluminium had been used on the pioneering zeppelins constructed by Ferdinand Zeppelin. Alclad has been most commonly present in certain elements of an aircraft, including the fuselage, structural members, skin, and cowls. The aluminium alloy from which Alclad is derived has become one of the most commonly used of all aluminium-based alloys. While unclad aluminium, which is lighter than Alclad, has also continued to be used extensively on modern aircraft, it is more prone to corrosion; the choice between the two materials is often determined by the specific components or elements being made. In aviation-grade Alclad, the thickness of the outer cladding layer typically varies between 1% and 15% of the total thickness.
**Cold shield** Cold shield: A cold shield is a device to protect an object from unwanted heating by thermal radiation or light. Usually it is a cooled object with low absorption and high reflectivity. Cold shields can be found in molecular beam epitaxy chambers, where they protect the growth areas from thermal radiation from hot sources. In cryostats, a radiation shield protects a sample from infrared radiation. An infrared detector is protected from thermal background radiation outside its optical field of view; these shields are usually cooled to the same temperature as the detector. Cold shields are typically used in IR optical devices for military, scientific and industrial applications to protect IR sensors from stray IR radiation (lowering noise figures). Most cold shield applications require near-instantaneous cooling, making a low mass of the structure very important. Therefore, electroforming is the preferred method of fabricating cold shields.
**Tunable diode laser absorption spectroscopy** Tunable diode laser absorption spectroscopy: Tunable diode laser absorption spectroscopy (TDLAS, sometimes referred to as TDLS, TLS or TLAS) is a technique for measuring the concentration of certain species, such as methane and water vapor, in a gaseous mixture using tunable diode lasers and laser absorption spectrometry. The advantage of TDLAS over other techniques for concentration measurement is its ability to achieve very low detection limits (of the order of ppb). Apart from concentration, it is also possible to determine the temperature, pressure, velocity and mass flux of the gas under observation. TDLAS is by far the most common laser-based absorption technique for quantitative assessment of species in the gas phase. Working: A basic TDLAS setup consists of a tunable diode laser light source, transmitting (i.e. beam-shaping) optics, an optically accessible absorbing medium, receiving optics and one or more detectors. The emission wavelength of the tunable diode laser, e.g. a VCSEL or DFB laser, is tuned over a characteristic absorption line of a species in the gas in the path of the laser beam. This causes a reduction of the measured signal intensity due to absorption, which can be detected by a photodiode and then used to determine the gas concentration and other properties, as described later. Different diode lasers are used based on the application and the range over which tuning is to be performed. Typical examples are InGaAsP/InP (tunable over 900 nm to 1.6 μm), InGaAsP/InAsP (tunable over 1.6 μm to 2.2 μm), etc. These lasers can be tuned either by adjusting their temperature or by changing the injection current density into the gain medium. While temperature changes allow tuning over about 100 cm⁻¹, the approach is limited to slow tuning rates (a few hertz) by the thermal inertia of the system. On the other hand, adjusting the injection current can provide tuning at rates as high as ~10 GHz, but it is restricted to a smaller range (about 1 to 2 cm⁻¹). The typical laser linewidth is of the order of 10⁻³ cm⁻¹ or smaller. Additional tuning and linewidth-narrowing methods include the use of extracavity dispersive optics. Basic principles: Concentration measurement The basic principle behind the TDLAS technique is simple. The focus here is on a single absorption line in the absorption spectrum of a particular species of interest. To start, the wavelength of a diode laser is tuned over the absorption line of interest and the intensity of the transmitted radiation is measured. The transmitted intensity can be related to the concentration of the species present by the Beer-Lambert law, which states that when radiation of wavenumber ν~ passes through an absorbing medium, the intensity variation along the path of the beam is given by

I(ν~) = I0(ν~) exp(−σ(ν~)NL)

where I(ν~) is the transmitted intensity of the radiation after it has traversed a distance L through the medium, I0(ν~) is the initial intensity of the radiation, α(ν~) = σ(ν~)N = S(T)ϕ(ν~−ν~0) is the absorbance of the medium, σ(ν~) is the absorption cross-section of the absorbing species, N is the number density of the absorbing species, S(T) is the line strength (i.e. the total absorption per molecule) of the absorbing species at temperature T, ϕ(ν~−ν~0) is the lineshape function for the particular absorption line (sometimes also represented by g(ν~−ν~0)), and ν~0 is the center frequency of the spectrum.
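A minimal numeric sketch of the Beer-Lambert relation above, inverting a measured transmission to recover the number density N; the cross-section, path length and transmission values are hypothetical, chosen only for illustration:

```python
import math

# Beer-Lambert: I = I0 * exp(-sigma * N * L); solve for N given I/I0.
sigma = 1.0e-20      # absorption cross-section at line center, cm^2 (hypothetical)
L = 100.0            # absorption path length, cm (hypothetical)
transmission = 0.95  # measured I/I0 at line center (hypothetical)

# absorbance = sigma * N * L  =>  N = -ln(I/I0) / (sigma * L)
N = -math.log(transmission) / (sigma * L)
print(f"Number density: {N:.3e} molecules/cm^3")  # ~5.1e16 cm^-3
```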
Basic principles: Temperature measurement The above relation requires that the temperature T of the absorbing species be known. However, it is possible to overcome this difficulty and measure the temperature simultaneously. There are a number of ways to do so. A widely applied method uses the fact that the line strength S(T) is a function of temperature alone. Here two different absorption lines of the same species are probed while sweeping the laser across the absorption spectrum; the ratio of the integrated absorbances is then a function of temperature alone:

S1(T)/S2(T) = [S1(T0)/S2(T0)] exp[−(hc(E1−E2)/k)(1/T − 1/T0)]

where T0 is some reference temperature at which the line strengths are known, and ΔE = (E1−E2) is the difference between the lower energy levels involved in the transitions for the lines being probed. Another way to measure the temperature is by relating the FWHM of the probed absorption line to the Doppler line width of the species at that temperature. This is given by

Δν~D = ν~0 √(8kT ln2 / (mc²)) = 7.1623×10⁻⁷ ν~0 √(T/M)

where m is the mass of one molecule of the species and M is the molecular weight of the species. Note: in the last expression, T is in kelvins and M is in g/mol. Basic principles: However, this method can be used only when the gas pressure is low (of the order of a few mbar). At higher pressures (tens of millibars or more), pressure (collisional) broadening becomes important and the lineshape is no longer a function of temperature alone. Basic principles: Velocity measurement The effect of a mean flow of the gas in the path of the laser beam can be seen as a shift in the absorption spectrum, also known as a Doppler shift. The shift in the frequency spectrum is related to the mean flow velocity V by

Δν~ = ν~0 (V/c) cos θ

where θ is the angle between the flow direction and the direction of the laser beam. Note: this Doppler shift Δν~ is not the same quantity as the Doppler width Δν~D introduced above. The shift is usually very small (3×10⁻⁵ cm⁻¹ per m/s for near-IR diode lasers) and the shift-to-width ratio is of the order of 10⁻⁴. Limitations and means of improvement: The main disadvantage of absorption spectrometry (AS), as well as laser absorption spectrometry (LAS) in general, is that it relies on a measurement of a small change in a signal on top of a large background. Any noise introduced by the light source or the optical system will deteriorate the detectability of the technique. The sensitivity of direct absorption techniques is therefore often limited to an absorbance of ~10⁻³, far from the shot-noise level, which for single-pass direct AS (DAS) is in the 10⁻⁷–10⁻⁸ range. Since this is insufficient for many types of applications, AS is seldom used in its simplest mode of operation. Limitations and means of improvement: There are basically two ways to improve on the situation; one is to reduce the noise in the signal, the other is to increase the absorption. The former can be achieved by the use of a modulation technique, whereas the latter can be obtained by placing the gas inside a cavity in which the light passes through the sample several times, thus increasing the interaction length. If the technique is applied to trace species detection, it is also possible to enhance the signal by performing detection at wavelengths where the transitions have larger line strengths, e.g. using fundamental vibrational bands or electronic transitions.
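Before turning to those improvements, here is a small numeric sketch of the temperature relations above; the example line parameters (lower-state energy difference, line position, species) are hypothetical, the 7.1623×10⁻⁷ coefficient is the one quoted in the text, and 1.4388 cm·K is the standard value of hc/k:

```python
import math

# Two-line thermometry: the line-strength ratio depends only on temperature,
#   S1(T)/S2(T) = [S1(T0)/S2(T0)] * exp[-(h*c*dE/k) * (1/T - 1/T0)]
HC_OVER_K = 1.4388  # h*c/k in cm*K (second radiation constant)

def strength_ratio(T, T0, R0, dE_cm):
    """Line-strength ratio at temperature T, given the ratio R0 at T0.
    dE_cm is the lower-state energy difference in cm^-1 (hypothetical here)."""
    return R0 * math.exp(-HC_OVER_K * dE_cm * (1.0 / T - 1.0 / T0))

# Doppler FWHM from the text: 7.1623e-7 * nu0 * sqrt(T/M), T in K, M in g/mol
def doppler_fwhm(nu0_cm, T, M):
    return 7.1623e-7 * nu0_cm * math.sqrt(T / M)

print(f"Ratio at 600 K: {strength_ratio(600.0, 296.0, 1.0, 500.0):.3f}")
print(f"Doppler FWHM:   {doppler_fwhm(7181.0, 296.0, 18.0):.4f} cm^-1")  # ~0.021
```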
Limitations and means of improvement: Modulation techniques Modulation techniques make use of the fact that technical noise usually decreases with increasing frequency (which is why it is often referred to as 1/f noise) and improve the signal-to-noise ratio by encoding and detecting the absorption signal at a high frequency, where the noise level is low. The most common modulation techniques are wavelength modulation spectroscopy (WMS) and frequency modulation spectroscopy (FMS). Limitations and means of improvement: In WMS, the wavelength of the light is continuously scanned across the absorption profile, and the signal is detected at a harmonic of the modulation frequency. Limitations and means of improvement: In FMS, the light is modulated at a much higher frequency but with a lower modulation index. As a result, a pair of sidebands separated from the carrier by the modulation frequency appears, giving rise to a so-called FM triplet. The signal at the modulation frequency is a sum of the beat signals of the carrier with each of the two sidebands. Since these two sidebands are fully out of phase with each other, the two beat signals cancel in the absence of absorbers. However, an alteration of either of the sidebands, by absorption or dispersion, or a phase shift of the carrier, will give rise to an imbalance between the two beat signals, and therefore a net signal. Limitations and means of improvement: Although in theory baseline-free, both modulation techniques are usually limited by residual amplitude modulation (RAM), either from the laser or from multiple reflections in the optical system (etalon effects). If these noise contributions are held low, the sensitivity can be brought into the 10⁻⁵–10⁻⁶ range or even better. Limitations and means of improvement: In general, the absorption imprint is generated by straight-line propagation of light through a volume containing the specific gas. To further enhance the signal, the path length of the light can be increased with multi-pass cells. There is, however, a variant of the WMS technique that utilizes the narrow-line absorption of gases for sensing even when the gases are situated in closed compartments (e.g. pores) inside solid material. The technique is referred to as gas in scattering media absorption spectroscopy (GASMAS). Limitations and means of improvement: Cavity-enhanced absorption spectrometry (CEAS) The second way of improving the detectability of the TDLAS technique is to extend the interaction length. This can be obtained by placing the species inside a cavity in which the light bounces back and forth many times, whereby the interaction length can be increased considerably. This has led to a group of techniques denoted cavity-enhanced AS (CEAS). The cavity can either be placed inside the laser, giving rise to intracavity AS, or outside, when it is referred to as an external cavity. Although the former technique can provide a high sensitivity, its practical applicability is limited because of the non-linear processes involved. Limitations and means of improvement: External cavities can either be of multi-pass type, i.e. Herriott or White cells, of non-resonant type (off-axis alignment), or of resonant type, most often working as a Fabry–Pérot (FP) etalon. Multi-pass cells, which can typically provide an enhanced interaction length of up to ~2 orders of magnitude, are nowadays commonly used together with TDLAS.
Limitations and means of improvement: Resonant cavities can provide a much larger path-length enhancement, on the order of the finesse of the cavity, F, which for a balanced cavity with highly reflecting mirrors with reflectivities of ~99.99–99.999% can be ~10⁴ to 10⁵. If all of this increase in interaction length can be utilized efficiently, a significant increase in detectability is possible. A problem with resonant cavities is that a high-finesse cavity has very narrow cavity modes, often in the low kHz range (the width of the cavity modes is given by FSR/F, where FSR is the free spectral range of the cavity, given in turn by c/2L, where c is the speed of light and L is the cavity length). Since cw lasers often have free-running linewidths in the MHz range, and pulsed lasers even larger, it is non-trivial to couple laser light effectively into a high-finesse cavity. Limitations and means of improvement: The most important resonant CEAS techniques are cavity ring-down spectrometry (CRDS), integrated cavity output spectroscopy (ICOS), also called cavity-enhanced absorption spectroscopy (CEAS), phase-shift cavity ring-down spectroscopy (PS-CRDS), and continuous-wave cavity-enhanced absorption spectrometry (cw-CEAS), either with optical locking, referred to as OF-CEAS, as demonstrated by Romanini et al., or with electronic locking, as is done for example in the noise-immune cavity-enhanced optical-heterodyne molecular spectroscopy (NICE-OHMS) technique, or with a combination of frequency modulation and optical-feedback locking, referred to as FM-OF-CEAS. The most important non-resonant CEAS techniques are off-axis ICOS (OA-ICOS) or off-axis CEAS (OA-CEAS), wavelength-modulation off-axis CEAS (WM-OA-CEAS), and off-axis phase-shift cavity-enhanced absorption spectroscopy (off-axis PS-CEAS). These resonant and non-resonant cavity-enhanced absorption techniques have so far not been used that frequently with TDLAS. However, since the field is developing fast, they will presumably be used more with TDLAS in the future. Applications: Freeze-drying (lyophilization) cycle development and optimization for pharmaceuticals. Flow diagnostics in hypersonic/re-entry speed research facilities and scramjet combustors. Applications: Oxygen tunable diode laser spectrometers play an important role in safety applications in a wide range of industrial processes; for this reason, TDLAS instruments are often an integral part of modern chemical plants. The fast response time compared to other technologies for measuring gas composition, and the immunity to many background gases and environmental conditions, make TDL technology a commonly selected technology for monitoring combustible gases in process environments. This technology is employed on flares, in vessel headspace, and in other locations where explosive atmospheres must be prevented from forming. According to a 2018 research study, TDL technology is the fourth most commonly selected technology for gas analysis in chemical processing.
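As a closing numeric illustration of the cavity relations quoted above (FSR = c/2L, mode width = FSR/F), with illustrative cavity parameters and the standard finesse expression F = π√R/(1−R):

```python
import math

C = 2.99792458e10  # speed of light, cm/s

L_cm = 50.0  # cavity length (illustrative)
R = 0.9999   # mirror reflectivity, ~99.99% (illustrative)

fsr_hz = C / (2 * L_cm)                     # free spectral range, c/2L
finesse = math.pi * math.sqrt(R) / (1 - R)  # finesse of a balanced cavity
mode_width_hz = fsr_hz / finesse            # cavity mode width, FSR/F

print(f"FSR:        {fsr_hz / 1e6:.0f} MHz")          # ~300 MHz
print(f"Finesse:    {finesse:.0f}")                   # ~31,400 (~10^4 range)
print(f"Mode width: {mode_width_hz / 1e3:.1f} kHz")   # ~9.5 kHz, the low-kHz range
```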
**Boiler stay** Boiler stay: A boiler stay is an internal structural element of a boiler. Where the shell of a boiler or other pressure vessel is made of cylindrical or part-spherical elements, the internal pressure will be contained without distortion. However, flat surfaces of any significant size will distort under pressure, tending to bulge. Stays of various types are used to support these surfaces by tying them together to resist the pressure. Some boiler configurations require a great deal of staying. A large locomotive boiler may require several thousand stays to support the firebox. In water-tube boilers, stays were sometimes used between the main chambers, and could themselves be water tubes. Stayless firebox: A cylindrical firebox may be self-supporting without stays because of its shape. A knuckle joint is used in the diagonal stays of a boiler.
**Clarifying agent** Clarifying agent: Clarifying agents are used to remove suspended solids from liquids by inducing flocculation, causing the solids to form larger aggregates that can be easily removed after they either float to the surface or sink to the bottom of the containment vessel. Process: Particles finer than 0.1 µm (10⁻⁷ m) in water remain continuously in motion due to electrostatic charge (often negative), which causes them to repel each other. Once their electrostatic charge is neutralized by the use of a coagulant chemical, the finer particles start to collide and agglomerate (collect together) under the influence of Van der Waals forces. These larger and heavier particles are called flocs. Process: Flocculants, or flocculating agents (also known as flocking agents), are chemicals that promote flocculation by causing colloids and other suspended particles in liquids to aggregate, forming a floc. Flocculants are used in water treatment processes to improve the sedimentation or filterability of small particles. For example, a flocculant may be used in swimming pool or drinking water filtration to aid removal of microscopic particles which would otherwise cause the water to be turbid (cloudy) and which would be difficult or impossible to remove by filtration alone. Process: Many flocculants are multivalent cations such as aluminium, iron, calcium or magnesium. These positively charged molecules interact with negatively charged particles and molecules to reduce the barriers to aggregation. In addition, many of these chemicals, under appropriate pH and other conditions such as temperature and salinity, react with water to form insoluble hydroxides which, upon precipitating, link together to form long chains or meshes, physically trapping small particles into the larger floc. Process: Long-chain polymer flocculants, such as modified polyacrylamides, are manufactured and sold by flocculant producers. These can be supplied in dry or liquid form for use in the flocculation process. The most common liquid polyacrylamide is supplied as an emulsion with 10–40% active content, the remainder being a non-aqueous carrier fluid, surfactants and latex. This form allows easy handling of viscous polymers at high concentrations. These emulsion polymers require "activation" – inversion of the emulsion so that the polymer molecules form an aqueous solution. Agents: alum, aluminium chlorohydrate, aluminium sulfate, calcium oxide, calcium hydroxide, iron(II) sulfate (ferrous sulfate), iron(III) chloride (ferric chloride), polyacrylamide, polyDADMAC, sodium aluminate and sodium silicate. The following natural products are used as flocculants: chitosan, isinglass, Moringa oleifera seeds (horseradish tree), gelatin, Strychnos potatorum seeds (nirmali nut tree), guar gum and alginates (brown seaweed extracts).
**Clear cell sarcoma** Clear cell sarcoma: Clear cell sarcoma is a rare form of cancer called a sarcoma. It is known to occur mainly in the soft tissues and dermis. Rare forms were thought to occur in the gastrointestinal tract before they were discovered to be different and redesignated as GNET. Recurrence is common. Clear cell sarcoma's neoplastic cells express the EWSR1-ATF1 fusion gene in a majority of cases, or an EWSR1-CREB1, EWSR1-CREM, or EWSR1-DDIT3 fusion gene in a small subset of cases (see the FET gene family of fusion genes). Clear cell sarcoma of the soft tissues in adults is not related to the pediatric tumor known as clear cell sarcoma of the kidney. Signs and symptoms: It presents as a slow-growing, deeply situated mass that especially affects tendons and aponeuroses. Patients often perceive it as a lump or hard mass. It causes pain or tenderness, but often only once it becomes large enough. This kind of tumor is commonly found in the extremities, especially around the knee, foot and ankle. Patients diagnosed with clear cell sarcoma are usually between the ages of 20 and 40. Pathology: Despite the name clear cell sarcoma, the tumor cells do not necessarily have clear cytoplasm. The lesion has a distinctly nested growth pattern with a mixture of spindle, epithelioid and tumor giant cells. Approximately two thirds of the tumors contain melanin pigment. Clear cell sarcoma, similar to melanoma, shows consistent positivity for S-100, HMB-45, and MITF. Diagnosis: Imaging studies such as X-rays, computed tomography scans, or MRI may be required to diagnose clear cell sarcoma, together with a physical exam. Normally a biopsy is also necessary. Furthermore, a chest CT, a bone scan and positron emission tomography (PET) may be part of the tests in order to evaluate areas where metastases occur. Treatment: Treatment depends upon the site and the extent of the disease. Clear cell sarcoma is usually treated with surgery first, in order to remove the tumor. The surgical procedure is then followed by radiation and sometimes chemotherapy. Few cases of clear cell sarcoma respond to chemotherapy. Several types of targeted therapy that may benefit people with clear cell sarcoma are currently under investigation. Prognosis: When the tumor is large and there is presence of necrosis and local recurrence, the prognosis is poor. Metastasis occurs in more than 50% of cases, and the common sites of occurrence are the bones, lymph nodes and lungs. Five-year survival rates, which are reported to be between 50 and 65%, can be misleading because the disease is prone to late metastasis or recurrence. Ten- and twenty-year survival rates are 33% and 10%, respectively.
**Sodium- and chloride-dependent glycine transporter 1** Sodium- and chloride-dependent glycine transporter 1: Sodium- and chloride-dependent glycine transporter 1, also known as glycine transporter 1, is a protein that in humans is encoded by the SLC6A9 gene and is a promising therapeutic target for the treatment of diabetes and obesity. Selective inhibitors: Elevation of extracellular synaptic glycine concentration by blockade of GlyT1 has been hypothesized to potentiate NMDA receptor function in vivo and to represent a rational approach for the treatment of schizophrenia and cognitive disorders. Several drug candidates have reached clinical trials: ASP2535; bitopertin (RG1678), which has entered phase II trials for the treatment of schizophrenia; iclepertin (BI 425809) by Boehringer Ingelheim, which is thought to improve cognitive impairment due to schizophrenia; Org 25935 (Sch 900435); PF-03463275 (in a phase II trial); pesampator (PF-04958242) by Pfizer; and sarcosine, which is thought to improve cognitive impairment due to schizophrenia.
**Dynamical Theory of Crystal Lattices** Dynamical Theory of Crystal Lattices: Dynamical Theory of Crystal Lattices is a book in solid state physics, authored collaboratively by Max Born and Kun Huang. The book was originally started by Born around 1940, and was finished in the 1950s by Huang in consultation with Born. The text is considered a classical treatise on the subject of lattice dynamics, phonon theory, and elasticity in crystalline solids, but excludes metals and other complex solids with order/disorder phenomena. J. D. Eshelby, Melvin Lax, and A. J. C. Wilson, among several others, reviewed the book in 1955.
**Warm inflation** Warm inflation: In physical cosmology, warm inflation is one of two dynamical realizations of cosmological inflation. The other is the standard scenario, sometimes called cold inflation. In warm inflation, radiation production occurs concurrently with inflationary expansion. This is consistent with the conditions necessary for inflation as given by the Friedmann equations of general relativity, which simply require that the vacuum energy density dominate the energy content of the universe at the time of inflation, and so do not prohibit some radiation from being present. As such, the most general picture of inflation would include a radiation energy density component. The presence of radiation during inflation implies that the inflationary phase could end smoothly in a radiation-dominated era without a distinctly separate reheating phase, thus providing a solution to the graceful exit problem of inflation.
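To make the vacuum-domination condition concrete, the first Friedmann equation with both an inflaton (vacuum) component and a radiation component can be written as below. This is a standard textbook form with assumed notation, not an equation taken from this article; the dissipative coupling Γ in the last line is the usual way the sustained radiation bath of warm inflation is modeled.

```latex
% First Friedmann equation with an inflaton (vacuum) energy density
% \rho_\phi and a radiation energy density \rho_r; the notation is
% assumed for illustration and is not taken from this article.
H^2 = \frac{8\pi G}{3}\left(\rho_\phi + \rho_r\right),
\qquad \rho_\phi \gg \rho_r \quad \text{(vacuum domination during inflation)}
% In warm inflation the radiation bath is sustained against dilution
% by a dissipative coupling \Gamma to the inflaton:
\dot{\rho}_r + 4H\rho_r = \Gamma\,\dot{\phi}^2
```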
**Kepler conjecture** Kepler conjecture: The Kepler conjecture, named after the 17th-century mathematician and astronomer Johannes Kepler, is a mathematical theorem about sphere packing in three-dimensional Euclidean space. It states that no arrangement of equally sized spheres filling space has a greater average density than that of the cubic close packing (face-centered cubic) and hexagonal close packing arrangements. The density of these arrangements is around 74.05%. Kepler conjecture: In 1998, Thomas Hales, following an approach suggested by Fejes Tóth (1953), announced that he had a proof of the Kepler conjecture. Hales' proof is a proof by exhaustion involving the checking of many individual cases using complex computer calculations. Referees said that they were "99% certain" of the correctness of Hales' proof, and the Kepler conjecture was accepted as a theorem. In 2014, the Flyspeck project team, headed by Hales, announced the completion of a formal proof of the Kepler conjecture using a combination of the Isabelle and HOL Light proof assistants. In 2017, the formal proof was accepted by the journal Forum of Mathematics, Pi. Background: Imagine filling a large container with small equal-sized spheres: say, a porcelain gallon jug with identical marbles. The "density" of the arrangement is equal to the total volume of all the marbles, divided by the volume of the jug. To maximize the number of marbles in the jug means to create an arrangement of marbles stacked between the sides and bottom of the jug that has the highest possible density, so that the marbles are packed together as closely as possible. Background: Experiment shows that dropping the marbles in randomly, with no effort to arrange them tightly, will achieve a density of around 65%. However, a higher density can be achieved by carefully arranging the marbles as follows: (1) for the first layer of marbles, arrange them in a hexagonal lattice (the honeycomb pattern); (2) put the next layer of marbles in the lowest-lying gaps you can find above and between the marbles in the first layer, regardless of pattern; (3) continue with the same procedure of filling in the lowest gaps in the prior layer, for the third and remaining layers, until the marbles reach the top edge of the jug. At each step there are at least two choices of how to place the next layer, so this otherwise unplanned method of stacking the spheres creates an uncountably infinite number of equally dense packings. The best known of these are called cubic close packing and hexagonal close packing. Each of these arrangements has an average density of 0.740480489… The Kepler conjecture says that this is the best that can be done – no other arrangement of marbles has a higher average density: despite there being astoundingly many different arrangements possible that follow the same procedure as steps 1–3, no packing (according to the procedure or not) can possibly fit more marbles into the same jug. Origins: The conjecture was first stated by Johannes Kepler (1611) in his paper 'On the six-cornered snowflake'. He had started to study arrangements of spheres as a result of his correspondence with the English mathematician and astronomer Thomas Harriot in 1606. Harriot was a friend and assistant of Sir Walter Raleigh, who had asked Harriot to find formulas for counting stacked cannonballs, an assignment which in turn led Raleigh's mathematician acquaintance to wonder what the best way to stack cannonballs was.
Harriot published a study of various stacking patterns in 1591, and went on to develop an early version of atomic theory. Nineteenth century: Kepler did not have a proof of the conjecture, and the next step was taken by Carl Friedrich Gauss (1831), who proved that the Kepler conjecture is true if the spheres have to be arranged in a regular lattice. Nineteenth century: This meant that any packing arrangement that disproved the Kepler conjecture would have to be an irregular one. But eliminating all possible irregular arrangements is very difficult, and this is what made the Kepler conjecture so hard to prove. In fact, there are irregular arrangements that are denser than the cubic close packing arrangement over a small enough volume, but any attempt to extend these arrangements to fill a larger volume is now known to always reduce their density. Nineteenth century: After Gauss, no further progress was made towards proving the Kepler conjecture in the nineteenth century. In 1900 David Hilbert included it in his list of twenty-three unsolved problems of mathematics—it forms part of Hilbert's eighteenth problem. Twentieth century: The next step toward a solution was taken by László Fejes Tóth. Fejes Tóth (1953) showed that the problem of determining the maximum density of all arrangements (regular and irregular) could be reduced to a finite (but very large) number of calculations. This meant that a proof by exhaustion was, in principle, possible. As Fejes Tóth realised, a fast enough computer could turn this theoretical result into a practical approach to the problem. Twentieth century: Meanwhile, attempts were made to find an upper bound for the maximum density of any possible arrangement of spheres. The English mathematician Claude Ambrose Rogers (see Rogers (1958)) established an upper bound value of about 78%, and subsequent efforts by other mathematicians reduced this value slightly, but this was still much larger than the cubic close packing density of about 74%. Twentieth century: In 1990, Wu-Yi Hsiang claimed to have proven the Kepler conjecture. The proof was praised by Encyclopædia Britannica and Science, and Hsiang was also honored at joint AMS–MAA meetings. Wu-Yi Hsiang (1993, 2001) claimed to prove the Kepler conjecture using geometric methods. However, Gábor Fejes Tóth (the son of László Fejes Tóth) stated in his review of the paper: "As far as details are concerned, my opinion is that many of the key statements have no acceptable proofs." Hales (1994) gave a detailed criticism of Hsiang's work, to which Hsiang (1995) responded. The current consensus is that Hsiang's proof is incomplete. Hales' proof: Following the approach suggested by Fejes Tóth (1953), Thomas Hales, then at the University of Michigan, determined that the maximum density of all arrangements could be found by minimizing a function with 150 variables. In 1992, assisted by his graduate student Samuel Ferguson, he embarked on a research program to systematically apply linear programming methods to find a lower bound on the value of this function for each one of a set of over 5,000 different configurations of spheres. If a lower bound (for the function value) could be found for every one of these configurations that was greater than the value of the function for the cubic close packing arrangement, then the Kepler conjecture would be proved. Finding lower bounds for all cases involved solving about 100,000 linear programming problems.
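The following toy sketch (using scipy; not Hales' actual formulation, with made-up numbers) illustrates the general pattern of certifying a lower bound by linear programming: a complicated function is replaced by a linear underestimator, and minimizing that underestimator over the feasible region yields a certified lower bound on the function itself.

```python
# Toy illustration of certifying a lower bound with linear programming
# (not Hales' actual formulation; all numbers here are made up).
from scipy.optimize import linprog

# Suppose it has been established analytically that
#   f(x, y) >= 2x + 3y - 1   on the region below.
c = [2.0, 3.0]                      # coefficients of the linear underestimator
A_ub = [[-1.0, 0.0],                # -x <= -0.2  (i.e. x >= 0.2)
        [0.0, -1.0],                # -y <= -0.1  (i.e. y >= 0.1)
        [1.0, 1.0]]                 #  x + y <= 1
b_ub = [-0.2, -0.1, 1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
lower_bound = res.fun - 1.0         # reattach the constant term of the underestimator
print(f"certified lower bound on f over the region: {lower_bound:.3f}")
# In Hales' program, a configuration was eliminated whenever such a
# bound exceeded the score of the cubic close packing arrangement.
```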
Hales' proof: When presenting the progress of his project in 1996, Hales said that the end was in sight, but it might take "a year or two" to complete. In August 1998 Hales announced that the proof was complete. At that stage, it consisted of 250 pages of notes and 3 gigabytes of computer programs, data and results. Hales' proof: Despite the unusual nature of the proof, the editors of the Annals of Mathematics agreed to publish it, provided it was accepted by a panel of twelve referees. In 2003, after four years of work, the head of the referees' panel, Gábor Fejes Tóth, reported that the panel were "99% certain" of the correctness of the proof, but they could not certify the correctness of all of the computer calculations. Hales' proof: Hales (2005) published a 100-page paper describing the non-computer part of his proof in detail. Hales & Ferguson (2006) and several subsequent papers described the computational portions. Hales and Ferguson received the Fulkerson Prize for outstanding papers in the area of discrete mathematics for 2009. Hales' proof: A formal proof In January 2003, Hales announced the start of a collaborative project to produce a complete formal proof of the Kepler conjecture. The aim was to remove any remaining uncertainty about the validity of the proof by creating a formal proof that can be verified by automated proof-checking software such as HOL Light and Isabelle. This project was called Flyspeck – an expansion of the acronym FPK, standing for Formal Proof of Kepler. At first, Hales estimated that producing a complete formal proof would take around 20 years of work. Hales published a "blueprint" for the formal proof in 2012; the completion of the project was announced on August 10, 2014. In January 2015, Hales and 21 collaborators posted a paper titled "A formal proof of the Kepler conjecture" on the arXiv, claiming to have proved the conjecture. In 2017, the formal proof was accepted by the journal Forum of Mathematics, Pi. Related problems: Thue's theorem The regular hexagonal packing is the densest circle packing in the plane (1890). The density is π⁄√12. The 2-dimensional analog of the Kepler conjecture; the proof is elementary. Henk and Ziegler attribute this result to Lagrange, in 1773 (see references, p. 770). A simple proof by Chau and Chung from 2010 uses the Delaunay triangulation for the set of points that are centers of circles in a saturated circle packing. The hexagonal honeycomb conjecture The most efficient partition of the plane into equal areas is the regular hexagonal tiling. Related to Thue's theorem. Dodecahedral conjecture The volume of the Voronoi polyhedron of a sphere in a packing of equal spheres is at least the volume of a regular dodecahedron with inradius 1. McLaughlin's proof, for which he received the 1999 Morgan Prize. Related problems: A related problem, whose proof uses techniques similar to Hales' proof of the Kepler conjecture; the conjecture is due to L. Fejes Tóth in the 1950s. The Kelvin problem What is the most efficient foam in 3 dimensions? This was conjectured to be solved by the Kelvin structure, and was widely believed for over 100 years, until disproved in 1993 by the discovery of the Weaire–Phelan structure. The surprising discovery of the Weaire–Phelan structure and the disproof of the Kelvin conjecture is one reason for the caution in accepting Hales' proof of the Kepler conjecture. Sphere packing in higher dimensions In 2016, Maryna Viazovska announced proofs of the optimal sphere packings in dimensions 8 and 24.
However, the optimal sphere packing question in dimensions other than 1, 2, 3, 8, and 24 is still open. Ulam's packing conjecture It is unknown whether there is a convex solid whose optimal packing density is lower than that of the sphere.
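For concreteness, the two densities quoted above (≈74.05% for the optimal sphere packings and π⁄√12 for the planar circle packing of Thue's theorem) can be verified with a short standard computation; the derivation below is a textbook calculation, not taken from this article.

```latex
% FCC sphere packing: a cubic unit cell of side a contains 4 spheres,
% and spheres touch along a face diagonal, so r = a\sqrt{2}/4:
\delta_{\mathrm{fcc}}
  = \frac{4 \cdot \frac{4}{3}\pi r^{3}}{a^{3}}
  = \frac{\frac{16}{3}\pi \left(a\sqrt{2}/4\right)^{3}}{a^{3}}
  = \frac{\pi}{3\sqrt{2}} \approx 0.74048

% Hexagonal circle packing in the plane (Thue's theorem): each circle
% of radius r occupies a hexagonal cell of area 2\sqrt{3}\,r^{2}:
\delta_{\mathrm{hex}}
  = \frac{\pi r^{2}}{2\sqrt{3}\,r^{2}}
  = \frac{\pi}{\sqrt{12}} \approx 0.9069
```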
**Learning disability** Learning disability: Learning disability, learning disorder, or learning difficulty (British English) is a condition in the brain that causes difficulties comprehending or processing information and can be caused by several different factors. Given the "difficulty learning in a typical manner", this does not exclude the ability to learn in a different manner. Therefore, some people can be more accurately described as having a "learning difference", thus avoiding any misconception of being disabled with a possible lack of an ability to learn and possible negative stereotyping. In the United Kingdom, the term "learning disability" generally refers to an intellectual disability, while conditions such as dyslexia and dyspraxia are usually referred to as "learning difficulties". While learning disability and learning disorder are often used interchangeably, they differ in many ways. Disorder refers to significant learning problems in an academic area. These problems, however, are not enough to warrant an official diagnosis. Learning disability, on the other hand, is an official clinical diagnosis, whereby the individual meets certain criteria, as determined by a professional (such as a psychologist, psychiatrist, speech-language pathologist, or pediatrician). The difference is in the degree, frequency, and intensity of reported symptoms and problems, and thus the two should not be confused. When the term "learning disorder" is used, it describes a group of disorders characterized by inadequate development of specific academic, language, and speech skills. Types of learning disorders include reading (dyslexia), arithmetic (dyscalculia) and writing (dysgraphia). The unknown factor is the disorder that affects the brain's ability to receive and process information. This disorder can make it problematic for a person to learn as quickly or in the same way as someone who is not affected by a learning disability. People with a learning disability have trouble performing specific types of skills or completing tasks if left to figure things out by themselves or if taught in conventional ways. Learning disability: Individuals with learning disabilities can face unique challenges that are often pervasive throughout the lifespan. Depending on the type and severity of the disability, interventions and current technologies may be used to help the individual learn strategies that will foster future success. Some interventions can be quite simple, while others are intricate and complex. Current technologies may require student training to be effective classroom supports. Teachers, parents, and schools can create plans together that tailor intervention and accommodations to aid the individuals in successfully becoming independent learners. A multi-disciplinary team frequently helps to design the intervention and to coordinate its execution with teachers and parents. This team frequently includes school psychologists, special educators, speech therapists (pathologists), occupational therapists, psychologists, ESL teachers, literacy coaches, and/or reading specialists. Definition: The National Joint Committee on Learning Disabilities (NJCLD) is composed of representatives of organizations committed to the education and welfare of individuals with learning disabilities. The NJCLD used the term 'learning disability' to indicate a discrepancy between a child's apparent capacity to learn and their level of achievement.
Several difficulties existed, however, with the NJCLD standard of defining learning disability. One such difficulty was its reliance on central nervous system dysfunction as a basis for understanding and diagnosing learning disability. This conflicted with the fact that many individuals who experienced central nervous system dysfunction, such as those with cerebral palsy, did not experience disabilities in learning. On the other hand, individuals who experienced multiple handicapping conditions along with learning disability frequently received inappropriate assessment, planning, and instruction. The NJCLD notes that it is possible for learning disability to occur simultaneously with other handicapping conditions; however, the two should not be directly linked together or confused. Definition: In the 1980s, the NJCLD therefore defined the term learning disability as: a heterogeneous group of disorders manifested by significant difficulties in the acquisition and use of listening, speaking, reading, writing, reasoning or mathematical abilities. These disorders are intrinsic to the individual and presumed to be due to central nervous system dysfunction. Even though a learning disability may occur concomitantly with other handicapping conditions (e.g. sensory impairment, intellectual disability, social and emotional disturbance) or environmental influences (e.g. cultural differences, insufficient/inappropriate instruction, psychogenic factors), it is not the direct result of those conditions or influences. Definition: The 2002 LD Roundtable produced the following definition: Concept of LD: Strong converging evidence supports the validity of the concept of specific learning disabilities (SLD). This evidence is particularly impressive because it converges across different indicators and methodologies. The central concept of SLD involves disorders of learning and cognition that are intrinsic to the individual. SLD are specific in the sense that each of these disorders significantly affects a relatively narrow range of academic and performance outcomes. SLD may occur in combination with other disabling conditions, but they are not due primarily to other conditions, such as intellectual disability, behavioral disturbance, lack of opportunities to learn, or primary sensory deficits. The issue of defining learning disabilities has generated significant and ongoing controversy. The term "learning disability" does not exist in DSM-IV, but it has been added to the DSM-5. The DSM-5 does not limit learning disorders to a particular diagnosis such as reading, mathematics, or written expression. Instead, it uses a single diagnostic criterion describing drawbacks in general academic skills and includes detailed specifiers for the areas of reading, mathematics, and written expression. Definition: United States and Canada In the United States and Canada, the terms learning disability and learning disorder (LD) refer to a group of disorders that affect a broad range of academic and functional skills, including the ability to speak, listen, read, write, spell, reason, organize information, and do math. People with learning disabilities generally have intelligence that is average or higher.
Definition: Legislation in the United States Section 504 of the Rehabilitation Act of 1973, effective May 1977, guarantees certain rights to people with disabilities, especially in education and work, in settings such as schools, colleges and universities. The Individuals with Disabilities Education Act, formerly known as the Education for All Handicapped Children Act, is a United States federal law that governs how states and public agencies provide early intervention, special education and related services to children with disabilities. It addresses the educational needs of children with disabilities from birth to the age of 21. Although considered a civil rights law, states are not required to participate. Definition: Canada In Canada, the first association in support of children with learning disabilities was founded in 1962 by a group of concerned parents. Originally called the Association for Children with Learning Disabilities, the Learning Disabilities Association of Canada (LDAC) was created to provide awareness and services for individuals with learning disabilities, their families, workplaces, and communities. Since education is largely the responsibility of each province and territory in Canada, provinces and territories have jurisdiction over the education of individuals with learning disabilities, which allows the development of policies and support programs that reflect the unique multicultural, linguistic, and socioeconomic conditions of each area. Definition: United Kingdom In the UK, terms such as specific learning difficulty (SpLD), developmental dyslexia, developmental coordination disorder and dyscalculia are used to cover the range of learning difficulties referred to in the United States as "learning disabilities". In the UK, the term "learning disability" refers to a range of developmental disabilities or conditions that are almost invariably associated with more severe generalized cognitive impairment. The Lancet defines 'learning disability' as a "significant general impairment in intellectual functioning acquired during childhood", and states that roughly one in 50 British adults have one. Definition: Japan In Japan, acknowledgement and support for students with learning disabilities has been a fairly recent development, and has improved drastically since the start of the 21st century. The first definition of learning disability was coined in 1999, and in 2001, the Enrichment Project for the Support System for Students with Learning Disabilities was established. Since then, there have been significant efforts to screen children for learning disabilities, provide follow-up support, and provide networking between schools and specialists. Effects: The effects of having a learning disability or learning difference are not limited to educational outcomes: individuals with learning disabilities may experience social problems as well. Neuropsychological differences can affect the accurate perception of social cues with peers. Researchers argue that persons with learning disabilities experience negative effects not only as a result of their learning distinctions, but also as a result of carrying a stigmatizing label. It has generally been difficult to determine the efficacy of special education services because of data and methodological limitations. Emerging research suggests adolescents with learning disabilities experience poorer academic outcomes even compared to peers who began high school with similar levels of achievement and comparable behaviors.
It seems their poorer outcomes may be at least partially due to the lower expectations of their teachers; national data show that teachers hold expectations for students labeled with learning disabilities that are inconsistent with their academic potential (as evidenced by test scores and learning behaviors). It has been said that there is a strong connection between learning disabilities and educational performance. Many studies have been done to assess the correlation between learning disability and self-esteem. These studies have shown that an individual's self-esteem is indeed affected by their own awareness of their learning disability. Students with a positive perception of their academic abilities generally tend to have higher self-esteem than those who do not, regardless of their actual academic achievement. However, studies have also shown that several other factors can influence self-esteem. Skills in non-academic areas, such as athletics and the arts, improve self-esteem. Also, a positive perception of one's physical appearance has been shown to have positive effects on self-esteem. Another important finding is that students with learning disabilities are able to distinguish between academic skill and intellectual capacity. This demonstrates that students who acknowledge their academic limitations but are also aware of their potential to succeed in other intellectual tasks see themselves as intellectually competent individuals, which increases their self-esteem. Research involving individuals with learning disabilities who exhibit challenging behaviors and who are subsequently treated with antipsychotic medications provides little evidence that any benefits outweigh the risks. Causes: The causes of learning disabilities are not well understood, and sometimes there is no apparent cause for a learning disability. However, some causes of neurological impairments include: Heredity and genetics: Learning disabilities are often linked through genetics and run in the family. Children who have learning disabilities often have parents who have the same struggles. Children of parents who had less than 12 years of school are more likely to have a reading disability. Some children have spontaneous mutations (i.e. not present in either parent) which can cause developmental disorders, including learning disabilities. One study estimated that about one in 300 children had such spontaneous mutations, for example a fault in the CDK13 gene, which is associated with learning and communication difficulties in the children affected. Causes: Problems during pregnancy and birth: A learning disability can result from anomalies in the developing brain, illness or injury. Risk factors include fetal exposure to alcohol or drugs and low birth weight (3 pounds or less). These children are more likely to develop a disability in math or reading. Children who are born prematurely or late, have a longer labor than usual, or have trouble receiving oxygen are more likely to develop a learning disability. Causes: Accidents after birth: Learning disabilities can also be caused by head injuries, malnutrition, or toxic exposure (such as to heavy metals or pesticides).
Diagnosis: IQ-achievement discrepancy Learning disabilities can be identified by psychiatrists, school psychologists, clinical psychologists, counseling psychologists, neuropsychologists, speech-language pathologists, and other learning disability specialists through a combination of intelligence testing, academic achievement testing, classroom performance, and social interaction and aptitude. Other areas of assessment may include perception, cognition, memory, attention, and language abilities. The resulting information is used to determine whether a child's academic performance is commensurate with their cognitive ability. If a child's cognitive ability is much higher than their academic performance, the student is often diagnosed with a learning disability. The DSM-IV and many school systems and government programs diagnose learning disabilities in this way (the DSM-IV uses the term "disorder" rather than "disability"). Diagnosis: Although the discrepancy model has dominated the school system for many years, there has been substantial criticism of this approach among researchers. Recent research has provided little evidence that a discrepancy between formally measured IQ and achievement is a clear indicator of LD. Furthermore, diagnosing on the basis of a discrepancy does not predict the effectiveness of treatment. Low academic achievers who do not have a discrepancy with IQ (i.e. their IQ scores are also low) appear to benefit from treatment just as much as low academic achievers who do have a discrepancy with IQ (i.e. their IQ scores are higher than their academic performance would suggest). Diagnosis: Since 1998 there have been attempts to create a reference index more useful than IQ for generating predicted scores on achievement tests. For example, for a student whose vocabulary and general knowledge scores match their reading comprehension score, a teacher could assume that reading comprehension can be supported through work in vocabulary and general knowledge. If the reading comprehension score is lower in the appropriate statistical sense, it would first be necessary to rule out things like vision problems. Response to intervention Much current research has focused on a treatment-oriented diagnostic process known as response to intervention (RTI). Researcher recommendations for implementing such a model include early screening for all students, placing those students who are having difficulty into research-based early intervention programs rather than waiting until they meet diagnostic criteria. Their performance can be closely monitored to determine whether increasingly intense intervention results in adequate progress. Those who respond will not require further intervention. Those who do not respond adequately to regular classroom instruction (often called "Tier 1 instruction") and a more intensive intervention (often called "Tier 2" intervention) are considered "non-responders." These students can then be referred for further assistance through special education, in which case they are often identified with a learning disability. Some models of RTI include a third tier of intervention before a child is identified as having a learning disability. Diagnosis: A primary benefit of such a model is that it would not be necessary to wait for a child to fall sufficiently far behind to qualify for assistance.
This may enable more children to receive assistance before experiencing significant failure, which may, in turn, result in fewer children who need intensive and expensive special education services. In the United States, the 2004 reauthorization of the Individuals with Disabilities Education Act permitted states and school districts to use RTI as a method of identifying students with learning disabilities. RTI is now the primary means of identification of learning disabilities in Florida. Diagnosis: The process does not take into account children's individual neuropsychological factors, such as phonological awareness and memory, that can inform instructional design. By not taking into account specific cognitive processes, RTI fails to inform educators about a student's relative strengths and weaknesses. Second, RTI by design takes considerably longer than established techniques, often many months, to find an appropriate tier of intervention. Third, it requires a strong intervention program before students can be identified with a learning disability. Lastly, RTI is considered a regular education initiative and is carried out by general education teachers in conjunction with other qualified professionals. Occupational therapists in particular can support students in the educational setting by helping children in academic and non-academic areas of school, including the classroom, recess and meal time. They can provide strategies, therapeutic interventions, suggestions for adaptive equipment, and environmental modifications. Occupational therapists can work closely with the child's teacher and parents to facilitate educational goals specific to each child under an RTI and/or IEP. Diagnosis: Latino English language learners Demographers in the United States report that there has been a significant increase in immigrant children in the United States over the past two decades. This information is vital because it has been and will continue to affect both students and how educators approach teaching methods. Various teaching strategies are more successful for students who are linguistically or culturally diverse than the traditional methods of teaching used for students whose first language is English. It is then also true that the proper way to diagnose a learning disability in English language learners (ELL) differs. In the United States, there has been a growing need to develop the knowledge and skills necessary to provide effective school psychological services, specifically for those professionals who work with immigrant populations. Currently, there are no standardized guidelines for the process of diagnosing ELLs with specific learning disabilities (SLD). This is a problem, since many students will fall through the cracks as educators are unable to clearly assess whether a student's delay is due to a language barrier or a true learning disability. Without a clear diagnosis, many students will suffer because they will not be provided with the tools they need to succeed in the public education school system. For example, on many occasions teachers have suggested retention or have taken no action at all when they lack experience working with English language learners. Students were commonly pushed toward testing, based on an assumption that their poor academic performance or behavioral difficulties indicated a need for special education.
Linguistically responsive psychologists understand that second language acquisition is a process, and they understand how to support ELLs' growth in language and academics. When ELLs are referred for a psychoeducational assessment, it is difficult to isolate and disentangle the effects of the language acquisition process from those of poor-quality educational services, and from academic difficulties that result from processing disorders, attention problems, and learning disabilities. Additionally, the lack of trained staff and faculty becomes more of an issue when staff are unaware of the numerous types of psychological factors that immigrant children in the U.S. could potentially be dealing with. These factors include acculturation, fear and/or worry of deportation, separation from social supports such as parents, language barriers, disruptions in learning experiences, stigmatization, economic challenge, and risk factors associated with poverty. In the United States, there are no policies mandating that all districts employ bilingual school psychologists, nor are schools equipped with specific tools and resources to assist immigrant children and families. Many school districts do not have the proper personnel able to communicate with this population. Diagnosis: Spanish-speaking ELLs A well-trained bilingual school psychologist will be able to administer and interpret all psychological assessment tools. Also, an emphasis is placed on informal assessment measures, such as language samples, observations, interviews, and rating scales, as well as curriculum-based measurement, to complement information gathered from formal assessments. A compilation of these tests is used to assess whether an ELL student has a learning disability or is merely academically delayed because of language barriers or environmental factors. Many schools do not have a school psychologist with the proper training, nor access to appropriate tools. Also, many school districts frown upon taking the appropriate steps toward diagnosing ELL students. Diagnosis: Assessment Many normed assessments can be used in evaluating skills in the primary academic domains: reading, including word recognition, fluency, and comprehension; mathematics, including computation and problem solving; and written expression, including handwriting, spelling and composition. Diagnosis: The most commonly used comprehensive achievement tests include the Woodcock-Johnson IV (WJ IV), the Wechsler Individual Achievement Test II (WIAT II), the Wide Range Achievement Test III (WRAT III), and the Stanford Achievement Test (10th edition). These tests include measures of many academic domains that are reliable in identifying areas of difficulty. In the reading domain, there are also specialized tests that can be used to obtain details about specific reading deficits. Assessments that measure multiple domains of reading include Gray's Diagnostic Reading Tests (2nd edition; GDRT II) and the Stanford Diagnostic Reading Assessment. Assessments that measure reading subskills include the Gray Oral Reading Test IV (GORT IV), the Gray Silent Reading Test, the Comprehensive Test of Phonological Processing (CTOPP), the Tests of Oral Reading and Comprehension Skills (TORCS), the Test of Reading Comprehension 3 (TORC-3), the Test of Word Reading Efficiency (TOWRE), and the Test of Reading Fluency.
A more comprehensive list of reading assessments may be obtained from the Southwest Educational Development Laboratory. The purpose of assessment is to determine what is needed for intervention, which also requires consideration of contextual variables and whether there are comorbid disorders that must also be identified and treated, such as behavioral issues or language delays. These contextual variables are often assessed using parent and teacher questionnaire forms that rate the students' behaviors and compare them to standardized norms. Diagnosis: However, caution should be exercised when it is suspected that a person with a learning disability may also have dementia, especially as people with Down's syndrome may have the neuroanatomical profile but not the associated clinical signs and symptoms. Executive functioning as well as social and cognitive abilities can be examined, but standardized tests may need adaptation to take account of special needs. Types: Learning disabilities can be categorized either by the type of information processing affected by the disability or by the specific difficulties caused by a processing deficit. By stage of information processing Learning disabilities fall into broad categories based on the four stages of information processing used in learning: input, integration, storage, and output. Many learning disabilities are a compilation of a few types of abnormalities occurring at the same time, as well as social difficulties and emotional or behavioral disorders. Types: Input: This is the information perceived through the senses, such as visual and auditory perception. Difficulties with visual perception can cause problems with recognizing the shape, position, or size of items seen. There can be problems with sequencing, which can relate to deficits with processing time intervals or temporal perception. Difficulties with auditory perception can make it difficult to screen out competing sounds in order to focus on one of them, such as the sound of the teacher's voice in a classroom setting. Some children appear to be unable to process tactile input. For example, they may seem insensitive to pain or dislike being touched. Types: Integration: This is the stage during which perceived input is interpreted, categorized, placed in a sequence, or related to previous learning. Students with problems in these areas may be unable to tell a story in the correct sequence, unable to memorize sequences of information such as the days of the week, able to understand a new concept but unable to generalize it to other areas of learning, or able to learn facts but unable to put the facts together to see the "big picture." A poor vocabulary may contribute to problems with comprehension. Types: Storage: Problems with memory can occur with short-term or working memory, or with long-term memory. Most memory difficulties occur with short-term memory, which can make it difficult to learn new material without more repetitions than usual. Difficulties with visual memory can impede learning to spell. Types: Output: Information comes out of the brain either through words, that is, language output, or through muscle activity, such as gesturing, writing or drawing. Difficulties with language output can create problems with spoken language. Such difficulties include answering a question on demand, in which one must retrieve information from storage, organize one's thoughts, and put the thoughts into words before speaking.
It can also cause trouble with written language for the same reasons. Difficulties with motor abilities can cause problems with gross and fine motor skills. People with gross motor difficulties may be clumsy, that is, they may be prone to stumbling, falling, or bumping into things. They may also have trouble running, climbing, or learning to ride a bicycle. People with fine motor difficulties may have trouble with handwriting, buttoning shirts, or tying shoelaces. Types: By function impaired Deficits in any area of information processing can manifest in a variety of specific learning disabilities. It is possible for an individual to have more than one of these difficulties. This is referred to as comorbidity or co-occurrence of learning disabilities. In the UK, the term dual diagnosis is often used to refer to co-occurrence of learning difficulties. Types: Reading disorder (ICD-10 and DSM-IV codes: F81.0/315.00) Reading disorder is the most common learning disability. Of all students with specific learning disabilities, 70–80% have deficits in reading. The term "developmental dyslexia" is often used as a synonym for reading disability; however, many researchers assert that there are different types of reading disabilities, of which dyslexia is one. A reading disability can affect any part of the reading process, including difficulty with accurate or fluent word recognition (or both), word decoding, reading rate, prosody (oral reading with expression), and reading comprehension. Before the term "dyslexia" came to prominence, this learning disability used to be known as "word blindness." Common indicators of reading disability include difficulty with phonemic awareness (the ability to break up words into their component sounds) and difficulty with matching letter combinations to specific sounds (sound-symbol correspondence). Types: Disorder of written expression (ICD-10 and DSM-IV-TR codes 315.2) The DSM-IV-TR criteria for a disorder of written expression are writing skills (as measured by a standardized test or functional assessment) that fall substantially below those expected based on the individual's chronological age, measured intelligence, and age-appropriate education (Criterion A). This difficulty must also cause significant impairment to academic achievement and tasks that require composition of written text (Criterion B), and, if a sensory deficit is present, the difficulties with writing skills must exceed those typically associated with the sensory deficit (Criterion C). Individuals with a diagnosis of a disorder of written expression typically have a combination of difficulties in their abilities with written expression, as evidenced by grammatical and punctuation errors within sentences, poor paragraph organization, multiple spelling errors, and excessively poor penmanship. A disorder in spelling or handwriting without other difficulties of written expression does not generally qualify for this diagnosis. If poor handwriting is due to an impairment in the individual's motor coordination, a diagnosis of developmental coordination disorder should be considered. Types: The term "dysgraphia" has been used by a number of organizations as an overarching term for all disorders of written expression.
Math disability (ICD-10 and DSM-IV codes F81.2-3/315.1) Sometimes called dyscalculia, a math disability involves difficulties such as learning math concepts (such as quantity, place value, and time), difficulty memorizing math facts, difficulty organizing numbers, and understanding how problems are organized on the page. Dyscalculics are often referred to as having poor "number sense". Non ICD-10/DSM Nonverbal learning disability: Nonverbal learning disabilities often manifest in motor clumsiness, poor visual-spatial skills, problematic social relationships, difficulty with mathematics, and poor organizational skills. These individuals often have specific strengths in the verbal domains, including early speech, a large vocabulary, early reading and spelling skills, excellent rote memory and auditory retention, and eloquent self-expression. Disorders of speaking and listening: Difficulties that often co-occur with learning disabilities include difficulty with memory, social skills and executive functions (such as organizational skills and time management). Management: Interventions include: Mastery model: learners work at their own level of mastery, practice, and gain fundamental skills before moving on to the next level (note: this approach is most likely to be used with adult learners or outside the mainstream school system). Management: Direct instruction: emphasizes carefully planned lessons for small learning increments, scripted lesson plans, rapid-paced interaction between teacher and students, immediate correction of mistakes, achievement-based grouping, and frequent progress assessments. Classroom adjustments: special seating assignments, alternative or modified assignments, modified testing procedures, and a quiet environment. Special equipment: word processors with spell checkers and dictionaries, text-to-speech and speech-to-text programs, talking calculators, books on tape, and computer-based activities. Classroom assistants: note-takers, readers, proofreaders, and scribes. Special education: prescribed hours in a resource room, placement in a resource room, enrollment in a special school or a separate classroom in a regular school for learning-disabled students, an individual education plan (IEP), and educational therapy. Sternberg has argued that early remediation can greatly reduce the number of children meeting diagnostic criteria for learning disabilities. He has also suggested that the focus on learning disabilities and the provision of accommodations in school fails to acknowledge that people have a range of strengths and weaknesses, and places undue emphasis on academic success by insisting that people should receive additional support in this arena but not in music or sports. Other research has pinpointed the use of resource rooms as an important, yet often politicized, component of educating students with learning disabilities. Management: Helping Individuals with Learning Disabilities Many individuals with learning disabilities may not openly disclose their condition. Some experts say that an instructor directly asking about or assuming potential disabilities could harm an individual's self-esteem. In addition, if information about a disability is disclosed, it may be beneficial to be mindful of one's approach regarding the disability and to avoid vocabulary that insinuates the learning disability is an obstacle or shortcoming, as this may be harmful to an individual's mental health and self-esteem.
Research suggests that accumulating positive experiences, such as success in interpersonal relationships, achievements, and overcoming stress, leads to the formation of self-esteem, which in turn leads to the acceptance of one's disability and a better life outcome. This suggests that working with the disability may result in more positive outcomes than attempting to fix it. As an instructor or tutor, it may be helpful to ask about the needs of individuals with disabilities, as they know their disability best. Some questions to consider: What part of the assignment do you want to focus on? Where in our space would you most prefer to work? What tools or technologies do you tend to use most frequently when you write? Are you comfortable reading your paper out loud, or would you prefer that I read it? How do you learn best (i.e., do you learn best by doing, seeing, or hearing)? Society and culture: School laws Schools in the United States have a legal obligation to new arrivals to the country, including undocumented students. The landmark Supreme Court ruling Plyler v. Doe (1982) grants all children, no matter their legal status, the right to a free education. This ruling suggests that, as a country, we acknowledge that we have a population of students with specific needs that differ from those of native speakers. Additionally, specifically in regard to ELLs, the Supreme Court ruling Lau v. Nichols (1974) stated that equal treatment in school did not mean equal educational opportunity. Thus, if a school teaches a lesson in a language that students do not understand, the lessons are effectively worthless. This ruling is also supported by English language development services provided in schools, but these rulings do not require the individuals who teach and provide services to have any specific training, nor is licensing different from that of a typical teacher or service provider. Society and culture: Issues Regarding Standardized Testing Problems still exist regarding the fairness of standardized testing. Providing testing accommodations to students with learning disabilities has become increasingly common. One such issue that introduces inequity for those with disabilities is handwriting bias. Handwriting bias involves the tendency of raters to identify more personally with the authors of handwritten essays than with those of word-processed essays, resulting in higher ratings for the handwritten essays despite both essays being identical in content. Several studies have analyzed the differences in standardized scores between handwritten and word-processed (typed) essays of students with and without disabilities. Results suggest handwritten essays of students with and without disabilities consistently received higher scores compared to word-processed versions. Society and culture: Critique of the medical model Learning disability theory is founded in the medical model of disability, in that disability is perceived as an individual deficit that is biological in origin. Researchers working within a social model of disability assert that there are social or structural causes of disability, or of the assignation of the label of disability, and even that disability is entirely socially constructed. Since the turn of the 19th century, education in the United States has been geared toward producing citizens who can effectively contribute to a capitalistic society, with a cultural premium on efficiency and science.
More agrarian cultures, for example, do not even use learning ability as a measure of adult adequacy, whereas the diagnosis of learning disabilities is prevalent in Western capitalistic societies because of the high value placed on speed, literacy, and numeracy in both the labor force and the school system.

Society and culture: Culture Three patterns are well known with regard to mainstream students and minority labels in the United States: "A higher percentage of minority children than of white children are assigned to special education"; "within special education, white children are assigned to less restrictive programs than are their minority counterparts"; "the data — driven by inconsistent methods of diagnosis, treatment, and funding — make the overall system difficult to describe or change". In the present day, it has been reported that predominantly white districts enroll more children from minority backgrounds in special education than they do majority students. "It was also suggested that districts with a higher percentage of minority faculty had fewer minority students placed in special education suggesting that 'minority students are treated differently in predominantly white districts than in predominantly minority districts'". Educators have only recently started to look into the effects of culture on learning disabilities. If a teacher ignores a student's culturally diverse background, the student will suffer in the class. "The cultural repertoires of students from culturally diverse backgrounds have an impact on their learning, school progress, and behavior in the classroom". These students may then act out, fail to excel in the classroom, and therefore be misdiagnosed: "Overall, the data indicates that there is a persistent concern regarding the misdiagnosis and inappropriate placement of students from diverse backgrounds in special education classes since 1975".

Society and culture: Social roots of learning disabilities in the U.S. Racial and ethnic minorities and students of low socioeconomic status (SES) are disproportionately identified with learning disabilities. While some attribute the disproportionate identification of racial/ethnic minorities to racist practices or cultural misunderstanding, others have argued that racial/ethnic minorities are overidentified because of their lower status. Similarities were noted between the behaviors of "brain-injured" and lower-class students as early as the 1960s. The distinction between race/ethnicity and SES is important to the extent that these considerations contribute to the provision of services to children in need.

Society and culture: While many studies have considered only one characteristic of the student at a time, or have used district- or school-level data to examine this issue, more recent studies have used large national student-level datasets and sophisticated methodology to find that the disproportionate identification of African American students with learning disabilities can be attributed to their lower average SES, while the disproportionate identification of Latino youth seems to be attributable to difficulties in distinguishing between linguistic proficiency and learning ability. Although the contributing factors are complicated and interrelated, it is possible to discern which factors really drive disproportionate identification by considering a multitude of student characteristics simultaneously.
For instance, if high-SES minorities have rates of identification similar to the rates among high-SES Whites, and low-SES minorities have rates of identification similar to the rates among low-SES Whites, we can conclude that the seemingly higher rates of identification among minorities result from their greater likelihood of having low SES. In summary, because the risk of identification for White students who have low SES is similar to that of Black students who have low SES, future research and policy reform should focus on identifying the shared qualities or experiences of low-SES youth that lead to their disproportionate identification, rather than focusing exclusively on racial/ethnic minorities. It remains to be determined why lower-SES youth are at higher risk of incidence of learning disabilities, or possibly just of identification.

Society and culture: Learning disabilities in adulthood A common misconception about people with learning disabilities is that they outgrow them as they enter adulthood. This is often not the case, and most adults with learning disabilities still require resources and care to help manage their disability. One available resource is Adult Basic Education (ABE) programs at the state level. ABE programs are allotted certain amounts of funds per state in order to provide resources for adults with learning disabilities, including resources to help them learn basic life skills in order to provide for themselves. ABE programs also provide help for adults who lack a high school diploma or an equivalent; these programs teach skills to help adults enter the workforce or a further level of education. There is a certain pathway that these adults and their instructors should follow to ensure these adults have the abilities needed to succeed in life. Some ABE programs offer GED preparation programs to support adults through the process of getting a GED. It is important to note that ABE programs do not always have the expected outcome on measures like employment. Participants in ABE programs are given tools to help them succeed and get a job, but employment depends on more than just completing an ABE program: it varies with the level of growth a participant experiences in the program, the personality and behavior of the participant, and the job market they enter after completion. Another set of programs that assist adults with disabilities are the federal "home and community based services" (HCBS) programs. Medicaid funds these programs for many people through a fee waiver system; however, many people remain on waiting lists. These programs are primarily used for adults with autism spectrum disorders. HCBS programs offer services dedicated more to caring for the adult than to providing resources for transitioning into the workforce. Some services provided are therapy, social skills training, support groups, and counseling.

Contrast with other conditions: People with an IQ lower than 70 are usually characterized as having an intellectual disability and are not included under most definitions of learning disabilities, because their difficulty in learning is considered to be related directly to their overall low intelligence.

Contrast with other conditions: Attention-deficit hyperactivity disorder (ADHD) is often studied in connection with learning disabilities, but it is not actually included in the standard definitions of learning disabilities.
Individuals with ADHD may struggle with learning, but can often learn adequately once successfully treated for the ADHD. A person can have ADHD but not learning disabilities, or have learning disabilities without having ADHD; the conditions can also co-occur. People diagnosed with ADHD sometimes have impaired learning. Some of the struggles people with ADHD face include lack of motivation, high levels of anxiety, and difficulty processing information. Some studies suggest that people with ADHD generally have a positive attitude toward academics and, with medication and developed study skills, can perform just as well as individuals without learning disabilities. Using alternate sources of gathering information, such as websites, study groups, and learning centers, can also help a person with ADHD be academically successful. Before ADHD was recognized as a distinct condition, the difficulties it causes were technically included in the definition of learning disabilities, since ADHD has a very pronounced effect on the "executive functions" required for learning; thus, historically, ADHD was not clearly distinguished from other disabilities related to learning. Therefore, when a person presents with difficulties in learning, ADHD should be considered as well. Scientific research continues to explore the traits, struggles, effective learning styles, and comorbid learning disabilities of those with ADHD.

Learning disabilities affect the writing process: The ability to express one's thoughts and opinions in an organized fashion and in written form is an essential life skill that individuals are taught and practice repeatedly from youth. The writing process includes, but is not limited to: understanding the genre and style, reading, critical thinking, writing, and proofreading. Individuals with a learning disability may have deficits that impair their ability to carry out these necessary steps and express their thoughts in an organized manner. Reading is a crucial step toward quality writing, and it is often practiced from a young age. Reading increases the attention span, provides exposure to a variety of genres and writing styles, and allows for the accumulation of a wide range of vocabulary. Studies suggest that students with learning disabilities typically have difficulty with word recognition, the process of connecting text to its meaning. This makes the reading process slow and cognitively laborious, which can be a very frustrating experience and causes students with learning disabilities to spend less time reading than their classmates. This in turn can negatively affect the individual's vocabulary acquisition and comprehension development. In the context of standardized test taking, studies show that the strongest predictor of performance on standardized essay writing was vocabulary complexity, specifically the number of words with more than two syllables. Studies have suggested that individuals with ADHD tend to use simple structures and vocabulary. This puts many students with learning disabilities at a disadvantage, since their knowledge of complex vocabulary usually does not compare to that of their peers. Based on such patterns, early interventions, such as reading and writing curriculums from a young age, could provide opportunities for vocabulary acquisition and development. In addition, some students with learning disabilities tend to have difficulty separating the different stages of writing and devote little time to the planning stage.
Oftentimes, they attempt to reflect on their spelling while simultaneously putting ideas together, which overloads their attention system and leads to numerous spelling mistakes. Altogether, the tendency of students with learning disabilities to dedicate little time to the planning and revision process compared to their peers often results in lower coherence and quality in their written compositions, and in lower quality ratings in the case of standardized tests. There is a lack of research in this area due to the complex relationship between the brain and one's ability to articulate ideas in writing. More research should be conducted to assess these factors and to test the effectiveness of various intervention techniques.
**Ramification group**

Ramification group: In number theory, more specifically in local class field theory, the ramification groups are a filtration of the Galois group of a local field extension, which gives detailed information on the ramification phenomena of the extension.

Ramification theory of valuations: In mathematics, the ramification theory of valuations studies the set of extensions of a valuation v of a field K to an extension L of K. It is a generalization of the ramification theory of Dedekind domains. The structure of the set of extensions is known better when L/K is Galois.

Ramification theory of valuations: Decomposition group and inertia group Let (K, v) be a valued field and let L be a finite Galois extension of K. Let S_v be the set of equivalence classes of extensions of v to L and let G be the Galois group of L over K. Then G acts on S_v by σ[w] = [w ∘ σ] (i.e. w is a representative of the equivalence class [w] ∈ S_v and [w] is sent to the equivalence class of the composition of w with the automorphism σ : L → L; this is independent of the choice of w in [w]). In fact, this action is transitive.

Ramification theory of valuations: Given a fixed extension w of v to L, the decomposition group of w is the stabilizer subgroup G_w of [w], i.e. it is the subgroup of G consisting of all elements that fix the equivalence class [w] ∈ S_v.

Ramification theory of valuations: Let m_w denote the maximal ideal of w inside the valuation ring R_w of w. The inertia group of w is the subgroup I_w of G_w consisting of elements σ such that σx ≡ x (mod m_w) for all x in R_w. In other words, I_w consists of the elements of the decomposition group that act trivially on the residue field of w. It is a normal subgroup of G_w.

Ramification theory of valuations: The reduced ramification index e(w/v) is independent of w and is denoted e(v). Similarly, the relative degree f(w/v) is also independent of w and is denoted f(v).

Ramification groups in lower numbering: Ramification groups are a refinement of the Galois group G of a finite Galois extension L/K of local fields. We shall write w, O_L, p for the valuation, the ring of integers and its maximal ideal for L. As a consequence of Hensel's lemma, one can write O_L = O_K[α] for some α ∈ L, where O_K is the ring of integers of K. (This is stronger than the primitive element theorem.) Then, for each integer i ≥ −1, we define G_i to be the set of all s ∈ G that satisfy the following equivalent conditions.

Ramification groups in lower numbering: (i) s operates trivially on O_L/p^{i+1}; (ii) w(s(x) − x) ≥ i + 1 for all x ∈ O_L; (iii) w(s(α) − α) ≥ i + 1. The group G_i is called the i-th ramification group. The ramification groups form a decreasing filtration G_{−1} = G ⊇ G_0 ⊇ G_1 ⊇ ⋯ ⊇ {1}. In fact, the G_i are normal by (i) and trivial for sufficiently large i by (iii). For the lowest indices, it is customary to call G_0 the inertia subgroup of G because of its relation to the splitting of prime ideals, and G_1 the wild inertia subgroup of G. The quotient G_0/G_1 is called the tame quotient. The Galois group G and its subgroups G_i are studied by employing the above filtration or, more specifically, the corresponding quotients. In particular, G/G_0 ≅ Gal(l/k), where l, k are the (finite) residue fields of L, K, and G_0 = 1 ⇔ L/K is unramified.

Ramification groups in lower numbering: G_1 = 1 ⇔ L/K is tamely ramified (i.e., the ramification index is prime to the residue characteristic). The study of ramification groups reduces to the totally ramified case, since one has G_i = (G_0)_i for i ≥ 0. One also defines the function i_G(s) = w(s(α) − α) for s ∈ G.
Condition (ii) above shows that i_G is independent of the choice of α and, moreover, the study of the filtration G_i is essentially equivalent to that of i_G. The function i_G satisfies: i_G(s) ≥ i + 1 ⇔ s ∈ G_i.

Ramification groups in lower numbering: Fix a uniformizer π of L. Then the map s ↦ s(π)/π induces an injection G_i/G_{i+1} → U_{L,i}/U_{L,i+1} for i ≥ 0, where U_{L,0} = O_L^× and U_{L,i} = 1 + p^i. (The map does not depend on the choice of the uniformizer.) It follows that G_0/G_1 is cyclic of order prime to p, and each G_i/G_{i+1} for i ≥ 1 is a product of cyclic groups of order p. In particular, G_1 is a p-group and G_0 is solvable.

Ramification groups in lower numbering: The ramification groups can be used to compute the different D_{L/K} of the extension L/K and that of subextensions: w(D_{L/K}) = ∑_{s≠1} i_G(s) = ∑_{i=0}^∞ (|G_i| − 1). If H is a normal subgroup of G, then, for σ ∈ G/H, i_{G/H}(σ) = (1/e_{L/F}) ∑_{s↦σ} i_G(s), where F = L^H. Combining this with the above, one obtains, for a subextension F/K corresponding to H: v_F(D_{F/K}) = (1/e_{L/F}) ∑_{s∉H} i_G(s). If s ∈ G_i and t ∈ G_j with i, j ≥ 1, then sts^{−1}t^{−1} ∈ G_{i+j+1}. In the terminology of Lazard, this can be understood to mean that the Lie algebra gr(G_1) = ∑_{i≥1} G_i/G_{i+1} is abelian.

Ramification groups in lower numbering: Example: the cyclotomic extension. The ramification groups for a cyclotomic extension K_n := Q_p(ζ)/Q_p, where ζ is a p^n-th primitive root of unity, can be described explicitly: G_s = Gal(K_n/K_e), where K_e = Q_p(ζ_{p^e}) and e is chosen such that p^{e−1} ≤ s < p^e.

Example: a quartic extension. Let K be the extension of Q_2 generated by x_1 = √(2 + √2). The conjugates of x_1 are x_2 = √(2 − √2), x_3 = −x_1, and x_4 = −x_2. A little computation shows that the quotient of any two of these is a unit, hence they all generate the same ideal; call it π. √2 generates π^2, and (2) = π^4.

Ramification groups in lower numbering: Now x_1 − x_3 = 2x_1, which is in π^5, and x_1 − x_2 = √(4 − 2√2), which is in π^3. Various methods show that the Galois group of K is C_4, cyclic of order 4. Also G_0 = G_1 = G_2 = C_4 and G_3 = G_4 = {1, (13)(24)}, with G_i = 1 for i ≥ 5. Hence w(D_{K/Q_2}) = ∑_{i=0}^4 (|G_i| − 1) = 3 + 3 + 3 + 1 + 1 = 11, so that the different is D_{K/Q_2} = π^{11}. Indeed, x_1 satisfies X^4 − 4X^2 + 2, which has discriminant 2048 = 2^{11}.

Ramification groups in upper numbering: If u is a real number ≥ −1, let G_u denote G_i where i is the least integer ≥ u; in other words, G_u = G_{⌈u⌉}.

Ramification groups in upper numbering: Define φ by φ(u) = ∫_0^u dt/(G_0 : G_t), where, by convention, (G_0 : G_t) is equal to (G_{−1} : G_0)^{−1} if t = −1 and is equal to 1 for −1 < t ≤ 0. Then φ(u) = u for −1 ≤ u ≤ 0. It is immediate that φ is continuous and strictly increasing, and thus has a continuous inverse function ψ defined on [−1, ∞). Define G^v = G_{ψ(v)}. G^v is then called the v-th ramification group in upper numbering. In other words, G^{φ(u)} = G_u. Note that G^{−1} = G and G^0 = G_0. The upper numbering is defined so as to be compatible with passage to quotients: if H is normal in G, then (G/H)^v = G^v H/H for all v (whereas lower numbering is compatible with passage to subgroups). Herbrand's theorem states that the ramification groups in the lower numbering satisfy G_u H/H = (G/H)_v for v = φ_{L/F}(u), where L/F is the subextension corresponding to H, and that the ramification groups in the upper numbering satisfy G^u H/H = (G/H)^u. This allows one to define ramification groups in the upper numbering for infinite Galois extensions (such as the absolute Galois group of a local field) from the inverse system of ramification groups for finite subextensions.

Ramification groups in upper numbering: The upper numbering for an abelian extension is important because of the Hasse–Arf theorem.
It states that if G is abelian, then the jumps in the filtration G^v are integers; i.e., G_i = G_{i+1} whenever φ(i) is not an integer. The upper numbering is compatible with the filtration of the norm residue group by the unit groups under the Artin isomorphism: the image of G^n(L/K) under the isomorphism G(L/K)^{ab} ↔ K^*/N_{L/K}(L^*) is just U_K^n / (U_K^n ∩ N_{L/K}(L^*)).
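As a worked illustration (our own computation, not part of the original article), the function φ for the quartic example above can be read off directly from the filtration G_0 = G_1 = G_2 = C_4, G_3 = G_4 of order 2, G_5 = 1:

```latex
% phi(u) for K = Q_2(\sqrt{2 + \sqrt{2}}): integrate 1/(G_0 : G_t)
\varphi(u) = \int_0^u \frac{dt}{(G_0 : G_t)} =
\begin{cases}
  u,                  & 0 \le u \le 2, \\ % (G_0 : G_t) = 1 while G_t = C_4
  2 + \tfrac{u-2}{2}, & 2 \le u \le 4, \\ % index 2 once G_t has order 2
  3 + \tfrac{u-4}{4}, & u \ge 4.          % index 4 once G_t = 1
\end{cases}
```

Hence G^v = C_4 for v ≤ 2, G^v = {1, (13)(24)} for 2 < v ≤ 3, and G^v = 1 for v > 3: the jumps in the upper numbering occur at v = 2 and v = 3, both integers, as the Hasse–Arf theorem requires for the abelian group C_4.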
**Schumacher criteria**

Schumacher criteria: Schumacher criteria are diagnostic criteria that were previously used for identifying multiple sclerosis (MS). Multiple sclerosis, understood as a central nervous system (CNS) condition, can be difficult to diagnose, since its signs and symptoms may be similar to those of other medical problems. Medical organizations have created diagnostic criteria to ease and standardize the diagnostic process, especially in the first stages of the disease. The Schumacher criteria were the first internationally recognized criteria for diagnosis, and introduced concepts still in use, such as CDMS (clinically definite MS).

Schumacher criteria: It has sometimes been stated that the only proven diagnosis of MS is autopsy, or occasionally biopsy, where lesions typical of MS can be detected through histopathological techniques, and that sensitivity and specificity should therefore be calculated for any given criteria.

Context: Historically, the first widespread set of criteria were the Schumacher criteria (sometimes also spelled Schumacker). Currently, testing of cerebrospinal fluid obtained from a lumbar puncture can provide evidence of chronic inflammation of the central nervous system, looking for oligoclonal bands of IgG on electrophoresis, which are inflammation markers found in 75–85% of people with MS. At the time of the Schumacher criteria, however, oligoclonal band tests were not available, and MRI did not yet exist.

Context: The most commonly used diagnostic tools at that time were evoked potentials. The nervous system of a person with MS responds less actively to stimulation of the optic nerve and sensory nerves due to demyelination of such pathways; these brain responses can be examined using visual and sensory evoked potentials. Even so, clinical data alone had to be used for a diagnosis of MS. Schumacher et al. proposed three classifications based on clinical observation: CDMS (clinically definite MS), PrMS (probable MS) and PsMS (possible MS).

Summary: To get a diagnosis of CDMS a patient must show the following:
Clinical signs of a problem in the CNS.
Dissemination in space, shown by clinical evidence of damage in two or more areas of the CNS.
Evidence of white matter involvement.
Dissemination in time, shown by one of these: two or more relapses (each lasting ≥ 24 hours and separated by at least one month), or disability progression (slow or stepwise).
The patient should be between 10 and 50 years old at the time of examination.
No better explanation for the patient's symptoms and signs should exist.
The last condition, no better explanation for symptoms, has been heavily criticised, but it has been preserved and is currently included in the newer McDonald criteria in the form that "no better explanation should exist for MRI observations". (An illustrative encoding of this checklist appears after the next paragraph.)

Influence: These criteria were later substituted by the Poser criteria and the McDonald criteria. The Poser criteria introduced CNS oligoclonal bands into the diagnostic criteria, while the McDonald criteria focus on a demonstration, with clinical, laboratory and radiologic data, of the dissemination of MS lesions in time and space for non-invasive MS diagnosis. All the later criteria were heavily influenced by the original Schumacher work.
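The CDMS requirements above read naturally as a conjunction of conditions. The following Python sketch is a purely illustrative encoding of that checklist, a reading aid rather than clinical software; the field names are our own inventions, and real diagnosis rests on clinical judgment.

```python
from dataclasses import dataclass

# Illustrative encoding of the Schumacher CDMS checklist summarized above.
# Field names are invented for this sketch; this is not clinical software.
@dataclass
class Workup:
    cns_signs: bool              # clinical signs of a CNS problem
    damaged_cns_areas: int       # CNS areas with clinical evidence of damage
    white_matter_involved: bool
    qualifying_relapses: int     # relapses lasting >= 24 h, separated by >= 1 month
    progressive_course: bool     # slow or stepwise disability progression
    age_years: int
    better_explanation: bool     # symptoms better explained by another condition

def is_cdms(w: Workup) -> bool:
    dissemination_in_space = w.damaged_cns_areas >= 2
    dissemination_in_time = w.qualifying_relapses >= 2 or w.progressive_course
    return (w.cns_signs
            and dissemination_in_space
            and w.white_matter_involved
            and dissemination_in_time
            and 10 <= w.age_years <= 50
            and not w.better_explanation)
```

Under this reading, failing any single condition (for example, an age outside 10–50) drops the classification out of CDMS, which matches how the criteria were applied.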
**Srimad Bhagavata Book 2**

Srimad Bhagavata Book 2: The Srimad Bhagavata is one of the main books of Hindu philosophy. The Bhagavata is a devotional account of the Supreme Being and His incarnations. The second book of the Srimad Bhagavata covers the creation of the universe according to Hindu mythology and gives a summary of the Bhagavata. This book consists of 10 chapters. The Bhagavata is authored by Veda Vyasa, and the source material for this summary is the translation presented by Swami Tapasyananda; additional material and analysis is included. For the events leading up to this point, see Srimad Bhagavata Book 1.

Chapter 1: Suka Rishi is very happy to hear Parikshit's question, as its answer will benefit mankind. Suka Rishi tells Parikshit that even though he (Suka) was already completely established in the formless, infinite Atman, his mind was still attracted to Narayana's playful acts in His incarnations. This is an example of the power of Bhakti (devotion) to Narayana even over knowledge of the Atman. Whoever hears the Bhagavata with faith will, through Bhakti, reach the highest state. Sri Suka explains to Parikshit:
The true purpose of life is the attainment of the Spiritual Goal (not only material goals).
Life is very short and should not be wasted.
The importance of meditation and renunciation in order to purify the mind.
The beauty of the Bhagavata and the story of Vishnu and His playful acts, which will lead to love of Vishnu.
At least think of renunciation when death is approaching.
Meditate on the cosmic form (Viratpurusha).
Description of the cosmic form, in which each part represents something abstract: for example, the Vedas are the top of Vishnu's head, and the wind is Vishnu's breath. All the worlds are parts of Vishnu's body.

Chapter 2: The importance of renunciation. Meditate on the form of Narayana in the space within the heart:
This form is similar to Vishnu's four-armed form with the lotus, discus, mace, and conch.
One should meditate on the whole form until the mind is steady.
One should then meditate on each part of the Lord individually, starting with the feet and going up to the face.
If an individual cannot do this, he or she can meditate on the entire universe as a form of the Lord (Viratrupa).
Gross substances are made of subtler substances; this idea is one of the fundamental axioms of Hindu philosophy.

Chapter 2: For example, jewelry (a gross substance) is made of gold (a subtler substance). One should raise the energy from the lowest chakra (Muladhara) all the way to the highest one (Brahmarandhra). The idea of renunciation is to merge the gross senses/elements/organs into their subtler versions one by one until that (the Mahattattva, or great element) is merged in Prakriti. At this point the Jiva (individual) is one with Brahman, the Supreme.

Chapter 3: People who want a specific object or status should worship a specific deity:
Worshipping Brahma gives Vedic learning and powers.
Worshipping Sri (Lakshmi) gives wealth.
However, all the deities get their powers from the Supreme. By worshipping the Supreme, one gets all one's material and spiritual desires fulfilled. The true purpose of life is to worship the Supreme and develop Bhakti (devotion).
Chapter 4: Parikshit asks Suka Rishi numerous questions about the Supreme, His power Maya (the power of illusion), and the knowledge of the Atman (soul). Suka Rishi praises the Supreme as:
The director of the creation, preservation, and dissolution of the universe.
The one who gives all rewards (including liberation) and punishments; thinking of and worshipping the Supreme destroys all sins.
Having numerous self-willed incarnations for the good of all.
The creator of the Vedas.
The one who creates and lives in the bodies of all.

Chapter 5: Narada believes that Brahma created the universe and praises Brahma as the Supreme, but wonders why Brahma then had to perform great austerities. Brahma takes the opportunity to praise Vasudeva, the true creator of the universe, as the one even above Brahma. Brahma describes the creation of the universe:
Vishnu created Maya (the agent that causes people to associate the world of objects with themselves); Vishnu is Himself unaffected by Maya.
The existence of Maya led to the creation of matter, Kala (time), Karma (the effects of actions on the future), Swabhava (nature), and the Jiva (individual soul).
Everything is an aspect of Narayana.
Even though the Iswara (Narayana) and the Jiva (individual) are part of the same Atman (all-pervading spirit and life-force), Iswara knows the truth about this and is free, while the Jiva thinks himself to be mortal, and is bound. This idea is one of the key ideas of the philosophy taught in the Bhagavata.

Chapter 5: This led to the creation of the Mahattattva, which led to the creation of the subsequent categories. The first of the categories is the three Gunas (or modes of nature): Sattva (good), Rajas (average), and Tamas (bad). Ahankara (the ego) is mostly Tamas. The elements were evolved from Tamas in the following order:
Space (with the property of sound)
Wind (with the property of touch)
Fire (with the property of sight)
Water (with the property of taste)
Earth (with the property of smell)
This ordering describes how the gross elements evolved from the subtle elements. The Manas (mind) was born out of the Sattva aspect of Ahankara. The Buddhi (intellect) and Prana (life-breath) were born out of the Rajas aspect of Ahankara. From these come the sense organs and organs of action. The universe was created, but existed in an inert state; Narayana entered into the universe and gave it life. All the worlds are part of Narayana's universal form.

Chapter 6: Brahma continues explaining Narayana's Universal Form to Narada. The Purusha is Narayana's Universal Form, which is described in the Purusha Sukta. Each part of the Purusha is the original and complete prototype and contains all the related senses and objects; an infinitesimal part is found in the human body. The Supreme is unaffected by all creation, and is in the ultimate state of Sat-Chit-Ananda (existence-knowledge-bliss).

Chapter 6: Worldly life is restricted to the three worlds of Bhu (earth), Bhava (the intermediate regions), and Svah (heaven). There are four regions higher than these, which are achieved by the paths of knowledge and God-dedicated correct action. Vidya (knowledge of the Supreme) leads to Moksha, while Avidya (ignorance of the Spiritual Truth) leads to being bound in the cycle of Samsara (worldly life). All Yajna is an offering of the Purusha to the Purusha done by the Purusha: the individual performing the Yajna, the materials involved, and the goal of the Yajna are all Purusha!
The Supreme directs Brahma to create the world.

Chapter 7: A summary of Narayana's incarnations and glories, told by Brahma to Narada Muni:
Cosmic Boar – to rescue the Earth.
Suyajna – removes the sufferings of all; for this, He is called Hari (the remover of sufferings).
Kapila Muni – to give the Supreme Knowledge (Sankhya philosophy).
Dattatreya – the son of Rishi Atri and Anasuya.
Sanaka, Sanandana, Sanatana, Sanatakumara – four sons of Brahma who are among the greatest of sages.
Nara and Narayana – two great sages.
Narayana came to help Dhruva after his prayers.
Prithu – a very good king who brought out multiple resources from the earth.
Rishaba – a very great sage.
Hayagriva – with the neck of a horse.
Cosmic Fish – saved the earth during the deluge.
Divine Tortoise – during the churning of the ocean.
Man-lion – to help Prahlada and destroy Hiranyakasipu.
To save the lordly elephant (Gajendra).
Vamana – to win back the worlds from Mahabali and give them to Indra.
Vishnu incarnates in each Manvantara to protect the Manu.
Dhanvantari – cures men of diseases by the power of His name.
Parasurama – destroyed the rulers as they had become corrupt.
Sri Rama – to destroy Ravana and teach the worlds about righteous living.
Narayana's incarnation as Krishna is given special emphasis, with a summary of Krishna's deeds, especially His charming, playful childhood mischief, followed by Narayana's future incarnations. Narayana's powers, glories, and incarnations are infinite. Brahmaji concludes that whoever recites and/or hears about these incarnations of the Lord with faith and devotion, and enjoys thinking about the Lord's actions, will be free from Maya and eventually reach the Highest State.

Chapter 8: Narayana quickly enters the heart of one who thinks of His glories constantly; this completely purifies the devotee. Parikshit asks Sri Suka 20 questions whose answers form the Bhagavata, including the nature of the Atman, the difference between man and God, creation, the result of actions, incarnations, duties, and rituals.

Chapter 9: Maya (Narayana's illusory power) causes the Jiva to identify itself with the body. Brahma is unsure how to proceed with creation and hears "Tapa, Tapa" ("Meditate! Meditate!"). Brahma meditates for many divine years and Vishnu appears. Brahma sees Vishnu's realm (Vaikuntha), where all beings have Vishnu's wonderful four-armed form:
Vaikuntha is beyond the constraints of worldly life (such as time).
Sri Devi lives in Vaikuntha, constantly praising Vishnu.
Tapas (meditation) is the core of Vishnu's being.
Everything is Vishnu.
Brahma asks Vishnu for the knowledge of His powers and of the creation of the universe. Brahma wants to understand how to proceed with the creation of the universe (Brahma's duty) without taking pride in, or becoming attached to, his position and accomplishments. Vishnu teaches Brahma the Supreme Knowledge:
Only the Supreme exists before creation and after dissolution.
Maya is a reflection superimposed on the Atman without any reality of its own, which does not change the Atman in any way.
The elements combine into things but still keep their pure forms.
The Supreme creates the beings but is not bound by them in any way.
The Supreme Spirit creates and persists through everything, but is not affected by them or by their destruction.
The knowledge given in the Bhagavata comes from Narayana, who taught it to Brahma, who taught it to Narada (his son), who taught it to Maharishi Veda Vyasa, who taught it to Suka Rishi.

Chapter 10: The Viratpurusha (Cosmic Person) created the Cosmic Waters in order to have a place to exist.
Therefore, the Viratpurusha is known as Narayana, the one who rests in the water.

Chapter 10: Description of the Gross Cosmic Form of Narayana. Every part of the Cosmic Form was formed with four entities:
The place (such as the mouth)
The organ (such as the tongue)
The sense object (such as taste)
The deity (such as Varuna)
The first part to develop was the mouth, in order to satisfy hunger. Suka Rishi describes the creation of the categories in the creative cycle (Kalpa); more details will be given in the next book of the Bhagavata. For the continuation of the Bhagavata, see Srimad Bhagavata Book 3.
**Design around**

Design around: In the field of patents, the phrase "to design around" means to invent an alternative to a patented invention that does not infringe the patent's claims. The phrase can also refer to the invention itself.

Design around: Design-arounds are considered to be one of the benefits of patent law. Because patent law grants monopoly rights to inventors in exchange for disclosing how to make and use their inventions, others are given both the information and the incentive to invent competitive alternatives that design around the original patent. In the field of vaccines, for example, design-arounds are considered fairly easy: it is often possible to use the original patent as a guide for developing an alternative that does not infringe it. Design-arounds can also be a defense against patent trolls, since the license fee a patent troll can demand is limited by the cost of designing around the troll's patent(s). To defend against design-arounds, inventors often develop a large portfolio of interlocking patents, sometimes called a patent thicket, so that a competitor must avoid many patents when designing an alternative.
**Genu recurvatum**

Genu recurvatum: Genu recurvatum is a deformity in the knee joint in which the knee bends backwards. In this deformity, excessive extension occurs in the tibiofemoral joint. Genu recurvatum is also called knee hyperextension and back knee. The deformity is more common in women and in people with familial ligamentous laxity. Hyperextension of the knee may be mild, moderate or severe. The normal range of motion (ROM) of the knee joint is from 0 to 135 degrees in an adult, and full knee extension should be no more than 10 degrees beyond neutral. In genu recurvatum, normal extension is increased. The development of genu recurvatum may lead to knee pain and knee osteoarthritis.

Causes: The following factors may be involved in causing this deformity:
Inherent laxity of the knee ligaments
Weakness of the biceps femoris muscle
Instability of the knee joint due to ligament and joint capsule injuries
Inappropriate alignment of the tibia and femur
Malunion of the bones around the knee
Weakness in the hip extensor muscles
Gastrocnemius muscle weakness (in the standing position)
Upper motor neuron lesion (for example, hemiplegia as the result of a cerebrovascular accident)
Lower motor neuron lesion (for example, in post-polio syndrome)
Deficit in joint proprioception
Lower limb length discrepancy
Congenital genu recurvatum
Cerebral palsy
Muscular dystrophy
Limited dorsiflexion (plantar flexion contracture)
Popliteus muscle weakness
Connective tissue disorders, in which there are problems with excessive joint mobility (joint hypermobility); these disorders include Marfan syndrome, Loeys–Dietz syndrome, Ehlers–Danlos syndrome, benign hypermobile joint syndrome, and osteogenesis imperfecta.

Pathophysiology: The most important factors of knee stability include:
Ligaments of the knee: the knee joint is stabilized by four main ligaments: the anterior cruciate ligament (ACL), which has an important role in stabilizing knee extension by preventing the knee from hyperextending; the posterior cruciate ligament (PCL); the medial collateral ligament (MCL); and the lateral collateral ligament (LCL)
Joint capsule or articular capsule (especially the posterior knee capsule)
Quadriceps femoris muscle
Appropriate alignment of the femur and tibia (especially in the knee extension position)

Treatment: Treatment generally includes the following:
Sometimes pharmacologic therapy for treatment of the underlying disease
Physical therapy: physiotherapy is beneficial for patients with complaints of pain or discomfort
Occupational therapy
Use of appropriate assistive devices such as orthoses
Surgery

Incidence: This condition is considered to be rare, with about 1 in 100,000 births being affected by the congenital form of genu recurvatum, although it is a common feature in some disorders, such as joint hypermobility, which affects 1 in 30 people.
**Palivizumab**

Palivizumab: Palivizumab, sold under the brand name Synagis, is a monoclonal antibody produced by recombinant DNA technology used to prevent severe disease caused by respiratory syncytial virus (RSV) infections. It is recommended for infants at high risk for RSV due to conditions such as prematurity or other medical problems including heart or lung diseases. The most common side effects include fever and rash. Palivizumab is a humanized monoclonal antibody (IgG) directed against an epitope in the A antigenic site of the F protein of RSV. In two phase III clinical trials in the pediatric population, palivizumab reduced the risk of hospitalization due to RSV infection by 55% and 45%. Palivizumab is dosed once a month via intramuscular (IM) injection, administered throughout the duration of the RSV season, which, based on past trends, has started between mid-September and mid-November. Palivizumab targets the fusion protein of RSV, inhibiting its entry into the cell and thereby preventing infection. Palivizumab was approved for medical use in 1998.

Medical use: Palivizumab is indicated for the prevention of serious lower respiratory tract disease requiring hospitalization caused by the respiratory syncytial virus (RSV) in children at high risk for RSV disease: children born at 35 weeks of gestation or less and less than six months of age at the onset of the RSV season; children less than two years of age requiring treatment for bronchopulmonary dysplasia within the last six months; and children less than two years of age with hemodynamically significant congenital heart disease. The American Academy of Pediatrics has published guidelines for the use of palivizumab. The most recent updates to these recommendations are based on new information regarding RSV seasonality, palivizumab pharmacokinetics, the incidence of bronchiolitis hospitalizations, the effect of gestational age and other risk factors on RSV hospitalization rates, the mortality of children hospitalized with RSV infection, the effect of prophylaxis on wheezing, and palivizumab-resistant RSV isolates.

Medical use: RSV prophylaxis Palivizumab is recommended for all infants younger than one year who were born at <29 weeks (i.e. ≤28 weeks, 6 days) of gestation. It is also recommended as prophylaxis for infants younger than one year with bronchopulmonary dysplasia (i.e. who were born at <32 weeks of gestation and required supplemental oxygen for the first 28 days after birth) and for infants younger than two years with bronchopulmonary dysplasia who require medical therapy (e.g. supplemental oxygen, glucocorticoids, diuretics) within six months of the anticipated RSV season. Taking palivizumab prophylactically decreases the number of RSV infections, decreases wheezing, and may decrease the rate of hospitalization attributed to RSV. Few negative side effects have been reported. It is not clear whether palivizumab is effective and safe for children with other medical conditions that put them at higher risk for serious cases of RSV, such as deficiencies of the immune system. Since the risk of RSV decreases after the first year following birth, the use of palivizumab in children more than 12 months of age is generally not recommended, with the exception of premature infants who need supplemental oxygen, bronchodilator therapy, or steroid therapy at the time of their second RSV season.

Medical use: RSV prophylaxis target groups Infants younger than one year of age with hemodynamically significant congenital heart disease.
Infants younger than one year of age with neuromuscular disorders impairing the ability to clear secretions from the upper airways, or with pulmonary abnormalities. Children younger than two years of age who are immunocompromised (e.g. those with severe combined immunodeficiency, or those younger than two years of age who have undergone lung transplantation or hematopoietic stem cell transplantation) during the RSV season. Children with Down syndrome who have additional risk factors for lower respiratory tract infections such as congenital heart disease, chronic lung disease, or premature birth. Alaska Native and American Indian infants. Decisions regarding palivizumab prophylaxis for children in these groups should be made on a case-by-case basis.

Medical use: RSV treatment Because palivizumab is a passive antibody, it is ineffective in the treatment of RSV infection, and its administration is not recommended for this indication. A 2019 Cochrane review found no differences between palivizumab and placebo on outcomes of mortality, length of hospital stay, and adverse events in infants and children aged up to three years old with RSV. Larger RCTs will be required before palivizumab can be recommended as a treatment option. If an infant contracts an RSV infection despite the use of palivizumab during the RSV season, monthly doses of palivizumab may be discontinued for the rest of the RSV season due to the low risk of re-hospitalization. Studies are in progress to determine new treatments for RSV rather than solely prophylaxis.

Mechanism of action: Palivizumab is a monoclonal antibody that targets the fusion (F) glycoprotein on the surface of RSV and deactivates it. The F protein is a membrane protein responsible for fusing the virus with its target cell and is highly conserved among subgroups of RSV. Deactivating the F protein prevents the virus from fusing with its target's cell membrane and thus from entering the host cell.

Mechanism of action: Pharmacodynamics Palivizumab has demonstrated significantly higher affinity and potency in neutralizing both the A and B subtypes of RSV when compared with RSV-IGIV. Treatment with 2.5 mg/kg of palivizumab led to a serum concentration of 25–30 μg/mL in cotton rats and reduced RSV titers in their lungs by 99%.

Mechanism of action: Pharmacokinetics Absorption A 2008 meta-analysis found that palivizumab absorption was quicker in the pediatric population than in adults (ka = 1.01/day vs. ka = 0.373/day). The intramuscular bioavailability of this drug is approximately 70% in healthy young adults. The current recommendation for RSV immunoprophylaxis is administration of five 15 mg/kg doses of palivizumab to maintain body concentrations above 40 μg/mL.

Mechanism of action: Distribution The volume of distribution is approximately 4.1 liters.

Mechanism of action: Clearance Palivizumab has a drug clearance (CL) of approximately 198 mL/day. The half-life of this drug is approximately 20 days, with three doses sustaining body concentrations that last the entire RSV season (5 to 6 months). A 2008 meta-analysis estimated clearance in the pediatric population by considering maturation of CL and body weight, which showed a significant reduction compared to adults.
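To make the dosing arithmetic above concrete, here is a toy one-compartment superposition model in Python using the parameter values quoted in this section (F ≈ 0.70, ka ≈ 1.01/day, V ≈ 4.1 L, CL ≈ 198 mL/day). Those figures come from different study populations, and the 60 kg body weight is a hypothetical example, so this sketch only illustrates the shape of the concentration profile; it is not a dosing tool.

```python
import numpy as np

# Toy superposition model for monthly IM dosing, using values quoted above.
# These figures come from different study populations and the 60 kg weight
# is hypothetical, so this only illustrates the arithmetic.
F, ka, V, CL = 0.70, 1.01, 4.1, 0.198     # bioavailability, 1/day, L, L/day
ke = CL / V                                # first-order elimination rate, ~0.048/day
dose_mg = 15 * 60                          # 15 mg/kg for a hypothetical 60 kg subject
tau, n_doses = 30.0, 5                     # five monthly doses

def conc_single(t_since_dose):
    """Bateman function: mg/L (= ug/mL) contributed by one IM dose."""
    if t_since_dose < 0:
        return 0.0
    peak_factor = (F * dose_mg * ka) / (V * (ka - ke))
    return peak_factor * (np.exp(-ke * t_since_dose) - np.exp(-ka * t_since_dose))

# Trough concentration one dosing interval after each dose, by superposing
# the contributions of all doses given so far.
for k in range(1, n_doses + 1):
    t = k * tau
    trough = sum(conc_single(t - j * tau) for j in range(k))
    print(f"day {t:5.0f}: trough = {trough:5.1f} ug/mL")
```

With these numbers the first trough comes out just under the 40 μg/mL target and later troughs accumulate above it, which is broadly consistent with the five-dose monthly regimen described above.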
Side effects: Palivizumab use may cause side effects, which include, but are not limited to: sore throat, runny nose, redness or irritation at the injection site, vomiting, and diarrhea. Some more serious side effects include severe skin rash, itching, hives (urticaria), and difficulty breathing.

Contraindications: Contraindications for the use of palivizumab include hypersensitivity reactions upon exposure to palivizumab. Serious cases of anaphylaxis have been reported after exposure to palivizumab. Signs of hypersensitivity include hives, shortness of breath, hypotension, and unresponsiveness. No other contraindications for palivizumab have been reported. Further studies are needed to determine whether any drug-drug interactions exist, as none have been conducted yet.

Cost: Palivizumab is a relatively expensive medication, with a 100-mg vial ranging from $904 to $1866. Multiple studies done by both the manufacturer and independent researchers to determine the cost-effectiveness of palivizumab have found conflicting results, and the heterogeneity between these studies makes them difficult to compare. Given that there is no consensus about the cost-effectiveness of palivizumab, usage largely depends on the location of care and individual risk factors. A 2013 meta-analysis reported that palivizumab prophylaxis was a dominant strategy with an incremental cost-effectiveness ratio of $2,526,203 per quality-adjusted life-year (QALY). It also showed an incremental cost-effectiveness ratio for preterm infants of between $5,188 and $791,265 per QALY, from the payer perspective. However, as previously stated, the cost-effectiveness of palivizumab is undecided, and this meta-analysis is only one example of how society can benefit from palivizumab prophylaxis.

History: The disease burden of RSV in young infants and its global prevalence have prompted attempts at vaccine development. As of 2019, there was no approved vaccine for RSV prevention. A formalin-inactivated RSV vaccine (FIRSV) was studied in the 1960s. Immunized children who were exposed to the virus in the community developed an enhanced form of RSV disease, presenting with wheezing, fever, and bronchopneumonia. This enhanced form of the disease led to hospitalization of 80% of the recipients of FIRSV, compared to 5% in the control group. Additionally, two fatalities occurred among the vaccine recipients upon reinfection in subsequent years. Subsequent attempts to develop an attenuated live virus vaccine with optimal immune response and minimal reactogenicity have been unsuccessful. Further research on animal subjects suggested that intravenously administered immunoglobulin with high RSV-neutralizing activity can protect against RSV infection. In 1995, the U.S. Food and Drug Administration (FDA) approved the use of RespiGam (RSV-IGIV) for the prevention of serious lower respiratory tract infection caused by RSV in children younger than 24 months of age with bronchopulmonary dysplasia or a history of premature birth. The success of RSV-IGIV demonstrated the efficacy of immunoprophylaxis and prompted research into further technologies. Thus, palivizumab was developed as an antibody that was found to be fifty times more potent than its predecessor. This antibody has been widely used against RSV since its approval in 1998. Palivizumab, originally known as MEDI-493, was developed as an RSV immune prophylaxis tool that was easier to administer and more effective than the existing tools of that time (the 1990s). It was developed over a 10-year period by MedImmune Inc.
by combining human and mouse DNA. Specifically, antibody production was stimulated in a mouse model following immunization with RSV. The antibody-producing B cells were isolated from the mouse's spleen and fused with mouse myeloma cell lines. The antibodies were then humanized by cloning and sequencing the DNA from both the heavy and light chains of the monoclonal antibody. Overall, the monoclonal antibody is 95% identical to other human antibodies, with the remaining 5% derived from the original mouse DNA.
**Domestic flight**

Domestic flight: A domestic flight is a form of commercial flight within civil aviation where the departure and the arrival take place in the same country. Airports serving only domestic flights are known as domestic airports. Domestic flights are generally cheaper and shorter than most international flights, although some international flights may be cheaper than domestic ones due to the short distance between the pair of cities in different countries, and because domestic flights in smaller countries may be used mainly by high-paying business travellers, while leisure travellers use road or rail domestically.

Domestic flight: Domestic flights are the only sector of aviation not exhibiting a long-term global growth trend, as many smaller countries increasingly replace short domestic routes with high-speed rail; that said, most of the busiest air routes in the world are domestic flights. Some smaller countries, like Singapore, have no scheduled domestic flights. Medium-sized countries like the Netherlands have very few domestic flights; most of them are merely a leg between small regional airports such as Groningen Airport Eelde, Maastricht Aachen Airport and Rotterdam The Hague Airport to pick up passengers from various parts of the country before proceeding to international destinations. In June 2013, Dutch MP Liesbeth van Tongeren (GreenLeft, previously director of Greenpeace Netherlands) proposed prohibiting domestic flights in the Netherlands, arguing that they are needlessly inefficient, polluting and expensive, but Environment Secretary Wilma Mansveld (Labour Party) said such a ban would violate EU regulations that allow airlines to fly domestically. Some political debaters propose short-haul flight bans in a number of countries.
**One-parameter group**

One-parameter group: In mathematics, a one-parameter group or one-parameter subgroup usually means a continuous group homomorphism φ : R → G from the real line R (as an additive group) to some other topological group G. If φ is injective then φ(R), the image, will be a subgroup of G that is isomorphic to R as an additive group. One-parameter groups were introduced by Sophus Lie in 1893 to define infinitesimal transformations. According to Lie, an infinitesimal transformation is an infinitely small transformation of the one-parameter group that it generates. It is these infinitesimal transformations that generate a Lie algebra that is used to describe a Lie group of any dimension.

One-parameter group: The action of a one-parameter group on a set is known as a flow. A smooth vector field on a manifold, at a point, induces a local flow, a one-parameter group of local diffeomorphisms, sending points along integral curves of the vector field. The local flow of a vector field is used to define the Lie derivative of tensor fields along the vector field.

Examples: Such one-parameter groups are of basic importance in the theory of Lie groups, for which every element of the associated Lie algebra defines such a homomorphism, the exponential map. In the case of matrix groups it is given by the matrix exponential. Another important case is seen in functional analysis, with G being the group of unitary operators on a Hilbert space; see Stone's theorem on one-parameter unitary groups. In his 1957 monograph Lie Groups, P. M. Cohn gives the following theorem on page 58: Any connected 1-dimensional Lie group is analytically isomorphic either to the additive group of real numbers R, or to T, the additive group of real numbers mod 1. In particular, every 1-dimensional Lie group is locally isomorphic to R.

Physics: In physics, one-parameter groups describe dynamical systems. Furthermore, whenever a system of physical laws admits a one-parameter group of differentiable symmetries, then there is a conserved quantity, by Noether's theorem.

Physics: In the study of spacetime, the use of the unit hyperbola to calibrate spatio-temporal measurements has become common since Hermann Minkowski discussed it in 1908. The principle of relativity was reduced to the arbitrariness of which diameter of the unit hyperbola was used to determine a world-line. Using the parametrization of the hyperbola with hyperbolic angle, the theory of special relativity provided a calculus of relative motion with the one-parameter group indexed by rapidity. Rapidity replaces velocity in the kinematics and dynamics of relativity theory. Since rapidity is unbounded, the one-parameter group it stands upon is non-compact. The rapidity concept was introduced by E. T. Whittaker in 1910, and named by Alfred Robb the next year. The rapidity parameter amounts to the length of a hyperbolic versor, a concept of the nineteenth century. Mathematical physicists James Cockle, William Kingdon Clifford, and Alexander Macfarlane had all employed in their writings an equivalent mapping of the Cartesian plane by the operator (cosh a + r sinh a), where a is the hyperbolic angle and r^2 = +1.

In GL(n,ℂ): An important example in the theory of Lie groups arises when G is taken to be GL(n;C), the group of invertible n × n matrices with complex entries. In that case, a basic result is the following: Theorem: Suppose φ : R → GL(n;C) is a one-parameter group.
Then there exists a unique n × n matrix X such that φ(t) = e^{tX} for all t ∈ R. It follows from this result that φ is differentiable, even though this was not an assumption of the theorem. The matrix X can then be recovered from φ as the derivative at t = 0: dφ(t)/dt |_{t=0} = d/dt e^{tX} |_{t=0} = (X e^{tX}) |_{t=0} = X e^0 = X. This result can be used, for example, to show that any continuous homomorphism between matrix Lie groups is smooth.

Topology: A technical complication is that φ(R) as a subspace of G may carry a topology that is coarser than that on R; this may happen in cases where φ is injective. Think for example of the case where G is a torus T, and φ is constructed by winding a straight line round T at an irrational slope.

Topology: In that case the induced topology may not be the standard one of the real line.
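A small numerical sketch of the GL(n;C) theorem (our own illustration, using NumPy and SciPy): pick a generator X, check the homomorphism property φ(s + t) = φ(s)φ(t), and recover X as the derivative of φ at t = 0. The generator chosen here also ties back to the rapidity discussion above, since e^{aX} is then a hyperbolic rotation of the plane.

```python
import numpy as np
from scipy.linalg import expm

# Boost generator: since X @ X = identity, expm(a * X) equals
# [[cosh a, sinh a], [sinh a, cosh a]], a hyperbolic rotation.
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def phi(t):
    """One-parameter group phi(t) = exp(tX) in GL(2, R) subset of GL(2, C)."""
    return expm(t * X)

s, t = 0.3, 0.7
assert np.allclose(phi(s + t), phi(s) @ phi(t))   # homomorphism property

# Recover the generator as the numerical derivative of phi at t = 0.
h = 1e-6
X_recovered = (phi(h) - phi(-h)) / (2 * h)
assert np.allclose(X_recovered, X, atol=1e-6)

# Check the closed form against cosh/sinh.
a = 0.5
assert np.allclose(phi(a), [[np.cosh(a), np.sinh(a)],
                            [np.sinh(a), np.cosh(a)]])
```

Any other choice of X yields another one-parameter subgroup of GL(2, C) in exactly the same way.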
**Contourlet**

Contourlet: Contourlets form a multiresolution directional tight frame designed to efficiently approximate images made of smooth regions separated by smooth boundaries. The contourlet transform has a fast implementation based on a Laplacian pyramid decomposition followed by directional filter banks applied on each bandpass subband.

Contourlet transform: Introduction and motivation In the field of geometrical image transforms, there are many 1-D transforms designed for detecting or capturing the geometry of image information, such as the Fourier and wavelet transforms. However, the ability of 1-D transforms to capture intrinsic geometrical structure, such as the smoothness of curves, is limited to one direction, so more powerful representations are required in higher dimensions. The contourlet transform, which was proposed by Do and Vetterli in 2002, is a two-dimensional transform method for image representation. The contourlet transform has the properties of multiresolution, localization, directionality, critical sampling and anisotropy. Its basis functions are multiscale and multidimensional. The contours of original images, which are the dominant features in natural images, can be captured effectively with a few coefficients by using the contourlet transform.

Contourlet transform: The contourlet transform is inspired by the human visual system and by the curvelet transform, which can capture the smoothness of the contours of images with different elongated shapes and in a variety of directions. However, it is difficult to sample the curvelet transform on a rectangular grid, since the curvelet transform was developed in the continuous domain and directions other than horizontal and vertical look very different on a rectangular grid. Therefore, the contourlet transform was proposed initially as a directional multiresolution transform in the discrete domain.

Contourlet transform: Definition The contourlet transform uses a double filter bank structure to get the smooth contours of images. In this double filter bank, the Laplacian pyramid (LP) is first used to capture the point discontinuities, and then a directional filter bank (DFB) is used to form those point discontinuities into linear structures. The Laplacian pyramid (LP) decomposition produces only one bandpass image at each level in a multidimensional signal processing setting, which avoids frequency scrambling. The directional filter bank (DFB) is only suitable for high frequencies, since it leaks the low-frequency content of signals into its directional subbands. This is the reason to combine the DFB with the LP, whose multiscale decomposition removes the low frequencies before directional filtering. Image signals thus pass through the LP subbands to get bandpass signals, and those signals are passed through the DFB to capture the directional information of the image. This double filter bank structure combining LP and DFB is also called a pyramid directional filter bank (PDFB); since this transform approximates the original image using basic contour segments, it is also called the discrete contourlet transform. (A minimal code sketch of the LP stage appears at the end of this article.)

Contourlet transform: The properties of the discrete contourlet transform If perfect-reconstruction filters are used for both the LP decomposition and the DFB, then the discrete contourlet transform can reconstruct the original image perfectly, which means it provides a frame operator. If orthogonal filters are used for both the LP decomposition and the DFB, then the discrete contourlet transform provides a tight frame with frame bounds equal to 1.
Contourlet transform: The upper bound for the redundancy ratio of the discrete contourlet transform is 4/3. If the $j$-th pyramidal level of the LP is fed into an $l_j$-level DFB, the basis images of the contourlet transform have a size of width $\approx 2^j$ and length $\approx 2^{j+l_j-2}$. When FIR filters are used, the computational complexity of the discrete contourlet transform is $O(N)$ for $N$-pixel images. Nonsubsampled contourlet transform: Motivation and applications The contourlet transform has a number of useful features and qualities, but it also has its flaws. One of the more notable variations of the contourlet transform was developed and proposed by da Cunha, Zhou and Do in 2006. The nonsubsampled contourlet transform (NSCT) was developed mainly because the contourlet transform is not shift invariant. The reason for this lies in the up-sampling and down-sampling present in both the Laplacian pyramid and the directional filter banks. The method used in this variation was inspired by the nonsubsampled (stationary) wavelet transform, which is computed with the à trous algorithm. Though the contourlet and this variant are relatively new, they have been used in many different applications, including synthetic aperture radar despeckling, image enhancement and texture classification. Nonsubsampled contourlet transform: Basic concept To retain the directional and multiscale properties of the transform, the Laplacian pyramid was replaced with a nonsubsampled pyramid structure to retain the multiscale property, and a nonsubsampled directional filter bank was used for directionality. The first notable difference is that upsampling and downsampling are removed from both processes. Instead, the filters in both the Laplacian pyramid and the directional filter banks are upsampled (a 1-D sketch of this idea follows this section). Though this mitigates the shift-invariance issue, a new issue arises with aliasing and the directional filter bank: when processing the coarser levels of the pyramid there is potential for aliasing and loss of resolution. This issue is avoided by upsampling the directional filter bank filters, as was done with the filters from the pyramidal filter bank. The next issue with this transform is the design of the filters for both filter banks. According to the authors, there were several properties they desired for this transform: perfect reconstruction, a sharp frequency response, easy implementation and linear-phase filters. These features were achieved by first removing the tight-frame requirement, then using a mapping to design the filters, and finally implementing a ladder-type structure. These changes lead to a transform that is not only efficient but also performs well in comparison to other similar, and in some cases more advanced, transforms for denoising and enhancing images. Variations of the contourlet transform: Wavelet-based contourlet transform Although the wavelet transform is not optimal in capturing the 2-D singularities of images, it can take the place of the LP decomposition in the double filter bank structure to make the contourlet transform a non-redundant image transform. The wavelet-based contourlet transform is similar to the original contourlet transform, and it also consists of two filter bank stages. In the first stage, the wavelet transform is used to do the sub-band decomposition instead of the Laplacian pyramid (LP) of the contourlet transform. The second stage of the wavelet-based contourlet transform is still a directional filter bank (DFB), which provides the linking of singular points.
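The idea of upsampling the filters instead of subsampling the signal, central to the nonsubsampled construction above, can be illustrated in one dimension. The sketch below is a toy version of the à trous scheme under stated assumptions (a fixed binomial low-pass kernel, two levels), not the NSCT's actual filter design.

```python
import numpy as np
from scipy.ndimage import convolve1d

def atrous_kernel(h, level):
    """Upsample a filter by inserting 2**level - 1 zeros between its taps."""
    step = 2 ** level
    out = np.zeros((len(h) - 1) * step + 1)
    out[::step] = h
    return out

h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # toy binomial low-pass kernel

signal = np.random.rand(64)
smooth0 = convolve1d(signal, atrous_kernel(h, 0), mode="reflect")
smooth1 = convolve1d(smooth0, atrous_kernel(h, 1), mode="reflect")
detail1 = smooth0 - smooth1  # bandpass level, same length as the input: no subsampling
```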
One of the advantages of the wavelet-based contourlet transform is that wavelet-based contourlet packets are similar to wavelet packets: they allow a quad-tree decomposition of both low-pass and high-pass channels, after which the DFB is applied to each sub-band. Variations of the contourlet transform: The hidden Markov tree (HMT) model for the contourlet transform Based on the study of the statistics of contourlet coefficients of natural images, an HMT model for the contourlet transform has been proposed. The statistics show that the contourlet coefficients are highly non-Gaussian, highly dependent on their eight neighbors, and highly dependent across directions on their cousins. Therefore, the HMT model, which captures the highly non-Gaussian property, is used to obtain the dependence on the neighborhood through links between the hidden states of the coefficients. This HMT model of contourlet transform coefficients gives better results than the original contourlet transform and other HMT-modeled transforms in denoising and texture retrieval, since it restores edges better visually. Variations of the contourlet transform: Contourlet transform with sharp frequency localization An alternative or variation of the contourlet transform was proposed by Lu and Do in 2006. This new method was intended as a remedy for basis images that are not localized in frequency. The issue with the original contourlet transform was that when it was used with imperfect filter bank filters, aliasing occurred and the frequency-domain resolution was affected. There are two contributing factors to the aliasing: the first is the periodicity of 2-D frequency spectra, and the second is an inherent flaw in the critical sampling of the directional filter banks. The new method mitigates these issues by changing the method of multiscale decomposition. As mentioned before, the original contourlet transform used the Laplacian pyramid for multiscale decomposition. The method proposed by Lu and Do uses a multiscale pyramid that can be adjusted by applying low-pass or high-pass filters at the different levels. This method fixes multiple issues: it reduces the number of cross terms, localizes the basis images in frequency, removes aliasing, and has proven in some instances more effective in denoising images. Though it fixes all of those issues, this method requires more filters than the original contourlet transform and still uses both the up-sampling and down-sampling operations, meaning it is not shift-invariant. Variations of the contourlet transform: Image enhancement based on the nonsubsampled contourlet transform In prior studies the contourlet transform proved effective in the denoising of images, but in this method the researchers developed a method of image enhancement. When enhancing images, the preservation and enhancement of important data is of paramount importance. The contourlet transform meets this criterion to some extent with its ability to denoise and detect edges. The transform first passes the image through the multiscale decomposition by way of the nonsubsampled Laplacian pyramid. After that, the noise variance for each sub-band is calculated, and relative to local statistics of the image each coefficient is classified as noise, a weak edge or a strong edge. The strong edges are retained, the weak edges are enhanced and the noise is discarded. This method of image enhancement significantly outperformed the nonsubsampled wavelet transform (NSWT) both qualitatively and quantitatively.
Though this method outperforms the NSWT, there remains the issue of the complexity of designing adequate filter banks and fine-tuning the filters for specific applications, which will require further study. Applications: image denoising, image enhancement, image restoration and image despeckling.
**Maximum-minimums identity** Maximum-minimums identity: In mathematics, the maximum-minimums identity is a relation between the maximum element of a set $S$ of $n$ numbers and the minima of the $2^n - 1$ non-empty subsets of $S$. Let $S = \{x_1, x_2, \ldots, x_n\}$. The identity states that
$$\max\{x_1,x_2,\ldots,x_n\} = \sum_{i} x_i - \sum_{i<j} \min\{x_i,x_j\} + \sum_{i<j<k} \min\{x_i,x_j,x_k\} - \cdots + (-1)^{n+1} \min\{x_1,x_2,\ldots,x_n\},$$
or conversely
$$\min\{x_1,x_2,\ldots,x_n\} = \sum_{i} x_i - \sum_{i<j} \max\{x_i,x_j\} + \sum_{i<j<k} \max\{x_i,x_j,x_k\} - \cdots + (-1)^{n+1} \max\{x_1,x_2,\ldots,x_n\}.$$
For a probabilistic proof, see the reference.
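The identity is easy to verify numerically. The following small brute-force check (not part of the source article) sums the minima over all non-empty subsets with alternating signs and compares the result to the maximum:

```python
import itertools
import random

def max_via_minimums(xs):
    """Alternating inclusion-exclusion sum of subset minima."""
    total = 0.0
    for k in range(1, len(xs) + 1):
        sign = (-1) ** (k + 1)
        for subset in itertools.combinations(xs, k):
            total += sign * min(subset)
    return total

xs = [random.random() for _ in range(6)]
assert abs(max_via_minimums(xs) - max(xs)) < 1e-9
```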
**Cricopharyngeal spasm** Cricopharyngeal spasm: Cricopharyngeal spasms occur in the cricopharyngeus muscle of the pharynx. Cricopharyngeal spasm is an uncomfortable but harmless and temporary disorder. Signs and symptoms: A sensation of a 'lump' in the back of the throat; the throat feels swollen; discomfort, as the lump can often feel quite big, with occasional pain; symptoms normally worse in the evening; stress aggravating the symptoms; saliva difficult to swallow while food is easy to swallow (eating, in fact, often makes the tightness go away for a time); a 'lump' sensation that comes and goes from day to day; symptoms that can persist for very long periods, often several months. Signs and symptoms: The symptoms can be mimicked by pushing on the cartilage in the neck, just below the Adam's apple. Physiology: There are two sphincters in the oesophagus. They are normally contracted, and they relax when one swallows so that food can pass through them on the way to the stomach. They then squeeze closed again to prevent regurgitation of the stomach contents and to prevent air from entering the digestive system. If this normal contraction becomes a spasm, these symptoms begin. Causes: Causes include stress and anxiety. Other causes are not yet clear. The condition persists in the autonomic nervous system even when the original stress is relieved. Causes: An assumption in psychiatry is that a lack of serotonin can be associated with depression and anxiety. A further assumption is that low levels of serotonin can cause spasms in the cervical area. A plausible explanation for cricopharyngeal spasms is a lack of neurotransmitter preventing the central nervous system from detecting that the oesophagus is closed, so that the upper esophageal sphincter becomes, randomly, hypertonic. Causes: The condition can appear as a symptom of generalized anxiety disorder. Early signs are other symptoms such as difficulty or inability to eat (loss of appetite, satiety after swallowing small quantities), headache, dry mouth at night, sleeping issues, tremor, tension in the neck or throat, abdominal, stomach or chest pain, etc. The sequence can result from a recent stress, panic attack or worry. Causes: The subject progresses to cricopharyngeal spasms when, for instance, eating pasty food requiring more throat clearing, like peanuts, pumpkin seeds and other nuts, becomes painful. Continuous swallowing appears with the spasms, as the brain interprets the feeling as something stuck. The vagus nerve seems to play a role in the underlying condition through neurovegetative hyperactivity or dysautonomia; it innervates the inferior pharyngeal constrictor muscle, where the cricopharyngeal spasms occur. Throat spasms can also appear after an accident or a disease, and may be caused or worsened by GERD. There may be hereditary factors. In the context of long COVID, psychiatrists envisioned a potential relationship with an immune reaction, involving cytokines, that would persist quietly. However, owing to the anxiogenic situation, stress was again present when the symptoms started. Diagnosis: These spasms are frequently misunderstood by the patient to be cancer, due to the 'lump in the throat' feeling (globus pharyngis) that is symptomatic of this syndrome. Diagnosis: All the anatomic examinations can appear normal despite the condition. Throat endoscopy can establish that nothing is stuck and that there is no lesion or inflammation.
A barium swallow can miss that the sphincter is hypertonic if the spasm does not happen during the examination, or if the sphincter still relaxes enough for the food bolus to pass through. Esophageal manometry may not detect any abnormal wave. Diagnosis: The cricopharyngeal spasms (the "feeling that something is stuck") occur in the cricopharyngeal part of the inferior pharyngeal constrictor muscle, at the bottom of the throat. They cause muscle tension on the cricoid cartilage, leading to a globus feeling. Pharyngeal spasms, a more common source of a globus feeling, cause tension on the thyroid cartilage; they move up and down, and left and right, in the pharyngeal muscles. Both may be present. Diagnosis: The patient complains of the signs and symptoms enumerated above. The pain causes dry swallowing, and dry swallowing adds to the pain, triggering a vicious circle. The spasms start after dry swallowing, after meals, or randomly during the day. They can start (and stop) abruptly, or gradually: first the feeling that a small pill is stuck, with friction around it, then the impression that a ball is stuck. When the spasms last long, they can give the impression of a knife stabbed into the throat. Diagnosis: Cricopharyngeal spasms can be formally diagnosed as part of a more general condition. For instance, did the patient recently encounter other symptoms of generalized anxiety disorder? Does the patient have neurovegetative symptoms? Are there symptoms of dysautonomia? Is there evidence of a lack of serotonin, such as sleeplessness (melatonin is generated from serotonin)? Is there any other psychiatric condition? Cricopharyngeal spasms remain a rare symptom. Difficulties for the patient in describing an unusual symptom, and for practitioners in recognizing the condition, can delay a prompt diagnosis. Treatment: The condition is known to be temporary. In some individuals it can disappear by itself without medication. For others, it can stagnate or worsen until appropriate medical care is given. Since the problem can last, medical specialists are not always readily available, and potential treatments act slowly, patience is required. During that time, finding distractions and support is a first help. Attention should be paid to not increasing the levels of stress and anxiety, or falling into depression because of the symptom or its root cause. The medical specialists to consult are an ENT specialist and a psychiatrist: the ENT specialist to perform a throat examination (searching for lesions, inflammation, signs of reflux, nerve issues, sinister causes, etc.), with complementary examinations prescribed as needed; the psychiatrist to assess the root causes, devise an appropriate treatment and follow progress. A cure for the condition exists, and a number of treatments may provide relief. Treatment: Treatments based on medicines Antispasmodic medicines (immediate benefit) Nifedipine, in small doses (2 x 5 mg per day, 10 mg per day in slow release, or as much as the blood pressure allows), can be prescribed in an attempt to provide a first relief, by blocking the esophageal spasms that may be involved and reducing the reflux going up to the throat. Treatment: Muscle relaxants (short-term benefit) Clonazepam (Rivotril), diazepam (Valium), lorazepam (Ativan) and other benzodiazepines relax the muscles in the throat and slow or halt the contractions. (In some people, benzodiazepines taken over the long term may be addictive.)
Anti-depressants (benefit and solution obtained in the medium term) Serotonin reuptake inhibitors (escitalopram, etc.) address the root cause related to a low level of serotonin. They take about six weeks to deliver their first effects. Treatment: Tricyclic anti-depressants (Pamelor, etc.), taken in small doses, have recently shown positive results, according to the Cleveland Clinic. Treatment: Proton-pump inhibitors, or other medicines acting against reflux, if signs of reflux are found, until they disappear. A typical prescribed treatment starts, for instance, with nifedipine (as long as it brings relief), a benzodiazepine (one month maximum) that has a myorelaxant effect and can be chosen to simultaneously address other facets of the problem (anxiety, sleeping issues), and a well-tolerated anti-depressant like escitalopram (long enough that the problem does not come back). Treatment: Treatments based on other factors Stress reduction Take notes of what improves and worsens the symptoms. High stress levels make the spasms more noticeable. Psychologists provide custom tips and tricks. Mindfulness, with professionals or smartphone apps. Breathing exercises such as cardiac coherence. Wellness, spa. Sport. Physiotherapy Neck stretching may provide temporary relief: hands are placed on each clavicle while hyperextending the neck (looking at the ceiling); protracting the jaw with the neck extended stretches the neck; the position is held for 20-30 seconds. Warm fluids Hot fluids, such as herbal tea, may be helpful for some people with cricopharyngeal spasm (or other oesophageal disorders). Other therapies According to a study conducted in the context of long COVID, transcutaneous stimulation of the vagus nerve through the ear reduced long-lasting symptoms of this family (along the course of the vagus nerve). Botox injections may temporarily disable the muscle and provide relief for 3-4 months per injection.
**Number form** Number form: A number form is a mental map of numbers which automatically and involuntarily appears whenever someone who experiences number forms thinks of numbers. Numbers are mapped into distinct spatial locations, and the mapping may differ across individuals. Number forms were first documented and named by Sir Francis Galton in his book The Visions of Sane Persons. Later research has identified them as a type of synesthesia. Neural mechanisms: It has been suggested that number forms are a result of cross-activation between regions of the parietal lobe that are involved in numerical cognition and the angular gyrus, which is involved in spatial cognition. Since the areas that process numerical and spatial representations are close to each other, this proximity may contribute to the increased cross-activation. Synesthetes display larger P3b amplitudes for month cues compared to non-synesthetes, but similar N1 and P3b responses for arrow (← or →) and word (left or right) cues. Reaction time research: Reaction time studies have shown that number-form synesthetes are faster to say which of two numbers is larger when the numbers are arranged in a manner consistent with their number form, suggesting that number forms are automatically evoked. This can be thought of as a spatial Stroop task, in which space is not relevant to the task but can hinder performance despite its irrelevance. The fact that synesthetes cannot ignore the spatial arrangement of the numbers on the screen demonstrates that numbers automatically evoke spatial cues. The reaction times for valid cues are smaller than those for invalid cues (words and arrows), but in synesthetes the response-time differences for months are larger than those of non-synesthetes. Differences with number line: These number forms can be distinguished from non-conscious mental number lines by the fact that they are conscious, idiosyncratic and stable across the person's lifespan. Although this form of synesthesia has not been as intensively studied as grapheme-color synesthesia, Hubbard and colleagues have argued that similar neural mechanisms might be involved, but acting in different brain regions.
**Forbidden subgraph problem** Forbidden subgraph problem: In extremal graph theory, the forbidden subgraph problem is the following problem: given a graph $G$, find the maximal number of edges $\operatorname{ex}(n,G)$ an $n$-vertex graph can have such that it does not have a subgraph isomorphic to $G$. In this context, $G$ is called a forbidden subgraph. An equivalent problem is how many edges in an $n$-vertex graph guarantee that it has a subgraph isomorphic to $G$. Definitions: The extremal number $\operatorname{ex}(n,G)$ is the maximum number of edges in an $n$-vertex graph containing no subgraph isomorphic to $G$. $K_r$ is the complete graph on $r$ vertices. $T(n,r)$ is the Turán graph: a complete $r$-partite graph on $n$ vertices, with vertices distributed between parts as equally as possible. The chromatic number $\chi(G)$ of $G$ is the minimum number of colors needed to color the vertices of $G$ such that no two adjacent vertices have the same color. Upper bounds: Turán's theorem Turán's theorem states that for positive integers $n, r$ satisfying $n \geq r \geq 3$, $\operatorname{ex}(n,K_{r}) = \left(1 - \frac{1}{r-1}\right)\frac{n^{2}}{2}$. This solves the forbidden subgraph problem for $G = K_r$. Equality cases for Turán's theorem come from the Turán graph $T(n, r-1)$. This result can be generalized to arbitrary graphs $G$ by considering the chromatic number $\chi(G)$ of $G$. Note that $T(n,r)$ can be colored with $r$ colors and thus has no subgraphs with chromatic number greater than $r$. In particular, $T(n, \chi(G)-1)$ has no subgraphs isomorphic to $G$. This suggests that the general equality cases for the forbidden subgraph problem may be related to the equality cases for $G = K_r$. This intuition turns out to be correct, up to an $o(n^2)$ error. Upper bounds: Erdős–Stone theorem The Erdős–Stone theorem states that for all positive integers $n$ and all graphs $G$, $\operatorname{ex}(n,G) = \left(1 - \frac{1}{\chi(G)-1} + o(1)\right)\binom{n}{2}$. When $G$ is not bipartite, this gives us a first-order approximation of $\operatorname{ex}(n,G)$. Bipartite graphs For bipartite graphs $G$, the Erdős–Stone theorem only tells us that $\operatorname{ex}(n,G) = o(n^2)$. The forbidden subgraph problem for bipartite graphs is known as the Zarankiewicz problem, and it is unsolved in general. Upper bounds: Progress on the Zarankiewicz problem includes the following theorem: Kővári–Sós–Turán theorem. For every pair of positive integers $s, t$ with $t \geq s \geq 1$, there exists some constant $C$ (independent of $n$) such that $\operatorname{ex}(n,K_{s,t}) \leq C n^{2-\frac{1}{s}}$ for every positive integer $n$. Another result for bipartite graphs is the case of even cycles, $G = C_{2k}$, $k \geq 2$. Even cycles are handled by considering a root vertex and paths branching out from this vertex. If two paths of the same length $k$ have the same endpoint and do not overlap, then they create a cycle of length $2k$. This gives the following theorem. Upper bounds: Theorem (Bondy and Simonovits, 1974). There exists some constant $C$ such that $\operatorname{ex}(n,C_{2k}) \leq C n^{1+\frac{1}{k}}$ for every positive integer $n$ and positive integer $k \geq 2$. A powerful lemma in extremal graph theory is dependent random choice. This lemma allows us to handle bipartite graphs with bounded degree in one part: Theorem (Alon, Krivelevich, and Sudakov, 2003). Let $G$ be a bipartite graph with vertex parts $A$ and $B$ such that every vertex in $A$ has degree at most $r$.
Then there exists a constant $C$ (dependent only on $G$) such that $\operatorname{ex}(n,G) \leq C n^{2-\frac{1}{r}}$ for every positive integer $n$. In general, we have the following conjecture: Rational Exponents Conjecture (Erdős and Simonovits). For any finite family $\mathcal{L}$ of graphs, if there is a bipartite $L \in \mathcal{L}$, then there exists a rational $\alpha \in [0,1)$ such that $\operatorname{ex}(n,\mathcal{L}) = \Theta(n^{1+\alpha})$. A survey by Füredi and Simonovits describes progress on the forbidden subgraph problem in more detail. Lower bounds: There are various techniques used for obtaining the lower bounds. Lower bounds: Probabilistic method While this method mostly gives weak bounds, the theory of random graphs is a rapidly developing subject. It is based on the idea that if we take a graph randomly with a sufficiently small density, the graph will contain only a small number of copies of $G$ inside it. These copies can be removed by deleting one edge from every copy of $G$ in the graph, giving us a $G$-free graph. The probabilistic method can be used to prove $\operatorname{ex}(n,G) \geq c n^{2 - \frac{v(G)-2}{e(G)-1}}$, where $c$ is a constant depending only on the graph $G$. For the construction we can take the Erdős–Rényi random graph $G(n,p)$, that is, the graph with $n$ vertices in which the edge between any two vertices is drawn with probability $p$, independently. After computing the expected number of copies of $G$ in $G(n,p)$ by linearity of expectation, we remove one edge from each such copy of $G$ and are left with a $G$-free graph. The expected number of edges remaining can be found to be $c n^{2 - \frac{v(G)-2}{e(G)-1}}$ for a constant $c$ depending on $G$. Therefore, at least one $n$-vertex graph exists with at least as many edges as the expected number. Lower bounds: This method can also be used to find constructions of a graph for bounds on the girth of the graph. The girth, denoted $g(G)$, is the length of the shortest cycle of the graph. Note that for $g(G) > 2k$, the graph must forbid all cycles with length less than or equal to $2k$. By linearity of expectation, the expected number of such forbidden cycles is equal to the sum of the expected numbers of cycles $C_i$ over each forbidden length $i$. We again remove the edges from each copy of a forbidden cycle and end up with a graph free of shorter cycles, with $g(G) > 2k$, leaving $c_0 n^{1+\frac{1}{2k-1}}$ edges in the remaining graph. Lower bounds: Algebraic constructions For specific cases, improvements have been made by finding algebraic constructions. A common feature of such constructions is the use of geometry to construct a graph, with vertices representing geometric objects and edges defined according to algebraic relations between the vertices. The graph contains no copy of $G$ for purely geometric reasons, while it has a large number of edges, yielding a strong bound, due to the way the incidences are defined. The following proof by Erdős, Rényi, and Sős, establishing the lower bound $\operatorname{ex}(n,K_{2,2}) \geq \left(\frac{1}{2} - o(1)\right) n^{3/2}$, demonstrates the power of this method. Lower bounds: First, suppose that $n = p^2 - 1$ for some prime $p$. Consider the polarity graph $G$ whose vertices are the elements of $\mathbb{F}_p^2 \setminus \{(0,0)\}$, with edges between vertices $(x,y)$ and $(a,b)$ if and only if $ax + by = 1$ in $\mathbb{F}_p$. This graph is $K_{2,2}$-free because a system of two distinct linear equations in $\mathbb{F}_p$ cannot have more than one solution. A vertex $(a,b)$ (assume $b \neq 0$) is connected to $\left(x, \frac{1-ax}{b}\right)$ for every $x \in \mathbb{F}_p$, for a total of at least $p - 1$ edges (subtracting 1 in case $(a,b) = \left(x, \frac{1-ax}{b}\right)$). So there are at least $\frac{1}{2}(p^2-1)(p-1) = \left(\frac{1}{2} - o(1)\right) p^3 = \left(\frac{1}{2} - o(1)\right) n^{3/2}$ edges, as desired.
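The construction is small enough to check by brute force. The following sketch (an illustration, not part of the original proof) builds the polarity graph for the prime $p = 7$ and verifies that no two vertices have two common neighbours, i.e. that the graph is $K_{2,2}$-free:

```python
import itertools

p = 7  # a small prime
vertices = [(x, y) for x in range(p) for y in range(p) if (x, y) != (0, 0)]
adj = {v: set() for v in vertices}
for (x, y), (a, b) in itertools.product(vertices, repeat=2):
    # (x, y) ~ (a, b) iff ax + by = 1 over F_p; self-loops are discarded.
    if (a * x + b * y) % p == 1 and (x, y) != (a, b):
        adj[(x, y)].add((a, b))

edges = sum(len(nbrs) for nbrs in adj.values()) // 2
print(edges)  # roughly p**3 / 2 edges on p**2 - 1 vertices

# K_{2,2}-freeness: two distinct linear equations over F_p share at most
# one solution, so no two vertices can have two common neighbours.
for u, v in itertools.combinations(vertices, 2):
    assert len(adj[u] & adj[v]) <= 1
```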
For general $n$, we can take $p = (1 - o(1))\sqrt{n}$ with $p \leq \sqrt{n+1}$ (which is possible because there exists a prime $p$ in the interval $[k - k^{0.525}, k]$ for sufficiently large $k$) and construct a polarity graph using such $p$, then add $n - p^2 + 1$ isolated vertices, which do not affect the asymptotic value. The following theorem is a similar result for $K_{3,3}$. Theorem (Brown, 1966). $\operatorname{ex}(n,K_{3,3}) \geq \left(\frac{1}{2} - o(1)\right) n^{5/3}$. Lower bounds: Proof outline. As in the previous theorem, we can take $n = p^3$ for a prime $p$ and let the vertices of our graph be the elements of $\mathbb{F}_p^3$. This time, vertices $(a,b,c)$ and $(x,y,z)$ are connected if and only if $(x-a)^2 + (y-b)^2 + (z-c)^2 = u$ in $\mathbb{F}_p$, for some specifically chosen $u$. Then this is $K_{3,3}$-free, since at most two points lie in the intersection of three spheres. Since the value of $(x-a)^2 + (y-b)^2 + (z-c)^2$ is almost uniform across $\mathbb{F}_p$, each point should have around $p^2$ edges, so the total number of edges is $\left(\frac{1}{2} - o(1)\right) p^2 \cdot p^3 = \left(\frac{1}{2} - o(1)\right) n^{5/3}$. However, it remains an open question to tighten the lower bound for $\operatorname{ex}(n,K_{t,t})$ for $t \geq 4$. Theorem (Alon et al., 1999). For $t \geq (s-1)! + 1$, $\operatorname{ex}(n,K_{s,t}) = \Theta\left(n^{2-\frac{1}{s}}\right)$. Lower bounds: Randomized algebraic constructions This technique combines the above two ideas, using random polynomial-type relations to define the incidences between vertices, which lie in some algebraic set. This technique can be used to prove the following theorem. Lower bounds: Theorem. For every $s \geq 2$, there exists some $t$ such that $\operatorname{ex}(n,K_{s,t}) \geq \left(\frac{1}{2} - o(1)\right) n^{2-\frac{1}{s}}$. Proof outline: We take the largest prime power $q$ with $q^s \leq n$. Due to the prime gaps, we have $q = (1 - o(1)) n^{\frac{1}{s}}$. Let $f \in \mathbb{F}_q[x_1, x_2, \cdots, x_s, y_1, y_2, \cdots, y_s]_{\leq d}$ be a random polynomial over $\mathbb{F}_q$ with degree at most $d = s^2$ in $X = (x_1, x_2, \ldots, x_s)$ and in $Y = (y_1, y_2, \ldots, y_s)$, satisfying $f(X,Y) = f(Y,X)$. Let the graph $G$ have the vertex set $\mathbb{F}_q^s$, with two vertices $x, y$ adjacent if $f(x,y) = 0$. We fix a set $U \subset \mathbb{F}_q^s$ and define a set $Z_U$ as the elements of $\mathbb{F}_q^s$ not in $U$ satisfying $f(x,u) = 0$ for all elements $u \in U$. By the Lang–Weil bound, we obtain that for $q$ sufficiently large, we have $|Z_U| \leq C$ or $|Z_U| > \frac{q}{2}$ for some constant $C$. Now, we compute the expected number of sets $U$ such that $Z_U$ has size greater than $C$, and remove a vertex from each such $U$. The resulting graph turns out to be $K_{s,C+1}$-free, and at least one graph exists with at least the expected number of edges of this resulting graph. Supersaturation: Supersaturation refers to a variant of the forbidden subgraph problem, where we consider when some $h$-uniform graph $G$ contains many copies of some forbidden subgraph $H$. Intuitively, one would expect this to happen once $G$ contains significantly more than $\operatorname{ex}(n,H)$ edges. We introduce the Turán density to formalize this notion. Turán density The Turán density of an $h$-uniform graph $H$ is defined to be $\pi(H) = \lim_{n \to \infty} \frac{\operatorname{ex}(n,H)}{\binom{n}{h}}$. Supersaturation: It is true that $\frac{\operatorname{ex}(n,H)}{\binom{n}{h}}$ is in fact positive and monotone decreasing, so the limit must exist. As an example, Turán's theorem gives $\pi(K_{r+1}) = 1 - \frac{1}{r}$, and the Erdős–Stone theorem gives $\pi(H) = 1 - \frac{1}{\chi(H)-1}$. In particular, for bipartite $H$, $\pi(H) = 0$. Determining the Turán density $\pi(H)$ is equivalent to determining $\operatorname{ex}(n,H)$ up to an $o(n^2)$ error. Supersaturation: Supersaturation theorem Consider an $h$-uniform hypergraph $H$ with $v$ vertices. The supersaturation theorem states that for every $\epsilon > 0$, there exists a $\delta > 0$ such that if $G$ is an $h$-uniform graph on $n$ vertices with at least $(\pi(H) + \epsilon)\binom{n}{h}$ edges, for $n$ sufficiently large, then there are at least $\delta n^{v(H)}$ copies of $H$.
Equivalently, we can restate this theorem as follows: if a graph $G$ with $n$ vertices has $o(n^{v(H)})$ copies of $H$, then there are at most $\pi(H)\binom{n}{2} + o(n^2)$ edges in $G$. Applications We may solve various forbidden subgraph problems by considering supersaturation-type problems. We restate and give a proof sketch of the Kővári–Sós–Turán theorem below (a brute-force illustration of the counting step follows at the end of this article): Kővári–Sós–Turán theorem. For every pair of positive integers $s, t$ with $t \geq s \geq 1$, there exists some constant $C$ (independent of $n$) such that $\operatorname{ex}(n,K_{s,t}) \leq C n^{2-\frac{1}{s}}$ for every positive integer $n$. Proof. Let $G$ be a $2$-uniform graph on $n$ vertices, and consider the number of copies of $K_{1,s}$ in $G$. Given a vertex of degree $d$, we get exactly $\binom{d}{s}$ copies of $K_{1,s}$ rooted at this vertex, for a total of $\sum_v \binom{\deg(v)}{s}$ copies. Here, $\binom{k}{s} = 0$ when $0 \leq k < s$. By convexity, there is a total of at least $n\binom{2e(G)/n}{s}$ copies of $K_{1,s}$. Moreover, there are clearly $\binom{n}{s}$ subsets of $s$ vertices, so if there are more than $(t-1)\binom{n}{s}$ copies of $K_{1,s}$, then by the pigeonhole principle there must exist a subset of $s$ vertices which form the set of leaves of at least $t$ of these copies, forming a $K_{s,t}$. Therefore, a $K_{s,t}$ occurs as long as we have $n\binom{2e(G)/n}{s} > (t-1)\binom{n}{s}$. In other words, we have an occurrence if $\frac{e(G)^s}{n^{s-1}} \geq O(n^s)$, which simplifies to $e(G) \geq O\left(n^{2-\frac{1}{s}}\right)$, which is the statement of the theorem. In this proof, we are using the supersaturation method by considering the number of occurrences of a smaller subgraph. Typically, applications of the supersaturation method do not use the supersaturation theorem. Instead, the structure often involves finding a subgraph $H'$ of some forbidden subgraph $H$ and showing that if it appears too many times in $G$, then $H$ must appear in $G$ as well. Other theorems regarding the forbidden subgraph problem can also be solved with supersaturation. Generalizations: The problem may be generalized for a set of forbidden subgraphs $S$: find the maximal number of edges in an $n$-vertex graph which does not have a subgraph isomorphic to any graph from $S$. There are also hypergraph versions of forbidden subgraph problems that are much more difficult. For instance, Turán's problem may be generalized to asking for the largest number of edges in an $n$-vertex 3-uniform hypergraph that contains no tetrahedra. The analog of the Turán construction would be to partition the vertices into almost equal subsets $V_1, V_2, V_3$, and connect vertices $x, y, z$ by a 3-edge if they are all in different $V_i$'s, or if two of them are in $V_i$ and the third is in $V_{i+1}$ (where $V_4 = V_1$). This is tetrahedron-free, and the edge density is $5/9$. However, the best known upper bound is 0.562, using the technique of flag algebras.
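Returning to the Kővári–Sós–Turán counting argument above: the following small numerical check (not from the source) illustrates the double count behind the pigeonhole step, namely that the star count $\sum_v \binom{\deg(v)}{s}$ equals the sum over $s$-sets of their common-neighbour counts, and that exceeding $(t-1)\binom{n}{s}$ forces a copy of $K_{s,t}$.

```python
import itertools
import random
from math import comb

n, s, t = 12, 2, 3
edges = {frozenset(e) for e in itertools.combinations(range(n), 2)
         if random.random() < 0.6}
deg = [sum(1 for e in edges if v in e) for v in range(n)]

# Number of K_{1,s} stars, counted once per root vertex...
stars = sum(comb(d, s) for d in deg)
# ...and once per s-set of leaves, via common-neighbour counts.
common = {S: sum(1 for v in range(n)
                 if v not in S and all(frozenset((v, u)) in edges for u in S))
          for S in itertools.combinations(range(n), s)}
assert stars == sum(common.values())

# Pigeonhole: too many stars force some s-set to have t common
# neighbours, i.e. a copy of K_{s,t}.
if stars > (t - 1) * comb(n, s):
    assert max(common.values()) >= t
```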
**JANUS clinical trial data repository** Janus clinical trial data repository: The Janus clinical trial data repository is a clinical trial data repository (or data warehouse) standard sanctioned by the U.S. Food and Drug Administration (FDA). It was named for the Roman god Janus, who had two faces, one that could see into the past and one that could see into the future. The analogy is that the Janus data repository would enable the FDA and the pharmaceutical industry both to look retrospectively into past clinical trials and to look at one or more current clinical trials (or even future clinical trials, through better enablement of clinical trial design). JANUS clinical trial data repository: The Janus data model is a relational database model, and is based on the SDTM standard in terms of many of its basic concepts, such as the loading and storing of findings, events, interventions and inclusion data. However, Janus itself is a data warehouse independent of any single clinical trials submission standard. For example, Janus can also store pre-clinical (non-human) submission information, in the form of the SEND non-clinical standard. JANUS clinical trial data repository: The goals of Janus are as follows: to create an integrated data platform for most commercial tools for review, analysis and reporting; to reduce the overall cost of existing information gathering and submissions development processes, as well as of review and analysis of information; and to provide a common data model, based on the SDTM standard, to represent the classes of clinical data submitted to regulatory agencies: tabulation datasets, patient profiles, listings, etc. JANUS clinical trial data repository: Further goals are to provide central access to standardized data and common data views across collaborative partners; to support cross-trial analyses for data mining, help detect clinical trends, address clinical hypotheses, and perform more advanced, robust analysis, enabling data from multiple clinical trials to be contrasted and compared to help improve efficacy and safety; to facilitate a more efficient review process and the ability to locate and query data more easily through automated processes and data standards; and to provide a potentially broader data view for all clinical trials with proper security, de-identified patient data, and proper agreements in place to share data.
**TTC3P1** TTC3P1: Tetratricopeptide repeat domain 3 pseudogene 1 (TTC3P1) is a human pseudogene. Aliases for the TTC3P1 gene: GeneCards symbol TTC3P1; Tetratricopeptide Repeat Domain 3 Pseudogene 1; RNF105L; TTC3L; Tetratricopeptide Repeat Domain 3-Like. External IDs for the TTC3P1 gene: HGNC: 23318; NCBI Entrez Gene: 286495; Ensembl: ENSG00000215105. Previous HGNC symbol for the TTC3P1 gene: TTC3L. Previous GeneCards identifier for the TTC3P1 gene: GC0XM074961. Summaries for the TTC3P1 gene: The GeneCards summary notes that TTC3P1 (Tetratricopeptide Repeat Domain 3 Pseudogene 1) is a pseudogene.
**Magnetic shoe closures** Magnetic shoe closures: Magnetic shoe closures are devices that close shoes using two magnetic bits attached to the shoelaces. The closures can be applied to most shoes. Description: Magnetic shoe closure devices close shoes using two magnetic bits attached to the shoelaces, with the bits taking the place of the knot and bow. The closures can be applied to most shoes, as they are not directly attached to the shoe; rather, the consumer ties them to the shoelaces. Reception: A 2016 review in The Denver Post indicated the solution held tight for various activities, though it is perhaps "difficult to maneuver for some physically challenged individuals."
**Yuxiang shredded pork** Yuxiang shredded pork: Yuxiang shredded pork (simplified Chinese: 鱼香肉丝; traditional Chinese: 魚香肉絲; pinyin: yúxiāng ròusī; sometimes translated as fish-flavored pork slices, or more vaguely as shredded pork with garlic sauce) is a common dish in Sichuan cuisine. Yuxiang is one of the main traditional flavors in Sichuan. History: Hostess theory It is said that one day, while cooking dinner, a hostess who had just finished cooking fish poured the fish seasoning into a different pot, in which pork was already being cooked. When her husband came home from work, he was so hungry that he started eating immediately, and he began to impatiently ask his wife how the dish had been made. After his repeated questioning, she finally relayed what had happened, and this unintentional innovation eventually spread. History: War theory Some people say that Yuxiang shredded pork is an innovative dish of modern China, because 1,328 Sichuan-style dishes were included in the "Chengdu Overview" published in 1909, yet Yuxiang shredded pork was not among them. Moreover, the name "Yuxiang shredded pork" is said to have been coined by Chiang Kai-shek's chef during the Anti-Japanese War. Due to the shortage of materials during the war, many ingredients were replaced with cheap ones, but the dishes were sweet, spicy, salty and fresh, so they were called "Yuxiang pork shreds". Characteristics: Yuxiang (sometimes translated as "fish flavor") is made of pao la jiao (泡椒, 'Sichuan pickled chili pepper'), Sichuan seasoning salt, light soy sauce, white sugar, bruised ginger, garlic and green onion, but no fish. This seasoning has nothing to do with fish; instead it imitates the seasoning and method that people in Sichuan use when cooking fish. The seasoning combines salty, sweet, sour, hot, and fresh tastes, making the food more delicious. Preparation: In order to make Yuxiang shredded pork, some raw ingredients are indispensable. Main raw materials: pork, water, vinegar, ginger, garlic, pickled pepper, sugar, salt, oil, cooking wine and soy sauce. Variation: Ingredients As quality of life has increased, people add different ingredients to Yuxiang shredded pork to make it more delicious, such as black fungus and carrots. However, no matter what changes are made to the dish, its major ingredient is always pork. People select pork that is thirty percent fat and seventy percent lean, shredding and frying it, which makes the dish more tender. Variation: Various Yuxiang dishes With the improvement of people's living standards and the growing acceptance of the fish-flavored taste, Yuxiang shredded pork spread to different parts of China. People in different regions have also made different innovations in Yuxiang dishes, according to their eating habits and regional characteristics. Other dishes using Yuxiang have also appeared, for example Yuxiang pork liver, Yuxiang eggplant, and Yuxiang three shreds (pork shreds, tofu shreds and green pepper shreds) (魚香三絲).
**Verbal Behavior** Verbal Behavior: Verbal Behavior is a 1957 book by psychologist B. F. Skinner, in which he describes what he calls verbal behavior, or what was traditionally called linguistics. Skinner's work describes the controlling elements of verbal behavior with terminology invented for the analysis - echoics, mands, tacts, autoclitics and others - as well as carefully defined uses of ordinary terms such as audience. Origins: Verbal Behavior was an outgrowth of a series of lectures first presented at the University of Minnesota in the early 1940s and developed further in his summer lectures at Columbia and the William James lectures at Harvard in the decade before the book's publication. Research: Skinner's analysis of verbal behavior drew heavily on methods of literary analysis. This tradition has continued. The book Verbal Behavior is almost entirely theoretical, involving little experimental research in the work itself. Many research papers and applied extensions based on Verbal Behavior have appeared since its publication. Functional analysis: Skinner's Verbal Behavior also introduced the autoclitic and six elementary operants: mand, tact, audience relation, echoic, textual, and intraverbal. For Skinner, the proper object of study is behavior itself, analyzed without reference to hypothetical (mental) structures, but rather with reference to the functional relationships of the behavior in the environment in which it occurs. This analysis extends Ernst Mach's pragmatic inductive position in physics, and extends even further a disinclination towards hypothesis-making and testing. Verbal Behavior is divided into 5 parts with 19 chapters. The first chapter sets the stage for this work, a functional analysis of verbal behavior. Skinner presents verbal behavior as a function of controlling consequences and stimuli, not as the product of a special inherent capacity. Neither does he ask us to be satisfied with simply describing the structure, or patterns, of behavior. Skinner deals with some alternative, traditional formulations, and moves on to his own functional position. General problems: In ascertaining the strength of a response, Skinner suggests some criteria for strength (probability): emission, energy level, speed, and repetition. He notes that these are all very limited means for inferring the strength of a response, as they do not always vary together and may come under the control of other factors. Emission is a yes/no measure; however, the other three (energy level, speed, repetition) comprise possible indications of relative strength. General problems: Emission - If a response is emitted, it may tend to be interpreted as having some strength. Unusual or difficult conditions tend to lend evidence to the inference of strength; under typical conditions, emission becomes a less compelling basis for inferring strength. This inference is either there or not, and has no gradation of value. General problems: Energy level - Unlike emission, energy level (response magnitude) provides a basis for inferring a strength that can vary over a wide range. Energy level is a basis from which we can infer a high tendency to respond. An energetic and strong "Water!" forms the basis for inferring the strength of the response, as opposed to a weak, brief "Water". General problems: Speed - Speed is the speed of the response itself, or the latency from the time at which it could have occurred to the time at which it occurs.
A response given quickly when prompted forms the basis for inferring high strength. Repetition - "Water! Water! Water!" may be emitted and used as an indication of relative strength compared to the speedy and/or energetic emission of "Water!". In this way repetition can be used to infer strength. Mands: Chapter Three of Skinner's work Verbal Behavior discusses a functional relationship called the mand. A mand is verbal behavior under the functional control of satiation or deprivation (that is, motivating operations), followed by characteristic reinforcement often specified by the response. A mand is typically a demand, command, or request. The mand is often said to "describe its own reinforcer", although this is not always the case, especially as Skinner's definition of verbal behavior does not require that mands be vocal. A loud knock at the door may be a mand "open the door", and a servant may be called by a hand clap as much as a child might "ask for milk". Mands: A study by Lamarre & Holland (1985) on mands demonstrated the role of motivating operations. The authors contrived motivating operations for objects by training behavior chains that could not be completed without certain objects. The participants learned to mand for these missing objects, which they had previously only been able to tact. Behavior under the control of verbal stimuli: Textual In Chapter Four Skinner notes forms of control by verbal stimuli. One form is textual behavior, which refers to the type of behavior we might typically call reading or writing: a vocal response is controlled by a written verbal stimulus that is not heard, so two different modalities are involved ("reading"). If the two modalities are the same, the behavior becomes "copying text" (see Jack Michael on copying text); if the stimulus is heard and then written, it becomes "taking dictation", and so on. Behavior under the control of verbal stimuli: Echoic Skinner was one of the first to seriously consider the role of imitation in language learning. He introduced this concept into his book Verbal Behavior with the concept of the echoic: a behavior under the functional control of a verbal stimulus, where the verbal response and the verbal stimulus share what is called point-to-point correspondence (a formal similarity). The speaker repeats what is said. In echoic behavior, the stimulus is auditory and the response is vocal. It is often seen in early shaping behavior. For example, in teaching a new language, a teacher might say "parsimonious" and then say "can you say it?" to induce an echoic response. Winokur (1978) is one example of research on echoic relations. Tacts: Chapter Five of Verbal Behavior discusses the tact in depth. A tact is said to "make contact with" the world, and refers to behavior that is under the functional control of a non-verbal stimulus and generalized conditioned reinforcement. The controlling stimulus is nonverbal, "the whole of the physical environment". In linguistic terms, the tact might be regarded as "expressive labelling". The tact is the most useful form of verbal behavior to the listener, as it extends the listener's contact with the environment; to the speaker, it is useful in that it allows contact with tangible reinforcement. Tacts can undergo many extensions: generic, metaphoric, metonymical, solecistic, nomination, and "guessing". They can also be involved in abstraction. Lowe, Horne, Harris & Randle (2002) is one example of recent work on tacts.
Intraverbal: Intraverbals are verbal behavior under the control of other verbal behavior. Intraverbals are often studied by the use of classic association techniques. Audiences: Audience control is developed through long histories of reinforcement and punishment. Skinner's three-term contingency can be used to analyze how this works: the first term, the antecedent, refers to the audience, in whose presence the verbal response (the second term) occurs. The consequences of the response are the third term, and whether or not those consequences strengthen or weaken the response will affect whether that response will occur again in the presence of that audience. Through this process, audience control, or the probability that certain responses will occur in the presence of certain audiences, develops. Skinner notes that while audience control is developed due to histories with certain audiences, we do not have to have a long history with every listener in order to effectively engage in verbal behavior in their presence (p. 176). We can respond to new audiences (new stimuli) as we would to similar audiences with whom we have a history. Audiences: Negative audiences An audience that has punished certain kinds of verbal behavior is called a negative audience (p. 178): in the presence of this audience, the punished verbal behavior is less likely to occur. Skinner gives the examples of adults punishing certain verbal behavior of children, and a king punishing the verbal behavior of his subjects. Summary of verbal operants: The elementary verbal operants in the analysis of verbal behavior can be summarized by their controlling variables, as described in the preceding chapters: the mand (controlled by a motivating operation and followed by its characteristic reinforcer), the tact (controlled by a nonverbal stimulus and followed by generalized conditioned reinforcement), the echoic (controlled by a verbal stimulus with point-to-point correspondence and formal similarity), the textual (controlled by a written verbal stimulus), the intraverbal (controlled by other verbal behavior without point-to-point correspondence), and the audience relation. Verbal operants as a unit of analysis: Skinner notes his categories of verbal behavior: mand, textual, intraverbal, tact, audience relations, and notes how behavior might be classified. He notes that form alone is not sufficient (he uses the example of "fire!" having multiple possible relationships depending on the circumstances). Classification depends on knowing the circumstances under which the behavior is emitted. Skinner then notes that the "same response" may be emitted under different operant conditions. Skinner states: "Classification is not an end in itself. Even though any instance of verbal behavior can be shown to be a function of variables in one or more of these classes, there are other aspects to be treated. Such a formulation permits us to apply to verbal behavior concepts and laws which emerge from a more general analysis" (p. 187). Verbal operants as a unit of analysis: That is, classification alone does little to further the analysis; the functional relations controlling the operants outlined must be analyzed consistently with the general approach of a scientific analysis of behavior. Multiple causation: Skinner notes in this chapter that any given response is likely to be the result of multiple variables, and that any given variable usually affects multiple responses. The issue of multiple audiences is also addressed, as each audience is, as already noted, an occasion for strong and successful responding. Combining audiences produces differing tendencies to respond. Supplementary stimulation: Supplementary stimulation is a discussion of practical matters of controlling verbal behavior, given the context of the material presented thus far. Issues of multiple control, involving many of the elementary operants described in previous chapters, are discussed.
New combinations of fragmentary responses: A special case in which multiple causation creates new verbal forms is what Skinner describes as fragmentary responses. Such combinations are typically vocal, although this may be due to different conditions of self-editing rather than any special property. Such mutations may be "nonsense" and may not further the verbal interchange in which they occur. Freudian slips may be one special case of fragmentary responses which tend to be given reinforcement and may discourage self-editing. This phenomenon appears to be more common in children, and in adults learning a second language. Fatigue, illness and insobriety also tend to produce fragmentary responding. Autoclitics: An autoclitic is a form of verbal behavior which modifies the functions of other forms of verbal behavior. For example, "I think it is raining" contains the autoclitic "I think", which moderates the strength of the statement "it is raining". An example of research involving autoclitics is Lodhi & Greer (1989). Self-strengthening: Here Skinner draws a parallel to his position on self-control and notes: "A person controls his own behavior, verbal or otherwise, as he controls the behavior of others." Appropriate verbal behavior may be weak, as in forgetting a name, and in need of strengthening; it may have been inadequately learned, as in a foreign language; or it may need to be produced on occasion, as in repeating a formula or reciting a poem. The techniques for strengthening include manipulating stimuli, changing the level of editing, the mechanical production of verbal behavior, changing motivational and emotional variables, incubation, and so on. Skinner gives an example of the use of some of these techniques provided by an author. Logical and scientific: The special audience in this case is one concerned with "successful action". Special methods of stimulus control are encouraged that allow for maximum effectiveness. Skinner notes that "graphs, models, tables" are forms of texts that allow for this kind of development. The logical and scientific community also sharpens responses to assure accuracy and avoid distortion. Little progress in the area of science has been made from a verbal behavior perspective; however, suggestions for a research agenda have been laid out. Tacting private events: Private events are events accessible only to the speaker. Public events are events that occur outside of an organism's skin and are observed by more than one individual. A headache is an example of a private event, and a car accident is an example of a public event. Tacting private events: The tacting of private events by an organism is shaped by the verbal community, which differentially reinforces a variety of behaviors and responses to the private events that occur (Catania, 2007, p. 9). For example, if a child verbally states "a circle" when a circle is in the immediate environment, it may be a tact. If a child verbally states "I have a toothache", he or she may be tacting a private event, where the stimulus is present to the speaker but not to the rest of the verbal community. Tacting private events: The verbal community shapes the original development and the maintenance or discontinuation of tacts for private events (Catania, 2007, p. 232). An organism responds similarly to both private stimuli and public stimuli (Skinner, 1957, p. 130). However, it is harder for the verbal community to shape the verbal behavior associated with private events (Catania, 2007, p. 403).
It may be more difficult to shape tacts of private events, but critical things occur within an organism's skin that should not be excluded from our understanding of verbal behavior (Catania, 2007, p. 9). Tacting private events: Several concerns are associated with tacting private events. Skinner (1957) acknowledged two major dilemmas. First, he acknowledges our difficulty with predicting and controlling the stimuli associated with tacting private events (p. 130). Catania (2007) describes this as the unavailability of the stimulus to the members of the verbal community (p. 253). The second problem Skinner (1957) describes is our current inability to understand how the verbal behavior associated with private events develops (p. 131). Tacting private events: Skinner (1957) goes on to describe four potential ways a verbal community can encourage verbal behavior even without access to the stimuli of the speaker. He suggests the most frequent method is via "a common public accompaniment": for example, when a child falls and starts bleeding, the caregiver says something like "you got hurt". Another method relies on a "collateral response" associated with the private stimulus: for example, when a child comes running, crying and holding their hands over their knee, the caregiver might say "you got hurt". The third way is when the verbal community provides reinforcement contingent on overt behavior and the organism generalizes this to the private event that is occurring; Skinner refers to this as "metaphorical or metonymical extension". The final method Skinner suggests may help form such verbal behavior is when the behavior initially occurs at a low, observable level and then recedes into a private event (Skinner, 1957, p. 134). This notion can be summarized by understanding that the verbal behavior of private events can be shaped by the verbal community through extending the language of tacts (Catania, 2007, p. 263). Tacting private events: Private events are limited and should not serve as "explanations of behavior" (Skinner, 1957, p. 254). Skinner (1957) further cautions that "the language of private events can easily distract us from the public causes of behavior" (see functions of behavior). Chomsky's review and replies: In 1959, Noam Chomsky published an influential critique of Verbal Behavior. Chomsky pointed out that children acquire their first language without being explicitly or overtly "taught" in a way that would be consistent with behaviorist theory (see Language acquisition and Poverty of the stimulus), and that Skinner's theories of "operants" and behavioral reinforcements are not able to account for the fact that people can speak and understand sentences that they have never heard before. Chomsky's review and replies: According to Frederick J. Newmeyer: Chomsky's review has come to be regarded as one of the foundational documents of the discipline of cognitive psychology, and even after the passage of twenty-five years it is considered the most important refutation of behaviorism. Of all his writings, it was the Skinner review which contributed most to spreading his reputation beyond the small circle of professional linguists. Chomsky's review and replies: Chomsky's 1959 review, amongst his other work of the period, is generally thought to have been influential in the decline of behaviorism's influence within linguistics, philosophy and cognitive science.
One reply to it was Kenneth MacCorquodale's 1970 paper On Chomsky's Review of Skinner's Verbal Behavior. MacCorquodale argued that Chomsky did not possess an adequate understanding of either behavioral psychology in general or the differences between Skinner's behaviorism and other varieties. As a consequence, he argued, Chomsky made several serious errors of logic. On account of these problems, MacCorquodale maintains that the review failed to demonstrate what it has often been cited as demonstrating, implying that those most influenced by Chomsky's paper probably already substantially agreed with him. Chomsky's review has further been argued to misrepresent the work of Skinner and others, including by taking quotes out of context. Chomsky has maintained that the review was directed at the way Skinner's variant of behavioral psychology "was being used in Quinean empiricism and naturalization of philosophy". Current research: Current research in verbal behavior is published in The Analysis of Verbal Behavior (TAVB) and in other behavior-analytic journals such as the Journal of the Experimental Analysis of Behavior (JEAB) and the Journal of Applied Behavior Analysis (JABA). Research is also presented in poster sessions and at conferences, such as regional behavior analysis conventions or Association for Behavior Analysis (ABA) conventions nationally and internationally. There is also a Verbal Behavior Special Interest Group (SIG) of the Association for Behavior Analysis (ABA), which has a mailing list. The Journal of Early and Intensive Behavior Intervention and the Journal of Speech-Language Pathology and Applied Behavior Analysis both publish clinical articles on interventions based on verbal behavior. Current research: Skinner argued that his account of verbal behavior might have a strong evolutionary parallel. In his essay Selection by Consequences, he argued that operant conditioning is part of a three-level process involving genetic evolution, cultural evolution and operant conditioning; all three processes, he argued, are examples of parallel processes of selection by consequences. David L. Hull, Rodney E. Langman and Sigrid S. Glenn have developed this parallel in detail. This topic continues to be a focus for behavior analysts. Behavior analysts have been developing ideas based on Verbal Behavior for fifty years and, despite this, still experience difficulty explaining generative verbal behavior.
**NGLY1 deficiency** NGLY1 deficiency: NGLY1 deficiency is a very rare genetic disorder caused by biallelic pathogenic variants in NGLY1. It is an autosomal recessive disorder. Errors in deglycosylation are responsible for the symptoms of this condition. Clinically, most affected individuals display developmental delay, lack of tears, elevated liver transaminases and a movement disorder. NGLY1 deficiency is difficult to diagnose, and most individuals have been identified by exome sequencing. NGLY1 deficiency: NGLY1 deficiency causes a dysfunction in the endoplasmic reticulum-associated degradation pathway. NGLY1 encodes an enzyme, N-glycanase 1, that cleaves N-glycans. Without N-glycanase, N-glycosylated proteins that are misfolded in the endoplasmic reticulum cannot be degraded, and thus accumulate in the cytoplasm of cells. Signs and symptoms: Four common findings have been identified in a majority of patients: developmental delay or intellectual disability of varying degrees, lack of or greatly reduced tears, elevated liver transaminases, and a complex movement disorder. The elevated liver enzymes often resolve in childhood. In addition, approximately 50% of patients described with NGLY1 deficiency have seizures, which can vary in how difficult they are to control. Other symptoms that have been reported in affected individuals include sleep apnea, constipation, scoliosis, oral motor defects, auditory neuropathy and peripheral neuropathy. Diagnosis: NGLY1 deficiency can be suspected based on clinical findings; however, confirmation of the diagnosis requires the identification of biallelic pathogenic variants in NGLY1 through genetic testing. Traditional screening tests utilized for congenital disorders of glycosylation, including carbohydrate-deficient transferrin, are not diagnostic in NGLY1 deficiency. To date, all variants identified as being causative of NGLY1 deficiency have been sequence variants, rather than copy number variants. This spectrum may change as additional cases are identified. A common nonsense variant (c.1201A>T (p.Arg401Ter)) accounts for approximately a third of pathogenic variants identified, and is associated with a more severe clinical course. There is also a biomarker for NGLY1 deficiency. When the NGLY1 protein is missing or not functioning correctly, a specific molecule called GlcNAc-Asn (GNA) accumulates. GNA is elevated in individuals with NGLY1 deficiency compared to individuals without the disease. Elevated GNA alone is not enough to confirm an NGLY1 deficiency diagnosis, but in combination with molecular genetic testing and clinical findings, it can provide additional support for NGLY1 deficiency. Treatment: There is no cure for NGLY1 deficiency. Supportive care is indicated for each patient based on their specific symptoms, and can include eye drops to manage the alacrima, pharmaceutical management of seizures, feeding therapy and physical therapy. Most potential treatment options for NGLY1 deficiency are in the pre-clinical stages. These include enzyme replacement therapy, as well as ENGase inhibitors. Epidemiology: Forty-seven patients with confirmed NGLY1 deficiency have been reported in the medical literature. Patient advocacy groups have reported that approximately 100 patients have been identified. Currently, the majority of individuals reported with NGLY1 deficiency are of northern European descent; however, this likely reflects an ascertainment bias in these early stages of the disorder.
Affected individuals with African and Hispanic backgrounds have been identified. History: The first cases of NGLY1 deficiency were described in 2012. NGLY1 deficiency has received a large amount of attention, despite its rarity, due to the children of two media-savvy families being afflicted. The Wilseys, descendants of Dede Wilsey, founded the Grace Science Foundation in honor of their daughter, while Matt Might and his wife founded the Bertrand Might Research Fund. These foundations have contributed millions of dollars to research efforts.
**Hamburg Notation System** Hamburg Notation System: The Hamburg Sign Language Notation System, or HamNoSys, is a transcription system for all sign languages (not only for American Sign Language), with a direct correspondence between symbols and gesture aspects, such as hand location, shape and movement. It was developed in 1985 at the University of Hamburg, Germany. As of 2020, it is in its fourth revision. Though it has roots in Stokoe notation, HamNoSys is not tied to any specific national fingerspelling system, and as such is intended for a wider range of applications than Stokoe, which was designed specifically for ASL and only later adapted to other sign languages. Unlike SignWriting and the Stokoe system, it is not intended as a practical writing system; in that regard it is more like the International Phonetic Alphabet. Both systems are meant for use by linguists, and include details, such as allophones, that are not relevant to those actually using the language. HamNoSys is not encoded in Unicode. Computer processing is made possible by a HamNoSysUnicode.ttf font, which uses Private Use Area characters.
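Because HamNoSys symbols are served from Unicode's Private Use Area rather than standardized codepoints, software handling HamNoSys text must treat those codepoints specially. A minimal sketch in Python; the sample codepoint is purely illustrative, since actual symbol assignments are specific to the HamNoSysUnicode.ttf font:

```python
def is_private_use(ch: str) -> bool:
    """True if ch lies in the Basic Multilingual Plane's Private Use
    Area (U+E000..U+F8FF), the range used by fonts such as
    HamNoSysUnicode.ttf for their symbol glyphs."""
    return 0xE000 <= ord(ch) <= 0xF8FF

# Illustrative only: one hypothetical PUA codepoint mixed with ASCII.
sample = "gloss \ue08a"
print([hex(ord(c)) for c in sample if is_private_use(c)])  # ['0xe08a']
```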
**MonoGame** MonoGame: MonoGame is a free and open source C# framework used by game developers to make games for multiple platforms and other systems. It is also used to make Windows and Windows Phone games run on other systems. It supports iOS, Android, macOS, tvOS, Linux, PlayStation 4, PlayStation Vita, Xbox One and Nintendo Switch. It implements the Microsoft XNA 4 application programming interface (API). It has been used for several games, including Bastion and Fez. History: MonoGame is a derivative of XNA Touch (September 2009) started by Jose Antonio Farias and Silver Sprite by Bill Reiss. The first official release of MonoGame was version 2.0 with a downloadable version 0.7 that was available from CodePlex. These early versions only supported 2D sprite-based games. The last official 2D-only version was released as 2.5.1 in June 2012. Since mid-2013, the framework has begun to be extended beyond XNA4 with the addition of new features like RenderTarget3D, support for multiple GameWindows, and a new cross-platform command line content building tool. Architecture: MonoGame attempts to fully implement the XNA 4 API. It accomplishes this across Microsoft platforms using SharpDX and DirectX. When targeting non-Microsoft platforms, platform specific capabilities are utilized by way of the OpenTK library. When targeting OS X, iOS, and/or Android, the Xamarin platform runtime is necessary. This runtime provides a tuned OpenTK implementation that allows the MonoGame team to focus on the core graphics tuning of the platform. Architecture: The graphics capabilities of MonoGame come from either OpenGL, OpenGL ES, or DirectX. Since MonoGame version 3, OpenGL 2 has been the focus for capabilities. The earlier releases of MonoGame (2.5) used OpenGL 1.x for graphics rendering. Utilizing OpenGL 2 allowed for MonoGame to support shaders to make more advanced rendering capabilities in the platform. Content management and distribution continues to follow the XNA 4 ContentManager model. The MonoGame team has created a new content building capability that can integrate with Microsoft Visual Studio to deliver the same content building capabilities to Windows 8 Desktop that Windows 7 users had used in Microsoft XNA.
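MonoGame reproduces XNA 4's Game lifecycle: Initialize and LoadContent run once, followed by a timed loop of Update and Draw calls. Since the pattern itself is language-agnostic, here is a minimal sketch of that fixed-timestep loop in Python (class and method names mirror the XNA API for illustration only; MonoGame itself is a C# framework, and the timing is deliberately simplified):

```python
import time

class Game:
    """Skeleton of the XNA 4-style lifecycle that MonoGame implements.
    Method names mirror the XNA API; timing is deliberately simplified."""

    TARGET_STEP = 1 / 60  # XNA's default fixed timestep is 60 Hz

    def __init__(self):
        self.running = True

    def initialize(self): pass    # set up non-graphics state
    def load_content(self): pass  # load textures, sounds, models
    def update(self, dt): pass    # advance game logic by dt seconds
    def draw(self): pass          # render one frame

    def run(self):
        self.initialize()
        self.load_content()
        previous, lag = time.monotonic(), 0.0
        while self.running:        # set self.running = False to exit
            now = time.monotonic()
            lag += now - previous
            previous = now
            while lag >= self.TARGET_STEP:   # catch up in fixed steps
                self.update(self.TARGET_STEP)
                lag -= self.TARGET_STEP
            self.draw()            # draw as often as possible
```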
**Miniature book** Miniature book: A miniature book is a very small book. Standards for what may be termed a miniature rather than just a small book have changed through time. Today, most collectors consider a book to be miniature only if it is 3 inches or smaller in height, width, and thickness, particularly in the United States. Many collectors consider nineteenth-century and earlier books of 4 inches to fit in the category of miniatures. Books from 3–4 inches in all dimensions are termed macrominiature books. Books less than 1 inch in all dimensions are called microminiature books. Books less than 1/4 inch in all dimensions are known as ultra-microminiature books. History: Miniature books stretch back far in history; many collections contain cuneiform tablets thousands of years old, and exquisite medieval Books of Hours. Printers began testing the limits of size not long after the technology of printing began, and around 200 miniature books were printed in the sixteenth century. Exquisite specimens from the 17th century abound. In the 19th century, technological innovations in printing enabled the creation of smaller and smaller type. Fine and popular editions alike grew in number throughout the 19th century in what was considered the golden age for miniature books. While some miniature books are objects of high craft, bound in fine Moroccan leather, with gilt decoration and excellent examples of woodcuts, etchings, and watermarks, others are cheap, disposable, sometimes highly functional items not expected to survive. Today, miniature books are produced both as fine works of craft and as commercial products found in chain bookstores. History: Miniature books were produced for personal convenience. Miniature books could easily be carried in the pocket of a waistcoat or a woman's reticule. Victorian women used miniature etiquette books to subtly ascertain information on polite behavior in society. Along with etiquette books, Victorian women who had copies of The Little Flirt learned to attract men by using items already in their possession, such as gloves, handkerchiefs, a fan, and a parasol. In 1922, miniature books regained popularity when 200 postage-stamp-sized books were created to be displayed in the miniature library of Queen Mary's dollhouse. Princess Marie Louise, a relative of Queen Mary, also requested that living authors contribute to the existing dollhouse library. Following in Queen Mary's footsteps, many miniature book collectors began collecting miniatures for their dollhouse libraries. A miniature book has even been to the moon. In 1969, astronaut "Buzz" Aldrin had a miniature book in his possession during a flight to the moon. It was an autobiography of Robert Hutchings Goddard, who invented the first liquid-propellant rocket that made space flight possible. Some popular types of miniature books from various periods include Bibles, encyclopedias, dictionaries, bilingual dictionaries, short stories, verse, famous speeches, political propaganda, travel guides, almanacs, children's stories, and the miniaturization of well-known books such as The Compleat Angler, The Art of War, and Sherlock Holmes stories. Part of the appeal of miniature books was being able to hold the works of prominent writers, such as William Shakespeare, in one's hands. Notable miniatures: Abraham Lincoln, Proclamation of Emancipation (Boston: John Murray Forbes, 1863). This miniature edition was the first of this text. It is estimated that a million copies were distributed to Union troops.
Notable miniatures: Miniature editions of works not originally published in miniature form Diamond Classics - published in London by William Pickering, from 1819 Liliput-Bibliothek - published in Leipzig by Schmidt & Günther from ca. 1909 Bibliothèque miniature - published in Paris by Payot from ca. 1918 Collection Bijou - published by Editions Nelson in Paris from ca. 1920 Miniature Constitution of Ukraine Thumb Bibles "Smallest book in the world" Many books have claimed the title of smallest book in the world at the time of their publication. The title can apply to a variety of accomplishments: smallest overall size, smallest book with movable type, smallest printed book, smallest book legible to the naked eye, and so on. Notable miniatures: 750: Hyakumantō Darani or 'One Million Pagoda Dharani'. Also one of the earliest known printed texts, these 2-3/8" tall Buddhist charms were printed, rolled into a scroll, placed in miniature white pagodas, and distributed to Buddhist temples. A million were printed at the command of Japanese Empress Shōtoku. 1674: Bloem-Hofje (Amsterdam: Benedict Schmidt, 1674). For more than two centuries, this remained the smallest book printed with movable type. Notable miniatures: 1878: Dante, Divina Commedia (Milan: Gnocchi, 1878). 500 pages. 5 cm x 3.5 cm. Typeset and printed by the Salmin Brothers of Padua. 1897: Galileo Galilei, Galileo a Madama Cristina di Lorena (Padua: dei Fratelli Salmin, 1897). 150 pages. This remains to this day the smallest book set from movable type. 1900: Edward Fitzgerald, trans., The Rubaiyat of Omar Khayyam (Cleveland: Charles H. Meigs, 1900). Notable miniatures: 1932: The Rose Garden of Omar Khayyam. 1985: Old King Cole (Paisley: Gleniffer Press, 1985). Height: 0.9 mm. For 20 years this was the "smallest book in the world printed using offset lithography". 2001: New Testament (King James version) (Cambridge: M.I.T., 2001). 5 x 5 mm. 2002: Anton Chekhov, Chameleon (Omsk, Siberia: Anatoly Konenko, 1996). 0.9 mm x 0.9 mm. Notable miniatures: 2006: ABC books in Russian and Roman characters (Omsk, Siberia: Anatoly Konenko, 1996). 0.8 mm x 0.8 mm. 2007: Teeny Ted from Turnip Town (category: world's smallest reproduction of a printed book. Single sheet, not codex format.) 0.07 x 0.10 mm. 2016: Vladimir Aniskin, [Untitled] (Russia: Vladimir Aniskin, 2016). "The micro-book consists of several pages, each measuring only very tiny fractions of a millimeter: the precise size of the pages is 70 by 90 micrometers or 0.07 by 0.09 millimeters—too small to be read by the naked human eye. Made by gluing white paint to extremely thin film, the pages are hung from a tiny ring binder that allows them to be turned. The whole construction rests on a horizontal sliver of a poppy seed." Charms, talismans, and amulets In 2007, archaeologists found a miniature Bible (Glasgow: David Bryce & Son, 1901) tucked into a child's boot hidden in a chimney cavity in an English cottage in Ewerby, Lincolnshire. Shoes were placed in such locations as early as the fourteenth century as anti-witchcraft devices known as "spirit traps". Publishing, printing, and binding in miniature: The creation of a miniature book requires exceptional skill in all aspects of book production, because elements such as bindings, pages, type, illustrations, and subject matter all need to be approached with a new set of problems in mind. For instance, the pages of a miniature book do not fall open as do those of larger books, because the pages are not heavy enough.
Bindings require exceptionally thin materials, and creating type that is readable and beautiful requires great skill. Many printers have created miniature books to test their own technical limits or to show off their skill. Many books have claimed the sought-after title of "smallest book in the world," which is now held by experiments in nanoprinting. Publishing, printing, and binding in miniature: Publishers Good Book Press, Santa Cruz, California Dawson's Book Shop, Los Angeles, California Gloria Stuart, the film actress, published numerous miniature books as collaborations with significant printers The Smallest Books in the World, Peru Miniboox, German publisher of miniature books Achille St. Onge Commercial publishers HarperCollins, Collins Gem Books division. Oxford University Press published many miniature religious books and children's books in the late 19th and early 20th century. Running Press, known for miniature books marketed as impulse buys in bookstore checkout lines. Sanrio, known for tiny blank books in its Hello Kitty, Little Twin Stars, and other lines starting in the late 1960s and early 1970s. Binders Sangorski & Sutcliffe Jan Sobota Artists, designers, typesetters and illustrators Margaret Hicks Collections: Library collections The largest collection of miniature books in the United States is held by the Lilly Library at Indiana University Bloomington. Donated by collector Ruth E. Adomeit, it numbers more than 16,000 items. Second in size is the McGehee Miniature Book Collection of more than 15,000 items, at the Albert and Shirley Small Special Collections Library at the University of Virginia. The collection was donated by collector Caroline Yarnall McGehee Lindemann Brandt, a charter member of the Miniature Book Society. In 2020, Jozsef Tari donated his collection of 5,700 miniature books to the Jókai Mór City Library of Pápa in Hungary. The University of Iowa Special Collections and University Archives holds a collection of 4,000 miniatures donated by collector Charlotte M. Smith, which it features on Tumblr. The Library of Congress miniature book collection consists of 1,596 books that are ten centimeters or less in height. The Library of Congress offers digitized materials from the miniature collection, including many editions from the 19th century. Rutgers University Library holds some 1,500 volumes in the Alden Jacobs Collection. Washington University in St. Louis holds a significant collection, some on view in a permanent exhibition space, donated by Julian and Hope Edison. One of the most visited collections at the University of North Texas Library in Denton, Texas, is its miniature book collection, which contains around 3,000 items. A few items in the collection were at one time considered the smallest in the world. The Jewish Public Library in Montreal hosts the Lilly Toth Miniature Book Collection, a collection of 1,119 books donated by Hungarian Holocaust survivor Lilly Toth (1925–2021). Museum collections The Morgan Library & Museum houses more than 8,000 miniature books. Collections: Queen Mary's Dolls' House at Windsor Castle in Great Britain contains a miniature library of 200 books created expressly for the collection in the 1920s at a 1:12 scale. Along with reference volumes, a Bible and the Quran, the library includes works—some written expressly for the collection—by prominent authors of the day such as Arthur Conan Doyle, Thomas Hardy, and Vita Sackville-West.
The books were bound by Sangorski & Sutcliffe, and contained miniature bookplates illustrated by E. H. Shepard. Collections: The Baku Museum of Miniature Books in Azerbaijan is the only museum dedicated solely to miniature books. The Museum Meermanno in The Hague, Netherlands, contains a significant miniature collection on permanent display. Private collections Prominent historical figures who collected miniature books include President Franklin D. Roosevelt and retailer Stanley Marcus.
**Antitragus piercing** Antitragus piercing: An antitragus piercing is a perforation of the outer ear cartilage for the purpose of inserting and wearing a piece of jewelry. It is placed in the antitragus, a piece of cartilage opposite the ear canal. Overall, the piercing has characteristics similar to the tragus piercing; the piercings are performed and cared for in much the same way. Healing: This piercing, like most cartilage piercings, can take anywhere from 8 to 16 months to fully heal. The piercing should be cleaned daily until healed. The most common way to clean it is with sea salt water or another saline solution. The jewelry should not be changed until the piercing has healed.
**Microsoft Chrome** Microsoft Chrome: Microsoft's Chrome was the code name for a set of APIs that allowed DirectX to be easily accessed from user-space software, including HTML. Launched with some fanfare in early 1998, Chrome, and the related Chromeffects, was re-positioned several times before being canceled only a few months later in a corporate reorganization. Throughout its brief lifespan, the product was widely derided as an example of Microsoft's embrace, extend and extinguish strategy of ruining standards efforts by adding options that only ran on their platforms. History: In May 1997, Microsoft bought pioneering startup Dimension X, developers of several Java-based animation tools including Liquid Motion and Liquid Reality. Looking to make their recently introduced Direct3D more widely available, the Chrome project combined the Dimension X team with many members of the original D3D team. Chrome was originally positioned as a way to easily add 3D effects to all sorts of programs, and described as a "Windows system service" that would be finalized in early 1999. Chrome was the services level of the package, consisting of drivers that talked to D3D, along with a simple viewer application. History: Chromeffects was an XML-based wrapper that allowed Chrome to be called from within a web page. Embedding Chromeffects objects in HTML pages could produce rich content in the same way that VML does for 2D artwork. Chrome's project manager, Bob Heddle, claimed that "It is going to propel the industry. We're moving DirectX from programmers to artists." Likewise, Microsoft Liquid Motion was a layer similar to Chromeffects but within Java. History: Chromeffects did not support any of the media standards that were being developed at the W3C coincident with its development, including HTML+TIME or the document object model. This led to widespread outcry from the internet community, who saw Chrome as an attempt by Microsoft to inject a powerful proprietary technology into the open standards based web. If uptake of Chromeffects was widespread, this would limit users to Microsoft platforms where the content could be viewed. This led to promises on the part of MS to better interact with these technologies in the future. History: Chrome was previewed in July 1998 at that year's SIGGRAPH, with a developer's release following in August. At the time, Chrome demanded relatively hefty machines to run on, a 350 MHz Pentium II or better with an AGP graphics card. Even Microsoft admitted the hardware requirements were steep, according to Brad Chase, Vice President of Windows marketing and developer relations at Microsoft, "The initial PCs that will run the Chrome feature of Windows 98 are going to be 350MHz Pentium boxes. You're not going to be able to have this on a standard Pentium today." However, Microsoft claimed that this standard would be widely met by new machines; the general manager of multimedia at Microsoft, Eric Engstrom, noted "Over next 12 months, our projections show that 55 to 60 million units capable of running Chromeffects will be shipped." In spite of these promises, feedback from the testers was almost universally negative, complaining about poor performance and general bugginess. History: In September 1998, Steve Ballmer announced Chromeffects during his keynote speech at Seybold '98. He announced that Chromeffects had been released to hardware manufacturer partners and that they were integrating it with the Windows operating system that they are now shipping on new machines. 
History: Given the almost universal negative press, both from its own developers and the wider community, Microsoft announced that "Based on developer feedback, we are stepping back and redesigning Chromeffects technologies to better meet both our partner and customer needs." Chrome's cancellation was part of a larger reorganization that resulted in dramatic shakeups within Microsoft's multimedia groups. Many of the Chrome staff were merged back into the DirectX team, while Eric Engstrom was moved out of multimedia to the MSN team. Engstrom was in charge of Chrome and the equally "troubled" NetShow streaming media projects. At the time there was also speculation that Chrome was killed in order to avoid further troubles in their ongoing antitrust case, given the outcry from the web community. Microsoft did deliver on their promise to better track internet standards, releasing Microsoft Vizact, which was based on HTML+TIME. Vizact saw little uptake and was discontinued in 2000.
**DCL Technology Demonstrator programme** DCL Technology Demonstrator programme: The US DCL (Detection, Classification and Localisation) demonstrator program is aimed at proving that an active torpedo detection system is able to resolve a salvo of torpedoes with sufficient time and accuracy that an anti-torpedo torpedo may be fired back to hit and destroy the threat. Overview: The DCL systems consist of an active source emitter which sends high-frequency pings into the water. Reflections from in-water objects are received by a towed array tuned to those frequencies. By processing the reflections it is possible to determine whether objects are torpedoes or non-threat objects. The system is also combined with a passive acoustic towed array specifically designed for torpedo detection. The passive acoustic array is able to analyse the structured sound emanating from a torpedo and thereby classify the weapon type and mode of operation. Two teams are currently building alternative DCL demonstration systems; the first to test was Ultra Electronics, which in 2006 successfully resolved a salvo of torpedoes. The second company, APC, has yet to undergo tests. The aim of the programme is to resolve threats sufficiently well that an anti-torpedo torpedo may be fired at the threat to neutralise it (a hard-kill solution). This differs from the UK S2170 Surface Ship Torpedo Defence solution, which utilises soft-kill.
**Microlithography** Microlithography: Microlithography is a general name for any manufacturing process that can create a minutely patterned thin film of protective material over a substrate, such as a silicon wafer, in order to protect selected areas of it during subsequent etching, deposition, or implantation operations. The term is normally used for processes that can reliably produce features of microscopic size, such as 10 micrometres or less. The term nanolithography may be used to designate processes that can produce nanoscale features, such as less than 100 nanometres. Microlithography is a microfabrication process that is extensively used in the semiconductor industry and in the manufacture of microelectromechanical systems. Processes: Specific microlithography processes include: Photolithography, using light projected onto a photosensitive material film (photoresist). Electron beam lithography, using a steerable electron beam. Processes: Nanoimprinting Interference lithography Magnetolithography Scanning probe lithography Surface-charge lithography Diffraction lithography These processes differ in speed and cost, as well as in the materials they can be applied to and the range of feature sizes they can produce. For instance, while the size of features achievable with photolithography is limited by the wavelength of the light used (see the worked example below), the technique is considerably faster and simpler than electron beam lithography, which can achieve much smaller features. Applications: The main application for microlithography is the fabrication of integrated circuits ("electronic chips"), such as solid-state memories and microprocessors. These processes can also be used to create diffraction gratings, microscope calibration grids, and other flat structures with microscopic details.
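The wavelength limit mentioned above is conventionally quantified by the Rayleigh criterion. A worked example, with values that are illustrative of deep-ultraviolet immersion photolithography (k1 is a process-dependent factor, lambda the exposure wavelength, and NA the numerical aperture of the projection optics):

```latex
% Rayleigh criterion for the minimum printable feature size (critical dimension):
%   k_1      : process factor (commonly ~0.25-0.4)
%   \lambda  : exposure wavelength (ArF excimer laser: 193 nm)
%   \mathrm{NA} : numerical aperture (water-immersion optics: ~1.35)
\mathrm{CD} \;=\; k_1\,\frac{\lambda}{\mathrm{NA}}
            \;=\; 0.4 \times \frac{193\,\mathrm{nm}}{1.35}
            \;\approx\; 57\,\mathrm{nm}
```

Electron beam lithography is not bound by this limit, because the de Broglie wavelength of the electrons is far smaller than any optical wavelength; this is why it reaches much finer features, at the cost of speed.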
**Quantum capacity** Quantum capacity: In the theory of quantum communication, the quantum capacity is the highest rate at which quantum information can be communicated over many independent uses of a noisy quantum channel from a sender to a receiver. It is also equal to the highest rate at which entanglement can be generated over the channel, and forward classical communication cannot improve it. The quantum capacity theorem is important for the theory of quantum error correction, and more broadly for the theory of quantum computation. The theorem giving a lower bound on the quantum capacity of any channel is colloquially known as the LSD theorem, after the authors Lloyd, Shor, and Devetak who proved it with increasing standards of rigor.

Hashing bound for Pauli channels: The LSD theorem states that the coherent information of a quantum channel is an achievable rate for reliable quantum communication. For a Pauli channel, the coherent information has a simple form and the proof that it is achievable is particularly simple as well. We prove the theorem for this special case by exploiting random stabilizer codes and correcting only the likely errors that the channel produces.

Hashing bound for Pauli channels: Theorem (hashing bound). There exists a stabilizer quantum error-correcting code that achieves the hashing limit $R = 1 - H(\mathbf{p})$ for a Pauli channel of the following form:

$$\rho \mapsto p_I \rho + p_X X\rho X + p_Y Y\rho Y + p_Z Z\rho Z,$$

where $\mathbf{p} = (p_I, p_X, p_Y, p_Z)$ and $H(\mathbf{p})$ is the entropy of this probability vector.

Hashing bound for Pauli channels: Proof. Consider correcting only the typical errors. That is, consider defining the typical set of errors as follows:

$$T_{\delta}^{p^n} \equiv \left\{ a^n : \left| -\tfrac{1}{n}\log_2 \Pr\{E_{a^n}\} - H(\mathbf{p}) \right| \le \delta \right\},$$

where $a^n$ is some sequence consisting of the letters $\{I, X, Y, Z\}$ and $\Pr\{E_{a^n}\}$ is the probability that an IID Pauli channel issues some tensor-product error $E_{a^n} \equiv E_{a_1} \otimes \cdots \otimes E_{a_n}$. This typical set consists of the likely errors in the sense that

$$\sum_{a^n \in T_{\delta}^{p^n}} \Pr\{E_{a^n}\} \ge 1 - \epsilon,$$

for all $\epsilon > 0$ and sufficiently large $n$. The error-correcting conditions for a stabilizer code $S$ in this case are that $\{E_{a^n} : a^n \in T_{\delta}^{p^n}\}$ is a correctable set of errors if

$$E_{a^n}^{\dagger} E_{b^n} \notin N(S) \setminus S$$

for all error pairs $E_{a^n}$ and $E_{b^n}$ such that $a^n, b^n \in T_{\delta}^{p^n}$, where $N(S)$ is the normalizer of $S$. Also, we consider the expectation of the error probability under a random choice of a stabilizer code.

Hashing bound for Pauli channels: Proceed as follows:

$$\mathbb{E}_S\{p_e\} = \mathbb{E}_S\Big\{\sum_{a^n} \Pr\{E_{a^n}\}\,\mathcal{I}(E_{a^n}\text{ is uncorrectable under }S)\Big\} \le \mathbb{E}_S\Big\{\sum_{a^n \in T_{\delta}^{p^n}} \Pr\{E_{a^n}\}\,\mathcal{I}(E_{a^n}\text{ is uncorrectable under }S)\Big\} + \epsilon = \sum_{a^n \in T_{\delta}^{p^n}} \Pr\{E_{a^n}\}\,\mathbb{E}_S\{\mathcal{I}(E_{a^n}\text{ is uncorrectable under }S)\} + \epsilon = \sum_{a^n \in T_{\delta}^{p^n}} \Pr\{E_{a^n}\}\,\Pr{}_S\{E_{a^n}\text{ is uncorrectable under }S\} + \epsilon.$$

The first equality follows by definition—$\mathcal{I}$ is an indicator function equal to one if $E_{a^n}$ is uncorrectable under $S$ and equal to zero otherwise. The first inequality follows, since we correct only the typical errors because the atypical error set has negligible probability mass. The second equality follows by exchanging the expectation and the sum. The third equality follows because the expectation of an indicator function is the probability that the event it selects occurs. Continuing, we have:

$$\Pr{}_S\{E_{a^n}\text{ is uncorrectable under }S\} = \Pr{}_S\{\exists\, E_{b^n} : b^n \in T_{\delta}^{p^n},\ b^n \ne a^n,\ E_{a^n}^{\dagger}E_{b^n} \in N(S)\setminus S\} \le \Pr{}_S\{\exists\, E_{b^n} : b^n \in T_{\delta}^{p^n},\ b^n \ne a^n,\ E_{a^n}^{\dagger}E_{b^n} \in N(S)\} = \Pr{}_S\Big\{\bigcup_{b^n \in T_{\delta}^{p^n},\, b^n \ne a^n} E_{a^n}^{\dagger}E_{b^n} \in N(S)\Big\} \le \sum_{b^n \in T_{\delta}^{p^n},\, b^n \ne a^n} \Pr{}_S\{E_{a^n}^{\dagger}E_{b^n} \in N(S)\},$$

so that, inserting this back and applying the bound derived below together with the typicality bounds stated below,

$$\mathbb{E}_S\{p_e\} \le \sum_{a^n \in T_{\delta}^{p^n}} \Pr\{E_{a^n}\} \sum_{b^n \in T_{\delta}^{p^n}} 2^{-(n-k)} + \epsilon \le 2^{2n[H(\mathbf{p})+\delta]}\, 2^{-n[H(\mathbf{p})-\delta]}\, 2^{-(n-k)} + \epsilon = 2^{-n[1-H(\mathbf{p})-k/n-3\delta]} + \epsilon.$$

Hashing bound for Pauli channels: The first equality follows from the error-correcting conditions for a quantum stabilizer code, where $N(S)$ is the normalizer of $S$. The first inequality follows by ignoring any potential degeneracy in the code—we consider an error uncorrectable if it lies in the normalizer $N(S)$, and the probability can only be larger because $N(S)\setminus S \subseteq N(S)$. The second equality follows by realizing that the probabilities for the existence criterion and the union of events are equivalent. The second inequality follows by applying the union bound. The third inequality follows from the fact that the probability for a fixed operator $E_{a^n}^{\dagger}E_{b^n}$, not equal to the identity, commuting with the stabilizer operators of a random stabilizer can be upper bounded as follows:

$$\Pr{}_S\{E_{a^n}^{\dagger}E_{b^n} \in N(S)\} = \frac{2^{n+k}-1}{2^{2n}-1} \le 2^{-(n-k)}.$$

Hashing bound for Pauli channels: The reasoning here is that the random choice of a stabilizer code is equivalent to fixing operators $Z_1$, ..., $Z_{n-k}$ and performing a uniformly random Clifford unitary. The probability that a fixed operator commutes with $\bar{Z}_1$, ..., $\bar{Z}_{n-k}$ is then just the number of non-identity operators in the normalizer ($2^{n+k}-1$) divided by the total number of non-identity operators ($2^{2n}-1$). After applying the above bound, we then exploit the following typicality bounds:

$$\forall\, a^n \in T_{\delta}^{p^n}: \quad \Pr\{E_{a^n}\} \le 2^{-n[H(\mathbf{p})-\delta]}, \qquad \big|T_{\delta}^{p^n}\big| \le 2^{n[H(\mathbf{p})+\delta]}.$$

We conclude that as long as the rate $k/n = 1 - H(\mathbf{p}) - 4\delta$, the expectation of the error probability becomes arbitrarily small, so that there exists at least one choice of a stabilizer code with the same bound on the error probability.
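As a concrete illustration of the hashing bound (the channel parameters here are chosen arbitrarily, for the sake of the arithmetic):

```latex
% Hashing bound R = 1 - H(p) for an illustrative Pauli channel with
% p = (p_I, p_X, p_Y, p_Z) = (0.85, 0.05, 0.05, 0.05):
H(\mathbf{p}) = -0.85\log_2 0.85 - 3 \times 0.05\log_2 0.05
            \approx 0.199 + 0.648 = 0.847,
\qquad
R = 1 - H(\mathbf{p}) \approx 0.153
```

That is, random stabilizer codes of rate just below 0.153 qubits per channel use suffice for reliable communication over this channel.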
**Silliness** Silliness: Silliness is defined as engaging in "a ludicrous folly", showing a "lack of good sense or judgment", or "the condition of being frivolous, trivial, or superficial". In television, film, and the circus, portrayals of silliness such as exaggerated, funny behaviour are used to amuse audiences. Portrayals of silliness, provided by clowns and jesters, are also used to lift the spirits of people in hospitals. Psychology: In "The Art of Roughhousing", Anthony DeBenedet and Larry Cohen argue that "wild play" between a child and a parent can foster "joy, love and a deeper connection"; among the actions they suggest is for the parent to be silly and pretend to fall over.Michael Christianson from New York’s Big Apple Circus "became so interested in the healing qualities of physical comedy that he quit his job"..."to teach jesters, clowns and comedians how to connect with hospital patients through his Clown Care Unit." A doctor named Patch Adams "...leads a merry band of mirth makers on trips around the world to locations of crisis or suffering in order to serve up some levity and healing."In the United States and Mexico, the US practical joke group Improv Everywhere has created an 'international celebration of silliness' by asking commuters to board the New York and Mexico City subways without trousers on a specific day. In the circus: In the circus, one of the roles that clowns play is engaging in silliness. When clowning is taught, the different components of silliness include "funny ways of speaking to make people laugh", making "silly face[s] and sound[s]", engaging in "funny ways of moving, and play[ing] with extreme emotions such as pretending to laugh and cry". In Canada, the Northern Arts and Cultural Centre held a Children's Festival of Silliness in January 2012. Quotes: C. S. Lewis noted in chapter six of The Magician's Nephew that "Children have one kind of silliness, as you know, and grown-ups have another kind."The English singer and guitarist Roy Harper included a song called "Grown Ups Are Just Silly Children" on his 1975 album HQ. The title is repeated as the chorus.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Interatomic Coulombic decay** Interatomic Coulombic decay: Interatomic Coulombic decay (ICD) is a general, fundamental property of atoms and molecules that have neighbors. Interatomic (intermolecular) Coulombic decay is a very efficient interatomic (intermolecular) relaxation process of an electronically excited atom or molecule embedded in an environment. Without the environment the process cannot take place. Until now it has been mainly demonstrated for atomic and molecular clusters, independently of whether they are of van der Waals or hydrogen-bonded type. Interatomic Coulombic decay: The nature of the process can be depicted as follows: Consider a cluster with two subunits, A and B. Suppose an inner-valence electron is removed from subunit A. If the resulting (ionized) state is higher in energy than the double ionization threshold of subunit A, then an intraatomic (intramolecular) process (autoionization, or in the case of core ionization Auger decay) sets in. Even if the excitation is energetically not higher than the double ionization threshold of subunit A itself, it may be higher than the double ionization threshold of the cluster, which is lowered due to charge separation. If this is the case, an interatomic (intermolecular) process sets in, which is called ICD. During the ICD the excess energy of subunit A is used to remove (due to electronic correlation) an outer-valence electron from subunit B. As a result, a doubly ionized cluster is formed with a single positive charge on A and B. Thus, charge separation in the final state is a fingerprint of ICD. As a consequence of the charge separation the cluster typically breaks apart via Coulomb explosion. Interatomic Coulombic decay: ICD is characterized by its decay rate or the lifetime of the excited state. The decay rate depends on the interatomic (intermolecular) distance of A and B, and its dependence allows one to draw conclusions on the mechanism of ICD. Particularly important is the determination of the kinetic energy spectrum of the electron emitted from subunit B, which is denoted the ICD electron. ICD electrons are often measured in ICD experiments. Typically, ICD takes place on the femtosecond time scale, many orders of magnitude faster than the competing photon emission and other relaxation processes. ICD in water: Recently, ICD has been identified as an additional source of low-energy electrons in water. There, ICD is faster than the competing proton transfer that is usually the prominent pathway in the case of electronic excitation of water clusters. The response of condensed water to electronic excitations is of utmost importance for biological systems. For instance, it was shown in experiments that low-energy electrons can effectively damage constituents of DNA. Furthermore, ICD was reported after core-electron excitations of hydroxide dissolved in water. Related processes: Interatomic (intermolecular) processes do not only occur after ionization as described above. Independent of what kind of electronic excitation is at hand, an interatomic (intermolecular) process can set in if an atom or molecule is in a state energetically higher than the ionization threshold of other atoms or molecules in the neighborhood. The following ICD-related processes, which are for convenience described below for clusters, are known: Resonant Interatomic Coulombic Decay (RICD) was the first to be validated experimentally.
This process emanates from an inner-valence excitation where an inner-valence electron is promoted to a virtual orbital. During the process the vacant inner-valence spot is filled by an outer-valence electron of the same subunit or by the electron in the virtual orbital. The process is referred to as RICD if the excess energy generated in this way removes an outer-valence electron from another cluster constituent. The excess energy can, on the other hand, also be used to remove an outer-valence electron from the same subunit (autoionization). Consequently, RICD competes not only with slow radiative decay, as ICD does, but also with the effective autoionization. Both experimental and theoretical evidence show that this competition does not lead to a suppression of the RICD. Related processes: The Auger-ICD cascade was first predicted theoretically. States with a vacancy in a core shell usually undergo Auger decay. This decay often produces doubly ionized states, which can sometimes decay by another Auger decay, forming a so-called Auger cascade. However, often the doubly ionized state is not high enough in energy to decay intraatomically once more. Under such conditions, formation of a decay cascade is impossible in the isolated species, but can occur in clusters with the next step being ICD. Meanwhile, the Auger-ICD cascade has been confirmed and studied experimentally. Related processes: Excitation–transfer–ionization (ETI) is a non-radiative decay pathway of outer-valence excitations in an environment. Assume that an outer-valence electron of a cluster subunit is promoted to a virtual orbital. In the isolated species this excitation can usually only decay slowly by photon emission. In the cluster there is an additional, much more efficient pathway if the ionization threshold of another cluster constituent is lower than the excitation energy. Then the excess energy of the excitation is transferred interatomically (intermolecularly) to remove an outer-valence electron from another cluster subunit with an ionization threshold lower than the excitation energy. Usually, this interatomic (intermolecular) process also takes place within a few femtoseconds. Related processes: Electron-transfer-mediated decay (ETMD) is a non-radiative decay pathway where a vacancy in an atom or molecule is filled by an electron from a neighboring species; a secondary electron is emitted either by the first atom/molecule or by the neighboring species. The existence of this decay mechanism has been proven experimentally in argon dimers and in mixed argon–krypton clusters.
**Ideal quotient** Ideal quotient: In abstract algebra, if I and J are ideals of a commutative ring R, their ideal quotient (I : J) is the set (I : J) = {r ∈ R ∣ rJ ⊆ I}. Then (I : J) is itself an ideal in R. The ideal quotient is viewed as a quotient because KJ ⊆ I if and only if K ⊆ (I : J). The ideal quotient is useful for calculating primary decompositions. It also arises in the description of the set difference in algebraic geometry (see below). Ideal quotient: (I : J) is sometimes referred to as a colon ideal because of the notation. In the context of fractional ideals, there is a related notion of the inverse of a fractional ideal. Properties: The ideal quotient satisfies the following properties: (I : J) = Ann_R((J + I)/I) as R-modules, where Ann_R(M) denotes the annihilator of M as an R-module. J ⊆ I ⇔ (I : J) = R (in particular, (I : I) = (R : I) = (I : 0) = R). (I : R) = I. (I : (JK)) = ((I : J) : K). (I : (J + K)) = (I : J) ∩ (I : K). ((I ∩ J) : K) = (I : K) ∩ (J : K). (I : (r)) = (1/r)(I ∩ (r)) (as long as R is an integral domain). Calculating the quotient: The above properties can be used to calculate the quotient of ideals in a polynomial ring given their generators. For example, if I = (f₁, f₂, f₃) and J = (g₁, g₂) are ideals in k[x₁, ..., xₙ], then I : J = (I : (g₁)) ∩ (I : (g₂)) = (1/g₁)(I ∩ (g₁)) ∩ (1/g₂)(I ∩ (g₂)). Then elimination theory can be used to calculate the intersection of I with (g₁) and (g₂): I ∩ (g₁) = (tI + (1 − t)(g₁)) ∩ k[x₁, …, xₙ], and I ∩ (g₂) = (tI + (1 − t)(g₂)) ∩ k[x₁, …, xₙ]. Calculate a Gröbner basis for tI + (1 − t)(g₁) with respect to lexicographic order. Then the basis elements which have no t in them generate I ∩ (g₁) (a computational sketch follows below). Geometric interpretation: The ideal quotient corresponds to set difference in algebraic geometry. More precisely, if W is an affine variety (not necessarily irreducible) and V is a subset of the affine space (not necessarily a variety), then I(V) : I(W) = I(V ∖ W), where I(∙) denotes the taking of the ideal associated to a subset. If I and J are ideals in k[x₁, ..., xₙ], with k an algebraically closed field and I radical, then Z(I : J) = cl(Z(I) ∖ Z(J)), where cl(∙) denotes the Zariski closure and Z(∙) denotes the taking of the variety defined by an ideal. If I is not radical, then the same property holds if we saturate the ideal J: Z(I : J^∞) = cl(Z(I) ∖ Z(J)), where (I : J^∞) = ⋃_{n≥1}(I : Jⁿ). Examples: In ℤ, ((6) : (2)) = (3). In algebraic number theory, the ideal quotient is useful while studying fractional ideals. This is because the inverse of any invertible fractional ideal I of an integral domain R is given by the ideal quotient ((1) : I) = I⁻¹. One geometric application of the ideal quotient is removing an irreducible component of an affine scheme. For example, let I = (xyz) and J = (xy) in ℂ[x, y, z] be the ideals corresponding to the union of the x-, y-, and z-planes and to the union of the x- and y-planes in 𝔸³ over ℂ. Then the ideal quotient (I : J) = (z) is the ideal of the z-plane in 𝔸³ over ℂ. This shows how the ideal quotient can be used to "delete" irreducible subschemes. Examples: A useful scheme-theoretic example is taking the ideal quotient of a reducible ideal. For example, the ideal quotient ((x⁴y³) : (x²y²)) = (x²y), showing that the ideal quotient of a subscheme of some non-reduced scheme, where both have the same reduced subscheme, kills off some of the non-reduced structure. Examples: We can use the previous example to find the saturation of an ideal corresponding to a projective scheme. Given a homogeneous ideal I ⊂ R[x₀, …, xₙ], the saturation of I is defined as the ideal quotient (I : m^∞) = ⋃_{i≥1}(I : mⁱ), where m = (x₀, …, xₙ) ⊂ R[x₀, …, xₙ]. It is a theorem that the set of saturated ideals of R[x₀, …, xₙ] contained in m is in bijection with the set of projective subschemes in ℙⁿ over R. This shows us that (x⁴ + y⁴ + z⁴)mᵏ defines the same projective curve as (x⁴ + y⁴ + z⁴) in ℙ² over ℂ.
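The elimination computation just described is easy to carry out with a computer algebra system. A minimal sketch using SymPy; the ideals I = (x²y, xy²) and J = (xy) are chosen only to keep the output readable, and the helper variable t with a lex ordering implements the intersection trick above:

```python
from sympy import symbols, groebner, div

t, x, y = symbols('t x y')

# Illustrative ideals: I = (x^2*y, x*y^2) and J = (x*y) in k[x, y].
f1, f2 = x**2*y, x*y**2
g = x*y

# Elimination trick: compute a lex Groebner basis of t*I + (1-t)*(g)
# with t ordered first; the basis elements free of t generate I ∩ (g).
G = groebner([t*f1, t*f2, (1 - t)*g], t, x, y, order='lex')
intersection = [p for p in G.exprs if t not in p.free_symbols]

# (I : (g)) is generated by h/g for each generator h of I ∩ (g),
# using exact polynomial division (the remainders vanish here).
quotient_gens = [div(p, g, x, y)[0] for p in intersection]

print("I ∩ (g):", intersection)     # expected: x**2*y and x*y**2
print("(I : (g)):", quotient_gens)  # expected: x and y, i.e. (x, y)
```

Indeed ((x²y, xy²) : (xy)) = (x, y), matching the property (I : (g)) = (1/g)(I ∩ (g)) listed above.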
**Trace zero cryptography** Trace zero cryptography: In 1998 Gerhard Frey first proposed using trace zero varieties for cryptographic purposes. These varieties are subgroups of the divisor class group on a low genus hyperelliptic curve defined over a finite field. These groups can be used to establish asymmetric cryptography using the discrete logarithm problem as cryptographic primitive. Trace zero varieties feature a better scalar multiplication performance than elliptic curves. This allows fast arithmetic in these groups, which can speed up the calculations by a factor of 3 compared with elliptic curves and hence speed up the cryptosystem. Another advantage is that for groups of cryptographically relevant size, the order of the group can simply be calculated using the characteristic polynomial of the Frobenius endomorphism. This is not the case, for example, in elliptic curve cryptography when the group of points of an elliptic curve over a prime field is used for cryptographic purposes. However, to represent an element of the trace zero variety more bits are needed compared with elements of elliptic or hyperelliptic curves. Another disadvantage is the fact that the security of the TZV can be reduced by 1/6th of the bit length using a cover attack. Mathematical background: A hyperelliptic curve C of genus g over a prime field F_q, where q = p^n (p prime) of odd characteristic, is defined as C: y² + h(x)y = f(x), where f is monic, deg(f) = 2g + 1 and deg(h) ≤ g. The curve has at least one F_q-rational Weierstraß point. Mathematical background: The Jacobian variety J_C(F_{q^n}) of C is, for all finite extensions F_{q^n}, isomorphic to the ideal class group Cl(C/F_{q^n}). With Mumford's representation it is possible to represent the elements of J_C(F_{q^n}) with a pair of polynomials [u, v], where u, v ∈ F_{q^n}[x]. The Frobenius endomorphism σ is used on an element [u, v] of J_C(F_{q^n}) to raise the power of each coefficient of that element to q: σ([u, v]) = [u^q(x), v^q(x)]. The characteristic polynomial of this endomorphism has the following form: χ(T) = T^{2g} + a₁T^{2g−1} + ⋯ + a_g T^g + ⋯ + a₁q^{g−1}T + q^g, where the aᵢ are in ℤ. With the Hasse–Weil theorem it is possible to obtain the group order of any extension field F_{q^n} by using the complex roots τᵢ of χ(T): |J_C(F_{q^n})| = ∏_{i=1}^{2g} (1 − τᵢ^n). Let D be an element of J_C(F_{q^n}); then it is possible to define an endomorphism of J_C(F_{q^n}), the so-called trace of D: Tr(D) = ∑_{i=0}^{n−1} σⁱ(D) = D + σ(D) + ⋯ + σ^{n−1}(D). Based on this endomorphism one can reduce the Jacobian variety to a subgroup G with the property that every element is of trace zero: Tr(D) = 0, where 0 is the neutral element in J_C(F_{q^n}). G is the kernel of the trace endomorphism, and thus G is a group, the so-called trace zero (sub)variety (TZV) of J_C(F_{q^n}). The intersection of G and J_C(F_q) is produced by the n-torsion elements of J_C(F_q). If the greatest common divisor gcd(n, |J_C(F_q)|) = 1, the intersection is trivial and one can compute the group order of G: |G| = |J_C(F_{q^n})| / |J_C(F_q)| = ∏_{i=1}^{2g}(1 − τᵢ^n) / ∏_{i=1}^{2g}(1 − τᵢ) (a worked example for g = 1, n = 3 is given at the end of this article). The actual group used in cryptographic applications is a subgroup G₀ of G of a large prime order l. This group may be G itself. There exist three different cases of cryptographical relevance for TZV: g = 1, n = 3; g = 1, n = 5; g = 2, n = 3. Arithmetic: The arithmetic used in the TZV group G₀ is based on the arithmetic of the whole group J_C(F_{q^n}), but it is possible to use the Frobenius endomorphism σ to speed up the scalar multiplication. This can be achieved if G₀ is generated by D of order l, since then σ(D) = sD for some integer s.
For the given cases of TZV, s can be computed as follows, where the aᵢ come from the characteristic polynomial of the Frobenius endomorphism: for g = 1, n = 3: mod ℓ; for g = 1, n = 5: mod ℓ; for g = 2, n = 3: mod ℓ. Knowing this, it is possible to replace any scalar multiplication mD (|m| ≤ l/2) with mD = m₀D + m₁σ(D) + ⋯ + m_{n−2}σ^{n−2}(D), where mᵢ = O(ℓ^{1/(n−1)}) = O(q^g). With this trick the multiple scalar product can be reduced to about 1/(n − 1)th of the doublings necessary for calculating mD, if the implied constants are small enough. Security: The security of cryptographic systems based on trace zero subvarieties is, according to the results of the papers, comparable to the security of hyperelliptic curves of low genus g' over F_{p'}, where p' ~ (n − 1)(g/g') for |G| of ~128 bits. Security: For the cases where n = 3, g = 2 and n = 5, g = 1 it is possible to reduce the security by at most 6 bits, where |G| ~ 2²⁵⁶, because one cannot be sure that G is contained in a Jacobian of a curve of genus 6. Curves of genus 4 over similar fields are far less secure. Cover attack on a trace zero crypto-system: The published attack shows that the DLP in trace zero groups of genus 2 over finite fields of characteristic different from 2 or 3 and a field extension of degree 3 can be transformed into a DLP in a class group of degree 0 with genus of at most 6 over the base field. In this new class group the DLP can be attacked with index calculus methods. This leads to a reduction of the bit length by 1/6th.
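A worked example of the group-order computation from the mathematical background above, for the elliptic case g = 1, n = 3 (writing t = −a₁ for the trace of Frobenius, so that |E(F_q)| = q + 1 − t; the algebra follows from τ₁τ₂ = q and τ₁ + τ₂ = t):

```latex
% Order of the trace zero subvariety for g = 1, n = 3:
|G| = \frac{\prod_{i=1}^{2}\bigl(1-\tau_i^3\bigr)}{\prod_{i=1}^{2}\bigl(1-\tau_i\bigr)}
    = \prod_{i=1}^{2}\bigl(1+\tau_i+\tau_i^2\bigr)
    = q^2 - q + 1 + (q+1)\,t + t^2
```

Since |t| ≤ 2√q, this gives |G| ≈ q², and the order is read off directly from the characteristic polynomial of the Frobenius endomorphism, as claimed.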
**FRW/CFT duality** FRW/CFT duality: The FRW/CFT duality is a conjectured duality for Friedmann–Robertson–Walker models inspired by the AdS/CFT correspondence. It assumes that the cosmological constant is exactly zero, which is only the case for models with exact unbroken supersymmetry. Because the energy density does not approach zero as we approach spatial infinity, the metric is not asymptotically flat. This is not an asymptotically cold solution. Overview: In eternal inflation, our universe passes through a series of phase transitions with progressively lower cosmological constant. Our current phase has a cosmological constant of size 10⁻¹²³ in Planck units, which is conjectured to be metastable in string theory. It is possible our universe might tunnel into a supersymmetric phase with an exactly zero cosmological constant. In fact, any particle in eternal inflation will eventually terminate in a phase with exactly zero or negative cosmological constant. The phases with negative cosmological constant will end in a Big Crunch. Shenkar and Leonard Susskind called this the Census Taker's Hat. Overview: The conformal compactification of the terminal phase has a Penrose diagram shaped like a hat for future null infinity. A Euclidean Liouville quantum field theory is assumed to reside there. The null coordinate corresponds to the running of the renormalization group. The terminal phase has an ever-expanding FRW metric in which the average energy density goes to zero.
**Farrier** Farrier: A farrier is a specialist in equine hoof care, including the trimming and balancing of horses' hooves and the placing of shoes on their hooves, if necessary. A farrier combines some blacksmith's skills (fabricating, adapting, and adjusting metal shoes) with some veterinarian's skills (knowledge of the anatomy and physiology of the lower limb) to care for horses' feet. Traditionally an occupation for men, in a number of countries women have now become farriers. History: While the practice of putting protective hoof coverings on horses dates back to the first century, evidence suggests that the practice of nailing iron shoes into a horse's hoof is a much later invention. One of the first archaeological discoveries of an iron horseshoe was found in the tomb of Merovingian king Childeric I, who reigned from 458 to 481 or 482. The discovery was made by Adrien Quinquin in 1653, and the findings were written about by Jean-Jacques Chifflet in 1655. Chifflet wrote that the iron horseshoe was so rusted that it fell apart as he attempted to clean it. He did, however, make an illustration of the shoe and noted that it had four holes on each side for nails. Although this discovery places the existence of iron horseshoes during the later half of the fifth century, their further usage is not recorded until closer to the end of the millennium. Carolingian Capitularies, legal acts composed and published by Frankish kings until the ninth century, display a high degree of attention to detail when it came to military matters, even going as far as to specify which weapons and equipment soldiers were to bring when called upon for war. In the Capitularies that call for horsemen, no mention of horseshoes can be found. Excavations from Viking-age burials also demonstrate a lack of iron horseshoes, even though many of the stirrups and other horse tack survived. A burial dig in Slovenia discovered iron bits, stirrups, and saddle parts, but no horseshoes. The first literary mention of nailed horseshoes is found within Ekkehard's Waltharius, written c. 920 AD. The practice of shoeing horses in Europe likely originated in Western Europe, where there was more need due to the way the climate affected horses' hooves, before spreading eastward and northward by 1000 AD. History: The task of shoeing horses was originally performed by blacksmiths, as reflected in the word's origin in the Latin ferrum. However, by the time of Edward III of England (r. 1327–1377) the position, among others, had become much more specialized. This was part of a larger trend in specialization and the division of labour in England at the time. In 1350, Edward released an ordinance concerning pay and wages within the city of London. The ordinance mentioned farriers and decreed that they were not to charge more for their services than "they were wont to take before the time of the pestilence." The pestilence mentioned was the Black Death, which places the existence of farriers as a trade independent of blacksmiths in 1346 at the latest. In 1350, a statute from Edward designated the shoer of horses at court to be the ferrour des chivaux (literally, shoer of horses), who would be sworn in before judges. The ferrour des chivaux would swear to do his craft properly and to limit himself solely to it. The increasing division of labour in England, especially in regard to the farriers, proved beneficial for Edward III during the first phase of the Hundred Years' War.
The English army traveled into France with an immense baggage train that possessed its own forges in order for the Sergeants-Farrier and his assistants to shoe horses in the field. The increased specialization of the fourteenth century allowed Edward to create a self-sufficient army, thus contributing to his military success in France. Etymology: The word farrier can be traced back to the Middle English word ferrǒur, which referred to a blacksmith who also shoed horses. Ferrǒur can be traced back to the even earlier Old French ferreor, which in itself is based upon the Latin ferrum, meaning 'iron'. Work: A farrier's routine work is primarily hoof trimming and shoeing. In ordinary cases, trimming each hoof so it retains proper foot function is important. If the animal has a heavy work load, works on abrasive footing, needs additional traction, or has pathological changes in the hoof or conformational challenges, then shoes may be required. Additional tasks for the farrier include dealing with injured or diseased hooves and application of special shoes for racing, training, or "cosmetic" purposes. Horses with certain diseases or injuries may need remedial procedures for their hooves, or need special shoes. Qualifications: In countries such as the United Kingdom, people other than registered farriers cannot legally call themselves a farrier or carry out any farriery work (in the UK, this is under the Farriers (Registration) Act 1975). The primary aim of the act is to "prevent and avoid suffering by and cruelty to horses arising from the shoeing of horses by unskilled persons". Qualifications: However, in other countries, such as the United States, farriery is not regulated, no legal certification exists, and qualifications can vary. In the US, four organizations - the American Farrier's Association (AFA), the Guild of Professional Farriers (GPF), the Brotherhood of Working Farriers, and the Equine Lameness Prevention Organization (ELPO) - maintain voluntary certification programs for farriers. Of these, the AFA's program is the largest, with about 2800 certified farriers. Additionally, the AFA program has a reciprocity agreement with the Farrier Registration Council and the Worshipful Company of Farriers in the UK. Qualifications: Within the certification programs offered by the AFA, the GPF, and the ELPO, all farrier examinations are conducted by peer panels. The farrier examinations for these organizations are designed so that qualified farriers may obtain a formal credential indicating they meet a meaningful standard of professional competence as determined by technical knowledge and practical skills examinations, length of field experience, and other factors. Farriers who have received a certificate of completion for attending a farrier school or course may represent themselves as having completed a particular course of study. Sometimes, usually for purposes of brevity, they use the term "certified" in advertising. Qualifications: Where professional registration exists, on either a compulsory or voluntary basis, a requirement for continuing professional development activity often exists to maintain a particular license or certification. For instance, farriers voluntarily registered with the American Association of Professional Farriers require at least 16 hours of continuing education every year to maintain their accreditation. 
Women farriers: Traditionally, farriery has been seen as a career for men although images do show women shoeing horses at a horse hospital in the early twentieth century. In the twentieth and twenty-first centuries, however, the number of women entering the profession has risen in, for example, Australia, Canada, Ireland, New Zealand, Senegal, the UK and the USA. Traditionally, farriers worked in premises such as forges with yards where they could hot-shoe a number of horses. Changes in the industry including the introduction of electric grinders, gas-powered portable forges, ready-made shoes, and plastic stick-on shoes, have now made travelling to individual clients possible. The changes in materials and ways of working make it easier for women to combine the career with motherhood. James Blurton, 2005 World Champion Farrier, also said, "Farriery is all about technique and getting the horse to do the work for you. It is not a wrestling match." Women in the UK are now becoming 'master' farriers and Fellows of the Worshipful Company of Farriers, training apprentice farriers from around the world.
**Mode 3 (telephone)** Mode 3 (telephone): In telephony, mode 3 is a method of line sharing in which the line passes through a device (the mode 3 device) to connect to other devices. This enables the mode 3 device to control the line and gain priority when needed. It is a common alternative to parallel connection. Mode 3 (telephone): For example, a dial-up computer modem is generally provided with two line connectors, often labelled line and phone. The outside line is connected to the line connector, and handsets may be connected to the phone connector. When the modem is not in use, the line is connected to the handsets, but when the modem is in use, the phone connector is disconnected from the outside line. This prevents accidental interruption of the data service by raising the handset, which could occur if the two devices were simply connected in parallel. Mode 3 (telephone): Other devices often wired in mode 3 include fax machines and autodiallers such as back-to-base intrusion alarms, medical alarms, and similar services. In Australia, mode 3 operation was facilitated by use of the 611 socket.
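The switching behaviour described above amounts to a small state machine, which the following Python sketch models. It is purely illustrative: the class and method names are hypothetical, and a real mode 3 device does this with relay or solid-state switching, not software.

```python
# Minimal sketch of mode 3 line-sharing logic (illustrative only; the
# class and attribute names here are hypothetical, not from a standard).

class Mode3Device:
    """A device wired in series between the outside line and the handsets."""

    def __init__(self):
        self.in_use = False  # True while the device (e.g. a modem) holds the line

    def seize_line(self):
        # Taking the line disconnects the downstream "phone" port, so a
        # handset going off-hook cannot interrupt the data session.
        self.in_use = True

    def release_line(self):
        self.in_use = False

    def phone_port_connected(self):
        # Handsets only see the outside line while the device is idle.
        return not self.in_use


modem = Mode3Device()
print(modem.phone_port_connected())  # True: handsets work normally
modem.seize_line()
print(modem.phone_port_connected())  # False: handsets are cut off during the call
```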
**Hovione** Hovione: Hovione is a Contract Development and Manufacturing Organization (CDMO) with services for drug substance, drug product intermediate and drug product. The company has four FDA-inspected sites, in the United States, Portugal, Ireland and China, and development laboratories in Lisbon, Portugal and New Jersey, USA. Hovione is also present in the inhalation area, and provides a complete range of services, from active pharmaceutical ingredients (APIs) to formulation development and devices. Hovione was the first chemical/pharmaceutical company to become a Certified B Corporation, and is a member of Rx-360 and EFCG. History: Hovione was established in Portugal in 1959 by Ivan Villax with his wife, Diane Villax, and two other Hungarian refugees: Nicholas de Horthy and Andrew Onody. The first two letters of the three founders' names - HO, VI and ON - were used to create the name Hovione. Hovione continues to be a privately held company. Its five plants - in Portugal (1969), Macao (1986), New Jersey (2001, expanded in 2016), Taizhou in mainland China (2008) and Cork, Ireland (2009) - have a total reactor capacity of 1,300 m³, and the company employs 1,100 people worldwide. Since Hovione started operations in 1959, the company has patented more than 100 innovative chemical processes and produced over 45 different APIs at industrial scale. All Hovione sites have been successfully inspected by the FDA, the European Medicines Agency or the Japanese agency PMDA. Hovione is a major source of semi-synthetic tetracyclines and corticosteroids, and is the largest independent supplier of contrast agents – these three families of compounds make up most of its generic product portfolio. The other half of the business focuses on exclusive projects, including development of innovator APIs and particle engineering. Research and development: Hovione has two R&D centers with a team of over 270 scientists, one in Loures, Portugal and another at its facilities in New Jersey. Hovione has international partnerships, including with Cambridge and MIT. In 2016, seven PhD projects were running simultaneously, and the company launched a scientific program named "9oW", which challenges the scientific and academic communities to help overcome technological challenges. Hovione is also already the largest private employer of doctorate holders in Portugal (57 in Loures). Strategy: Hovione is present in the pharmaceutical industry offering products and services related to the development and manufacture of either a new chemical entity (NCE) for an exclusive contract manufacturing partner or an existing API for an off-patent product. In the area of technology, it has capabilities in, among others, spray drying, controlled crystallization, microfluidization, and continuous tableting. Hovione invested about $100 million in 2017 and plans to spend as much again this year and next. The plan over the next three years is to continue investing, especially in Portugal, where the firm will add 165 m³ of chemical synthesis capacity, a spray-dryer building, and a 1,200-m² analytical lab.
**Lyryx Learning** Lyryx Learning: Lyryx Learning (Lyryx) is an educational software company offering open educational resources (OERs) paired with online homework & exams for undergraduate introductory courses in Mathematics & Statistics and Business & Economics. History: In 1997, Claude Laflamme and Keith Nicholson, Professors in the Department of Mathematics and Statistics at the University of Calgary, began work on the design of online tools to support student learning in their classes. Laflamme and Nicholson developed and implemented a formative assessment system which provided immediate, substantive feedback to students based on their work. In 2000, Laflamme and Nicholson, together with two software developers, Bruce Bauslaugh and Richard Cannings, formed Lyryx Learning Inc., to offer this platform in a number of quantitative disciplines. By 2010, Lyryx supported approximately 100,000 students and 2,000 instructors per year in Canada. After several years of developing formative online assessment for content from various publishers, including McGraw-Hill Ryerson in Canada and Flat World Knowledge in the US, Lyryx became a fully independent publisher supporting OERs in 2013, with the launch of Lyryx with Open Texts. Lyryx with Open Texts: To support the use of OERs in undergraduate introductory courses in Mathematics & Statistics and Business & Economics, Lyryx moved to a social enterprise business model: funding from the online homework supports both the development and maintenance of OERs as well as contributions to the community. In addition, Lyryx also offers an option of free access to their online homework from an institution's computer labs. Lyryx with Open Texts: Lyryx with Open Texts includes: Adapted Open Texts: open textbooks which can be distributed at no cost, and editorial services to adapt the open textbooks for each specific course; all textbooks are licensed under a Creative Commons license. Formative Online Assessment: algorithmically generated homework and exam questions are automatically graded, and individualized feedback is also provided to the student. Course Supplements: a wide variety of materials to support the instructor, including slides, solutions manuals, and test banks; for select products, Lyryx offers source files in an editable LaTeX format. User Support: in-house support for both instructors and students, 365 days/year. List of Textbooks: Accounting: Introduction to Financial Accounting; Introduction to Financial Accounting: US GAAP; Intermediate Financial Accounting Volume I; Intermediate Financial Accounting Volume II. Economics: Principles of Microeconomics; Principles of Macroeconomics; Principles of Economics. Mathematics: Calculus: Early Transcendentals; Linear Algebra with Applications; A First Course in Linear Algebra. Business Mathematics: Business Math: A Step-by-Step Handbook. Repositories: In addition to lyryx.com, Lyryx Learning open textbooks are also listed in the following repositories: Merlot; OER Commons; BCcampus; Manitoba Open Textbook Initiative; eCampus Ontario Open Textbook Library; National Network for Equitable Library Service (NNLES); Oasis; Geneseo Open Textbook Search/State University of New York; Open Textbook Library/University of Minnesota; Saylor; San Diego Community College District OER. OpenStax Ally: Lyryx is an OpenStax Ally for the products listed below. The texts and supplementary resources are provided by OpenStax, and Lyryx provides corresponding online assessment and support.
Principles of Accounting, Volume 1: Financial Accounting; Principles of Accounting, Volume 2: Managerial Accounting; Calculus; Introductory Statistics; Introductory Business Statistics. Awards: Lyryx Learning is a 2019 winner of the Outstanding Achievement in Information and Communications Technology award at the 30th Annual ASTech Awards. Lyryx with Open Texts received a "2017 Honorable Mention" from the Open Education Consortium.
**5-Formamidoimidazole-4-carboxamide ribotide** 5-Formamidoimidazole-4-carboxamide ribotide: 5-Formamidoimidazole-4-carboxamide ribotide (or FAICAR) is an intermediate in the formation of purines. It is formed by the enzyme AICAR transformylase from AICAR and 10-formyltetrahydrofolate.
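Written as a reaction equation (a restatement of the sentence above, with THF abbreviating tetrahydrofolate):

$$\text{AICAR} + N^{10}\text{-formyl-THF} \longrightarrow \text{FAICAR} + \text{THF}$$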
**Sotolon** Sotolon: Sotolon (also known as sotolone) is a lactone and an extremely powerful aroma compound, with the typical smell of fenugreek or curry at high concentrations and of maple syrup, caramel, or burnt sugar at lower concentrations. Sotolon is the major aroma and flavor component of fenugreek seed and lovage, and is one of several aromatic and flavor components of artificial maple syrup. It is also present in molasses, aged rum, aged sake and white wine, flor sherry, roast tobacco, and dried fruiting bodies of the mushroom Lactarius helvus. Sotolon can pass through the body relatively unchanged, and consumption of foods high in sotolon, such as fenugreek, can impart a maple syrup aroma to one's sweat and urine. In some individuals with the genetic disorder maple syrup urine disease, it is spontaneously produced in their bodies and excreted in their urine, leading to the disease's characteristic smell. This molecule is thought to be responsible for the mysterious maple syrup smell that has occasionally wafted over Manhattan since 2005. Sotolon was first isolated in 1975 from the herb fenugreek. The compound was named in 1980 when it was found to be responsible for the flavor of raw cane sugar: soto- means "raw sugar" in Japanese and -olon signifies that the molecule is an enol lactone. Several aging-derived compounds have been pointed out as playing an important role in the aroma of fortified wines; however, sotolon (3-hydroxy-4,5-dimethyl-2(5H)-furanone) is recognized as the key odorant and has also been classified as a potential aging marker of this type of wine. This chiral lactone is a powerful odorant, which can impart a nutty, caramel, curry, or rancid odor, depending on its concentration and enantiomeric distribution. Besides being identified as a key odorant of fortified wines, sotolon has also drawn researchers' attention for its off-flavor character, associated with the premature oxidative aging of young dry white wines, where it overlaps the expected fruity, flowery, and fresh character. The compound can be detected by miniaturized emulsification extraction followed by GC–MS/SIM, and by single-step miniaturized liquid-liquid extraction followed by LC-MS/MS analysis. French vin jaune: Vin jaune is marked by the formation of sotolon from α-ketobutyric acid.
**Serial Line Internet Protocol** Serial Line Internet Protocol: The Serial Line Internet Protocol (SLIP) is an encapsulation of the Internet Protocol designed to work over serial ports and router connections. It is documented in RFC 1055. On personal computers, SLIP has largely been replaced by the Point-to-Point Protocol (PPP), which is better engineered, has more features, and does not require its IP address configuration to be set before it is established. On microcontrollers, however, SLIP is still the preferred way of encapsulating IP packets, due to its very small overhead. Serial Line Internet Protocol: Some people refer to the successful and widely used RFC 1055 Serial Line Internet Protocol as "Rick Adams' SLIP", to avoid confusion with other proposed protocols named "SLIP". Those other protocols include the much more complicated RFC 914 appendix D Serial Line Interface Protocol. Description: SLIP modifies a standard TCP/IP datagram by appending a special "END" byte to it, which distinguishes datagram boundaries in the byte stream. If the END byte occurs in the data to be sent, the two-byte sequence ESC, ESC_END is sent instead; if the ESC byte occurs in the data, the two-byte sequence ESC, ESC_ESC is sent. Variants of the protocol may begin, as well as end, packets with END. SLIP requires a serial port configuration of 8 data bits, no parity, and either EIA hardware flow control or CLOCAL mode (3-wire null-modem) UART operation settings. SLIP does not provide error detection, relying on upper-layer protocols for this. Therefore, SLIP on its own is not satisfactory over an error-prone dial-up connection. It is, however, still useful for testing operating systems' response capabilities under load (by looking at flood-ping statistics). SLIP escape characters were also required on some modem connections to escape the Hayes command set, thereby allowing binary data to pass through modems that would otherwise recognize some characters as commands. CSLIP: A version of SLIP with header compression is called Compressed SLIP (CSLIP). The compression algorithm used in CSLIP is known as Van Jacobson TCP/IP Header Compression. CSLIP has no effect on the data payload of a packet and is independent of any compression by the serial line modem used for transmission. It reduces the Transmission Control Protocol (TCP) header from twenty bytes to seven bytes. CSLIP has no effect on User Datagram Protocol (UDP) datagrams. History: RFC 1055, a "non-standard" for SLIP, traces its origins to the 3COM UNET TCP/IP implementation from the 1980s. Rick Adams added SLIP to the popular 4.2BSD in 1984 and it "quickly caught on". By the time of the RFC (1988), it is described as "commonly used on dedicated serial links and sometimes for dialup purposes". The last version of FreeBSD to include "slattach" (a command for connecting to SLIP) in the manual database is FreeBSD 7.4, released in 2011. The manual claims that auto-negotiation exists for CSLIP. The FreeBSD version is inherited from 4.3BSD. Linux formerly used the same code base for SLIP and KISS (TNC). The split occurred before the start of kernel git history (Linux-2.6.12-rc2, 2005). The SLIP driver offers a special "6-bit" escaped mode to accommodate modems incapable of handling non-ASCII characters. The Linux slattach command (written independently) also has the ability to auto-detect CSLIP support.
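The byte-stuffing rules in the Description section are simple enough to sketch in full. The following Python functions are a minimal, illustrative implementation of RFC 1055 framing (the function names are ours; a real driver would also enforce an MTU and may emit the optional leading END byte):

```python
# Minimal SLIP framer/deframer following the byte-stuffing rules of RFC 1055.
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(datagram: bytes) -> bytes:
    out = bytearray()
    for b in datagram:
        if b == END:
            out += bytes([ESC, ESC_END])   # escape a literal END byte
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])   # escape a literal ESC byte
        else:
            out.append(b)
    out.append(END)                        # terminate the frame
    return bytes(out)

def slip_decode(stream: bytes) -> list:
    """Split a received byte stream back into datagrams."""
    datagrams, current, escaped = [], bytearray(), False
    for b in stream:
        if escaped:
            # Undo the two-byte escape sequences; pass anything else through.
            current.append({ESC_END: END, ESC_ESC: ESC}.get(b, b))
            escaped = False
        elif b == ESC:
            escaped = True
        elif b == END:
            if current:                    # ignore empty frames (back-to-back ENDs)
                datagrams.append(bytes(current))
                current = bytearray()
        else:
            current.append(b)
    return datagrams

# Round-trip check: a payload containing both reserved bytes survives framing.
frame = slip_encode(b"\xc0data\xdb")
assert slip_decode(frame) == [b"\xc0data\xdb"]
```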
**XMLGUI** XMLGUI: XMLGUI is a KDE framework for designing the user interface of an application using XML, built around the idea of actions. In this framework, the programmer designs the various actions that their application can implement, with several actions defined for the programmer by the KDE framework, such as opening a file or closing the application. Each action can be associated with various data, including icons, explanatory text, and tooltips. XMLGUI: The interesting part of this design is that the actions are not inserted into the menus or toolbars by the programmer. Instead, the programmer supplies an XML file which describes the layout of the menu bar and toolbar. Using this system, it is possible for the user to redesign the user interface of an application without needing to touch the source code of the program in question. XMLGUI: In addition, XMLGUI is useful for the KParts component programming interface for KDE, as an application can easily integrate the GUI of a KPart into its own GUI. The Konqueror file manager is the canonical example of this feature. The current version is KXMLGUI, part of KDE Frameworks. Other projects: The name is somewhat generic. The Beryl XML GUI was formerly named xmlgui, and there are a dozen other XML-oriented GUI libraries with the same project name. The KDE XMLGUI is one in a long series of projects that have not managed to pin down the term for the resulting programming base.
**Power walking** Power walking: Power walking or speed walking is the act of walking with a speed at the upper end of the natural range for the walking gait, typically 7 to 9 km/h (4.5 to 5.5 mph). To qualify as power walking as opposed to jogging or running, at least one foot must be in contact with the ground at all times (see walking for a formal definition). History and technique: In 1999, the Berlin Marathon included a Power Walking division. Power walking is often confused with racewalking. History and technique: Power walking technique involves the following: the walker must walk in a straight line; the walker must use an alternating movement of feet and arms; the walker must keep one foot in permanent contact with the ground; the leading leg must be bent; each advancing foot strike must be heel to toe at all times; the walker must walk without an exaggerated swivel of the hips; and the arms are extended from the elbows and swing back. Competitions and world records: Competitions are held for power walking, with world records held in categories including 5 km, 10 km, half marathon, 30 km, marathon, and multiday distances. Health and fitness: Power walking has been recommended by health experts such as Kenneth H. Cooper as an alternative to jogging for a low-to-moderate exercise regime, for instance 60–80% of maximum heart rate (HRmax). At the upper range, walking and jogging are almost equally efficient, and the walking gait gives significantly less impact to the joints. Health and fitness: Early bodybuilding champion Steve Reeves was an early advocate and wrote the book Powerwalking about his experiences with it and its health benefits. In a 2021 study, patients who had undergone coronary angioplasty were introduced to power walking based on their ejection fraction, VO2 max calculation, heart rate monitoring and pedometer counts. Participants in the power walking group benefited significantly in quality of life and various physiological parameters. Physiologically, a normal adult walking at a speed of 4–6 km/h has the least aerobic requirement and low exercise intensity. Running is preferred over walking at a speed equal to or greater than 8 km/h, since running at a higher speed consumes less oxygen than walking. When running is a significant problem, particularly in patients recovering from coronary angioplasty with or without stents, power walking was recommended. By power walking at 6–8 km/h, patients can achieve the benefits of running, i.e., significant improvement in VO2 max (maximal aerobic capacity). Put simply, power walking is walking augmented with speed. Sources: Reeves, Steve. (1982) Power Walking, Bobbs-Merrill.
**Anterior cerebral artery** Anterior cerebral artery: The anterior cerebral artery (ACA) is one of a pair of cerebral arteries that supplies oxygenated blood to most midline portions of the frontal lobes and superior medial parietal lobes of the brain. The two anterior cerebral arteries arise from the internal carotid artery and are part of the circle of Willis. The left and right anterior cerebral arteries are connected by the anterior communicating artery. Anterior cerebral artery: Anterior cerebral artery syndrome refers to symptoms that follow a stroke occurring in the area normally supplied by one of the arteries. It is characterized by weakness and sensory loss in the lower leg and foot opposite to the lesion, and by behavioral changes. Structure: The anterior cerebral artery is divided into 5 segments. Its smaller branches, the callosal (supracallosal) arteries, are considered to be the A4 and A5 segments. Structure: A1 originates from the internal carotid artery and extends to the anterior communicating artery (AComm). The anteromedial central (medial lenticulostriate) arteries arise from this segment as well as from the AComm; they irrigate the caudate nucleus and the anterior limb of the internal capsule. A2 extends from the AComm to the bifurcation forming the pericallosal and callosomarginal arteries. The recurrent artery of Heubner (distal medial striate artery), which irrigates the internal capsule, usually arises at the beginning of this segment near the AComm. Two branches arise from this segment: the orbitofrontal artery (medial frontal basal), which arises a small distance away from the AComm, and the frontopolar artery (polar frontal), which arises after the orbitofrontal, close to the curvature of A2 over the corpus callosum; it can also originate from the callosomarginal. Structure: A3, also termed the pericallosal artery, is one of the (or the only) main terminal branches of the ACA, which extends posteriorly in the pericallosal sulcus to form the internal parietal arteries (superior, inferior) and the precuneal artery. This artery may form an anastomosis with the posterior cerebral artery. Structure: The callosomarginal artery is a commonly present terminal branch of the ACA, which bifurcates from the pericallosal artery. This artery in turn branches into the medial frontal arteries (anterior, intermediate, posterior) and the paracentral artery, with the cingulate branches arising throughout its length. Depending on anatomical variation, the callosomarginal artery may not be discrete or may not be visible. In the latter case, the branches mentioned will originate from the pericallosal artery. In a study of 76 hemispheres, the artery was present in only 60% of the cases. Angiography studies cite that the vessel can be seen 67% or 50% of the time. Structure: Development. The anterior cerebral artery develops from a primitive anterior division of the internal carotid artery that initially supplies the optic and olfactory regions. This anterior division, which appears at the twenty-eighth day of development, also forms the middle cerebral artery and the anterior choroidal artery. The anterior cerebral arteries grow toward each other and form the anterior communicating artery at the 21–24 mm stage of the embryo. Structure: Variation. The anterior cerebral artery shows considerable variation. In a study made using MRA, the most common variation was an underdeveloped A1 segment (5.6%), followed by the presence of an extra A2 segment (3%). In 2% of cases there was only one A2 segment.
Function: The anterior cerebral artery supplies a part of the frontal lobe, specifically its medial surface and the upper border. It also supplies the front four-fifths of the corpus callosum, and provides blood to deep structures such as the anterior limb of the internal capsule, part of the caudate nucleus, and the anterior part of the globus pallidus. Clinical significance: Occlusion. Strokes that occur in a part of the artery proximal to the anterior communicating artery usually do not produce many symptoms because of collateral circulation. If a blockage occurs in the A2 segment or later, the following signs and symptoms may be noted: paralysis or weakness of the foot and leg on the opposite side, due to involvement of the leg area of the motor cortex; cortical sensory loss in the opposite foot and leg; gait apraxia (impairment of gait and stance); abulia, akinetic mutism, slowness and lack of spontaneity; urinary incontinence, which usually occurs with bilateral damage in the acute phase; and frontal cortical release reflexes, such as a contralateral grasp reflex, sucking reflex, and paratonic rigidity.
**Alternansucrase** Alternansucrase: In enzymology, an alternansucrase (EC 2.4.1.140) is an enzyme that catalyzes a chemical reaction that transfers an alpha-D-glucosyl residue from sucrose alternately to the 6- and 3-positions of the non-reducing terminal residue of an alpha-D-glucan, thereby creating a glucan with alternating alpha-1,6- and alpha-1,3-bonds. The name "alternan" was coined in 1982 (Cote & Robyt) for the glucan, based on its alternating linkage structure. Alternansucrase: This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is sucrose:1,6(1,3)-alpha-D-glucan 6(3)-alpha-D-glucosyltransferase. Other names in common use include sucrose-1,6(3)-alpha-glucan 6(3)-alpha-glucosyltransferase and sucrose:1,6-, 1,3-alpha-D-glucan 3-alpha- and 6-alpha-D-glucosyltransferase.
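Schematically, each catalytic cycle releases fructose and lengthens the acceptor glucan by one residue, the new bond alternating between alpha-1,6 and alpha-1,3 (a restatement of the reaction described above):

$$\text{sucrose} + (\text{glucan})_{n} \longrightarrow \text{D-fructose} + (\text{glucan})_{n+1}$$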
**TD-1 RNA motif** TD-1 RNA motif: The TD-1 RNA motif is a conserved RNA structure found only in the species Treponema denticola, at least among bacteria whose genomes were sequenced in 2007 when the RNA motif was identified. The T. denticola genome contains 28 predicted TD-1 RNAs, and all but two of these are positioned such that they are likely to be in the 5' UTR of the downstream gene. This arrangement suggests that TD-1 RNAs likely correspond to cis-regulatory elements. However, due to the variety of genes apparently regulated by TD-1 RNAs, no specific hypothesis as to its function was suggested. TD-1 RNA motif: The TD-1 RNA's secondary structure is supported by covariation (see secondary structure prediction), but there are an unusual number of stems containing runs of adenosines that base pair with coordinate runs of uridines. Seven TD-1 RNAs overlap predicted representatives of the TD-2 RNA motif, but it is unknown whether these two motifs can somehow be merged.
**E (theorem prover)** E (theorem prover): E is a high-performance theorem prover for full first-order logic with equality. It is based on the equational superposition calculus and uses a purely equational paradigm. It has been integrated into other theorem provers and it has been among the best-placed systems in several theorem proving competitions. E is developed by Stephan Schulz, originally in the Automated Reasoning Group at TU Munich, now at Baden-Württemberg Cooperative State University Stuttgart. System: The system is based on the equational superposition calculus. In contrast to most other current provers, the implementation actually uses a purely equational paradigm, and simulates non-equational inferences via appropriate equality inferences. Significant innovations include shared term rewriting (where many possible equational simplifications are carried out in a single operation), several efficient term indexing data structures for speeding up inferences, advanced inference literal selection strategies, and various uses of machine learning techniques to improve the search behaviour. Since version 2.0, E supports many-sorted logic. E is implemented in C and portable to most UNIX variants and the Cygwin environment. It is available under the GNU GPL. Competitions: The prover has consistently performed well in the CADE ATP System Competition, winning the CNF/MIX category in 2000 and finishing among the top systems ever since. In 2008 it came in second place. In 2009 it won second place in the FOF (full first order logic) and UEQ (unit equational logic) categories and third place (after two versions of Vampire) in CNF (clausal logic). It repeated the performance in FOF and CNF in 2010, and won a special award as "overall best" system. In the 2011 CASC-23 E won the CNF division and achieved second places in UEQ and LTB. Applications: E has been integrated into several other theorem provers. It is, with Vampire, SPASS, CVC4, and Z3, at the core of Isabelle's Sledgehammer strategy. E also is the reasoning engine in SInE and LEO-II and used as the clausification system for iProver.Applications of E include reasoning on large ontologies, software verification, and software certification.
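As a concrete illustration of how E is typically driven in practice, the sketch below writes a tiny first-order problem in TPTP syntax and invokes the prover from Python. It assumes an `eprover` binary on the PATH; exact flags and output formatting vary between E versions, but a successful proof is reported with an "SZS status Theorem" line.

```python
# A minimal sketch of driving E from Python (assumes `eprover` is installed).
import subprocess
import tempfile

# A classic syllogism in TPTP first-order form (fof) syntax.
PROBLEM = """
fof(socrates_is_human, axiom, human(socrates)).
fof(humans_are_mortal, axiom, ![X]: (human(X) => mortal(X))).
fof(goal, conjecture, mortal(socrates)).
"""

with tempfile.NamedTemporaryFile("w", suffix=".p", delete=False) as f:
    f.write(PROBLEM)
    problem_path = f.name

# --auto lets E choose its own search strategy; on success the output
# should contain a line like "# SZS status Theorem".
result = subprocess.run(["eprover", "--auto", problem_path],
                        capture_output=True, text=True)
print(result.stdout)
```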
**Coloration evidence for natural selection** Coloration evidence for natural selection: Animal coloration provided important early evidence for evolution by natural selection, at a time when little direct evidence was available. Three major functions of coloration were discovered in the second half of the 19th century, and subsequently used as evidence of selection: camouflage (protective coloration); mimicry, both Batesian and Müllerian; and aposematism. Coloration evidence for natural selection: Charles Darwin's On the Origin of Species was published in 1859, arguing from circumstantial evidence that selection by human breeders could produce change, and that since there was clearly a struggle for existence, that natural selection must be taking place. But he lacked an explanation either for genetic variation or for heredity, both essential to the theory. Many alternative theories were accordingly considered by biologists, threatening to undermine Darwinian evolution. Coloration evidence for natural selection: Some of the first evidence was provided by Darwin's contemporaries, the naturalists Henry Walter Bates and Fritz Müller. They described forms of mimicry that now carry their names, based on their observations of tropical butterflies. These highly specific patterns of coloration are readily explained by natural selection, since predators such as birds which hunt by sight will more often catch and kill insects that are less good mimics of distasteful models than those that are better mimics; but the patterns are otherwise hard to explain. Darwinists such as Alfred Russel Wallace and Edward Bagnall Poulton, and in the 20th century Hugh Cott and Bernard Kettlewell, sought evidence that natural selection was taking place. Wallace noted that snow camouflage, especially plumage and pelage that changed with the seasons, suggested an obvious explanation as an adaptation for concealment. Poulton's 1890 book, The Colours of Animals, written during Darwinism's lowest ebb, used all the forms of coloration to argue the case for natural selection. Cott described many kinds of camouflage, and in particular his drawings of coincident disruptive coloration in frogs convinced other biologists that these deceptive markings were products of natural selection. Kettlewell experimented on peppered moth evolution, showing that the species had adapted as pollution changed the environment; this provided compelling evidence of Darwinian evolution. Context: Charles Darwin published On the Origin of Species in 1859, arguing that evolution in nature must be driven by natural selection, just as breeds of domestic animals and cultivars of crop plants were driven by artificial selection. Context: Darwin's theory radically altered popular and scientific opinion about the development of life. However, he lacked evidence and explanations for some critical components of the evolutionary process. He could not explain the source of variation in traits within a species, and did not have a mechanism of heredity that could pass traits faithfully from one generation to the next. This made his theory vulnerable; alternative theories were being explored during the eclipse of Darwinism; and so Darwinian field naturalists like Wallace, Bates and Müller looked for clear evidence that natural selection actually occurred. Animal coloration, readily observable, soon provided strong and independent lines of evidence, from camouflage, mimicry and aposematism, that natural selection was indeed at work. The historian of science Peter J. 
Bowler wrote that Darwin's theory was also extended to the broader topics of protective resemblances and mimicry, and this was its greatest triumph in explaining adaptations. Camouflage: Snow camouflage In his 1889 book Darwinism, the naturalist Alfred Russel Wallace considered the white coloration of Arctic animals. He recorded that the Arctic fox, Arctic hare, ermine and ptarmigan change their colour seasonally, and gave "the obvious explanation", that it was for concealment. The modern ornithologist W. L. N. Tickell, reviewing proposed explanations of white plumage in birds, writes that in the ptarmigan "it is difficult to escape the conclusion that cryptic brown summer plumage becomes a liability in snow, and white plumage is therefore another cryptic adaptation." All the same, he notes, "in spite of winter plumage, many Ptarmigan in NE Iceland are killed by Gyrfalcons throughout the winter."More recently, decreasing snow cover in Poland, caused by global warming, is reflected in a reduced percentage of white-coated weasels that become white in winter. Days with snow cover halved between 1997 and 2007, and as few as 20 percent of the weasels had white winter coats. This was shown to be a result of natural selection by predators making use of camouflage mismatch. Camouflage: Coincident disruptive coloration In the words of camouflage researchers Innes Cuthill and A. Székely, the English zoologist and camouflage expert Hugh Cott's 1940 book Adaptive Coloration in Animals provided "persuasive arguments for the survival value of coloration, and for adaptation in general, at a time when natural selection was far from universally accepted within evolutionary biology." In particular, they argue, "Coincident Disruptive Coloration" (one of Cott's categories) "made Cott's drawings the most compelling evidence for natural selection enhancing survival through disruptive camouflage." Cott explained, while discussing "a little frog known as Megalixalus fornasinii" in his chapter on coincident disruptive coloration, that "it is only when the pattern is considered in relation to the frog's normal attitude of rest that its remarkable nature becomes apparent... The attitude and very striking colour-scheme thus combine to produce an extraordinary effect, whose deceptive appearance depends upon the breaking up of the entire form into two strongly contrasted areas of brown and white. Considered separately, neither part resembles part of a frog. Together in nature the white configuration alone is conspicuous. This stands out and distracts the observer's attention from the true form and contour of the body and appendages on which it is superimposed". Cott concluded that the effect was concealment "so long as the false configuration is recognized in preference to the real one". Such patterns embody, as Cott stressed, considerable precision as the markings must line up accurately for the disguise to work. Cott's description and in particular his drawings convinced biologists that the markings, and hence the camouflage, must have survival value (rather than occurring by chance); and further, as Cuthill and Székely indicate, that the bodies of animals that have such patterns must indeed have been shaped by natural selection. Camouflage: Industrial melanism Between 1953 and 1956, the geneticist Bernard Kettlewell experimented on peppered moth evolution. 
He presented results showing that in a polluted urban wood with dark tree trunks, dark moths survived better than pale ones, causing industrial melanism, whereas in a clean rural wood with paler trunks, pale moths survived better than dark ones. The implication was that survival was caused by camouflage against suitable backgrounds, where predators hunting by sight (insect-eating birds, such as the great tits used in the experiment) selectively caught and killed the less well-camouflaged moths. The results were intensely controversial, and from 2001 Michael Majerus carefully repeated the experiment. The results were published posthumously in 2012, vindicating Kettlewell's work as "the most direct evidence", and "one of the clearest and most easily understood examples of Darwinian evolution in action". Mimicry: Batesian Batesian mimicry, named for the 19th century naturalist Henry Walter Bates who first noted the effect in 1861, "provides numerous excellent examples of natural selection" at work. The evolutionary entomologist James Mallet noted that mimicry was "arguably the oldest Darwinian theory not attributable to Darwin." Inspired by On the Origin of Species, Bates realized that unrelated Amazonian butterflies resembled each other when they lived in the same areas, but had different coloration in different locations in the Amazon, something that could only have been caused by adaptation. Mimicry: Müllerian Müllerian mimicry, too, in which two or more distasteful species that share one or more predators have come to mimic each other's warning signals, was clearly adaptive; Fritz Müller described the effect in 1879, in an account notable for being the first use of a mathematical argument in evolutionary ecology to show how powerful the effect of natural selection would be. Aposematism: In 1867, in a letter to Darwin, Wallace described warning coloration. The evolutionary zoologist James Mallet notes that this discovery "rather illogically" followed rather than preceded the accounts of Batesian and Müllerian mimicry, which both rely on the existence and effectiveness of warning coloration. The conspicuous colours and patterns of animals with strong defences such as toxins are advertised to predators, signalling honestly that the animal is not worth attacking. This directly increases the reproductive fitness of the potential prey, providing a strong selective advantage. The existence of unequivocal warning coloration is therefore clear evidence of natural selection at work. Defence of Darwinism: Edward Bagnall Poulton's 1890 book, The Colours of Animals, renamed Wallace's concept of warning colours "aposematic" coloration, as well as supporting Darwin's then unpopular theories of natural selection and sexual selection. Poulton's explanations of coloration are emphatically Darwinian. For example, on aposematic coloration he wrote that At first sight the existence of this group seems to be a difficulty in the way of the general applicability of the theory of natural selection. Warning Colours appear to benefit the would-be enemies rather than the conspicuous forms themselves, and the origin and growth of a character intended solely for the advantage of some other species cannot be explained by the theory of natural selection. But the conspicuous animal is greatly benefited by its Warning Colours. 
If it resembled its surroundings like the members of the other class, it would be liable to a great deal of accidental or experimental tasting, and there would be nothing about it to impress the memory of an enemy, and thus to prevent the continual destruction of individuals. The object of Warning Colours is to assist the education of enemies, enabling them to easily learn and remember the animals which are to be avoided. The great advantage conferred upon the conspicuous species is obvious when it is remembered that such an easy and successful education means an education involving only a small sacrifice of life." Poulton summed up his allegiance to Darwinism as an explanation of Batesian mimicry in one sentence: "Every step in the gradually increasing change of the mimicking in the direction of specially protected form, would have been an advantage in the struggle for existence".The historian of science Peter J. Bowler commented that Poulton used his book to complain about experimentalists' lack of attention to what field naturalists (like Wallace, Bates, and Poulton) could readily see were adaptive features. Bowler added that "The fact that the adaptive significance of coloration was (sic) widely challenged indicates just how far anti-Darwinian feeling had developed. Only field naturalists such as Poulton refused to give in, convinced that their observations showed the validity of selection, whatever the theoretical problems."
**Wingman (social)** Wingman (social): Wingman (or wingmate) is a role that a person may take when a friend needs support with approaching potential romantic partners. People who have a wingman can have more than one wingman. A wingman is someone who is on the "inside" and is used to help someone with intimate relationships. In general, one person's wingman will help them avoid attention from undesirable prospective partners or attract desirable ones, or both. Origin: The term originated in combat aviation in various international military aviation communities shortly before and after the advent of fighter jets. Pilots flying in formation, especially when in combat training or in actual aerial combat, refer to the pilot immediately next to them (traditionally on their right, sometimes on either side) as their "wingman" (the man on their wing). In actual aerial combat pilots are often trained to attack and defend in pairs watching out for each other, thereby making the term even more clearly demonstrated. Origin: The term is also very commonly used in combat aviation on longer range aviation patrols which are often carried out by only two fighter planes, sometimes manned by only two pilots depending on the type of aircraft. On these two plane patrols (Air Force) or "watches" (Naval Aviators flying protective patterns around surface vessels on timed intervals) referring to the pilot that an aviator is teamed with on patrol as their "wingman" is very common. In sociology: In 2007, sociologist David Grazian interviewed male students at the University of Pennsylvania on their dating habits, and postulated that the wingman role was part of collective "girl hunt" rituals that allow young men to collectively perform masculinity. Grazian writes: "the wingman serves multiple purposes: he provides validation of a leading man's trustworthiness, eases the interaction between a single male friend and a larger group of women, serves as a source of distraction for the friend or friends of a more desirable target of affection, can be called on to confirm the wild (and frequently misleading) claims of his partner and, perhaps most important, helps motivate his friends by building up their confidence. Indeed, men describe the role of the wingman in terms of loyalty, personal responsibility and dependability, traits commonly associated with masculinity…" Popular usage: Popular media and informal discourse describe a situation in which a pair of friends are socialising together, approaching other pairs and groups while avoiding the awkwardness or perceived aggression of acting alone. The wingman strikes up conversation and proposes group social activities, providing their friend with a pleasant and unthreatening social pretext to chat or flirt with a particular attractive person. The wingman can also keep their friend safe by preventing them from drinking excessively or behaving in a reckless or socially embarrassing way.The wingman can occupy the attention of any less attractive people in the other group, allowing their friend to express an interest in the most attractive group member. Popular usage: Despite the name, wingmen are not exclusively male; women can also act as wingmen. Wingmen also do not necessarily share their friend's sexual orientation; gay people can be wingmen for straight friends, and vice versa.Certain sources describe the wingman role as a part of pickup artistry, with women referred to as "targets" and men as "pilots". 
Others highlight the ability of a wingman (of any gender) to step in and rescue their female friend from unwanted persistent sexual advances.American entrepreneur Thomas Edwards founded a dating service called The Professional Wingman, in which he performs the wingman role for socially reticent clients, coaching them on the social skills needed to approach potential romantic partners in bar settings. Edwards emphasises that he is not a pick-up artist. In fiction and popular culture: The term 'wingman' was popularised by its use in the 1986 romantic military action drama film Top Gun, in which US Navy pilots are shown in a bar pursuing women in pairs, similarly to their in-flight tactics. Nick 'Goose' Bradshaw (Anthony Edwards) is the best friend and wingman to Pete 'Maverick' Mitchell (Tom Cruise). In a much-quoted line from the end of the film, Maverick's former archrival, Tom 'Iceman' Kazansky (Val Kilmer), shows his respect to Maverick when he says, "You can be my wingman anytime."Other characters claimed as wingmen in literature, film and popular culture include: Horatio, Hamlet's best friend in William Shakespeare's play. In fiction and popular culture: Cyrano de Bergerac, the witty but ugly protagonist of Edmond Rostand's 1897 play, who helps his handsome but foolish friend Christian to woo Roxane, the woman with whom Cyrano himself is hopelessly in love. Dr John Watson, the trusted friend and colleague of Sherlock Holmes. Samwise Gamgee, the loyal companion of Frodo Baggins in JRR Tolkien's The Lord of the Rings. Bud Baxter (Jack Lemmon) in Billy Wilder's 1960 film The Apartment, who loans his New York apartment to four of his bosses as a place for their extramarital affairs. Sharon and Susan (Hayley Mills), twin sisters raised separately, who meet at summer camp and decide to matchmake their divorced parents in the film series The Parent Trap. Mr Spock, the endlessly logical Vulcan second-in-command to Captain Kirk in Star Trek. Cameron Frye (Alan Ruck), who lends his father's prized Ferrari to his best friend Ferris Bueller (Matthew Broderick) in the film Ferris Bueller's Day Off. Lisa (Kelly LeBrock), a computer-generated 'ideal woman' in the film Weird Science, who teaches horny nerds Gary Wallace (Anthony Michael Hall) and Wyatt Donnelly (Ilan Mitchell-Smith) how to approach girls. Trent (Vince Vaughn) in Doug Liman's 1996 film Swingers, who tries to get his heartbroken friend Mike (Jon Favreau) back in the dating game. Roger Swanson (Campbell Scott) in the film Roger Dodger, an obnoxious pick-up artist who coaches his nephew Nick (Jesse Eisenberg) on how to seduce women. Jack (Thomas Haden Church) in Alexander Payne's film Sideways, who helps his downtrodden friend Miles (Paul Giamatti) to pursue the beautiful Maya (Virginia Madsen). Officers Slater and Michaels (Bill Hader and Seth Rogen) in the film Superbad, who after a night of carousing with nerdy teen Fogell (Christopher Mintz-Plasse), 'fake arrest' him to make him appear cool to his crush. Barney Stinson (Neil Patrick Harris) in the TV sitcom How I Met Your Mother, who seeks to pass on his wealth of pick-up artist knowledge to his friend Ted Mosby (Josh Radnor). Magic Carpet, the sentient carpet from the Disney film Aladdin. It helps Aladdin on his first date with Princess Jasmine, famously during the song A Whole New World. In the movie Captain America: The First Avenger; Sgt. James "Bucky" Barnes acted as Steve Rogers' wingman, even after Steve was augmented to become the iconic Captain America. 
During the 2018 Formula One World Championship, the Finnish driver Valtteri Bottas of the Mercedes AMG Petronas team was given the title of "wingman" by the media.
**Porphyridium cruentum** Porphyridium cruentum: Porphyridium cruentum is a species of red algae in the family Porphyridiophyceae. Porphyridium cruentum: The microalga Porphyridium sp. is a potential source of several products, such as fatty acids, lipids, cell-wall polysaccharides and pigments. The polysaccharides of this species are sulphated, and their structure gives rise to some unique properties that could lead to a broad range of industrial and pharmaceutical applications. Additionally, carbohydrate contents of up to 57% have been reported in P. cruentum biomass. Thus, the combined amount of carbohydrates in the biomass and exopolysaccharides of this microalga could potentially provide a source for biofuels and pharmaceuticals. The alga also contains phycoerythrin, which can be extracted by cell lysis followed by chromatography. The genus Porphyridium has been classified among blue-green, red, and green algae.
**Poncirin** Poncirin: Poncirin is the 7-O-neohesperidoside of isosakuranetin. Poncirin can be extracted from trifoliate orange (Poncirus trifoliata).
**Vibratory shear-enhanced process** Vibratory shear-enhanced process: Vibratory shear enhanced process (VSEP) is a membrane separation technology platform invented in 1987 and patented in 1989 by Dr. J. Brad Culkin. VSEP's vibration system was designed to prevent membrane fouling, or the build-up of solid particles on the surface of the membrane. VSEP systems have been applied in a variety of industrial environments. History and technology development: After earning his PhD in chemical engineering from Northwestern University, Dr. Culkin spent his early professional career with Dorr–Oliver, Inc., a pioneering company in the area of separation processes. Culkin contributed to six Dorr–Oliver patent applications in 1985 and 1986. While at Dorr–Oliver, Dr. Culkin was exposed to the advantages of membrane separation technology as well as its failings. The membrane's Achilles' heel, Culkin decided, was fouling. Concurrent with his membrane work, Culkin was helping to develop a mechanically resonating loudspeaker with the founders of Velodyne Acoustics. Culkin married these two areas of expertise and struck out to overcome membrane fouling through the use of vibration. History and technology development: The first VSEP prototype Culkin developed was a literal combination of loudspeaker and membrane technology. Principle of operation: A VSEP filter uses oscillatory vibration to create high shear at the surface of the filter membrane. This high shear force significantly improves the filter's resistance to fouling, thereby enabling high throughputs and minimizing reject volumes. A VSEP feed stream is split into two products: a permeate stream with little or no solids and a concentrate stream with a solids concentration much higher than that of the original feed stream. Industrial applications: VSEP has been applied in a variety of industrial application areas, including pulp and paper, chemical processing, landfill leachate, oil and gas, RO reject, and a variety of industrial wastewaters. Awards: A VSEP system was recognized in 2009 as part of the WateReuse Foundation's Desalination Project of the Year. The system was installed to minimize the brine from an electrodialysis reversal (EDR) system.
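The permeate/concentrate split described above obeys a simple steady-state solids balance, which is enough to estimate concentrate strength from feed concentration and recovery. The sketch below is generic filtration arithmetic for illustration, not VSEP-specific data; the function and parameter names are ours.

```python
# Illustrative solids mass balance for a membrane separation producing a
# permeate and a concentrate stream (generic arithmetic, not VSEP data).

def concentrate_solids(feed_flow, feed_solids, recovery, permeate_solids=0.0):
    """Solids fraction in the concentrate, from a steady-state balance:
    feed_flow * feed_solids = permeate_flow * permeate_solids
                            + concentrate_flow * concentrate_solids
    `recovery` is the fraction of the feed that leaves as permeate."""
    permeate_flow = recovery * feed_flow
    concentrate_flow = feed_flow - permeate_flow
    solids_in = feed_flow * feed_solids - permeate_flow * permeate_solids
    return solids_in / concentrate_flow

# 90% recovery of a 2% solids feed gives roughly a tenfold concentration.
print(concentrate_solids(feed_flow=100.0, feed_solids=0.02, recovery=0.9))  # ~0.2
```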
**Rage comic** Rage comic: A rage comic is a short cartoon strip using a growing set of pre-made cartoon faces, or rage faces, which usually express rage or some other simple emotion or activity. They are usually crudely drawn in Microsoft Paint or other simple drawing programs, and were most popular in the early 2010s. These webcomics have spread much in the same way that Internet memes do, and several memes have originated in this medium. They have been characterized by Ars Technica as an "accepted and standardized form of online communication." The popularity of rage comics has been attributed to their use as vehicles for humorizing shared experiences. The range of expression and standardized, easily identifiable faces has allowed uses such as teaching English as a foreign language.In the early 2020s, rage comics were revived as "trollge incidents". Trollge incidents are a series of memes revolving around Carlos Ramirez's Trollface character from Rage Comics but with a much darker and introspective tone. These memes usually take the form of "Trollge incidents", which are stories narrated in steps and taking a darker tone as the story goes on. There are many popular trollge incidents, such as the "Betrayal incident" and "Nature's corruption incident". History: Although used on numerous websites such as Reddit, Cheezburger, ESS.MX, Ragestache, and 9GAG, the source of the rage comic has largely been attributed to 4chan in mid-2008. The first rage comic was posted to the 4chan /b/ "Random" board in 2008. It was a simple 4-panel strip showing the author's anger about getting "Poseidon's kiss" while on the toilet, with the final panel featuring a zoomed-in face, known as Rage Guy, saying "FFFFFFFUUUUUUUUUUUU-". It was quickly reposted and modified, with other users creating new scenarios and characters.Google Trends data shows that the term "rage guy" peaked in April 2009 while the terms "rage comics" and "troll face" both peaked in March 2009. History: Trollface One of the most widely used rage comic faces is the Trollface, drawn by Oakland artist Carlos Ramirez in 2008. Originally posted in a comic to his DeviantArt account Whynne about Internet trolling on 4chan, the trollface is a recognizable image of Internet memes and culture. Ramirez has used his creation, registered with the United States Copyright Office in 2009, to gain over $100,000 in licensing fees, settlements, and other payouts. The video game Meme Run for Nintendo's Wii U console was taken down for having the trollface as the main character.
**Linear video editing** Linear video editing: Linear video editing is a video editing post-production process of selecting, arranging and modifying images and sound in a predetermined, ordered sequence. Regardless of whether the material was captured by a video camera or tapeless camcorder, or recorded in a television studio on a video tape recorder (VTR), the content must be accessed sequentially. For the most part, video editing software has replaced linear editing. In the past, film editing was done in linear fashion, where film reels were literally cut into long strips divided by takes and scenes, and then glued or taped back together to create a logical sequence of film. Linear video editing is more time-consuming, highly specialised and tedious work. Still, it remains relevant for several reasons: the method is simple and inexpensive, and it is mandatory for some jobs; for example, if only two sections of video are to be joined together in sequence, it is often the quickest and easiest way. Moreover, if video editors learn linear editing skills it increases their knowledge as well as their versatility; according to many professionals, editors who learn linear editing skills first tend to become proficient all-round editors. Until the advent of computer-based random access non-linear editing systems (NLE) in the early 1990s, linear video editing was simply called video editing. History: Live television is still essentially produced in the same manner as it was in the 1950s, although transformed by modern technical advances. Before videotape, the only way of airing the same shows again was by filming them with a kinescope, essentially a video monitor paired with a movie camera. However, kinescopes (the films of television shows) suffered from various sorts of picture degradation, from image distortion and apparent scan lines to artifacts in contrast and loss of detail. Kinescopes had to be processed and printed in a film laboratory, making them unreliable for broadcasts delayed for different time zones. History: The primary motivation for the development of video tape was as a short- or long-term archival medium. Only after a series of technical advances spanning decades did video tape editing finally become a viable production tool, on a par with film editing. Early technology: The first widely accepted video tape in the United States was two-inch quadruplex videotape, which travelled at 15 inches per second. To gain enough head-to-tape speed, four video recording and playback heads were spun on a head wheel across most of the two-inch width of the tape. (Audio and synchronization tracks were recorded along the sides of the tape with stationary heads.) This system was known as "quad" (for "quadruplex") recording. Early technology: The resulting video tracks lay at slightly less than a ninety-degree angle to the edge of the tape (considering the vector addition of the high-speed spinning heads tracing across the 15 inches per second forward motion of the tape). Early technology: Originally, video was edited by visualizing the recorded track with ferrofluid, cutting it with a razor blade or guillotine cutter, and splicing with video tape, in a manner similar to film editing. This was an arduous process and avoided where possible. When it was used, the two pieces of tape to be joined were painted with a solution of extremely fine iron filings suspended in carbon tetrachloride, a toxic and carcinogenic compound.
This "developed" the magnetic tracks, making them visible when viewed through a microscope so that they could be aligned in a splicer designed for this task. The tracks had to be cut during a vertical retrace, without disturbing the odd-field/even-field ordering. The cut also had to be at the same angle that the video tracks were laid down on the tape. Since the video and audio read heads were several inches apart it was not possible to make a physical edit that would function correctly in both video and audio. The cut was made for video and a portion of audio then re-copied into the correct relationship, the same technique as for editing 16mm film with a combined magnetic audio track. Early technology: The disadvantages of physically editing tapes were many. Some broadcasters decreed that edited tapes could not be reused, in an era when the relatively high cost of the machines and tapes was balanced by the savings involved in being able to wipe and reuse the media. Others, such as the BBC, allowed reuse of spliced tape in certain circumstances as long as it conformed to strict criteria about the number of splices in a given duration, usually a maximum of five splices for every half hour. The process required great skill, and often resulted in edits that would roll (lose sync) and each edit required several minutes to perform, although this was also initially true of the electronic editing that came later. Early technology: In the United States, the 1961-62 Ernie Kovacs ABC specials and Rowan & Martin's Laugh-In were the only TV shows to make extensive use of splice editing of videotape. Introduction of computerized systems: A system for editing Quad tape "by hand" was developed by the 1960s. It was really just a means of synchronizing the playback of two machines so that the signal of the new shot could be "punched in" with a reasonable chance at success. One problem with this and early computer-controlled systems was that the audio track was prone to suffer artifacts (i.e. a short buzzing sound) because the video of the newly recorded shot would record into the side of the audio track. A commercial solution known as "Buzz Off" was used to minimize this effect. Introduction of computerized systems: For more than a decade, computer-controlled Quad editing systems were the standard post-production tool for television. Quad tape involved expensive hardware, time-consuming setup, relatively long rollback times for each edit and showed misalignment as disagreeable "banding" in the video. However, it should be mentioned that Quad tape has a better bandwidth than any smaller-format analogue tape, and properly handled could produce a picture indistinguishable from that of a live camera. Further advancement in technology: When helical scan video recorders became the standard it was no longer possible to physically cut and splice the tape. At this point video editing became a process of using two video tape machines, playing back the source tape (or "raw footage") from one machine and copying just the portions desired on to a second tape (the "edit master"). The bulk of linear editing is done simply, with two machines and an edit controller device to control them. Many video tape machines are capable of controlling a second machine, eliminating the need for an external editing control device. Further advancement in technology: This process is "linear", rather than non-linear editing, as the nature of the tape-to-tape copying requires that all shots be laid out in the final edited order. 
Once a shot is on tape, nothing can be placed ahead of it without overwriting whatever is there already. (Such a replacement is sometimes called an "insert edit".) If absolutely necessary, material can be dubbed by copying the edited content onto another tape; however, as each copy generation degrades the image cumulatively, this is not desirable. Further advancement in technology: One drawback of early video editing technique was that it was impractical to produce a rough cut for presentation to an executive producer. Since executive producers were rarely familiar enough with the material to be able to visualise the finished product from inspection of an edit decision list (EDL), they were deprived of the opportunity to voice their opinions at a time when those opinions could be easily acted upon. Thus, particularly in documentary television, video was resisted for quite a long time. Peak usage: Video editing reached its full potential in the late 1970s, when minicomputer-based edit controllers and the associated communications protocols were developed. These could orchestrate an edit based on an EDL, using timecode to synchronize multiple tape machines and auxiliary devices via a 9-pin protocol. The most popular and widely used computer edit systems came from Sony, Ampex and the venerable CMX. Systems such as these were expensive, especially when considering auxiliary equipment like VTRs, video switchers and character generators (CG), and were usually limited to high-end post-production facilities. Peak usage: Jack Calaway of Calaway Engineering was the first to produce a lower-cost, PC-based, "CMX-style" linear editing system, which greatly expanded the use of linear editing systems throughout the post-production industry. Following suit, other companies, including EMC and Strassner Editing Systems, came out with equally useful competing editing products. Current usage: While computer-based non-linear video editing software has been adopted throughout most of the commercial, film, industrial and consumer video industries, linear video tape editing is still commonplace in television station newsrooms for the production of television news, and in medium-sized production facilities which haven't made the capital investment in newer technologies. News departments often still use linear editing because they can start editing tape and feeds from the field as soon as they are received, since no additional time is spent capturing material as is necessary in non-linear editing; systems able to digitally record and edit simultaneously have only recently become affordable for small operations.
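Since an EDL drives edits entirely by timecode, the arithmetic behind it is worth making concrete. Below is a minimal sketch (illustrative only, not any real edit controller's format) of converting SMPTE-style HH:MM:SS:FF timecode to absolute frame counts and back, assuming 30 fps non-drop-frame video; the example in/out points are hypothetical.

```python
FPS = 30  # assumed frame rate (NTSC non-drop-frame, for simplicity)

def tc_to_frames(tc):
    """Convert 'HH:MM:SS:FF' timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def frames_to_tc(n):
    """Convert an absolute frame count back to 'HH:MM:SS:FF'."""
    ss, ff = divmod(n, FPS)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# Duration of one source event from a hypothetical EDL line:
src_in, src_out = "01:02:10:15", "01:02:25:00"
duration = tc_to_frames(src_out) - tc_to_frames(src_in)
print(duration, frames_to_tc(duration))  # 435 frames -> 00:00:14:15
```

An edit controller does essentially this bookkeeping for every event in the list, then cues both machines so the record deck drops into record exactly at the in-point.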
**Underhook** Underhook: An underhook is a clinch hold that is used in grappling to control the opponent. It is performed from any direction by putting an arm under the opponent's arm, and holding the opponent's midsection or upper body. Having an underhook with one arm is called a single underhook, while having underhooks with both arms is known as double underhooks. The typical response to an underhook is to try to break it, or to establish an overhook. Single underhook: A single underhook can be used as a takedown maneuver. The attacker underhooks one arm of the opponent and extends the underhooking arm partly or mostly across the opponent's back, while using the other hand to pull the opponent's other elbow across the opponent's body, and drives forward into the underhooked side of the opponent. Double underhooks: Double underhooks are considered one of the most dominant positions in the clinch, primarily because they allow great control of the opponent and can be used to perform a takedown or throw. Double underhooks can be used to advance into a bear hug by locking the hands behind the opponent's back and holding the opponent close to the chest. The opponent typically responds to double underhooks with double overhooks, to prevent the attacker from advancing into the bear hug.
**Effects of parasitic worms on the immune system** Effects of parasitic worms on the immune system: The effects of parasitic worms, or helminths, on the immune system are a recently emerging topic of study among immunologists and other biologists. Experiments have involved a wide range of parasites, diseases, and hosts. The effects on humans have been of special interest. The tendency of many parasitic worms to pacify the host's immune response allows them to mollify some diseases, while worsening others. Immune response hypothesis: Mechanisms of immune regulation Extensive research shows that parasitic worms have the ability to deactivate certain immune system cells, leading to a gentler immune response. Often, such a response is beneficial to both parasite and host, according to Graham Rook, a professor of medical microbiology at University College London. This immune "relaxation" is incorporated throughout the immune system, decreasing immune responses against harmless allergens, gut flora, and the body itself. Immune response hypothesis: In the past, helminths were thought to simply suppress T-helper Type 1 (Th1) cells while inducing T-helper Type 2 (Th2) cells. Rook points out that this hypothesis would only explain the regulatory effects of parasitic worms on autoimmune diseases caused by Th1 cells. However, helminths also regulate Th2-caused diseases, such as allergy and asthma. Rook postulates that different parasitic worms suppress different Th types, but always in favor of regulatory T (Treg) cells. Rook explains that these regulatory T cells release interleukins that fight inflammation. In the Journal of Biomedicine and Biotechnology, Osada et al. note that macrophages induced by Treg cells fight not only the parasitic disease, but also resist the immune system's response to allergens and the body. According to Hopkin, the author of a 2009 Parasite Immunology article on asthma and parasitic worms, other immunoregulatory mechanisms are also activated, including mast cells, eosinophils, and cytokines that invoke a strong immunoglobulin E (IgE) response. All of these counter a hyperactive immune response, reduce inflammation throughout the body, and thus lead to less severe autoimmune diseases. Osada et al. state that because parasitic worms may and often do carry allergens themselves, the degree to which they pacify or agitate the immune response against allergens is a balance of their regulating effects and their allergenic components. Therefore, depending on both of these variables, some parasitic worms may worsen allergies. In their Parasite Immunology article on worms and viral infections, Kamal et al. explain why some parasitic worms aggravate the immune response. Because parasitic worms often induce Th2 cells and lead to suppressed Th1 cells, problems arise when Th1 cells are needed. Such cases occur with viral diseases. Several examples of viral infections worsened by parasitic worms are described below. Immune response hypothesis: Evolutionary theory The positive effects of parasitic worms are theorized to be a result of millions of years of evolution, during which humans and human ancestors would have been constantly inhabited by parasitic worms. In the journal EMBO Reports, Rook says that such helminths "are all either things that really do us no harm, or things where the immune system is forced to give in and avoid a fight because it's just a waste of time."
In the journal Immunology, Rook states that, because parasitic worms were almost always present, the human immune system developed a way to treat them that didn't cause tissue damage. The immune system extends this response to its treatment of self-antigens, softening reactions against allergens, the body, and digestive microorganisms. As the worms developed ways of triggering a beneficial immune response, humans came to rely on parasitic interaction to help regulate their immune systems. As developed countries advanced in technology, medicine, and sanitation, parasitic worms were mostly eradicated in those countries, according to Weinstock in the medical journal Gut. Because these events took place very recently on the evolutionary timeline, and humans have progressed much faster technologically than genetically, the human immune system has not yet adapted to the absence of internal worms. This theory attempts to explain the rapid increase in allergies and asthma in the last century in the developed world, as well as the relative absence of autoimmune diseases in the developing world, where parasites are more common. Immune response hypothesis: Comparison with the hygiene hypothesis The hygiene hypothesis postulates that decreasing exposure to pathogens and other microorganisms results in an increase of autoimmune diseases, according to Rook. This theory and the theory that certain parasitic worms pacify the immune response are similar in that both attribute the recent rise of autoimmune diseases to decreased levels of pathogens in developed countries. However, the hygiene hypothesis claims that the absence of pathogenic organisms in general has led to this. In contrast, the parasitic worm theory only considers helminths, and specifically ones found to have a regulating effect. Positive effects: Experimental and some clinical work has demonstrated the protective benefits of helminth therapy against the wide spectrum of age-related diseases promoted by inflammaging. Type 1 diabetes Type 1 diabetes (T1D) is an autoimmune disease in which the immune system destroys the body's pancreatic beta cells. Positive effects: In experiments with mice, infection with parasitic worms or treatment with helminth products generally inhibited the spontaneous development of T1D, according to Anne Cook in the journal Immunology. However, results varied among the different species of parasitic worms. Some helminth products, like a protein of the nematode Acanthocheilonema viteae, didn't have any effect. Another infectious agent, Salmonella typhimurium, was successful even when administered late in the development of T1D. Positive effects: Allergy and asthma According to Hopkin, asthma involves atopic allergy, which in turn involves the release of mediators that induce inflammation. In 2007, Melendez and his associates studied filarial nematodes and ES-62, a protein that the nematodes secrete in their host. They discovered that pure ES-62 prevents the release of allergenic inflammatory mediators in mice, resulting in weaker allergic and asthmatic symptoms. In the Journal of Immunology, Bashir et al. describe their experimental finding that an allergic response against peanuts is inhibited in mice infected with an intestinal parasite. Positive effects: Inflammatory bowel disease Inflammatory bowel disease (IBD) is an autoimmune disease involving inflammation of the intestinal mucosa. Ulcerative colitis (UC) and Crohn's disease (CD) are both types of IBD. In the medical journal Gut, Moreels et al.
describe their experiments on induced colitis in rats. They found that infecting the rats with the parasitic worm Schistosoma mansoni resulted in alleviated colitis effects. According to Weinstock, human patients with UC or CD improve when infected with the parasitic worm whipworm. Positive effects: Arthritis In 2003, Iain McInnes et al. found that arthritis-induced mice experienced less inflammation and other arthritic effects when treated with ES-62, a protein derived from filarial nematodes, a kind of parasitic worm. Similarly, in the International Journal for Parasitology, Osada et al. published their experimental finding that arthritis-induced mice infected with the parasitic worm Schistosoma mansoni had down-regulated immune systems. This led to resistance to arthritis. Positive effects: Multiple sclerosis In 2007, Jorge Correale et al. studied the effects of parasitic infection on multiple sclerosis (MS). Correale evaluated several MS patients infected with parasites, comparable MS patients without parasites, and similar healthy subjects over the course of 4.6 years. During the study, the MS patients who were infected with parasites experienced far milder effects of MS than the non-infected MS patients. Negative effects: Vaccination In the journal Parasite Immunology, Kamal et al. explain that parasitic worms often weaken the immune system's ability to respond effectively to a vaccine, because such worms induce a Th2-based immune response that is less responsive than normal to antigens. This is a major concern in developing countries, where parasitic worms and the need for vaccinations both exist in large numbers. It may explain why vaccines are often ineffective in developing countries. Negative effects: Hepatitis Because Hepatitis C virus (HCV) and the parasitic worm Schistosoma (the bloodfluke) are relatively common in developing countries, there are many cases where both are present in the human body. According to Kamal, bloodflukes have been adequately shown to worsen HCV. Kamal explains that, in order to maintain an immune response against HCV, patients must sustain a certain level of CD4+ T-cells. However, the presence of bloodflukes closely and negatively correlates with the presence of CD4+ T-cells, and so a much higher percentage of those infected with bloodflukes are unable to combat HCV effectively and develop chronic HCV. The effect of parasitic worms on Hepatitis B virus, however, is contested: some studies show little association, while others show exacerbation of the disease. Negative effects: HIV Because the two diseases are abundant in developing countries, there are many patients with both HIV (human immunodeficiency virus) and parasites, specifically bloodflukes. In his article, Kamal relates the findings that those infected with parasites are more likely to be infected by HIV. However, it is disputed whether or not the viral infection is more severe because of the parasites. Negative effects: Tuberculosis According to Kamal, the human immune system needs Th1 cells to effectively fight TB. Since the immune system often responds to parasitic worms by inhibiting Th1 cells, parasitic worms generally worsen tuberculosis. In fact, tuberculosis patients whose parasitic infections are successfully treated experience major improvement. Negative effects: Malaria In 2004, Sokhna et al. performed a study of Senegalese children. Those infected with blood flukes had significantly higher rates of malaria attacks than those who were not.
Furthermore, children with the highest counts of blood flukes also had the most malaria attacks. Based on this study, Hartgers et al. drew a "cautious conclusion" that helminths make humans more susceptible to contracting malaria and experiencing some of its lighter symptoms, while actually protecting them from the worst symptoms. Hartgers reasons that a Th2-skewed immune system resulting from helminth infection would lower the immune system's ability to counter an initial malarial infection. However, it would also prevent a hyperimmune response resulting in severe inflammation, reducing morbidity and pathology.
**Turntable anti-skating** Turntable anti-skating: Turntable anti-skating is a feature used in phonograph turntables to prevent skating of the tonearm. Turntable anti-skating: Due to the offset between the cartridge's axis (which is approximately tangential to the disc) and the tonearm's pivot, the force applied (through friction) by the rotating disc to the cartridge tends to draw the tonearm toward the center of the record, unbalancing both the sound and the wear suffered by the stylus and the vinyl groove. To prevent this, an appropriately sized opposing force (a rotational torque) is applied at the tonearm. This is accomplished in various ways by dedicated mechanisms, depending on the tonearm's manufacturer, ranging from a small counterweight adjustable by a knob to adjustable spring or magnetic mechanisms, usually calibrated in grams of force. Note that while the angular velocity is, ideally, constant, the peripheral velocity of the moving groove against the stylus is not, varying for instance from approximately 50 cm/s down to 15 cm/s from start to finish of a 33 rpm, 12" (30.48 cm) record. The angle of skew of the stylus cartridge with respect to a chord of the circular record (and groove) while the tonearm rests on the record is also variable. Thus, any opposing force applied to the tonearm to counteract skating, unless varied during the playing of the record, is at best an average value, perfectly in balance with the skating force at just one unique radius from the center of the disc. Yet anti-skating schemes perform a useful function in minimizing asymmetric wear of styli and grooves, although not eliminating it entirely. Turntable anti-skating: Linear-tracking turntables were invented in part to eliminate the possibility of skating.
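The groove-speed figures above follow directly from v = ωr. Below is a minimal sketch of that arithmetic in Python; the groove radii are illustrative assumptions (not values from the source), chosen to span a 12" LP from its outermost groove down to a small inner radius.

```python
import math

RPM = 100.0 / 3.0                   # 33 1/3 revolutions per minute
OMEGA = 2.0 * math.pi * RPM / 60.0  # angular velocity in rad/s (about 3.49)

# Assumed groove radii in cm for a 12" LP, outermost to innermost.
for r_cm in (14.6, 12.0, 9.0, 6.0, 4.3):
    v = OMEGA * r_cm                # peripheral groove speed past the stylus
    print(f"radius {r_cm:4.1f} cm -> groove speed {v:4.1f} cm/s")
```

At roughly 14.6 cm this gives about 51 cm/s, and at roughly 4.3 cm about 15 cm/s, matching the range quoted above; since the friction that drives skating varies with this speed and with the changing skew angle, a fixed anti-skate setting can only be exactly right at one radius.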
**Noncommutative signal-flow graph** Noncommutative signal-flow graph: In automata theory and control theory, branches of mathematics, theoretical computer science and systems engineering, a noncommutative signal-flow graph is a tool for modeling interconnected systems and state machines by mapping the edges of a directed graph to a ring or semiring. A single edge weight might represent an array of impulse responses of a complex system (see figure to the right), or a character from an alphabet picked off the input tape of a finite automaton, while the graph might represent the flow of information or state transitions. As diverse as these applications are, they share much of the same underlying theory. Definition: Consider $n$ equations involving $n+1$ variables $\{x_0, x_1, \ldots, x_n\}$: $x_i = \sum_{j=0}^{n} a_{ij} x_j, \quad 1 \le i \le n$, with $a_{ij}$ elements in a ring or semiring $R$. The free variable $x_0$ corresponds to a source vertex $v_0$, thus having no defining equation. Each equation corresponds to a fragment of a directed graph $G=(V,E)$, as shown in the figure. The edge weights define a function $f$ from $E$ to $R$. Finally, fix an output vertex $v_m$. A signal-flow graph is the collection of this data: $S = (G=(V,E),\ v_0, v_m \in V,\ f : E \to R)$. The equations may not have a solution, but when they do, $x_m = T x_0$, with $T$ an element of $R$ called the gain. Return Loop Method: There exist several noncommutative generalizations of Mason's rule. The most common is the return loop method (sometimes called the forward return loop method (FRL), having a dual backward return loop method (BRL)). The first rigorous proof is attributed to Riegle, so it is sometimes called Riegle's rule. As with Mason's rule, these gain expressions combine terms in a graph-theoretic manner (loop gains, path products, etc.). They are known to hold over an arbitrary noncommutative ring and over the semiring of regular expressions. Return Loop Method: Formal Description The method starts by enumerating all paths from input to output, indexed by $j \in J$. We use the following definitions: The $j$-th path product is (by abuse of notation) a tuple of the $k_j$ edge weights along it: $p_j = (w_{k_j}^{(j)}, \ldots, w_2^{(j)}, w_1^{(j)})$. To split a vertex $v$ is to replace it with a source and a sink respecting the original incidence and weights (this is the inverse of the graph morphism taking source and sink to $v$). The loop gain of a vertex $v$ with respect to a subgraph $H$ is the gain from source to sink of the signal-flow graph split at $v$ after removing all vertices not in $H$. Return Loop Method: Each path defines an ordering of vertices along it. Along path $j$, the $i$-th FRL (BRL) node factor is $(1 - S_i^{(j)})^{-1}$, where $S_i^{(j)}$ is the loop gain of the $i$-th vertex along the $j$-th path with respect to the subgraph obtained by removing $v_0$ and all vertices ahead of (behind) it. The contribution of the $j$-th path to the gain is the product along the path, alternating between the path product weights and the node factors: $T_j = \prod_{i=k_j}^{1} (1 - S_i^{(j)})^{-1} w_i^{(j)}$, so the total gain is $T = \sum_{j \in J} T_j$. Return Loop Method: An Example Consider the signal-flow graph shown. From $x$ to $z$, there are two path products: $(d)$ and $(e,a)$. Along $(d)$, the FRL and BRL contributions coincide, as both share the same loop gain (whose split reappears in the upper right of the table below): $f + e(1-b)^{-1}c$. Multiplying its node factor and path weight, its gain contribution is $T_d = [1 - f - e(1-b)^{-1}c]^{-1} d$. Along path $(e,a)$, FRL and BRL differ slightly, each having distinct splits of vertices $y$ and $z$ as shown in the following table.
Adding to $T_d$ the alternating product of node factors and path weights, we obtain two gain expressions: $T^{(FRL)} = [1 - f - e(1-b)^{-1}c]^{-1} d + [1 - f - e(1-b)^{-1}c]^{-1} e (1-b)^{-1} a$ and $T^{(BRL)} = [1 - f - e(1-b)^{-1}c]^{-1} d + (1-f)^{-1} e [1 - b - c(1-f)^{-1}e]^{-1} a$. These values are easily seen to be the same using the identities $(ab)^{-1} = b^{-1}a^{-1}$ and $a(1-ba)^{-1} = (1-ab)^{-1}a$. Applications: Matrix Signal-Flow Graphs Consider the equations $y_i = \sum_{j=1}^{2} a_{ij} x_j + \sum_{j=1}^{2} b_{ij} y_j$ and $z_i = \sum_{j=1}^{2} c_{ij} y_j$. This system could be modeled as a scalar signal-flow graph with multiple inputs and outputs. But the variables naturally fall into layers, which can be collected into vectors $x = (x_1, x_2)^t$, $y = (y_1, y_2)^t$ and $z = (z_1, z_2)^t$. This results in a much simpler matrix signal-flow graph, as shown in the figure at the top of the article. Applying the forward return loop method is trivial, as there is a single path product $(C, A)$ with a single loop gain $B$ at $y$. Thus, as a matrix, this system has a very compact representation of its input-output map: $T = C(1-B)^{-1}A$. Finite Automata An important kind of noncommutative signal-flow graph is a finite state automaton over an alphabet $\Sigma$. Serial connections correspond to the concatenation of words, which can be extended to subsets of the free monoid $\Sigma^*$: for $A, B \subseteq \Sigma^*$, $A \cdot B = \{ab \mid a \in A, b \in B\}$. Parallel connections correspond to set union, which in this context is often written $A + B$. Applications: Finally, self-loops naturally correspond to the Kleene closure $A^* = \{\lambda\} + A + AA + AAA + \cdots$, where $\lambda$ is the empty word. The similarity to the infinite geometric series $(1-x)^{-1} = 1 + x + x^2 + x^3 + \cdots$ is more than superficial, as expressions of this form serve as 'inversion' in this semiring. In this way, the subsets of $\Sigma^*$ built from finitely many of these three operations can be identified with the semiring of regular expressions. Similarly, finite graphs whose edges are weighted by subsets of $\Sigma^*$ can be identified with finite automata, though generally that theory starts with singleton sets as in the figure. Applications: This automaton is deterministic, so we can unambiguously enumerate paths via words. Using the return loop method, the path contributions are: path $ab$ has node factors $(c^*, \lambda)$, yielding gain contribution $ac^*b$; path $ada$ has node factors $(c^*, c^*, \lambda)$, yielding gain contribution $ac^*dc^*a$; path $ba$ has node factors $(c^*, \lambda)$, yielding gain contribution $bc^*a$. Thus the language accepted by this automaton (the gain of its signal-flow graph) is the sum of these terms: $L = ac^*b + ac^*dc^*a + bc^*a$.
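The matrix gain formula $T = C(1-B)^{-1}A$ above is easy to check numerically. The following is a minimal sketch (not from the source) using NumPy, with arbitrary 2x2 weight matrices standing in for the ring elements; since matrix multiplication is noncommutative, the order $C(1-B)^{-1}A$ matters.

```python
import numpy as np

# Hypothetical 2x2 weight matrices for the layered system
#   y = A x + B y,   z = C y
# (values are illustrative; B must have spectral radius < 1 so that
# (I - B) is invertible and the geometric series for it converges).
A = np.array([[1.0, 0.5],
              [0.0, 2.0]])
B = np.array([[0.1, 0.2],
              [0.3, 0.1]])
C = np.array([[1.0, 1.0],
              [0.5, -1.0]])

I = np.eye(2)
# Single path product (C, A) with a single loop gain B at y:
T = C @ np.linalg.inv(I - B) @ A

# Sanity check: solve y = A x + B y directly and compare z = C y with T x.
x = np.array([1.0, 2.0])
y = np.linalg.solve(I - B, A @ x)
assert np.allclose(T @ x, C @ y)
print(T)
```

The same skeleton works over any ring in which $(1-B)$ is invertible; here the inverse is computed explicitly, whereas in the regular-expression semiring the corresponding "inversion" is the Kleene star.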
**Pentachlorobenzenethiol** Pentachlorobenzenethiol: Pentachlorobenzenethiol is a chemical compound from the group of thiols and organochlorine compounds. Its chemical formula is C6HCl5S. Synthesis: Pentachlorobenzenethiol can be obtained from hexachlorobenzene. Properties: Pentachlorobenzenethiol is a combustible gray solid with an unpleasant odor, practically insoluble in water. It has a monoclinic crystal structure. The compound is not readily biodegradable and is presumed to bioaccumulate and to be toxic to aquatic organisms. Pentachlorobenzenethiol is itself a metabolite of hexachlorobenzene and is found in the urine and excretions of animals receiving hexachlorobenzene. Pentachlorobenzenethiol has a high potential for long-range transport via air, as it is only very slowly degraded in the atmosphere. Applications: Pentachlorobenzenethiol is used in the rubber industry. The compound is added to rubber (both natural and synthetic) to facilitate processing (mastication).
**SANDDE** SANDDE: SANDDE was a software and hardware system, developed primarily by IMAX Corporation, designed to create hand-drawn, stereoscopic 3D animation content. SANDDE is an acronym for "Stereoscopic ANimation Drawing DEvice" and is a play on the Japanese term for "3D", which is pronounced "San-D". SANDDE: The concept of SANDDE was to enable artists to draw and animate in three-dimensional space. It was intended to be intuitively usable, like a pencil: as an art creation tool, SANDDE incorporated aspects of drawing, painting, sculpture and puppetry. Unlike most other contemporary 3D computer graphic animation software, SANDDE did not require the construction of models from primitives. The main input device was a "wand" which allowed the user to create drawings in the air. SANDDE: Animators sat in virtual stereoscopic theaters and, using the wand, drew in space to create individual frames, then animated their creations using the interactive capabilities of the wand to create shots, sequences, and complete movies. SANDDE: SANDDE was originally developed by IMAX in the mid-1990s, and was used to create one IMAX short (Paint Misbehavin' [1997]) and portions of two other IMAX feature films: Mark Twain 3D (1999) and Cyberworld (2000). Thereafter, IMAX stopped active development of the system but provided licenses to the National Film Board of Canada for artistic experimentation. The NFB has used SANDDE in numerous stereoscopic productions, including Falling in Love Again, Moonman, June, The Wobble Incident, Subconscious Password, and Minotaur. In 2007, IMAX spun off the Janro Imaging Laboratory to explore future development and commercial use of the application, including in Ultimate Wave 3D and Legends of Flight, both produced by the Stephen Low Company.
**LONGi** LONGi: Xi'an LONGi Silicon Materials Corporation is a Chinese silicon producer. LONGi was established in 2000 and is the world's largest monocrystalline silicon producer. Baoshen Zhong is LONGi's chairman. LONGi: In 2015, LONGi had an estimated mono wafer capacity of 4.5 GW. That year, LONGi signed a contract with Yingli to cooperate on monocrystalline products. In early 2016, LONGi signed a $1.84 billion solar panel sales agreement with SunEdison Products Singapore and agreed to purchase silicon manufactured in South Korea. LONGi also took over SunEdison's Malaysian silicon plant. In 2017, LONGi Solar expanded module assembly capacity by around 1.5 GW to achieve 6.5 GW of nameplate capacity. LONGi: In early 2018, LONGi announced plans to build a new 5 GW module assembly plant in the Chuzhou Economic and Technological Development Zone in China's Anhui province, pending an internal review process. The plan called for an investment of approximately RMB 1.95 billion (US$300 million) and approximately 28 months of construction and start-up before manufacturing operations would commence.
**High-low skirt** High-low skirt: High-low skirts, also known as asymmetrical, waterfall, or mullet skirts, are skirts with a hem that is higher in the front, or side, than in the back. History: The high-low skirt has a full circle hem; however, the length varies from short in front to long in back. The style originates in Victorian era dresses and formal gowns, when the hem style became known as the "fishtail". It became a trend in the mid-1870s, reappeared in the early 1880s, and recurred in women's formal gowns and evening dresses throughout the 20th century, particularly in the late 1920s and early 1930s, when lowering hemlines marked the start of the 1930s silhouettes. History: The recent high-low hem trend began in Europe and America in late 2011, eventually becoming a worldwide fashion in Spring and Summer 2012. It has received fashion press coverage in India, such as in the fashion labels Namrata Joshipura and Myoho, being praised for its "playfulness". It received widespread visibility outside of fashion circles after The Voice contestant Devyn DeLoera wore a peach-coloured high-low skirt for her audition in summer 2012. History: The skirt style has been given a variety of names by designers and the press, including asymmetrical and waterfall, with the most common and derisive term being "mullet skirt", used by Britain's Mirror newspaper in criticising a version worn by singer Cher Lloyd in April 2012, a mocking reference to the now unfashionable mullet hairstyle that was a brief men's fad in the 1980s. However, some high-low dress wearers have embraced the term, referring to their own dress as a mullet dress. Asymmetric peplum trend: A related trend in 2011 and 2012 is the asymmetric peplum hem on shirts, sweaters and jackets for women. The peplum, a broad ruffled hem that is fitted at the waist and flares outward, has been a recurring fashion trend in Europe for centuries, and was last popular in Europe and North America during the 1940s and 1950s. An asymmetric version was brought to women's fashion in 2012, but not all consumers find it flattering, with one American stating, "I do not believe it hides large hips and behinds, and the new asymmetrical peplums should only be worn by the tall and thin".
**Ubuntu User** Ubuntu User: Ubuntu User is a paper magazine that was launched by Linux New Media AG in May 2009. The publication is aimed at users of the Ubuntu operating system and focuses on reviews, community news, how-to articles and troubleshooting tips. It also includes a Discovery Guide aimed at beginners. Background: Ubuntu User is published quarterly. The paper magazine is supported by a website that includes a selection of articles from the magazine available to the public as PDFs, Ubuntu news and free computer wallpaper downloads. Issue number one consisted of 100 pages (including covers) and in its North American edition had a cover price of US$15.99 and Cdn$17.99. Each issue also includes a live version of Ubuntu on DVD that new users can use to try out Ubuntu or to install it. Linux New Media is headquartered in Munich, Germany and has offices of its US subsidiary, Linux New Media USA, LLC, in Lawrence, Kansas. The company also publishes Linux Magazine, LinuxUser, EasyLinux in German, and Linux Community. Reception: In announcing the launch of the magazine, the company said: "Ubuntu User is the first print magazine for users of the popular Ubuntu computer operating system. The power, style, and simplicity of Ubuntu is winning followers around the world. Ubuntu User offers reviews, community news, HowTo articles, and troubleshooting tips for readers who are excited about Ubuntu and want to learn more about the Ubuntu environment." Reception: DistroWatch questioned the wisdom of launching a new paper magazine at this point in history: "In a time where more and more information is moving out of the paper world and into the online realm, one publisher is bucking the trend by releasing a physical magazine...With so much high quality information available online, would you pay for a monthly paper magazine about your favourite distribution?"
**Technosphera (publisher)** Technosphera (publisher): Technosphera is a leading Russian publisher of scientific and technological literature. Since 1996 it has been publishing books and magazines about science and technology. Scientific literature: Technosphera publishes scientific literature by Russian and foreign authors on a wide range of topics, as well as textbooks for students in higher technological education. Many of its books are endorsed by the methodological councils of higher education institutions. Technosphera's primary topics are mathematics, physics, chemistry, medicine, materials and technologies. Technosphera's scientific monographs are regularly published with the support of the Russian Foundation for Basic Research. In 2010 the book series World of Radioelectronics was established in collaboration with the radioelectronics department of the Russian Ministry of Industry and Trade. The editorial team of this series is drawn from leading experts in radioelectronics. Technological and scientific magazines: Technosphera's magazines are included in the Russian Science Citation Index. Electronics: Science, Technology, Business is a Russian magazine about research, technology and development in the electronic and radioelectronic industry. The magazine contains interviews with experts, reviews of the most interesting and useful exhibitions, and articles about electronics and microelectronics. Nanoindustry is a Russian scientific magazine focused on nanotechnology and nanomaterials, nanobiotechnologies and applied nanotechnologies in medicine. Last Mile is a magazine that contains original articles and reviews about photonics and optics. Analytics is the first Russian scientific magazine for experts in analytical chemistry, with topics about analytic and laboratory equipment in Russia and the CIS.
**IntelliPoint** IntelliPoint: Microsoft IntelliPoint is the Microsoft-branded software driver for the company's hardware mice. Microsoft has released versions for both Windows and Mac OS X. It has been succeeded by Microsoft Mouse and Keyboard Center, which combines IntelliType (a Microsoft keyboard driver) with IntelliPoint. Features: Software features may only be available with certain mouse models. (Button options are specific to the selected model.) On Mac OS X 10.4-10.7.x, IntelliPoint features can be accessed by opening Microsoft Mouse in System Preferences. Depending on the software version and specific mouse product, users can assign mouse buttons to run any executable program or file they desire (or a control key + letter combination) and can even assign buttons to different functions in chosen programs. Features: With IntelliPoint 4, users were able to set the mouse wheel to scroll one screen at a time. This feature was useful in situations where the user had to work with windows of varying size, and a fixed scroll rate alternated between being too fast and too slow depending on the window. This feature was incorporated into the Windows XP operating system and removed in IntelliPoint 5. The "Alt+Tab" button combination was also replaced with "Next Window," effectively preventing users from alternating between two specific programs and instead forcing them to cycle through windows one by one (although the old behavior can be restored via a registry hack). Features: Scrolling Universal Scrolling is a software function within IntelliPoint that allows a scroll wheel to work with programs that do not natively support that method of input. If a program supports scroll wheels natively, the Universal Scrolling feature will generally not interfere with the native implementation. Supported mice: IntelliPoint supports older models of Microsoft mice, as well as generic 3/5-button mice. Note: Version 8.0 and above dropped PS/2 support for the mice listed below. As even adapters cannot help, Microsoft keeps version 7.1 as an offered download for users who still own mice with PS/2 connectors (instead of USB). Supported mice: Arc Arc Touch Basic Optical Mouse Basic Optical Mouse v2.0 Comfort Optical Mouse 3000 Comfort Optical Mouse 500 v2.0 IntelliMouse IntelliMouse Explorer 2.0 IntelliMouse Explorer 3.0 IntelliMouse Explorer 4.0 IntelliMouse Explorer for Bluetooth IntelliMouse Optical Explorer Mouse Explorer Touch Mouse Explorer Mini Mouse Laser Mouse 6000 Mobile Memory Mouse 8000 Mobile Optical Mouse Natural Wireless Laser Mouse 6000 Notebook Optical Mouse Notebook Optical Mouse 3000 Optical Mouse Optical Mouse by Starck Sculpt Comfort Mouse Sculpt Mobile Mouse Sculpt Touch Mouse SideWinder Mouse SideWinder x8 Mouse (for gaming) Standard Wireless Mouse Touch Mouse Trackball Explorer Trackball Optical Wheel Mouse Wheel Mouse Optical Wireless IntelliMouse Explorer 2.0 Wireless IntelliMouse Explorer for Bluetooth Wireless IntelliMouse Explorer with Fingerprint Reader Wireless Laser Mouse 5000 Wireless Laser Mouse 6000 Wireless Laser Mouse 6000 v2.0 Wireless Laser Mouse 7000 Wireless Laser Mouse 8000 Wireless Notebook Laser Mouse 6000 Wireless Notebook Laser Mouse 7000 Wireless Notebook Optical Mouse Wireless Notebook Optical Mouse 3000 Wireless Notebook Optical Mouse 4000 Wireless Notebook Presenter Mouse 8000 Wireless Notebook Mouse 5000 Wireless Optical Mouse 2.0 Wireless Optical Mouse 2000 Wireless Optical Mouse 5000 (also Wireless IntelliMouse Explorer 2.0)
**Dynamic aperture (accelerator physics)** Dynamic aperture (accelerator physics): The dynamic aperture is the stability region of phase space in a circular accelerator. For hadrons: In the case of proton or heavy ion accelerators (or synchrotrons, or storage rings), there is minimal radiation, and hence the dynamics is symplectic. For long-term stability, tiny dynamical diffusion (or Arnold diffusion) can lead an initially stable orbit slowly into an unstable region. This makes the dynamic aperture problem particularly challenging: one may be considering stability over billions of turns. A scaling law for dynamic aperture vs. number of turns has been proposed by Giovannozzi. For electrons: In the case of electrons, the particles radiate, which causes a damping effect; this means that one typically only cares about stability over thousands of turns. Methods to compute or optimize dynamic aperture: The basic method for computing dynamic aperture involves the use of a tracking code. A model of the ring is built within the code that includes an integration routine for each magnetic element. The particle is tracked for many turns and stability is determined. In addition, there are other quantities that may be computed to characterize the dynamics and can be related to the dynamic aperture; one example is the tune shift with amplitude. There have also been proposals for other approaches to enlarge the dynamic aperture.
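As a rough illustration of the tracking-code approach (not any particular production code), the sketch below tracks a particle through the 2D Hénon map, a standard toy model of a linear lattice plus one sextupole kick, and scans initial amplitudes to bracket the stable region; the tune, turn count, and loss threshold are all assumed values chosen for illustration.

```python
import math

TUNE = 0.205  # assumed fractional betatron tune of the toy lattice

def one_turn(x, p):
    """One turn of the 2D Henon map: a sextupole-like kick p -> p + x^2
    followed by a linear rotation by 2*pi*TUNE in phase space (symplectic)."""
    p = p + x * x
    c, s = math.cos(2 * math.pi * TUNE), math.sin(2 * math.pi * TUNE)
    return c * x + s * p, -s * x + c * p

def turns_survived(x0, n_turns=100_000, loss_radius=10.0):
    """Track from (x0, 0) and report how many turns the particle stays
    inside an (assumed) loss radius; surviving all turns counts as stable."""
    x, p = x0, 0.0
    for turn in range(n_turns):
        x, p = one_turn(x, p)
        if x * x + p * p > loss_radius ** 2:
            return turn
    return n_turns

# Scan initial amplitude; the largest stable x0 estimates the dynamic aperture.
for x0 in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6):
    print(f"x0 = {x0:.1f}: survived {turns_survived(x0):>6} turns")
```

A real tracking study does the same thing with a symplectic integrator per magnet, many initial angles in phase space, and far more turns, which is why the billion-turn hadron case is computationally hard.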
**Key (instrument)** Key (instrument): A key is a component of a musical instrument, the purpose and function of which depends on the instrument. However, the term is most often used in the context of keyboard instruments, in which case it refers to the exterior part of the instrument that the player physically interacts with in the process of sound production. On instruments equipped with tuning machines, such as guitars or mandolins, a key is part of a tuning machine. It is a worm gear with a key-shaped end used to turn a cog, which, in turn, is attached to a post that winds the string. The key is used to make pitch adjustments to a string. With other instruments, zithers and drums, for example, a key is essentially a small wrench used to turn a tuning machine or lug. Key (instrument): On woodwind instruments such as a flute or saxophone, keys are finger-operated levers used to open or close tone holes, the operation of which effectively shortens or lengthens the resonating tube of the instrument. By doing so, the player manipulates the range of resonant frequencies the instrument can produce, since each key configuration gives the tube a different "effective" length. The keys on the keyboard of a pipe organ also open and close various mechanical valves. However, rather than directly influencing the path the airflow takes within a single tube, the configuration of these valves determines through which of the numerous separate organ pipes, each tuned to a specific note, the air stream flows. The keys of an accordion direct the air flow from manually operated bellows across various tuned vibrating reeds. Key (instrument): On other keyboard instruments, a key may be a lever which mechanically triggers a hammer to strike a group of strings, as on a piano, or an electric switch which energizes an audio oscillator, as on an electronic organ or a synthesizer.
**ISO 25178** ISO 25178: ISO 25178: Geometrical Product Specifications (GPS) – Surface texture: areal is an International Organization for Standardization collection of international standards relating to the analysis of 3D areal surface texture. Structure of the standard: Documents constituting the standard: Part 1: Indication of surface texture Part 2: Terms, definitions and surface texture parameters Part 3: Specification operators Part 6: Classification of methods for measuring surface texture Part 70: Material measures Part 71: Software measurement standards Part 72: XML file format x3p Part 600: Metrological characteristics for areal-topography measuring methods Part 601: Nominal characteristics of contact (stylus) instruments Part 602: Nominal characteristics of non-contact (confocal chromatic probe) instruments Part 603: Nominal characteristics of non-contact (phase-shifting interferometric microscopy) instruments Part 604: Nominal characteristics of non-contact (coherence scanning interferometry) instruments Part 605: Nominal characteristics of non-contact (point autofocus probe) instruments Part 606: Nominal characteristics of non-contact (focus variation) instruments Part 607: Nominal characteristics of non-contact (confocal microscopy) instruments Part 700: Calibration of surface texture measuring instruments [NWIP] Part 701: Calibration and measurement standards for contact (stylus) instruments. Other documents might be proposed in the future, but the structure is now largely settled. Part 600 will replace the common part found in all other parts; when revised, parts 60x will be reduced to contain only descriptions specific to the instrument technology. New features: It is the first international standard covering the specification and measurement of 3D surface texture. In particular, the standard defines 3D surface texture parameters and the associated specification operators. It also describes the applicable measurement technologies and calibration methods, together with the physical calibration standards and calibration software that are required. New features: A major new feature incorporated into the standard is coverage of non-contact measurement methods, already commonly used by industry, but up until now lacking a standard to support quality audits within the framework of ISO 9000. For the first time, the standard brings 3D surface metrology methods into the official domain, following 2D profilometric methods that have been subject to standards for over 30 years. The same applies to measurement technologies that are not restricted to contact measurement (with a diamond stylus), but can also be optical, such as chromatic confocal gauges and interferometric microscopes. New features: New definitions The ISO 25178 standard is considered by TC213 as first and foremost providing a redefinition of the foundations of surface texture, based upon the principle that nature is intrinsically 3D. It is anticipated that future work will extend these new concepts into the domain of 2D profilometric surface texture analysis, requiring a total revision of all current surface texture standards (ISO 4287, ISO 4288, ISO 1302, ISO 11562, ISO 12085, ISO 13565, etc.)
A new vocabulary is introduced: S filter: a filter eliminating the smallest scale elements from the surface (or those of the shortest wavelength, for a linear filter). L filter: a filter eliminating the largest scale elements from the surface (or those of the longest wavelength, for a linear filter). F operator: an operator suppressing nominal form. New features: Primary surface: the surface obtained after S filtering. S-F surface: the surface obtained after applying an F operator to the primary surface. S-L surface: the surface obtained after applying an L filter to the S-F surface. New features: Nesting index: an index corresponding to the cut-off wavelength of a linear filter, or to the scale of the structuring element of a morphological filter. Under ISO 25178, industry-specific taxonomies such as roughness vs waviness are replaced by the more general concept of "scale-limited surface", and "cut-off" by "nesting index". The newly available filters are described in the series of technical specifications included in ISO 16610. These filters include the Gaussian filter, the spline filter, robust filters, morphological filters, wavelet filters, cascading filters, etc. Parameters: Generalities 3D areal surface texture parameters are written with the capital letter S (or V) followed by a suffix of one or two small letters. They are calculated over the entire surface, no longer by averaging estimates calculated on a number of base lengths, as is the case for 2D parameters. In contrast with 2D naming conventions, the name of a 3D parameter does not reflect the filtering context. For example, Sa always appears regardless of the surface, whereas in 2D there is Pa, Ra or Wa depending on whether the profile is a primary, roughness or waviness profile. Parameters: Height parameters These parameters involve only the statistical distribution of height values along the z axis. Spatial parameters These parameters involve the spatial periodicity of the data, specifically its direction. Hybrid parameters These parameters relate to the spatial shape of the data. Functions and related parameters These parameters are calculated from the material ratio curve (Abbott-Firestone curve). Parameters related to segmentation These feature parameters are derived from a segmentation of the surface into motifs (dales and hills). Segmentation is carried out using a watershed method. Software: A consortium of several companies started work in 2008 on a free implementation of 3D surface texture parameters. The consortium, called OpenGPS, later focused its efforts on an XML file format (X3P) that was published as part of the standard, ISO 25178-72. Several commercial packages provide part or all of the parameters defined in ISO 25178, such as MountainsMap from Digital Surf, SPIP from Image Metrology and TrueMap 6 from TrueGage, as well as the open source Gwyddion. Instruments: Part 6 of the standard divides the usable technologies for 3D surface texture measurement into three families: Topographical instruments: contact and non-contact 3D profilometers, interferometric and confocal microscopes, structured light projectors, stereoscopic microscopes, etc. Profilometric instruments: contact and non-contact 2D profilometers, line triangulation lasers, etc. Instruments functioning by integration: pneumatic, capacitive and optical-diffusion measurement, etc. The standard defines each of these technologies.
Next, the standard explores a number of these technologies in detail and dedicates two documents to each of them: Part 6xx: nominal characteristics of the instrument; Part 7xx: calibration of the instrument. Contact profilometer Parts 601 and 701 describe the contact profilometer, which uses a diamond stylus to measure the surface with the assistance of a lateral scanning device. Chromatic confocal gauge Part 602 describes this type of non-contact profilometer, incorporating a single-point white light chromatic confocal sensor. The operating principle is based upon the chromatic dispersion of the white light source along the optical axis, via a confocal device, and the detection by a spectrometer of the wavelength that is focused on the surface. Instruments: Coherence scanning interferometry Part 604 describes a class of optical surface measurement methods wherein the localization of interference fringes during a scan of optical path length provides a means to determine surface characteristics such as topography, transparent film structure, and optical properties. The technique encompasses instruments that use spectrally broadband, visible sources (white light) to achieve interference fringe localization. CSI uses either fringe localization alone or in combination with interference fringe phase. Instruments: Focus variation Part 606 describes this type of non-contact areal-based method. The operating principle is based on microscope optics with limited depth of field and a CCD camera. By scanning in the vertical direction, several images with different focus are gathered. This data is then used to calculate a surface data set for roughness measurement.
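To make the "whole surface, no base lengths" convention for areal parameters concrete, here is a minimal sketch (an illustration, not the standard's reference software) of two common ISO 25178 height parameters, Sa and Sq, computed over a synthetic height map; the data, grid size, and simple mean subtraction used as form removal are all illustrative assumptions.

```python
import numpy as np

def sa_sq(height_map):
    """Arithmetic mean height Sa and root-mean-square height Sq of a
    scale-limited surface, computed over the entire surface rather than
    averaged over base lengths as with the 2D parameters Ra/Rq."""
    z = np.asarray(height_map, dtype=float)
    dz = z - z.mean()               # crude form removal: subtract the mean
    sa = np.abs(dz).mean()          # Sa = mean of |z - mean|
    sq = np.sqrt((dz ** 2).mean())  # Sq = RMS of (z - mean)
    return sa, sq

# Synthetic 64x64 height map, heights in micrometres (illustrative only).
rng = np.random.default_rng(0)
z = rng.normal(scale=0.05, size=(64, 64))
sa, sq = sa_sq(z)
print(f"Sa = {sa:.4f} um, Sq = {sq:.4f} um")
```

In practice the F operator would fit and remove the nominal form (a plane or polynomial) and an S filter would suppress the finest scales before the parameters are evaluated; the simple mean subtraction above only stands in for that pipeline.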
**Whistle-stop train tour** Whistle-stop train tour: A whistle stop or whistle-stop tour is a style of political campaigning where the politician makes a series of brief appearances or speeches at a number of small towns over a short period of time. Originally, whistle-stop appearances were made from the open platform of an observation car or a private railroad car. Definition and usage: The term derives from the practice of small, occasionally used railway stations signaling trains so the engineer would know whether to stop. A train inbound to a "whistle stop" station would signal its approach with a blast of the steam whistle, which would alert the train depot attendant to its arrival. If passengers, mail, or freight waited to be picked up at the depot, the depot master would raise a tower signal to indicate to the train engineer that the train should stop. If no stop was necessary, a different signal would be raised and the engineer could pass through the depot without stopping. One usage of the term in the political context, by Robert A. Taft, was derisive. He accused then-President Harry S. Truman of "blackguarding Congress at whistle stops across the country". Background: In the 19th century, when travel by railroad was the most common means of transport, politicians would charter tour trains which would travel from town to town. At each stop, the candidate would make a speech from the train, but might rarely set foot on the ground. "Whistle stop" campaign speeches would be made from the rear platform of a train. Background: One of the most famous railroad cars used in U.S. whistle-stop tours was the Ferdinand Magellan, the only car custom built for the President of the United States in the 20th century. Originally built in 1928 by the Pullman Company and officially the "U.S. No. 1 Presidential Railcar", the Ferdinand Magellan is on display at the Gold Coast Railroad Museum in Miami, Florida. The famous news photo of Harry S. Truman holding up a copy of the Chicago Tribune with a banner headline stating "Dewey Defeats Truman" was taken on this platform on Wednesday, November 3, 1948, at St. Louis Union Station. The Ferdinand Magellan was also used by President Franklin D. Roosevelt and, to a much lesser extent, by President Dwight Eisenhower. The Magellan's last official trip before retirement was in 1954, when first lady Mamie Eisenhower rode it from Washington, D.C., to Groton, Connecticut, to christen the world's first nuclear-powered submarine, the USS Nautilus. President Ronald Reagan used the Magellan for one day, October 12, 1984, traveling 120 miles in Ohio, from Dayton to Perrysburg, making five stops to give "whistle stop" speeches along the way. Modern whistle-stop tours: The future Charles III of the United Kingdom started a five-day whistle-stop tour of the United Kingdom on Monday, 6 September 2010, with a speech in Glasgow, when he was Prince of Wales. The green campaigning tour was a part of the Prince's Start initiative, which aimed to build public awareness of sustainable activities. In Europe, touring politicians still occasionally take a train, as the dense railway network offers access comparable to road travel and is better suited to extensive trips than air travel. In 2009, for example, German chancellor (and CDU candidate) Angela Merkel made a highly publicized tour in Konrad Adenauer's old campaign train.
The SPD, on the other hand, discontinued the use of train tours for campaigns before the 1998 election. On September 30, 2020, after the first presidential debate against Donald Trump, Democratic presidential candidate Joe Biden rode an Amtrak "Build Back Better Express" from Cleveland, Ohio, to Johnstown, Pennsylvania.
**Intel 8289** Intel 8289: The Intel 8289 is a bus arbiter designed for the Intel 8086/8087/8088/8089. The chip is supplied in a 20-pin DIP package. When the 8086 (or 8088) operates in maximum mode, it is configured primarily for multiprocessor operation or for working with coprocessors; the necessary bus-arbitration signals are generated by the 8289. This version was available for US$44.80 in quantities of 100.
**Mechanical heat treatment** Mechanical heat treatment: Mechanical heat treatment (MHT) is an alternative waste treatment technology, also commonly termed autoclaving. MHT involves a mechanical sorting or pre-processing stage, with technology often found in a material recovery facility. The mechanical sorting stage is followed by a form of thermal treatment. This might be in the form of a waste autoclave or a processing stage to produce refuse-derived fuel pellets. MHT is sometimes grouped along with mechanical biological treatment; MHT does not, however, include a stage of biological degradation (anaerobic digestion or composting). Configurations: Different MHT systems may be configured to meet various objectives with regard to the waste outputs from the process. Depending on the system employed, the alternatives may be one or more of the following: separating an 'organic rich' component of the waste for subsequent biological processing; producing a refuse-derived fuel to be applied in an appropriate process to utilise its energy potential; and extracting materials for recycling (typically glass and metals, potentially plastics and the fibrous organic and paper fraction).