**L-threonine 3-dehydrogenase** L-threonine 3-dehydrogenase: In enzymology, an L-threonine 3-dehydrogenase (EC 1.1.1.103) is an enzyme that catalyzes the chemical reaction L-threonine + NAD+ ⇌ L-2-amino-3-oxobutanoate + NADH + H+. Thus, the two substrates of this enzyme are L-threonine and NAD+, whereas its three products are L-2-amino-3-oxobutanoate, NADH, and H+. This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is L-threonine:NAD+ oxidoreductase. Other names in common use include L-threonine dehydrogenase, threonine 3-dehydrogenase, and threonine dehydrogenase. This enzyme participates in glycine, serine and threonine metabolism. Structural studies: As of late 2007, three structures have been solved for this class of enzymes, with PDB accession codes 2D8A, 2DFV, and 2DQ4.
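Set out as a display equation, the catalysed reaction reads:

```latex
\text{L-threonine} + \text{NAD}^{+} \rightleftharpoons \text{L-2-amino-3-oxobutanoate} + \text{NADH} + \text{H}^{+}
```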
**Hendra virus** Hendra virus: Hendra virus is a bat-borne virus associated with a highly fatal infection in horses and humans. Hendra virus has caused numerous disease outbreaks among horses in Australia. It belongs to the genus Henipavirus, which also contains Nipah virus, another cause of disease outbreaks. Pathology: Flying foxes experimentally infected with Hendra virus develop a viraemia and shed the virus in their urine, faeces and saliva for approximately one week, but show no other signs of illness. Symptoms of Hendra virus infection in humans may be respiratory, including haemorrhage and oedema of the lungs, or in some cases viral meningitis. In horses, infection usually causes one or more of pulmonary oedema, congestion and neurological signs. Ephrin B2 has been identified as the main receptor for the henipaviruses. Transmission: Flying foxes have been identified as the reservoir host of Hendra virus. A seroprevalence of 47% is found in flying foxes, suggesting an endemic infection of the bat population throughout Australia. Horses become infected with Hendra virus after exposure to bodily fluids from an infected flying fox, often in the form of urine, faeces, or masticated fruit covered in the flying fox's saliva when horses are allowed to graze below roosting sites. In 2021 a new variant of Hendra virus, named "Hendra virus genotype 2" (HeV-g2), was identified in two flying fox species in Australia. It shares 84% sequence homology with other published Hendra virus genomes. Prevention, detection and treatment: Three main approaches are currently followed to reduce the risk to humans. Vaccine for horses: In November 2012, a vaccine became available for horses. The vaccine is to be used in horses only since, according to CSIRO veterinary pathologist Dr Deborah Middleton, breaking the transmission cycle from flying foxes to horses also prevents transmission to humans, and "a vaccine for people would take many more years." The vaccine is a subunit vaccine, composed of a soluble version of the G surface antigen of Hendra virus, that elicits neutralising antibodies and has been successful in ferret models. By December 2014, about 300,000 doses had been administered to more than 100,000 horses. Adverse incidents were reported for about 3 in 1,000 doses, the majority being localised swelling at the injection site; there had been no reported deaths. In August 2015, the Australian Pesticides and Veterinary Medicines Authority (APVMA) registered the vaccine, and in its statement the agency released all its data on reported side effects. In January 2016, the APVMA approved its use in pregnant mares. Stall-side test: a rapid test to assist in diagnosing the disease in horses. Although research on Hendra virus detection is ongoing, promising results have been obtained using antibody-conjugated magnetic particles and quantum dots. Post-exposure treatment for humans: Nipah virus and Hendra virus are closely related paramyxoviruses that emerged from bats during the 1990s to cause deadly outbreaks in humans and domesticated animals. National Institute of Allergy and Infectious Diseases (NIAID)-supported investigators developed vaccines for Nipah and Hendra virus based on the soluble G glycoproteins of the viruses formulated with adjuvants.
Both vaccines have been shown to induce strong neutralising antibodies in different laboratory animals. Trials began in 2015 to evaluate a monoclonal antibody as a possible complementary treatment for humans exposed to Hendra virus-infected horses. Deforestation impact: When considering any zoonosis, one must understand the social, ecological, and biological contributions that may be facilitating the spillover. Hendra virus is believed to be partially seasonal, and there is a suggested correlation between breeding time and an increase in the incidence of Hendra virus in flying foxes. Additionally, recent research suggests that the upsurge in deforestation within Australia may be leading to an increase in the incidence of Hendra virus. Flying foxes tend to feed in trees during a large part of the year; however, due to the lack of specific fruit trees within the area, these bats have had to relocate and thereby come into contact with horses more often. The two most recent outbreaks of Hendra virus, in 2011 and 2013, appear to be related to an increased level of nutritional stress among the bats as well as relocation of bat populations. Work is currently being done to increase vaccination among horses as well as to replant these important forests as feeding grounds for the flying foxes. Through these measures, the goal is to decrease the incidence of the highly fatal Hendra virus. History: Emergence: Hendra virus (originally called "Equine morbillivirus") was discovered in September 1994 when it caused the deaths of thirteen horses and a trainer at a training complex at 10 Williams Avenue, Hendra, a suburb of Brisbane in Queensland, Australia. The index case, a mare called Drama Series, brought in from a paddock in Cannon Hill, was housed with 19 other horses after falling ill, and died two days later. Subsequently, all of the horses became ill, with thirteen dying; the remaining six animals were euthanised as a way of preventing relapsing infection and possible further transmission. The trainer, Victory ('Vic') Rail, and the stable foreman, Ray Unwin, were involved in nursing the index case, and both fell ill with an influenza-like illness within one week of the first horse's death. Unwin recovered, but Rail died of respiratory and kidney failure. The source of the virus was most likely frothy nasal discharge from the index case. A second outbreak occurred in August 1994 (chronologically preceding the first outbreak) in Mackay, 1,000 km north of Brisbane, resulting in the deaths of two horses and their owner. The owner assisted in necropsies of the horses and within three weeks was admitted to hospital suffering from meningitis. He recovered, but 14 months later developed neurologic signs and died. This outbreak was diagnosed retrospectively by the presence of Hendra virus in the brain of the patient. History: Outbreaks in Australia: As of June 2014, a total of fifty outbreaks of Hendra virus had occurred in Australia, all involving infection of horses. As a result of these events, eighty-three horses have died or been euthanised, and a further four died or were euthanised as a result of possible Hendra infection. The case fatality rate is 60% in humans and 75% in horses. Four of these outbreaks have spread to humans as a result of direct contact with infected horses.
On 26 July 2011 a dog living on the Mt Alford property was reported to have HeV antibodies, the first time an animal other than a flying fox, horse, or human had tested positive outside an experimental situation. These events have all been on the east coast of Australia, with the northernmost event at Cairns, Queensland, and the southernmost at Kempsey, New South Wales. Until the event at Chinchilla, Queensland in July 2011, all outbreak sites had been within the distribution of at least two of the four mainland flying foxes (fruit bats): the little red flying fox (Pteropus scapulatus), black flying fox (Pteropus alecto), grey-headed flying fox (Pteropus poliocephalus) and spectacled flying fox (Pteropus conspicillatus). Chinchilla is considered to be only within the range of the little red flying fox and is west of the Great Dividing Range; this is the furthest west the infection has ever been identified in horses. The timing of incidents indicates a seasonal pattern of outbreaks. Initially this was thought to be related to the breeding cycle of the little red flying foxes, which typically give birth between April and May. Subsequently, however, the spectacled flying fox and the black flying fox were identified as the species more likely to be involved in infection spillovers. Outbreaks also appear more likely during the cooler months, when it is possible the temperature and humidity are more favourable to the longer-term survival of the virus in the environment. There is no evidence of transmission to humans directly from bats, and as such it appears that human infection occurs only via an intermediate host, a horse. Despite this, in 2014 the NSW Government approved the destruction of flying fox colonies. History: Events of June–August 2011: In the years 1994–2010, fourteen events were recorded. Between 20 June 2011 and 28 August 2011, a further seventeen events were identified, during which twenty-one horses died. It is not clear why there was a sudden increase in the number of spillover events between June and August 2011. Typically HeV spillover events are more common between May and October, a period sometimes called "Hendra season", when large numbers of fruit bats of all species congregate in SE Queensland's valuable winter foraging habitat and the weather (warm and humid) is favourable to the survival of henipavirus in the environment. It is possible that flooding in SE Queensland and Northern NSW in December 2010 and January 2011 affected the health of the fruit bats. Urine sampling in flying fox camps indicated that a larger proportion of flying foxes than usual were shedding live virus: Biosecurity Queensland's ongoing surveillance usually shows 7% of the animals shedding live virus, but in June and July nearly 30% of animals were reported to be doing so. Present advice is that these events are not being driven by any mutation in HeV itself. Other suggestions include that an increase in testing has led to an increase in detection. As the actual mode of transmission between bats and horses has not been determined, it is not clear what factors, if any, can increase the chance of infection in horses. Following the confirmation of a dog with HeV antibodies, on 27 July 2011 the Queensland and NSW governments announced that they would boost research funding into the Hendra virus by $6 million, to be spent by 2014–2015.
This money will be used for research into the ecological drivers of infection in the bats and the mechanism of virus transmission between bats and other species. A further $6 million was allocated by the federal government, with the funds split evenly between human health investigations and animal health and biodiversity research.
**Water purification** Water purification: Water purification is the process of removing undesirable chemicals, biological contaminants, suspended solids, and gases from water. The goal is to produce water that is fit for specific purposes. Most water is purified and disinfected for human consumption (drinking water), but water purification may also be carried out for a variety of other purposes, including medical, pharmacological, chemical, and industrial applications. The history of water purification includes a wide variety of methods. The methods used include physical processes such as filtration, sedimentation, and distillation; biological processes such as slow sand filters or biologically active carbon; chemical processes such as flocculation and chlorination; and the use of electromagnetic radiation such as ultraviolet light. Water purification: Water purification can reduce the concentration of particulate matter including suspended particles, parasites, bacteria, algae, viruses, and fungi, as well as reduce the concentration of a range of dissolved and particulate matter. The standards for drinking water quality are typically set by governments or by international standards. These standards usually include minimum and maximum concentrations of contaminants, depending on the intended use of the water. Water purification: A visual inspection cannot determine whether water is of appropriate quality. Simple procedures such as boiling or the use of a household activated carbon filter are not sufficient for treating all possible contaminants that may be present in water from an unknown source. Even natural spring water, considered safe for all practical purposes in the 19th century, must now be tested before determining what kind of treatment, if any, is needed. Chemical and microbiological analyses, while expensive, are the only way to obtain the information necessary for deciding on the appropriate method of purification. Sources of water: Groundwater: The water emerging from some deep groundwater may have fallen as rain many tens, hundreds, or thousands of years ago. Soil and rock layers naturally filter the groundwater to a high degree of clarity, and it often does not require additional treatment beyond adding chlorine or chloramines as secondary disinfectants. Such water may emerge as springs, artesian springs, or may be extracted from boreholes or wells. Deep groundwater is generally of very high bacteriological quality (i.e., pathogenic bacteria and pathogenic protozoa are typically absent), but the water may be rich in dissolved solids, especially carbonates and sulfates of calcium and magnesium. Depending on the strata through which the water has flowed, other ions may also be present, including chloride and bicarbonate. There may be a requirement to reduce the iron or manganese content of this water to make it acceptable for drinking, cooking, and laundry use. Primary disinfection may also be required. Where groundwater recharge is practised (a process in which river water is injected into an aquifer to store the water in times of plenty so that it is available in times of drought), the groundwater may require additional treatment depending on applicable state and federal regulations. Sources of water: Upland lakes and reservoirs: Typically located in the headwaters of river systems, upland reservoirs are usually sited above any human habitation and may be surrounded by a protective zone to restrict the opportunities for contamination.
Bacteria and pathogen levels are usually low, but some bacteria, protozoa or algae will be present. Where uplands are forested or peaty, humic acids can colour the water. Many upland sources have a low pH, which requires adjustment. Sources of water: Rivers, canals and lowland reservoirs: Lowland surface waters will have a significant bacterial load and may also contain algae, suspended solids and a variety of dissolved constituents. Atmospheric water generation is a new technology that can provide high-quality drinking water by cooling air and thus condensing water vapour. Rainwater harvesting and fog collection, which collect water from the atmosphere, can be used especially in areas with significant dry seasons and in areas which experience fog even when there is little rain. Seawater can be desalinated by distillation or reverse osmosis. Surface water: Freshwater bodies that are open to the atmosphere and are not designated as groundwater are termed surface waters. Treatment: Goals: The goals of treatment are to remove unwanted constituents in the water and to make it safe to drink or fit for a specific purpose in industry or medical applications. Widely varied techniques are available to remove contaminants like fine solids, micro-organisms and some dissolved inorganic and organic materials, or environmental persistent pharmaceutical pollutants. The choice of method will depend on the quality of the water being treated, the cost of the treatment process and the quality standards expected of the processed water. Treatment: The processes below are the ones commonly used in water purification plants. Some or most may not be used depending on the scale of the plant and the quality of the raw (source) water. Pretreatment: Pumping and containment – The majority of water must be pumped from its source or directed into pipes or holding tanks. To avoid adding contaminants to the water, this physical infrastructure must be made from appropriate materials and constructed so that accidental contamination does not occur. Screening (see also screen filter) – The first step in purifying surface water is to remove large debris such as sticks, leaves, rubbish and other large particles which may interfere with subsequent purification steps. Most deep groundwater does not need screening before other purification steps. Treatment: Storage – Water from rivers may also be stored in bankside reservoirs for periods between a few days and many months to allow natural biological purification to take place. This is especially important if treatment is by slow sand filters. Storage reservoirs also provide a buffer against short periods of drought and allow water supply to be maintained during transitory pollution incidents in the source river. Treatment: Pre-chlorination – In many plants the incoming water was chlorinated to minimise the growth of fouling organisms on the pipework and tanks. Because of the potential adverse quality effects (see chlorine below), this has largely been discontinued. Treatment: pH adjustment: Pure water has a pH close to 7 (neither alkaline nor acidic). Sea water can have pH values that range from 7.5 to 8.4 (moderately alkaline). Fresh water can have widely ranging pH values depending on the geology of the drainage basin or aquifer and the influence of contaminant inputs (acid rain). If the water is acidic (lower than 7), lime, soda ash, or sodium hydroxide can be added to raise the pH during water purification processes.
Lime addition increases the calcium ion concentration, thus raising the water hardness. For highly acidic waters, forced-draft degasifiers can be an effective way to raise the pH by stripping dissolved carbon dioxide from the water. Making the water alkaline helps coagulation and flocculation processes work effectively and also helps to minimise the risk of lead being dissolved from lead pipes and from lead solder in pipe fittings. Sufficient alkalinity also reduces the corrosiveness of water to iron pipes. Acid (carbonic acid, hydrochloric acid or sulfuric acid) may be added to alkaline waters in some circumstances to lower the pH. Alkaline water (above pH 7.0) does not necessarily mean that lead or copper from the plumbing system will not be dissolved into the water. The ability of water to precipitate calcium carbonate to protect metal surfaces and reduce the likelihood of toxic metals being dissolved in water is a function of pH, mineral content, temperature, alkalinity and calcium concentration. Treatment: Coagulation and flocculation: One of the first steps in most conventional water purification processes is the addition of chemicals to assist in the removal of particles suspended in water. Particles can be inorganic, such as clay and silt, or organic, such as algae, bacteria, viruses, protozoa and natural organic matter. Inorganic and organic particles contribute to the turbidity and colour of water. Treatment: The addition of inorganic coagulants such as aluminium sulfate (or alum) or iron(III) salts such as iron(III) chloride causes several simultaneous chemical and physical interactions on and among the particles. Within seconds, negative charges on the particles are neutralised by inorganic coagulants. Also within seconds, metal hydroxide precipitates of the iron and aluminium ions begin to form. These precipitates combine into larger particles under natural processes such as Brownian motion and through induced mixing, which is sometimes referred to as flocculation. Amorphous metal hydroxides are known as "floc". Large, amorphous aluminium and iron(III) hydroxides adsorb and enmesh particles in suspension and facilitate the removal of particles by the subsequent processes of sedimentation and filtration. Aluminium hydroxides are formed within a fairly narrow pH range, typically 5.5 to about 7.7. Iron(III) hydroxides can form over a larger pH range, including pH levels lower than are effective for alum, typically 5.0 to 8.5. In the literature, there is much debate and confusion over the usage of the terms coagulation and flocculation: where does coagulation end and flocculation begin? In water purification plants, there is usually a high-energy, rapid-mix unit process (detention time in seconds) whereby the coagulant chemicals are added, followed by flocculation basins (detention times range from 15 to 45 minutes) where low energy inputs turn large paddles or other gentle mixing devices to enhance the formation of floc. In fact, coagulation and flocculation processes are ongoing once the metal salt coagulants are added.
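To make the detention-time figures concrete, basin volume follows directly from flow rate and detention time (V = Q × t). Below is a minimal Python sketch of this sizing arithmetic; the 1,000 m³/h plant flow is an assumed example value, not a figure from the text:

```python
def basin_volume_m3(flow_m3_per_h: float, detention_min: float) -> float:
    """Required basin volume for a given flow and detention time (V = Q * t)."""
    return flow_m3_per_h * (detention_min / 60.0)

# Hypothetical plant treating 1,000 m^3/h with a 30-minute flocculation stage,
# inside the 15-45 minute detention range cited above.
print(basin_volume_m3(1000.0, 30.0))  # -> 500.0 (m^3)
```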
Organic polymers were developed in the 1960s as aids to coagulants and, in some cases, as replacements for the inorganic metal salt coagulants. Synthetic organic polymers are high-molecular-weight compounds that carry negative, positive or neutral charges. When organic polymers are added to water with particulates, the high-molecular-weight compounds adsorb onto particle surfaces and, through interparticle bridging, coalesce with other particles to form floc. PolyDADMAC is a popular cationic (positively charged) organic polymer used in water purification plants. Sedimentation: Waters exiting the flocculation basin may enter the sedimentation basin, also called a clarifier or settling basin. It is a large tank with low water velocities, allowing floc to settle to the bottom. The sedimentation basin is best located close to the flocculation basin so the transit between the two processes does not permit settlement or floc break-up. Sedimentation basins may be rectangular, where water flows from end to end, or circular, where flow is from the centre outward. Sedimentation basin outflow is typically over a weir so only a thin top layer of water, that furthest from the sludge, exits. Treatment: In 1904, Allen Hazen showed that the efficiency of a sedimentation process is a function of the particle settling velocity, the flow through the tank and the surface area of the tank (see the overflow-rate formulation below). Sedimentation tanks are typically designed within a range of overflow rates of 0.5 to 1.0 gallons per minute per square foot (or 1250 to 2500 litres per square metre per hour). In general, sedimentation basin efficiency is not a function of detention time or depth of the basin, although basin depth must be sufficient so that water currents do not disturb the sludge and settled particle interactions are promoted. As particle concentrations in the settled water increase near the sludge surface on the bottom of the tank, settling velocities can increase due to collisions and agglomeration of particles. Typical detention times for sedimentation vary from 1.5 to 4 hours and basin depths vary from 10 to 15 feet (3 to 4.5 metres). Lamella clarifiers (inclined flat plates or tubes) can be added to traditional sedimentation basins to improve particle removal performance. Inclined plates and tubes drastically increase the surface area available for particles to be removed, in concert with Hazen's original theory. The amount of ground surface area occupied by a sedimentation basin with inclined plates or tubes can be far smaller than that of a conventional sedimentation basin. Treatment: Sludge storage and removal: As particles settle to the bottom of a sedimentation basin, a layer of sludge is formed on the floor of the tank which must be removed and treated. The amount of sludge generated is significant, often 3 to 5 per cent of the total volume of water to be treated. The cost of treating and disposing of the sludge can impact the operating cost of a water treatment plant. The sedimentation basin may be equipped with mechanical cleaning devices that continually clean its bottom, or the basin can be periodically taken out of service and cleaned manually. Treatment: Floc blanket clarifiers: A subcategory of sedimentation is the removal of particulates by entrapment in a layer of suspended floc as the water is forced upward. The major advantage of floc blanket clarifiers is that they occupy a smaller footprint than conventional sedimentation. The disadvantages are that particle removal efficiency can be highly variable depending on changes in influent water quality and influent water flow rate.
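Hazen's result is commonly summarised through the surface overflow rate: in an ideal basin, a particle is captured when its settling velocity is at least the flow divided by the plan area, independently of basin depth. This is the standard textbook formulation rather than a quotation from the text above:

```latex
v_o = \frac{Q}{A}, \qquad \text{a particle is removed when } v_s \ge v_o
```

where Q is the volumetric flow through the tank, A is its surface area, and v_s is the particle settling velocity. The design range quoted above, 1250 to 2500 litres per square metre per hour, corresponds to overflow velocities of roughly 1.25 to 2.5 m/h.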
Dissolved air flotation: When particles to be removed do not settle out of solution easily, dissolved air flotation (DAF) is often used. After the coagulation and flocculation processes, water flows to DAF tanks, where air diffusers on the tank bottom create fine bubbles that attach to the floc, resulting in a floating mass of concentrated floc. The floating floc blanket is removed from the surface and clarified water is withdrawn from the bottom of the DAF tank. Treatment: Water supplies that are particularly vulnerable to unicellular algae blooms, and supplies with low turbidity and high colour, often employ DAF. Filtration: After separating most floc, the water is filtered as the final step to remove remaining suspended particles and unsettled floc. Treatment: Rapid sand filters: The most common type of filter is a rapid sand filter. Water moves vertically through sand which often has a layer of activated carbon or anthracite coal above the sand. The top layer removes organic compounds, which contribute to taste and odour. The space between sand particles is larger than the smallest suspended particles, so simple filtration is not enough. Most particles pass through the surface layers but are trapped in pore spaces or adhere to sand particles. Effective filtration extends into the depth of the filter. This property of the filter is key to its operation: if the top layer of sand were to block all the particles, the filter would quickly clog. Treatment: To clean the filter, water is passed quickly upward through the filter, opposite the normal direction (called backflushing or backwashing), to remove embedded or unwanted particles. Prior to this step, compressed air may be blown up through the bottom of the filter to break up the compacted filter media and aid the backwashing process; this is known as air scouring. This contaminated water can be disposed of, along with the sludge from the sedimentation basin, or it can be recycled by mixing with the raw water entering the plant, although this is often considered poor practice since it re-introduces an elevated concentration of bacteria into the raw water. Treatment: Some water treatment plants employ pressure filters. These work on the same principle as rapid gravity filters, differing in that the filter medium is enclosed in a steel vessel and the water is forced through it under pressure. Advantages: they filter out much smaller particles than paper and sand filters can; they filter out virtually all particles larger than their specified pore sizes; they are quite thin, so liquids flow through them fairly rapidly; they are reasonably strong, so they can withstand pressure differences across them of typically 2–5 atmospheres; and they can be cleaned (backflushed) and reused. Treatment: Slow sand filters: Slow sand filters may be used where there is sufficient land and space, as the water flows very slowly through the filters. These filters rely on biological treatment processes for their action rather than physical filtration. They are carefully constructed using graded layers of sand, with the coarsest sand, along with some gravel, at the bottom and the finest sand at the top. Drains at the base convey treated water away for disinfection. Filtration depends on the development of a thin biological layer, called the zoogleal layer or Schmutzdecke, on the surface of the filter. An effective slow sand filter may remain in service for many weeks or even months if the pretreatment is well designed, and produces water with a very low available nutrient level, which physical methods of treatment rarely achieve.
Very low nutrient levels allow water to be safely sent through distribution systems with very low disinfectant levels, thereby reducing consumer irritation over offensive levels of chlorine and chlorine by-products. Slow sand filters are not backwashed; they are maintained by having the top layer of sand scraped off when the flow is eventually obstructed by biological growth. Treatment: Bank filtration: In bank filtration, natural sediments in a riverbank are used to provide the first stage of contaminant filtration. While typically not clean enough to be used directly for drinking water, the water gained from the associated extraction wells is much less problematic than river water taken directly from the river. Treatment: Membrane filtration: Membrane filters are widely used for filtering both drinking water and sewage. For drinking water, membrane filters can remove virtually all particles larger than 0.2 μm, including Giardia and Cryptosporidium. Membrane filters are an effective form of tertiary treatment when it is desired to reuse the water for industry, for limited domestic purposes, or before discharging the water into a river that is used by towns further downstream. They are widely used in industry, particularly for beverage preparation (including bottled water). However, no filtration can remove substances that are actually dissolved in the water, such as phosphates, nitrates and heavy metal ions. Treatment: Removal of ions and other dissolved substances: Ultrafiltration membranes use polymer membranes with chemically formed microscopic pores that can be used to filter out dissolved substances, avoiding the use of coagulants. The type of membrane media determines how much pressure is needed to drive the water through and what sizes of micro-organisms can be filtered out. Ion exchange: Ion-exchange systems use ion-exchange resin- or zeolite-packed columns to replace unwanted ions. The most common case is water softening, consisting of the removal of Ca2+ and Mg2+ ions and their replacement with benign (soap-friendly) Na+ or K+ ions. Ion-exchange resins are also used to remove toxic ions such as nitrite, lead, mercury, arsenic and many others. Treatment: Precipitative softening: Water rich in hardness (calcium and magnesium ions) is treated with lime (calcium oxide) and/or soda ash (sodium carbonate) to precipitate calcium carbonate out of solution, utilising the common-ion effect. Treatment: Electrodeionization: Water is passed between a positive electrode and a negative electrode. Ion-exchange membranes allow only positive ions to migrate from the treated water toward the negative electrode and only negative ions toward the positive electrode. High-purity deionised water is produced continuously, similar to ion-exchange treatment. Complete removal of ions from water is possible if the right conditions are met. The water is normally pre-treated with a reverse osmosis unit to remove non-ionic organic contaminants, and with gas transfer membranes to remove carbon dioxide. A water recovery of 99% is possible if the concentrate stream is fed to the RO inlet.
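To put the 99% recovery figure in perspective, the standard mass balance for a membrane stage relates feed, product and concentrate flows. A minimal Python sketch follows, assuming for illustration a 100 m³/h feed and complete rejection of dissolved ions (both are assumptions, not figures from the text):

```python
def concentrate_flow(feed_m3_per_h: float, recovery: float) -> float:
    """Reject (concentrate) flow remaining at a given recovery fraction."""
    return feed_m3_per_h * (1.0 - recovery)

def concentration_factor(recovery: float) -> float:
    """Factor by which fully rejected ions concentrate: CF = 1 / (1 - r)."""
    return 1.0 / (1.0 - recovery)

# Hypothetical 100 m^3/h feed at the 99% recovery cited above.
print(concentrate_flow(100.0, 0.99))  # -> ~1.0 m^3/h of concentrate
print(concentration_factor(0.99))     # -> ~100x the feed concentration
```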
Treatment: Disinfection: Disinfection is accomplished both by filtering out harmful micro-organisms and by adding disinfectant chemicals. Water is disinfected to kill any pathogens which pass through the filters and to provide a residual dose of disinfectant to kill or inactivate potentially harmful micro-organisms in the storage and distribution systems. Possible pathogens include viruses; bacteria, including Salmonella, Vibrio cholerae, Campylobacter and Shigella; and protozoa, including Giardia lamblia and Cryptosporidium. After the introduction of any chemical disinfecting agent, the water is usually held in temporary storage, often called a contact tank or clear well, to allow the disinfecting action to complete. Treatment: Chlorine disinfection: The most common disinfection method involves some form of chlorine or its compounds such as chloramine or chlorine dioxide. Chlorine is a strong oxidant that rapidly kills many harmful micro-organisms. Because chlorine is a toxic gas, there is a danger of a release associated with its use. This problem is avoided by the use of sodium hypochlorite, a relatively inexpensive solution used in household bleach that releases free chlorine when dissolved in water. Chlorine solutions can be generated on site by electrolysing common salt solutions. A solid form, calcium hypochlorite, releases chlorine on contact with water. Handling the solid, however, requires more routine human contact, through opening bags and pouring, than the use of gas cylinders or bleach, which are more easily automated. The generation of liquid sodium hypochlorite is inexpensive and also safer than the use of gas or solid chlorine. Chlorine levels up to 4 milligrams per litre (4 parts per million) are considered safe in drinking water. All forms of chlorine are widely used, despite their respective drawbacks. One drawback is that chlorine from any source reacts with natural organic compounds in the water to form potentially harmful chemical by-products. These by-products, trihalomethanes (THMs) and haloacetic acids (HAAs), are both carcinogenic in large quantities and are regulated by the United States Environmental Protection Agency (EPA) and the Drinking Water Inspectorate in the UK. The formation of THMs and haloacetic acids may be minimised by the effective removal of as many organics from the water as possible prior to chlorine addition. Although chlorine is effective in killing bacteria, it has limited effectiveness against pathogenic protozoa that form cysts in water, such as Giardia lamblia and Cryptosporidium. Treatment: Chlorine dioxide disinfection: Chlorine dioxide is a faster-acting disinfectant than elemental chlorine. It is relatively rarely used because in some circumstances it may create excessive amounts of chlorite, which is a by-product regulated to low allowable levels in the United States. Chlorine dioxide can be supplied as an aqueous solution and added to water to avoid gas handling problems; chlorine dioxide gas accumulations may spontaneously detonate. Treatment: Chloramination: The use of chloramine is becoming more common as a disinfectant. Although chloramine is not as strong an oxidant and has a lower redox potential than free chlorine, it provides a longer-lasting residual. It also does not readily form THMs or haloacetic acids (disinfection by-products). It is possible to convert chlorine to chloramine by adding ammonia to the water after adding chlorine; the chlorine and ammonia react to form chloramine. Water distribution systems disinfected with chloramines may experience nitrification, as ammonia is a nutrient for bacterial growth, with nitrates being generated as a by-product.
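For any chlorine-based disinfectant, the required chemical feed rate scales linearly with plant flow and target dose, since 1 mg/L equals 1 g per cubic metre. A minimal Python sketch; the 500 m³/h flow is an assumed example, while the 4 mg/L figure is the ceiling cited above:

```python
def disinfectant_feed_g_per_h(flow_m3_per_h: float, dose_mg_per_L: float) -> float:
    """Feed rate for a target dose: 1 mg/L == 1 g/m^3, so g/h = (m^3/h) * (mg/L)."""
    return flow_m3_per_h * dose_mg_per_L

# Hypothetical 500 m^3/h plant dosing at the 4 mg/L (4 ppm) ceiling cited above.
print(disinfectant_feed_g_per_h(500.0, 4.0))  # -> 2000.0 g/h, i.e. 2 kg/h
```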
Treatment: Ozone disinfection: Ozone is an unstable molecule which readily gives up one atom of oxygen, providing a powerful oxidising agent which is toxic to most waterborne organisms. It is a very strong, broad-spectrum disinfectant that is widely used in Europe and in a few municipalities in the United States and Canada. Ozone disinfection, or ozonation, is an effective method to inactivate harmful protozoa that form cysts. It also works well against almost all other pathogens. Ozone is made by passing oxygen through ultraviolet light or a "cold" electrical discharge. To use ozone as a disinfectant, it must be created on-site and added to the water by bubble contact. Some of the advantages of ozone include the production of fewer dangerous by-products and the absence of taste and odour problems (in comparison to chlorination). No residual ozone is left in the water. In the absence of a residual disinfectant in the water, chlorine or chloramine may be added throughout a distribution system to remove any potential pathogens in the distribution piping. Treatment: Ozone has been used in drinking water plants since 1906, when the first industrial ozonation plant was built in Nice, France. The U.S. Food and Drug Administration has accepted ozone as being safe, and it is applied as an anti-microbiological agent for the treatment, storage, and processing of foods. However, although fewer by-products are formed by ozonation, it has been discovered that ozone reacts with bromide ions in water to produce concentrations of the suspected carcinogen bromate. Bromide can be found in fresh water supplies in sufficient concentrations to produce (after ozonation) more than 10 parts per billion (ppb) of bromate, the maximum contaminant level established by the USEPA. Ozone disinfection is also energy intensive. Treatment: Ultraviolet disinfection: Ultraviolet light (UV) is very effective at inactivating cysts in low-turbidity water. UV light's disinfection effectiveness decreases as turbidity increases, a result of the absorption, scattering, and shadowing caused by the suspended solids. The main disadvantage to the use of UV radiation is that, like ozone treatment, it leaves no residual disinfectant in the water; therefore, it is sometimes necessary to add a residual disinfectant after the primary disinfection process. This is often done through the addition of chloramines, discussed above as a primary disinfectant. When used in this manner, chloramines provide an effective residual disinfectant with very few of the negative effects of chlorination. Treatment: Over 2 million people in 28 developing countries use solar disinfection for daily drinking water treatment. Ionizing radiation: Like UV, ionizing radiation (X-rays, gamma rays, and electron beams) has been used to sterilise water. Treatment: Bromination and iodination: Bromine and iodine can also be used as disinfectants. However, chlorine in water is over three times more effective as a disinfectant against Escherichia coli than an equivalent concentration of bromine, and over six times more effective than an equivalent concentration of iodine. Iodine is commonly used for portable water purification, and bromine is common as a swimming pool disinfectant. Treatment: Portable water purification: Portable water purification devices and methods are available for disinfection and treatment in emergencies or in remote locations. Disinfection is the primary goal, since aesthetic considerations such as taste, odour, appearance, and trace chemical contamination do not affect the short-term safety of drinking water.
Additional treatment options: Water fluoridation: In many areas fluoride is added to water with the goal of preventing tooth decay. Fluoride is usually added after the disinfection process. In the U.S., fluoridation is usually accomplished by the addition of hexafluorosilicic acid, which decomposes in water, yielding fluoride ions. Treatment: Water conditioning: This is a method of reducing the effects of hard water. In water systems subject to heating, hardness salts can be deposited as the decomposition of bicarbonate ions creates carbonate ions that precipitate out of solution. Water with high concentrations of hardness salts can be treated with soda ash (sodium carbonate), which precipitates out the excess salts through the common-ion effect, producing calcium carbonate of very high purity. The precipitated calcium carbonate is traditionally sold to the manufacturers of toothpaste. Several other methods of industrial and residential water treatment are claimed (without general scientific acceptance) to use magnetic and/or electrical fields to reduce the effects of hard water. Treatment: Plumbosolvency reduction: In areas with naturally acidic waters of low conductivity (i.e. surface rainfall in upland mountains of igneous rocks), the water may be capable of dissolving lead from any lead pipes that it is carried in. The addition of small quantities of phosphate ion and increasing the pH slightly both assist in greatly reducing plumbosolvency by creating insoluble lead salts on the inner surfaces of the pipes. Treatment: Radium removal: Some groundwater sources contain radium, a radioactive chemical element. Typical sources include many groundwater sources north of the Illinois River in Illinois, United States. Radium can be removed by ion exchange or by water conditioning. The backflush or sludge that is produced is, however, a low-level radioactive waste. Fluoride removal: Although fluoride is added to water in many areas, some areas of the world have excessive levels of natural fluoride in the source water. Excessive levels can be toxic or cause undesirable cosmetic effects such as staining of teeth. Fluoride levels can be reduced by treatment with activated alumina and bone char filter media. Other water purification techniques: Other popular methods for purifying water, especially for local private supplies, are listed below. In some countries some of these methods are used for large-scale municipal supplies. Particularly important are distillation (desalination of seawater) and reverse osmosis. Other water purification techniques: Thermal: Bringing water to its boiling point (about 100 °C or 212 °F at sea level) is the oldest and most effective method, since it eliminates most microbes that cause intestinal disease, but it cannot remove chemical toxins or impurities. For human health, complete sterilisation of water is not required, since heat-resistant microbes do not affect the intestines. The traditional advice of boiling water for ten minutes is mainly for additional safety, since microbes begin to die at temperatures greater than 60 °C (140 °F). Though the boiling point decreases with increasing altitude, the decrease is not enough to affect disinfection. In areas where the water is "hard" (that is, containing significant dissolved calcium salts), boiling decomposes the bicarbonate ions, resulting in partial precipitation as calcium carbonate. This is the "fur" that builds up on kettle elements, etc., in hard water areas.
With the exception of calcium, boiling does not remove solutes with a higher boiling point than water, and in fact increases their concentration (due to some water being lost as vapour). Boiling does not leave a residual disinfectant in the water; therefore, water that is boiled and then stored for any length of time may acquire new pathogens. Other water purification techniques: Adsorption: Granular activated carbon is a form of activated carbon with a high surface area. It adsorbs many compounds, including many toxic compounds. Passing water through activated carbon is common in municipal regions with organic contamination, taste or odour problems. Many household water filters and fish tanks use activated carbon filters to purify water. Household filters for drinking water sometimes contain silver as metallic silver nanoparticles. If water is held in the carbon block for longer periods, microorganisms can grow inside, resulting in fouling and contamination. Silver nanoparticles are an excellent antibacterial material and can decompose toxic halo-organic compounds such as pesticides into non-toxic organic products. Filtered water must be used soon after it is filtered, as the low amount of remaining microbes may proliferate over time. In general, these home filters remove over 90% of the chlorine in a glass of treated water. These filters must be periodically replaced, otherwise the bacterial content of the water may actually increase due to the growth of bacteria within the filter unit. Other water purification techniques: Distillation: Distillation involves boiling water to produce water vapour. The vapour contacts a cool surface, where it condenses as a liquid. Because the solutes are not normally vaporised, they remain in the boiling solution. Even distillation does not completely purify water, because of contaminants with similar boiling points and droplets of unvaporised liquid carried with the steam. However, 99.9% pure water can be obtained by distillation. Other water purification techniques: Direct contact membrane distillation (DCMD) passes heated seawater along the surface of a hydrophobic polymer membrane. Evaporated water passes from the hot side through pores in the membrane, forming a stream of cold pure water on the other side. The difference in vapour pressure between the hot and cold sides helps to push water molecules through. Other water purification techniques: Reverse osmosis: Reverse osmosis involves mechanical pressure applied to force water through a semi-permeable membrane; contaminants are left behind on the feed side of the membrane. Reverse osmosis is theoretically the most thorough method of large-scale water purification available, although perfect semi-permeable membranes are difficult to create. Unless membranes are well maintained, algae and other life forms can colonise the membranes. Other water purification techniques: Crystallization: Carbon dioxide or another low-molecular-weight gas can be mixed with contaminated water at high pressure and low temperature to exothermically form gas hydrate crystals. The hydrate may be separated by centrifuge or sedimentation, and water can be released from the hydrate crystals by heating. Other water purification techniques: In situ oxidation: In situ chemical oxidation (ISCO) is an advanced oxidation process. It is used for soil and/or groundwater remediation to reduce the concentrations of targeted contaminants.
ISCO is accomplished by injecting or otherwise introducing oxidizers into the contaminated medium (soil or groundwater) to destroy contaminants. It can be used to remediate a variety of organic compounds, including some that are resistant to natural degradation. Bioremediation: Bioremediation uses microorganisms to remove waste products from a contaminated area. Since 1991 bioremediation has been a suggested tactic for removing impurities such as alkanes, perchlorates, and metals. Bioremediation has seen success here because perchlorates, being highly soluble, are difficult to remove by other means. Example applications of Dechloromonas agitata strain CKB include field studies conducted in Maryland and the US Southwest. Other water purification techniques: Hydrogen peroxide: Hydrogen peroxide (H2O2) is a common disinfectant that can purify water. It is typically produced at chemical plants and transported to the contaminated water. An alternative approach employs a gold-palladium catalyst to synthesize H2O2 from ambient hydrogen and oxygen at the use site. The latter was reported to be faster and 10⁷ times more potent at killing Escherichia coli than commercial H2O2, and over 10⁸ times more effective than chlorine. The catalytic reaction also produces reactive oxygen species (ROS) that bind and degrade other compounds. Safety and controversies: In April 2007, the water supply of Spencer, Massachusetts, in the United States became contaminated with excess sodium hydroxide (lye) when its treatment equipment malfunctioned. Many municipalities have moved from free chlorine to chloramine as a disinfection agent. However, chloramine appears to be a corrosive agent in some water systems. Chloramine can dissolve the "protective" film inside older service lines, leading to the leaching of lead into residential spigots. This can result in harmful exposure, including elevated blood lead levels. Lead is a known neurotoxin. Safety and controversies: Demineralised water: Distillation removes all minerals from water, and the membrane methods of reverse osmosis and nanofiltration remove most to all minerals. This results in demineralised water, which is not considered ideal drinking water. The World Health Organization has investigated the health effects of demineralised water since 1980. Experiments in humans found that demineralised water increased diuresis and the elimination of electrolytes, with decreased blood serum potassium concentration. Magnesium, calcium, and other minerals in water can help to protect against nutritional deficiency. Demineralised water may also increase the risk from toxic metals, because it more readily leaches materials from piping such as lead and cadmium; this is prevented by dissolved minerals such as calcium and magnesium. Low-mineral water has been implicated in specific cases of lead poisoning in infants, when lead from pipes leached at especially high rates into the water. Recommendations for magnesium have been put at a minimum of 10 mg/L, with 20–30 mg/L optimum; for calcium, a 20 mg/L minimum and a 40–80 mg/L optimum; and a total water hardness (adding magnesium and calcium) of 2 to 4 mmol/L. At water hardness above 5 mmol/L, a higher incidence of gallstones, kidney stones, urinary stones, arthrosis, and arthropathies has been observed.
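Since the hardness recommendations above are given in mmol/L while water reports often use "mg/L as CaCO3", a conversion helps: multiply by the molar mass of calcium carbonate (about 100 g/mol). A small Python sketch of this standard textbook conversion (the conversion itself is not a claim from the text):

```python
CACO3_MOLAR_MASS_G_PER_MOL = 100.09  # molar mass of calcium carbonate

def hardness_as_caco3_mg_per_L(total_hardness_mmol_per_L: float) -> float:
    """Convert total hardness (Ca + Mg, mmol/L) to the 'mg/L as CaCO3' convention."""
    return total_hardness_mmol_per_L * CACO3_MOLAR_MASS_G_PER_MOL

# The 2-4 mmol/L range recommended above is roughly 200-400 mg/L as CaCO3.
print(hardness_as_caco3_mg_per_L(2.0))  # -> ~200 mg/L as CaCO3
print(hardness_as_caco3_mg_per_L(4.0))  # -> ~400 mg/L as CaCO3
```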
Additionally, desalination processes can increase the risk of bacterial contamination. Manufacturers of home water distillers claim the opposite: that minerals in water are the cause of many diseases, and that most beneficial minerals come from food, not water. History: The first experiments into water filtration were made in the 17th century. Sir Francis Bacon attempted to desalinate sea water by passing the flow through a sand filter. Although his experiment did not succeed, it marked the beginning of a new interest in the field. The fathers of microscopy, Antonie van Leeuwenhoek and Robert Hooke, used the newly invented microscope to observe for the first time small material particles that lay suspended in the water, laying the groundwork for the future understanding of waterborne pathogens. History: Sand filter: The first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in Paisley, Scotland, John Gibb, installed an experimental filter, selling his unwanted surplus to the public. This method was refined in the following two decades by engineers working for private water companies, and it culminated in the first treated public water supply in the world, installed by engineer James Simpson for the Chelsea Waterworks Company in London in 1829. This installation provided filtered water for every resident of the area, and the network design was widely copied throughout the United Kingdom in the ensuing decades. History: The practice of water treatment soon became mainstream and common, and the virtues of the system were made starkly apparent after the investigations of the physician John Snow during the 1854 Broad Street cholera outbreak. Snow was sceptical of the then-dominant miasma theory, which stated that diseases were caused by noxious "bad airs". Although the germ theory of disease had not yet been developed, Snow's observations led him to discount the prevailing theory. His 1855 essay On the Mode of Communication of Cholera conclusively demonstrated the role of the water supply in spreading the cholera epidemic in Soho, using a dot distribution map and statistical proof to illustrate the connection between the quality of the water source and cholera cases. His data convinced the local council to disable the water pump, which promptly ended the outbreak. History: The Metropolis Water Act introduced the regulation of the water supply companies in London, including minimum standards of water quality for the first time. The Act "made provision for securing the supply to the Metropolis of pure and wholesome water", and required that all water be "effectually filtered" from 31 December 1855. This was followed up with legislation for the mandatory inspection of water quality, including comprehensive chemical analyses, in 1858. This legislation set a worldwide precedent for similar state public health interventions across Europe. The Metropolitan Commission of Sewers was formed at the same time, water filtration was adopted throughout the country, and new water intakes on the Thames were established above Teddington Lock. Automatic pressure filters, where the water is forced under pressure through the filtration system, were introduced in 1899 in England. History: Water chlorination: John Snow was the first to successfully use chlorine to disinfect the water supply in Soho that had helped spread the cholera outbreak. William Soper also used chlorinated lime to treat the sewage produced by typhoid patients in 1879.
History: In a paper published in 1894, Moritz Traube formally proposed the addition of chloride of lime (calcium hypochlorite) to water to render it "germ-free". Two other investigators confirmed Traube's findings and published their papers in 1895. Early attempts at implementing water chlorination at a water treatment plant were made in 1893 in Hamburg, Germany, and in 1897 the city of Maidstone, England, was the first to have its entire water supply treated with chlorine. Permanent water chlorination began in 1905, when a faulty slow sand filter and a contaminated water supply led to a serious typhoid fever epidemic in Lincoln, England. Alexander Cruickshank Houston used chlorination of the water to stem the epidemic. His installation fed a concentrated solution of chloride of lime to the water being treated. The chlorination of the water supply helped stop the epidemic, and as a precaution the chlorination was continued until 1911, when a new water supply was instituted. History: The first continuous use of chlorine in the United States for disinfection took place in 1908 at Boonton Reservoir (on the Rockaway River), which served as the supply for Jersey City, New Jersey. Chlorination was achieved by controlled additions of dilute solutions of chloride of lime (calcium hypochlorite) at doses of 0.2 to 0.35 ppm. The treatment process was conceived by John L. Leal, and the chlorination plant was designed by George Warren Fuller. Over the next few years, chlorine disinfection using chloride of lime was rapidly installed in drinking water systems around the world. History: The technique of purification of drinking water by use of compressed liquefied chlorine gas was developed by a British officer in the Indian Medical Service, Vincent B. Nesfield, in 1903. According to his own account: "It occurred to me that chlorine gas might be found satisfactory ... if suitable means could be found for using it.... The next important question was how to render the gas portable. This might be accomplished in two ways: by liquefying it, and storing it in lead-lined iron vessels, having a jet with a very fine capillary canal, and fitted with a tap or a screw cap. The tap is turned on, and the cylinder placed in the amount of water required. The chlorine bubbles out, and in ten to fifteen minutes the water is absolutely safe. This method would be of use on a large scale, as for service water carts." History: U.S. Army Major Carl Rogers Darnall, Professor of Chemistry at the Army Medical School, gave the first practical demonstration of this in 1910. Shortly thereafter, Major William J. L. Lyster of the Army Medical Department used a solution of calcium hypochlorite in a linen bag to treat water. For many decades, Lyster's method remained the standard for U.S. ground forces in the field and in camps, implemented in the form of the familiar Lyster Bag (also spelled Lister Bag). The bag was made of canvas and could hold 36 gallons of water. It was porous and held up by ropes, purifying water with the help of calcium hypochlorite solution. Each bag had a faucet attached, which was used to draw off water for testing as well as to dispense water for use. This became the basis for present-day systems of municipal water purification.
Global: According to a 2007 World Health Organization (WHO) report, 1.1 billion people lack access to an improved drinking water supply; 88% of the 4 billion annual cases of diarrhoeal disease are attributed to unsafe water and inadequate sanitation and hygiene, and 1.8 million people die from diarrhoeal disease each year. The WHO estimates that 94% of these diarrhoeal disease cases are preventable through modifications to the environment, including access to safe water. Simple techniques for treating water at home, such as chlorination, filters, and solar disinfection, and for storing it in safe containers, could save a huge number of lives each year. Reducing deaths from waterborne diseases is a major public health goal in developing countries. Global: The global water purification market is worth $22 billion. Home water filters and purifiers are common in India.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Char-grilled steak** Char-grilled steak: Char-grilled steak (also charcoal steak) is a method of preparing meat for human consumption. Although various animal steaks can technically be char-grilled, the process is generally used to cook chuck steaks. Char-grilled steaks are grilled with charcoal, and are not to be confused with gas-grilled steaks, which are usually grilled with propane. The richness of flavor of steaks cooked in this manner is usually attributed to the charcoal used to prepare them.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gray's conjecture** Gray's conjecture: In mathematics, Gray's conjecture is a conjecture made by Brayton Gray in 1984 about maps between loop spaces of spheres. It was later proved by John Harper.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hustle (dance)** Hustle (dance): The Hustle is a catch-all name for some disco dances which were extremely popular in the 1970s. In the late 1970s, the Bump, Hustle, Watergate and Spank were popular. It mostly refers to the unique partner dance done in nightclubs to disco music. Hustle has steps in common with Mambo and Salsa, and its basic steps are somewhat similar to the Euro dance style "discofox", which emerged at about the same time and is more familiar in various European countries. Modern partner hustle is sometimes referred to as New York hustle; however, its original name is the Latin hustle. Hustle (dance): A great source for research on the origins of the Hustle is a book written by Willie Estrada, one of the original pioneers of the Latin Hustle, titled "The Dancing Gangsters of the South Bronx (Rise of the Latin Hustle)". History: Latin Strut Joe Bataan recorded a song called "Latin Strut" after visiting a Bronx club called the "310 1/2" in 1973 and seeing the first version of a 6-step dance called "The Hustle". This new Latin rhythmic sound helped young Latino teenagers develop a faster, more robust version of the original slower-paced six-step dance, which was also a bit robotic. However, with the introduction of songs with faster rhythmic tempos, young hustle dancers started doing the dance with fancier gyrations, which is when it became known as the Latin Hustle. Other variations of the Hustle would soon be developed by the Black and White communities, who also helped spread the Hustle throughout New York City, and eventually the world. History: James Brown and Fatback The original early hustle was a 5-step count with no turns, created by Puerto Rican teenagers in late 1972 as a direct result of Puerto Rican elders objecting to young teenagers doing a grinding slow dance known as the 500. Created in the South Bronx among Puerto Rican teens, it was originally done at house parties, hooky gigs, and basement club dances in the South Bronx. It became known as "Spanish hustle"; from 1975 to 1976, funk band the Fatback Band made a song with that name. It was also known as the "Latin hustle", and was a 6-step count to the beat of the music. James Brown released the Everybody's Doin' the Hustle & Dead on the Double Bump album in 1975; the same year, The J.B.'s released the Hustle with Speed album. Around 1976 it became known as the "New York hustle". Later it was known as just "the hustle", when the dance became commercialized after the release of Saturday Night Fever in 1977. The early Latin hustle pioneers included Willie "Marine Boy" Estrada and many others. Some of them were members of a gang called the Imperial Bachelors, who used the Latin hustle as a way to bring peace into a violent South Bronx. They hosted hustle parties at St. Mary's Recreation Center on 145th St. and St. Ann's Ave. in 1974. Those parties ended in October 1974. However, it was the venue that produced some of the best hustle dancers in New York City, who would help spread the dance in nightclubs throughout New York City in late 1974. History: In 1975, music business entrepreneur Marty Angelo created the first all-hustle dance television show, entitled Disco Step-by-Step. Each one-hour show featured top hustle dancers and two 10-minute instructional segments that allowed viewers to learn how to hustle dance in the privacy of their own living rooms. One of the first shows featured a young Billy Fajardo and the Disco Dance Dimensions. Many of the show's video clips can be found on YouTube.
Marty Angelo also created the Hustle Hall of Fame online list of dancers in 2000, which he eventually turned over to Ron Bess and Mark James. History: Van McCoy's song The original Latin Hustle started being developed in late 1972 by Puerto Rican teenagers in the South Bronx; by 1974 it was being done all over New York and the Tri-State area, and by 1976 it had become an international dance sensation. Doing the hustle became a craze in 1975, following Van McCoy and the Soul City Symphony's song "The Hustle". Tipped off by DJ David Todd, McCoy sent his partner Charlie Kipps Jr. to the Adam's Apple discotheque on New York City's East Side. "The Hustle" reached the top of the Billboard Pop Singles chart the week ending July 26, 1975. History: Van McCoy also worked with The Stylistics on disco songs such as "Disco Baby", "Can't Give You Anything", "Love is the Answer" and "Funky Weekend", and Van McCoy and Charlie Kipps produced David Ruffin as well. History: Depiction in Saturday Night Fever The 1977 disco movie Saturday Night Fever (whose soundtrack includes Tavares, Yvonne Elliman, the Bee Gees, Kool & the Gang, KC & the Sunshine Band, and The Trammps) showed both the line and partner forms of hustle, as well as a dance referred to as the "tango hustle" (invented for that film by Deney Terrio, according to the DVD commentary). Although the popularity of the movie faded quickly as its hype died down, the hustle and the step-by-step dance have continued even after the Disco Sucks movement; the hustle is still a "global social dance" and took its place beside swing, salsa, mambo, cha-cha-cha, tango, rumba, bolero, nightclub two-step and other partner dances in America. New York hustle: The couple dance form of hustle is usually called The Hustle but is frequently referred to by other names, including the "Latin hustle" or "New York hustle". It has some resemblance to, and steps in common with, Mambo and Salsa. As in the Latin dances, couples tend to move more on the dance floor, as opposed to following a line of dance as in foxtrot. New York hustle: One similarity between hustle and swing is that the lead takes the back-forward steps from his left foot; however, it is not exactly a rock-step (there is no rocking action because of speed), and if the dance is taught by counting, the steps happen at the beginning of the count – "and-one, two, three" – rather than at the end of the count as in swing – "left, right, rock-step". However, those who originally developed this dance never used step counts; everything was developed by sight and sound, starting in 1972. New York hustle: The dance is somewhat unusual rhythmically because of the syncopation it is associated with. Most dances are danced to either 4/4 or 3/4 music with counting to match, with either a triple or duple base depending on the dance. The Latin Hustle is generally danced to 4/4 music but counted as a six-beat pattern. The most common Latin Hustle counting pattern is "&1 2 3 &1 2 3", meaning "LR L R LR L R" in the leader's footwork, with the follower dancing the natural opposite. The syncopation in three-count hustle can also be danced as 1&23, 12&3, or 123&.
New York hustle: Common steps Basic – similar to the basic from single-step swing, except the rock-step is at the beginning. Turn – a 180-degree clockwise turn taken after the rock-step, between counts 2 and 3, followed by a rock-step. Left Turn – a 180-degree counterclockwise turn taken after the rock-step, between counts 1 and 2, followed by a rock-step. Side Break – the lead sends the follow out while still holding her left hand, then picks her back up. Wheel – the couple, in a double hand-hold, pumps the arms like a bellows; the couple as a whole rotates 180 degrees clockwise. Inside Turn or Loop Turn – similar to the loop turn from swing; the follower twirls 360 degrees counterclockwise. Wrap – similar to the wrap from western swing, but the footwork is the same as a half turn for the hustle. Two hand turn – uses the 180-degree turn footwork; before the step the lead takes the follower's right hand in his, then proceeds as if completing a wrap but changes back to the mirror two-hand position halfway through the step. Video clips: Clip (in WMV format) showing some of the basic step variations of today's hustle; partner dance video (in MOV format) showing a very smooth hustle; and two men and one woman dancing hustle (in AVI format); all from Hustle Dance Club
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zathura (document viewer)** Zathura (document viewer): Zathura is a free, plugin-based document viewer. Plugins are available for PDF (via poppler or MuPDF), PostScript and DjVu. It was written to be lightweight and controlled with vi-like keybindings. Zathura's customizability makes it well-liked by many Linux users. Zathura has official packages available in Arch Linux, Debian, Fedora, Gentoo, OpenBSD, OpenSUSE, Source Mage, and Ubuntu, and an unofficial macOS package provided by MacPorts. Zathura was named after the 2002 book Zathura and the 2005 film Zathura: A Space Adventure. History: Development on Zathura began on 12 August 2009. On 18 September 2009, version 0.0.1 was announced to the Arch Linux community. Zathura has been an official Arch Linux package since April 2010; by the end of July that year it had also been added to the Source Mage Linux distribution. It has been an official Debian package since at least 2011, as part of Debian Squeeze. Features: Zathura automatically reloads documents. When working in compiled documents such as those written in LaTeX, Zathura will refresh the output whenever compilation takes place. Zathura has the option of enabling inverse search (using "synctex"). Zathura can adjust the document to best-fit or to fit width, and it can rotate pages. It can view pages side-by-side and has a fullscreen mode. Pages can also be recolored to have a black background and white foreground. Features: Zathura can search for text and copy text to the primary X selection. It supports bookmarks and can open encrypted files. The behavior and appearance of Zathura can be customized using a configuration file, as in the example below. Zathura has the ability to execute external shell commands. It can be opened in tabs using tabbed. Zathura implements an optional sandbox mode using a seccomp filter to restrict the consequences of potential vulnerabilities.
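As an illustration of that configuration file, here is a minimal, hypothetical ~/.config/zathura/zathurarc. It follows zathura's set/map syntax, but the particular values and key bindings shown are assumptions chosen for the example, not settings taken from this article:

```
# zathurarc - illustrative example, not from the article
# "recolor" is the black-background/white-foreground mode described above
set recolor true
set recolor-darkcolor  "#ffffff"
set recolor-lightcolor "#000000"
# open documents zoomed to the page width
set adjust-open "width"
# vi-like remap: toggle recoloring with Ctrl-r (assumed binding)
map <C-r> recolor
```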
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bend minimization** Bend minimization: In graph drawing styles that represent the edges of a graph by polylines (sequences of line segments connected at bends), it is desirable to minimize the number of bends per edge (sometimes called the curve complexity) or the total number of bends in a drawing. Bend minimization is the algorithmic problem of finding a drawing that minimizes these quantities. Eliminating all bends: The prototypical example of bend minimization is Fáry's theorem, which states that every planar graph can be drawn with no bends, that is, with all its edges drawn as straight line segments.Drawings of a graph in which the edges are both bendless and axis-aligned are sometimes called rectilinear drawings, and are one way of constructing RAC drawings in which all crossings are at right angles. However, it is NP-complete to determine whether a planar graph has a planar rectilinear drawing, and NP-complete to determine whether an arbitrary graph has a rectilinear drawing that allows crossings. Bend minimization: Tamassia (1987) showed that bend minimization of orthogonal drawings of planar graphs, in which the vertices are placed in an integer lattice and the edges are drawn as axis-aligned polylines, could be performed in polynomial time by translating the problem into one of minimum-cost network flow. However, if the planar embedding of the graph may be changed, then bend minimization becomes NP-complete, and must instead be solved by techniques such as integer programming that do not guarantee both a fast runtime and an exact answer. Few bends per edge: Many graph drawing styles allow bends, but only in a limited way: the curve complexity of these drawings (the maximum number of bends per edge) is bounded by some fixed constant. Allowing this constant to grow larger can be used to improve other aspects of the drawing, such as its area. Alternatively, in some cases, a drawing style may only be possible when bends are allowed; for instance, not every graph has a RAC drawing (a drawing with all crossings at right angles) with no bends, or with curve complexity two, but every graph has such a drawing with curve complexity three.
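To make Tamassia's flow formulation concrete, here is a small, simplified Python sketch using the networkx library. It follows the standard setup: each vertex supplies 4 units of 90-degree angle, each internal face f consumes 2·deg(f) − 4 units, the external face consumes 2·deg(f) + 4, and flow routed between adjacent faces costs one bend per unit. The helper hard-codes several simplifications (one aggregated face-to-face arc per face pair, an upper bound of 3 angle units per vertex-face incidence, lower bounds of 1 handled by pre-routing), so it is an illustration of the idea rather than a full implementation:

```python
# Minimal sketch of Tamassia-style bend minimization as min-cost flow.
# Requires networkx. Simplified; not a complete implementation.
import networkx as nx

def min_bends(vertices, face_deg, incidences, external_face):
    N = nx.DiGraph()
    # Each vertex supplies 4 units (four 90-degree angles around it);
    # networkx convention: negative demand = supply.
    for v in vertices:
        N.add_node(('v', v), demand=-4)
    # Internal face f consumes 2*deg(f) - 4 units; the external face
    # consumes 2*deg(f) + 4 (angle counts of an orthogonal polygon).
    for f, d in face_deg.items():
        N.add_node(('f', f), demand=2 * d + (4 if f == external_face else -4))
    # Vertex-to-face arcs carry angle units at cost 0. Each incidence must
    # carry at least 1 unit (an angle of at least 90 degrees); we pre-route
    # that unit by shifting demands, leaving capacity 3 - 1 = 2.
    for v, f in incidences:
        N.add_edge(('v', v), ('f', f), weight=0, capacity=2)
        N.nodes[('v', v)]['demand'] += 1
        N.nodes[('f', f)]['demand'] -= 1
    # Face-to-face arcs: each unit of flow is one bend (cost 1).
    for f in face_deg:
        for g in face_deg:
            if f != g:
                N.add_edge(('f', f), ('f', g), weight=1, capacity=10 ** 6)
    cost, _flow = nx.network_simplex(N)
    return cost  # minimum total number of bends

# A triangle has two faces of degree 3; an orthogonal drawing needs 1 bend.
faces = {'inner': 3, 'outer': 3}
inc = [(v, f) for v in 'abc' for f in faces]
print(min_bends('abc', faces, inc, 'outer'))  # -> 1
```

Running the same setup on the 4-cycle returns 0, matching the bendless rectangle drawing; the triangle's single forced bend corresponds to one unit of flow crossing from the inner to the outer face.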
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Erosive pustular dermatitis of the scalp** Erosive pustular dermatitis of the scalp: Erosive pustular dermatitis of the scalp presents with pustules, erosions, and crusts on the scalp of primarily older Caucasian females, and on biopsy, has a lymphoplasmacytic infiltrate with or without foreign body giant cells and pilosebaceous atrophy.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lip strap** Lip strap: A lip strap is a piece of horse tack made of rolled leather or occasionally thin chain, used on some types of English-style curb and pelham bits. The lip strap runs between the bit shanks and passes through a special center ring on the curb chain sometimes called the "fly link". It attaches to rings at the midpoint of the shanks and buckles on the near side. The lip strap helps keep a "mouthy" horse from mouthing or "lipping" the shank. It also helps prevent the curb chain from unfastening or otherwise moving too much. Lip strap: In western riding, the "slobber bar" or "shank hobble" placed between the rein rings of a curb bit serves the same purpose as a lip strap. The leather curb strap-like attachments that are sometimes used to connect the rings of a snaffle bit on a western bridle are also occasionally known as lip straps, though "bit hobble" is the term more often used.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Complex multiplication of abelian varieties** Complex multiplication of abelian varieties: In mathematics, an abelian variety A defined over a field K is said to have CM-type if it has a large enough commutative subring in its endomorphism ring End(A). The terminology here is from complex multiplication theory, which was developed for elliptic curves in the nineteenth century. One of the major achievements in algebraic number theory and algebraic geometry of the twentieth century was to find the correct formulations of the corresponding theory for abelian varieties of dimension d > 1. The problem is at a deeper level of abstraction, because it is much harder to manipulate analytic functions of several complex variables. Complex multiplication of abelian varieties: The formal definition is that End_Q(A), the tensor product of End(A) with the rational number field Q, should contain a commutative subring of dimension 2d over Q. When d = 1 this can only be a quadratic field, and one recovers the cases where End(A) is an order in an imaginary quadratic field. For d > 1 there are comparable cases for CM-fields, the complex quadratic extensions of totally real fields. There are other cases that reflect that A may not be a simple abelian variety (it might be a cartesian product of elliptic curves, for example). Another name for abelian varieties of CM-type is abelian varieties with sufficiently many complex multiplications. Complex multiplication of abelian varieties: It is known that if K is the complex numbers, then any such A has a field of definition which is in fact a number field. The possible types of endomorphism ring have been classified, as rings with involution (the Rosati involution), leading to a classification of CM-type abelian varieties. To construct such varieties in the same style as for elliptic curves, starting with a lattice Λ in Cd, one must take into account the Riemann relations of abelian variety theory. Complex multiplication of abelian varieties: The CM-type is a description of the action of a (maximal) commutative subring L of End_Q(A) on the holomorphic tangent space of A at the identity element. Spectral theory of a simple kind applies, to show that L acts via a basis of eigenvectors; in other words L has an action that is via diagonal matrices on the holomorphic vector fields on A. In the simple case, where L is itself a number field rather than a product of some number of fields, the CM-type is then a list of complex embeddings of L. There are 2d of those, occurring in complex conjugate pairs; the CM-type is a choice of one out of each pair. It is known that all such possible CM-types can be realised. Complex multiplication of abelian varieties: Basic results of Goro Shimura and Yutaka Taniyama compute the Hasse–Weil L-function of A, in terms of the CM-type and a Hecke L-function with Hecke character, having infinity-type derived from it. These generalise the results of Max Deuring for the elliptic curve case.
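For orientation, the following LaTeX snippet works out the classical d = 1 case mentioned above, an elliptic curve with complex multiplication by the Gaussian integers; it is a standard textbook illustration rather than material from this article:

```latex
% Elliptic curve case (d = 1): CM by the Gaussian integers
\[
  \Lambda = \mathbb{Z} + \mathbb{Z}i, \qquad
  E = \mathbb{C}/\Lambda .
\]
% Multiplication by i preserves the lattice, so it descends to E:
\[
  z \mapsto iz \quad\text{induces}\quad [i]\in\operatorname{End}(E),
  \qquad
  \operatorname{End}(E) \cong \mathbb{Z}[i],
\]
% and hence
\[
  \operatorname{End}_{\mathbb{Q}}(E)
  = \operatorname{End}(E)\otimes_{\mathbb{Z}}\mathbb{Q}
  \cong \mathbb{Q}(i),
\]
% a commutative ring of dimension 2 = 2d over Q, exactly as the
% definition of CM-type requires.
```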
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Symmetric space** Symmetric space: In mathematics, a symmetric space is a Riemannian manifold (or more generally, a pseudo-Riemannian manifold) whose group of symmetries contains an inversion symmetry about every point. This can be studied with the tools of Riemannian geometry, leading to consequences in the theory of holonomy; or algebraically through Lie theory, which allowed Cartan to give a complete classification. Symmetric spaces commonly occur in differential geometry, representation theory and harmonic analysis. Symmetric space: In geometric terms, a complete, simply connected Riemannian manifold is a symmetric space if and only if its curvature tensor is invariant under parallel transport. More generally, a Riemannian manifold (M, g) is said to be symmetric if and only if, for each point p of M, there exists an isometry of M fixing p and acting on the tangent space TpM as minus the identity (every symmetric space is complete, since any geodesic can be extended indefinitely via symmetries about the endpoints). Both descriptions can also naturally be extended to the setting of pseudo-Riemannian manifolds. Symmetric space: From the point of view of Lie theory, a symmetric space is the quotient G/H of a connected Lie group G by a Lie subgroup H which is (a connected component of) the invariant group of an involution of G. This definition includes more than the Riemannian definition, and reduces to it when H is compact. Riemannian symmetric spaces arise in a wide variety of situations in both mathematics and physics. Their central role in the theory of holonomy was discovered by Marcel Berger. They are important objects of study in representation theory and harmonic analysis as well as in differential geometry. Geometric definition: Let M be a connected Riemannian manifold and p a point of M. A diffeomorphism f of a neighborhood of p is said to be a geodesic symmetry if it fixes the point p and reverses geodesics through that point, i.e. if γ is a geodesic with γ(0)=p then f(γ(t))=γ(−t). It follows that the derivative of the map f at p is minus the identity map on the tangent space of p. On a general Riemannian manifold, f need not be isometric, nor can it be extended, in general, from a neighbourhood of p to all of M. M is said to be locally Riemannian symmetric if its geodesic symmetries are in fact isometric. This is equivalent to the vanishing of the covariant derivative of the curvature tensor. A locally symmetric space is said to be a (globally) symmetric space if in addition its geodesic symmetries can be extended to isometries on all of M. Basic properties The Cartan–Ambrose–Hicks theorem implies that M is locally Riemannian symmetric if and only if its curvature tensor is covariantly constant, and furthermore that every simply connected, complete locally Riemannian symmetric space is actually Riemannian symmetric. Every Riemannian symmetric space M is complete and Riemannian homogeneous (meaning that the isometry group of M acts transitively on M). In fact, already the identity component of the isometry group acts transitively on M (because M is connected). Locally Riemannian symmetric spaces that are not Riemannian symmetric may be constructed as quotients of Riemannian symmetric spaces by discrete groups of isometries with no fixed points, and as open subsets of (locally) Riemannian symmetric spaces. Examples Basic examples of Riemannian symmetric spaces are Euclidean space, spheres, projective spaces, and hyperbolic spaces, each with their standard Riemannian metrics. 
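As a concrete instance of these definitions, the round sphere can be written out explicitly; the following LaTeX snippet is a standard textbook example (not drawn from this article's own text):

```latex
% The n-sphere as a Riemannian symmetric space
\[
  S^n \;\cong\; SO(n+1)/SO(n).
\]
% Geodesic symmetry at p: reflection through the line spanned by p,
\[
  s_p(x) \;=\; 2\langle x,p\rangle\, p \;-\; x ,
\]
% a global isometry of S^n fixing p with ds_p = -\mathrm{id} on T_p S^n,
% so every geodesic symmetry extends globally and S^n is symmetric.
% The corresponding involution of G = SO(n+1) is conjugation
\[
  \sigma(g) = JgJ^{-1}, \qquad J = \operatorname{diag}(-1,1,\dots,1),
\]
% whose fixed-point set has identity component H = SO(n).
```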
More examples are provided by compact, semi-simple Lie groups equipped with a bi-invariant Riemannian metric. Every compact Riemann surface of genus greater than 1 (with its usual metric of constant curvature −1) is a locally symmetric space but not a symmetric space. Every lens space is locally symmetric but not symmetric, with the exception of L(2,1), which is symmetric. The lens spaces are quotients of the 3-sphere by a discrete isometry that has no fixed points. An example of a non-Riemannian symmetric space is anti-de Sitter space. Algebraic definition: Let G be a connected Lie group. Then a symmetric space for G is a homogeneous space G/H where the stabilizer H of a typical point is an open subgroup of the fixed point set of an involution σ in Aut(G). Thus σ is an automorphism of G with σ^2 = id_G and H is an open subgroup of the invariant set G^σ = {g ∈ G : σ(g) = g}. Algebraic definition: Because H is open, it is a union of components of G^σ (including, of course, the identity component). Algebraic definition: As an automorphism of G, σ fixes the identity element, and hence, by differentiating at the identity, it induces an automorphism of the Lie algebra g of G, also denoted by σ, whose square is the identity. It follows that the eigenvalues of σ are ±1. The +1 eigenspace is the Lie algebra h of H (since this is the Lie algebra of G^σ), and the −1 eigenspace will be denoted m. Since σ is an automorphism of g, this gives a direct sum decomposition g = h ⊕ m with [h, h] ⊂ h, [h, m] ⊂ m, [m, m] ⊂ h. Algebraic definition: The first condition is automatic for any homogeneous space: it just says the infinitesimal stabilizer h is a Lie subalgebra of g. The second condition means that m is an h-invariant complement to h in g. Thus any symmetric space is a reductive homogeneous space, but there are many reductive homogeneous spaces which are not symmetric spaces. The key feature of symmetric spaces is the third condition, that m brackets into h. Conversely, given any Lie algebra g with a direct sum decomposition satisfying these three conditions, the linear map σ, equal to the identity on h and minus the identity on m, is an involutive automorphism. Riemannian symmetric spaces satisfy the Lie-theoretic characterization: If M is a Riemannian symmetric space, the identity component G of the isometry group of M is a Lie group acting transitively on M (that is, M is Riemannian homogeneous). Therefore, if we fix some point p of M, M is diffeomorphic to the quotient G/K, where K denotes the isotropy group of the action of G on M at p. By differentiating the action at p we obtain an isometric action of K on TpM. This action is faithful (e.g., by a theorem of Kostant, any isometry in the identity component is determined by its 1-jet at any point) and so K is a subgroup of the orthogonal group of TpM, hence compact. Moreover, if we denote by s_p: M → M the geodesic symmetry of M at p, the map σ : G → G, h ↦ s_p ∘ h ∘ s_p is an involutive Lie group automorphism such that the isotropy group K is contained between the fixed point group G^σ and its identity component (hence an open subgroup) (G^σ)^o; see the definition and following proposition on page 209, chapter IV, section 3 in Helgason's Differential Geometry, Lie Groups, and Symmetric Spaces for further information. Riemannian symmetric spaces satisfy the Lie-theoretic characterization: To summarize, M is a symmetric space G/K with a compact isotropy group K.
Conversely, symmetric spaces with compact isotropy group are Riemannian symmetric spaces, although not necessarily in a unique way. To obtain a Riemannian symmetric space structure we need to fix a K-invariant inner product on the tangent space to G/K at the identity coset eK: such an inner product always exists by averaging, since K is compact, and by acting with G, we obtain a G-invariant Riemannian metric g on G/K. Riemannian symmetric spaces satisfy the Lie-theoretic characterization: To show that G/K is Riemannian symmetric, consider any point p = hK (a coset of K, where h ∈ G) and define s_p : M → M, h′K ↦ hσ(h^{-1}h′)K, where σ is the involution of G fixing K. Then one can check that s_p is an isometry with (clearly) s_p(p) = p and (by differentiating) ds_p equal to minus the identity on TpM. Thus s_p is a geodesic symmetry and, since p was arbitrary, M is a Riemannian symmetric space. Riemannian symmetric spaces satisfy the Lie-theoretic characterization: If one starts with a Riemannian symmetric space M and then performs these two constructions in sequence, then the resulting Riemannian symmetric space is isometric to the original one. This shows that the "algebraic data" (G, K, σ, g) completely describe the structure of M. Classification of Riemannian symmetric spaces: The algebraic description of Riemannian symmetric spaces enabled Élie Cartan to obtain a complete classification of them in 1926. Classification of Riemannian symmetric spaces: For a given Riemannian symmetric space M, let (G, K, σ, g) be the algebraic data associated to it. To classify the possible isometry classes of M, first note that the universal cover of a Riemannian symmetric space is again Riemannian symmetric, and the covering map is described by dividing the connected isometry group G of the covering by a subgroup of its center. Therefore, we may suppose without loss of generality that M is simply connected. (This implies K is connected by the long exact sequence of a fibration, because G is connected by assumption.) Classification scheme A simply connected Riemannian symmetric space is said to be irreducible if it is not the product of two or more Riemannian symmetric spaces. It can then be shown that any simply connected Riemannian symmetric space is a Riemannian product of irreducible ones. Therefore, we may further restrict ourselves to classifying the irreducible, simply connected Riemannian symmetric spaces. Classification of Riemannian symmetric spaces: The next step is to show that any irreducible, simply connected Riemannian symmetric space M is of one of the following three types: 1. Euclidean type: M has vanishing curvature, and is therefore isometric to a Euclidean space. 2. Compact type: M has nonnegative (but not identically zero) sectional curvature. 3. Non-compact type: M has nonpositive (but not identically zero) sectional curvature. Classification of Riemannian symmetric spaces: A more refined invariant is the rank, which is the maximum dimension of a subspace of the tangent space (to any point) on which the curvature is identically zero. The rank is always at least one, with equality if the sectional curvature is positive or negative. If the curvature is positive, the space is of compact type, and if negative, it is of noncompact type. The spaces of Euclidean type have rank equal to their dimension and are isometric to a Euclidean space of that dimension. Therefore, it remains to classify the irreducible, simply connected Riemannian symmetric spaces of compact and non-compact type.
In both cases there are two classes. Classification of Riemannian symmetric spaces: A. G is a (real) simple Lie group; B. G is either the product of a compact simple Lie group with itself (compact type), or a complexification of such a Lie group (non-compact type). Classification of Riemannian symmetric spaces: The examples in class B are completely described by the classification of simple Lie groups. For compact type, M is a compact simply connected simple Lie group, G is M × M and K is the diagonal subgroup. For non-compact type, G is a simply connected complex simple Lie group and K is its maximal compact subgroup. In both cases, the rank is the rank of G. Classification of Riemannian symmetric spaces: The compact simply connected Lie groups are the universal covers of the classical Lie groups SO(n), SU(n), Sp(n) and the five exceptional Lie groups E6, E7, E8, F4, G2. Classification of Riemannian symmetric spaces: The examples of class A are completely described by the classification of noncompact simply connected real simple Lie groups. For non-compact type, G is such a group and K is its maximal compact subgroup. Each such example has a corresponding example of compact type, by considering a maximal compact subgroup of the complexification of G which contains K. More directly, the examples of compact type are classified by involutive automorphisms of compact simply connected simple Lie groups G (up to conjugation). Such involutions extend to involutions of the complexification of G, and these in turn classify non-compact real forms of G. Classification of Riemannian symmetric spaces: In both class A and class B there is thus a correspondence between symmetric spaces of compact type and non-compact type. This is known as duality for Riemannian symmetric spaces. Classification of Riemannian symmetric spaces: Classification result Specializing to the Riemannian symmetric spaces of class A and compact type, Cartan found that there are the following seven infinite series and twelve exceptional Riemannian symmetric spaces G/K. They are here given in terms of G and K, together with a geometric interpretation, if readily available. The labelling of these spaces is the one given by Cartan. Classification of Riemannian symmetric spaces: As Grassmannians A more modern classification (Huang & Leung 2010) uniformly classifies the Riemannian symmetric spaces, both compact and non-compact, via a Freudenthal magic square construction. The irreducible compact Riemannian symmetric spaces are, up to finite covers, either a compact simple Lie group, a Grassmannian, a Lagrangian Grassmannian, or a double Lagrangian Grassmannian of subspaces of (A ⊗ B)^n, for normed division algebras A and B. A similar construction produces the irreducible non-compact Riemannian symmetric spaces. General symmetric spaces: An important class of symmetric spaces generalizing the Riemannian symmetric spaces are pseudo-Riemannian symmetric spaces, in which the Riemannian metric is replaced by a pseudo-Riemannian metric (nondegenerate instead of positive definite on each tangent space). In particular, Lorentzian symmetric spaces, i.e., n-dimensional pseudo-Riemannian symmetric spaces of signature (n − 1, 1), are important in general relativity, the most notable examples being Minkowski space, De Sitter space and anti-de Sitter space (with zero, positive and negative curvature respectively). De Sitter space of dimension n may be identified with the 1-sheeted hyperboloid in a Minkowski space of dimension n + 1.
General symmetric spaces: Symmetric and locally symmetric spaces in general can be regarded as affine symmetric spaces. If M = G/H is a symmetric space, then Nomizu showed that there is a G-invariant torsion-free affine connection (i.e. an affine connection whose torsion tensor vanishes) on M whose curvature is parallel. Conversely a manifold with such a connection is locally symmetric (i.e., its universal cover is a symmetric space). Such manifolds can also be described as those affine manifolds whose geodesic symmetries are all globally defined affine diffeomorphisms, generalizing the Riemannian and pseudo-Riemannian case. General symmetric spaces: Classification results The classification of Riemannian symmetric spaces does not extend readily to the general case for the simple reason that there is no general splitting of a symmetric space into a product of irreducibles. Here a symmetric space G/H with Lie algebra g = h ⊕ m is said to be irreducible if m is an irreducible representation of h. Since h is not semisimple (or even reductive) in general, it can have indecomposable representations which are not irreducible. General symmetric spaces: However, the irreducible symmetric spaces can be classified. As shown by Katsumi Nomizu, there is a dichotomy: an irreducible symmetric space G/H is either flat (i.e., an affine space) or g is semisimple. This is the analogue of the Riemannian dichotomy between Euclidean spaces and those of compact or noncompact type, and it motivated M. Berger to classify semisimple symmetric spaces (i.e., those with g semisimple) and determine which of these are irreducible. The latter question is more subtle than in the Riemannian case: even if g is simple, G/H might not be irreducible. General symmetric spaces: As in the Riemannian case there are semisimple symmetric spaces with G = H × H. Any semisimple symmetric space is a product of symmetric spaces of this form with symmetric spaces such that g is simple. It remains to describe the latter case. For this, one needs to classify involutions σ of a (real) simple Lie algebra g. If the complexification g_c is not simple, then g is a complex simple Lie algebra, and the corresponding symmetric spaces have the form G/H, where H is a real form of G: these are the analogues of the Riemannian symmetric spaces G/K with G a complex simple Lie group, and K a maximal compact subgroup. General symmetric spaces: Thus we may assume g_c is simple. The real subalgebra g may be viewed as the fixed point set of a complex antilinear involution τ of g_c, while σ extends to a complex antilinear involution of g_c commuting with τ and hence also a complex linear involution σ ∘ τ. General symmetric spaces: The classification therefore reduces to the classification of commuting pairs of antilinear involutions of a complex Lie algebra. The composite σ ∘ τ determines a complex symmetric space, while τ determines a real form. From this it is easy to construct tables of symmetric spaces for any given g_c, and furthermore, there is an obvious duality given by exchanging σ and τ. This extends the compact/non-compact duality from the Riemannian case, where either σ or τ is a Cartan involution, i.e., its fixed point set is a maximal compact subalgebra. General symmetric spaces: Tables The following table indexes the real symmetric spaces by complex symmetric spaces and real forms, for each classical and exceptional complex simple Lie group.
For exceptional simple Lie groups, the Riemannian case is included explicitly below, by allowing σ to be the identity involution (indicated by a dash). In the above tables this is implicitly covered by the case kl = 0. Weakly symmetric Riemannian spaces: In the 1950s Atle Selberg extended Cartan's definition of symmetric space to that of weakly symmetric Riemannian space, or in current terminology weakly symmetric space. These are defined as Riemannian manifolds M with a transitive connected Lie group of isometries G and an isometry σ normalising G such that given x, y in M there is an isometry s in G such that sx = σy and sy = σx. (Selberg's assumption that σ^2 should be an element of G was later shown to be unnecessary by Ernest Vinberg.) Selberg proved that weakly symmetric spaces give rise to Gelfand pairs, so that in particular the unitary representation of G on L^2(M) is multiplicity free. Weakly symmetric Riemannian spaces: Selberg's definition can also be phrased equivalently in terms of a generalization of geodesic symmetry. It is required that for every point x in M and tangent vector X at x, there is an isometry s of M, depending on x and X, such that s fixes x and the derivative of s at x sends X to −X. When s is independent of X, M is a symmetric space. An account of weakly symmetric spaces and their classification by Akhiezer and Vinberg, based on the classification of periodic automorphisms of complex semisimple Lie algebras, is given in Wolf (2007). Properties: Some properties and forms of symmetric spaces can be noted. Properties: Lifting the metric tensor The metric tensor on the Riemannian manifold M can be lifted to a scalar product on G by combining it with the Killing form. This is done by defining ⟨X, Y⟩_g = ⟨X, Y⟩_p if X, Y ∈ m, and ⟨X, Y⟩_g = −B(X, Y) otherwise. Here, ⟨⋅,⋅⟩_p is the Riemannian metric defined on TpM, and B(X, Y) = tr(ad X ∘ ad Y) is the Killing form. The minus sign appears because the Killing form is negative-definite on h; this makes ⟨⋅,⋅⟩_g positive-definite. Properties: Factorization The tangent space m can be further factored into eigenspaces classified by the Killing form. This is accomplished by defining an adjoint map m → m taking Y ↦ Y^# such that ⟨X, Y^#⟩ = B(X, Y), where ⟨⋅,⋅⟩ is the Riemannian metric on m and B(⋅,⋅) is the Killing form. This map is sometimes called the generalized transpose, as it corresponds to the transpose for the orthogonal groups and the Hermitian conjugate for the unitary groups. It is a self-adjoint linear operator, and so one concludes that there is an orthonormal basis Y_1, …, Y_n of m with Y_i^# = λ_i Y_i. These are orthogonal with respect to the metric, in that ⟨Y_i^#, Y_j⟩ = λ_i ⟨Y_i, Y_j⟩ = B(Y_i, Y_j) = ⟨Y_j^#, Y_i⟩ = λ_j ⟨Y_j, Y_i⟩, since the Killing form is symmetric. This factorizes m into eigenspaces m = m_1 ⊕ ⋯ ⊕ m_d with [m_i, m_j] = 0 for i ≠ j. For the case of g semisimple, so that the Killing form is non-degenerate, the metric likewise factorizes: ⟨⋅,⋅⟩ = (1/λ_1) B|_{m_1} + ⋯ + (1/λ_d) B|_{m_d}. In certain practical applications, this factorization can be interpreted as the spectrum of operators, e.g. the spectrum of the hydrogen atom, with the eigenvalues of the Killing form corresponding to different values of the angular momentum of an orbital (i.e. the Killing form being a Casimir operator that can classify the different representations under which different orbitals transform). Classification of symmetric spaces proceeds based on whether or not the Killing form is positive/negative definite.
Applications and special cases: Symmetric spaces and holonomy If the identity component of the holonomy group of a Riemannian manifold at a point acts irreducibly on the tangent space, then either the manifold is a locally Riemannian symmetric space, or it is in one of 7 families. Hermitian symmetric spaces A Riemannian symmetric space which is additionally equipped with a parallel complex structure compatible with the Riemannian metric is called a Hermitian symmetric space. Some examples are complex vector spaces and complex projective spaces, both with their usual Riemannian metric, and the complex unit balls with suitable metrics so that they become complete and Riemannian symmetric. Applications and special cases: An irreducible symmetric space G/K is Hermitian if and only if K contains a central circle. A quarter turn by this circle acts as multiplication by i on the tangent space at the identity coset. Thus the Hermitian symmetric spaces are easily read off of the classification. In both the compact and the non-compact cases it turns out that there are four infinite series, namely AIII, BDI with p = 2, DIII and CI, and two exceptional spaces, namely EIII and EVII. The non-compact Hermitian symmetric spaces can be realized as bounded symmetric domains in complex vector spaces. Applications and special cases: Quaternion-Kähler symmetric spaces A Riemannian symmetric space which is additionally equipped with a parallel subbundle of End(TM) isomorphic to the imaginary quaternions at each point, and compatible with the Riemannian metric, is called a quaternion-Kähler symmetric space. Applications and special cases: An irreducible symmetric space G/K is quaternion-Kähler if and only if the isotropy representation of K contains an Sp(1) summand acting like the unit quaternions on a quaternionic vector space. Thus the quaternion-Kähler symmetric spaces are easily read off from the classification. In both the compact and the non-compact cases it turns out that there is exactly one for each complex simple Lie group, namely AIII with p = 2 or q = 2 (these are isomorphic), BDI with p = 4 or q = 4, CII with p = 1 or q = 1, EII, EVI, EIX, FI and G. Applications and special cases: Bott periodicity theorem In the Bott periodicity theorem, the loop spaces of the stable orthogonal group can be interpreted as reductive symmetric spaces.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**2-Chloropyridine** 2-Chloropyridine: 2-Chloropyridine is an organohalide with the formula C5H4ClN. It is a colorless liquid that is mainly used to generate fungicides and insecticides in industry. It also serves to generate antihistamines and antiarrhythmics for pharmaceutical purposes. Preparation: 2-Chloropyridine is produced by direct reaction of pyridine with chlorine. The initially formed 2-chloropyridine reacts further to give 2,6-dichloropyridine. Alternatively, 2-chloropyridines can be conveniently synthesized in high yields from pyridine-N-oxides. 2-Chloropyridine was originally prepared by the chlorination of 2-hydroxypyridine with phosphoryl chloride. Main reactions and applications: 2-Chloropyridine reacts with nucleophiles to generate pyridine derivatives substituted at the second and fourth carbons on the heterocycle. Therefore, many reactions using 2-chloropyridine generate mixtures of products which require further workup to isolate the desired isomer. Some commercial products include pyrithione, pyripropoxyfen, chlorphenamine, and disopyramide. In these conversions, chloride is displaced. Pyrithione, the conjugate base of 2-mercaptopyridine-N-oxide, is a fungicide found in some shampoos. Oxidation of 2-chloropyridine gives 2-chloropyridine-N-oxide. The antihistamine pheniramine may be generated via the reaction of phenylacetonitrile with 2-chloropyridine in the presence of a base. Environmental properties: Although pyridine is an excellent source of carbon, nitrogen, and energy for certain microorganisms, introduction of a halogen moiety significantly retards degradation of the pyridine ring. With the exception of 4-chloropyridine, each of the mono- and di-substituted chloropyridines was found to be relatively resistant to microbiological degradation in soil or liquid media. The estimated time for complete degradation was > 30 days. 2-Chloropyridine exhibits extensive volatilization losses from water, less so when present in soil. Toxicity: The LD50 is 64 mg/kg (dermal, rabbit).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tivantinib** Tivantinib: Tivantinib (ARQ197; by ArQule, Inc.) is an experimental small-molecule anti-cancer drug. It is a bisindolylmaleimide that binds to the dephosphorylated MET kinase in vitro. (MET is a growth factor receptor.) Tivantinib is being tested clinically as a highly selective MET inhibitor; however, its mechanism of action is still unclear. Tivantinib displays cytotoxic activity via molecular mechanisms that are independent of its ability to bind MET, notably tubulin binding, which likely underlies tivantinib's cytotoxicity. Possible applications include non-small-cell lung carcinoma, hepatocellular carcinoma, and oesophageal cancer. In 2017, it was announced that a phase III clinical trial for advanced hepatocellular carcinoma had failed to meet its primary endpoint.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Entrée de ballet** Entrée de ballet: An entrée de ballet ("ballet entrance") is an autonomous scene of ballet de cour, divertissement, comédie-ballet, opéra-ballet, even tragédie lyrique, which brings together several dancers in and out of the scenario. In the seventeenth and eighteenth centuries, baroque dance distinguished several types of entrances, according to their character and step style: serious, severe, comical or grotesque. In his recueil de danses, Raoul Auger Feuillet qualified the entrances he described according to the number of characters and sometimes their sex: entrance alone, entrance of a woman, entrance for two, etc. This choreographic form disappeared in the 1720s.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sandcastle (software)** Sandcastle (software): Sandcastle is a documentation generator from Microsoft. It automatically produces MSDN-style code documentation out of reflection information of .NET assemblies and XML documentation comments found in the source code of these assemblies. It can also be used to produce user documentation from Microsoft Assistance Markup Language (MAML) with the same look and feel as reference documentation. Overview: Sandcastle is a set of command line programs, configuration files, build components and XSLT files that work together to convert XML-based documentation into help topics that are fit for viewing in a help system. Sandcastle is typically used to automatically generate web-ready, XML-compliant HTML documentation in one of three built-in presentation styles from .NET assemblies and XML documentation files that are generated by compilers. The resulting HTML files are then used as input to tools such as the HTML Help Workshop to produce compiled help for distribution with the corresponding computer program. Overview: Sandcastle currently features a lightweight graphical user interface (GUI) as an alternative to the MSBuild project, batch script and Windows PowerShell scripts that are also provided. Several community GUI tools are also available for Sandcastle, providing additional features and simplifying its usage. The Visual Studio SDKs for 2005 and 2008 include older CTP versions of Sandcastle, although the latest release is available on GitHub. Sandcastle tools: Sandcastle consists of several programs, not all of which are used in the typical help build process. Commonly used tools are listed below. MrefBuilder uses Common Compiler Infrastructure (CCI) to reflect against managed assemblies and generate an output file. XslTransform applies XSL transformations to an XML file. Typically, the specified input file is or derives from a file that is generated by MRefBuilder. Sandcastle tools: BuildAssembler executes a build component stack, once for each topic defined in an XML manifest. A build component stack is defined in an XML file with a .config extension. Sandcastle provides several build components that are used in build component stacks to perform tasks such as generating in-memory data indexes, resolving links, including shared content, executing XSL transformations and saving the final output to a file. Community tools: Because in its current state Sandcastle by itself is rather complex to use, people have come up with tools and scripts that can automate the task for them. This section contains a list of such tools and scripts: Sandcastle Help File Builder; DocProject (Visual Studio 2005/2008); batch file, PowerShell script and MSBuild script; Sandcastle Visual Studio Add-In; XML Schema Documenter for Sandcastle Help File Builder. Output: Sandcastle produces XML-based HTML files in a chosen presentation style. (This does not mean, however, that the files are XHTML-compliant.) The HTML is defined by XSL transformation files that are included in the particular presentation style being used. A build normally uses only one presentation style at a time. Output: The HTML files that Sandcastle produces are either conceptual (user) documentation, being the result of a transformation from Microsoft Assistance Markup Language (MAML) topics, or they are reference documentation, which is automatically generated from reflection data and XML documentation comments.
These two different types of HTML output share the same presentation style and may be compiled together to produce mixed user/reference documentation. Output: The processes for building conceptual documentation and reference documentation are similar, with one of the main differences being that conceptual documentation does not require the MRefBuilder program to be used. Conceptual documentation consists of topics written using a MAML document type schema such as how-to, walk-through, troubleshooting and several others. Sandcastle provides a conceptual build component stack (conceptual.config) that resolves shared content and links, and uses XSL files to transform MAML elements into HTML. Output: Reference documentation is generated automatically for managed Application Programming Interfaces (APIs) from reflection data and XML documentation comments. A "doc model" XSL transformation, provided by the chosen presentation style, is applied to define the files that will be generated. Sandcastle provides a reference build component stack (sandcastle.config) that builds in-memory indexes of the data, resolves shared content and links, and uses XSL to generate the final HTML output. Compiled help: Sandcastle does not produce compiled help output itself (although the HTML files that it produces can be used as input to HTML help compilers such as the HTML Help Workshop and Microsoft Help 2). Compiled help: For example, the typical Help 1.x build process starts by running MrefBuilder.exe to produce an XML reflection file for one or more assemblies. The reflection file is then processed by the XslTransform.exe tool multiple times to apply various XSL transformations that add data such as a "doc model" and optional version information. Next, an XML-based topic manifest is generated and used by the BuildAssembler.exe program, which generates HTML topic files from the reflection data and XML documentation comments. An XML-based table of contents (TOC) file is generated and used by CHMBuilder.exe, along with the HTML files produced by BuildAssembler, to generate HTML Help Workshop project, index and TOC files. Finally, the HTML Help Workshop is used to generate a compiled help file (.chm). Compiled help: Some tools are used multiple times during a single build, like XslTransform and BuildAssembler. Depending upon the requirements, other tools and XSL transformations may be used at various stages during the process to modify Sandcastle's output. Background: The Sandcastle application was developed by Microsoft to create a scalable, high-performance documentation generator for their API documentation. Microsoft released Sandcastle as a Community Technology Preview (CTP) version in July 2006, a few days before NDoc was declared dead. The author of NDoc, Kevin Downs, in an email sent through his mailing list, cited as reasons for discontinuing development of his popular tool a lack of community support, both financial and in development contributions, an automated mail-bomb attack on his public email address and the NDoc2 mailing list address, and also his impression that Sandcastle "will become the de facto standard and that NDoc will slowly become a stagnant side-water." Sandcastle averaged 217 downloads per day during the month of September 2010, making it one of the top 25 most downloaded projects on CodePlex.
Background: On June 6, 2008 the Sandcastle project was removed from the CodePlex website after a discussion thread on the CodePlex site pointed out that source code was not available, despite CodePlex requiring this and the Sandcastle project being touted as "open source". On July 2 the project returned to CodePlex and the source code was published. History: July 29, 2006 — the July 2006 CTP version was released; this version mainly focused on performance and scalability. No GUI was present yet, and the application did not yet contain a feature to resolve GAC DLLs. August 28, 2006 — the August 2006 CTP version was released; the bug fixes in this release primarily addressed crashes of the application. HTML output of the application became compatible with Firefox, and some changes were made to the command line interface. October 1, 2006 — the September 2006 CTP version was released; bug fixes primarily focused on the output and on adding better support for some XML comment tags. November 11, 2006 — the November 2006 CTP version was released; along with bug fixes, it added support for a few NDoc tags, and the transforms gained Firefox support. December 10, 2006 — the December 2006 CTP version was released, providing a DXROOT environment variable used by configuration files, an API "ripping" feature, pass-through HTML, and presentation updates that included support for Firefox in the VS 2005 style. March 6, 2007 — the March 2007 CTP version was released, adding 4 new and removing 3 XSL transformations, a batch build script and performance improvements. March 17, 2007 — the March 2007 CTP Technical Refresh version was released, fixing the "ripping" feature and a utility bug, and including a file that was missing from the previously released installer. June 19, 2007 — the June 2007 CTP version was released, providing an MSBuild project, a new version of the Common Compiler Infrastructure (CCI) reflection engine, a new presentation style named "VS ORCAS", a new build component, new executable utilities, and several other enhancements. June 27, 2007 — the June 2007 CTP Refresh version was released, renaming the previously released "VS ORCAS" presentation style to "Hana" to prevent confusion, since the Orcas Beta 2 and RTM documentation shipping in MSDN was going to continue to be built in the VS 2005 presentation style. October 1, 2007 — the September 2007 CTP version was released, with the first appearance of the CHMBuilder, VersionBuilder and DBCSFix tools, a Windows PowerShell build script, presentation style updates (most notably to the VS 2005 style), and without the .NET Framework reflection files that were normally included in previous installers. October 30, 2007 — the October 2007 CTP version was released, including the .NET Framework files that were missing from the previous release, a new conceptual documentation build process requiring Microsoft Assistance Markup Language (MAML) topics as input, and also improved Firefox support. January 16, 2008 — the Sandcastle 2.4.10115 version was released, being the first official non-CTP version of Sandcastle released to the web (RTW). An example graphical user interface (GUI) was provided, including an XSL transformation for Script# and the option to output an ASP.NET website.
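As context for the "XML documentation comments" input described above: .NET compilers emit these comments into a standalone XML file, which tools like Sandcastle then combine with reflection data. The snippet below is a minimal, hand-written illustration of that file's shape (the assembly and member names are invented for the example), not output reproduced from Sandcastle's own documentation:

```xml
<?xml version="1.0"?>
<doc>
  <assembly>
    <name>Widgets</name>
  </assembly>
  <members>
    <!-- "T:" marks a type and "M:" a method in compiler-generated IDs -->
    <member name="T:Widgets.Frobnicator">
      <summary>Example class; the name is hypothetical.</summary>
    </member>
    <member name="M:Widgets.Frobnicator.Frob(System.Int32)">
      <summary>Frobs the widget the given number of times.</summary>
      <param name="count">How many times to frob.</param>
    </member>
  </members>
</doc>
```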
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Intravitreal implants** Intravitreal implants: Intravitreal implants are micro device-like inserts injected into the posterior segment of the eye to treat retinal diseases, releasing therapeutic drugs at a set rate over a desired period of time. The posterior segment of the eye consists of the sclera, choroid, fovea, vitreous humor, optic nerve, and retina. Applications: Non-biodegradable implants Inserts made with non-biodegradable materials such as polymers require surgical removal of the implant after the end of the treatment period. Examples of these materials include polymers such as ethylene-vinyl acetate (EVA), polyvinyl alcohol (PVA), polyurethane (PU) and polysiloxane (PS). An advantage of these non-biodegradable implants is that they do not cause any immune response towards the retina, and the release of the drug substance can be controlled by "layering polymers of different permeability." Fluocinolone acetonide (Iluvien) Biodegradable implants Biodegradable implants are made of materials that are typically either water-soluble or metabolizable, degrading into harmless byproducts which can be safely excreted by the human body. It is important to note that the release of the therapeutic drug is determined by the degradation of the implant and the diffusion rate of the drug substance, meaning that the higher the molecular weight of the polymer and drug substance used, the slower the release of the drug into the vitreous humor.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Great comet** Great comet: A great comet is a comet that becomes exceptionally bright. There is no official definition; often the term is attached to comets such as Halley's Comet, which during certain appearances are bright enough to be noticed by casual observers who are not looking for them, and become well known outside the astronomical community. Great comets appear at irregular, unpredictable intervals, on average about once per decade. Although comets are officially named after their discoverers, great comets are sometimes also referred to by the year in which they appeared great, using the formulation "The Great Comet of ...", followed by the year. Causes: The vast majority of comets are never bright enough to be seen by the naked eye, and generally pass through the inner Solar System unseen by anyone except astronomers. However, occasionally a comet may brighten to naked eye visibility, and even more rarely it may become as bright as or brighter than the brightest stars. The requirements for this to occur are: a large and active nucleus, a close approach to the Sun, and a close approach to the Earth. A comet fulfilling all three of these criteria will certainly be very bright. Sometimes, a comet failing on one criterion will still be bright. For example, Comet Hale–Bopp did not approach the Sun very closely, but had an exceptionally large and active nucleus. It was visible to the naked eye for several months and was very widely observed. Similarly, Comet Hyakutake was a relatively small comet, but appeared bright because it passed very close to the Earth. Causes: Size and activity of the nucleus Cometary nuclei vary in size from a few hundreds of metres across or less to many kilometres across. When they approach the Sun, large amounts of gas and dust are ejected by cometary nuclei, due to solar heating. A crucial factor in how bright a comet becomes is how large and how active its nucleus is. After many returns to the inner Solar System, cometary nuclei become depleted in volatile materials and thus are much less bright than comets which are making their first passage through the Solar System. Causes: The sudden brightening of Comet Holmes in 2007 showed the importance of the activity of the nucleus in the comet's brightness. On October 23–24, 2007, the comet underwent a sudden outburst which caused it to brighten by a factor of about half a million. It unexpectedly brightened from an apparent magnitude of about 17 to about 2.8 in a period of only 42 hours, making it visible to the naked eye. The expanding coma temporarily made Comet 17P the largest (by radius) object in the Solar System, although its nucleus is estimated to be only about 3.4 km in diameter. Causes: Close perihelion approach The brightness of a simple reflective body varies with the inverse square of its distance from the Sun. That is, if an object's distance from the Sun is halved, its brightness is quadrupled. However, comets behave differently, due to their ejection of large amounts of volatile gas which then also reflect sunlight and may also fluoresce. Their brightness varies roughly as the inverse cube of their distance from the Sun, meaning that if a comet's distance from the Sun is halved, it will become eight times as bright; a short numeric sketch of this scaling follows below. Causes: This means that the peak brightness of a comet depends significantly on its distance from the Sun. For most comets, the perihelion of their orbit lies outside the Earth's orbit.
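Here is a minimal Python illustration of the inverse-square versus inverse-cube brightness laws just described. The exponents are from the article; everything else, including treating brightness as a simple power law, is a simplifying assumption:

```python
import math

# Brightness scaling with heliocentric distance r (arbitrary units).
# Inert reflective body: brightness ~ r**-2; active comet: ~ r**-3.

def relative_brightness(r_ratio, exponent):
    """Brightness gain when the distance shrinks by r_ratio (e.g. 0.5)."""
    return r_ratio ** -exponent

print(relative_brightness(0.5, 2))  # 4.0 -> halving r quadruples brightness
print(relative_brightness(0.5, 3))  # 8.0 -> a comet brightens eightfold

# Expressed in astronomical magnitudes (2.5 * log10 of the brightness ratio):
print(2.5 * math.log10(8))  # ~2.26 magnitudes brighter
```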
Any comet approaching the Sun to within 0.5 AU (75 million km) or less may have a chance of becoming a great comet. Causes: Close approach to the Earth For a comet to become very bright, it also needs to pass close to the Earth. Halley's Comet, for example, is usually very bright when it passes through the inner Solar System every seventy-six years, but during its 1986 apparition, its closest approach to Earth was nearly the most distant possible. The comet became visible to the naked eye, but was unspectacular. On the other hand, the intrinsically small and faint Comet Hyakutake (C/1996 B2) appeared very bright and spectacular due to its very close approach to Earth in March 1996. Its passage near the Earth was one of the closest cometary approaches on record, with a distance of 0.1 AU (15 million km; 39 LD). List of great comets: Great comets of the past two millennia include the following:
**CKLF like MARVEL transmembrane domain containing 7** CKLF like MARVEL transmembrane domain containing 7: CKLF like MARVEL transmembrane domain-containing 7 (i.e. CMTM7), previously termed chemokine-like factor superfamily 7 (i.e. CKLFSF7), is a protein that in humans is encoded by the CMTM7 gene. This gene, which is located in band 22 on the short (i.e. "p") arm of chromosome 3, and the protein that it encodes belong to the CKLF-like MARVEL transmembrane domain-containing family. Through the process of alternative splicing, the CMTM7 gene encodes two isoforms, CMTM7-v1 and CMTM7-v2, with CMTM7-v1 being the main form expressed and studied. CMTM7 proteins are widely expressed in normal human tissues. Function: CMTM7 protein levels are low in the malignant tissues of various cancers, such as those of the esophagus, stomach, pancreas, liver, lung, cervix, and breast, as compared with its expression in the normal tissues of these organs. Furthermore, the forced overexpression of CMTM7 protein in various cancer immortalized cell lines inhibits their proliferation and motility in culture as well as their ability to form tumors in a nude mouse experimental model of cancer. These findings suggest that the CMTM7 protein acts to inhibit the development and/or progression of these cancers and therefore that the CMTM7 gene acts as a tumor suppressor in these cancers. However, further studies are needed to support these suggestions and to determine if the expression of CMTM7 can be used as a clinical marker of these cancers' severity/prognosis and/or as a therapeutic target for treating them.
**3 nm process** 3 nm process: In semiconductor manufacturing, the 3 nm process is the next die shrink after the 5 nanometer MOSFET (metal–oxide–semiconductor field-effect transistor) technology node. South Korean chipmaker Samsung started shipping chips made on its 3 nm gate-all-around (GAA) process, named 3GAA, in mid-2022. On December 29, 2022, Taiwanese chip manufacturer TSMC announced that volume production using its 3 nm semiconductor node, termed N3, was under way with good yields. An enhanced 3 nm chip process called N3E may start production in 2023. American manufacturer Intel plans to start 3 nm production in 2023. Samsung's 3 nm process is based on GAAFET (gate-all-around field-effect transistor) technology, a type of multi-gate MOSFET technology, while TSMC's 3 nm process still uses FinFET (fin field-effect transistor) technology, despite TSMC developing GAAFET transistors. Specifically, Samsung plans to use its own variant of GAAFET called MBCFET (multi-bridge channel field-effect transistor). Intel's process, dubbed "Intel 3" without the "nm" suffix, will use a version of FinFET technology refined, enhanced and optimized relative to its previous process nodes in terms of performance gained per watt, use of EUV lithography, and power and area improvements. The term "3 nanometer" has no relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors. According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by the IEEE Standards Association Industry Connection, a 3 nm node is expected to have a contacted gate pitch of 48 nanometers and a tightest metal pitch of 24 nanometers. 3 nm process: However, in real-world commercial practice, "3 nm" is used primarily as a marketing term by individual microchip manufacturers to refer to a new, improved generation of silicon semiconductor chips in terms of increased transistor density (i.e. a higher degree of miniaturization), increased speed and reduced power consumption. There is no industry-wide agreement among different manufacturers about what numbers would define a 3 nm node. Typically the chip manufacturer refers to its own previous process node (in this case the 5 nm process node) for comparison. For example, TSMC has stated that its 3 nm FinFET chips will reduce power consumption by 25–30% at the same speed, increase speed by 10–15% at the same amount of power and increase transistor density by about 33% compared to its previous 5 nm FinFET chips. On the other hand, Samsung has stated that its 3 nm process will reduce power consumption by 45%, improve performance by 23%, and decrease surface area by 16% compared to its previous 5 nm process. EUV lithography faces new challenges at 3 nm which lead to the required use of multipatterning. History: Research and technology demos In 1985, a Nippon Telegraph and Telephone (NTT) research team fabricated a MOSFET (NMOS) device with a channel length of 150 nm and a gate oxide thickness of 2.5 nm. In 1998, an Advanced Micro Devices (AMD) research team fabricated a MOSFET (NMOS) device with a channel length of 50 nm and an oxide thickness of 1.3 nm. In 2003, a research team at NEC fabricated the first MOSFETs with a channel length of 3 nm, using the PMOS and NMOS processes. In 2006, a team from the Korea Advanced Institute of Science and Technology (KAIST) and the National Nano Fab Center developed a 3 nm-wide multi-gate MOSFET, the world's smallest nanoelectronic device at the time, based on gate-all-around (GAAFET) technology.
History: Commercialization history In late 2016, TSMC announced plans to construct a 5 nm–3 nm node semiconductor fabrication plant with a co-commitment investment of around US$15.7 billion. In 2017, TSMC announced it was to begin construction of the 3 nm semiconductor fabrication plant at the Tainan Science Park in Taiwan. TSMC plans to start volume production of the 3 nm process node in 2023. In early 2018, IMEC (Interuniversity Microelectronics Centre) and Cadence stated they had taped out 3 nm test chips, using extreme ultraviolet lithography (EUV) and 193 nm immersion lithography. In early 2019, Samsung presented plans to manufacture 3 nm GAAFET (gate-all-around field-effect transistors) at the 3 nm node in 2021, using its own MBCFET transistor structure that uses nanosheets, delivering a 35% performance increase, 50% power reduction and a 45% reduction in area when compared with 7 nm. Samsung's semiconductor roadmap also included products at 8, 7, 6, 5, and 4 nm 'nodes'. In December 2019, Intel announced plans for 3 nm production in 2025. In January 2020, Samsung announced the production of the world's first 3 nm GAAFET process prototype, and said that it was targeting mass production in 2021. In August 2020, TSMC announced details of its N3 3 nm process, which is new rather than being an improvement over its N5 5 nm process. Compared with the N5 process, the N3 process should offer a 10–15% (1.10–1.15×) increase in performance, or a 25–35% (1.25–1.35×) decrease in power consumption, with a 1.7× increase in logic density (a scaling factor of 0.58), a 20% increase (0.8 scaling factor) in SRAM cell density, and a 10% increase in analog circuitry density. Since many designs include considerably more SRAM than logic (a common ratio being 70% SRAM to 30% logic area), die shrinks are expected to be only around 26% (this arithmetic is sketched below, at the end of this entry). TSMC plans volume production in the second half of 2022. In July 2021, Intel presented a brand-new process technology roadmap, according to which the Intel 3 process, the company's second node to use EUV and the last one to use FinFET before switching to Intel's RibbonFET transistor architecture, is scheduled to enter the product manufacturing phase in H2 2023. In October 2021, Samsung adjusted earlier plans and announced that the company was scheduled to start producing its customers' first 3 nm-based chip designs in the first half of 2022, while its second generation of 3 nm was expected in 2023. In June 2022, at the TSMC Technology Symposium, the company shared details of its N3E process technology scheduled for volume production in 2023 H2: 1.6× higher logic transistor density, 1.3× higher chip transistor density, 10–15% higher performance at iso-power or 30–35% lower power at iso-performance compared to the TSMC N5 v1.0 process technology, and FinFLEX technology, which allows libraries with different track heights to be intermixed within a block. TSMC also introduced new members of the 3 nm process family: the high-density variant N3S, the high-performance variants N3P and N3X, and N3RF for RF applications. In June 2022, Samsung started "initial" production of a low-power, high-performance chip using 3 nm process technology with GAA architecture. According to industry sources, Qualcomm has reserved some of the 3 nm production capacity from Samsung. On July 25, 2022, Samsung celebrated the first shipment of 3 nm gate-all-around chips to the Chinese cryptocurrency mining firm PanSemi.
It was revealed that the newly introduced 3 nm MBCFET process technology offers 16% higher transistor density, 23% higher performance or 45% lower power draw compared to an unspecified 5 nm process technology. Goals for the second-generation 3 nm process technology include up to 35% higher transistor density, a further reduction of power draw by up to 50%, or 30% higher performance. On December 29, 2022, TSMC announced that volume production using its 3 nm process technology N3 was under way with good yields. The company plans to start volume manufacturing using a refined 3 nm process technology called N3E in the second half of 2023. In December 2022, at the IEDM 2022 conference, TSMC disclosed a few details about their 3 nm process technologies: the contacted gate pitch of N3 is 45 nm, the minimum metal pitch of N3E is 23 nm, and the SRAM cell area is 0.0199 μm² for N3 and 0.021 μm² for N3E (the same as in N5). For the N3E process, depending on the number of fins in the cells used for a design, area scaling compared to N5 2-2 fin cells ranges from 0.64× to 0.85×, performance gains range from 11% to 32% and energy savings range from 12% to 30% (the numbers refer to a Cortex-A72 core). TSMC's FinFLEX technology allows cells with different numbers of fins to be intermixed in a single chip. Reporting from IEDM 2022, semiconductor industry expert Dick James stated that TSMC's 3 nm processes offered only incremental improvements, because limits have been reached for fin height, gate length, and the number of fins per transistor (a single fin). After the implementation of features such as single diffusion break, contact over active gate and FinFLEX, there will be no more room left for improvement of FinFET-based process technologies. In April 2023, at its Technology Symposium, TSMC revealed some details about the N3P and N3X processes the company had introduced earlier: N3P will offer 5% higher speed or 5–10% lower power and 1.04× higher "chip density" compared to N3E, while N3X will offer a 5% speed gain at the cost of ~3.5× higher leakage and the same density compared to N3P. N3P is scheduled to enter volume production in the second half of 2024, and N3X will follow in 2025. In July 2023, the semiconductor industry research firm TechInsights said it had found that Samsung's 3 nm GAA (gate-all-around) process had been incorporated into a crypto miner ASIC (the Whatsminer M56S++) from a Chinese manufacturer, MicroBT.
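The roughly 26% die-shrink estimate quoted earlier for N3 follows directly from weighting the announced logic and SRAM area scaling factors by the typical 70/30 SRAM-to-logic area split mentioned in the text. A minimal sketch of that arithmetic, using only the figures quoted above:

```python
# Area scaling factors quoted for TSMC N3 vs N5 in the August 2020
# announcement: 1.7x logic density (~0.59x logic area) and a 0.8 SRAM
# scaling factor. (At IEDM 2022 the SRAM cell turned out unchanged.)
logic_scale = 1 / 1.7   # ~0.59x logic area
sram_scale = 0.8        # 0.8x SRAM area

# Weighted die area for a design that is 70% SRAM / 30% logic by area:
die_scale = 0.7 * sram_scale + 0.3 * logic_scale
print(f"die area: {die_scale:.2f}x of N5 -> shrink of {1 - die_scale:.0%}")
# die area: 0.74x of N5 -> shrink of 26%
```

The weighted result reproduces the "around 26%" figure, and it also shows why SRAM-heavy designs benefit far less from a node transition than the headline logic-density number suggests.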
**Codec 2** Codec 2: Codec 2 is a low-bitrate speech audio codec (speech coding) that is patent free and open source. Codec 2 compresses speech using sinusoidal coding, a method specialized for human speech. Codec modes with bit rates from 3200 down to 450 bit/s have been created. Codec 2 was designed to be used for amateur radio and other high-compression voice applications. Overview: The codec was developed by David Grant Rowe, with the support and cooperation of other researchers (e.g., Jean-Marc Valin from Opus). Codec 2 consists of 3200, 2400, 1600, 1400, 1300, 1200, 700 and 450 bit/s codec modes. It outperforms most other low-bitrate speech codecs. For example, it uses half the bandwidth of Advanced Multi-Band Excitation to encode speech with similar quality. The speech codec takes 16-bit PCM sampled audio as input and outputs packed digital bytes; when fed packed digital bytes, it outputs PCM sampled audio. The audio sample rate is fixed at 8 kHz. Overview: The reference implementation is open source and is freely available in a GitHub repository. The source code is released under the terms of version 2.1 of the GNU Lesser General Public License (LGPL). It is programmed in C, and the current source code requires floating-point arithmetic, although the algorithm itself does not require this. The reference software package also includes a frequency-division multiplex digital voice software modem and a graphical user interface based on wxWidgets. The software is developed on Linux; a port for Microsoft Windows created with Cygwin is offered in addition to an Apple macOS version. Overview: The codec has been presented at various conferences and has received the 2012 ARRL Technical Innovation Award and the Linux Australia Conference's Best Presentation Award. Technology: Internally, the parametric audio coding algorithms operate on 10 ms PCM frames using a model of the human voice. Each of these audio segments is declared voiced (vowel) or unvoiced (consonant). Technology: Codec 2 uses sinusoidal coding to model speech, a method closely related to that of multi-band excitation codecs. Sinusoidal coding is based on regularities (periodicity) in the pattern of overtone frequencies, layering harmonic sinusoids. Spoken audio is recreated by modelling speech as a sum of harmonically related sine waves with independent amplitudes, on top of a determined fundamental frequency of the speaker's voice (pitch). The (quantised) pitch and the amplitudes (energies) of the harmonics are encoded and, together with the line spectral pairs (LSPs), are exchanged across a channel in a digital format. The LSP coefficients represent the Linear Predictive Coding (LPC) model in the frequency domain, and lend themselves to a robust and efficient quantisation of the LPC parameters. The digital output consists of bit fields that have been packed together into bytes. These bit fields are also optionally Gray coded before being grouped together. The Gray coding may be useful when sending raw bits over a channel, but normally an application will simply output the packed bit fields. The bit fields make up the various parameters that are stored or exchanged (pitch, energy, voicing booleans, LSPs, etc.). Technology: For example, Mode 3200 has 20 ms of audio converted to 64 bits. So 64 bits will be output every 20 ms (50 times a second), for a minimum data rate of 3200 bit/s. These 64 bits are sent as 8 bytes to the application, which has to unwrap the bit fields, or send the bytes over a data channel.
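The rate arithmetic for these fixed-rate modes follows a single formula: bits per frame divided by frame duration gives the bit rate, and the bits round up to whole bytes for delivery to the application. A minimal sketch (the helper function is just for illustration and is not part of the Codec 2 API); it also covers the Mode 1300 example described next:

```python
import math

def mode_rate(bits_per_frame, frame_ms):
    """Bit rate (bit/s) and packed bytes per frame for a fixed-rate mode."""
    frames_per_second = 1000 / frame_ms
    bit_rate = bits_per_frame * frames_per_second
    return bit_rate, math.ceil(bits_per_frame / 8)

print(mode_rate(64, 20))   # Mode 3200: (3200.0, 8) -- 50 frames/s, 8 bytes
print(mode_rate(52, 40))   # Mode 1300: (1300.0, 7) -- 25 frames/s, 7 bytes
```

Note that 52 bits does not fill 7 bytes exactly; the four spare bits are simply padding in the packed representation, which is why the byte count is rounded up rather than derived from the bit rate.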
Technology: Another example is Mode 1300, which takes 40 ms of audio and outputs 52 bits every 40 ms (25 times a second), for a minimum rate of 1300 bit/s. These 52 bits are sent as 7 bytes to the application or data channel. Adoption: Codec 2 is currently used in several radios and software-defined radio systems, including FreeDV, the FlexRadio 6000 series, the SM1000, Quisk, and the M17 Project. Codec 2 has also been integrated into FreeSWITCH, and a patch is available for support in Asterisk. There was an FM-to-Codec2 digital voice repeater in earth orbit on the amateur radio CubeSat LilacSat-1 (call sign ON02CN, QB50 constellation), which was launched and subsequently deployed from the International Space Station in 2017. History: The prominent free software advocate and radio amateur Bruce Perens lobbied for the creation of a free speech codec for operation at less than 5 kbit/s. Since he did not have the background himself, he approached Jean-Marc Valin in 2008, who introduced him to lead developer David Grant Rowe, who had worked with Valin on Speex on several occasions. Rowe himself was also a radio amateur (amateur radio call sign VK5DGR) and had experience in creating and using voice codecs and other signal processing algorithms for speech signals. He obtained a PhD in speech coding in the 1990s and was involved in the development of one of the first satellite telephony systems (Mobilesat). History: He agreed to the task and announced his decision to work on a format on August 21, 2009. He built on the research and findings from his doctoral thesis. The underlying sinusoidal modelling goes back to developments by Robert J. McAulay and Thomas F. Quatieri (MIT Lincoln Labs) from the mid-1980s. In August 2010, David Rowe published version 0.1 alpha. Version 0.2 was released towards the end of 2011, introducing a mode with 1,400 bit/s and significant improvements in quantization. In January 2012, at linux.conf.au, Jean-Marc Valin helped improve the quantization of line spectral pairs, with which Rowe was less familiar. After several changes to the available bit rate modes in winter and spring 2011/2012, the 2,400, 1,400 and 1,200 bit/s modes were available after May of that year. History: Codec 2 700C, a new mode with a bit rate of 700 bit/s, was finished in early 2017. In July 2018 an experimental 450 bit/s mode was demonstrated, which was developed as part of a master's thesis at the University of Erlangen-Nuremberg. By clever training of the vector quantization, the data rate could be further reduced based on the principle of the 700C mode.
**Ray of Creation** Ray of Creation: The Ray of Creation is an esoteric cosmology which was taught by G. I. Gurdjieff. It is a diagram which represents the place that Earth occupies in the Universe. The diagram has eight levels, each corresponding to Gurdjieff's Law of Octaves (see In Search of the Miraculous, chapter 7). Levels: The first level is "The Absolute", followed by "All Worlds", "All Suns", "Sun", "All Planets", "Earth", "Moon", and "The Absolute". Read from the heaviest/last level upwards: "The Absolute"; Earth's satellite, "The Moon"; our planet, "Earth"; all of the planets in the solar system to which Earth belongs, "All Planets"; the planets belong to the "Sun" or the solar system; the Sun belongs to the Milky Way galaxy, or "All Suns" combined; all galaxies put together belong to "All Worlds"; and All Worlds form a final whole called "The Absolute". This lineage compares the construction of all of the levels, matters, and laws of the Universe, placing them in scale with one another. Laws: It was taught that in "The Absolute" the three holy forces form a whole, and thereby there is only one law (force) in the Absolute (which is the Will of the Absolute). The three forces of this law converge to form "All Worlds", whose level, now being a part of the whole, has three laws. This level, also having three forces, acts in creating "All Suns" in a similar process, and thereby "All Suns" has six laws (three new ones and the three of the All Worlds level). Similarly, "Sun" has 12 laws, "All Planets" 24 laws, "Earth" 48 laws, "Moon" 96 laws, and "The Absolute" 192 laws. Laws: Each level after the Absolute is governed by a greater number of laws. Therefore, the further a level is from the Absolute, the more mechanical the living things in it are. By this comparison it is claimed that there are 48 laws governing the life of living beings on Earth, thereby also claiming that life on Earth is quite mechanical. Laws: Note that "the three holy forces" above have manifestations in the physical universe as we know it today, such as are studied by physicists and other scientists. That is, aspects of "The Ray of Creation" as taught by Gurdjieff, Ouspensky, and others can be understood and described by modern science and scientists. Please also note that "The Absolute" mentioned here does not refer to God as it is normally understood or described by most humans. Matter: Similarly to the difference of laws on each level, the level (in this case 'density') of matter differs in the same way. "The Absolute" has a matter density of one, "All Worlds" has a density of 3 (one atom of "All Worlds" has three times the density of one atom of "The Absolute"), "All Suns" 6, "Sun" 12, "All Planets" 24, "Earth" 48, "Moon" 96, and "The Absolute" (which in this case represents dead matter) 192. Matter: This way everything in the universe according to this cosmology is classified as matter. (Note that even matter of density 12 is too rarefied for contemporary science to classify it as matter.) Higher bodies: Gurdjieff's classification of Higher Bodies can be better represented on this scale. The physical body has the properties of the "Earth" level (that is, it has a density of 48 and is subject to 48 laws). In comparison, a higher-plane body would have a lighter density and would be subject to a smaller number of laws (the amount varies with the level that the body falls under).
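The counts of laws recited above, which also reappear as the matter densities in the Matter section, follow one simple pattern: 1 at the first level, 3 at the second, then a doubling at each step down the ray. A minimal sketch reproducing the sequence:

```python
# Reproduce the counts of laws (and matter densities) per level given in
# the text: the Absolute has 1, All Worlds has 3, and each subsequent
# level doubles the count.
levels = ["The Absolute", "All Worlds", "All Suns", "Sun",
          "All Planets", "Earth", "Moon", "The Absolute (dead matter)"]
laws = [1, 3]
while len(laws) < len(levels):
    laws.append(laws[-1] * 2)   # each step down the ray doubles the count
for name, n in zip(levels, laws):
    print(f"{name}: {n}")       # 1, 3, 6, 12, 24, 48, 96, 192
```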
Higher bodies: In the book Gnosis I, author Boris Mouravieff explains the names given to the notes of the solfège: DOminus (God); SIdereus orbis (the starry sky, the ensemble of All Worlds); LActeus orbis (the Milky Way); SOL (the Sun); FAtum (Fate: the planetary world, with direct influence on human destiny); MIxtus orbis (the Earth, under the mixed rule of Good and Evil); REgina astris (the Moon, ruler of human fate). The names of the notes have historically been attributed to the hymn 'Ut queant laxis' by Paulus Diaconus, where UT is used instead of DO. Mouravieff explains UT as indicating the uterus in the birth of flesh, and SI as representing "the door of the second Birth, according to the Spirit". Other properties: There are many other properties which can be displayed on the Ray of Creation, such as the evolution of the substances in the Universe, the relationship between the cosmoses and the human body, etc. In a word, many of the properties of the laws of octaves could be displayed using the Ray of Creation. Some of the subtler properties of the law of octaves, and their effects, require a knowledge of the laws of vibrations and of involution and evolution, knowledge which today we call physics, acoustics, and music. Ray of Creation in history: According to G. I. Gurdjieff, some of the ancient geocentric models don't represent an Earth-centered universe at all, but in reality display the Ray of Creation. This confusion was due to a lack of knowledge on the part of those examining the diagrams. Thus, the Ray of Creation is a part of ancient knowledge. It was also a part of modern knowledge, as shown by the fact that Gurdjieff learned it in Asia and the Middle East during the end of the 19th century and the early part of the 20th century. It became knowledge for us when he brought it to the West in the 20th century. At that time, it was noticed that fragments of this knowledge had been known from historical and ancient times.
**Angioedema** Angioedema: Angioedema is an area of swelling (edema) of the lower layer of skin and tissue just under the skin or mucous membranes. The swelling may occur in the face, tongue, larynx, abdomen, or arms and legs. Often it is associated with hives, which are swelling within the upper skin. Onset is typically over minutes to hours. The underlying mechanism typically involves histamine or bradykinin. The version related to histamine is due to an allergic reaction to agents such as insect bites, foods, or medications. The version related to bradykinin may occur due to an inherited problem known as C1 esterase inhibitor deficiency, medications known as angiotensin-converting enzyme inhibitors, or a lymphoproliferative disorder. Treatment to protect the airway may include intubation or cricothyroidotomy. Histamine-related angioedema can be treated with antihistamines, corticosteroids, and epinephrine. In those with bradykinin-related disease a C1 esterase inhibitor, ecallantide, or icatibant may be used. Fresh frozen plasma may be used instead. In the United States the disease affects about 100,000 people a year. Signs and symptoms: The skin of the face, normally around the mouth, and the mucosa of the mouth and/or throat, as well as the tongue, swell over the period of minutes to hours. The swelling can also occur elsewhere, typically in the hands. The swelling can be itchy or painful. There may also be slightly decreased sensation in the affected areas due to compression of the nerves. Urticaria (hives) may develop simultaneously. In severe cases, stridor of the airway occurs, with gasping or wheezy inspiratory breath sounds and decreasing oxygen levels. Tracheal intubation is required in these situations to prevent respiratory arrest and risk of death. Sometimes, the cause is recent exposure to an allergen (e.g. peanuts), but more often it is either idiopathic (unknown) or only weakly correlated to allergen exposure. In hereditary angioedema (HAE), often no direct cause is identifiable, although mild trauma, including dental work and other stimuli, can cause attacks. There is usually no associated itch or urticaria, as it is not an allergic response. Patients with HAE can also have recurrent episodes (often called "attacks") of abdominal pain, usually accompanied by intense vomiting, weakness, and in some cases, watery diarrhea, and an unraised, nonitchy splotchy/swirly rash. These stomach attacks can last one to five days on average and can require hospitalization for aggressive pain management and hydration. Abdominal attacks have also been known to cause a significant increase in the patient's white blood cell count, usually in the vicinity of 13,000 to 30,000. As the symptoms begin to diminish, the white count slowly begins to decrease, returning to normal when the attack subsides. As the symptoms and diagnostic tests are almost indistinguishable from an acute abdomen (e.g. perforated appendicitis), it is possible for undiagnosed HAE patients to undergo laparotomy (operations on the abdomen) or laparoscopy (keyhole surgery) that turns out to have been unnecessary. HAE may also cause swelling in a variety of other locations, most commonly the limbs, genitals, neck, throat and face. The pain associated with these swellings varies from mildly uncomfortable to agonizing pain, depending on its location and severity. Predicting where and when the next episode of edema will occur is impossible.
Most patients have an average of one episode per month, but there are also patients who have weekly episodes or only one or two episodes per year. The triggers can vary and include infections, minor injuries, mechanical irritation, operations or stress. In most cases, edema develops over a period of 12–36 hours and then subsides within 2–5 days. Pathophysiology: Bradykinin plays a critical role in all forms of hereditary angioedema. This peptide is a potent vasodilator and increases vascular permeability, leading to rapid accumulation of fluid in the interstitium. This is most obvious in the face, where the skin has relatively little supporting connective tissue, and edema develops easily. Bradykinin is released by various cell types in response to numerous different stimuli; it is also a pain mediator. Dampening or inhibiting bradykinin has been shown to relieve HAE symptoms. Pathophysiology: Various mechanisms that interfere with bradykinin production or degradation can lead to angioedema. ACE inhibitors block ACE, the enzyme that, among other actions, degrades bradykinin. In hereditary angioedema, bradykinin formation is caused by continuous activation of the complement system due to a deficiency in one of its prime inhibitors, C1-esterase inhibitor (aka C1-inhibitor or C1INH), and continuous production of kallikrein, another process inhibited by C1INH. This serine protease inhibitor (serpin) normally inhibits the association of C1r and C1s with C1q to prevent the formation of the C1-complex, which - in turn - activates other proteins of the complement system. Additionally, it inhibits various proteins of the coagulation cascade, although effects of its deficiency on the development of hemorrhage and thrombosis appear to be limited. Pathophysiology: The three types of hereditary angioedema are: Type I - decreased levels of C1INH (85%); Type II - normal levels, but decreased function of C1INH (15%); Type III - no detectable abnormality in C1INH, occurs in an X-linked dominant fashion and therefore mainly affects women; it can be exacerbated by pregnancy and use of hormonal contraception (exact frequency uncertain). It has been linked with mutations in the factor XII gene. Angioedema can be due to antibody formation against C1INH; this is an autoimmune disorder. This acquired angioedema is associated with the development of lymphoma. Pathophysiology: Consumption of foods that are themselves vasodilators, such as alcoholic beverages or cinnamon, can increase the probability of an angioedema episode in susceptible patients. If the episode occurs at all after the consumption of these foods, its onset may be delayed overnight or by some hours, making the correlation with their consumption somewhat difficult. In contrast, consumption of bromelain in combination with turmeric may be beneficial in reducing symptoms. The use of ibuprofen or aspirin may increase the probability of an episode in some patients. The use of acetaminophen typically causes a smaller, but still present, increase in the probability of an episode. Diagnosis: The diagnosis is made on the clinical picture. Routine blood tests (complete blood count, electrolytes, kidney function, liver enzymes) are typically performed. Mast cell tryptase levels may be elevated if the attack was due to an acute allergic (anaphylactic) reaction. When the patient has been stabilized, particular investigations may clarify the exact cause; complement levels, especially depletion of complement factors 2 and 4, may indicate deficiency of C1-inhibitor.
HAE type III is a diagnosis of exclusion consisting of observed angioedema along with normal C1 levels and function. Diagnosis: The hereditary form (HAE) often goes undetected for a long time, as its symptoms resemble those of more common disorders, such as allergy or intestinal colic. An important clue is the failure of hereditary angioedema to respond to antihistamines or steroids, a characteristic that distinguishes it from allergic reactions. It is particularly difficult to diagnose HAE in patients whose episodes are confined to the gastrointestinal tract. Besides a family history of the disease, only a laboratory analysis can provide final confirmation. In this analysis, it is usually a reduced complement factor C4, rather than the C1-INH deficiency itself, that is detected. The former is used during the reaction cascade in the complement system of immune defense, which is permanently overactive due to the lack of regulation by C1-INH. Diagnosis: Angioedema is classified as either hereditary or acquired. Diagnosis: Acquired angioedema Acquired angioedema (AAE) can be immunologic, nonimmunologic, or idiopathic. It is usually caused by allergy and occurs together with other allergic symptoms and urticaria. It can also occur as a side effect to certain medications, particularly ACE inhibitors. It is characterized by repetitive episodes of swelling, frequently of the face, lips, tongue, limbs, and genitals. Edema of the gastrointestinal mucosa typically leads to severe abdominal pain; in the upper respiratory tract, it can be life-threatening. Diagnosis: Hereditary angioedema Hereditary angioedema (HAE) exists in three forms, all of which are caused by a genetic mutation inherited in an autosomal dominant form. They are distinguished by the underlying genetic abnormality. Types I and II are caused by mutations in the SERPING1 gene, which result in either diminished levels of the C1-inhibitor protein (type I HAE) or dysfunctional forms of the same protein (type II HAE). Type III HAE has been linked with mutations in the F12 gene, which encodes the coagulation protein factor XII. All forms of HAE lead to abnormal activation of the complement system, and all forms can cause swelling elsewhere in the body, such as the digestive tract. If HAE involves the larynx, it can cause life-threatening asphyxiation. The pathogenesis of this disorder is suspected to be related to unopposed activation of the contact pathway by the initial generation of kallikrein and/or clotting factor XII by damaged endothelial cells. The end product of this cascade, bradykinin, is produced in large amounts and is believed to be the predominant mediator leading to increased vascular permeability and vasodilation that induces typical angioedema "attacks". Management: Allergic In allergic angioedema, avoidance of the allergen and use of antihistamines may prevent future attacks. Cetirizine is a commonly prescribed antihistamine for angioedema. Some patients have reported success with the combination of a nightly low dose of cetirizine to moderate the frequency and severity of attacks, followed by a much higher dose when an attack does appear. Severe angioedema cases may require desensitization to the putative allergen, as mortality can occur. Chronic cases require steroid therapy, which generally leads to a good response. In cases where allergic attack is progressing towards airway obstruction, epinephrine may be life-saving. Management: Drug induction ACE inhibitors can induce angioedema. 
ACE inhibitors block the enzyme ACE so it can no longer degrade bradykinin; thus, bradykinin accumulates and can cause angioedema. This complication appears more common in African-Americans. In people with ACE inhibitor angioedema, the drug needs to be discontinued and an alternative treatment needs to be found, such as an angiotensin II receptor blocker (ARB), which has a similar mechanism but does not affect bradykinin. However, this is controversial, as small studies have shown some patients with ACE inhibitor angioedema can develop it with ARBs, as well. Management: Hereditary In hereditary angioedema (HAE), specific stimuli that have previously led to attacks may need to be avoided in the future. It does not respond to antihistamines, corticosteroids, or epinephrine. Acute treatment consists of C1-INH (C1-esterase inhibitor) concentrate from donor blood, which must be administered intravenously. In an emergency, fresh frozen blood plasma, which also contains C1-INH, can also be used. However, in most European countries, C1-INH concentrate is only available to patients who are participating in special programmes. The medications ecallantide and icatibant may be used to treat attacks. In 2017 these medications cost between US$5,700 and US$14,000 per dose in the United States, prices that had tripled in two years. In those given icatibant, specialist monitoring is recommended. Management: Acquired In acquired angioedema, HAE types I and II, and nonhistaminergic angioedema, antifibrinolytics such as tranexamic acid or ε-aminocaproic acid may be effective. Cinnarizine may also be useful because it blocks the activation of C4 and can be used in patients with liver disease, whereas androgens cannot. Prophylaxis Future attacks of HAE can be prevented by the use of androgens such as danazol, oxandrolone or methyltestosterone. These agents increase the level of aminopeptidase P, an enzyme that inactivates kinins; kinins (especially bradykinin) are responsible for the manifestations of angioedema. In 2018, the U.S. Food and Drug Administration approved lanadelumab, an injectable monoclonal antibody, to prevent attacks of HAE types I and II in people over age 12. Lanadelumab inhibits the plasma enzyme kallikrein, which liberates the kinins bradykinin and kallidin from their kininogen precursors and is produced in excess in individuals with HAE types I and II. Epidemiology: In the U.S., there are as many as 80,000 to 112,000 emergency department (ED) visits for angioedema annually, and it ranks as the top allergic disorder resulting in hospitalization. History: Heinrich Quincke first described the clinical picture of angioedema in 1882, though there had been some earlier descriptions of the condition. William Osler remarked in 1888 that some cases may have a hereditary basis; he coined the term "hereditary angio-neurotic edema". The link with C1 esterase inhibitor deficiency was proved in 1963.
**CUTEr** CUTEr: CUTEr (Constrained and Unconstrained Testing Environment, revisited) is an open source testing environment for optimization and linear algebra solvers. CUTEr provides a collection of test problems along with a set of tools to help developers design, compare, and improve new and existing solvers. CUTEr is the successor of the original Constrained and Unconstrained Testing Environment (CUTE) of Bongartz, Conn, Gould and Toint. It provides support for a larger number of platforms and operating systems as well as a more convenient optimization toolbox. CUTEr: The test problems provided in CUTEr are written in Standard Input Format (SIF). A decoder to convert from this format into well-defined subroutines and data files is available as a separate package. Once translated, these files may be manipulated to provide tools suitable for testing optimization packages. Ready-to-use interfaces to existing packages, such as IPOPT, MINOS, SNOPT, filterSQP, Knitro and more, are provided. The problems in the CUTE subset are also available in the AMPL format. More than 1000 problems are available in the collection, including problems in linear programming, convex and nonconvex quadratic programming, linear and nonlinear least squares, and more general convex and nonconvex large-scale and sparse equality- and inequality-constrained nonlinear programming. Over time, the CUTEr test set has become the de facto standard benchmark for research and production-level optimization solvers, and is used and cited in numerous published research articles. The SIF is a superset of the original MPS format for linear programming and of its extension QPS for quadratic programming. Therefore, access to problem collections such as the Netlib linear programs and the Maros and Meszaros convex quadratic programs is possible. Moreover, the collection covers the Argonne test set, the Hock and Schittkowski collection, the Dembo network problems, the Gould QPs, and others. CUTEr: CUTEr is available on a variety of UNIX platforms, including Linux and Mac OS X, and is designed to be accessible and easily manageable on heterogeneous networks.
**Sigma factor** Sigma factor: A sigma factor (σ factor or specificity factor) is a protein needed for initiation of transcription in bacteria. It is a bacterial transcription initiation factor that enables specific binding of RNA polymerase (RNAP) to gene promoters. It is homologous to archaeal transcription factor B and to eukaryotic factor TFIIB. The specific sigma factor used to initiate transcription of a given gene will vary, depending on the gene and on the environmental signals needed to initiate transcription of that gene. Selection of promoters by RNA polymerase is dependent on the sigma factor that associates with it. Sigma factors are also found in plant chloroplasts as a part of the bacteria-like plastid-encoded polymerase (PEP). The sigma factor, together with RNA polymerase, is known as the RNA polymerase holoenzyme. Every molecule of RNA polymerase holoenzyme contains exactly one sigma factor subunit, which in the model bacterium Escherichia coli is one of those listed below. The number of sigma factors varies between bacterial species. E. coli has seven sigma factors. Sigma factors are distinguished by their characteristic molecular weights. For example, σ70 is the sigma factor with a molecular weight of 70 kDa. Sigma factor: The sigma factor in the RNA polymerase holoenzyme complex is required for the initiation of transcription; once that stage is finished, it dissociates from the complex and the RNAP continues elongation on its own. Specialized sigma factors: Different sigma factors are utilized under different environmental conditions. These specialized sigma factors bind the promoters of genes appropriate to the environmental conditions, increasing the transcription of those genes. Specialized sigma factors: Sigma factors in E. coli: σ70 (RpoD) – σA – the "housekeeping" sigma factor, also called the primary sigma factor (Group 1), transcribes most genes in growing cells. Every cell has a "housekeeping" sigma factor that keeps essential genes and pathways operating. In the case of E. coli and other gram-negative rod-shaped bacteria, the "housekeeping" sigma factor is σ70. Genes recognized by σ70 all contain similar promoter consensus sequences consisting of two parts. Relative to the DNA base corresponding to the start of the RNA transcript, the consensus promoter sequences are characteristically centered at 10 and 35 nucleotides before the start of transcription (−10 and −35). Specialized sigma factors: σ19 (FecI) – the ferric citrate sigma factor, regulates the fec genes for iron transport and metabolism σ24 (RpoE) – the extreme heat stress response and extracellular proteins sigma factor σ28 (RpoF/FliA) – the flagellar synthesis and chemotaxis sigma factor σ32 (RpoH) – the heat shock sigma factor, turned on when the bacteria are exposed to heat. Because of its increased expression, this factor binds the polymerase core enzyme with high probability, so that heat shock proteins are expressed, enabling the cell to survive higher temperatures. Some of the enzymes that are expressed upon activation of σ32 are chaperones, proteases and DNA-repair enzymes. Specialized sigma factors: σ38 (RpoS) – the starvation/stationary phase sigma factor σ54 (RpoN) – the nitrogen-limitation sigma factor There are also anti-sigma factors that inhibit the function of sigma factors and anti-anti-sigma factors that restore sigma factor function. Structure: By sequence similarity, most sigma factors are σ70-like (InterPro: IPR000943).
They have four main regions (domains) that are generally conserved, arranged from the N-terminus to the C-terminus as regions 1.1, 2, 3 and 4. The regions are further subdivided; for example, region 2 includes 1.2 and 2.1 through 2.4. Structure: Domain 1.1 is found only in "primary sigma factors" (RpoD, RpoS in E. coli; "Group 1"). It is involved in ensuring the sigma factor will only bind the promoter when it is complexed with the RNA polymerase. Domains 2-4 each interact with specific promoter elements and with RNAP. Region 2.4 recognizes and binds to the promoter −10 element (called the "Pribnow box"). Region 4.2 recognizes and binds to the promoter −35 element. Not every sigma factor of the σ70 family contains all the domains. Group 2, which includes RpoS, is very similar to Group 1 but lacks domain 1. Group 3 also lacks domain 1, and includes σ28. Group 4, also known as the extracytoplasmic function (ECF) group, lacks both σ1.1 and σ3. RpoE is a member. Structure: Other known sigma factors are of the σ54/RpoN (InterPro: IPR000394) type. They are functional sigma factors, but they have significantly different primary amino acid sequences. Retention during transcription elongation: The core RNA polymerase (consisting of 2 alpha (α), 1 beta (β), 1 beta-prime (β'), and 1 omega (ω) subunits) binds a sigma factor to form a complex called the RNA polymerase holoenzyme. It was previously believed that the RNA polymerase holoenzyme initiates transcription, while the core RNA polymerase alone synthesizes RNA. Thus, the accepted view was that sigma factor must dissociate upon transition from transcription initiation to transcription elongation (this transition is called "promoter escape"). This view was based on analysis of purified complexes of RNA polymerase stalled at initiation and at elongation. Finally, structural models of RNA polymerase complexes predicted that, as the growing RNA product becomes longer than ~15 nucleotides, sigma must be "pushed out" of the holoenzyme, since there is a steric clash between RNA and a sigma domain. However, σ70 can remain attached in complex with the core RNA polymerase in early elongation and sometimes throughout elongation. Indeed, the phenomenon of promoter-proximal pausing indicates that sigma plays roles during early elongation. All studies are consistent with the assumption that promoter escape reduces the lifetime of the sigma-core interaction from very long at initiation (too long to be measured in a typical biochemical experiment) to a shorter, measurable lifetime upon transition to elongation. Sigma cycle: It had long been thought that the sigma factor obligatorily leaves the core enzyme once it has initiated transcription, allowing it to link to another core enzyme and initiate transcription at another site. Thus, the sigma factor would cycle from one core to another. However, fluorescence resonance energy transfer was used to show that the sigma factor does not obligatorily leave the core. Instead, it changes its binding with the core during initiation and elongation. Therefore, the sigma factor cycles between a strongly bound state during initiation and a weakly bound state during elongation. Sigma factor competition: The number of RNAPs in bacterial cells (e.g., E. coli) has been shown to be smaller than the number of sigma factors.
Consequently, if a certain sigma factor is overexpressed, not only will the expression levels of genes whose promoters have a preference for that sigma factor increase, but the expression of genes whose promoters prefer other sigma factors will also be reduced. Meanwhile, transcription initiation has two major rate-limiting steps: closed complex formation and open complex formation. However, only the dynamics of the first step depend on the concentration of sigma factors. Interestingly, the faster the closed complex formation is relative to the open complex formation, the less responsive a promoter is to changes in sigma factor concentration (a model and empirical data for this phenomenon have been reported). Genes with dual sigma factor preference: While most genes of E. coli can be recognized by an RNAP with one and only one type of sigma factor (e.g. sigma 70), a few genes (~5%) have what is called a "dual sigma factor preference", that is, they can respond to two different sigma factors, as reported in RegulonDB. The most common ones are promoters that can respond to both sigma 70 and sigma 38. Studies of the dynamics of these genes showed that when the cells enter stationary growth they are almost as induced as those genes that have a preference for σ38 alone. This induction level was shown to be predictable from their promoter sequence. In the future, these promoters may become useful tools in synthetic genetic constructs in E. coli.
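The responsiveness argument above can be made concrete with a toy kinetic model: treat initiation as two sequential steps, where only the closed-complex step scales with sigma factor concentration, and compare a promoter with a fast closed-complex step against one with a slow step. All rate constants below are hypothetical illustrations, not measured values:

```python
# Toy model (hypothetical rate constants, not from the article): the
# overall initiation rate of two sequential steps is the inverse of the
# sum of the two step times; only the closed-complex step scales with
# the sigma-holoenzyme concentration.

def initiation_rate(sigma, k_closed, k_open=1.0):
    """Overall rate when only the first (closed-complex) step sees sigma."""
    return 1.0 / (1.0 / (k_closed * sigma) + 1.0 / k_open)

for name, k_closed in [("fast closed-complex step", 10.0),
                       ("slow closed-complex step", 0.1)]:
    fold = initiation_rate(2.0, k_closed) / initiation_rate(1.0, k_closed)
    print(f"{name}: doubling sigma raises the rate {fold:.2f}x")
# fast step: ~1.05x (barely responsive); slow step: ~1.83x (nearly doubles)
```

When the closed-complex step is already fast, the open-complex step dominates the total initiation time, so extra sigma factor has almost nothing left to speed up, which is exactly the insensitivity described in the text.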
**X-linked reticulate pigmentary disorder** X-linked reticulate pigmentary disorder: X-linked reticulate pigmentary disorder is a rare X-linked genetic condition in which males manifest multiple systemic symptoms and a reticulated, mottled brown pigmentation of the skin which, on biopsy, demonstrates dermal deposits of amyloid. Females usually only have linear streaks of hyperpigmentation. The syndrome is also referred to by the acronym X-Linked-PDR or XLPDR. It is a very rare, genetically determined disease with a chronic course. X-linked reticulate pigmentary disorder: It was characterized in 1981. Mutation of the POLA1 gene leads to loss of expression of the catalytic subunit of DNA polymerase-α and is responsible for XLPDR. Loss of POLA1 expression results in reduced levels of RNA:DNA hybrids in the cytosol and unexpectedly triggers aberrant immune responses (e.g. type I interferon production), which at least in part can account for the symptoms associated with XLPDR. Another trigger of the immunodeficiency phenotype is a functional deficiency of NK cells, major players of the innate antiviral immune system. Presentation: Affected males develop generalized reticular hyperpigmentation in early childhood. Hair often looks bedraggled or brushed backward, hanging low on the forehead. In XLPDR, autoimmune manifestations develop due to a chronically activated antiviral type I interferon response, connecting XLPDR with disorders like Aicardi-Goutières syndrome, systemic lupus erythematosus, and psoriasis. Meanwhile, another typical symptom, immunodeficiency, can develop due to a functional defect in the cytolytic activity of NK cells. Starokadomskyy et al. discovered that POLA1 deficiency is associated with decreased direct cytotoxicity of NK cells due to disturbances in vesicular traffic, while antibody-dependent cell cytotoxicity (ADCC) remains unchanged in XLPDR NK cells. The most common manifestations of XLPDR are: recurrent respiratory infections, corneal dyskeratosis, photophobia, hypohidrosis (lack of sweat glands), NK cell functional deficiency, growth retardation, gastrointestinal disorders, kidney disease, kidney stones, urinary infections, webbed feet or hands, electrolyte imbalance, retinitis pigmentosa, lymphoedema, and thyroid abnormalities. Not every patient shows all of the listed symptoms. However, skin pathologies, recurrent lung infections, a high titer of type I interferon in the blood, and impaired direct cytotoxicity of NK cells are the most common symptoms. In females the disease is characterized by linear hyperpigmentation following Blaschko's lines, morphologically similar to stage 3 pigment incontinence. There are no systemic manifestations associated with XLPDR in females. Presentation: Most XLPDR patients stabilize with age and have an overall less complicated clinical course after adolescence. Gastrointestinal and urinary tract complications become progressively less active, and the pace of infections tends to decrease. However, those who have severe lung damage remain prone to recurrent pneumonia and may succumb to severe infections. Hypohidrosis is irreversible and remains a problem for life. XLPDR patients have normal fertility, and the mutation has been transmitted to their female offspring. Diagnosis: All XLPDR probands share the same unique intronic variant mapping to intron 13 of POLA1 (NM_016937.3:c.1375-354A>G).
XLPDR lacks allelic heterogeneity, meaning that the disorder is uniquely associated with the NM_016937.3:c.1375-354A>G intronic variant. The final diagnosis usually requires PCR or WGS confirmation. Treatment: Management of XLPDR symptoms is largely supportive. Conventional management of recurrent lung infections with antibiotics is essential; many patients receive inhaled prophylactic management akin to that for cystic fibrosis patients. Urethral strictures are treated with sequential dilations. Eye involvement is progressive, leading to blindness, and recurs after corneal transplantation. Treatment: Recently, a number of reports have suggested encouraging results with the use of the JAK inhibitors baricitinib and ruxolitinib in several distinct type I interferonopathies. In fact, one XLPDR patient with refractory colitis was treated with tofacitinib, with a positive response of the colitis and no exacerbation of pulmonary infections. Other options that may be worth considering in the future are interferon receptor neutralizing antibodies, which are being actively pursued in the treatment of lupus, where they show particular promise. A path to definitive treatment for XLPDR is at present unclear, but it is tempting to speculate whether the immunologic disturbance is predominantly driven by the hematopoietic compartment. The clinical course of the eye involvement is consistent with this possibility. If so, the disorder might be amenable to hematopoietic stem cell transplantation and could even be suitable for gene therapy and autologous stem cell transplant.
**Partition-Saving** Partition-Saving: Partition-Saving is a disk imaging utility for Linux, Windows and DOS environments that can save disk partitions in one of several supported disk image formats. This utility was originally called Savepart but was renamed to avoid conflict with a similarly named OS/2 utility. Common uses: Some common uses for Partition-Saving are as follows: Backup of individual disk partitions. Volume backups are very useful for recovery in the case of a disk failure or data corruption. Correction of boot parameters such as boot sector content or the Windows boot configuration. Features: Partition-Saving has the following features: Backup of any partition type (sector by sector) Backup of FAT12, FAT16, FAT32, Ext2, Ext3, Ext4 (not all options), NTFS partitions with only occupied sectors (not a file-by-file backup, but similar in size while keeping the disk organization) Backup of the Master Boot Record, the partition table (both MBR and GPT format), FAT boot sector content and the superblock Compression of data Saving a partition over itself (in case there is only one partition on the disk) Mounting a backup file to extract only some files Modification of the Windows Registry to force a partition drive letter Modification of some filesystem content: boot sector, Windows multi-boot boot sector, Windows boot configuration, boot sector and superblock backup, bad clusters list It can be used either through the command line, a text-based interface or batch processing mode. Limitations: Partition-Saving has the following limitations: Backup of a running OS is not possible (except for DOS): this means one needs to boot from another OS or from a Live CD (a FreeDOS one is provided) to back up a Linux or Windows system partition When a full backup is performed, restoration can only be done on a partition of the same size and at the same place on the disk. The -force option can be used to work around this, but no correction will be done on the partition content to reflect the incompatibility (such as FAT boot sector content) When only occupied sectors are saved, restoration can be done on a partition of different size, but with limitations on this size Creating backup files on an NTFS drive from DOS (and from Linux if the Linux system cannot write to NTFS drives) is not available, but modifying an existing file can be used. So if you need it, you can create dummy files from Windows, then use them from DOS to perform the backup
**Name calling** Name calling: Name-calling is a form of argument in which insulting or demeaning labels are directed at an individual or group. This phenomenon is studied by a variety of academic disciplines such as anthropology, child psychology, and political science. It is also studied by rhetoricians, and a variety of other disciplines. In politics and public opinion: Politicians sometimes resort to name-calling during political campaigns or public events with the intentions of gaining advantage over, or defending themselves from, an opponent or critic. Often such name-calling takes the form of labelling an opponent as an unreliable and untrustworthy source, such as use of the term "flip-flopper". Common misconceptions: Gratuitous verbal abuse or "name-calling" is not on its own an example of the abusive argumentum ad hominem logical fallacy. The fallacy occurs only if personal attacks are employed to devalue a speaker's argument by attacking the speaker; personal insults in the middle of an otherwise sound argument are not fallacious ad hominem attacks.
**Fire cut** Fire cut: In the construction of masonry buildings, a fire cut or fireman's cut is a diagonal chamfer of the end of a joist or beam where it enters a masonry wall. If the joist burns through somewhere along its length, damage to the wall is prevented as the fire cut allows the joist to fail and still leave the masonry wall standing. Fire cut: Without fire-cut joists, a burnt joist that fails would rotate its unchamfered end as it deflects downwards, damaging the masonry wall at the connection point and possibly pulling the wall inwards.
**Octatetraene** Octatetraene: In organic chemistry, octatetraene is a linear hydrocarbon consisting of a chain of eight carbon atoms linked by an alternating double-bond/single-bond pattern. The central two of the four alkene units can exhibit cis–trans isomerism, resulting in three isomers. Octatetraene: The compounds are not in general of much commercial significance, but the octatetraene group has been studied in the context of the physical chemistry of bonds, some aspects of which have relevance to cell membranes and some to the retinal chemistry of vision. The high degree of symmetry and conjugation of the bonds offers unusual aspects for study. Related structures occur in some molecules of biological importance, for example α-parinaric acid and polyunsaturated fatty acids. Derivatives include cyclic octatetraenes such as cyclooctatetraene, as well as substituted chains such as 1,8-diphenyl-1,3,5,7-octatetraene, some of which are of interest in special contexts.
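The three isomers mentioned above can be made concrete with a short sketch. The following minimal Python example uses the open-source RDKit toolkit (an assumption for illustration; the article names no software) to enumerate the cis–trans assignments of the two central double bonds:

```python
# Minimal sketch: enumerate the cis-trans isomers of 1,3,5,7-octatetraene.
# Assumes the RDKit cheminformatics toolkit is installed (pip install rdkit);
# RDKit is not part of the article and is used here only for illustration.
from rdkit import Chem
from rdkit.Chem.EnumerateStereoisomers import EnumerateStereoisomers

# The bare carbon skeleton, with no double-bond geometry assigned yet.
# Only the two central double bonds are stereogenic; the terminal
# CH2= units cannot show cis-trans isomerism.
backbone = Chem.MolFromSmiles("C=CC=CC=CC=C")

# Enumerate every cis/trans assignment, then deduplicate the
# symmetry-equivalent copies via canonical SMILES.
isomers = {Chem.MolToSmiles(m) for m in EnumerateStereoisomers(backbone)}
for smiles in sorted(isomers):
    print(smiles)
# Expect three distinct structures, corresponding to the
# (3E,5E), (3Z,5E) and (3Z,5Z) isomers.
```

Deduplication is needed because the raw enumeration produces four assignments, of which (3Z,5E) and (3E,5Z) describe the same molecule by symmetry.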
**Trustworthy Software Foundation** Trustworthy Software Foundation: The Trustworthy Software Foundation (TSFdn) is a UK not-for-profit organisation with the stated aim of improving software. History: TSFdn evolved from a number of previous activities: a study by the Cabinet Office, Central Sponsor for Information Assurance (CSIA), in 2004-5, which identified a pervasive lack of secure software development practices as a matter for concern; a Department of Trade and Industry (DTI, predecessor of BIS) Global Watch Report in 2006, which noted a relative lack of secure software development practices in the UK; the Technology Strategy Board (TSB) Cyber Security Knowledge Transfer Network (CSKTN) Special Interest Group (SIG) on Secure Software Development (SSD, 2007–8); the TSB / Foreign and Commonwealth Office (FCO) Science and Innovation Network (SIN) multinational workshop “Challenges to building in … information security, privacy and assurance”, held in Paris in March 2009; the Secure Software Development Partnership (SSDP) Study Period, funded jointly by the UK government's TSB and the Centre for the Protection of National Infrastructure (CPNI), which ran in 2009-2010; and the Trustworthy Software Initiative (TSI, originally the Software Security, Dependability and Resilience Initiative, SSDRI), a UK public-good activity sponsored by CPNI between 2011 and 2016. Objectives: TSFdn primarily aims to provide a living backbone for signposting to diverse but often obscure sources of good practice, with a secondary objective to address other aspects of the 2009 Trustworthy Software Roadmap. Trustworthiness: TSI considers that there are five facets of trustworthiness: safety, the ability of the system to operate without harmful states; reliability, the ability of the system to deliver services as specified; availability, the ability of the system to deliver services when requested; resilience, the ability of the system to transform, renew, and recover in timely response to events; and security, the ability of the system to remain protected against accidental or deliberate attacks. This definition of trustworthiness is an extension of a widely used definition of dependability, adding resilience as a fifth facet based on the UK Government approach. Governance and Operation: TSFdn operates as a not-for-profit company limited by guarantee, jointly owned by the subscriber organisations (UK professional bodies). It is based at the Cyber Security Centre of the University of Warwick, and is formally linked to a cross-section of stakeholders through the Advisory Committee on Trustworthy Software (ACTS). The Technical Lead remains Ian Bryant, the Technical Director of the predecessor TSI, and the Chair of ACTS is Sir Edmund Burton KBE, who was the President of the predecessor TSI. Activities: Updating its Trustworthy Software Framework (TSFr), originally published as British Standards (BS) Publicly Available Specification (PAS) 754, into a British Standard (through BSI Project Committee ICT/00-/09, chaired by Ian Bryant); and continuing to engage with partners for the promulgation of software trustworthiness across education, in particular through the IAP, BCS and the IET.
**EuFOD** EuFOD: EuFOD is the chemical compound with the formula Eu(OCC(CH3)3CHCOC3F7)3, also called Eu(fod)3. This coordination compound is used primarily as a shift reagent in NMR spectroscopy. It is the premier member of the lanthanide shift reagents and was popular in the 1970s and 1980s. Structure and reactivity: Eu(fod)3 consists of three bidentate ligands bound to a Eu(III) center. The metal atom has an electron configuration of f6. The six electrons are unpaired, each in a different singly occupied f orbital, which makes the molecule highly paramagnetic. In contrast, Gd(fod)3, with a symmetrical f7 configuration, does not give rise to pseudocontact shifts. The complex is a Lewis acid, capable of expanding its coordination number from six to eight. The complex displays a particular affinity for "hard" Lewis bases, such as the oxygen atom in ethers and the nitrogen of amines. It is soluble in nonpolar solvents, even more so than related complexes of acetylacetone and hexafluoroacetylacetone. The fod ligand is the anion of the commercially available 6,6,7,7,8,8,8-heptafluoro-2,2-dimethyl-3,5-octanedione. It is a bidentate acetylacetonate ligand prepared from heptafluorobutyric acid (PFBA). It chelates with lanthanides to form Ln(fod)3 for Ln = Nd, Sm, Eu, Tb, and Lu. Uses: NMR shift reagent In its original application, Eu(fod)3 was used in NMR spectroscopy to gain additional chemical shift dispersion. As is typical in paramagnetic NMR spectroscopy, the paramagnetic compound induces an additional chemical shift in the protons near those Lewis basic sites that bind to Eu(fod)3. Only small amounts of shift reagents are used, because otherwise the paramagnetism of the reagent shortens the spin-lattice relaxation times of the nuclei, which causes uncertainty broadening and loss of resolution. The availability of higher magnetic field spectrometers has lowered the demand for NMR shift reagents. Uses: The original shift reagent was Eu(DPM)3, also called Eu(thd)3. Its structure is similar to EuFOD, but with tert-butyl groups in place of heptafluoropropyl substituents. That is, DPM− is the conjugate base derived from dipivaloylmethane, also known as 2,2,6,6-tetramethylheptane-3,5-dione. The ligand fod− is more lipophilic, and by virtue of the perfluoroalkyl substituent, its complexes are more Lewis acidic than those derived from DPM−. Uses: Lewis acid Eu(fod)3 serves as a Lewis acid catalyst in organic synthesis, including stereoselective Diels-Alder and aldol addition reactions. For example, Eu(fod)3 catalyzes the cyclocondensations of substituted dienes with aromatic and aliphatic aldehydes to yield dihydropyrans, with high selectivity for the endo product.
**Nanodumbbell** Nanodumbbell: A nanodumbbell is a pair of spheres attached together that may be made of silica or zinc oxide. They have been used in a Purdue University experiment where they were made to spin in a vacuum at 60 billion rotations per minute. Description: The nanodumbbells are first created in the lab using a hydrothermal process. The resulting dumbbell consists of two joined silica spheres, making it 320 nanometers long and around 170 nanometers wide. Nanodumbbells are also being studied for possible use in photodynamic therapy, a way of treating cancer. Experiment: A highly focused, circularly polarized laser beam bombards the levitated dumbbell to set it spinning. Previous records: The speed of the rotation is a world record that beats previous records. In 2008, a small motor rotated at 1 million rotations per minute. In 2010, a slice of graphene was made to spin at 60 million rotations per minute. Around 2013, a sphere measuring just 4 micrometers was spun at 600 million rotations per minute.
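For scale, the quoted record rate converts to SI units as follows (a simple worked conversion added for illustration; the input figure is the one given above):

```python
import math

# Convert the reported record spin rate to SI units.
rpm = 60e9                       # 60 billion rotations per minute
freq_hz = rpm / 60.0             # revolutions per second: 1.0e9 Hz (1 GHz)
omega = 2.0 * math.pi * freq_hz  # angular velocity in rad/s
print(f"{freq_hz:.2e} Hz, {omega:.2e} rad/s")  # ~1.00e9 Hz, ~6.28e9 rad/s
```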
**Motorola Homesight** Motorola Homesight: Motorola Homesight is a brand name for a range of home security and automation products marketed in the U.S. and UK, which include separate items and a product package. The latter is marketed and sold as a kit, and the product range offers flexibility in choice of accessories with which the customer can expand the kit into a more complex system. Description: As per Motorola's marketing material, the system uses a broadband Internet connection in order for the customer to stay connected to their home and its other residents through the kit owner's computer or compatible mobile phone. The system alerts and/or notifies the customer, and up to seven other customer-designated persons, of sensor-detected activities. Technical disadvantages: Although the XG1000 gateway does NOT require a host PC (the system needs a PC only to be configured, not for normal steady-state operations), the less expensive USB gateway system requires that its user's computer be turned on all the time. Such an always-on requirement is a drawback in places with old electrical equipment and wiring (which may cause fires if cables or connections fail) or unstable power delivery. The latter may be mitigated by the use of a UPS for the computer and some combination of backup power sources, such as a diesel generator, solar panels, a wind turbine or other off-the-grid energy sources. Installation and maintenance costs may vary. Technical disadvantages: OS compatibility The Homesight Wireless Easy Start Kit (HMEZ2000 and similar) software does not run on any variety of Macintosh or Linux/UNIX OS. Feedback received from both Motorola and the UK distributor (myhome247) is that there are no plans to provide a driver for Macintosh or Linux/UNIX OSes. The latest version of the Homesight software does run on Windows Vista and is available from www.myhome247.co.uk. The XG1000 base unit is completely stand-alone (it does not require a USB Internet-connected host) and does not require an always-on computer, but costs $200 vs $100. There is currently no supported functionality for Windows 7 or Windows 8. Homesight Remote: Homesight Remote is a FOSS-licensed web interface to Homesight and is hosted at SourceForge. Information about its licensing is conflicting, as the home page for the project links to the LGPL, while the project page shows the GPL as its license. The Homesight Remote interface only runs on Windows XP. Competing systems: AT&T Remote Monitor: similar packaging and marketing as Motorola Homesight (through Xanboo), but the controller and camera (Panasonic BL-C10) are different. The controller and camera can connect directly to a home network router (which in most cases can then be accessed through the Internet) without requiring a PC, though the list of remote automation devices seems smaller (shutoff key, alarm horn, etc. do not appear to be offered with this system). Competing systems: Home Heartbeat INSTEON and Smarthome of SmartLabs Inc. Z-Wave
**Parthenocissus laetevirens** Parthenocissus laetevirens: Parthenocissus laetevirens is a climbing plant species in the genus Parthenocissus found in China. Parthenocissus laetevirens contains the stilbene oligomers laetevirenol A, B, C, D and E, the stilbene tetramers laetevirenol F and G, as well as the resveratrol dimers parthenocissin A, quadrangularin A, pallidol and amurensin A.
**India-based Neutrino Observatory** India-based Neutrino Observatory: The India-based Neutrino Observatory (INO) is a particle physics research project under construction, primarily to study atmospheric neutrinos in a 1,200 meters (3,900 ft) deep cave under INO Peak near Theni, Tamil Nadu, India. This project is notable in that it is anticipated to provide a precise measurement of neutrino mixing parameters. The project is a multi-institute collaboration and one of the biggest experimental particle physics projects undertaken in India. The project, originally to be completed in 2015 at an estimated cost of ₹1,500 crores (₹15 billion or US$209.7 million), has been cleared by the Ministry of Environment (India) for construction in the Bodi West Hills Reserved Forest in the Theni district of Tamil Nadu. Although delayed, the project was underway as of 2015. When completed, the main magnetised iron calorimeter (ICAL) experiment will include the world's largest magnet, four times larger than the 12,500-tonne magnet in the Compact Muon Solenoid detector at CERN in Geneva, Switzerland. Iron Calorimeter (ICAL) Detector: The main experiment proposed at INO is the iron calorimeter detector, which aims to probe the Earth matter effects on the propagation of atmospheric neutrinos and to determine the neutrino oscillation parameters in the 2-3 oscillation sector. ICAL will be a 50,000-tonne magnetised detector with iron as the passive detector element and resistive plate chambers (RPCs) as the active detector elements; i.e., the neutrinos will interact with the iron to produce final-state particles, and the RPCs will detect the charged final-state particles and record signals whose position and timing information helps reconstruct the tracks and/or showers, and thus the energies and directions of the final-state particles and of the incident neutrino. Iron Calorimeter (ICAL) Detector: The ICAL design is mostly based on the Monolith detector [1]. The ICAL detector will have three modules; each module will have 151 layers of iron and 150 layers of RPCs, stacked one over the other. The dimensions of the entire detector will be 48 m × 16 m × 14.5 m. Owing to its huge size, the detector will require around 30,000 glass RPCs for charged-particle detection (a rough arithmetic check of this figure is sketched at the end of this entry). ICAL, being a neutrino detector, will be situated underground to reduce the cosmic-ray muon signal. Iron Calorimeter (ICAL) Detector: The location of INO has attracted a lot of attention from the neutrino physics community, as the distance between INO and CERN is very close to the "magic baseline", a distance at which the effect of the CP phase on the measurement of θ13 is minimal. But the major physics advantage of INO ICAL is its ability to measure the neutrino mass hierarchy by studying atmospheric neutrinos. Currently ICAL is the only proposed magnetised detector which can resolve the mass hierarchy by studying the survival of muon neutrinos and anti-neutrinos. Iron Calorimeter (ICAL) Detector: The primary goals of the ICAL are the following: unambiguous and precise determination of neutrino oscillation parameters using atmospheric neutrinos; study of matter effects through electric charge identification, which may lead to the determination of the unknown sign of one of the mass differences; study of charge-conjugation and charge-parity (CP) violation in the leptonic sector, as well as possible charge-conjugation, parity, time-reversal (CPT) violation studies;
and study of Kolar events and possible identification of very-high-energy neutrinos and multi-muon events. Unlike the Monolith experiment, the ICAL detector will have iron plates of thickness 5.6 centimeters (2.2 in) as passive detectors, with glass RPCs in between as active detectors. A prototype of the ICAL detector with 14 layers, measuring 1 m × 1 m × 1 m, is already operational at the VECC, Kolkata. The 35-ton prototype is set up above ground to track cosmic muons. Iron Calorimeter (ICAL) Detector: In 2008, INO started a graduate training programme leading to a PhD degree in high energy physics, to provide expert training to students in the areas of detector building and neutrino physics. A prototype called mini-ICAL, with 1/600 of the weight of ICAL, has been constructed to gain experience in the building of a large-scale electromagnet, to study the detector performance, and to test the ICAL electronics in the presence of fringe magnetic fields. This 4 m × 4 m × 1.1 m detector has 11 iron layers, and 20 RPCs of 1.95 m × 1.92 m have been inserted in the 10 gaps between layers, in the central region. Mini-ICAL has been in operation since 2018 and is collecting cosmic-ray muon data. Participating institutes: A memorandum of understanding (MoU) spelling out the operational aspects of the project and the mode of utilisation of available funds was signed by seven primary project partners: Tata Institute of Fundamental Research (TIFR), Mumbai; Bhabha Atomic Research Centre (BARC), Mumbai; Institute of Mathematical Sciences (IMSc), Chennai; Saha Institute of Nuclear Physics (SINP), Kolkata; Variable Energy Cyclotron Centre (VECC), Kolkata; Harish Chandra Research Institute (HRI), Allahabad; and Institute of Physics (IOP), Bhubaneswar. Thirteen other project participants include: Aligarh Muslim University, Aligarh; Banaras Hindu University, Varanasi; Calcutta University (CU), Kolkata; Delhi University (DU), Delhi; University of Hawaii (UHW), Hawaii; Himachal Pradesh University (HPU), Shimla; Indian Institute of Technology, Bombay (IITB), Mumbai; Indira Gandhi Centre for Atomic Research (IGCAR), Kalpakkam; North Bengal University (NBU), Siliguri; Panjab University (PU), Chandigarh; Physical Research Laboratory (PRL), Ahmedabad; Sálim Ali Centre for Ornithology and Natural History (SACON), Tamil Nadu; and Manipal Institute of Technology, Manipal. History and recent developments in the project: The possibility of a neutrino observatory located in India was discussed as early as 1989, during several meetings held that year. The issue was raised again in the first meeting of the neutrino physics and cosmology working group during the Workshop on High Energy Physics Phenomenology (WHEPP-6), held at Chennai in January 2000, and it was decided then to collate concrete ideas for a neutrino detector. History and recent developments in the project: Further discussions took place in August 2000 during a meeting on neutrino physics at the Saha Institute of Nuclear Physics, Kolkata, when a small group of neutrino physics enthusiasts started discussing the possibilities. The Neutrino 2001 meeting was held at the Institute of Mathematical Sciences, Chennai, during February 2001, with the explicit objective of bringing the experimentalists and theorists in this field together. The INO collaboration was formed during this meeting.
The first formal meeting of the collaboration was held at the Tata Institute of Fundamental Research, Mumbai, on 6 and 7 September 2001, at which various subgroups were formed for studying the detector options and electronics, physics goals and simulations, and the site survey. History and recent developments in the project: In 2002, a document was presented to the Department of Atomic Energy (DAE) which laid out an ambitious goal of establishing an India-based Neutrino Observatory, outlining the physics goals, possible choices for the detector, and their physics. Since then many new and fast-paced developments have taken place in neutrino physics. The award of the Nobel Prize in Physics (2002) to the pioneers in neutrino physics is a measure of the importance of this field. History and recent developments in the project: As a result of the support received from various research institutes, universities, the scientific community and the funding agency, the Department of Atomic Energy, a Neutrino Collaboration Group (NCG) was established to study the possibility of building an India-based Neutrino Observatory (INO). The collaboration was assigned the task of carrying out the feasibility studies, for which funds were made available by the DAE. A memorandum of understanding (MoU) was signed by the directors of the participating institutes on August 30, 2002, to enable smooth functioning of the NCG during the feasibility period. The NCG has the goal of creating an underground neutrino laboratory for the long-term purpose of conducting decisive experiments in neutrino physics, as well as other experiments that require such a unique underground facility. On 20 November 2009, Ministry of Environment (India) Minister Jairam Ramesh, in a letter to Anil Kakodkar, Secretary, Department of Atomic Energy and Chairman, Atomic Energy Commission of India, denied permission for the Department of Atomic Energy to set up the India-based Neutrino Observatory (INO) project at Singara in the Nilgiris, as it falls in the buffer zone of the Mudumalai Tiger Reserve (MTR). Jairam Ramesh said that, based on the report of Rajesh Gopal, Additional Principal Chief Conservator of Forests (PCCF) and Member-Secretary of the National Tiger Conservation Authority (MS-NTCA), the Ministry could not approve the Singara site. The report says: "The proposed project site falls in the buffer zone of Mudumalai Tiger Reserve and is in close proximity to the core/critical tiger habitats of Bandipur and Mudumalai Tiger reserves. It is also an elephant corridor, facilitating elephant movement from the Western Ghats to the Eastern Ghats and vice versa. The area is already disturbed on account of severe biotic pressure due to human settlements and resorts, and the construction phase of the project would involve transport of building materials through the highways passing through the core area of the Bandipur and Mudumalai Tiger Reserves." History and recent developments in the project: Instead, he suggested an alternative site near Suruli Falls, Theni District, in Tamil Nadu. The Minister said this site did not pose the same problems that Singara posed, and environmental and forest clearances should not be a serious issue. He also assured the DAE that the Ministry would facilitate the necessary approvals for the alternative location. History and recent developments in the project: Dr.
Naba K Mondal of the Tata Institute of Fundamental Research, the spokesperson for the INO project, said: "But Suruliyar too is in a reserved forest area that is dense and would require cutting down of trees, something that was not required at Singara. Can the government assure us that forest clearance for this site will be given," he asks. "Alternatively, we can move to the nearby Thevaram, which is about 20–30 km away from the Suruliyar falls. This forest area has only shrubs, but there is no source of water here and water will have to be piped over a distance of 30 km." On 18 October 2010, the Ministry of Environment & Forests approved both environmental and forest clearance for setting up the observatory in the Bodi West Hills Reserved Forest in the Theni district of Tamil Nadu. History and recent developments in the project: As of February 2012, the land had been allocated to the INO collaboration by the government of Tamil Nadu and the excavation work was about to start. Naba K Mondal, chief spokesperson of the INO project and a senior scientist at the Tata Institute of Fundamental Research, Mumbai, told The Hindu that the pre-project work would start in April 2012 and that ₹66 crores had been sanctioned for the work. The first task would be road connectivity from Rasingapuram to Pottipuram village. The project was expected to be completed in 2015 at an estimated cost of ₹1,500 crores. On 18 September 2012, Kerala’s octogenarian Opposition leader and CPI(M) central committee member VS Achuthanandan expressed anxiety over establishing a neutrino observatory on the Theni-Idukki border between Tamil Nadu and Kerala, citing environmental and radiological issues. The INO collaboration soon responded to all the issues raised by him, and the responses are on the INO website. History and recent developments in the project: On 5 January 2015, the Union Cabinet headed by Prime Minister Narendra Modi approved the setting up of the India-based Neutrino Observatory (INO). On 20 February 2015, the southern bench of the National Green Tribunal issued notices to the central and state governments on a petition challenging the environmental clearance granted to the India-based Neutrino Observatory (INO) project. On 26 March 2015, the Madurai bench of the Madras High Court restrained the central government from commencing work on the proposed India-based Neutrino Observatory (INO), directing the government to get permission from the Tamil Nadu Pollution Control Board (TNPCB) before commencing the work. On 19 March 2018, the Ministry of Environment (India) overturned the NGT verdict as a special case. The approval is only conditional, and it needs the consent of the Tamil Nadu Pollution Control Board and the National Board for Wildlife. The approval was granted under category B of the Schedule to the “Environmental Impact Assessment” (EIA) Notification, 2006, but it should ideally have been treated as category A, as the project lies just 4.9 km from an eco-sensitive national park. Additionally, the EIA was done by the Salim Ali Centre for Ornithology and Natural History, which is an “unaccredited agency”. Throughout this process, villagers in the Pottipuram Panchayat have been agitating against the proposed observatory under the banner of Poovulagin Nanbargal (Friends of the Earth). A spokesman for the organization expressed concern over the lakhs of tons of rock that would be blasted inside the mountain to create the observatory, which he said had the potential to cause ground tremors.
In addition, he expressed concerns about potential radiation from the project and general harm to the ecosystem. In January 2020, villagers in Pottipuram passed a resolution against building the INO in their area, citing the potential for ecological damage to the Western Ghats. As of July 2021, the INO project's construction had not started and the project was described as "stalled". The plans for the facility and experimental apparatus have been made, a site for the realization of the project has been chosen, and a budget has been proposed (though whether it was approved is unclear), but obtaining permission to start actual building at the chosen site in Pottipuram village in Tamil Nadu state has not succeeded, with opposition from the local villagers, government officials and, most notably, environmentalists and government environment agencies; the environment agencies' approval is needed for the construction to start. Indeed, on June 17, 2021, Tamil Nadu chief minister M.K. Stalin met Prime Minister Narendra Modi and suggested the INO project be shelved or shifted elsewhere. Stalin’s suggestion was based on the advice of the State (of Tamil Nadu) Forest and Environment Department. It is not known when construction might start, or whether the project will be realised at all. The project organization, that is, the INO collaboration, nevertheless continues to pursue the INO project. The INO scientists, along with other eminent scientists, wrote a response to Mr. Stalin, arguing for the construction of INO as early as possible. In 2022, the government of the state of Tamil Nadu filed an affidavit in the Supreme Court of India asking that the Union government (i.e., India's national government) call off the INO. It has also been suggested by some that INO has started to suffer a lack of purpose, as other neutrino experiments (some already realized and some still under development or planning) have started to leave it behind in terms of potential for discovery. The proponents of the INO project, including the Union government, filed their own arguments supporting INO with the Supreme Court and, as of 2022, the dispute between the supporters and the opponents of INO continues in the Supreme Court, and no construction at the site has taken place.
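As a rough cross-check of the detector figures quoted earlier (three modules, 150 RPC layers per module, a 48 m × 16 m footprint, and RPCs of roughly 2 m × 2 m), the count of "around 30,000" glass RPCs can be reproduced with back-of-the-envelope arithmetic; the 8 × 8 per-layer packing below is an assumption for illustration, not an INO specification:

```python
# Back-of-the-envelope check of the "around 30,000 glass RPCs" figure,
# using only dimensions quoted in the article. The assumption that each
# 16 m x 16 m module layer holds an 8 x 8 grid of ~2 m x 2 m RPCs is
# ours, not INO's.
modules = 3
rpc_layers_per_module = 150
module_width_m = 48 / modules    # 16 m per module along the 48 m length
rpc_size_m = 2.0                 # RPCs are roughly 1.95 m x 1.92 m
rpcs_per_layer = int(module_width_m / rpc_size_m) ** 2   # 8 x 8 = 64
total_rpcs = modules * rpc_layers_per_module * rpcs_per_layer
print(total_rpcs)                # 28800, consistent with ~30,000
```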
**Double hull** Double hull: A double hull is a ship hull design and construction method where the bottom and sides of the ship have two complete layers of watertight hull surface: one outer layer forming the normal hull of the ship, and a second inner hull which is some distance inboard, typically by a few feet, which forms a redundant barrier to seawater in case the outer hull is damaged and leaks. Double hull: The space between the two hulls is sometimes used for storage of ballast water. Double hull: Double hulls are a more extensive safety measure than double bottoms, which have two hull layers only in the bottom of the ship but not the sides. In low-energy collisions, double hulls can prevent flooding beyond the penetrated compartment. In high-energy collisions, however, the distance to the inner hull is not sufficient and the inner compartment is penetrated as well. Double hull: Double hulls or double bottoms have been required in all passenger ships for decades as part of the Safety Of Life At Sea or SOLAS Convention. Uses: Double hulls are significantly safer than double bottoms, which in turn are safer than single bottoms. In case of grounding or other underwater damage, most of the time the damage is limited to flooding the bottom compartment, and the main occupied areas of the ship remain intact. In low-energy collisions to the sides of the vessel, double hulls also prevent flooding beyond the penetrated compartment; in high-energy collisions, however, the distance to the inner hull is not sufficient and the inner compartment is penetrated as well. Uses: A double bottom or hull also conveniently forms a stiff and strong girder or beam structure, with the two hull plating layers acting as the upper and lower plates of a composite beam. This greatly strengthens the hull in secondary hull bending and strength, and to some degree in primary hull bending and strength. Double hulls can also: be used as inboard tanks to carry oil, ballast water or fresh water (ventilated by a gooseneck); help prevent pollution in case of liquid cargo (like oil in tankers); help to maintain the stability of the ship; and act as a platform for machinery and cargo. Oil tankers: Double hulls' ability to prevent or reduce oil spills led to double hulls being standardized for other types of ships, including oil tankers, by the International Convention for the Prevention of Pollution from Ships, or MARPOL Convention. A double hull does not protect against major, high-energy collisions or groundings, which cause the majority of oil pollution, despite this being the reason that the double hull was mandated by United States legislation. After the Exxon Valdez oil spill disaster, when that ship grounded on Bligh Reef outside the port of Valdez, Alaska, the US government required all new oil tankers built for use between US ports to be equipped with a full double hull. Submarines: In submarine hulls, the double hull structure is significantly different, consisting of an outer light hull and an inner pressure hull, with the outer hull intended more to provide a hydrodynamic shape for the submarine than the cylindrical inner pressure hull. In addition to tailoring the flow of water around the submarine (also known as hydrodynamic bypass), this outer skin serves as a mounting point for anechoic tiles, which are designed specifically to absorb sound rather than reflect it, helping to hide the vessel from sonar detection.
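The composite-beam effect mentioned under Uses can be illustrated with the parallel-axis theorem: two plating layers separated by the inter-hull distance have a combined second moment of area that grows with the square of that separation, which is why the double layer so greatly stiffens the hull girder. The sketch below is a didactic simplification with invented dimensions, not a naval-architecture calculation:

```python
# Why two hull plating layers form a stiff composite beam: by the
# parallel-axis theorem, each plate contributes its own (tiny) second
# moment of area plus (area x offset^2), so stiffness grows with the
# square of the inter-hull separation. All dimensions are invented.
def second_moment_two_plates(width_m, thickness_m, separation_m):
    """Second moment of area (m^4) of two thin horizontal plates
    about the centroid midway between them."""
    area = width_m * thickness_m               # cross-section of one plate
    own = width_m * thickness_m ** 3 / 12.0    # each plate's own term
    offset_sq = (separation_m / 2.0) ** 2      # parallel-axis term
    return 2.0 * (own + area * offset_sq)

together = second_moment_two_plates(10.0, 0.02, 0.0)  # plates back to back
apart = second_moment_two_plates(10.0, 0.02, 2.0)     # hulls 2 m apart
print(f"{apart / together:.0f}x stiffer in bending")  # roughly 30000x
```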
**Rasmus Pagh** Rasmus Pagh: Rasmus Pagh is a Danish computer scientist and a professor of computer science at the University of Copenhagen. His main work is in algorithms and data structures, and he is particularly known for the cuckoo hashing algorithm and for co-founding the Basic Algorithms Research Center, BARC, in Copenhagen. Early life and education: Rasmus Pagh was born in Copenhagen, but soon after his family moved to Esbjerg in western Denmark. He went to high school at Rødkilde Amtsgymnasium, where he participated in the "JP Forsker" science competition and in the "Georg Mohr" mathematics competition. After graduating in 1994, he went to study mathematics and computer science at Aarhus University. In 1998 he started his PhD with Peter Bro Miltersen and began writing articles about hashing and efficient dictionaries, culminating in his work on cuckoo hashing. He defended his thesis in the fall of 2002 and soon after became an assistant professor at the recently founded IT University of Copenhagen. Career: In 2007, Pagh founded the Scalable Query Evaluation for Reliable Databases (SQERD) project. The project aimed to apply modern algorithmic techniques to problems arising in database management systems in connection with the evaluation of queries. From 2011 to 2015, he ran the MaDaMS project, which partnered with Demetra A/S, Aarhus University and Apptus AB to find more efficient approaches to data mining. Pagh was made full professor at ITU with his inaugural lecture in 2013. Career: In 2014, he received an ERC Consolidator Grant for a project on scalable similarity search. The project resulted in many new algorithms, including a way to prevent false negatives in high-dimensional search. In 2017 Pagh co-founded the Basic Algorithms Research Center, BARC, in Copenhagen with Mikkel Thorup, Thore Husfeldt and Stephen Alstrup. Soon thereafter he took a sabbatical to join the Simons Institute at the University of California, Berkeley, and become a Google visiting scholar. In 2019, Pagh became an Associate Editor of the SIAM Journal on Computing. In 2020, he received the European Symposium on Algorithms Test-of-Time Award for his 2001 work on cuckoo hashing with Flemming Friche Rodler.
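For readers unfamiliar with the scheme, here is a minimal sketch of the cuckoo hashing idea: each key has one candidate slot in each of two tables, and an insertion may evict the current occupant, which is then reinserted into its other slot. This is a didactic simplification, not Pagh and Rodler's original formulation:

```python
import random

# Didactic sketch of cuckoo hashing: two tables, two hash functions.
# An insert may evict ("kick out") the occupant of a full slot, which
# is then pushed into its alternative slot, possibly cascading.
class CuckooHash:
    def __init__(self, size=11):
        self.size = size
        self.tables = [[None] * size, [None] * size]

    def _slot(self, key, which):
        # Two cheap, independent-ish hash functions; simplistic on purpose.
        return hash((which, key)) % self.size

    def insert(self, key, max_kicks=32):
        for _ in range(max_kicks):
            for which in (0, 1):
                i = self._slot(key, which)
                if self.tables[which][i] is None:
                    self.tables[which][i] = key
                    return True
            # Both candidate slots are full: evict one occupant and
            # continue by reinserting the evicted key.
            which = random.choice((0, 1))
            i = self._slot(key, which)
            key, self.tables[which][i] = self.tables[which][i], key
        return False  # too many kicks; a real table would rehash or grow

    def lookup(self, key):
        # At most two probes, ever: the worst-case constant lookup
        # time the scheme is known for.
        return any(self.tables[w][self._slot(key, w)] == key for w in (0, 1))
```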
**Shoulder pad (sport)** Shoulder pad (sport): Shoulder pads are a piece of protective equipment used in many contact sports such as gridiron football, lacrosse, and ice hockey, and some non-contact sports such as ringette. Most modern shoulder pads consist of a shock-absorbing foam material with a hard plastic outer covering. The pieces are usually secured by rivets or strings that the user can tie to adjust the size. History: The first football shoulder pads were created by Princeton University student L.P. Smock in 1877. These were made of leather and wool and were thin, light, and did not provide much protection; they were sewn into the players' jerseys rather than being worn as a separate piece of equipment. In the early 1900s, many young football players were killed while playing the sport, due to its ferocity combined with the lack of sufficient protection. In 1905 President Theodore Roosevelt took the initiative to clean up the sport: he decried football as dangerous and demanded that certain actions be taken so that the sport could remain legal. Pop Warner was allegedly the first coach to have his players wear shoulder pads; when he was coaching at the Carlisle Indian Industrial School, he was the first to use pads made of fiber rather than cotton. The traditional, separate, over-the-head shoulder pads first appeared around 1910 to help make football a safer sport, and the same style of shoulder pads is still used today. However, many players still did not use shoulder pads until the 1950s. In the 1960s, newer technology allowed companies to produce shoulder pads made of foam with an outer shell of hard plastic, offering improved protection when being hit. The drawback of these pads was that they did not have much ventilation, and the limited ventilation caused players to dehydrate more quickly. This issue was fixed in the 1990s when synthetic fibers were added, making the shoulder pads more breathable.
**Console television** Console television: A console television is a type of CRT television most popular in, but not exclusive to, the United States and Canada. Console CRT televisions are distinguished from standard CRT televisions by their factory-built, non-removable wooden cabinets and speakers, which form an integral part of the television's design. Best suited to screen sizes under 30 inches, they eventually became obsolete due to the increasing popularity of ever larger televisions from the late 1980s onward; however, they were manufactured and used well into the early 2000s. Description: Console televisions were originally housed in approximately rectangular, radiogram-style cabinets and included radio and record player facilities. However, from approximately the mid-1970s onwards, as radiograms decreased and hi-fi equipment increased in popularity, console televisions became more cuboid in shape and most commonly contained television and radio receiving features, less commonly with the addition of an eight-track player. Manufacturers: Companies that made these types of television included Zenith, RCA, Panasonic, Sony, Magnavox, Mitsubishi, Sylvania, and Quasar.
**Place and route** Place and route: Place and route is a stage in the design of printed circuit boards, integrated circuits, and field-programmable gate arrays. As implied by the name, it is composed of two steps, placement and routing. The first step, placement, involves deciding where to place all electronic components, circuitry, and logic elements in a generally limited amount of space. This is followed by routing, which decides the exact design of all the wires needed to connect the placed components. This step must implement all the desired connections while following the rules and limitations of the manufacturing process. Place and route: Place and route is used in several contexts: printed circuit boards, during which components are graphically placed on the board and the wires drawn between them; integrated circuits, during which a layout of a larger block of the circuit or the whole circuit is created from layouts of smaller sub-blocks; and FPGAs, during which logic elements are placed and interconnected on the grid of the FPGA. These processes are similar at a high level, but the actual details are very different. With the large sizes of modern designs, this operation is usually performed by electronic design automation (EDA) tools. Place and route: In all these contexts, the final result when placing and routing is finished is the "layout", a geometric description of the location and rotation of each part, and the exact path of each wire connecting them. Occasionally some people call the entire place-and-route process "layout". Printed circuit board: The design of a printed circuit board comes after the creation of a schematic and the generation of a netlist. The generated netlist is then read into a layout tool and associated with the footprints of the devices from a library. Placing and routing the devices can now start. Placing and routing is generally done in two steps: placing the components comes first, then routing the connections between the components. The placement of components is not absolute during the routing phase, as it may still be changed by moving and rotating, especially with designs using more complex components such as FPGAs or microprocessors; their large number of signals and their signal-integrity needs may require optimization of the placement. The resulting design is then output in RS-274X Gerber format to be loaded into the CAM system of the manufacturer. In contrast to an IC layout, where the entire finished layout is stored in one graphics file, different files and formats are needed for PCB manufacture: the fabrication data consists of a set of Gerber files, a drill file, and a pick-and-place file containing the location and alignment of the devices, generated for automated placement of the devices in the assembly process. Field-programmable gate array: The process of placing and routing for an FPGA is generally not performed by a person but by a tool provided by the FPGA vendor or another software manufacturer. Software tools are needed because of the complexity of the circuitry within the FPGA and the function the designer wishes to perform. FPGA designs are described using logic diagrams containing digital logic and hardware description languages such as VHDL and Verilog. These will then be put through an automated place-and-route procedure to generate a pinout, which will be used to interface with the parts outside of the FPGA.
Integrated circuits: The IC place-and-route stage typically starts with one or more schematics, HDL files, or pre-routed IP cores, or some combination of all three. It produces an IC layout that is automatically converted to a mask work in the standard GDS II or OASIS format. History: The final layout of early ICs and PCBs was stored as a tape-out of Rubylith on transparent film. History: Gradually, electronic design automation automated more and more of the place-and-route work. At first, it merely sped up the process of making many small edits without spending a lot of time peeling up and sticking down the tape. Later, design rule checking sped up the process of checking for the most common sorts of errors, and later still, autorouters sped up the process of routing. History: Some people hope that further improvements in autoplacers and autorouters will eventually produce good layouts without any manual human intervention. Further automation leads to the idea of a silicon compiler.
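To make the routing step concrete, here is a minimal sketch of a grid ("maze") router in the style of Lee's algorithm, a classic approach in this field: breadth-first search finds a shortest obstacle-avoiding path for a single two-pin connection. The grid and pin locations are invented for illustration; real EDA routers handle many nets, multiple layers, and design rules:

```python
from collections import deque

# Toy maze router in the style of Lee's algorithm: BFS over a grid
# finds a shortest path for one connection while avoiding blocked cells.
def route(grid, src, dst):
    """grid: 2D list, 0 = free, 1 = blocked; src/dst: (row, col) pins."""
    rows, cols = len(grid), len(grid[0])
    parent = {src: None}
    frontier = deque([src])
    while frontier:
        cell = frontier.popleft()
        if cell == dst:                 # reached the target: retrace the wire
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in parent:
                parent[nxt] = cell
                frontier.append(nxt)
    return None                         # the connection is unroutable

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],   # a wall of blocked cells the wire must skirt
        [0, 0, 0, 0]]
print(route(grid, (0, 0), (2, 0)))
```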
**Sex cords** Sex cords: In embryogenesis, the sex cords (primitive sex cords, primitive seminiferous cords, or gonadal cords) are structures that develop from the genital ridges and further differentiate based on the embryo's sex. After sexual differentiation, at day 49, the sex cords in females become the cortical cords, also called secondary cords; after further development, they become the ovarian follicles. The sex cords in males become the testis cords by the action of the testis-determining factor protein, which helps to develop and nourish the Sertoli cells. The testis cords are precursors to the rete testis and play several different roles in the development of the male genitals. The primitive sex cords originate from the proliferation of the epithelium of the two genital ridges: these epithelial cells penetrate and invade the underlying mesenchyme to form the primitive sex cords. This occurs shortly before and during the arrival of the primordial germ cells (PGCs) at the paired genital ridges.
**Value object** Value object: In computer science, a value object is a small object that represents a simple entity whose equality is not based on identity: i.e. two value objects are equal when they have the same value, not necessarily being the same object. Examples of value objects are objects representing an amount of money or a date range. Value object: Being small, one can have multiple copies of the same value object that represent the same entity: it is often simpler to create a new object than to rely on a single instance and use references to it. Value objects should be immutable: this is required for the implicit contract that two value objects created equal should remain equal. It is also useful for value objects to be immutable, as client code cannot put the value object in an invalid state or introduce buggy behaviour after instantiation. Value objects are among the building blocks of domain-driven design (DDD). Implementation: Due to the nuances of various object-oriented programming languages, each has its own methods and patterns for implementing and using value objects. Implementation: C# In C#, a class is a reference type while a struct (a concept derived from the struct in the C language) is a value type. Hence an instance derived from a class definition is an object, while an instance derived from a struct definition is said to be a value object (to be precise, a struct can be made immutable to represent a value object by declaring its attributes as readonly). Implementation: The following procedure can be carried out to add value object properties to a C# class: Override the Object.Equals method to ensure the object is compared using business logic. Overload the == and != operators to use the Equals method. Override the Object.GetHashCode method and ensure that the hash is the same for objects that compare equal. Make the class immutable by removing any property setters and only passing member values through the constructors. C++ In C++, a value object can be built by overloading the assignment operator and using appropriate constness constraints on the fields (which will be evaluated once by the initializer list of the constructor) and on the methods of the class. However, if the fields themselves are declared const (rather than using non-const fields while only exposing "getter" accessors), then it won't be possible to fully overwrite such a value object with another (object1 = object2). Python Python has data classes, which provide equality testing and can be made immutable using the frozen parameter (a minimal example is sketched after this entry). Implementation: Java Unlike C# and C++, Java has no support for custom value types at the language level. Every custom type is a reference type, and therefore has identity and reference semantics, though extending support for custom value types is being considered. Java programmers therefore emulate value objects by creating immutable objects, because if the state of an object does not change, passing references is semantically equivalent to copying value objects. Implementation: A class can be made immutable by declaring all attributes blank final, and declaring all attributes to be of immutable type (such as String, Integer, or any other type declared in accordance with these rules), not of mutable type such as an ArrayList or even a Date. They should also define equals and hashCode to compare values rather than references.
The term "VALJO" (VALue Java Object) has been coined to refer to the stricter set of rules necessary for a correctly defined immutable value object. Value objects have also been available since Java 14, as data records.
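The Python approach mentioned above can be shown in a few lines: a dataclass declared with frozen=True gets value-based equality and rejects mutation after construction. The Money type is an invented example:

```python
from dataclasses import dataclass

# A value object as a frozen dataclass: equality is by value, and any
# attempt to mutate an instance raises FrozenInstanceError.
@dataclass(frozen=True)
class Money:
    amount: int    # minor units (e.g. cents) to avoid float rounding
    currency: str

a = Money(500, "EUR")
b = Money(500, "EUR")
print(a == b)      # True: equal by value, not by identity
print(a is b)      # False: two distinct objects
# a.amount = 600   # would raise dataclasses.FrozenInstanceError
```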
**Truck driver** Truck driver: A truck driver (commonly referred to as a trucker, teamster or driver in the United States and Canada; a truckie in Australia and New Zealand; an HGV driver in the United Kingdom, Ireland and the European Union; or a lorry driver, or driver, in the United Kingdom, Ireland, India, Nepal, Pakistan, Malaysia and Singapore) is a person who earns a living as the driver of a truck, which is commonly defined as a large goods vehicle (LGV) or heavy goods vehicle (HGV) (usually a semi truck, box truck, or dump truck). Duties and functions: Truck drivers provide an essential service to industrialized societies by transporting finished goods and raw materials over land, typically to and from manufacturing plants, retail, and distribution centers. Truck drivers are responsible for inspecting their vehicles for mechanical items or issues relating to safe operation. Others, such as driver/sales workers, are also responsible for sales, completing additional services such as cleaning, preparation, and entertaining (e.g. cooking, making hot drinks), and customer service. Truck drivers work closely with warehouse associates and warehouse workers who assist in loading and unloading shipments. Types: There are three major types of truck driver employment: Owner-operators (also known as O/Os, or "doublestuffs") are individuals who own the trucks they drive and can either lease their trucks by contract with a trucking company to haul freight for that company using their own trucks, or haul loads for multiple companies as self-employed independent contractors. Others also lease and make payments on trucks with the aim of purchasing them within two to five years. Types: Company drivers are employees of a particular trucking company who drive trucks provided by their employer. Independent owner-operators are those with the authority to haul goods who often drive their own trucks, possibly owning a small fleet of anywhere from one to ten trucks, though often just two or three. Job categories: Owner-operators, owner-drivers, and company drivers can be in these categories: Auto haulers transport cars on specially built trailers and require specific skills to load and operate specialized trailers. Job categories: Boat haulers move boats ranging in size from 10-foot-long (3.0 m) bass boats to full-size yachts up to 60 ft long (18 m), using specialized low-boy trailers that can be set up for each size of boat. Boats wider than 8 feet 6 inches (2.59 m) or taller than 13 feet 6 inches (4.11 m) require permits to move and are considered oversize loads. Job categories: Dry van drivers haul the majority of goods over highways in large trailers; contents may be perishable or nonperishable goods. Dry bulk pneumatic drivers haul bulk sand, salt, and cement, among other things. They have specialized trailers, commonly known among truckers as Flow Boys, which enable them to use pressurized air to unload their products. Flatbed drivers haul an assortment of large bulky items, such as tanks, steel pipes, or lumber, and must be able to balance the load correctly. LTL ("less than truckload") drivers generally handle localized delivery jobs where goods are delivered by the driver at multiple locations, sometimes involving the pulling of double- or triple-trailer combinations. Reefer drivers haul refrigerated, temperature-sensitive, or frozen goods. Local drivers work only within the limits of their local areas.
These areas may include crossing state lines, but drivers usually return home daily. Household goods drivers, or bedbuggers, haul personal effects for families moving from one home to another. Regional drivers may work over several states near their homes and may be away from home for short periods. Interstate drivers (otherwise known as "over-the-road" or "long-haul" drivers) often cover distances of thousands of miles and are away from home for days, weeks, or even months on end. For time-critical loads, companies may opt to employ team drivers to cover more miles than a single driver. Oversize load drivers transport oversize loads that exceed standard regulations; special permits are required to transport oversize shipments. Team drivers are pairs of drivers who take turns driving the same truck in shifts (sometimes spouses), or several people in different states who split up the haul (line haul) to avoid being away from home for long periods. Job categories: Tanker drivers (tank truck drivers; in truck driver slang, tanker yankers or "tankies") haul liquids, such as gasoline (petrol), diesel fuel, milk, and crude oil, and dry bulk materials, such as plastics, sugar, flour, and cement, in tanks. Liquid tanker drivers need special driving skills because the movement of the liquid shifts the load balance. This is especially true of food-grade tankers, which contain no baffles and are a single compartment (due to sanitation requirements). Fuel oil/petroleum drivers require special certifications. Job categories: Vocational drivers drive vocational trucks such as tow trucks, dump trucks, garbage trucks, or cement mixers. Drayage drivers move cargo containers (aka "piggy backs"), which are lifted on or off the chassis at special intermodal stations. Bullrack drivers haul livestock locally, regionally, or nationally; the term bullrack refers to double-deck trailers used strictly for hauling cattle. Hours regulations: Australia In Australia, drivers of trucks and truck/trailer combinations with gross vehicle mass greater than 12 tonnes (11.8 long tons; 13.2 short tons) must rest for 15 minutes every 5.5 hours, 30 minutes every 8 hours, and 60 minutes every 11 hours (this includes driving and non-driving duties). In any 7-day period, drivers must spend 24 hours away from their vehicles. Truck drivers must complete a logbook documenting hours and kilometres spent driving. Hours regulations: Canada In Canada, driver hours of service (HOS) regulations are enforced for drivers who operate a "truck, tractor, trailer, or any combination of them that has a gross vehicle weight in excess of 4,500 kg (9,921 lb) or a bus that is designed and constructed to have a designated seating capacity of more than 24 persons, including the driver." However, there are two sets of hours-of-service rules: one for above the 60th parallel north and one for below. Below latitude 60 degrees, drivers are limited to 14 hours on duty in any 24-hour period. These 14 hours include a maximum of 13 hours driving time. Rest periods are 8 consecutive hours in a 24-hour period, as well as an additional 2-hour period of rest that must be taken in blocks of no fewer than 30 minutes. Hours regulations: The concept of "cycles" refers to the total amount of time drivers can be on duty in a given period before they must take time off. Cycle 1 is 70 hours in a 7-day period and cycle 2 is 120 hours in a 14-day period. Drivers using cycle 1 must take 36 hours off at the end of the cycle before being allowed to restart it.
Drivers using cycle 2 must take 72 hours off duty before being allowed to start again. Hours regulations: Receipts for fuel, tolls, etc., must be retained, as MTO officers can request them to further verify the accuracy of information contained in drivers' logbooks during inspections. Hours regulations: European Union In the European Union, drivers' working hours are regulated by Regulation (EC) No 561/2006, which entered into force on 11 April 2007. The nonstop driving time may not exceed 4.5 hours. After 4.5 hours of driving, drivers must take a break of at least 45 minutes, which can be split into 2 breaks, the first being at least 15 minutes and the second at least 30 minutes (a simplified check of this rule is sketched at the end of the EU section below). Hours regulations: The daily driving time shall not exceed 9 hours and may be extended to at most 10 hours no more than twice each week. The weekly driving time may not exceed 56 hours. In addition to this, a driver cannot exceed 90 hours of driving in a fortnight. Within each 24-hour period after the end of the previous daily or weekly rest period, drivers must take a new daily rest period. An 11-hour (or more) daily rest is called a regular daily rest period. Alternatively, drivers can split a regular daily rest period into two periods. The first period must be at least 3 hours of uninterrupted rest and can be taken at any time during the day. The second must be at least 9 hours of uninterrupted rest, for a total minimum rest of 12 hours. Drivers may reduce daily rest periods to no fewer than 9 continuous hours, but this can be done no more than three times between any two weekly rest periods; no compensation for the reduction is required. Daily rests between 9 and 11 hours long are referred to as reduced daily rest periods. Daily rests may be taken in a vehicle as long as it has suitable sleeping facilities and is stationary. Hours regulations: ‘Multi-manning’ Multi-manning refers to (at least) two drivers driving the same vehicle during each period between two consecutive daily rests or between a daily rest and a weekly rest period. A second driver is optional for the first hour of multi-manning but mandatory for the remainder of the period. This allows vehicles to depart from their operating centre and collect a second driver along the way, provided this is done within an hour of the first driver starting work. Hours regulations: Vehicles manned by two or more drivers are governed by the same rules as single-manned vehicles, apart from the daily rest requirements. When vehicles are manned by two or more drivers, each driver must have a daily rest period of at least 9 consecutive hours within the 30-hour period starting at the end of the last daily or weekly rest period. Hours regulations: Organizing drivers’ duties in such a fashion enables a crew's duties to be spread over 21 hours. The maximum driving time for a two-man crew taking advantage of this concession is 20 hours before a daily rest is required (although only if both drivers are entitled to drive 10 hours). Under multi-manning, the ‘second’ driver in a crew may not necessarily be the same driver for the duration of the first driver's shift but could be any number of drivers, as long as the conditions are met. Whether second drivers could claim the multi-manning concession in these circumstances depends on their other duties. On multi-manning operations, the first 45 minutes of a period of availability is considered a break, so long as the co-driver does no work.
Hours regulations: Journeys involving ferry or train transport When drivers accompany vehicles transported by ferry or train, daily rest requirements are more flexible. Hours regulations: A regular daily rest period may be interrupted no more than twice, and the interruptions must not exceed 1 hour in total. This allows for a vehicle to be driven on to a ferry and off again at the end of the crossing. When the rest period is interrupted in this way, the total accumulated rest period must still be 11 hours. A bunk or couchette must be available during the rest period. Hours regulations: Weekly rest A regular weekly rest period is a period of at least 45 consecutive hours. An actual working week starts at the end of a weekly rest period and finishes when another weekly rest period is commenced, which may mean that weekly rest is taken in the middle of a fixed (Monday–Sunday) week. This is perfectly acceptable, as the working week is not required to be aligned with the ‘fixed’ week defined in the rules, provided all relevant limits are complied with. Alternatively, drivers can take a reduced weekly rest period of (a minimum of) 24 consecutive hours. If a reduction is taken, it must be compensated for by an equivalent period of rest taken in one block before the end of the third week following the week in question. The compensating rest must be attached to a period of rest of at least 9 hours, effectively either a weekly or daily rest period. Hours regulations: For example, if a driver reduces a weekly rest period to 33 hours in week 1, they must compensate by attaching a 12-hour period of rest to another rest period of at least 9 hours before the end of week 4. This compensation cannot be taken in several smaller periods. A weekly rest period that falls in two weeks may be counted in either week but not in both. Hours regulations: However, a rest period of at least 69 hours in total may be counted as two back-to-back weekly rests (e.g. a 45-hour weekly rest followed by 24 hours), provided the driver does not exceed 144 hours’ work either before or after the rest period in question. Where reduced weekly rest periods are taken away from base, these may be taken in a vehicle, provided it has suitable sleeping facilities and is stationary. Hours regulations: Unforeseen events Provided that road safety is not jeopardized, and to enable a driver to reach a suitable stopping place, a departure from the EU rules may be permitted to the extent necessary to ensure the safety of persons, the vehicle, or its load. Drivers must note all reasons for doing so on the back of their tachograph record sheets (if using an analogue tachograph) or on a printout or temporary sheet (if using a digital tachograph), at the latest on reaching the suitable stopping place (see the relevant sections covering manual entries). Repeated and regular occurrences, however, might indicate to enforcement officers that employers were not in fact scheduling work to enable compliance with the applicable rules.
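The 4.5-hour driving / 45-minute break rule described above lends itself to a simple check. The sketch below validates a chronological sequence of driving and break periods against just that one rule, including the 15 + 30 minute split; it is a deliberate simplification of Regulation (EC) No 561/2006 (daily and weekly limits are ignored), and the sample day is invented:

```python
# Simplified checker for one EU rule from Regulation (EC) No 561/2006:
# after at most 4.5 hours of accumulated driving there must be a break
# of 45 minutes, or a 15-minute break followed by a 30-minute break.
# Daily/weekly driving limits and rest periods are ignored here.
def breaks_ok(periods):
    """periods: list of ("drive" | "break", minutes), in order."""
    driving = 0           # minutes driven since the last qualifying break
    partial = False       # a first 15-minute part-break has been taken
    for kind, minutes in periods:
        if kind == "drive":
            driving += minutes
            if driving > 270:            # more than 4.5 h without a break
                return False
        elif minutes >= 45 or (partial and minutes >= 30):
            driving, partial = 0, False  # qualifying break: counter resets
        elif minutes >= 15:
            partial = True               # first half of a split break
    return True

day = [("drive", 240), ("break", 15), ("drive", 30), ("break", 30),
       ("drive", 240)]                   # invented sample schedule
print(breaks_ok(day))  # True: the 15 + 30 split break resets the counter
```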
Hours regulations: New Zealand Heavy vehicle work time requirements in New Zealand are: a break of at least 30 minutes after every 5.5 hours of work time; a maximum cumulative work time of 13 hours (plus two 30-minute breaks) in one cumulative work day before a 10-hour break is required, giving a total of 24 hours; and, after 70 hours of accumulated work, a break of at least 24 hours. "If you are subject to the work time limits (and are required to complete a logbook), you must record all your work and rest times in a logbook approved by the Transport Agency (you can only maintain 1 logbook at a time)." Emergency services drivers can exceed work hours when attending priority calls. Hours regulations: United States In the United States, the hours of service (HOS) of commercial drivers are regulated by the Federal Motor Carrier Safety Administration (FMCSA). Commercial motor vehicle (CMV) drivers are limited to 11 cumulative hours of driving in a 14-hour period, following a rest period of no fewer than 10 consecutive hours. Drivers employed by carriers in "daily operation" may not drive after having been on duty for 70 hours in any period of 8 consecutive days. Drivers must maintain a daily 24-hour logbook record of duty status documenting all work and rest periods. The record of duty status must be kept current to the last change of duty status, and records of the previous seven days must be retained by the driver in the truck and presented to law enforcement officials on demand. Hours regulations: Electronic on-board recorders (EOBR) can automatically record, among other things, the time the vehicle is in motion or stopped. An FMCSA rule mandating the use of EOBRs, now known as electronic logging devices (ELDs), took effect on December 18, 2017. The mandate applies to all carriers not under FMCSA exemptions. A shortage of truck drivers has been reported in the United States, and retention rates are low. Compensation: Truck drivers are paid according to many different methods. These include salary, hourly, and a number of methods which can be broadly defined as piece work. Piece work methods may include both a base rate and additional pay. Base rates compensate drivers either by the mile or by the load. A company driver who makes a number of "less than truckload" (LTL) deliveries via box truck or conventional tractor-trailer may be paid an hourly wage, a certain amount per mile, per stop (aka "drop" or "dock bump"), or per piece delivered, unloaded, or tailgated (i.e., moved to the rear of the trailer). The main advantage of being paid by the mile may be that a driver is rewarded according to measurable accomplishment. The main disadvantage is that what a driver may accomplish is not so directly related to the effort and, perhaps especially, the time required for completion. Household goods drivers deal with the most complexity and thus are typically the highest paid, potentially earning several times what a scheduled freight-hauler makes. Compensation: Pay by the mile Mileage calculations vary from carrier to carrier. Hub miles, or odometer miles ("hub" refers to the hubometer, a mechanical odometer mounted to an axle), pay the driver for every mile. Claimed mileage is generally limited to no more than 3–5% above the carrier's own estimate before red flags appear, depending on the carrier's compensation policy or how it rates the mileage-estimation capabilities of the software it uses.
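As a simple illustration of the estimate-versus-odometer comparison just described, here is a sketch of a per-mile pay calculation with the carrier's red-flag check; the names and structure are invented, and the 5% threshold used is the upper end of the 3–5% band quoted above.

```python
# Sketch of per-mile pay with the carrier "red flag" check described above.
RED_FLAG_TOLERANCE = 0.05  # upper end of the 3-5% band quoted in the text

def mileage_pay(carrier_estimate_mi, odometer_mi, rate_per_mile):
    """Return (pay, flagged): pay based on the carrier's mileage estimate,
    and whether the odometer reading exceeds the estimate by enough to
    raise a red flag."""
    pay = carrier_estimate_mi * rate_per_mile
    flagged = odometer_mi > carrier_estimate_mi * (1 + RED_FLAG_TOLERANCE)
    return pay, flagged

# A run estimated at 750 miles, actually driven at 800 miles, at $0.40/mile:
pay, flagged = mileage_pay(750, 800, 0.40)
print(f"pay=${pay:.2f}, red-flagged={flagged}")  # pay=$300.00, red-flagged=True
```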
"Out of route" miles of any incentive are provided by the driver to the carrier for free. Compensation: Many of the largest long haul trucking companies in the United States pay their drivers according to short miles. Short miles are the absolute shortest distance between two or more zip codes, literally a straight line drawn across the map. These short miles rarely reflect the actual miles required to pick up and deliver freight, but they will be used to calculate driver earnings. Compensation: Short miles are on average about 10% less than actual miles, but in some cases the difference can be as large as 50%. An extreme (but not unheard of) example would be a load that picked up in Brownsville, Texas, and delivered in Miami, Florida, a journey requiring a driver to travel over 1,600 miles. The short routing, however, would measure the distance as only 750 miles, as if the truck could drive across the Gulf of Mexico. Another extreme example would be a load that picked up in Buffalo, New York, and delivered in Green Bay, Wisconsin, not giving any consideration that three of America's Great Lakes lie between that load's origin and destination. Compensation: Other obvious obstacles would be mountains and canyons. Truck-prohibited routes sometimes create this same phenomenon, requiring drivers to drive several truck-legal routes and approach a destination from behind (essentially driving a fish hook-shaped route), because the most direct route cannot accommodate heavy truck traffic. Compensation: Some trucking companies have tried to alleviate these discrepancies by paying their drivers according to "practical miles." This occurs when dispatchers provide a route to follow and pay the driver accordingly based on the route. This is done to compensate drivers for the actual work done. These routes largely follow the Interstate Highway system but sometimes require drivers to use state and U.S. highways and toll roads. Trucking companies practice this method to attract and retain veteran drivers. Compensation: Household goods (HHG) miles, from the Household Goods Mileage Guide (aka "short miles") was the first attempt at standardizing motor carrier freight rates for movers of household goods, some say at the behest of the Department of Defense for moving soldiers around the country, long a major source of steady and reliable revenue. Rand McNally, in conjunction with the precursor of the National Moving & Storage Association developed the first Guide published in 1936, at which point it contained only about 300 point-to-point mileages.Today, the 19th version of the Guide has grown to contain distances between more than 140,000 cities, zip codes, or highway junctions. Compensation: Percentage of load Percentage-based pay is a common pay structure for owner-operators signed on to haul freight for specific companies. In this type of pay structure, owner-operators are paid a percentage of the gross load revenue. This percentage varies depending on the services provided by the company. For example, an owner-operator who receives 95% of the load revenue may only be provided with dispatch services while an owner-operator who receives 65% of the load revenue may have a company-provided trailer, insurance, or other benefits. In most cases, the owner-operator also receives 100% of the fuel surcharges.While not common, company drivers can also be paid by percentage of the load. 
This is typically a percentage of revenue, the same as for owner-operators, with some company drivers instead paid a percentage of the load profit. Compensation: Paid by the hour Companies such as Dupré Logistics, which traditionally paid by the mile, have switched to hourly wages. Regional and local drivers are usually paid by the hour. In 2011 the U.S. Bureau of Labor Statistics (BLS) reported the average heavy and over-the-road truck driver hourly wage to be $21.74 per hour. The BLS reported in 2012 that the median hourly wage was $18.37 per hour. In May 2013, the BLS reported a mean hourly pay ranging from $12.21 (bottom 10%) to $28.66 per hour (top 10%). In March 2014, Payscale.com reported that entry-level truck driver pay ranged from $11.82 to $20.22 an hour, with the average hourly rate at $15.53. Certain specialized driving jobs, such as oilfield services (vacuum, dry bulk, and winch truck drivers), can pay an hourly wage of $22.00 or more. A December 2020 survey found the average truck driver in the United States works 70–80 hours per week and earns between $0.28 and $0.40 per mile. Special licences: Australia In Australia, heavy vehicle licenses are issued by the states but follow a national standard. There are 5 classes of license required by drivers of heavy vehicles: A Light Rigid (LR class) license covers a rigid vehicle with a gross vehicle mass (GVM) of not more than 8 tonnes, with a towed trailer not weighing more than 9 tonnes GTM (gross trailer mass); it also covers buses with a GVM of up to 8 tonnes which carry more than 12 adults, including the driver. Special licences: A Medium Rigid (MR class) license covers a rigid vehicle with 2 axles and a GVM of more than 8 tonnes, with a towed trailer not weighing more than 9 tonnes GTM. A Heavy Rigid (HR class) license covers a rigid vehicle with 3 or more axles, with a towed trailer not weighing more than 9 tonnes GTM; it also covers articulated buses. A Heavy Combination (HC class) license covers semi-trailers, or rigid vehicles towing a trailer with a GTM of more than 9 tonnes. Special licences: A Multi-Combination (MC class) license covers multi-combination vehicles such as road trains and B-double vehicles. A person must have held a C class (car) license for one year before they can apply for an LR or MR class license, and for two years before they can apply for an HR. To upgrade to an HC class license, a person must have held an MR or HR class license for one year. To upgrade to an MC class license, a person must have held an HR or HC class license for one year. Special licences: Canada Driver's licenses in Canada, including commercial vehicle licenses, are issued and regulated provincially. Regarding CDLs (commercial driver's licenses), there is no standardization between provinces and territories. European Union In the EU, one or more of the categories of Large Goods Vehicle (LGV) licenses is required. Medium-sized vehicles: C1 – lorries between 3,500 kg and 7,500 kg with a trailer up to 750 kg. Medium-sized vehicles with trailers: C1+E – lorries between 3,500 kg and 7,500 kg with a trailer over 750 kg, with a total weight of not more than 12,000 kg (drivers who passed their category B test prior to January 1, 1997, are restricted to a total weight not exceeding 8,250 kg). Large vehicles: C – vehicles over 3,500 kg with a trailer up to 750 kg. Large vehicles with trailers: C+E – vehicles over 3,500 kg with a trailer over 750 kg.
In Australia, for example, an HC license covers buses as well as goods vehicles; in the UK and most of the EU, however, a separate license is needed for passenger-carrying vehicles. Minibuses: D1 – vehicles with 9 to 16 passenger seats and a trailer up to 750 kg. Minibuses with trailers: D1+E – combinations where the towing vehicle is in subcategory D1 and its trailer has a MAM of over 750 kg, provided that the MAM of the combination does not exceed 12,000 kg and the MAM of the trailer does not exceed the unladen mass of the towing vehicle. Buses: D – any bus with more than 8 passenger seats and a trailer up to 750 kg. Buses with trailers: D+E – any bus with more than 8 passenger seats and a trailer over 750 kg. United States The United States employs a truck classification system, and truck drivers are required to have a commercial driver's license (CDL) to operate a CMV with a gross vehicle weight rating exceeding 26,000 pounds. Acquiring a CDL requires a skills test (pre-trip inspection and driving test) and a knowledge test (written) covering the unique handling qualities of driving a large, heavily loaded commercial vehicle and the mechanical systems required to operate such a vehicle (air brakes, suspension, cargo securement, et al.); drivers must also be declared fit by medical examination at least every two years. For passenger bus drivers, current passenger endorsements are also required. Special licences: A person must be at least 18 years of age to obtain a CDL. Drivers under 21 are limited to operating within their state of licensing (intrastate operation). Many major trucking companies require driver applicants to be at least 23 years of age, with a year of experience, while others hire and train new drivers as long as they have a clean driving history. Special licences: The U.S. Department of Transportation (US DOT) stipulates the various classes of CDLs and the associated licensing and operational requirements and limitations. Class A – any combination of vehicles with a GVWR (gross vehicle weight rating) of 26,001 or more pounds, provided the GVWR of the vehicle(s) being towed exceeds 10,000 pounds. Special licences: Class B – any single vehicle with a GVWR of 26,001 or more pounds, or any such vehicle towing a vehicle not exceeding 10,000 pounds GVWR. Class C – any single vehicle, or combination of vehicles, that does not meet the definition of Class A or Class B but is either designed to transport 16 or more passengers including the driver or is placarded for hazardous materials. A CDL can also contain separate endorsements required to operate certain trailers or to haul certain cargo. These endorsements are noted on the CDL and often appear in advertisements outlining the requirements for employment. Special licences: T – double/triple trailers (knowledge test only). P – passenger (knowledge test; a skills test may be required for some operations; required for bus drivers). N – tank vehicle (knowledge test only). H – hazardous materials (knowledge test only; also requires a fingerprint and background check since the September 11 attacks). X – combination of tank vehicle and hazardous materials. Other endorsements are possible, e.g., the M endorsement to transport metal coils weighing more than 5,000 pounds (2,300 kg), but these are tested and issued by individual states and are not consistent throughout all states (as of this writing, the M endorsement is unique to New York). The laws of the state where a driver's CDL is issued are considered the applicable laws governing that driver.
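The class definitions above amount to a small decision procedure. The following is a minimal, illustrative sketch of that logic; the function and parameter names are invented, and it deliberately simplifies (it ignores endorsements and state-specific variations).

```python
# Illustrative sketch of the US DOT CDL class definitions quoted above.
CDL_GVWR_LB = 26_001     # GVWR at or above which Class A/B rules apply
TOWED_GVWR_LB = 10_000   # towed-unit GVWR above which Class A applies

def cdl_class(vehicle_gvwr, towed_gvwr=0, passenger_seats=0, hazmat_placard=False):
    """Return the CDL class ('A', 'B', 'C') required, or None."""
    if vehicle_gvwr >= CDL_GVWR_LB and towed_gvwr > TOWED_GVWR_LB:
        return "A"  # combination with a heavy towed unit
    if vehicle_gvwr >= CDL_GVWR_LB:
        return "B"  # heavy single vehicle; towed unit, if any, is light
    if passenger_seats >= 16 or hazmat_placard:
        return "C"  # lighter vehicle needing a CDL for passengers or hazmat
    return None     # no CDL required under the definitions above

# A typical five-axle tractor-semitrailer combination:
print(cdl_class(vehicle_gvwr=80_000, towed_gvwr=34_000))  # -> 'A'
```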
Special licences: If a driver either fails the air brake component of the general knowledge test or performs the skills test in a vehicle not equipped with air brakes, the driver is issued an air brake restriction, barring them from operating a CMV equipped with air brakes. Notably, the five-axle tractor-semitrailer combination most commonly associated with the word "truck" requires a Class A CDL to drive. Beyond that, the driver's employer (or shipping customers, in the case of an independent owner-operator) generally specifies what endorsements their operations require a driver to possess. Truck regulations on size, weight, and route designations: U.S. Truck drivers are responsible for checking the axle and gross weights of their vehicles, usually by being weighed at a truck stop scale. Truck weights are monitored for limit compliance by state authorities at weigh stations and by DOT officers with portable scales. Commercial motor vehicles are subject to various state and federal laws regarding limitations on truck length (measured from bumper to bumper), width, and axle spacing (measured from axle to axle, or from fifth wheel to axle for trailers). Truck regulations on size, weight, and route designations: The relationship between axle weight and spacing, known as the Federal Bridge Gross Weight Formula, is designed to protect bridges. A standard 18-wheeler consists of three axle groups: a single front (steering) axle, the tandem (dual) drive axles, and the tandem trailer axles. Federal weight limits for National Network (NN) traffic are 20,000 pounds for a single axle, 34,000 pounds for a tandem axle, and 80,000 pounds total gross weight. The Federal Highway Administration (FHWA) division of the US Department of Transportation (US DOT) regulates the length, width, and weight limits of CMVs used in interstate commerce. Truck regulations on size, weight, and route designations: Interstate commercial truck traffic is generally limited to a network of interstate freeways and state highways known as the National Network (NN). The National Network consists of (1) the Interstate Highway System and (2) highways, formerly classified as Primary System routes, capable of safely handling larger commercial motor vehicles, as certified by states to the FHWA. State weight and length limits (which may be lesser or greater than federal limits) affect only operations off the NN. There is no federal height limit, and states may set their own limits, which range from 13 feet 6 inches to 14 feet. As a result, the height of most tractor-trailers ranges between 13' and 15'. States considered to be in the eastern half of the United States use 13'6" as the maximum height. The boundary states are Minnesota, Iowa, Missouri, Oklahoma (the only boundary state west of the north/south line), Arkansas, and Louisiana. States west of these have maximum heights of 14', with the exception of Colorado and Nebraska, which have a maximum height of 14'6". Alaska has a maximum height of 15'. Uniquely, the State of Michigan has a gross vehicle weight limit of 164,000 pounds (74,000 kg), which is twice the U.S. federal limit. While it is contended that this is why Michigan has the worst roads in the country (along with a lack of funding; Michigan ranks lowest among the 50 states), a measure to change the law was recently defeated in the Michigan Senate.
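The bridge formula mentioned above can be written out explicitly; it is commonly given as W = 500(LN/(N−1) + 12N + 36), where W is the maximum weight in pounds (rounded to the nearest 500) on a group of axles, L is the spacing in feet between the outer axles of the group, and N is the number of axles. The sketch below is an illustrative simplification that applies the formula and the caps quoted above while omitting the statutory exceptions (such as the fixed 34,000-pound tandem allowance).

```python
# Illustrative sketch of the Federal Bridge Gross Weight Formula,
#   W = 500 * (L*N/(N-1) + 12*N + 36)
# for a group of two or more axles, where L is the spacing in feet between
# the outer axles of the group and N is the number of axles. Statutory
# exceptions (e.g., the standard tandem allowance) are omitted here.

SINGLE_AXLE_LIMIT_LB = 20_000
TANDEM_AXLE_LIMIT_LB = 34_000
GROSS_LIMIT_LB = 80_000

def bridge_formula_limit(outer_axle_spacing_ft, num_axles):
    """Maximum gross weight (lb) on an axle group, capped at 80,000 lb."""
    n = num_axles  # must be >= 2; the formula divides by (n - 1)
    w = 500 * (outer_axle_spacing_ft * n / (n - 1) + 12 * n + 36)
    return min(round(w / 500) * 500, GROSS_LIMIT_LB)  # nearest 500 lb

# A typical 5-axle combination with 51 ft between its outer axles reaches
# the 80,000 lb federal gross limit:
print(bridge_formula_limit(51, 5))  # -> 80000
```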
Truck driver problems (U.S.): Unpaid work time In the United States, drivers accrue a great deal of unpaid time, usually at a shipper or receiver where the truck sits idle awaiting loading or unloading. Prior to the 2010 HOS changes, it was common for 4–8 hours to elapse during this process. CSA addressed this and incorporated legal methods for drivers and trucking companies to charge for this excessive time. For the most part, loading/unloading times have fallen into a window of 2–4 hours, although longer times are still endured. Truck driver problems (U.S.): Turnover and driver shortage In 2006, the U.S. trucking industry as a whole employed 3.4 million drivers. A major problem for the long-haul trucking industry is that a large percentage of these drivers are aging and are expected to retire. Very few new hires are expected in the near future, resulting in a driver shortage. Currently, within the long-haul sector, there is an estimated shortage of 20,000 drivers, and that shortage is expected to increase to 63,000 by 2018. Trucking (especially the long-haul sector) is also facing an image crisis due to the long working hours, long periods of time away from home, the dangerous nature of the work, the relatively low pay (compared to hours worked), and a "driver last" mentality that is common throughout the industry. Truck driver problems (U.S.): To help combat the shortage, trucking companies have lobbied Congress to reduce driver age limits, which they say will reduce the recruiting shortfall. Under current law, drivers need to be 21 to haul freight across state lines, a limit the industry wants to lower to 18 years old. Employee turnover within the long-haul trucking industry is notorious for being extremely high. In the 4th quarter of 2005, turnover within the largest carriers in the industry reached a record 136%, meaning a carrier that employed 100 drivers would, on average, see 136 drivers leave each year. At the end of 2020, turnover for truck drivers in fleets with more than $30 million of annual revenue was 92%. There is a shortage of willing, trained long-distance truck drivers. Part of the reason for the shortage is the economic fallout from deregulation of the trucking industry. Michael H. Belzer is an internationally recognized expert on the trucking industry, especially the institutional and economic impact of deregulation. He is an associate professor in the economics department at Wayne State University and the author of Sweatshops on Wheels: Winners and Losers in Trucking Deregulation, which was critically well received. Low pay, bad working conditions, and unsafe conditions have been a direct result of deregulation. "[This book] argues that trucking embodies the dark side of the new economy." "Conditions are so poor and the pay system so unfair that long-haul companies compete with the fast-food industry for workers. Most long-haul carriers experience 100% annual driver turnover." As the Atlanta Journal-Constitution wrote: "The cabs of 18-wheelers have become the sweatshops of the new millennium, with some truckers toiling up to 95 hours per week for what amounts to barely more than the minimum wage. [This book] is eye-opening in its appraisal of what the trucking industry has become." Time off Due to the nature of the job, most drivers stay out longer than 4 weeks at a time, and a few stay out for months on end or even longer. For the average large-company driver in the United States, 6 weeks out is typical, with each week out garnering the driver one day off; this usually accrues to a set maximum of 6 or 7 days. This is the average for OTR (over-the-road) line-haul and regional drivers. Vocational and local drivers are usually home every night or every other night.
Most tractors are equipped with sleeper berths that range from 36" to as large as 86" in length. While there are larger sleepers of up to 144" in length, these are not seen in the mainline segment of trucking; they are usually found in the specialized and household-moving segments, where the load is either permitted for overweight or oversize or is very light yet bulky. Truck driver problems (U.S.): Safety From 1992 to 1995, truck drivers had a higher total number of fatalities than any other occupation, accounting for 12% of all work-related deaths. By 2009, truck drivers accounted for 16.8% of transportation-related deaths. In 2016 alone, 475,000 crashes involving large trucks were reported to the police: 0.8% were fatal and 22% resulted in injury. Among crash fatalities generally, 11.8% involved at least one large truck or bus. In 2016, property damage resulting from truck and bus crashes cost several billion dollars. Truck driver problems (U.S.): Truck drivers are five times more likely to die in a work-related accident than the average worker. Highway accidents accounted for a majority of truck driver deaths, most of them caused by confused drivers in passenger vehicles who are unfamiliar with large trucks. The unsafe actions of automobile drivers are a contributing factor in about 70 percent of the fatal crashes involving trucks. More public awareness of how to share the road safely with large trucks is needed. Truck driver problems (U.S.): Still, progress has been made. While there has been a 29% increase in fatal crashes since 2009, this number is still lower than it was in 2005. The safety of truck drivers and their trucks is monitored, and statistics compiled, by the Federal Motor Carrier Safety Administration (FMCSA), which provides online information on safety violations. If a truck is stopped by a law enforcement agent or at an inspection station, information on the truck is compiled and out-of-service (OOS) violations are logged. An out-of-service violation is defined by federal code as an imminent hazard under 49 U.S.C. § 521(b)(5)(B): "any condition likely to result in serious injury or death". National accident statistics published on the FMCSA Analysis and Information website give the key driver OOS categories for the year 2009 nationally: 17.6% were log entry violations, 12.6% were speeding violations, 12.5% were for a driver's record of duty status not being current, and 6.5% were for requiring a driver to drive after more than 14 hours on duty. This has led some insurance companies to want to monitor driver behavior and to require electronic logs and satellite monitoring. In 2009 there were 3,380 fatalities involving large trucks, of which 2,470 were attributed to combination-unit trucks (defined as any number of trailers behind a tractor). In a November 2005 FMCSA report to Congress, data for 33 months of large-truck crashes was analyzed: driver error was a factor in 87 percent of crashes. In cases where two vehicles, a car and a truck, were involved, 46 percent of the cases involved the truck's driver and 56 percent involved the car's driver. While the truck and the car in two-vehicle accidents share essentially half the burden of the accidents (not 70 percent as stated above), the top six driver factors are essentially the same and in approximately equivalent percentages: prescription drug use, over-the-counter drug use, unfamiliarity with the road, speeding, making illegal maneuvers, and inadequate surveillance. This suggests that the truck driver makes the same errors as the car driver and vice versa.
This is not true of vehicle-caused crashes (about 30 percent of crashes), where the top failure for trucks is the brakes (29 percent of the time, compared to 2 percent of the time for cars). Truck driver problems (U.S.): Truck drivers often spend their nights parked at a truck stop, rest area, or on the shoulder of a freeway ramp. Sometimes these are in secluded areas or dangerous neighborhoods, which accounts for a number of deaths due to drivers being targeted by thieves for their valuable cargo, money, and property, or for the truck and trailer themselves. Drivers of trucks towing flatbed trailers are responsible for securing and strapping down their cargo (which often involves climbing onto the cargo itself), and if the load requires tarping, they must climb on the load to spread out tarps. Tarps can weigh up to 200 lbs each, and a load can require up to 3 tarps, which accounts for a number of deaths and injuries from falls. Drivers spend long hours behind the wheel, which can strain the back muscles. Some drivers are responsible for unloading their cargo, which can lead to back strains and sprains due to overexertion and improper lifting techniques. If the cab of the truck is not appropriate for the driver's size, the driver can lose visibility and easy access to the controls and be at higher risk for accidents. Truck driver problems (U.S.): Parking A study published in 2002 by the Federal Highway Administration (FHWA) division of the U.S. Department of Transportation (US DOT) shows that "parking areas for trucks and buses along major roads and highways are more than adequate across the nation when both public (rest areas) and commercial parking facilities are factored in." A 2000 highway special investigation report by the National Transportation Safety Board (NTSB) contains the following statistics: an estimated 185,000 parking spaces at private truck stops; an estimated 167,453 trucks parked at private truck stops at night; 53 percent of private truck stops full on any given night nationwide; an estimated shortfall of 28,400 truck parking spaces; and 80 percent of public rest areas with full or overflowing parking at night. Finding truck parking is difficult perhaps not because there are insufficient parking spaces nationwide, but because the majority of those spaces are not located where they are most needed: near the most densely populated areas, where demand for trucked goods is greatest. Truck driver problems (U.S.): As urban areas continue to sprawl, land for the development of private truck stops nearby becomes prohibitively expensive, and there seems to be an understandable reluctance on the part of the citizenry to live near a facility where a large number of trucks may be idling their engines all night, every night, or to experience the associated increase in truck traffic on local streets. Truck driver problems (U.S.): Exacerbating the problem are parking restrictions or prohibitions in commercial areas where plenty of space exists, and the fact that shippers and receivers of freight tend to prefer to ship and receive truckloads in the early and late portions of the business day. The end result is an increase in truck traffic during the morning and evening rush hours, when traffic is most dense, commuters exhibit the least patience, and safety is compromised.
Adding to the challenge of finding parking: a driver can only become familiar with the locations of public and commercial parking spaces, and their capacity and traffic, by visiting them; and the parking shortage, real or perceived, nearest the densest urban areas incites drivers to arrive early, with many of those truck stops full by 7 pm, leaving even drivers who carefully plan their trips in detail few, if any, options. Truck driver problems (U.S.): Idling restrictions Idling restrictions further complicate the ability of drivers to obtain adequate rest, as this example from California may illustrate: commercial diesel-fueled vehicles with a GVWR greater than 10,000 pounds are subject to the following idling restrictions, effective 1 February 2005. A driver may not idle the vehicle's primary diesel engine for more than five minutes at any location. Truck driver problems (U.S.): A driver also may not operate a diesel-fueled auxiliary power system that powers a heater, air conditioner, or any additional equipment for sleeper-berth-equipped vehicles during sleeping or resting periods for more than five minutes at any location within 100 feet of a restricted area. Drivers are subject to both civil and criminal penalties for violations of this regulation. Truck driver problems (U.S.): DAC Reporting A truck driver's "DAC Report" refers to the employment history information submitted by former employers to HireRight and USIS Commercial Services Inc. (formerly called DAC Services, or "Drive-A-Check"). Among other things, a truck driver's DAC Report contains the driver's identification (name, DOB, SSN), the name and address of the contributing trucking company, the driver's dates of employment with that company, the driver's reason for leaving that company, whether the driver is eligible for rehire, and comments about the driver's work record (e.g. good, satisfactory, too many late deliveries, etc.). It will also indicate whether the company stored drug and alcohol testing information with USIS. A separate section of the DAC report contains incident/accident information as well as CSA 2010 Pre-Employment Screening Program (PSP) reports. Truck driver problems (U.S.): False reports The DAC report is as critical to the livelihood of a professional truck driver as the credit report is to a consumer. When a trucking company reports negative information about a truck driver, it can ruin the driver's career by preventing him or her from finding a truck driving job for several years or more. It is widely known that trucking companies often abuse this power by willfully and maliciously reporting false information on truckers' DAC reports, either in retaliation for seeking better-paying trucking jobs elsewhere or for any number of other fraudulent, anti-competitive reasons. As long as truck drivers can be threatened with a false DAC report for standing up to management or leaving their company for a better job elsewhere, working conditions at truck driving jobs will not improve. Truck driver problems (U.S.): COVID-19 pandemic Truck drivers in the United States have been on the front line delivering essential goods to Americans during the COVID-19 pandemic. Many trucking businesses refused to take assignments traveling to areas experiencing active outbreaks, such as New York City. Drivers also found great difficulty in obtaining fuel and sustenance, as many travel stops had closed.
Compliance, Safety and Accountability: In 2010 the FMCSA enacted the Compliance, Safety, and Accountability (CSA) program, formerly known as Comprehensive Safety Analysis 2010 or CSA 2010, a data-driven safety compliance and enforcement program. The program was implemented to improve commercial motor vehicle (CMV) safety and prevent crashes, injuries, and fatalities using the carrier Safety Measurement System (SMS) and its Behavior Analysis and Safety Improvement Categories (BASICs). The categories are: 1) Unsafe Driving, 2) Hours of Service (HOS) Compliance, 3) Driver Fitness, 4) Controlled Substances and Alcohol, 5) Vehicle Maintenance, 6) Hazardous Materials (HM) Compliance, and 7) Crash Indicator. The HM and crash indicators are not currently publicly available. There have been improvements, such as the combining of the original Inspection Selection System (ISS) and the Motor Carrier Safety Status Measurement System (SafeStat) to create ISS-2 in 2000, but many issues remained unsolved. A 2012 FMCSA rule change addressed some issues but still presented many problems, including the hours of service rules for those drivers falling under the required "record of duty status" (RODS). The system in use until 2019 used relative scoring based on comparing carriers to their peers. Concerns Truck drivers and trucking industry members have long had concerns over the scoring; over its bias, especially against smaller carriers according to a Government Accountability Office report, when non-preventable accidents are included in the scoring; over the public posting of the scores; and over the lack of mandatory state procedures ensuring that a citation that was not prosecuted, or that ended favorably for the driver or carrier, is retracted from the national database. Because the database is flawed in this respect, driver and carrier scores are artificially raised, and the insurance industry uses these scores to assess risk for insurance purposes. The FMCSA, for its part, has released a report concluding that the CSA scoring works. The hours of service rules have been changed several times since 2010 and remain a concern to carriers and drivers. With the new electronic logging device (ELD) rules that became mandatory on 18 December 2017 for carriers subject to the RODS rules, more issues have resulted. Drivers need to be aware that along with the ELD rule comes a mandate to carry a paper logbook and to verify that the ELD manual and instruction sheet are in the truck. A driver must be able to email or fax the data if directed to by a DOT officer. If an ELD malfunctions, a driver must create a paper log to comply with the seven- or eight-day requirements, as well as to record the vehicle inspection. Congress has mandated that the system be overhauled, and proposed FMCSA rules were scrapped as a result. Among the newly proposed rules, an Item Response Theory (IRT) model intended to replace the current relative-ranking system began testing in September 2018, with changes due in 2019. Truck driver problems (U.K.): Driver shortage In 2014 the Road Haulage Association and the Freight Transport Association (FTA) called for the government to help address the shortage of qualified truck drivers in the UK. According to the FTA, there was a shortage of 59,000 truck drivers, and the average age of a truck driver was noted to be 57. During February 2016, an independent survey on the driver shortage was carried out by a UK freight exchange. The purpose of the survey was to gather drivers' opinions about the HGV driver shortage.
The aim was to establish whether the results of the drivers' survey could help the industry and government understand the issues that drivers were facing. The findings showed that, in the opinion of the drivers, the three main contributing factors to the driver shortage were 1) poor wages, 2) poor driver facilities, and 3) the way drivers are treated. Over a third of all drivers who participated in the survey felt that they were not being treated well by the companies they drove for. The 2021 United Kingdom fuel supply crisis, and the shortages of stocked food supplies in supermarkets and restaurants, were attributed to the chronic shortage of HGV truck drivers and its associated factors of excessive hours, poor working conditions and unsustainably low wages. In response to the HGV driver shortage crisis, which accelerated due to lower migration of immigrant truck drivers resulting from Brexit and the COVID-19 pandemic, the U.K. government initiated a temporary visa program to allow 5,000 foreign HGV truck drivers to work within the United Kingdom until Christmas. Specifically for the fuel shortages, the U.K. government also readied 150 Army drivers to undergo five days of specialised training and be on standby, in preparation for driving fuel tankers and delivering fuel to filling stations. Huw Merriman, a Conservative MP and chairman of the Transport Select Committee, said that while readying the army was a "good example" of ministers trying to use as many levers at their disposal as possible and would be used as a "last resort", he lamented that the long-standing driver shortages should be fixed by industry, instead of relying on constant government intervention to resolve market failure. Truck driver problems (U.K.): These problems have been there for years because the average age of the driver is 55 years old, they're retiring, and the industry has not made this job attractive. For too long, working conditions have been poor, and those that are willing to tolerate them have been from abroad. Although heavy goods vehicle (HGV) drivers are legally limited to driving only nine hours a day, drivers are routinely away from home for 12 to 15 hours a day, with unpredictable hours. A job advert from XPO stated: "You'll be working a minimum of 45 hours per week on an 'any five from seven-day' shift pattern, so your working days may change each week and could include weekend working. You will also be starting early AM and must be prepared to work through the night." Truck driver problems (U.K.): Despite the strenuous hours and the required self-funded driver qualifications (approximately £1,500), the incomes of truck drivers have been slipping down the wage ladder. In 2010, the median HGV driver in the UK earned 51 per cent more per hour than the median supermarket cashier; by 2020 the premium had been substantially reduced, to 27 per cent. Truck drivers also experienced a tighter pay squeeze from 2015 to 2021: median hourly pay for truck drivers rose 10 per cent, to £11.80, compared with 16 per cent for all UK employees. Truck driver problems (U.K.): Why would I want to be a truck driver, with all the responsibility, the long, unpredictable hours, if I can go to Aldi and earn £11.30 an hour stacking shelves? Kieran Smith, chief executive of Driver Require, a recruitment agency, noted that employers have pushed labour costs down to compete for powerful customers such as supermarkets.
Customers have enormous purchasing leverage [and] they have nailed down the haulage companies to the tiniest margins. Lots of drivers leave in their 30s because the hours make it almost impossible to participate in bringing up children, yet the wage isn't high enough to support the other partner staying at home. Satellite tracking: Many companies today utilize some type of satellite vehicle tracking or trailer tracking to assist in fleet management. In this context, "tracking" refers to location tracking, and "satellite" refers either to GPS or GLONASS satellites providing location information, or to communications satellites used for transmitting location data. A special location-tracking device, also known as a tracker or an AVL (automatic vehicle location) unit, is installed on a truck; it automatically determines the truck's position in real time and sends it to a remote computer database for visualization and analysis. Satellite tracking: An "in cab" AVL communication device often allows a driver to communicate with their dispatcher, who is normally responsible for determining the driver's pick-up and drop-off locations and informing the driver of them. If the AVL unit is connected to a mobile data terminal or a computer, it also allows the driver to input the information from a bill of lading (BOL) into a simple dot-matrix display screen (commonly called a "Qualcomm", after that company's ubiquitous OmniTRACS system). Satellite tracking: The driver inputs the information, using a keyboard, into an automated system of pre-formatted messages known as macros. There are macros for each stage of the loading and unloading process, such as "loaded and leaving shipper" and "arrived at the final destination". This system also allows the company to track the driver's fuel usage, speed, gear optimization, engine idle time, location, direction of travel, and amount of time spent driving. Satellite tracking: Werner Enterprises, a U.S. company based in Omaha, Nebraska, has utilized this system to implement a "paperless log" system: instead of keeping track of working hours in a traditional pen-and-paper logbook, the driver informs the company of his status using a macro.
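To make the macro workflow concrete, here is a purely illustrative sketch of how a driver terminal might assemble a pre-formatted status message with a position fix attached. The macro codes, message layout, and field names are invented for the example; real systems such as OmniTRACS use proprietary formats over satellite links.

```python
# Purely illustrative sketch of a driver terminal building a status "macro"
# message. Macro codes, field names, and layout are invented; real systems
# use proprietary formats transmitted over satellite or cellular links.
import json
import time

MACROS = {
    1: "loaded and leaving shipper",
    2: "arrived at the final destination",
}

def build_macro_message(driver_id, macro_code, lat, lon):
    """Assemble a status report combining a macro with an AVL position fix."""
    return json.dumps({
        "driver": driver_id,
        "macro": macro_code,
        "status": MACROS[macro_code],
        "position": {"lat": lat, "lon": lon},
        "timestamp": int(time.time()),  # when the report was generated
    })

# A driver reports "loaded and leaving shipper" from Omaha, Nebraska:
print(build_macro_message("driver-42", 1, 41.2565, -95.9345))
```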
Health issues: Working conditions Most truck drivers are employed as over-the-road drivers, hired to drive long distances from the place of pickup to the place of delivery. During the short periods they spend in heavily polluted urban areas, being inside the cab of the truck does much to limit the inhalation of toxic emissions, and for the majority of the trip, passing through vast rural areas with little air pollution, truck drivers generally face less exposure to toxic emissions than the inhabitants of large cities, where increased exposure to emissions from engines, factories, and other sources may raise the risk of cancer and can aggravate certain lung diseases, such as asthma, in the general public. However, the few drivers who are hired to drive only within urban areas do not share this advantage of over-the-road drivers. Other conditions affecting the health of truck drivers include vibration, noise, long periods of sitting, work stress and exhaustion. Drivers in developing countries face additional risks, because roads there are in appalling condition and accidents occur more frequently. Truck drivers are a high-risk group for HIV infection in those countries. Drivers who work in mines have extra health hazards due to their working conditions, as the roads they travel are particularly treacherous. Health issues: Truck driver fatigue Truck driver fatigue is defined by the US Department of Transportation's Federal Motor Carrier Safety Administration (FMCSA) as being caused by "physical and/or mental exertion, resulting in impaired performance". Factors that increase truck driver fatigue include lack of sleep (in quantity and quality), long work hours, a sedentary lifestyle, a poor diet, and general stress. Research has shown that while some truck drivers get a sufficient amount of sleep, many suffer from undiagnosed sleep disorders that impact the quality of their sleep. One study found that within a sample of surveyed truck drivers, 68.1% reported waking up during the night, 64.2% reported waking up feeling unrefreshed, and 51.6% reported waking up too early and not being able to go back to sleep. These sleep experiences have been linked to cognitive deficits, fatigue, and excessive daytime sleepiness. It is important to note that sleep deprivation and poor sleep quality, although of critical concern, are a subset of the larger issue of truck driver fatigue. Health issues: A contributing factor to truck driver fatigue is the stress associated with managing compliance with the FMCSA's hours of service (HOS) regulations. Truckers are allowed to drive a maximum of 11 hours during a continuous 14-hour period, and must be off duty for at least 10 hours. In addition, they are limited in the number of hours they can drive during any consecutive 7-day or 8-day period, depending on their employer's operations. There are also reset rules, break requirements, and sleeper-berth and short-haul exceptions. Truck drivers are required to keep an HOS-compliant log. Failure to produce a driver's log upon request by an enforcement official, or non-compliance with HOS regulations, results in a driving penalty or fine. Better electronic methods for maintaining and managing drivers' logs are needed to help reduce truck driver stress. Health issues: The FMCSA and the National Highway Traffic Safety Administration conducted an extensive study from April 2001 to December 2003 investigating the causes of large truck crashes. Researchers reported that truck driver fatigue was present in thirteen percent of the crashes resulting in fatalities or injuries. Another FMCSA study, published in 2011, reported that large truck crashes were increasingly associated with driving times greater than 7 hours, which is when fatigue begins to affect performance. The FMCSA also reported that in 2016 truck driver fatigue was a larger contributing factor than alcohol and drugs in fatal truck crashes. Health issues: Sleep disorders and deprivation Truck drivers are also prone to sleep disorders because of the long hours required at the wheel and, in many cases, the lack of adequate rest. Driver fatigue is a contributing factor in 12% of all crashes and 10% of all near crashes. Traffic fatalities are high, and many of them are due to driver fatigue. Drivers with obstructive sleep apnea have a sevenfold increased risk of being involved in a motor vehicle crash. It is estimated that 2.4–3.9 million licensed commercial drivers in the US have obstructive sleep apnea, out of an estimated 18 million Americans with the condition overall.
The Federal Motor Carrier Safety Administration says that as many as 28 percent of commercial driver's license holders have sleep apnea. Health issues: The National Safety Council attributed total costs of $15.9 billion and 1,400 lives to sleep apnea-related crashes in 2000. Treating the condition would cost an estimated $3.18 billion; with CPAP treatment being 70% effective, the estimated savings are $11.1 billion in collision costs and 980 lives annually (i.e., roughly 70% of the $15.9 billion and of the 1,400 lives). Research sponsored by the Federal Motor Carrier Safety Administration and the American Trucking Associations found that almost one-third (28%) of commercial truck drivers have some degree of sleep apnea: 17.6% have mild, 5.8% have moderate, and 4.7% have severe sleep apnea. A CDC report (No. 2014–150) states that most drowsy-driving crashes or near misses occur between 0400 and 0600, 0000 and 0200, and 1400 and 1600 hours, when drivers are at the highest risk of a sleep-related accident; thirty-seven percent of fatal crashes happened between 6 pm and 6 am. Obstructive sleep apnea has been associated with obesity. FMCSA rules state (391.41(b)) that a person is physically qualified to drive a commercial motor vehicle if that person "has no established medical history or clinical diagnosis of a respiratory dysfunction likely to interfere with his/her ability to control and drive a commercial motor vehicle safely". The FMCSA question-and-answer site is confusing; Question 1 states that a motor carrier is responsible for ensuring drivers are medically qualified to operate CMVs in interstate commerce. The FMCSA published proposed guidance for sleep apnea testing in April 2012, and carriers began requiring drivers to be tested for the disorder based on neck circumference and body mass index (BMI); the criterion was a neck circumference above 17 inches for men and above 15 inches for women, with drivers above those thresholds having to be tested. Health care professionals had to be registered with the FMCSA after 21 May 2012 to give certifications, and carriers started requiring the checks. The agency later backed away from required testing. Health issues: Australia health requirements A new law was passed in Australia requiring that all "over the road" drivers carry their medical information with them when they "are on the clock". Carrying this information helps drivers comply with the law and can also help deliver quick, accurate medical assistance if and when needed. Health issues: Obesity According to a 2007 study in the Journal of the American Dietetic Association, 86% of the estimated 3.2 million truck drivers in the United States are overweight or obese. A survey conducted in 2010 showed that 69% of American truck drivers met the criteria for obesity, twice the percentage of the adult working population in the US. Some key risk factors for obesity in truckers are poor eating habits, lack of access to healthy food, lack of exercise, a sedentary lifestyle, long work hours, and lack of access to care. Eighty percent of truckers have unhealthful eating patterns as a result of poor food choices, and the food available at truck stops is partially to blame. The options at truck stops are generally high-calorie and high-fat foods available through restaurants, fast food, diners and vending machines. Fresh produce and whole-grain items are few and far between. Though 85% of mini-mart items are categorized as extremely unhealthy, such items make up 80% of truck drivers' main meals of the day. Also, most of the foods carried by drivers in their trucks, whether or not stored in a refrigerator, are purchased from truck stops.
Research suggests that drivers value quality and taste much more than nutrition when selecting food. Another issue is the pattern of extensive and irregular snacking while on the road, combined with the consumption of one large meal at the end of the day. That daily meal is often high in calories and may be the highlight of the trucker's day. Food intake varies during working hours compared to days off, and truckers eat meals at the wrong circadian phase of the day. Health issues: Lack of exercise is another contributing factor to the obesity epidemic in the truck driver population. Almost 90% of truck drivers exercise only sometimes or never, and only 8% exercise regularly. This is largely determined by long work hours and tight deadlines, the adoption of a sedentary lifestyle, and the lack of a place to exercise. Though some fitness resources are available to truckers, they are scarce. Available areas include truck stops, highway rest areas, trucking terminals, warehouses, and the truck cab; however, parking restrictions and safety concerns complicate attempts to incorporate exercise into the daily routine. Studies have found that the risk of obesity increases in high-demand, low-control jobs, and more so in jobs with long work hours; the truck driving industry falls under these categories. Also, daytime sleepiness and night disturbances are associated with obesity and are therefore common among truck drivers. Long-haul drivers have tight schedules, so they tend to drive longer and get less sleep. The U.S. Department of Transportation (DOT) Federal Motor Carrier Safety Administration (FMCSA) does have hours of service (HOS) regulations. Under the old rule, drivers could work up to 82 hours in 7 days. These regulations were modified in 2011, and the new rule permits drivers to work only up to 70 hours in 7 days. There is now an 11-hour-per-day driving limit, with 10 hours off required after each shift. Fines for companies which allow work beyond 11 hours run up to $11,000, and for drivers up to $2,750. Though these fines exist, there is minimal enforcement of the law. Obesity prevalence is also affected by truckers' access to care. Company drivers often have issues with insurance, such as needing pre-approval if out of network. Most owner-operator drivers do not have any kind of medical insurance (in the US, where medical treatment is not free of charge as it is in many other countries). Moreover, truckers have difficulty making appointments while on the road and often do not know where to stop for assistance. Many self-diagnose or ignore their health issues altogether. Some are able to be seen at doctors' offices or private clinics, while a large percentage depend on emergency rooms and urgent care visits. The Department of Transportation has Convenient Care Clinics across the U.S., but those are hard to find and few and far between. Health care costs are substantially higher for overweight and obese individuals, so obesity in the truck driver population puts a greater financial demand on the industry. Health issues: Other health problems A 2014 study of 1,600 truck drivers found that truckers in the US smoke at twice the rate of other working adults; 51% of truckers reported that they smoked in a 2010 survey. In the same survey, 61% of truckers reported having two or more risk factors, which were defined as high blood pressure, obesity, smoking, high cholesterol, no physical activity, or sleep deprivation (6 or fewer hours of sleep per 24 hours).
In fact, the prevalence of high blood pressure among truck drivers is 26.4% higher than the global prevalence of hypertension. In another study, from 2015, more than 91,000 truck drivers were surveyed and similar types of morbidity were found. Truck drivers also suffer from musculoskeletal disorders, cardiovascular disease, and stress at higher rates. Implementation of drug detection: In the 1980s the administration of President Ronald Reagan proposed to put an end to drug abuse in the trucking industry by means of the then recently developed technique of urinalysis, with the signing of Executive Order 12564, requiring regular random drug testing of all truck drivers nationwide, as well as of employees of other DOT-regulated industries specified in the order, though consideration had to be given to the effects of an excessively rapid implementation of the measure. Implementation of drug detection: Making sudden, great changes in the infrastructure of a huge economy and the industries crucial to it always entails risk: the greater the change, the larger the risk. Because of the U.S. economy's strong dependence on the movement of merchandise to and from large metropolitan population centers separated by great distances, a shortage of truck drivers could have far-reaching effects on the economy. After the 1929 stock-market crash, for example, a chain reaction of falling sales, as consumers prioritized and reduced purchases of luxury items, and of companies responding by cutting production and increasing unemployment, exacerbated the cycle of reduced production, sales, and employment, ultimately plunging the nation's economy into the Great Depression. Likewise, it had to be considered that a sudden halting or stunting of the movement of merchandise, as would occur with a large and sudden vacating of the cargo-transportation workforce, would have similar consequences. Even the 1974 nationwide speed-limit reduction to 55 mph, which merely slowed the movement of merchandise, was followed by the recession of the late 1970s. In the years and decades following Executive Order 12564, efforts to begin random drug testing and pre-employment drug screening of truck drivers were not expedited, leaving the change to occur gradually, out of concern for the dangers of excessively rapid change in economic infrastructure. Since then, a large number of tractor-trailer operators have left the industry in search of other employment, and a new generation of drivers has come in. Subsequent to the measure, it became extremely difficult for truck drivers to engage in drug abuse and remain undetected. On 12 October 2015, the National Transportation Safety Board (NTSB) asked the Federal Motor Carrier Safety Administration (FMCSA) to draft a proposed plan to address the use of synthetic drugs among truckers. The NTSB also called on pro-trucking bodies to educate their members about the dangers associated with truckers' use of synthetic drugs, and to come up with a way to prevent their use behind the wheel. Truck driver slang: Truck drivers once had a highly elaborate and colorful vocabulary of slang for use over their CB radios, but with the high turnover in the industry in recent decades, this has all but vanished. Most of the newer generation of drivers in the U.S.
today speak to one another over their CB radios (or other similar communication devices) in more or less standard English (as understood in the various regions of the country), although a few of the slang words and phrases have remained, and many of these have passed into the colloquial language of the general public. Truck driver slang: "Smokey" and "bear" are still used to refer to police officers, especially state patrolmen, and sometimes "diesel bear" for a DOT officer, though many new-school drivers merely say "police", "policeman" and "cop". "Hammer" refers to the accelerator pedal, and "hammer lane" to the left or passing lane on a freeway, in which traffic generally travels faster. "Handle", meaning a nickname, was once exclusively truck-driver slang but has now passed into common use by the public, especially for pseudonyms used on Internet forums. Truck driver slang: Most of the "ten codes" have fallen nearly or completely into disuse, except "10/4", meaning "message received", "affirmative", "okay", or "understood", and occasionally "10/20", referring to the driver's location (e.g., "What's your 20?"). Older truck drivers speaking over their CB radios are often frustrated at new-school truck drivers' lack of understanding of the trucking slang of the 1960s, '70s and '80s, and grudgingly resort to standard English when communicating with them. Today, however, the slang is mostly gone, and some companies, such as Swift Transportation, consider the CB a safety hazard and prohibit the installation of a CB radio in their tractors. Truck driver slang: Partial list of some truck-driver slang (Australia): All dark – weigh station closed. Bandag band-aid – retread tyre. Candy car – Highway Patrol police car, usually with high-visibility police decals. Car park – carrier of cars. Chook truck – carter of live chickens. Clean skin – non-recap tyre. Clear to Jolls – (M1 Motorway, Hawkesbury hill north of the river) no police cars in the area from the top of the hill to Jolls Bridge. Clear to the river – (M1 Motorway, Hawkesbury hill north of the river) no police cars in the area from Jolls Bridge to the Hawkesbury River. The Dipper – (M1 Motorway) the Ku-ring-gai Chase Road overpass hill on the F3 Freeway. Dollar – 100 kilometres per hour (60 mph). Evel Knievel – a police motorcycle. Flash for cash – speed camera (not to be confused with a manned radar gun). Hair dryer – hand-held radar gun. Hot plate or barbie – weigh station. Mail box – Australia Post truck. Double – rego- and speed-checking police car. Revenue straight – the straight (M1 Motorway) between the Dog Trap Rd overpass and the Peats Ridge turn-off. The scalies or coneheads – transport safety inspectors who man checking/weigh stations. Sesame Street – Hume Highway (Sydney to Melbourne). Tanker wanker – driver of a dry (cement, fly ash, sugar, flour, etc.) or liquid tanker. Turd herder – carrier of stock (animal freight). Tyregator – a tyre stripped off the rim and usually left lying on the road. Visual signaling: One form of unspoken communication between drivers is to flash the headlights on or off once or twice to indicate that a passing truck has cleared the passed vehicle and may safely change lanes in front of the signaling vehicle. The passing driver may then flash the trailer or marker lights to indicate thanks. This signal is also sometimes used by other motorists to signal truck drivers.
Visual signaling: Continual flashing of headlights or high beams after emerging from around a corner beside a high wall, or from any roadway out of sight of oncoming traffic, will alert a truck driver in the oncoming lanes to an accident or other obstruction ahead and warn him to reduce speed or proceed with caution. Visual signaling: Since truck-driver language has no signal for "Do not move in front of me", nor any understood convention for how long headlights or high beams should be turned on or off, flashing the high beams to say "Do not move in front of me" may be misinterpreted to mean that the truck is clear to proceed with the lane change in front of the vehicle giving the signal. Visual signaling: Europe As a rule, "thanks" is signaled to the vehicle behind by switching between the left- and right-turn signals several times, whereas turning on the hazard-warning lights (both turn signals) means "Slow down; danger ahead". Because cars normally use the hazard-warning lights for "thanks", this distinction is necessary for trucks: a truck blocks the view of drivers behind it, so a distinction must be made between "Thanks for letting me pass" and "Danger in front, I may brake hard!" Turning on the left-turn signal (in a right-hand traffic country) when a vehicle behind attempts to overtake means "Back off; lane not clear", and turning on the right-turn signal means "Go ahead; lane clear". Visual signaling: Truck drivers also use flashing headlights to warn drivers in the oncoming lane(s) of a police patrol down the road. Though not official, two consecutive flashes indicate a police patrol, whereas a rapid series of flashes indicates a DMV or other law-enforcement agency that checks only truck drivers. During the daytime, the latter is sometimes accompanied by the signaling driver making a circle with both hands (as if holding a tachograph ring). Visual signaling: Flashing headlights at the vehicle in front (intended for the other driver to see in their mirror) has two meanings. Long flashes are used to signal a truck driver that they are clear to return to the lane. A series of rapid flashes generally means "You're doing something stupid or dangerous", as in "Do not move in front, trailer not clear!" or "I'm overtaking, move aside". Visual signaling: Truckers also use their four-way flashers when climbing steep hills, on mountain roads, and on expressway ramps, to let others know that they are traveling at a slow speed and that others should be cautious when approaching them. Visual signaling: Greeting In Europe, the general rule for truckers in a right-hand-traffic country is to raise the left hand and simply open it with all fingers extended, palm facing forward and without waving, known as 'the flat hand'. A shorter version is to simply extend the fingers while keeping the palm in contact with the steering wheel. Raising the right hand is used in the same way, but is rare. In popular culture: Truck drivers have been the subject of many films, such as They Drive by Night (1940), but they became an especially popular topic in popular culture in the mid-1970s, following the release of White Line Fever and the hit song "Convoy" by C. W. McCall, both in 1975. The main character of "Convoy" was a truck driver known only by his CB handle (C.B. name), "Rubber Duck". Three years later, in 1978, a film was released with the same name. 
In 1977, another film, Smokey and the Bandit, was released; it revolves around the escapades of a truck driver and his friend as they transport a load of bootleg beer across state lines. Smokey and the Bandit spawned two sequels. The 1978 film F.I.S.T. was a fictionalized account of the unionization of the trucking industry in the earlier 20th century, while the future of truck driving was speculated on in the 1996 film Space Truckers, in which trucking has gone beyond planetary loads to interplanetary ones. One episode of Cowboy Bebop, "Heavy Metal Queen", also features spacefaring "truck" drivers. In popular culture: Truck drivers have also been villainously portrayed in such films as Duel, Joy Ride, The Transporter, Breakdown, The Hitcher, Thelma & Louise, Superman II, Supergirl, and Man of Steel. In popular culture: B. J. and the Bear is a television series depicting the exploits of a truck driver and his chimpanzee companion. Another is Movin' On, starring Claude Akins and Frank Converse. On 17 June 2007, the History Channel began to air Ice Road Truckers, a documentary-style reality television series following truck drivers as they drive across the ice roads of Canada's Northwest Territories, transporting equipment to the oil and natural gas mines in that area.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Colocation (business)** Colocation (business): Colocation (or co-location) is the act of placing multiple (sometimes related) entities within a single location. Examples: In an organization, it refers to placing related roles or groups in a single room, building or campus. In business, it refers to the practice of locating multiple similar businesses in the same facility. Examples: In trading, it often refers to placing multiple data centers in proximity to trading centers. In telecommunications, primarily wireless telecommunications facilities such as mobile wireless (cell sites) and radio broadcasting, it refers to the practice of locating multiple wireless broadcast facilities/providers within the same facility. Many jurisdictions now mandate the colocation of mobile wireless carriers within a single facility to avoid the proliferation of wireless communication towers. In the fast food restaurant industry, one primary use of this concept is Yum! Brands with its KFC, Taco Bell, and Pizza Hut menus appearing in the same restaurant. All of the WingStreet chain locations are co-located with Pizza Hut. Examples: In the retail sector, Sears Holdings often operates its large-format Sears stores with an income tax services office, an optical shop, and other independent operations. Walmart is also known for this, in addition to including fast-food restaurants such as McDonald's, Subway or Dunkin' Donuts within its stores. Target often includes Starbucks and Pizza Hut in its stores. Examples: In the airline industry, colocation commonly occurs at airports. Airline alliances will be assigned to, or build a fortress out of, certain terminals or dominant carrier-specific terminals; Star Alliance in particular makes colocation in a single terminal an alliance policy (termed "Move Under One Roof"). An example would be at Tokyo's Narita Airport, where local carrier All Nippon Airways, a Star Alliance member, and its partners operate in one terminal to facilitate partner connections and product offerings, even offering combined check-in, member lounges, and ground services. Data: Colocation is often used in the data sourcing industry to mean off-site data storage, usually in a data center. This is very important for businesses, since data loss can be serious for a company of any size, with consequences up to and including disciplinary action against employees or the loss of their jobs. An unexpected loss of data can result from fires, earthquakes, floods, or any sort of natural disaster. Data: Data colocation technology began to take hold in the telecommunications industry. Colocation enables multiple customers to access network, server, and data storage space, connecting them to a variety of service providers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Microsoft Developer Network** Microsoft Developer Network: Microsoft Developer Network (MSDN) was the division of Microsoft responsible for managing the firm's relationship with developers and testers, such as hardware developers interested in the operating system (OS), and software developers developing on the various OS platforms or using the API or scripting languages of Microsoft's applications. This relationship management was conducted through assorted media: web sites, newsletters, developer conferences, trade media, blogs and DVD distribution. Starting in January 2020, the website is fully integrated with Microsoft Docs. Websites: MSDN's primary web presence at msdn.microsoft.com is a collection of sites for the developer community that provide information, documentation, and discussion that is authored both by Microsoft and by the community at large. Recently, Microsoft has placed emphasis on incorporation of forums, blogs, library annotations and social bookmarking to make MSDN an open dialog with the developer community rather than a one-way service. The main website, and most of its constituent applications below, are available in 56 or more languages. Websites: Library MSDN Library is a library of official technical documentation intended for independent developers of software for Microsoft Windows. MSDN Library documents the APIs that ship with Microsoft products and also includes sample code, technical articles, and other programming information. The library was freely available on the web, with CDs and DVDs of the most recent materials initially issued quarterly as part of an MSDN subscription. However, since 2006, they can be freely downloaded from Microsoft Download Center in the form of ISO images. Visual Studio Express edition integrates only with MSDN Express Library, which is a subset of the full MSDN Library, although either edition of the MSDN Library can be freely downloaded and installed standalone. Websites: In Visual Studio 2010, the MSDN Library was replaced with the new Help System, which is installed as part of the Visual Studio 2010 installation. Help Library Manager is used to install Help Content books covering selected topics. In 2016, Microsoft introduced the new technical documentation platform, Microsoft Docs, intended as a replacement for the TechNet and MSDN libraries. Over the next two years, the content of MSDN Library was gradually migrated into Microsoft Docs. Most MSDN Library pages now redirect to the corresponding Microsoft Docs pages. Websites: Integration with Visual Studio Each edition of MSDN Library can only be accessed with one help viewer (Microsoft Document Explorer or other help viewer), which is integrated with the then-current version (or sometimes two versions) of Visual Studio. In addition, each new version of Visual Studio does not integrate with an earlier version of MSDN. A compatible MSDN Library is released with each new version of Visual Studio and included on the Visual Studio DVD. As newer versions of Visual Studio are released, newer editions of MSDN Library do not integrate with older Visual Studio versions and do not even include old/obsolete documentation for deprecated or discontinued products. MSDN Library versions can be installed side by side; that is, both the older and the newer version of MSDN Library can coexist. Websites: Forums MSDN Forums are the web-based forums used by the community to discuss a wide variety of software development topics. 
MSDN Forums were migrated to an all-new platform during 2008 that provided new features designed to improve efficiency such as inline preview of threads, AJAX filtering, and a slide-up post editor. Websites: Blogs MSDN blogs were a series of blogs hosted under Microsoft's domain blogs.msdn.com. Some blogs were dedicated to a product – e.g. Visual Studio, Internet Explorer, PowerShell – or a version of a product – e.g. Windows 7, Windows 8 – while others belonged to a Microsoft employee, e.g. Michael Howard or Raymond Chen. In May 2020, the MSDN and TechNet blogs were closed and the content was archived at Microsoft Docs. Websites: Social bookmarking Social bookmarking on MSDN Social was first launched in 2008, built on a new web platform that has user-tagging and feeds at its core. The goal of the social bookmarking application is to provide a method whereby members of the developer community can: Contribute to a database of quality links on any topic from across the web. By filtering on one or more tags (e.g. ".net" and "database"), users can discover popular or recent links and subscribe to a feed of those links. Websites: Find and follow experts' recommended sites. Each profile page includes a feed of the user's contributions. Users can be discovered through a drop-down menu on each bookmark. Demonstrate their expertise through the links displayed in their profile. Store their favorite links online. The initial release of the application provided standard features for the genre, including a bookmarklet and import capabilities. The MSDN web site also began to incorporate feeds of social bookmarks from experts and the community, displayed alongside feeds from relevant bloggers. The social bookmarking feature was discontinued on October 1, 2009. Gallery MSDN Gallery is a repository of community-authored code samples and projects. Launched in 2008, the site's purpose is still evolving to complement CodePlex, the open-source project hosting site from Microsoft. Software subscriptions: MSDN has historically offered a subscription package whereby developers have access and licenses to use nearly all Microsoft software that has ever been released to the public. Subscriptions are sold on an annual basis, and cost anywhere from US$1,000 to US$6,000 per year per subscription, as it is offered in several tiers. Software subscriptions: Although in most cases the software itself functions exactly like the full product, the MSDN end-user license agreement prohibits use of the software in a business production environment. This is a legal restriction, not a technical one. An exception is made for Microsoft Office, allowing personal use even for business purposes without a separate license—but only with the "MSDN Premium Subscription" and even so only "directly related to the design, development and test and/or documentation of software projects;" this does not terminate MSDN Magazine: Microsoft provides the editorial content for MSDN Magazine, a monthly publication. The magazine was created as a merger between Microsoft Systems Journal (MSJ) and Microsoft Internet Developer (MIND) magazines in March 2000. MSJ back issues are available online. MSDN Magazine was available as a print magazine in the United States, and online in 11 languages. The last issue of the magazine was released in November 2019. MSDN Magazine: Microsoft Systems Journal Microsoft Systems Journal, founded in 1986, was a bi-monthly Microsoft magazine. 
History: MSDN was launched in September 1992 as a quarterly, CD-ROM-based compilation of technical articles, sample code, and software development kits. The first two MSDN CD releases (September 1992 and January 1993) were marked as pre-release discs (P1 and P2, respectively). Disc 3, released in April 1993, was the first full release. In addition to CDs, there was a 16-page tabloid newspaper, Microsoft Developer Network News, edited by Andrew Himes, who had previously been the founding editor of MacTech, the premier Macintosh technology journal. A Level II subscription was added in 1993, which included the MAPI, ODBC, TAPI and VFW SDKs. History: MSDN2 was opened in November 2004 as a source for Visual Studio 2005 API information, with noteworthy differences including updated web site code that conformed better to web standards, giving long-awaited improved support for web browsers other than Internet Explorer in the API browser. In 2008, the original MSDN cluster was retired and MSDN2 became msdn.microsoft.com. History: Dr. GUI and the MSDN Writers Team In 1996, Bob Gunderson began writing a column in Microsoft Developer Network News, edited by Andrew Himes, using the pseudonym "Dr. GUI". The column provided answers to questions submitted by MSDN subscribers. The caricature of Dr. GUI was based on a photo of Gunderson. When he left the MSDN team, Dennis Crain took over the Dr. GUI role and added medical humor to the column. Upon his departure, Dr. GUI became the composite identity of the original group (most notably Paul Johns) of Developer Technology Engineers that provided in-depth technical articles to the Library. The early members included: Bob Gunderson, Dale Rogerson, Rüdiger R. Asche, Ken Lassesen, Nigel Thompson (a.k.a. Herman Rodent), Nancy Cluts, Paul Johns, Dennis Crain, and Ken Bergmann. Nigel Thompson was the development manager for Windows Multimedia Extensions, which originally added multimedia capabilities to Windows. Renan Jeffreis produced the original system (Panda) to publish MSDN on the Internet and in HTML instead of the earlier multimedia viewer engine. Dale Rogerson, Nigel Thompson and Nancy Cluts all published MS Press books while on the MSDN team. As of August 2010, only Dennis Crain and Dale Rogerson remain employed by Microsoft.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DS-1 (drug)** DS-1 (drug): DS-1 is a drug from the imidazopyridine family. It was the first drug developed that acts as a GABAA receptor positive allosteric modulator (PAM) selective for the α4β3δ subtype, a subtype not targeted by other GABAA receptor PAMs such as the benzodiazepines or the nonbenzodiazepine drugs. Novel selective drugs such as DS-1 should prove useful in the study of this receptor subtype.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glare (vision)** Glare (vision): Glare is difficulty of seeing in the presence of bright light such as direct or reflected sunlight or artificial light such as car headlamps at night. Because of this, some cars include mirrors with automatic anti-glare functions and, in buildings, blinds or louvers are often used to protect occupants. Glare is caused by a significant ratio of luminance between the task (that which is being looked at) and the glare source. Factors such as the angle between the task and the glare source and eye adaptation have significant impacts on the experience of glare. Discomfort and disability: Glare can be generally divided into two types, discomfort glare and disability glare. Discomfort glare is a psychological sensation caused by high brightness (or brightness contrast) within the field of view, which does not necessarily impair vision. In buildings, discomfort glare can originate from small artificial lights (e.g. ceiling fixtures) that have brightnesses significantly greater than their surroundings. When the luminous source occupies a much greater portion of the visual field (e.g. daylit windows), discomfort caused by glare can be linked to a saturating effect. Since observers will not always look directly at a bright illuminated source, discomfort glare usually arises when an observer is focusing on a visual task (e.g. a computer screen) and the bright source is within their peripheral visual field. Disability glare impairs the vision of objects without necessarily causing discomfort. This could arise, for instance, when driving westward at sunset. Disability glare is often caused by the inter-reflection of light within the eyeball, reducing the contrast between task and glare source to the point where the task cannot be distinguished. When glare is so intense that vision is completely impaired, it is sometimes called dazzle. Reducing factors: Glare can reduce visibility by:
reduction of brightness of the rest of the scene by constriction of the pupils;
reduction in contrast of the rest of the scene by scattering of the bright light within the eye;
reduction in contrast by scattering of light in particles in the air, as when the headlights of a car illuminate the fog close to the vehicle, impeding vision at larger distances;
reduction in contrast between print and paper by reflection of the light source in the printed matter (veiling glare);
reduction in contrast by reflection of bright areas on the surface of a transparent medium such as glass, plastic or water; for example, when the sky is reflected in a lake, the bottom below or objects in the water cannot be seen (veiling glare).
Reducing factors: Bloom surrounding objects in front of a glare source. Sunglasses are often worn to reduce glare; polarized sunglasses are designed to reduce glare caused by light reflected from non-metallic surfaces such as water, glossy printed matter or painted surfaces. An anti-reflective treatment on eyeglasses reduces the glare at night and glare from inside lights and computer screens that is caused by light bouncing off the lens. Some types of eyeglasses can reduce glare that occurs because of imperfections on the surface of the eye. Reducing factors: Light field measurements can be taken to reduce glare with digital post-processing. Measurement: Methods Discomfort glare has often been studied using psychophysics experiments, where the common methods have been the luminance adjustment and category rating procedures. 
Studies conducted by Petherbridge and Hopkinson, and by Luckiesh and Guth, were amongst the first to compare subjective assessments given by observers against physical measurements of a glare source. Measurement: Biases A comprehensive review of the methods used to measure glare showed that there are biases associated with its measurement. Luminance adjustments are sensitive to anchoring (cognitive bias) effects, caused when the initial starting luminance viewed influences the final assessment of visual discomfort. Glare is also subject to stimulus range bias effects. This occurs when the luminance range influences the final evaluation of glare given by the observer. A larger range often results in higher glare evaluations. Measurement: Prediction models Glare from artificial lights is typically measured with luminance meters. For daylit windows, cameras are used to convert the pixels into luminances. Both are able to determine the luminance of objects within small solid angles. The glare of a scene, i.e. a visual field of view, is then calculated from the luminance data of that scene. The International Commission on Illumination (CIE) defines glare as: "Visual conditions in which there is excessive contrast or an inappropriate distribution of light sources that disturbs the observer or limits the ability to distinguish details and objects". The CIE recommends the Unified glare rating (UGR) as a quantitative measure of glare. Other glare calculation methods include the CIBSE Glare Index, the IES Glare Index and the Daylight Glare Index (DGI). Measurement: Unified glare rating The unified glare rating (UGR) is a measure of the glare in a given environment, proposed by Sorensen in 1987 and adopted by the International Commission on Illumination (CIE). It is basically the logarithm of the combined glare of all visible lamps relative to the background luminance Lb:
UGR = 8 log10( (0.25 / Lb) Σn (Ln² ωn / pn²) ),
where log10 is the common logarithm (base 10), Ln is the luminance of each light source numbered n, ωn is the solid angle of the light source seen from the observer, and pn is the Guth position index, which depends on the distance from the line of sight of the viewer.
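Once the per-source quantities are known, the UGR is a direct arithmetic computation. The following short Python sketch implements the formula as given above; the function name and the numeric values in the example are illustrative assumptions, not data from any standard.

```python
import math

def unified_glare_rating(background_luminance, sources):
    """Compute the UGR for a scene.

    background_luminance: Lb, the background luminance (cd/m^2).
    sources: iterable of (L, omega, p) tuples, one per visible source:
        L     - luminance of the source (cd/m^2)
        omega - solid angle of the source as seen by the observer (sr)
        p     - Guth position index of the source
    """
    # Sum Ln^2 * omega_n / pn^2 over all visible sources, then scale and take log10.
    total = sum((L ** 2) * omega / (p ** 2) for L, omega, p in sources)
    return 8 * math.log10(0.25 / background_luminance * total)

# Example with made-up values: two small ceiling fixtures seen against
# a 40 cd/m^2 background.
print(unified_glare_rating(40.0, [(2.0e4, 0.0003, 1.2), (1.5e4, 0.0002, 2.0)]))
```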
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tensoba** Tensoba: Tensoba, or tempura soba, is a Japanese dish of soba noodles and tempura. Overview: There are two varieties of tensoba: one is served with a hot broth of dashi and soy sauce; the other is served with cooled soba and dipped in tsukejiru (lit. 'dipping sauce'), either chilled or hot and usually strongly flavored. The dipping variety is also called tenzaru-soba or ten-seiro, depending on the soba shop or stand. Like tendon, tensoba uses many kinds of vegetable or seafood tempura, or kakiage (lit. 'scratch tempura', using a mixture of vegetable or seafood bits). History: Tensoba originated during the mid-Edo period. It was first eaten as a hot broth soba with kakiage, using the adductor muscles of surf clams. At that time, shrimp tempura was more expensive than other ingredients, so shrimp-tempura soba was also called jo-tempura soba (lit. 'upper-class tempura soba') or ebiten-soba. Regional variety: There are some regional varieties of tensoba toppings. In Kanto and Kyushu, soba shops often use satsuma-age (fried fish cake) or chikuwa for tempura. These two fish cakes are sometimes batter-fried.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Eskimo Rescue** Eskimo Rescue: An Eskimo rescue, bow rescue or T-rescue is a kayaking technique performed to recover a kayaker from a capsize without them having to leave their boat or perform a self-rescue such as a kayak roll. The advantages of this manoeuvre are that the kayaker does not have to get out of the kayak and the kayak does not then have to be emptied of water. However, it relies on another kayaker being able to assist quickly enough. More advanced kayakers will often prefer to rely on a kayak roll instead. Technique: After drawing attention to the capsize by banging on the bottom of their boat, the kayaker who capsized waits upside down underwater until another kayak arrives to help. The capsized kayaker finds the other kayak, usually the bow, with their hand and uses this for support while they perform a hip-flick to right their kayak. If the kayaker runs out of breath before managing to complete the eskimo rescue, as sometimes happens, they will exit their kayak by releasing their spray deck. Naming: "Eskimo rescue" is often used synonymously with "T rescue" and "bow rescue"; these names come from the shape the boats make (a T) and the part of the boat that is presented to the capsized kayaker, respectively. However, an eskimo rescue is really the general term for any rescue in which a capsized kayaker is righted with help from another.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Biscuits and gravy** Biscuits and gravy: Biscuits and gravy is a popular breakfast dish in the United States, especially in the South. The dish consists of soft dough biscuits covered in white gravy (sawmill gravy), made from the drippings of cooked pork sausage, flour, milk, and often (but not always) bits of sausage, bacon, ground beef, or other meat. The gravy is often flavored with black pepper. History: The meal emerged as a distinct regional dish after the American Revolutionary War (1775–1783), when stocks of foodstuffs were in short supply. Breakfast was necessarily the most substantial meal of the day for a person facing a day of work on the plantations in the American South. In addition, the lack of supplies and money meant it had to be cheap.Restaurant chains specializing in biscuits and gravy include Biscuitville, in Virginia and North Carolina, and Tudor's Biscuit World, in West Virginia. Variations: Tomato gravy is white gravy mixed with crushed or diced tomatoes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Computerised Pilot Selection System** Computerised Pilot Selection System: The Computerised Pilot Selection System is used for screening candidates into the flight branch of the Indian Air Force. It replaced the earlier pilot selection test, the Pilot Aptitude Battery Test (PABT). It was originally conceived by Dr. APJ Abdul Kalam, then Scientific Advisor to the Prime Minister, with a view to adopting a better tool for conducting pilot aptitude tests in consonance with the modern aircraft of the IAF. It was developed jointly by the Aeronautical Development Establishment, Bangalore, and the Defence Institute of Psychological Research, Delhi.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nucleoside oxidase** Nucleoside oxidase: Nucleoside oxidase (EC 1.1.3.28) is an enzyme with systematic name nucleoside:oxygen 5'-oxidoreductase. This enzyme catalyses the following overall chemical reaction:
inosine + O2 ⇌ 9-riburonosylhypoxanthine + H2O
which proceeds in two steps:
(1a) 2 inosine + O2 ⇌ 2 5'-dehydroinosine + 2 H2O
(1b) 2 5'-dehydroinosine + O2 ⇌ 2 9-riburonosylhypoxanthine
This enzyme can also use other purine and pyrimidine nucleosides (as well as 2'-deoxynucleosides) as substrates.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Libwww** Libwww: Libwww is an early World Wide Web software library providing core functions for web browsers, implementing HTML, HTTP, and other technologies. Tim Berners-Lee, at the European Organization for Nuclear Research (CERN), released libwww (then also called the Common Library) in late 1992, comprising reusable code from the first browsers (WorldWideWeb and Line Mode Browser). Libwww: Libwww was relied upon by the then-popular browser Mosaic. By 1997, interest in libwww declined, and the World Wide Web Consortium (W3C), which took over from CERN, reduced its commitment to the project. Later, the purpose of libwww was redefined to be "a testbed for protocol experiments"; in that role it was maintained for the benefit of the W3C's web standards-promoting browser Amaya. Active development of libwww stopped in 2000. libcurl is considered to be a modern replacement for libwww. History: In 1991 and 1992, Tim Berners-Lee and a student at CERN named Jean-François Groff rewrote various components of the original WorldWideWeb browser for the NeXTstep operating system in portable C code, in order to demonstrate the potential of the World Wide Web. In the beginning, libwww was referred to as the Common Library and was not available as a separate product. Before becoming generally available, libwww was integrated in the CERN program library (CERNLIB). In July 1992 the library was ported to DECnet. In the May 1993 World Wide Web Newsletter, Berners-Lee announced that the Common Library was now called libwww and had been released into the public domain to encourage the development of web browsers. He initially considered releasing the software under the GNU General Public License, rather than into the public domain, but decided against it due to concerns that large corporations such as IBM would be deterred from using it by the restrictions of the GPL. The rapid early development of the library caused Robert Cailliau problems when integrating it into his MacWWW browser. From February 1994 to July 1999 (versions 2.17 to 5.2.8), Henrik Frystyk Nielsen was responsible for libwww, first as a graduate student at CERN and later at the World Wide Web Consortium (W3C). On 21 March 1995, with the release of version 3.0, CERN transferred responsibility for libwww to the W3C. From 1995 onwards, the Line Mode Browser was no longer released separately, but as part of the libwww package. On 2 March 1997, Nielsen announced that Libwww 5.1 was expected to be the last release. Later that year, on 24 December 1997, Nielsen put out an unsuccessful call for another party outside W3C to take over maintenance of the library. Nielsen left the W3C in July 1999, and the project was thereafter headed by José Kahan as the only W3C employee involved with the project. On 2 September 2003 the W3C (re-)stated that development had stopped, citing a lack of resources. On 29 January 2004, the W3C once again confirmed that it would not continue development, and was seeking open source community maintainers. The first (and only) "community supported maintenance release" was made in 2005, after a gap of 3 years. After a further lapse of 12 years, a security patch was released in 2017. Features: In 2003, Kahan claimed that "libwww is the only library that has a full implementation of the HTTP specification, including caching and pipelining." Libwww supports the following protocols:
file
FTP
Gopher
HTTP 1.1 (with a Persistent Cache Manager and pipelining)
NNTP
Telnet
WAIS
Other features include:
TLS and SSL support through OpenSSL
gzip compression and decompression through zlib
an HTML, RDF, SGML and XML parser and a style sheet manager
integration of a SQL database (using the MySQL server), e.g. for web crawlers
Libwww also supports plug-ins. Applications using libwww: It has been used for applications of varying sizes, including web browsers, editors, Internet bots, and batch tools. Pluggable modules provided with libwww add support for HTTP/1.1 with caching, pipelining, POST, Digest Authentication, and deflate. The W3C created the Arena web browser as a testbed and testing tool for HTML3, Cascading Style Sheets (CSS), Portable Network Graphics (PNG) and libwww, among other technologies. Arena was later replaced in that role by Amaya. According to a survey conducted in September 2003, at least 19 applications used libwww:
Agora
Arena
Amaya
Cello
CERN httpd server
Cygwin
Distributed Oceanographic Data Systems with the OPeNDAP
GRIF Symposia, an HTML editor
Lynx
MacWWW
Mosaic
Robot Operating System (ROS)
TkWeb
tkWWW
WorldWideWeb (later Nexus)
Integrated applications in libwww are:
Command Line Tool, an application which shows how to use libwww to build simple batch-mode tools to access the Web.
Line Mode Browser, a spartan web browser.
Webbot, a simple application showing how to use libwww to build robots.
Mini Server, a small application showing how to implement a server or a proxy using libwww.
Criticism: The developers of libcurl have criticised libwww as being not as portable, not thread-safe and lacking several HTTP authentication types. Neither libcurl nor libwww are lightweight enough for some projects.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**(S)-tetrahydroprotoberberine N-methyltransferase** (S)-tetrahydroprotoberberine N-methyltransferase: In enzymology, a (S)-tetrahydroprotoberberine N-methyltransferase (EC 2.1.1.122) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + (S)-7,8,13,14-tetrahydroprotoberberine ⇌ S-adenosyl-L-homocysteine + cis-N-methyl-(S)-7,8,13,14-tetrahydroprotoberberine
Thus, the two substrates of this enzyme are S-adenosyl methionine and (S)-7,8,13,14-tetrahydroprotoberberine, whereas its two products are S-adenosylhomocysteine and cis-N-methyl-(S)-7,8,13,14-tetrahydroprotoberberine. This enzyme belongs to the family of transferases, specifically those transferring one-carbon groups (methyltransferases). The systematic name of this enzyme class is S-adenosyl-L-methionine:(S)-7,8,13,14-tetrahydroprotoberberine cis-N-methyltransferase. This enzyme is also called tetrahydroprotoberberine cis-N-methyltransferase. This enzyme participates in alkaloid biosynthesis I.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Double penetration** Double penetration: Double penetration (sometimes called DP for short) is a term that usually refers to a vaginal and anal sex act involving one penis penetrating a woman's vagina while another penetrates her anus. Practice: Double penetration usually involves the insertion and thrusting of two erect penises into a woman's vagina and anus simultaneously. It is a common practice in pornography. The term can also describe the insertion and thrusting of two erect penises into a single vagina or anus. This is known as double vaginal penetration (DVP) and double anal penetration (DAP) respectively. Practice: Double penetration can be carried out not only with penises, but also with different parts of the body (hands, fingers) or with specific sex toys.The sexual act may be pleasurable for the penetrated partner due to the simultaneous stimulation of the G-spot and the anterior fornix, or the prostate. The penetrating partners may derive pleasure from the tightness of the vagina and/or anus, as well as from their penises rubbing together either in the same orifice or through the lining of the rectovaginal fascia. History: Representations of double penetration have been depicted in many Roman erotic objects, as well as in the Kama Sutra. The first filmed double penetration in history appeared in 1970, in the movie "Delphia the Greek", by director Lasse Braun.The feminist Bernadette Barton has argued that double penetration, because it usually involves two men's penises close together or directly touching, is an "unusually homoerotic" act in an otherwise homophobic culture. Barton also thinks double penetration is "bizarre" and "body punishing" because it can stretch a woman's orifices to their physical limit. Pornographic actor James Deen denies that double penetration is inherently "homosexual activity", even if two penises are in the same orifice, believing that the motivation of the male performer determines whether the act is homosexual or not.Gail Dines believes double penetration is an extreme act that was "almost non-existent" before the 2010s, but that has now become one of the most popular types of pornography, a shift she attributes to misogyny. Pornographic actress Anikka Albrite called double penetration "a fantastic feeling" and "the ultimate feel-good drug" that everyone should try as there is nothing more pleasurable compared to it, "words cannot describe how amazing d.p.s are," she said. Spit-roast: The spit-roast is a variation of double penetration whereby a person is penetrated in the rear by one penis (either in the vagina or anus) and performs oral sex on another penis. This sexual act combines both the doggy style position with fellatio; the "spit-roast" is frequently depicted within pornography but some do not consider it a form of double penetration.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Elliptic-curve Diffie–Hellman** Elliptic-curve Diffie–Hellman: Elliptic-curve Diffie–Hellman (ECDH) is a key agreement protocol that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel. This shared secret may be directly used as a key, or to derive another key. The key, or the derived key, can then be used to encrypt subsequent communications using a symmetric-key cipher. It is a variant of the Diffie–Hellman protocol using elliptic-curve cryptography. Key establishment protocol: The following example illustrates how a shared key is established. Suppose Alice wants to establish a shared key with Bob, but the only channel available for them may be eavesdropped by a third party. Initially, the domain parameters (that is, (p,a,b,G,n,h) in the prime case or (m,f(x),a,b,G,n,h) in the binary case) must be agreed upon. Also, each party must have a key pair suitable for elliptic curve cryptography, consisting of a private key d (a randomly selected integer in the interval [1,n−1] ) and a public key represented by a point Q (where Q=d⋅G , that is, the result of adding G to itself d times). Let Alice's key pair be (dA,QA) and Bob's key pair be (dB,QB) . Each party must know the other party's public key prior to execution of the protocol. Key establishment protocol: Alice computes point (xk,yk)=dA⋅QB . Bob computes point (xk,yk)=dB⋅QA . The shared secret is xk (the x coordinate of the point). Most standardized protocols based on ECDH derive a symmetric key from xk using some hash-based key derivation function. Key establishment protocol: The shared secret calculated by both parties is equal, because dA⋅QB=dA⋅dB⋅G=dB⋅dA⋅G=dB⋅QA The only information about her key that Alice initially exposes is her public key. So, no party except Alice can determine Alice's private key (Alice of course knows it by having selected it), unless that party can solve the elliptic curve discrete logarithm problem. Bob's private key is similarly secure. No party other than Alice or Bob can compute the shared secret, unless that party can solve the elliptic curve Diffie–Hellman problem. Key establishment protocol: The public keys are either static (and trusted, say via a certificate) or ephemeral (also known as ECDHE, where final 'E' stands for "ephemeral"). Ephemeral keys are temporary and not necessarily authenticated, so if authentication is desired, authenticity assurances must be obtained by other means. Authentication is necessary to avoid man-in-the-middle attacks. If one of either Alice's or Bob's public keys is static, then man-in-the-middle attacks are thwarted. Static public keys provide neither forward secrecy nor key-compromise impersonation resilience, among other advanced security properties. Holders of static private keys should validate the other public key, and should apply a secure key derivation function to the raw Diffie–Hellman shared secret to avoid leaking information about the static private key. For schemes with other security properties, see MQV. Key establishment protocol: If Alice maliciously chooses invalid curve points for her key and Bob does not validate that Alice's points are part of the selected group, she can collect enough residues of Bob's key to derive his private key. Several TLS libraries were found to be vulnerable to this attack.The shared secret is uniformly distributed on a subset of [0,p) of size (n+1)/2 . 
For this reason, the secret should not be used directly as a symmetric key, but it can be used as entropy for a key derivation function. Software: Curve25519 is a popular set of elliptic curve parameters and reference implementation by Daniel J. Bernstein in C. Bindings and alternative implementations are also available. LINE messenger app has used the ECDH protocol for its "Letter Sealing" end-to-end encryption of all messages sent through said app since October 2015. Signal Protocol uses ECDH to obtain post-compromise security. Implementations of this protocol are found in Signal, WhatsApp, Facebook Messenger and Skype.
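As a concrete illustration of the exchange-then-derive pattern described above, here is a minimal sketch using the X25519 (Curve25519-based ECDH) and HKDF primitives from the third-party Python cryptography package. The variable names mirror the Alice-and-Bob description, and the info label is an arbitrary placeholder of this example, not part of any protocol.

```python
# Minimal ECDH sketch with the Python "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an elliptic-curve key pair (d, Q = d*G).
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Only the public keys cross the insecure channel.
alice_public = alice_private.public_key()
bob_public = bob_private.public_key()

# Each side combines its own private key with the peer's public key;
# both computations yield the same raw shared secret (dA*QB = dB*QA).
alice_shared = alice_private.exchange(bob_public)
bob_shared = bob_private.exchange(alice_public)
assert alice_shared == bob_shared

# The raw secret is not uniformly distributed, so derive the actual
# symmetric key with a key derivation function (HKDF here).
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"example handshake").derive(alice_shared)
print(key.hex())
```

Note that this sketch uses ephemeral, unauthenticated keys; as the text explains, a real protocol must authenticate at least one of the public keys to resist man-in-the-middle attacks.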
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kinamycin** Kinamycin: Kinamycins are a group of bacterial polyketide secondary metabolites containing a diazo group. Kinamycins are known for their cytotoxicity and are considered of interest for potential use in anti-cancer therapies. Synthesis: In 2006 and 2007, methods for the total, enantioselective synthesis of kinamycins C, F, and J were discovered. In 2010, a method was found that allows easier synthesis of these compounds in fewer steps, making research into their properties more feasible.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tuberculosis hut** Tuberculosis hut: A tuberculosis hut or TB hut is a small wooden building that was used, mostly in the early twentieth century, by tuberculosis patients to recover in solitude. Introduction: By the end of the 19th century, one out of four deaths in Europe was related to tuberculosis. The disease was often associated with poor hygienic conditions and air pollution in the cities. As a result of improvements in housing and healthcare in the beginning of the 20th century, there was a downward trend in the number of patients, but still there was no cure. The medical treatment consisted mainly of bed rest, sunlight, fresh air and healthy food. As one of the alternatives to treatment in a sanatorium, the tuberculosis huts were introduced. Locations: In the United Kingdom and the Netherlands, the houses could be found in groups near hospitals or sanatoria. In the Netherlands, they could also be found near health associations, on farm ground just outside the town center, or in the gardens of individuals. Once a day a nurse visited the patient for medical treatment; the family of the patient took care of the rest. The huts could be bought, borrowed or rented. In the United States, similar huts were built in Colorado Springs. Where patients in Europe were sent to sanatoria in the Alps, patients in the United States were encouraged to cure in the fresh mountain air of Colorado Springs. Charles Fox Gardiner, a local doctor, decided to avoid any cross-contamination between patients by isolating them in small tents, instead of putting them all in one room. He developed special octahedral huts, which were placed in rows. Design: Tuberculosis huts existed in various forms, but in general they were simple premanufactured wooden buildings that could be put together on the spot. They were white or green, with a lot of glass to allow the entry of as much sunlight as possible. On the front side, the houses were either fully open, or they contained large doors that could be opened wide. The huts in Colorado Springs were fixed in one place. The type of hut used in British hospitals could be rotated on turntables towards the sun and out of the wind, to optimise the recovery conditions for the patients. In the Netherlands, both the fixed and the revolving types could be found. Use: The original purpose of the hut was that the patient could recover by resting in solitude. The patient was supposed to stay in the hut night and day, and this stay could take months or even years. The huts were acquired for other purposes too, like a summerhouse or gazebo. At least the Irish playwright George Bernard Shaw and sexologist Havelock Ellis are known to have owned a revolving "writing hut". Until the late 1940s tuberculosis patients were often put in tuberculosis huts. With the introduction of effective medication in the 1950s the huts lost their original purpose and started to serve new ones. In 1982 the collection of the Netherlands Open Air Museum was expanded with a tuberculosis hut, as a gift from the National Cross Association. One of the open-air TB huts of Montcalm Sanitarium in Manitou Springs, Colorado was donated to the Miramont Castle museum in 1998. Rijksmuseum Boerhaave in Leiden (the Netherlands) has owned a hut since 2011, and Open Air Museum Het Hoogeland in Warffum (the Netherlands) has owned one since April 2016.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sensor (character)** Sensor (character): This page discusses the post-Zero Hour reboot version of the character. For the other versions, see Princess Projectra. Jeka Wynzorr, codenamed Sensor, is a fictional character, a superheroine in the future of the DC Comics universe, and a member of the Legion of Super-Heroes. She is a snake-like alien, who was later altered by "Hypertaxis energy" and Ra's al Ghul into a semi-humanoid shape, retaining her serpent's tail but gaining a humanoid upper body. Fictional character biography: Jeka Wynzorr was the princess of the planet Orando, a world populated by a ruling class of large, sentient snakes and an underclass of (similarly sentient) small, raccoon-like mammals. She renounced this heritage to travel the galaxy, using her illusion-casting powers to disguise herself as a humanoid to avoid attention. It is not explained how her race, which lacks manual manipulators, constructed an advanced civilization, but it is implied that it may have been via the enslavement of the raccoons. Upon joining the Legion, she immediately donned a set of cybernetic arms, which she was thereafter rarely seen without. Fictional character biography: Eventually, she arrived on Earth and chose to join the Legion tryouts. She was accepted, alongside Magno and Umbra, and chose the codename Sensor (a homage to "Sensor Girl", Princess Projectra's later codename in the pre-Zero Hour Legion) because of her powers as a mentalist. She served with the Legion for some time - even during their disbandment, when half the team was "Lost", she was a key component of R. J. Brande's plan to construct the artificial planetoid, Legion World, allowing them to hide the construction efforts while Brande and Cosmic Boy plotted to restart the team. Fictional character biography: After most of the Lost members returned and the team formally reconstituted, she continued to serve with the team, until she was struck by a bolt of "Hypertaxis energy" while on Xanthu. This caused her to mutate out of control until Ra's al Ghul managed to stabilize her in a form rather different from her original body. Bitter after the change, she took to hiding in her quarters with the lights off, refusing to speak to anyone until she was forced out of hiding when the rest of the team (as well as everyone else on Legion World and several other planets) was enslaved by Universo, to which she proved to have a natural immunity. After Shikari managed to free herself, the two were forced to use an unstable Threshold link to the planet Steeple to escape their teammates. There, they met with Ferro and Karate Kid, and the monks who resided there created crystal necklaces which allowed her to extend her immunity to mental takeover to them. While the other three were enslaved by Universo quickly, she managed to fool him long enough to free Saturn Girl and Dreamer. With aid from Apparition and Ultra Boy's child, Cub, Saturn Girl proved able to defeat Universo, and Sensor co-nominated Dreamer for Legion membership. She is now slowly adjusting to her change of form. Fictional character biography: In the new Legion continuity launched in 2004, Sensor has disappeared and in her place is a new version of the original, humanoid Princess Projectra. Fictional character biography: Final Crisis In Final Crisis: Legion of 3 Worlds #2, the reboot Legion and the "threeboot" Legion are summoned to the 31st Century of New Earth. 
Sensor is among the Legionnaires rescued from limbo, as Princess Projectra is a member of the threeboot Legion, and Sensor Girl is a member of the original Legion. Her original form was seen among dozens of Legionnaires pulled from alternate realities in issue #5. Name: Sensor's first name, Jeka, is a reference to her preboot counterpart's nickname "Jeckie" (short for "Projectra"). Her surname, Wynzorr, is a reference to the House of Windsor, the current British royal family. In keeping with this, her father is King Charlz, and she has a brother named Willum, a play on King Charles III and his son William, Prince of Wales. In other media: Sensor appears in Adventures in the DC Universe #10.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Drawing (manufacturing)** Drawing (manufacturing): Drawing is a metalworking process that uses tensile forces to elongate metal, glass, or plastic. As the material is drawn (pulled), it stretches and becomes thinner, achieving a desired shape and thickness. Drawing is classified into two types: sheet metal drawing and wire, bar, and tube drawing. Sheet metal drawing is defined as a plastic deformation over a curved axis. For wire, bar, and tube drawing, the starting stock is drawn through a die to reduce its diameter and increase its length. Drawing is usually performed at room temperature, thus classified as a cold working process; however, drawing may also be performed at higher temperatures to hot work large wires, rods, or hollow tubes in order to reduce forces.Drawing differs from rolling in that pressure is not applied by the turning action of a mill but instead depends on force applied locally near the area of compression. This means the maximal drawing force is limited by the tensile strength of the material, a fact particularly evident when drawing thin wires.The starting point of cold drawing is hot-rolled stock of a suitable size. Metal: Successful drawing depends on the flow and stretch of the material. Steels, copper alloys, and aluminium alloys are commonly drawn metals.In sheet metal drawing, as a die forms a shape from a flat sheet of metal (the "blank"), the material is forced to move and conform to the die. The flow of material is controlled through pressure applied to the blank and lubrication applied to the die or the blank. If the form moves too easily, wrinkles will occur in the part. To correct this, more pressure or less lubrication is applied to the blank to limit the flow of material and cause the material to stretch or set thin. If too much pressure is applied, the part will become too thin and break. Drawing metal requires finding the correct balance between wrinkles and breaking to achieve a successful part. Metal: Sheet metal drawing becomes deep drawing when the workpiece is longer than its diameter. It is common that the workpiece is also processed using other forming processes, such as piercing, ironing, necking, rolling, and beading. In shallow drawing, the depth of drawing is less than the smallest dimension of the hole. Metal: Bar, tube, and wire drawing all work upon the same principle: the starting stock is drawn through a die to reduce its diameter and increase its length. Usually, the die is mounted on a draw bench. The starting end of the workpiece is narrowed or pointed to get the end through the die. The end is then placed in grips which pull the rest of the workpiece through the die.Drawing can also be used to cold form a shaped cross-section. Cold drawn cross-sections are more precise and have a better surface finish than hot extruded parts. Inexpensive materials can be used instead of expensive alloys for strength requirements, due to work hardening. Bars or rods that are drawn cannot be coiled; therefore, straight-pull draw benches are used. Chain drives are used to draw workpieces up to 30 m (98 ft). Hydraulic cylinders are used for shorter length workpieces. The reduction in area is usually restricted to between 20% and 50%, because greater reductions would exceed the tensile strength of the material, depending on its ductility. To achieve a certain size or shape, multiple passes through progressively smaller dies and intermediate anneals may be required. Tube drawing is very similar to bar drawing, except the beginning stock is a tube. 
It is used to decrease the diameter, improve surface finish, and improve dimensional accuracy. A mandrel may or may not be used depending on the specific process used. A floating plug may also be inserted into the inside diameter of the tube to control the wall thickness. Wire drawing has long been used to produce flexible metal wire by drawing the material through a series of dies of decreasing size. These dies are manufactured from a number of materials, the most common being tungsten carbide and diamond. Metal: The cold drawing process for steel bars and wire is as follows: Tube lubrication: The surface of the bar or tube is coated with a drawing lubricant such as phosphate or oil to aid cold drawing. Push Pointing: Several inches of the lead ends of the bar or tube are reduced in size by swaging or extruding so that it can pass freely through the drawing die. This is done because the die opening is always smaller in size than the original bar or coil section. Metal: Cold drawing, process drawing: In this process, the material is drawn at room temperature. The reduced end of the bar or coil, which is smaller than the die opening, is passed through the die where it enters a gripping device of the drawing machine. The drawing machine pulls ("draws") the remaining unreduced section of the bar or coil through the die. The die reduces the cross section of the bar or coil, shapes its profile, and increases its length. Metal: Finished product: The drawn product, which is referred to as "cold drawn" or "cold finished", exhibits a bright or polished finish, increased mechanical properties, improved machining characteristics, and precise and uniform dimensional tolerances. Multi-pass drawing: The cold drawing of complex shapes or profiles may involve the workpiece being drawn multiple times through progressively smaller die openings in order to produce the desired shape and tolerances. Material is generally annealed between each drawing pass to increase its ductility and remove internal stresses produced during the cold working. Annealing: This is a thermal treatment generally used to soften the material being drawn; to modify the microstructure, the mechanical properties, and the machining characteristics of the steel; and to remove internal stresses in the product. Depending on the material and desired final characteristics, annealing may be used before, during (between passes), or after the cold drawing operation. Glass: Similar drawing processes are applied in glassblowing and in making glass and plastic optical fiber. Plastics: Plastic drawing, sometimes referred to as cold drawing, is the same process as used on metal bars, applied to plastics. Plastic drawing is primarily used in manufacturing plastic fibers. The process was discovered by Julian W. Hill in 1930 while trying to make fibers from an early polyester.It is performed after the material has been "spun" into filaments; by extruding the polymer melt through pores of a spinneret. During this process, the individual polymer chains tend to somewhat align because of viscous flow. These filaments still have an amorphous structure, so they are drawn to align the fibers further, thus increasing crystallinity, tensile strength, and stiffness. This is done on a draw twister machine. For nylon, the fiber is stretched to four times its spun length. The crystals formed during drawing are held together by hydrogen bonds between the amide hydrogens of one chain and the carbonyl oxygens of another chain. 
Polyethylene terephthalate (PET) sheet is drawn in two dimensions to make BoPET (biaxially-oriented polyethylene terephthalate) with improved mechanical properties.
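To make the multi-pass reduction arithmetic from the metal-drawing section concrete: because cross-sectional area scales with the square of the diameter and each pass removes only a bounded fraction of the current area, the number of die passes needed follows directly. The sketch below is illustrative only; the function name and the 30% per-pass figure are assumptions chosen from the 20-50% range cited above.

```python
import math

def passes_required(start_diameter, target_diameter, reduction_per_pass=0.30):
    """Estimate how many die passes a round bar or wire needs.

    Cross-sectional area scales with diameter squared, and each pass
    removes at most `reduction_per_pass` of the current area (the text
    cites a practical range of roughly 20-50%).
    """
    # Overall area ratio A_final / A_start for a round cross-section.
    area_ratio = (target_diameter / start_diameter) ** 2
    # Each pass multiplies the area by (1 - reduction_per_pass).
    return math.ceil(math.log(area_ratio) / math.log(1 - reduction_per_pass))

# Example: drawing 8 mm rod down to 1 mm wire at 30% area reduction per pass.
print(passes_required(8.0, 1.0))  # -> 12 passes (with anneals likely in between)
```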
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Μ operator** Μ operator: In computability theory, the μ-operator, minimization operator, or unbounded search operator searches for the least natural number with a given property. Adding the μ-operator to the primitive recursive functions makes it possible to define all computable functions. Definition: Suppose that R(y, x1, ..., xk) is a fixed (k+1)-ary relation on the natural numbers. The μ-operator "μy", in either the unbounded or bounded form, is a "number theoretic function" defined from the natural numbers to the natural numbers. However, "μy" contains a predicate over the natural numbers, which can be thought of as a condition that evaluates to true when the predicate is satisfied and false when it is not. Definition: The bounded μ-operator appears earlier in Kleene (1952) Chapter IX Primitive Recursive Functions, §45 Predicates, prime factor representation as:
μy y<z R(y) = the least y < z such that R(y), if such a y exists; otherwise, z. (p. 225)
Stephen Kleene notes that any of the six inequality restrictions on the range of the variable y is permitted, i.e. y < z, y ≤ z, w < y < z, w < y ≤ z, w ≤ y < z and w ≤ y ≤ z. "When the indicated range contains no y such that R(y) [is "true"], the value of the "μy" expression is the cardinal number of the range" (p. 226); this is why the default "z" appears in the definition above. As shown below, the bounded μ-operator "μy y<z" is defined in terms of two primitive recursive functions called the finite sum Σ and finite product Π, a predicate function that "does the test" and a representing function that converts {t, f} to {0, 1}. Definition: In Chapter XI §57 General Recursive Functions, Kleene defines the unbounded μ-operator over the variable y in the following manner:
μy R(y) = the least y (a natural number) such that R(y). (p. 279, where "(∃y)" means "there exists a y such that...")
In this instance R itself, or its representing function, delivers 0 when it is satisfied (i.e. delivers true); the function then delivers the number y. No upper bound exists on y, hence no inequality expressions appear in its definition. Definition: For a given R(y) the unbounded μ-operator μyR(y) (note no requirement for "(Ey)") is a partial function. Kleene makes it a total function instead (cf. p. 317):
μy R(y) = the least y such that R(y), if (∃y) R(y); otherwise, 0.
The total version of the unbounded μ-operator is studied in higher-order reverse mathematics (Kohlenbach (2005)) in the following form:
(∃μ2)(∀f1)((∃n0)(f(n)=0) → f(μ(f))=0),
where the superscripts mean that n is zeroth-order, f is first-order, and μ is second-order. This axiom gives rise to the Big Five system ACA0 when combined with the usual base theory of higher-order reverse mathematics. Properties: (i) In the context of the primitive recursive functions, where the search variable y of the μ-operator is bounded, e.g. y < z in the formula below, if the predicate R is primitive recursive (Kleene Proof #E p. 228), then μy y<z R(y, x1, ..., xn) is a primitive recursive function. (ii) In the context of the (total) recursive functions, where the search variable y is unbounded but guaranteed to exist for all values of the parameters of the total recursive predicate R,
(x1), ..., (xn) (Ey) R(y, x1, ..., xn) implies that μy R(y, x1, ..., xn) is a total recursive function.
Properties: Here (xi) means "for all xi" and Ey means "there exists at least one value of y such that..." (cf. Kleene (1952) p. 279.) Then the five primitive recursive operators plus the unbounded-but-total μ-operator give rise to what Kleene called the "general" recursive functions (i.e. 
Properties: (i) In the context of the primitive recursive functions, where the search variable y of the μ-operator is bounded, e.g. y < z in the formula below, if the predicate R is primitive recursive (Kleene Proof #E p. 228), then μy_{y<z} R(y, x1, ..., xn) is a primitive recursive function. (ii) In the context of the (total) recursive functions, where the search variable y is unbounded but guaranteed to exist for all values xi of the total recursive predicate R's parameters, (x1)...(xn) (Ey) R(y, x1, ..., xn) implies that μy R(y, x1, ..., xn) is a total recursive function. Properties: Here (xi) means "for all xi" and Ey means "there exists at least one value of y such that..." (cf. Kleene (1952) p. 279). Then the five primitive recursive operators plus the unbounded-but-total μ-operator give rise to what Kleene called the "general" recursive functions (i.e. total functions defined by the six recursion operators). Properties: (iii) In the context of the partial recursive functions: Suppose that the relation R holds if and only if a partial recursive function converges to zero. And suppose that that partial recursive function converges (to something, not necessarily zero) whenever μyR(y, x1, ..., xk) is defined and y is μyR(y, x1, ..., xk) or smaller. Then the function μyR(y, x1, ..., xk) is also a partial recursive function. Properties: The μ-operator is used in the characterization of the computable functions as the μ recursive functions. In constructive mathematics, the unbounded search operator is related to Markov's principle. Examples: Example 1: The bounded μ-operator is a primitive recursive function In the following, x represents the string x1, ..., xn. The bounded μ-operator can be expressed rather simply in terms of two primitive recursive functions (hereafter "prf") that also are used to define the CASE function—the product-of-terms Π and the sum-of-terms Σ (cf. Kleene #B page 224). (As needed, any boundary for the variable such as s ≤ t or t < z, or 5 < x < 17, etc. is appropriate.) For example: Π_{s≤t} f(x, s) = f(x, 0) × f(x, 1) × ... × f(x, t); Σ_{t<z} g(x, t) = g(x, 0) + g(x, 1) + ... + g(x, z−1). Before we proceed we need to introduce a function ψ called "the representing function" of predicate R. Function ψ is defined from inputs (t = "truth", f = "falsity") to outputs (0, 1) (note the order!). In this case the input to ψ, i.e. {t, f}, is coming from the output of R: ψ(R = t) = 0, ψ(R = f) = 1. Kleene demonstrates that μy_{y<z} R(y) is defined as follows; we see the product function Π is acting like a Boolean OR operator, and the sum Σ is acting somewhat like a Boolean AND but is producing {Σ≠0, Σ=0} rather than just {1, 0} (writing ψ(x, s) for the representing function of R evaluated at s): μy_{y<z} R(y) = Σ_{t<z} Π_{s≤t} ψ(x, s) = [ψ(x, 0)] + [ψ(x, 0) × ψ(x, 1)] + [ψ(x, 0) × ψ(x, 1) × ψ(x, 2)] + ... + [ψ(x, 0) × ψ(x, 1) × ψ(x, 2) × ... × ψ(x, z−1)]. Note that Σ is actually a primitive recursion with the base Σ(x, 0) = 0 and the induction step Σ(x, y+1) = Σ(x, y) + Π(x, y). The product Π is also a primitive recursion with base step Π(x, 0) = ψ(x, 0) and induction step Π(x, y+1) = Π(x, y) × ψ(x, y+1). The equation is easier if observed with an example, as given by Kleene. He just made up the entries for the representing function ψ(R(y)). He designated the representing functions χ(y) rather than ψ(x, y).
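Kleene's sum-of-products construction can be run directly. The following sketch (Python; the predicate is again modelled as a Boolean function, an assumption of the illustration) makes the counting argument concrete: each inner product stays 1 while every instance up to t has failed and is 0 from the first witness onward, so the outer sum counts exactly the failures that precede the first witness:

```python
def psi(R, *args):
    """Representing function of predicate R: 0 for "true", 1 for "false"
    (note the order, as in the text above)."""
    return 0 if R(*args) else 1

def mu_bounded_via_sums(R, z, *x):
    """Bounded mu-operator built only from finite sums and products."""
    total = 0
    for t in range(z):
        product = 1
        for s in range(t + 1):
            product *= psi(R, s, *x)
        total += product
    return total

# Agrees with the direct search: the least y with y*y >= 10 is 4,
# and the default z is returned when no witness lies below the bound.
assert mu_bounded_via_sums(lambda y: y * y >= 10, 8) == 4
assert mu_bounded_via_sums(lambda y: y * y >= 10, 3) == 3
```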
Example 2: The unbounded μ-operator is not primitive-recursive The unbounded μ-operator—the function μy—is the one commonly defined in the texts. But the reader may wonder why the unbounded μ-operator is searching for a function R(x, y) to yield zero, rather than some other natural number. Examples: In a footnote Minsky does allow his operator to terminate when the function inside produces a match to the parameter "k"; this example is also useful because it shows another author's format: "For μt[φ(t) = k]" (p. 210). The reason for zero is that the unbounded operator μy will be defined in terms of the function "product" Π with its index y allowed to "grow" as the μ-operator searches. As noted in the example above, the product of a string of numbers ψ(x, 0), ..., ψ(x, y) yields zero whenever one of its members ψ(x, i) is zero: Π_{s≤y} ψ(x, s) = ψ(x, 0) × ... × ψ(x, y) = 0 if any ψ(x, i) = 0, where 0 ≤ i ≤ y. Thus the Π is acting like a Boolean AND. Examples: The function μy produces as "output" a single natural number y = {0, 1, 2, 3, ...}. However, inside the operator one of a couple of "situations" can appear: (a) a "number-theoretic function" χ that produces a single natural number, or (b) a "predicate" R that produces either {t = true, f = false}. (And, in the context of partial recursive functions, Kleene later admits a third outcome: "μ = undecided".) Kleene splits his definition of the unbounded μ-operator to handle the two situations (a) and (b). For situation (b), before the predicate R(x, y) can serve in an arithmetic capacity in the product Π, its output {t, f} must first be "operated on" by its representing function χ to yield {0, 1}. And for situation (a), if one definition is to be used, then the number-theoretic function χ must produce zero to "satisfy" the μ-operator. With this matter settled, he demonstrates with a single "Proof III" that either type (a) or (b) together with the five primitive recursive operators yield the (total) recursive functions, with this proviso for a total function: For all parameters x, a demonstration must be provided to show that a y exists that satisfies (a) μy ψ(x, y) or (b) μy R(x, y). Kleene also admits a third situation (c) that does not require the demonstration of "for all x a y exists such that ψ(x, y)." He uses this in his proof that more total recursive functions exist than can be enumerated; cf. footnote Total function demonstration. Examples: Kleene's proof is informal and uses an example similar to the first example, but first he casts the μ-operator into a different form that uses the "product-of-terms" Π operating on a function χ that yields a natural number n, which can be any natural number, and 0 in the instance when the μ-operator's test is "satisfied". Examples: The definition recast with the Π-function: μy_{y<z} χ(y) = (i) π(x, y) = Π_{s<y} χ(x, s); (ii) φ(x) = τ(π(x, y), π(x, y'), y); (iii) τ(z', 0, y) = y; τ(u, v, w) is undefined for u = 0 or v > 0. This is subtle. At first glance the equations seem to be using primitive recursion. But Kleene has not provided us with a base step and an induction step of the general form: base step: φ(0, x) = σ(x); induction step: φ(y', x) = ψ(y, φ(y, x), x). To see what is going on, we first have to remind ourselves that we have assigned a parameter (a natural number) to every variable xi. Second, we do see a successor-operator at work iterating y (i.e. the y'). And third, we see that the function μy_{y<z} χ(y, x) is just producing instances of χ(y, x), i.e. χ(0, x), χ(1, x), ..., until an instance yields 0. Fourth, when an instance χ(n, x) yields 0 it causes the middle term of τ, i.e. v = π(x, y'), to yield 0. Finally, when the middle term v = 0, μy_{y<z} χ(y) executes line (iii) and "exits". Kleene's presentation of equations (ii) and (iii) has been exchanged here to make the point that line (iii) represents an exit—an exit taken only when the search successfully finds a y to satisfy χ(y) and the middle product-term π(x, y') is 0; the operator then terminates its search with τ(z', 0, y) = y. Examples: τ(π(x, y), π(x, y'), y), i.e.: τ(π(x, 0), π(x, 1), 0), τ(π(x, 1), π(x, 2), 1), τ(π(x, 2), π(x, 3), 2), τ(π(x, 3), π(x, 4), 3), ... until a match occurs at y = n, and then: τ(z', 0, y) = τ(z', 0, n) = n, and the μ-operator's search is done. For the example Kleene "...consider[s] any fixed values of (x1, ..., xn) and write[s] simply 'χ(y)' for 'χ(x1, ..., xn, y)'".
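The recast definition is less subtle in executable form. A sketch (Python; the names pi and mu_via_products are hypothetical, with χ modelled as an ordinary function into the natural numbers): grow y until the middle term π(x, y') collapses to zero, then exit with y, which is the role τ plays in Kleene's equations.

```python
def pi(chi, y, *x):
    """pi(x, y) = product over s < y of chi(s, *x); an empty product is 1."""
    product = 1
    for s in range(y):
        product *= chi(s, *x)
    return product

def mu_via_products(chi, *x):
    """Search by growing products: stop as soon as pi(x, y') = 0, i.e. as
    soon as some instance chi(y, *x) has yielded 0, and exit with y.
    Partial: loops forever if no instance of chi is ever 0."""
    y = 0
    while pi(chi, y + 1, *x) != 0:
        y += 1
    return y

# chi yields 0 exactly when its test is satisfied; the least such y is 4.
assert mu_via_products(lambda y: 0 if y * y >= 10 else 1) == 4
```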
Example 3: Definition of the unbounded μ-operator in terms of an abstract machine Both Minsky (1967) p. 21 and Boolos-Burgess-Jeffrey (2002) pp. 60-61 provide definitions of the μ-operator as an abstract machine; see footnote Alternative definitions of μ. Examples: The following demonstration follows Minsky without the "peculiarity" mentioned in the footnote. The demonstration will use a "successor" counter machine model closely related to the Peano Axioms and the primitive recursive functions. The model consists of (i) a finite state machine with a TABLE of instructions and a so-called 'state register' that we will rename "the Instruction Register" (IR), (ii) a few "registers" each of which can contain only a single natural number, and (iii) an instruction set of four "commands" described in the following table. In the following, the symbolism "[r]" means "the contents of", and "→r" indicates an action with respect to register r. The algorithm for the minimization operator μy[φ(x, y)] will, in essence, create a sequence of instances of the function φ(x, y) as the value of parameter y (a natural number) increases; the process will continue (see Note † below) until a match occurs between the output of function φ(x, y) and some pre-established number (usually 0). Thus the evaluation of φ(x, y) requires, at the outset, assignment of a natural number to each of its variables x, an assignment of a "match-number" (usually 0) to a register "w", and a number (usually 0) to register y. Examples: Note †: The unbounded μ-operator will continue this attempt-to-match process ad infinitum or until a match occurs. Thus the "y" register must be unbounded -- it must be able to "hold" a number of arbitrary size. Unlike a "real" computer model, abstract machine models allow this. In the case of a bounded μ-operator, a lower bound would mean starting with the contents of y set to a number other than zero; an upper bound would require an additional register "ub" to contain the number that represents the upper bound, plus an additional comparison operation; an algorithm could provide for both lower and upper bounds. In the following we are assuming that the Instruction Register (IR) encounters the μy "routine" at instruction number "n". Its first action will be to establish a number in a dedicated "w" register—an "example of" the number that function φ(x, y) must produce before the algorithm can terminate (classically this is the number zero, but see the footnote about the use of numbers other than zero). The algorithm's next action at instruction "n+1" will be to clear the "y" register -- "y" will act as an "up-counter" that starts from 0. Then at instruction "n+2" the algorithm evaluates its function φ(x, y) -- we assume this takes j instructions to accomplish—and at the end of its evaluation φ(x, y) deposits its output in register "φ". At the (n+j+3)rd instruction the algorithm compares the number in the "w" register (e.g. 0) to the number in the "φ" register—if they are the same the algorithm has succeeded and it escapes through exit; otherwise it increments the contents of the "y" register and loops back with this new y-value to test function φ(x, y) again.
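As a sketch of that register-level loop (Python standing in for the counter machine; the register and routine names are illustrative, not Minsky's or Boolos-Burgess-Jeffrey's):

```python
def mu_routine(phi, x, match=0):
    """The search loop described above, with the registers as variables:
    w holds the match number (classically 0), y is an unbounded
    up-counter, and f receives the output of phi."""
    w = match          # instruction n: establish the match number in w
    y = 0              # instruction n+1: clear the y register
    while True:
        f = phi(x, y)  # instructions n+2 .. n+j+1: evaluate phi(x, y)
        if f == w:     # compare register f with register w
            return y   # match: escape through "exit" with the current y
        y += 1         # no match: increment y and loop back

# The least y with 16 - y*y == 0, i.e. the integer square root of 16:
assert mu_routine(lambda x, y: x - y * y, 16) == 4
```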
Footnotes: Total function demonstration What is mandatory, if the function is to be a total function, is a demonstration by some other method (e.g. induction) that for each and every combination of values of its parameters xi some natural number y will satisfy the μ-operator, so that the algorithm that represents the calculation can terminate: "...we must always hesitate to assume that a system of equations really defines a general-recursive (i.e. total) function. We normally require auxiliary evidence for this, e.g. in the form of an inductive proof that, for each argument value, the computation terminates with a unique value." (Minsky (1967) p. 186) "In other words, we should not claim that a function is effectively calculable on the ground that it has been shown to be general (i.e. total) recursive, unless the demonstration that it is general recursive is effective." (Kleene (1952) p. 319) For an example of what this means in practice see the examples at mu recursive functions—even the simplest truncated subtraction algorithm "x − y = d" can yield, for the undefined cases when x < y, (1) no termination, (2) no numbers (i.e. something wrong with the format so the yield is not considered a natural number), or (3) deceit: wrong numbers in the correct format. The "proper" subtraction algorithm requires careful attention to all the "cases" (x, y) = {(0, 0), (a, 0), (0, b), (a≥b, b), (a=b, b), (a<b, b)}. But even when the algorithm has been shown to produce the expected output in the instances {(0, 0), (1, 0), (0, 1), (2, 1), (1, 1), (1, 2)}, we are left with an uneasy feeling until we can devise a "convincing demonstration" that the cases (x, y) = (n, m) all yield the expected results. To Kleene's point: is our "demonstration" (i.e. the algorithm that is our demonstration) convincing enough to be considered effective? Alternative abstract machine models of the unbounded μ-operator from Minsky (1967) and Boolos-Burgess-Jeffrey (2002) The unbounded μ-operator is defined by Minsky (1967) p. 210, but with a peculiar flaw: the operator will not yield t = 0 when its predicate (the IF-THEN-ELSE test) is satisfied; rather, it yields t = 2. In Minsky's version the counter is "t", and the function φ(t, x) deposits its number in register φ. In the usual μ definition register w will contain 0, but Minsky observes that it can contain any number k. Minsky's instruction set is equivalent to the following, where "JNE" = Jump to z if Not Equal: { CLR (r), INC (r), JNE (rj, rk, z) }. The unbounded μ-operator is also defined by Boolos-Burgess-Jeffrey (2002) pp. 60-61 for a counter machine with an instruction set equivalent to the following: { CLR (r), INC (r), DEC (r), JZ (r, z), H }. In this version the counter "y" is called "r2", and the function f(x, r2) deposits its number in register "r3". Perhaps the reason Boolos-Burgess-Jeffrey clear r3 is to facilitate an unconditional jump to loop; this is often done by use of a dedicated register "0" that contains "0".
**Pulmonary artery catheter** Pulmonary artery catheter: A pulmonary artery catheter (PAC), also known as a Swan-Ganz catheter or right heart catheter, is a balloon-tipped catheter that is inserted into a pulmonary artery in a procedure known as pulmonary artery catheterization or right heart catheterization. Pulmonary artery catheterization is a useful measure of the overall function of the heart, particularly in those with complications from heart failure, heart attack, arrhythmias or pulmonary embolism. It is also useful in guiding intravenous fluid therapy, for instance after heart surgery, or in shock and severe burns. The procedure can also be used to measure pressures in the heart chambers. Pulmonary artery catheter: The pulmonary artery catheter allows direct, simultaneous measurement of pressures in the right atrium, right ventricle, pulmonary artery, and the filling pressure (pulmonary wedge pressure) of the left atrium. The pulmonary artery catheter is frequently referred to as a Swan-Ganz catheter, in honor of its inventors Jeremy Swan and William Ganz, from Cedars-Sinai Medical Center. Indications: General indications are: Management of complicated myocardial infarction Hypovolemia vs cardiogenic shock Ventricular septal rupture (VSR) vs acute mitral regurgitation Severe left ventricular failure Right ventricular infarction Unstable angina Refractory ventricular tachycardia Assessment of respiratory distress Cardiogenic vs non-cardiogenic pulmonary edema Primary vs secondary pulmonary hypertension Assessment of types of shock Assessment of therapy Afterload reduction Vasopressors Beta blockers Intra-aortic balloon counter-pulsation Assessment of fluid requirement in critically ill patients Hemorrhage Sepsis Acute kidney injury Burns Management of postoperative open heart surgical patients Assessment of valvular heart disease Assessment of cardiac tamponade/constriction. No study has definitively demonstrated improved outcome in critically ill patients managed with PA catheters. Given that the PA catheter is a monitoring tool and not a therapy in and of itself, this is not entirely surprising. Justification for its continued use rests on a large body of clinical experience, disadvantages of other cardiac output monitoring systems, its ability to accurately measure pulmonary artery pressure, and the potential to use the catheter as a direct conduit for drug administration into the pulmonary artery. Procedure: The catheter is introduced through a large vein—often the internal jugular, subclavian, or femoral veins. Ease of placement for a pulmonary artery catheter, from easiest to most difficult, is: right internal jugular > left subclavian > left internal jugular > right subclavian. From this entry site, it is threaded through the right atrium of the heart, the right ventricle, and subsequently into the pulmonary artery. The passage of the catheter may be monitored by dynamic pressure readings from the catheter tip or with the aid of fluoroscopy. Procedure: The standard pulmonary artery catheter has two lumens (Swan-Ganz) and is equipped with an inflatable balloon at the tip, which facilitates its placement into the pulmonary artery through the flow of blood. The balloon, when inflated, causes the catheter to "wedge" in a small pulmonary blood vessel.
So wedged, the catheter can provide an indirect measurement of the pressure in the left atrium of the heart, showing a mean pressure in addition to the a, x, v, and y waves, which have implications for the status of the left atrium and the mitral valve. Left ventricular end diastolic pressure (LVedp) is measured using a different procedure, with a catheter that has directly crossed the aortic valve and is well positioned in the left ventricle. LVedp reflects the fluid status of the individual in addition to heart health. See also pulmonary wedge pressure and ventricular pressure. Technical developments: Thermal dilution The idea for a sail or balloon tip modification of Ronald Bradley's simple portex tubing method came from Swan's observation, from the shore at Laguna Beach, California, of sailboats on the water on a relatively calm day. Boats with conventional slot sails were still; one with a spinnaker was able to make reasonable headway. The concept of using thermodilution to measure cardiac output was originally the idea of Arnost Fronek. As a former colleague of Fronek, Ganz added the thermistor modification after Swan showed him the initial balloon design, which was fabricated by Edwards Laboratories, which had previously contracted with Swan as a consultant. Technical developments: After Swan developed the initial balloon tip, Ganz used Fronek's idea and added a small thermistor (temperature probe) about 3 cm behind the tip. 10 ml of saline (0.9% NaCl), chilled below 10 °C or at room temperature (less accurate), is injected into an opening in the right atrium. As this cooler fluid passes the tip thermistor, a very brief drop in the blood temperature is recorded. A recent variation in design is the incorporation of a heating coil on the catheter (30 cm from the tip, residing in the atrium area), which eliminates the cold fluid bolus, a major factor in human technique variation. Technical developments: By attaching both the injector site and the ventricular thermistor to a small computer, the thermodilution curve can be plotted. If details about the patient's body mass index (size), core temperature, systolic and diastolic pressures, central venous pressure (CVP, measured from the atrium by the third lumen simultaneously), and pulmonary artery pressure are input, a comprehensive flow-versus-pressure map can be calculated.
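In standard textbook form, the computation applied to the thermodilution curve is the Stewart–Hamilton equation (a general statement of the method, not the firmware of any particular cardiac output computer); here V_i is the injectate volume, T_b and T_i the blood and injectate temperatures, K a catheter- and injectate-specific constant, and the denominator the area under the recorded temperature-change curve:

```latex
\mathrm{CO} = \frac{V_i \,(T_b - T_i)\, K}{\int_0^{\infty} \Delta T_b(t)\,\mathrm{d}t}
```

The smaller the area under the curve, i.e. the faster the cold bolus is washed past the thermistor, the higher the computed cardiac output.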
Technical developments: In crude terms, this measurement compares left and right cardiac activity and calculates preload and afterload flows and pressures, which, theoretically, can be stabilized or adjusted with drugs to either constrict or dilate the vessels (to raise or lower, respectively, the pressure of blood flowing to the lungs), in order to maximize oxygen delivery to the body tissues. The ability to record results is not a guarantee of patient survivability. Technical developments: Pharmacotherapy lumina Modern catheters have multiple lumina — five or six are common — and have openings along the length to allow administration of inotropes and other drugs directly into the atrium. Drugs to achieve these changes can be delivered into the atrium via the fourth lumen, usually dedicated to medication. Common drugs used are various inotropes, norepinephrine, or even atropine. A further set of calculations can be made by measuring the arterial blood and central venous blood (from the third lumen) and inputting these figures into a spreadsheet or the cardiac output computer, if so equipped, and plotting an oxygen delivery profile. Technical developments: SvO2 measurement One further development in recent years has been the invention of a catheter with a fiber-optic probe, which is extended and lodged into the ventricle wall, providing instant readings of SvO2, the oxygen saturation of the ventricle tissues. This technique has a finite life, as the sensor becomes coated with protein, and it can irritate the ventricle at the contact area. Technical developments: Alternatives Various other techniques have largely relegated the PA catheter to history, e.g. the lithium dilution technique, the external bio-resistance monitor, pulse contour analysis, or the very simple and reliable technique of esophageal Doppler measurement of flow in the descending aorta. Complications: The procedure is not without risk, and complications can be life-threatening. It can lead to arrhythmias, pseudoaneurysm formation or rupture of the pulmonary artery, thrombosis, infection, pneumothorax, bleeding, and other problems. Controversy: The benefit of the use of this type of catheter has been controversial. Therefore, many clinicians minimize its use. Controversy: Evidence of benefit Several studies in the 1980s seemed to show a benefit of the increase in physiological information. Many reports showing benefit of the PA catheter are from anaesthetic and Intensive Care Unit (ICU) settings. In these settings cardiovascular performance was optimized on the assumption that patients would have supra-normal metabolic requirements. In 2005, a multi-center randomized controlled trial found no difference in mortality or length of stay in ICU patients who received pulmonary artery catheters, though it did find a 10% incidence of complications related to the procedure. Controversy: Evidence of harm or lack of benefit Contrary to earlier studies, there is growing evidence that the use of a PA catheter (PAC) does not necessarily lead to improved outcome. One explanation could be that nurses and physicians are insufficiently knowledgeable to adequately interpret the PA catheter measurements. Also, the benefits might be reduced by the complications from the use of the PAC. Furthermore, using information from the PAC might result in more aggressive therapy, causing the detrimental effect. Or, it could give rise to more harmful therapies (i.e. achieving supra-normal values could be associated with increased mortality). Controversy: Utility of pulmonary artery catheterization This interpretation of Adolf Fick's formulation for cardiac output by time/temperature curves is an expedient but limited and invasive model of right heart performance. It remains an exceptional method of monitoring volume overload leading to pulmonary edema in an ICU setting. Controversy: A feature of the pulmonary artery catheter that has been largely ignored in the clinical setting is its ability to monitor total body oxygen extraction by measuring the mixed venous oxygen saturation. Regardless of the value obtained by measurements of the cardiac output, the mixed venous oxygen saturation is an accurate parameter of total body blood flow and therefore cardiac output. The assumption that a low mixed venous oxygen saturation (normal = 60%, except for the coronary sinus, where it approximates 40%, reflecting the high metabolic rate of the myocardium) represents less than adequate oxygen delivery is consistent with physiological and metabolic observations. High oxygen extraction is associated with low cardiac output and decreased mixed venous oxygen saturation.
Except during hypothermia and in severe sepsis, low mixed venous oxygen saturations are an indication of inadequate hemodynamics. The ability of the pulmonary artery catheter to sample mixed venous blood is of great utility in managing low cardiac output states. Controversy: Non-invasive echocardiography and pulse-wave cardiac output monitoring are concordant with (and much safer than), if not better than, invasive methods in defining right and left heart performance. The emergence of MRSA and similar hospital-based catheter infections now clearly limits the utility of this type of invasive cardiac ICU intervention.
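The reasoning above rests on the Fick principle, a standard relation stated here for reference, rearranged for cardiac output CO, where V̇O2 is total body oxygen consumption and CaO2 and CvO2 are the arterial and mixed venous oxygen contents:

```latex
\mathrm{CO} = \frac{\dot{V}\mathrm{O}_2}{C_a\mathrm{O}_2 - C_v\mathrm{O}_2}
```

For a given oxygen consumption, a widening arteriovenous difference, and hence a falling mixed venous saturation, implies a falling cardiac output; this is why mixed venous oxygen saturation tracks total body blood flow.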
**Lisp machine** Lisp machine: Lisp machines are general-purpose computers designed to efficiently run Lisp as their main software and programming language, usually via hardware support. They are an example of a high-level language computer architecture, and in a sense, they were the first commercial single-user workstations. Despite being modest in number (perhaps 7,000 units total as of 1988), Lisp machines commercially pioneered many now-commonplace technologies, including effective garbage collection, laser printing, windowing systems, computer mice, high-resolution bit-mapped raster graphics, computer graphic rendering, and networking innovations such as Chaosnet. Several firms built and sold Lisp machines in the 1980s: Symbolics (3600, 3640, XL1200, MacIvory, and other models), Lisp Machines Incorporated (LMI Lambda), Texas Instruments (Explorer, MicroExplorer), and Xerox (Interlisp-D workstations). The operating systems were written in Lisp Machine Lisp, Interlisp (Xerox), and later partly in Common Lisp. History: Historical context Artificial intelligence (AI) computer programs of the 1960s and 1970s intrinsically required what was then considered a huge amount of computer power, as measured in processor time and memory space. The power requirements of AI research were exacerbated by the Lisp symbolic programming language at a time when commercial hardware was designed and optimized for assembly- and Fortran-like programming languages. At first, the cost of such computer hardware meant that it had to be shared among many users. As integrated circuit technology shrank the size and cost of computers in the 1960s and early 1970s, and the memory needs of AI programs began to exceed the address space of the most common research computer, the Digital Equipment Corporation (DEC) PDP-10, researchers considered a new approach: a computer designed specifically to develop and run large artificial intelligence programs, and tailored to the semantics of the Lisp language. To keep the operating system (relatively) simple, these machines would not be shared, but would be dedicated to single users. History: Initial development In 1973, Richard Greenblatt and Thomas Knight, programmers at the Massachusetts Institute of Technology (MIT) Artificial Intelligence Laboratory (AI Lab), began what would become the MIT Lisp Machine Project by building a computer hardwired to run certain basic Lisp operations, rather than run them in software, in a 24-bit tagged architecture. The machine also did incremental (or Arena) garbage collection. More specifically, since Lisp variables are typed at runtime rather than compile time, a simple addition of two variables could take five times as long on conventional hardware, due to test and branch instructions. Lisp Machines ran the tests in parallel with the more conventional single-instruction additions. If the simultaneous tests failed, then the result was discarded and recomputed; this meant in many cases a speed increase by several factors. This simultaneous checking approach was used as well in testing the bounds of arrays when referenced, and for other memory management necessities (not merely garbage collection or arrays). History: Type checking was further improved and automated when the conventional 32-bit word was lengthened to 36 bits for Symbolics 3600-model Lisp machines and eventually to 40 bits or more (usually, the excess bits not accounted for by the following were used for error-correcting codes). The first group of extra bits were used to hold type data, making the machine a tagged architecture, and the remaining bits were used to implement CDR coding (wherein the usual linked list elements are compressed to occupy roughly half the space), aiding garbage collection by reportedly an order of magnitude. A further improvement was two microcode instructions which specifically supported Lisp functions, reducing the cost of calling a function to as little as 20 clock cycles in some Symbolics implementations.
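CDR coding is easy to see in miniature. The following toy model (Python; the cell layout and constant names are invented for the sketch and are not the actual Symbolics word format) stores a list in one word per element by recording, in a two-bit code, that each cell's cdr is simply the next word in memory:

```python
# cdr-codes: a full pointer, "the next word", or "end of list".
CDR_NORMAL, CDR_NEXT, CDR_NIL = 0, 1, 2

def store_list(memory, values):
    """Append a list in compressed form: one word per element,
    since every cdr here is either the following word or NIL."""
    base = len(memory)
    for i, v in enumerate(values):
        code = CDR_NIL if i == len(values) - 1 else CDR_NEXT
        memory.append((v, code, None))  # (car, cdr-code, explicit cdr)
    return base

def cars(memory, addr):
    """Traverse a possibly CDR-coded list, yielding the cars."""
    while addr is not None:
        car, code, explicit_cdr = memory[addr]
        yield car
        if code == CDR_NEXT:
            addr += 1             # cdr is implicitly the next word
        elif code == CDR_NIL:
            addr = None           # end of list
        else:
            addr = explicit_cdr   # uncompressed cell: follow the pointer

memory = []
head = store_list(memory, [1, 2, 3])
assert list(cars(memory, head)) == [1, 2, 3]  # three words, not six
```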
History: The first machine was called the CONS machine (named after the list construction operator cons in Lisp). Often it was affectionately referred to as the Knight machine, perhaps since Knight wrote his master's thesis on the subject; it was extremely well received. It was subsequently improved into a version called CADR (a pun; in Lisp, the cadr function, which returns the second item of a list, is pronounced /ˈkeɪ.dəɹ/ or /ˈkɑ.dəɹ/, as some pronounce the word "cadre"), which was based on essentially the same architecture. About 25 of what were essentially prototype CADRs were sold within and without MIT for ~$50,000; it quickly became the favorite machine for hacking; many of the most favored software tools were quickly ported to it (e.g. Emacs was ported from ITS in 1975). It was so well received at an AI conference held at MIT in 1978 that the Defense Advanced Research Projects Agency (DARPA) began funding its development. History: Commercializing MIT Lisp machine technology In 1979, Russell Noftsker, being convinced that Lisp machines had a bright commercial future due to the strength of the Lisp language and the enabling factor of hardware acceleration, proposed to Greenblatt that they commercialize the technology. In a counter-intuitive move for an AI Lab hacker, Greenblatt acquiesced, hoping perhaps that he could recreate the informal and productive atmosphere of the Lab in a real business. These ideas and goals were considerably different from those of Noftsker. The two negotiated at length, but neither would compromise. As the proposed firm could succeed only with the full and undivided assistance of the AI Lab hackers as a group, Noftsker and Greenblatt decided that the fate of the enterprise was up to them, and so the choice should be left to the hackers. History: The ensuing discussions of the choice divided the lab into two factions. In February 1979, matters came to a head. The hackers sided with Noftsker, believing that a commercial venture fund-backed firm had a better chance of surviving and commercializing Lisp machines than Greenblatt's proposed self-sustaining start-up. Greenblatt lost the battle. History: It was at this juncture that Symbolics, Noftsker's enterprise, slowly came together. While Noftsker was paying his staff a salary, he had no building or any equipment for the hackers to work on. He bargained with Patrick Winston that, in exchange for allowing Symbolics' staff to keep working out of MIT, Symbolics would let MIT use internally and freely all the software Symbolics developed. A consultant from CDC, who was trying to put together a natural language computer application with a group of West-coast programmers, came to Greenblatt, seeking a Lisp machine for his group to work with, about eight months after the disastrous conference with Noftsker. Greenblatt had decided to start his own rival Lisp machine firm, but he had done nothing.
The consultant, Alexander Jacobson, decided that the only way Greenblatt was going to start the firm and build the Lisp machines that Jacobson desperately needed was if Jacobson pushed and otherwise helped Greenblatt launch the firm. Jacobson pulled together business plans, a board, and a partner for Greenblatt (one F. Stephen Wyle). The newly founded firm was named LISP Machine, Inc. (LMI), and was funded by CDC orders, via Jacobson. History: Around this time Symbolics (Noftsker's firm) began operating. It had been hindered by Noftsker's promise to give Greenblatt a year's head start, and by severe delays in procuring venture capital. Symbolics still had the major advantage that while 3 or 4 of the AI Lab hackers had gone to work for Greenblatt, a solid 14 other hackers had signed onto Symbolics. Two AI Lab people were not hired by either: Richard Stallman and Marvin Minsky. Stallman, however, blamed Symbolics for the decline of the hacker community that had centered around the AI lab. For two years, from 1982 to the end of 1983, Stallman worked by himself to clone the output of the Symbolics programmers, with the aim of preventing them from gaining a monopoly on the lab's computers. Regardless, after a series of internal battles, Symbolics did get off the ground in 1980/1981, selling the CADR as the LM-2, while Lisp Machines, Inc. sold it as the LMI-CADR. Symbolics did not intend to produce many LM-2s, since the 3600 family of Lisp machines was supposed to ship quickly, but the 3600s were repeatedly delayed, and Symbolics ended up producing ~100 LM-2s, each of which sold for $70,000. Both firms developed second-generation products based on the CADR: the Symbolics 3600 and the LMI-LAMBDA (of which LMI managed to sell ~200). The 3600, which shipped a year late, expanded on the CADR by widening the machine word to 36 bits, expanding the address space to 28 bits, and adding hardware to accelerate certain common functions that were implemented in microcode on the CADR. The LMI-LAMBDA, which came out a year after the 3600, in 1983, was compatible with the CADR (it could run CADR microcode), but hardware differences existed. Texas Instruments (TI) joined the fray when it licensed the LMI-LAMBDA design and produced its own variant, the TI Explorer. Some of the LMI-LAMBDAs and the TI Explorer were dual systems with both a Lisp and a Unix processor. TI also developed a 32-bit microprocessor version of its Lisp CPU for the TI Explorer. This Lisp chip also was used for the MicroExplorer – a NuBus board for the Apple Macintosh II (NuBus was initially developed at MIT for use in Lisp machines). History: Symbolics continued to develop the 3600 family and its operating system, Genera, and produced the Ivory, a VLSI implementation of the Symbolics architecture. Starting in 1987, several machines based on the Ivory processor were developed: boards for Suns and Macs, stand-alone workstations and even embedded systems (I-Machine Custom LSI, 32 bit address, Symbolics XL-400, UX-400, MacIvory II; in 1989 available platforms were Symbolics XL-1200, MacIvory III, UX-1200, Zora, NXP1000 "pizza box"). Texas Instruments shrank the Explorer into silicon as the MicroExplorer, which was offered as a card for the Apple Mac II. LMI abandoned the CADR architecture and developed its own K-Machine, but LMI went bankrupt before the machine could be brought to market.
Before its demise, LMI was working on a distributed system for the LAMBDA using Moby space. These machines had hardware support for various primitive Lisp operations (data type testing, CDR coding) and also hardware support for incremental garbage collection. They ran large Lisp programs very efficiently. The Symbolics machine was competitive against many commercial super minicomputers, but was never adapted for conventional purposes. The Symbolics Lisp Machines were also sold to some non-AI markets like computer graphics, modeling, and animation. History: The MIT-derived Lisp machines ran a Lisp dialect named Lisp Machine Lisp, descended from MIT's Maclisp. The operating systems were written from the ground up in Lisp, often using object-oriented extensions. Later, these Lisp machines also supported various versions of Common Lisp (with Flavors, New Flavors, and Common Lisp Object System (CLOS)). History: Interlisp, BBN, and Xerox Bolt, Beranek and Newman (BBN) developed its own Lisp machine, named Jericho, which ran a version of Interlisp. It was never marketed. Frustrated, the whole AI group resigned, and most were hired by Xerox. So, Xerox Palo Alto Research Center had, simultaneously with Greenblatt's own development at MIT, developed its own Lisp machines which were designed to run InterLisp (and later Common Lisp). The same hardware was used with different software also as Smalltalk machines and as the Xerox Star office system. These included the Xerox 1100, Dolphin (1979); the Xerox 1132, Dorado; the Xerox 1108, Dandelion (1981); the Xerox 1109, Dandetiger; and the Xerox 1186/6085, Daybreak. The operating system of the Xerox Lisp machines has also been ported to a virtual machine and is available for several platforms as a product named Medley. The Xerox machine was well known for its advanced development environment (InterLisp-D), the ROOMS window manager, for its early graphical user interface and for novel applications like NoteCards (one of the first hypertext applications). History: Xerox also worked on a Lisp machine based on reduced instruction set computing (RISC), using the 'Xerox Common Lisp Processor' and planned to bring it to market by 1987, which did not occur. Integrated Inference Machines In the mid-1980s, Integrated Inference Machines (IIM) built prototypes of Lisp machines named Inferstar. History: Developments of Lisp machines outside the United States In 1984–85 a UK firm, Racal-Norsk, a joint subsidiary of Racal and Norsk Data, attempted to repurpose Norsk Data's ND-500 supermini as a microcoded Lisp machine, running CADR software: the Knowledge Processing System (KPS). There were several attempts by Japanese manufacturers to enter the Lisp machine market: the Fujitsu Facom-alpha mainframe co-processor, NTT's Elis, Toshiba's AI processor (AIP) and NEC's LIME. Several university research efforts produced working prototypes, among them Kobe University's TAKITAC-7, RIKEN's FLATS, and Osaka University's EVLIS. In France, two Lisp Machine projects arose: M3L at Toulouse Paul Sabatier University and later MAIA. In Germany, Siemens designed the RISC-based Lisp co-processor COLIBRI. History: End of the Lisp machines With the onset of the AI winter and the early beginnings of the microcomputer revolution, which would sweep away the minicomputer and workstation makers, cheaper desktop PCs soon could run Lisp programs even faster than Lisp machines, with no use of special purpose hardware.
With their high-profit-margin hardware business eliminated, most Lisp machine makers had gone out of business by the early 1990s, leaving only software-based firms like Lucid Inc. or hardware makers who had switched to software and services to avoid the crash. As of January 2015, besides Xerox and TI, Symbolics is the only Lisp machine firm still operating, selling the Open Genera Lisp machine software environment and the Macsyma computer algebra system. History: Legacy Several attempts to write open-source emulators for various Lisp Machines have been made: CADR Emulation, Symbolics L Lisp Machine Emulation, the E3 Project (TI Explorer II Emulation), Meroko (TI Explorer I), and Nevermore (TI Explorer I). On 3 October 2005, MIT released the CADR Lisp Machine source code as open source. In September 2014, Alexander Burger, developer of PicoLisp, announced PilMCU, an implementation of PicoLisp in hardware. The Bitsavers' PDF Document Archive has PDF versions of the extensive documentation for the Symbolics Lisp Machines, the TI Explorer and MicroExplorer Lisp Machines and the Xerox Interlisp-D Lisp Machines. History: Applications Domains using the Lisp machines were mostly in the wide field of artificial intelligence applications, but also in computer graphics, medical image processing, and many others. The main commercial expert systems of the 80s were available: Intellicorp's Knowledge Engineering Environment (KEE), Knowledge Craft, from The Carnegie Group Inc., and ART (Automated Reasoning Tool) from Inference Corporation. Technical overview: Initially the Lisp machines were designed as personal workstations for software development in Lisp. They were used by one person and offered no multi-user mode. The machines provided a large, black and white, bitmap display, keyboard and mouse, network adapter, local hard disks, more than 1 MB RAM, serial interfaces, and a local bus for extension cards. Color graphics cards, tape drives, and laser printers were optional. Technical overview: The processor did not run Lisp directly, but was a stack machine with instructions optimized for compiled Lisp. The early Lisp machines used microcode to provide the instruction set. For several operations, type checking and dispatching was done in hardware at runtime. For example, only one addition operation could be used with various numeric types (integer, float, rational, and complex numbers). The result was a very compact compiled representation of Lisp code.
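A toy illustration of that tagged, runtime-dispatched addition (Python; the tag names and word layout are invented for the sketch and are not any machine's actual format):

```python
from fractions import Fraction

TAGS = {int: "fixnum", float: "float", Fraction: "ratio", complex: "complex"}

def make_word(value):
    """Attach a type tag to a value, as the hardware attached tag bits."""
    return (TAGS[type(value)], value)

def generic_add(word_a, word_b):
    """One generic addition entry point: inspect both tags at runtime,
    then perform the arithmetic; an unrecognized tag would trap to
    software, as on the real machines."""
    (tag_a, a), (tag_b, b) = word_a, word_b
    if tag_a not in TAGS.values() or tag_b not in TAGS.values():
        raise TypeError("trap: unknown tag")
    return make_word(a + b)  # Python's numeric tower does the promotion

assert generic_add(make_word(1), make_word(2)) == ("fixnum", 3)
assert generic_add(make_word(Fraction(1, 2)), make_word(0.5)) == ("float", 1.0)
```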
Technical overview: A typical example is a function that counts the number of elements of a list for which a predicate returns true; the disassembled machine code for such a function (for the Ivory microprocessor from Symbolics) is correspondingly compact. The operating system used virtual memory to provide a large address space. Memory management was done with garbage collection. All code shared a single address space. All data objects were stored with a tag in memory, so that the type could be determined at runtime. Multiple execution threads were supported and termed processes. All processes ran in the one address space. Technical overview: All operating system software was written in Lisp. Xerox used Interlisp. Symbolics, LMI, and TI used Lisp Machine Lisp (a descendant of MacLisp). With the appearance of Common Lisp, Common Lisp was supported on the Lisp Machines and some system software was ported to Common Lisp or later written in Common Lisp. Some later Lisp machines (like the TI MicroExplorer, the Symbolics MacIvory or the Symbolics UX400/1200) were no longer complete workstations, but boards designed to be embedded in host computers: Apple Macintosh II and Sun-3 or Sun-4. Some Lisp machines, such as the Symbolics XL1200, had extensive graphics abilities using special graphics boards. These machines were used in domains like medical image processing, 3D animation, and CAD.
**László Fejes Tóth** László Fejes Tóth: László Fejes Tóth (Hungarian: Fejes Tóth László, pronounced [ˈfɛjɛʃ ˈtoːt ˈlaːsloː]; 12 March 1915 – 17 March 2005) was a Hungarian mathematician who specialized in geometry. He proved that a lattice pattern is the most efficient way to pack centrally symmetric convex sets on the Euclidean plane (a generalization of Thue's theorem, a 2-dimensional analog of the Kepler conjecture). He also investigated the sphere packing problem. He was the first to show, in 1953, that a proof of the Kepler conjecture could be reduced to a finite case analysis and, later, that the problem might be solved using a computer. László Fejes Tóth: He was a member of the Hungarian Academy of Sciences (from 1962) and a director of the Alfréd Rényi Institute of Mathematics (1970-1983). He received both the Kossuth Prize (1957) and the State Award (1973). Together with H.S.M. Coxeter and Paul Erdős, he laid the foundations of discrete geometry. Early life and career: As described in a 1999 interview with István Hargittai, Fejes Tóth's father was a railway worker, who advanced in his career within the railway organization ultimately to earn a doctorate in law. Fejes Tóth's mother taught Hungarian and German literature in a high school. The family moved to Budapest when Fejes Tóth was five; there he attended elementary school and high school—the Széchenyi István Reálgimnázium—where his interest in mathematics began. Fejes Tóth attended Pázmány Péter University, now the Eötvös Loránd University. As a freshman, he developed a generalized solution regarding Cauchy exponential series, which he published in the proceedings of the French Academy of Sciences in 1935. He then received his doctorate at Pázmány Péter University, under the direction of Lipót Fejér. After university, he served as a soldier for two years, but received a medical exemption. In 1941 he joined the University of Kolozsvár (Cluj). It was here that he became interested in packing problems. In 1944, he returned to Budapest to teach mathematics at Árpád High School. Between 1946 and 1949 he lectured at Pázmány Péter University and starting in 1949 became a professor at the University of Veszprém (now University of Pannonia) for 15 years, where he was the primary developer of the "geometric patterns" theory "of the plane, the sphere and the surface space" and where he "had studied non grid-like structures and quasicrystals", which later became an independent discipline, as reported by János Pach. The editors of a book dedicated to Fejes Tóth described some highlights of his early work, e.g. having shown that the maximum density of a packing of repeated symmetric convex bodies occurs with a lattice pattern of packing. He also showed that, of all convex polytopes of given surface area that are equivalent to a given Platonic solid (e.g. a tetrahedron or an octahedron), a regular polytope always has the largest possible volume. He developed a technique that proved Steiner's conjecture for the cube and for the dodecahedron. By 1953, Fejes Tóth had written dozens of papers devoted to these types of fundamental issues. His distinguished academic career allowed him to travel abroad beyond the Iron Curtain to attend international conferences and teach at various universities, including those at Freiburg; Madison, Wisconsin; Ohio; and Salzburg. Early life and career: Fejes Tóth met his wife in university. She was a chemist.
They were parents of three children, two sons—one a professor of mathematics at the Alfréd Rényi Institute of Mathematics, the other a professor of physiology at Dartmouth College—and one daughter, a psychologist. He enjoyed sports, being skilled at table tennis, tennis, and gymnastics. A family photograph shows him swinging by his arms over the top of a high bar when he was around fifty. Fejes Tóth held the following positions over his career: Assistant instructor, University of Kolozsvár (Cluj) (1941–44); Teacher, Árpád High School (1944–48); Private Lecturer, Pázmány Péter University (1946–48); Professor, University of Veszprém (1949–64); Researcher, then director (in 1970), Mathematical Research Institute (Alfréd Rényi Institute of Mathematics) (1965–83). In addition to his positions in residence, he was a corresponding member of the Saxonian Academy of Sciences and Humanities, the Akademie der Wissenschaften der DDR, and the Braunschweigische Wissenschaftliche Gesellschaft. Work on regular figures: According to J. A. Todd, a reviewer of Fejes Tóth's book Regular Figures, Fejes Tóth divided the topic into two sections. One, entitled "Systematology of the Regular Figures", develops a theory of "regular and Archimedean polyhedra and of regular polytopes". Todd explains that the treatment includes: Plane Ornaments, including two-dimensional crystallographic groups; Spherical arrangements, including an enumeration of the 32 crystal classes; Hyperbolic tessellations, those discrete groups generated by two operations whose product is involutary; Polyhedra, including regular solids and convex Archimedean solids; and Regular polytopes. The other section, entitled "Genetics of the Regular Figures", covers a number of special problems, according to Todd. These problems include "packings and coverings of circles in a plane, and ... with tessellations on a sphere" and also problems "in the hyperbolic plane, and in Euclidean space of three or more dimensions." At the time, Todd opined that those problems were "a subject in which there is still much scope for research, and one which calls for considerable ingenuity in approaching its problems". Honors and recognition: Imre Bárány credited Fejes Tóth with several influential proofs in the field of discrete and convex geometry, pertaining to packings and coverings by circles, to convex sets in a plane and to packings and coverings in higher dimensions, including the first correct proof of Thue's theorem. He credits Fejes Tóth, along with Paul Erdős, as having helped to "create the school of Hungarian discrete geometry." Fejes Tóth's monograph, Lagerungen in der Ebene, auf der Kugel und im Raum, which was translated into Russian and Japanese, won him the Kossuth Prize in 1957 and the Hungarian Academy of Sciences membership in 1962. William Edge, another reviewer of Regular Figures, cites Fejes Tóth's earlier work, Lagerungen in der Ebene, auf der Kugel und im Raum, as the foundation of his second chapter in Regular Figures. He emphasized that, at the time of this work, the problem of the upper bound for the density of a packing of equal spheres was still unsolved. Honors and recognition: The approach that Fejes Tóth suggested in that work, which translates as "packing [of objects] in a plane, on a sphere and in a space", provided Thomas Hales a basis for a proof of the Kepler conjecture in 1998. The Kepler conjecture, named after the 17th-century German mathematician and astronomer Johannes Kepler, says that no arrangement of equally sized spheres filling space has a greater average density than that of the cubic close packing (face-centered cubic) and hexagonal close packing arrangements. Hales used a proof by exhaustion involving the checking of many individual cases, using complex computer calculations.
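For reference (standard values, not quoted in the article), the optimal densities in question are simple to state even though the proofs are hard; for circle packing in the plane (Thue's theorem) and sphere packing in space (the Kepler conjecture):

```latex
\delta_2 = \frac{\pi}{\sqrt{12}} \approx 0.9069, \qquad
\delta_3 = \frac{\pi}{\sqrt{18}} \approx 0.7405
```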
Fejes Tóth received the following prizes: Klug Lipót Prize (1943), Kossuth Prize (1957), State Prize (now the Széchenyi Prize) (1973), Tibor Szele Prize (1977), Gauss Bicentennial Medal (1977), and the Gold Medal of the Hungarian Academy of Sciences (2002). He received honorary degrees from the University of Salzburg (1991) and the University of Veszprém (1997). Honors and recognition: In 2008, a conference was convened in Fejes Tóth's memory in Budapest from June 30 – July 6; it celebrated the term "Intuitive Geometry", coined by Fejes Tóth to refer to the kind of geometry which is accessible to the "man in the street". According to the conference organizers, the term encompasses combinatorial geometry, the theory of packing, covering and tiling, convexity, computational geometry, rigidity theory, the geometry of numbers, crystallography and classical differential geometry. Honors and recognition: The University of Pannonia administers the László Fejes Tóth Prize (Hungarian: Fejes Tóth László-díj) to recognize "outstanding contributions and development in the field of mathematical sciences". In 2015, the year of Fejes Tóth's centennial birth anniversary, the prize was awarded to Károly Bezdek of the University of Calgary in a ceremony held on 19 June 2015 in Veszprém, Hungary.
**AFG3L2** AFG3L2: AFG3 ATPase family gene 3-like 2 (S. cerevisiae) is a protein that in humans is encoded by the AFG3L2 gene. This gene encodes a protein localized in mitochondria and closely related to paraplegin. The paraplegin gene is responsible for an autosomal recessive form of hereditary spastic paraplegia. This gene is a candidate gene for other hereditary spastic paraplegias or neurodegenerative disorders as well as spastic ataxia-neuropathy syndrome.
**Roughing filter** Roughing filter: Roughing filters provide pretreatment for turbid water or simple, low maintenance treatment when high water quality is not needed.
**Autonomic nerve** Autonomic nerve: The autonomic nerve is a small nerve which carries postganglionic sympathetic and parasympathetic neurons from the zygomaticotemporal nerve, a branch of the maxillary nerve, to the lacrimal nerve, a branch of the ophthalmic nerve. These neurons derive from the superior cervical ganglion and the pterygopalatine ganglion, respectively. They travel to the lacrimal gland via the lacrimal nerve. Parasympathetic stimulation induces lacrimation; sympathetic stimulation has the opposite effect.
**60S ribosomal protein L21** 60S ribosomal protein L21: 60S ribosomal protein L21 is a protein that in humans is encoded by the RPL21 gene. Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 60S subunit. The protein belongs to the L21E family of ribosomal proteins. It is located in the cytoplasm. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome. Clinical relevance: Mutations in the RPL21 gene result in hypotrichosis simplex of the scalp.
**Substantia ferruginea** Substantia ferruginea: The substantia ferruginea is an underlying patch of deeply pigmented nerve cells located in the floor of the superior part of the sulcus limitans. The term was coined in 1838 and 1851.
**Dealer's choice** Dealer's choice: Dealer's choice is a style of poker where each player may deal a different variant. As the deal passes clockwise around the table, each player occupying the dealer position chooses a variant which is either played just for the current hand or for an entire orbit. It is a common choice for home games, where the tone of the game is usually more recreational than competitive. It is also rarely played online, due to the complexities involved in creating the appropriate algorithms that would allow the format of poker to change with each hand or orbit. Dealer's choice: Dealer's choice games often break from the typical forms of poker through the use of wild cards and kill cards in addition to variations on betting structure. The majority of the dealer's choice poker games were derived from children's games. Depending on house rules, dealers may also call card games that are not true poker variants, such as Acey Deucey, Screw Your Neighbor, and Guts. Dealer's choice: There are two different approaches to a standard DC game: Per hand: In this type of format the player on the button (known as the dealer) selects the format of poker to play for that hand only. After the hand is over the dealer button moves to the left and the next player in that position chooses the next format of poker that will be played. Dealer's choice: Per orbit: In this type of format the player on the button (known as the dealer) selects the format of poker to play for the next orbit, which is an entire revolution of the table. So, for example, if the game was being played at a nine-handed table then an orbit would last nine hands. Poker variants: The variety of games played depends on the level of knowledge and experience at the table. In home games, players will agree in advance on the types of games that can be chosen, and the games are then selected at random by each player. In casino games, the games are normally shown on a rolodex, and players can either choose a game from the rolodex, or it is spun so that the choice of game is entirely random. Poker variants: The most popular form of poker in the world (No Limit Texas hold'em) is rarely played in Dealer's Choice. There are many reasons for this, but the most common is the fact that people play Dealer's Choice to get away from the regularity of playing No Limit Hold'em. That being said, it is a game that many beginners will choose to play if they are unfamiliar with the other mixed games.
**Linear graph grammar** Linear graph grammar: In computer science, a linear graph grammar (also a connection graph reduction system or a port graph grammar) is a class of graph grammar in which nodes have a number of ports, connected together by edges, and each edge connects exactly two ports. Interaction nets are a special subclass of linear graph grammars in which rewriting is confluent. Implementations: Bawden introduces linear graphs in the context of a compiler for a fragment of the Scheme programming language. Bawden and Mairson (1998) describe the design of a distributed implementation in which the linear graph is spread across many computing nodes and may freely migrate in order to make rewrites possible.
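As a concrete reading of that definition, here is a minimal sketch (Python; the representation and names are invented for illustration, not Bawden's): each node exposes a fixed number of ports, each edge joins exactly two ports, and linearity means every port is used by exactly one edge.

```python
from collections import Counter

nodes = {"A": 2, "B": 2, "C": 2}   # node name -> number of ports
edges = [(("A", 0), ("B", 0)),     # an edge joins exactly two ports,
         (("A", 1), ("C", 0)),     # each written as (node, port)
         (("B", 1), ("C", 1))]

def is_linear(nodes, edges):
    """Every port of every node must be the endpoint of exactly one edge."""
    used = Counter(port for edge in edges for port in edge)
    all_ports = {(n, i) for n, k in nodes.items() for i in range(k)}
    return set(used) == all_ports and all(c == 1 for c in used.values())

assert is_linear(nodes, edges)
```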
**Open Theology** Open Theology: Open Theology is a peer-reviewed open access academic journal published by De Gruyter since 2015. It covers theology and religious studies. The editor-in-chief is Charles Taliaferro (St. Olaf College). Abstracting and indexing: The journal is abstracted and indexed in EBSCO databases, Emerging Sources Citation Index, ERIH PLUS, and Scopus.
**Kronos effect** Kronos effect: The Kronos effect is a term coined by Columbia Law School professor Tim Wu in his 2010 book The Master Switch: The Rise and Fall of Information Empires. It describes how companies that establish early dominance in a period of disruptive innovation will do everything in their power to maintain their first-mover advantage. The name derives from Greek mythology, in which the Titan Kronos ate his own children in order to preempt the prophecy that one would dethrone him. In The Master Switch, Wu described the Kronos effect as critical to the history of information technology. In his book, he gives the example of radio pioneer and American business executive David Sarnoff. Sarnoff was originally what Wu described as "a radio idealist," but later in his career when heading the Radio Corporation of America (RCA), he came to view newly emergent FM technology as a threat to incumbent AM businesses including RCA's own NBC network. Sarnoff went on to pressure the U.S. Federal Communications Commission to restrict the growth of FM in a variety of ways, successfully suppressing its widespread adoption for more than thirty years, and proving, Wu wrote, that "the best antidote to the disruptive power of innovation is overregulation." The Kronos effect's role in the technological disruption cycle is to hurt innovation, efficiency, openness and decentralization. Other examples from Wu include Western Union's failed attempt to suppress the telephone as a threat to the telegram, and the co-opting of the nascent television industry by existing radio networks like the National Broadcasting Company (NBC).
**Arabinosyl nucleosides** Arabinosyl nucleosides: Arabinosyl nucleosides are derivatives of nucleosides in which, in contrast to most nucleosides, the sugar is β-D-arabinofuranose rather than β-D-ribofuranose. They are mostly used as cytostatics or virostatics. Literature: W. E. Müller: "Rational design of arabinosyl nucleosides as antitumor and antiviral agents", Jpn J Antibiot. 1977 Dec;30 Suppl:104–120; PMID 612702.
**Endo-exo isomerism** Endo-exo isomerism: In organic chemistry, endo–exo isomerism is a special type of stereoisomerism found in organic compounds with a substituent on a bridged ring system. The prefix endo is reserved for the isomer with the substituent located closest, or "syn", to the longest bridge. The prefix exo is reserved for the isomer with the substituent located farthest, or "anti", from the longest bridge. Here "longest" and "shortest" refer to the number of atoms that comprise the bridge. This type of molecular geometry is found in norbornane systems such as dicyclopentadiene. The terms endo and exo are used in a similar sense in discussions of the stereoselectivity in Diels–Alder reactions.
**Sable Systems** Sable Systems: Sable Systems develops and manufactures equipment for whole animal respirometry and offers courses in respirometry.
**Shadowgraph** Shadowgraph: Shadowgraph is an optical method that reveals non-uniformities in transparent media like air, water, or glass. It is related to, but simpler than, the schlieren and schlieren photography methods that perform a similar function. Shadowgraph is a type of flow visualisation. Shadowgraph: In principle, a difference in temperature, a different gas, or a shock wave in transparent air cannot be seen by the human eye or by cameras. However, all these disturbances refract light rays, so they can cast shadows. The plume of hot air rising from a fire, for example, can be seen by way of its shadow cast upon a nearby surface by uniform sunlight. Sunlight shadowgraph: Some aquatic predators detect their transparent prey by way of their shadows cast upon the ocean floor. It was Robert Hooke who first scientifically demonstrated the sunlight shadowgraph and Jean-Paul Marat who first used it to study fire. A modern account of shadowgraphy is given by Gary S. Settles. Applications: Applications of shadowgraphy in science and technology are very broad. It is used in aeronautical engineering to see the flow about high-speed aircraft and missiles, as well as in combustion research, ballistics, explosions, and the testing of glass, and it is ideal for identifying flow patterns. Applications: Shadowgram: According to F. J. Weinberg, the result of applying the shadowgraph technique should be known as a shadowgram. A shadowgram is not a focused image; rather, it is a mere shadow. In the shadowgram, the differences in light intensity are proportional to the second spatial derivative (Laplacian) of the refractive index field in the transparent medium under study (a relation illustrated in the sketch at the end of this entry). Once the distance from the transparent disturbance to the cast shadow becomes too large, the shadow no longer constitutes a useful representation of the disturbance that caused it. Applications: Cartoons: The shadowgraph and shadowgram have been used in animation, where they reinforce a cartoon's realism. An early use was made by Disney Studios in Three Blind Mouseketeers (1936), part of the Silly Symphonies series of animated short films. Postcards: Additionally, the term Shadowgraph was used by the English postcard publisher E.T.W. Dennis & Sons Ltd. of London and Scarborough for a series of 'Hold up to the Light' postcards in the 1950s. In these, a saucy image can be seen through what appears to be an innocent picture when a light is shone through the card.
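Since shadowgram contrast tracks the Laplacian of the refractive-index field, a shadowgram can be simulated numerically. A minimal Python sketch, in which the Gaussian "hot plume" disturbance and all numerical values are purely illustrative:

```python
import numpy as np

# Synthetic refractive-index field: ambient air with a weak Gaussian
# disturbance standing in for a plume of hot air (illustrative values).
x = np.linspace(-1.0, 1.0, 200)
X, Y = np.meshgrid(x, x)
n = 1.000293 - 1e-4 * np.exp(-(X**2 + Y**2) / 0.05)

# Shadowgram contrast is proportional to the second spatial derivative
# (Laplacian) of the refractive-index field, as the text states.
dx = x[1] - x[0]
d2_dx2 = np.gradient(np.gradient(n, dx, axis=1), dx, axis=1)
d2_dy2 = np.gradient(np.gradient(n, dx, axis=0), dx, axis=0)
shadowgram = d2_dx2 + d2_dy2  # bright and dark rings where the curvature of n peaks
```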
**Metrics (networking)** Metrics (networking): Router metrics are configuration values used by a router to make routing decisions. A metric is typically one of many fields in a routing table. Router metrics help the router choose the best route among multiple feasible routes to a destination. The route will go in the direction of the gateway with the lowest metric. A router metric is typically based on information such as path length, bandwidth, load, hop count, path cost, delay, maximum transmission unit (MTU), reliability and communications cost. Examples: A metric can include: link utilization (measured using SNMP), number of hops (hop count), speed of the path, packet loss (router congestion/conditions), network delay, path reliability, path bandwidth, throughput (queried from routers via SNMP), load, maximum transmission unit (MTU), and an administrator-configured value. In EIGRP, the metric is represented by an integer from 0 to 4,294,967,295 (the size of a 32-bit integer). In Microsoft Windows XP routing it ranges from 1 to 9999. Examples: A metric can be considered as one of the following (illustrated in the sketch below): additive - the total cost of a path is the sum of the costs of individual links along the path; concave - the total cost of a path is the minimum of the costs of individual links along the path; multiplicative - the total cost of a path is the product of the costs of individual links along the path. Service level metrics: Router metrics are metrics used by a router to make routing decisions, and are typically one of many fields in a routing table. Router metrics can contain any number of values that help the router determine the best route among multiple routes to a destination. A router metric is typically based on information like path length, bandwidth, load, hop count, path cost, delay, MTU, reliability and communications cost.
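A small Python sketch of the three composition rules; the link costs are illustrative numbers, not values from any real routing protocol:

```python
from functools import reduce
import operator

def path_cost(link_costs, rule):
    """Combine per-link costs into a path cost under the three rules above."""
    if rule == "additive":        # e.g. delay or hop count: costs sum up
        return sum(link_costs)
    if rule == "concave":         # e.g. bandwidth: the bottleneck link dominates
        return min(link_costs)
    if rule == "multiplicative":  # e.g. per-link reliability as a probability
        return reduce(operator.mul, link_costs, 1)
    raise ValueError(f"unknown rule: {rule}")

print(path_cost([10, 4, 7], "additive"))               # 21 (total delay)
print(path_cost([10, 4, 7], "concave"))                # 4  (bottleneck link)
print(path_cost([0.99, 0.95, 0.9], "multiplicative"))  # ~0.846 (path reliability)
```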
**Edding** Edding: edding AG is a German company that manufactures writing and marking tools such as felt-tip pens and permanent markers. History: edding AG was founded in 1960 in Hamburg by Carl-Wilhelm Edding and Volker Detlef Ledermann. At that time they started with start-up capital of just 500 Deutsche Mark. In 1965, they established the group brand planMASTER and started to sell products for planning and visual communication. By the end of 1970, almost 100 million edding felt and fibre-tipped pens had been sold across the world. Eight years later the Group presented its first-ever paint marker with an opaque color, which is suitable even on darker surfaces. Shares have been traded on the stock exchange since 1986. In the same year, Edding Vertrieb GmbH was founded as a distribution and logistics centre for the German market. It is based in Wunstorf near Hannover to this day. In 1992, edding founded V.D. Ledermann & Co. GmbH in Bautzen. Since 2005, the company has been headed by Per Ledermann, the son of the co-founder. In 2008, edding introduced its EcoLine. This product series includes permanent markers and board markers with at least 90% of the total plastic used being made from recycled material. Brands and Products: edding: edding sells paint markers of different line thicknesses and colors. These are mostly made in Japan. Graffiti writers frequently use "edding" markers for tagging. Legamaster: Legamaster is the visual communication division of edding AG, the leading manufacturer of high-quality marking and writing instruments. Legamaster has been actively adapting its range to the latest communication technology trends and developments for more than 50 years. Awards: 1995: For his enduring commitment to the environment, co-founder Volker Detlef Ledermann was honored with the B.A.U.M. award. 2000-2010: edding 3000 and edding 2000 permanent markers were voted best markers in the world by the Global Consumer Index. 2007: The University of St. Gallen in Switzerland lists edding AG as one of the top 100 German employers in the small and medium-sized businesses sector. 2008: Deloitte awarded edding AG the "Axia Award 2008" (in the small and medium-sized businesses category) for its excellent strategic orientation.
**Rubber ducky antenna** Rubber ducky antenna: The rubber ducky antenna (or rubber duck aerial) is an electrically short monopole antenna that functions somewhat like a base-loaded whip antenna. It consists of a springy wire in the shape of a narrow helix, sealed in a rubber or plastic jacket to protect the antenna. The rubber ducky antenna is a form of normal-mode helical antenna. Rubber ducky antenna: Electrically short antennas like the rubber ducky are used in portable handheld radio equipment at VHF and UHF frequencies in place of a quarter wavelength whip antenna, which is inconveniently long and cumbersome at these frequencies. Many years after its invention in 1958, the rubber ducky antenna became the antenna of choice for many portable radio devices, including walkie-talkies and other portable transceivers, scanners and other devices where safety and robustness take precedence over electromagnetic performance. The rubber ducky is quite flexible, making it more suitable for handheld operation, especially when worn on the belt, than earlier rigid telescoping antennas. Origin of the name: Two rumors link the naming of the antenna with the Kennedy family. In the early 1960s the rubber ducky became the antenna of choice for personal walkie-talkie transceivers used by police and security services, including the U.S. Secret Service, which guards the President of the United States. According to one rumor, the young Caroline Kennedy, daughter of President John F. Kennedy, named the flexible device when she pointed at one on an agent's transceiver and said, "rubber ducky". On the other hand, Thomas A. Clark, a senior scientist with NASA, claims to have named it after listening to one of Vaughn Meader's comedies about the Kennedy family. Origin of the name: An alternative name is based on the short stub format: the "stubby antenna". Description: Before the rubber ducky, antennas on portable radios usually consisted of quarter-wave whip antennas, rods whose length was one-quarter of the wavelength of the radio waves used. In the VHF range where they were used, these antennas were 0.6 or 0.9 m (2 or 3 feet) long, making them cumbersome. They were often made of telescoping tubes that could be retracted when not in use. To make the antenna more compact, electrically short antennas, shorter than one-quarter wavelength, began to be used. Electrically short antennas have considerable capacitive reactance, so to make them resonant at the operating frequency an inductor (loading coil) is added in series with the antenna. Antennas that have these inductors built into their bases are called base-loaded whips. Description: The rubber ducky is an electrically short quarter-wave antenna in which the inductor, instead of being in the base, is built into the antenna itself. The antenna is made of a narrow helix of wire like a spring, which functions as the needed inductor. The springy wire is flexible, making it less prone to damage than a stiff antenna. The spring antenna is further enclosed in a plastic or rubber-like covering to protect it. The technical name for this type of antenna is a normal-mode helix. Rubber ducky antennas are typically 4% to 15% of a wavelength long; that is, 16% to 60% of the length of a standard quarter-wave whip.
Effective aperture: Because the length of this antenna is significantly smaller than a wavelength, the effective aperture, if the antenna were 100% efficient, would be approximately A_e = 3λ²/(8π). Like other electrically short antennas, the rubber ducky suffers losses and thus has considerably less gain than a quarter-wave whip. However, it performs somewhat better than a base-loaded antenna of equal length. This is because the inductance is distributed throughout the antenna, allowing somewhat greater current in the antenna. Performance: Rubber ducky antennas have lower gain than a full-size quarter-wavelength antenna, reducing the range of the radio. They are typically used in short-range two-way radios where maximum range is not a requirement. Their design is a compromise between antenna gain and small size. They are difficult to characterize electrically because the current distribution along the element is not sinusoidal, as is the case with a thin linear antenna. Performance: In common with other inductively loaded short monopoles, the rubber ducky has a high Q factor and thus a narrow bandwidth. This means that as the frequency departs from the antenna's designed center frequency, its SWR increases and its efficiency falls off quickly. This type of antenna is often used over a wide frequency range, e.g. 100–500 MHz, and over this range its performance is poor, but in many mobile radio applications there is sufficient excess signal strength to overcome any deficiencies in the antenna. Design rules: If the coils of the spring are wide (a large diameter) relative to the length of the array, the resulting antenna will have narrow bandwidth. Conversely, if the coils of the spring are narrow relative to the length of the array, the resulting antenna will have its largest possible bandwidth. If the antenna is resonant and the spring has a large diameter, the impedance will be well below 50 Ω, tending towards 0 Ω with large inductors as the structure starts to resemble a series-tuned circuit with little radiation resistance. Design rules: If the antenna is resonant and the spring has a small diameter, the impedance will increase towards 70 Ω. From these rules, one can surmise that it is possible to design a rubber ducky antenna that has about 50 Ω impedance at its feed-point, but a compromise of bandwidth may be necessary. Modern rubber ducky antennas such as those used on cell phones are tapered in such a way that few performance compromises are necessary. Variations: Some rubber ducky antennas are designed quite differently from the original design. One type uses a spring only for support. The spring is electrically shorted out, so the antenna is electrically a linear-element antenna. Some other rubber ducky antennas use a spring of non-conducting material for support and comprise a collinear array antenna. Such antennas are still called rubber ducky antennas even though they function quite differently (and often better) than the original spring antenna.
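The aperture formula above is straightforward to evaluate; a short Python sketch, where the 450 MHz example frequency is an arbitrary illustrative choice for a UHF handheld:

```python
import math

def ideal_effective_aperture(freq_hz):
    """A_e = 3*lambda**2 / (8*pi): the ideal (lossless) effective aperture
    of an electrically short antenna, per the formula above."""
    c = 299_792_458.0        # speed of light, m/s
    wavelength = c / freq_hz
    return 3 * wavelength**2 / (8 * math.pi)

print(f"{ideal_effective_aperture(450e6):.4f} m^2")  # ~0.0530 m^2, an upper bound;
# a real rubber ducky captures less power because of its resistive losses.
```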
**British land speed record** British land speed record: The British land speed record is the fastest land speed achieved by a vehicle in the United Kingdom, as opposed to one on water or in the air. It is standardised as the speed over a course of fixed length, averaged over two runs in opposite directions. Historical records: On 25 September 1924, Malcolm Campbell, driving the 350 hp Sunbeam Blue Bird, set records for the Flying Mile (146.16 m.p.h.) and Flying Kilometre (146.15 m.p.h.) at Pendine Sands in Wales. On 21 July 1925, Malcolm Campbell, in the Sunbeam Blue Bird at Pendine Sands, broke the records for the Flying Mile (150.76 m.p.h.) and Flying Kilometre (150.86 m.p.h.). On 16 March 1926, Henry Segrave set the land speed record in his 4-litre Sunbeam Tiger 'Ladybird' on the sands at Southport, England, at 152.3 m.p.h. "The mean time for the flying kilometre was 14.6876 seconds equal to 245.11 kilometres per hour, or 152.308 miles per hour." The car suffered supercharger failure during the record run and did not break the mile record. Historical records: On 27 April 1926, at Pendine Sands, J. G. Parry-Thomas in the Higham-Thomas Special Babs set the Flying Mile record at 168.07 m.p.h. and the Flying Kilometre at 169.29 m.p.h. The following day, 28 April 1926, Parry-Thomas raised the Flying Mile to 170.62 m.p.h. and the Flying Kilometre to 171.01 m.p.h. On 4 February 1927, Malcolm Campbell set the World Land Speed Record at Pendine Sands, covering the Flying Kilometre at a mean average of 174.883 m.p.h. and the Flying Mile at 174.224 m.p.h. in the Napier-Campbell Blue Bird. These also established British records that were to last for many years. The achievement was overshadowed by the death of Parry-Thomas at Pendine Sands on 3 March 1927. Historical records: On 3 October 1970, Tony Densham, driving the Ford-powered "Commuter" dragster, set a record at Elvington, Yorkshire, averaging 207.6 m.p.h. over the Flying Kilometre course. This broke Campbell's record set 43 years previously. Historical records: On 27 April 1977, Robert Horne set a Flying Mile record at RAF Fairford, Gloucestershire, in the ex-Scuderia Montjuich Ferrari 512M, chassis number 1002, at a speed of 191.64 m.p.h. In October 2013, Paul Drayson set the electric land speed record, reaching an average speed of 205 mph. On 17 May 2014 (motorcycle), Sam Green set the first British electric motorcycle land speed record at Elvington Airfield in Yorkshire with the Saietta R, a British electric urban sports road motorcycle, in partnership with the Darvill Racing team. The average record speed achieved was 100.89 mph. The first record attempt saw the Saietta R achieve its top speed of 105 mph. Historical records: In May 2018 (motorcycle), Zef Eisenberg recorded the fastest motorbike run on sand, at 201.5 mph over 1.5 miles at Pendine Sands in Wales on a supercharged Suzuki Hayabusa. This was a one-way record, officiated and recorded by UKTA and the British Record club. Zef Eisenberg also holds the record for the world's fastest turbine bike and Britain's fastest-ever naked bike (no fairing) on his Rolls-Royce C20B turbine-powered motorbike, with an average speed of 225.75 mph over a mile from a standing start at Elvington Airfield on 17 May 2015.
This was recorded by UKTA and Guinness World Records. On 6 April 2019, Zef Eisenberg recorded the fastest-ever wheel-powered flying mile (a British record, not a world record) on a supercharged Suzuki Hayabusa, at 182.49 mph at Pendine Sands, exceeding the flying mile record set by Idris Elba in 2015 and that of Sir Malcolm Campbell in 1927. On 17 May 2019, Zef Eisenberg returned to Pendine with a bespoke 1200 hp Porsche 911 Turbo and, on his very first pair of runs, achieved the following records: fastest sand speed achieved by a wheel-powered vehicle, at 210.332 mph at Pendine Sands. Historical records: Fastest flying quarter (one way) wheel-powered record at 206.492 mph, a Pendine record (and MSA under-5000cc record). Fastest flying mile (one way) wheel-powered record at 196.970 mph, a Pendine record (and MSA under-5000cc record). Historical records: Fastest flying mile (two way) at 187.962 mph (the same measurement as Sir Malcolm Campbell), a Pendine record. As of 2019, Zef Eisenberg is the only person in history to have achieved over 200 mph on a bike and in a car at Pendine, and a flying mile record on a bike and in a car in Britain, and the only person other than John Surtees to hold both car and bike records. Non wheel-driven vehicles: On 25 September 1980, Thrust2, driven by Richard Noble, broke the Flying Mile record at a speed of 248.87 mph and the Flying Kilometre at 251.190 mph at RAF Greenham Common. In the summer of 1998, Colin Fallows bettered Richard Noble's outright UK record in his Vampire jet dragster at an average speed of 269 mph at Elvington, Yorkshire. Mark Newby raised this to 272 mph in Split Second in July 2000, but Colin Fallows raised the record again on the same day using Vampire, recording an average speed of 300.3 mph with a peak of 329 mph. Non wheel-driven vehicles: On 7 July 2006, Colin Fallows raised this 300.3 mph average by 1 mph, with an each-way average of 301 mph at RAF Fairford in Vampire. His peak speed was 331 mph. At the same event at RAF Fairford on 7 July 2006, Mark Newby drove his jet car Split Second to an MSA/FIA-accredited average speed of 338.74 mph with a peak of 362 mph, the fastest speed ever recorded in the UK. The car was unable to make a return run, so the one-way record remains unofficial. (Sources: UK Speed Record Club, FAST Facts, RACMSA.) On 20 September 2006, Top Gear presenter Richard Hammond reached a peak speed of 314 mph (505 km/h) whilst being taught to drive the Vampire jet car. It was not a record attempt, and no official MSA- or FIA-accredited timekeeping was in place; the peak speed of 314 mph was recorded by the BBC's own on-board data management equipment.
**Octagon (video game)** Octagon (video game): Octagon (fully titled Octagon – A Minimal Arcade Game with Maximum Challenge) is a minimalist twitch-reflex video game by Lukas Korba. Gameplay: Octagon tasks the player with steering an octagon through an octagonal world without falling off. There is an infinite number of levels the player can play, with the goal of completing each level without falling off; the levels increase in complexity as the player completes them. The game has three controls, which the player must use to complete a level: tapping or swiping left or right to move left or right, and swiping upwards to clear vertical gaps. There is also an endless mode, in which the player must keep the octagon going for as long as possible before falling off. Reception: Octagon received mixed reviews. Apple'N'Apps gave the game 3.0 out of 5, praising the game's "great design work," "extreme challenge from the outset," and "intuitive controls," while criticizing the lack of variety, the fact that the "controls can cause mix-ups," and the "intrinsic repetition to complete levels." Sequel: A sequel was released on May 6, 2020.
**HLA-B15** HLA-B15: HLA-B15 (B15) is an HLA-B serotype. The serotype identifies the B*15 gene-allele protein products of HLA-B. B15 is a broad antigen that can be subdivided into several split antigens, which are often used in characterization: B62, B63, B70, B71, B72, B75, B76, and B77. B*15 is the largest allele grouping for any known human autosomal locus: as of August 2008, more than 150 alleles and ~140 amino acid sequence variants of their gene products had been identified. Some of these alleles are discussed below. Other alleles, such as B*46, evolved from B*15. One reason for the diversity of this group is that B15 is among a group of alleles enriched in the original humans that left Africa and dispersed across East Asia and Australia. As people traveled east, the frequency of many alleles dropped, or the alleles disappeared from the migrants. B*15, however, persisted, expanded, and diversified; the wide range and complex environment selected for new alleles and promoted their expansion. B*46, for example, is not found in Africa, and appears to have evolved and spread in East Asia, to several hundred million bearers worldwide. HLA-B15: HLA-B15 allele *15:02 is associated with the severe skin conditions Stevens–Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN) caused by carbamazepine drug sensitivity in East Asians. Carriers of the HLA-B15 allele *15:01 (B62) are much more likely to be asymptomatic when infected with SARS-CoV-2 (the virus that causes COVID-19).
**Jordan and Einstein frames** Jordan and Einstein frames: The Lagrangian in scalar-tensor theory can be expressed in the Jordan frame, in which the scalar field or some function of it multiplies the Ricci scalar, or in the Einstein frame, in which the Ricci scalar is not multiplied by the scalar field. There exist various transformations between these frames. Although these frames have been around for some time, there has been debate about whether either, both, or neither frame is a 'physical' frame which can be compared to observations and experiment. Christopher Hill and Graham Ross have shown that there exist "gravitational contact terms" in the Jordan frame, whereby the action is modified by graviton exchange. This modification leads back to the Einstein frame as the effective theory. Contact interactions arise in Feynman diagrams when a vertex contains a power of the exchanged momentum, q², which then cancels against the Feynman propagator, 1/q², leading to a point-like interaction. This must be included as part of the effective action of the theory. When the contact term is included, results for amplitudes in the Jordan frame will be equivalent to those in the Einstein frame, and results of physical calculations in the Jordan frame that omit the contact terms will generally be incorrect. This implies that the Jordan frame action is misleading, and the Einstein frame is uniquely correct for fully representing the physics. Equations and physical interpretation: If we perform the Weyl rescaling g̃_μν = Φ^(−2/(d−2)) g_μν, then the metric determinant and the Ricci scalar transform as √(−g̃) = Φ^(−d/(d−2)) √(−g) and R̃ = Φ^(2/(d−2)) [ R + (2(d−1)/(d−2)) (□Φ/Φ) − (3(d−1)/(d−2)) (∇Φ/Φ)² ]. As an example, consider the transformation of a simple scalar-tensor action with an arbitrary set of matter fields ψ_m coupled minimally to the curved background: S = ∫ dᵈx √(−g̃) Φ R̃ + S_m[g̃_μν, ψ_m] = ∫ dᵈx √(−g) [ R − ((d−1)/(d−2)) (∇(ln Φ))² ] + S_m[Φ^(−2/(d−2)) g_μν, ψ_m]. The tilde fields then correspond to quantities in the Jordan frame and the fields without the tilde correspond to fields in the Einstein frame. Note that the matter action S_m changes only in the rescaling of the metric. Equations and physical interpretation: The Jordan and Einstein frames are constructed to render certain parts of physical equations simpler, which also gives the frames and the fields appearing in them particular physical interpretations. For instance, in the Einstein frame, the equations for the gravitational field will be of the form R_μν − (1/2) R g_μν = (other fields); i.e., they can be interpreted as the usual Einstein equations with particular sources on the right-hand side. Similarly, in the Newtonian limit one would recover the Poisson equation for the Newtonian potential with separate source terms. However, by transforming to the Einstein frame the matter fields are now coupled not only to the background but also to the field Φ, which now acts as an effective potential. Specifically, an isolated test particle will experience a universal four-acceleration a^μ = −(1/(d−2)) (Φ_,ν/Φ) (g^μν + u^μ u^ν), where u^μ is the particle four-velocity; i.e., no particle will be in free fall in the Einstein frame. Equations and physical interpretation: On the other hand, in the Jordan frame, all the matter fields ψ_m are coupled minimally to g̃_μν and isolated test particles will move on geodesics with respect to the metric g̃_μν. This means that if we were to reconstruct the Riemann curvature tensor by measurements of geodesic deviation, we would in fact obtain the curvature tensor in the Jordan frame.
When, on the other hand, we deduce the presence of matter sources from gravitational lensing using the usual relativistic theory, we obtain the distribution of the matter sources in the sense of the Einstein frame. Models: Jordan frame gravity can be used to calculate type IV singular bouncing cosmological evolution, and to derive the type IV singularity.
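The power of Φ appearing in the determinant relation √(−g̃) = Φ^(−d/(d−2)) √(−g) can be spot-checked symbolically. A minimal sketch using sympy with d = 4, where the Minkowski test metric is an arbitrary illustrative choice:

```python
import sympy as sp

Phi = sp.Symbol("Phi", positive=True)
d = 4  # spot check in four dimensions

# Weyl rescaling g~_mn = Phi**(-2/(d-2)) g_mn applied to a diagonal test metric.
g = sp.diag(-1, 1, 1, 1)
g_tilde = Phi ** sp.Rational(-2, d - 2) * g

# Since every one of the d diagonal entries picks up the same factor,
# det g~ = Phi**(-2d/(d-2)) det g, hence sqrt(-g~) = Phi**(-d/(d-2)) sqrt(-g).
lhs = sp.sqrt(-g_tilde.det())
rhs = Phi ** sp.Rational(-d, d - 2) * sp.sqrt(-g.det())
print(sp.simplify(lhs - rhs) == 0)  # True
```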
**Normorphine** Normorphine: Normorphine is an opiate analogue, the N-demethylated derivative of morphine, that was first described in the 1950s when a large group of N-substituted morphine analogues were characterized for activity. The compound has relatively little opioid activity in its own right, but is a useful intermediate which can be used to produce both opioid antagonists such as nalorphine and potent opioid agonists such as N-phenethylnormorphine. It also occurs as a minor metabolite of morphine, with its formation from morphine catalyzed by the liver enzymes CYP3A4 and CYP2C8. Normorphine is a controlled substance listed under the Single Convention on Narcotic Drugs 1961 and the laws in various states implementing it; for example, in the United States it is a Schedule I narcotic controlled substance, with an ACSCN of 9313 and an annual aggregate manufacturing quota of 18 grams in 2014, unchanged from the prior year. The salts in use are the free base hexahydrate (free-base conversion ratio 0.715) and the hydrochloride (0.833).
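The quoted free-base conversion ratios express salt weights as anhydrous base equivalents, which is how such quantities are typically accounted; a small illustrative calculation, where the 10 g sample quantity is hypothetical:

```python
# Multiply a salt weight by its free-base conversion ratio to get the
# anhydrous normorphine base equivalent. Ratios are from the text above;
# the 10 g quantity is purely illustrative.
ratios = {"hexahydrate": 0.715, "hydrochloride": 0.833}

for salt, ratio in ratios.items():
    grams_of_salt = 10.0
    print(f"10 g of the {salt} ~ {grams_of_salt * ratio:.2f} g normorphine base")
```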
**Commonly misspelled words in French** Commonly misspelled words in French: Misspellings in French are a subset of errors in French orthography. Commonly misspelled words in French: Many errors are caused by homophones; for example, French contains hundreds of words ending in IPA [ɛn] written variously as -ène, -en, -enne, or -aine. Many French words and verb endings end with silent consonants (lettres muettes), also creating homophones that are spelled differently but pronounced identically: tu parles, il parle, ils parlent; or the confusion of je parlais with je parlai. Homophones also occur with accents: il eut dit compared with il eût dit. Commonly misspelled words in French: Further problems are caused by confusion with English, such as connection (incorrect) and connexion (correct). Misspellings of French words outside the French language occur regularly and account for part of the etymology of some modern loanwords in English, such as English "caddie."
**SourceForge Installer** SourceForge Installer: The SourceForge Installer is a discontinued piece of software that was included in some downloads from SourceForge. It was often bundled with adware and crapware designed to trick people into installing unwanted software. SourceForge has been criticized for its use of this installer. Opinions of this feature vary, with some complaining that users were not fully aware of what they were getting or able to trust the downloaded content, whereas others saw it as a reasonably harmless option that kept individual projects and users in control. History: In July 2013, SourceForge started allowing project owners to sign up for DevShare. When a project owner signed up, the closed-source installer would be placed in the project. While most people opposed its use, many projects began using DevShare because part of the ad revenue was given to the owner. The most notable project to use this was FileZilla, an open-source FTP program. History: As of early 2016, DevShare and the SourceForge Installer have been discontinued, after BizX bought SourceForge.
**RNAi-Based Identification System and interference of Specific Cancer Cells** RNAi-Based Identification System and interference of Specific Cancer Cells: A "classifier" is created to categorize cells by identifying specific characteristics of cervical cancer. These characteristics are consistent with HeLa cells, which serve as the target cell line for cell death. Upon identifying these cells, the classifier releases specific proteins within the HeLa cell that trigger apoptosis without killing or endangering neighboring, healthy cells. The defining characteristics of these classifiers are elements whose levels within the cells create markers that can be measured. High markers and low markers are established, and a "classifier molecule" is created to insert into prospective cells, which can induce apoptosis only when cells exhibit the threshold levels of high or low markers (this AND-gate logic is sketched at the end of this entry). These classifiers use a small interfering RNA, which targets the repressor and activator in the Lac operon. This holds potential for therapeutic use, provided that an efficient delivery system can be established for in vivo DNA. In vitro applications are possible, provided the classifier molecule can be safely integrated into cultured cells. Cancer cell identification and classification: Cancer cells can be classified by identifying microRNA expression. These miRNA expression levels can be used as a diagnostic and prognostic tool in tumor and cancer classification, although current tumor classification methods do not yet incorporate this experimental knowledge. As that knowledge makes evident, different types of cancer can be associated with the irregular expression of particular miRNAs. Other parameters considered critical are the location of the miRNAs on the strand, cancer-associated genomic regions, epigenetic alteration of miRNA expression, and abnormalities in the processing of target genes and proteins. Recent evidence shows that miRNAs play an important role in human malignancies and could act as tumor suppressors or oncogenes. RNAi for apoptosis: Recently, it has been discovered that small RNA can trigger specific gene silencing in human cells. The RNAi reaction enables the complete elimination of a specific protein, which can potentially allow researchers to target pivotal structures within a cell to eliminate the cell altogether. RNAi silencing can also strongly inhibit the proliferation of cells with genetic mutations that encourage oncogenic activation. RNAi delivery methods: Since its discovery, RNAi knowledge has grown substantially. Although quite useful, in vivo delivery of RNAi to tissues remains a challenge, especially for deep tissues within the body. RNAi delivery is only straightforward for surface tissues, such as the eye and respiratory tract. In these instances, siRNA has been used in direct contact with the tissue for transport, and the resulting RNAi has been extremely successful in silencing target genes. When delivering siRNA to deep tissue layers within the body, measures need to be taken to protect the siRNA from nucleases, but targeting specific areas becomes the main difficulty. This difficulty has been combated with high dosage levels of siRNA to ensure the tissues have been reached; however, in these cases, hepatotoxicity was reported.
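The high/low-marker decision rule described above amounts to an AND gate over measured expression levels. A minimal Python sketch of that logic, in which the marker names and thresholds are hypothetical placeholders rather than the actual profile used in any published classifier:

```python
# Illustrative sketch of the high/low-marker classifier logic.
# Marker names and thresholds are hypothetical placeholders.

HIGH_MARKERS = {"miR-A": 50.0, "miR-B": 40.0}  # must exceed their thresholds
LOW_MARKERS = {"miR-C": 5.0, "miR-D": 5.0}     # must stay below their thresholds

def classify(expression):
    """Return True (trigger apoptosis) only if every high marker is high
    AND every low marker is low, mirroring an AND-gate classifier."""
    highs_ok = all(expression.get(m, 0.0) > t for m, t in HIGH_MARKERS.items())
    lows_ok = all(expression.get(m, 0.0) < t for m, t in LOW_MARKERS.items())
    return highs_ok and lows_ok

hela_like = {"miR-A": 80, "miR-B": 60, "miR-C": 1, "miR-D": 2}
healthy = {"miR-A": 10, "miR-B": 60, "miR-C": 20, "miR-D": 30}
print(classify(hela_like), classify(healthy))  # True False
```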
**Castration anxiety** Castration anxiety: Castration anxiety is an idea in psychoanalytic theory. Castration anxiety: This is the fear of emasculation in both the literal and metaphorical sense. Castration anxiety is an overwhelming fear of damage to, or loss of, the penis, a derivative of Sigmund Freud's theory of the castration complex, one of his earliest psychoanalytic theories. Although Freud regarded castration anxiety as a universal human experience, few empirical studies have been conducted on the topic. Castration anxiety has been theorized to begin between the ages of 3 and 5, during what Freud called the phallic stage of development. In Freud's theory it is the child's perception of anatomical difference (the possession of a penis) that induces castration anxiety, as a result of an assumed paternal threat made in response to the child's sexual activities. Although typically associated with males, castration anxiety is theorized to be experienced in differing ways by both the male and female sexes. Literal: Castration anxiety is the conscious or unconscious fear of losing all or part of the sex organs, or the function of such. In the literal sense, castration anxiety refers to the fear of having one's genitalia disfigured or removed as punishment for a child's sexual desires. In Freudian psychoanalysis, castration anxiety (Kastrationsangst) refers to an unconscious fear of penile loss originating during the phallic stage of psychosexual development and lasting a lifetime. According to Freud, when the infantile male becomes aware of differences between male and female genitalia, he assumes that the female's penis has been removed and becomes anxious that his penis will be cut off by his rival, the father figure, as punishment for desiring the mother figure. In 19th-century Europe, it was not unheard of for parents to threaten their misbehaving sons with castration or otherwise threaten their genitals. This theme is explored in the story Tupik by French writer Michel Tournier in his collection of stories entitled Le Coq de Bruyère (1978) and is a phenomenon Freud documents several times. In this same period, Kellogg and others in America and other English-speaking countries offered Victorian parents circumcision and, in grave instances, castration of their boys and girls as a terminal cure and punishment for a wide variety of perceived misbehaviours (such as masturbation), a practice that became widely used at the time. Metaphorical: Castration anxiety can also refer to being castrated symbolically. In the metaphorical sense, castration anxiety refers to the idea of feeling or being insignificant; there is a need to keep one's self from being dominated, whether socially or in a relationship. Symbolic castration anxiety refers to the fear of being degraded, dominated or made insignificant, usually an irrational fear in which the person will go to extreme lengths to save their pride and/or perceives trivial things as degrading, making their anxiety restrictive and sometimes damaging. This can also tie in with literal castration anxiety, as a fear of losing virility or sexual dominance. Relation to power and control: According to Freudian psychoanalysis, castration anxiety can be completely overwhelming to the individual, often breaching other aspects of his or her life. A link has been found between castration anxiety and fear of death. Although differing degrees of anxiety are common, young men who felt the most threatened in their youth tended to show chronic anxiety.
Because the consequences are extreme, the fear can evolve from potential disfigurement to life-threatening situations. Essentially, castration anxiety can lead to a fear of death and a feeling of loss of control over one's life. To feel so powerless can be detrimental to an individual's mental health. One of the most concerning problems with all of this is that the individual does not recognize that their sexual desires are the cause of the emotional distress. Because of unconscious thoughts, as theorized in the ideas of psychoanalysis, the anxiety is brought to the surface, where it is experienced symbolically. This leads to the fear associated with bodily injury in castration anxiety, which can then lead to the fear of dying or being killed. Relation to circumcision: Freud had a strongly critical view of circumcision, believing it to be a 'substitute for castration' and an 'expression of submission to the father's will'. This view was shared by others in the psychoanalytic community, such as Wilhelm Reich, Hermann Nunberg, and Jacques Lacan, who stated that there is "nothing less castrating than circumcision!" Themes central to castration anxiety that feature prominently in circumcision include pain, fear, loss of control (in the child's forced restraint and in the psychological effects of the event, which may include sensation seeking and lower emotional stability), and the perception that the event is a form of punishment. The ritual's origination as a result of Oedipal conflict was tested by examining 111 societies, finding that circumcision is likely to be found in societies in which the son sleeps in the mother's bed during the nursing period in bodily contact with her, and/or the father sleeps in a different hut. A study of the procedure performed without anaesthesia on children in Turkey found that each child looked at his penis immediately after the circumcision 'as if to make sure that all was not cut off'. Another study of 60 males subject to communal circumcision ceremonies in Turkey found that 21.5% of them "remembered that they were specifically afraid that their penis might or would be cut off entirely," while 'specific fears of castration' occurred in 28% of the village-reared men. Fear of the authoritarian father increased considerably in 12 children. Psychoanalytic interpretation of Biblical stories shows themes of castration anxiety present in Judaic mythology concerning circumcision. Relation to circumcision: The figure of Lilith, described as "a hot fiery female who first cohabited with man," presents as an archetypal representation of the first mother of man and of primordial sexual temptation. Male children were said to be at risk of Lilith's wrath for eight days after birth. Deceiving Lilith into believing a newborn baby was a girl, by letting the boy's hair grow and even dressing him in girl's clothes, was said to be the most effective means of avoiding her harm, until the boy was ritually circumcised on the eighth day of life as part of a covenant with God. The figure of Judith, depicted both as "a type of the praying Virgin... who tramples Satan and harrows Hell" and as a "seducer-assassin," archetypically reflects the dichotomous themes presented by castration anxiety and circumcision: sexual purity, chastity, violence, and eroticism. Judith defeats the Assyrian general Holofernes by cutting his head off, decapitation being an act that Freud equated with castration in his essay "Medusa's Head".
Counterpart in females: It is implied in Freudian psychology that both girls and boys pass through the same developmental stages: the oral, anal, and phallic stages. Freud, however, believed that the results may differ because the anatomy of the sexes differs. Counterpart in females: The counterpart of castration anxiety for females is penis envy. Penis envy, and the concept of such, was first introduced by Freud in an article published in 1908 titled "On the Sexual Theories of Children". The idea presumed that females/girls envied those (mostly their fathers) with a penis because theirs, supposedly, had been taken from them; essentially, they were already "castrated". Freud entertained that the envy they experienced was their unconscious wish to be like a boy and to have a penis. Penis envy, in Freudian psychology, refers to the reaction of the female/young girl during development when she realizes that she does not possess a penis. According to Freud, this was a major development in the identity (gender and sexual) of the girl. Contemporary culture assumes that penis envy is the woman wishing she were in fact a man. This is unrelated to the notion of "small penis syndrome", which is the assumption by the man that his penis is too small. According to Freud's beliefs, girls developed a weaker superego, which he considered a consequence of penis envy. Counterpart in females: Among his many suggestions, Freud believed that during the phallic stage, young girls distance themselves from their mothers and instead envy their fathers, showing this envy through love and affection towards their fathers. According to Cohler and Galatzer, Freud believed that all of the concepts related to penis envy were among his greatest accomplishments. However, these are also among his most criticized theories, most famously by Karen Horney. Empirical testing: Sarnoff et al. surmised that men differ in their degree of castration anxiety according to the castration threat they experienced in childhood. Therefore, these men may be expected to respond in different ways to the different degrees of castration anxiety that they experience from the same sexually arousing stimulus. The experimenters aimed to demonstrate that, in the absence of a particular stimulus, men who were severely threatened with castration as children might experience long-lasting anxiety. The researchers claimed that this anxiety stems from repressed desires for sexual contact with women, desires that are trying to reach the men's consciousness. The experimenters deduced that unconscious anxiety about being castrated might come from the fear the consciousness has of bodily injury. The researchers concluded that individuals who are in excellent health and who have never experienced any serious accident or illness may be obsessed by gruesome and relentless fears of dying or of being killed. In another article related to castration anxiety, Hall et al. investigated whether sex differences would be found in the manifestations of castration anxiety in their subjects' dreams. The researchers hypothesized that male dreamers would report more dreams expressing fear of castration than dreams involving castration wish and penis envy. They further hypothesized that women would show the reverse effect; that is, female dreamers would report more dreams containing castration wish and penis envy than dreams involving castration anxiety.
The results demonstrated that many more women than men dreamt about babies and weddings and that men had more dreams about castration anxiety than women.
**Acetiromate** Acetiromate: Acetiromate is an antilipidemic drug which is used to treat hyperlipidemia. It is also known as Adecol, TBF 43, or acetyltriiodothyronine formic acid.
**Chin Na** Chin Na: Qinna (Chinese: 擒拿; pinyin: qínná; Wade–Giles: ch'in na) is the set of joint lock techniques used in the Chinese martial arts to control or lock an opponent's joints or muscles/tendons so they cannot move, thus neutralizing the opponent's fighting ability. Qinna Shu (Chinese: 術; pinyin: shù, meaning "technique") literally translates as "lock-catch technique". Some schools simply use the word na ("hold") to describe the techniques. Qinna features both standing and ground-based grappling techniques. Some Chinese martial arts instructors focus more on their qinna techniques than others; this is one of the many reasons why the qinna of one school may differ from that of another. All martial arts contain qinna techniques to some degree. The southern Chinese martial arts have more developed qinna techniques than northern Chinese martial systems; the southern martial arts rely much more heavily on hand techniques, which place the practitioner at closer range to the opponent. There are over 700 traditional qinna techniques found across the martial arts. In the Non-Temple White Crane style alone there are 150-200 qinna techniques. Along with Fujian White Crane, styles such as Northern Eagle Claw (Ying Jow Pai) and Tiger Claw (Fu Jow Pai) have qinna as their martial focus and tend to rely on these advanced techniques. Chin Na: There is no universally accepted systemized form of qinna. Instead, each school varies depending on the instructor's training and/or personal preference of focus. Chin-Na is the facet of Kung-Fu which involves grappling, joint locks, pressure points, take-downs, and throws for immobilizing an attacker. These techniques are derived from animal attributes, such as the praying mantis hook or the eagle claw. Chin Na: Today, the recognition that grappling is as important as striking has caused some Kung Fu systems to focus on their Chin Na techniques, even expanding their systems by incorporating or developing new ones. This is one reason why the Chin Na of one school differs from that of another. There are over 700 traditional techniques and countless more being developed or adopted, depending on the specific school. Chin Na: Qinna and the development of Jujutsu: Qinna is also credited with contributing to the development of Jujutsu. It is stated in numerous Japanese and Chinese documents that Chen Yuan-Yun (Chin Gempin or Chen Yuan-Pin; 1587-1674) was the first to introduce Chinese ju techniques (柔道 Rou Dao) into Japan during the early-to-mid 1600s. One such Japanese document is "Collections of Ancestor's Conversations Volume 2." "Honcho Bugei Shoden" (also referred to as "Kanjo Shoden"), written by Hinatsu Shigetaka in 1716, states the following: Recently, Chin Gempin came to Japan and stayed at the Kokusa monastery, where he met three ronin: Fukuno Hichiroemon, Isogai Jirozaemon, and Miura Yojiemon. Chin Gempin told them that in China there is an art of seizing a man. He said that he had seen it practiced and gave a brief example of the art. Chin Gempin also stated that he had not learned all of the principles of the art. Upon hearing this, the samurai further researched this art. Once they had achieved a degree of skill, the samurai founded the Kito-ryu school of Jujutsu. Chin Na: This same story is repeated in various Japanese documents, including Honcho Seji Danki, Bujutsu Ryusoroku, Roi Shintoryo Hisho, Kitoryu Kempohi, Kitoryu Toka Mondo, Owan Meisho Zue, and Zoin Kinsei Kijindenas.
Rickson Gracie also credits the Chinese with bringing the techniques of Jiu Jitsu into Japan, as stated on his website when explaining the origin of Brazilian Jiu Jitsu. Qinna Rou Dao can also be found in Shuai Jiao. Judo's development was influenced by Kito-ryu, and similarities between Judo and Shuai Jiao are apparent through the common link with Qinna Rou Dao. The process of both of these arts becoming sports further influenced the similarities within their softer techniques.
**Water jet cutter** Water jet cutter: A water jet cutter, also known as a water jet or waterjet, is an industrial tool capable of cutting a wide variety of materials using an extremely high-pressure jet of water, or a mixture of water and an abrasive substance. The term abrasive jet refers specifically to the use of a mixture of water and an abrasive to cut hard materials such as metal, stone or glass, while the terms pure waterjet and water-only cutting refer to waterjet cutting without the use of added abrasives, often used for softer materials such as wood or rubber. Waterjet cutting is often used during the fabrication of machine parts. It is the preferred method when the materials being cut are sensitive to the high temperatures generated by other methods; examples of such materials include plastic and aluminium. Waterjet cutting is used in various industries, including mining and aerospace, for cutting, shaping, and reaming. History: Waterjet: While using high-pressure water for erosion dates back as far as the mid-1800s with hydraulic mining, it was not until the 1930s that narrow jets of water started to appear as an industrial cutting device. In 1933, the Paper Patents Company in Wisconsin developed a paper metering, cutting, and reeling machine that used a diagonally moving waterjet nozzle to cut a horizontally moving sheet of continuous paper. These early applications were at low pressure and restricted to soft materials like paper. History: Waterjet technology evolved in the post-war era as researchers around the world searched for new methods of efficient cutting. In 1956, Carl Johnson of Durox International in Luxembourg developed a method for cutting plastic shapes using a thin-stream high-pressure water jet, but those materials, like paper, were soft. In 1958, Billie Schwacha of North American Aviation developed a system using ultra-high-pressure liquid to cut hard materials. This system used a 100,000 psi (690 MPa) pump to deliver a hypersonic liquid jet that could cut high-strength alloys such as PH15-7-MO stainless steel. Used to cut honeycomb laminate for the Mach 3 North American XB-70 Valkyrie, this cutting method resulted in delamination at high speed, requiring changes to the manufacturing process. While not effective for the XB-70 project, the concept was valid, and further research continued to evolve waterjet cutting. In 1962, Philip Rice of Union Carbide explored using a pulsing waterjet at up to 50,000 psi (340 MPa) to cut metals, stone, and other materials. Research by S.J. Leach and G.L. Walker in the mid-1960s expanded on traditional coal waterjet cutting to determine the ideal nozzle shape for high-pressure waterjet cutting of stone, and Norman Franz in the late 1960s focused on waterjet cutting of soft materials by dissolving long-chain polymers in the water to improve the cohesiveness of the jet stream. In the early 1970s, the desire to improve the durability of the waterjet nozzle led Ray Chadwick, Michael Kurko, and Joseph Corriveau of the Bendix Corporation to come up with the idea of using corundum crystal to form a waterjet orifice, while Norman Franz expanded on this and created a waterjet nozzle with an orifice as small as 0.002 inches (0.051 mm) that operated at pressures up to 70,000 psi (480 MPa).
John Olsen, along with George Hurlburt and Louis Kapcsandy at Flow Research (later Flow Industries), further improved the commercial potential of the water jet by showing that treating the water beforehand could increase the operational life of the nozzle. History: High pressure: High-pressure vessels and pumps became affordable and reliable with the advent of steam power. By the mid-1800s, steam locomotives were common and the first efficient steam-driven fire engine was operational. By the turn of the century, high-pressure reliability improved, with locomotive research leading to a sixfold increase in boiler pressure, some reaching 1,600 psi (11 MPa). Most high-pressure pumps at this time, though, operated around 500–800 psi (3.4–5.5 MPa). History: High-pressure systems were further shaped by the aviation, automotive, and oil industries. Aircraft manufacturers such as Boeing developed seals for hydraulically boosted control systems in the 1940s, while automotive designers followed similar research for hydraulic suspension systems. Higher pressures in hydraulic systems in the oil industry also led to the development of advanced seals and packing to prevent leaks. These advances in seal technology, plus the rise of plastics in the post-war years, led to the development of the first reliable high-pressure pump. The invention of Marlex by Robert Banks and John Paul Hogan of the Phillips Petroleum Company required a catalyst to be injected into the polyethylene. McCartney Manufacturing Company in Baxter Springs, Kansas, began manufacturing these high-pressure pumps in 1960 for the polyethylene industry. Flow Industries in Kent, Washington, set the groundwork for commercial viability of waterjets with John Olsen's development of the high-pressure fluid intensifier in 1973, a design that was further refined in 1976. Flow Industries then combined the high-pressure pump research with their waterjet nozzle research and brought waterjet cutting into the manufacturing world. History: Abrasive waterjet: While cutting with water is possible for soft materials, adding an abrasive turned the water jet into a modern machining tool for all materials. This began in 1935, when the idea of adding an abrasive to the water stream was developed by Elmo Smith for liquid abrasive blasting. Smith's design was further refined by Leslie Tirrell of the Hydroblast Corporation in 1937, resulting in a nozzle design that created a mix of high-pressure water and abrasive for the purpose of wet blasting. The first publications on modern abrasive waterjet (AWJ) cutting were published by Mohamed Hashish in the 1982 BHR proceedings, showing for the first time that waterjets with relatively small amounts of abrasives are capable of cutting hard materials such as steel and concrete. The March 1984 issue of Mechanical Engineering magazine showed more details and materials cut with AWJ, such as titanium, aluminium, glass, and stone. Mohamed Hashish was awarded a patent on forming AWJ in 1987. Hashish, who also coined the term abrasive waterjet, and his team continued to develop and improve the AWJ technology and its hardware for many applications. A critical development was creating a durable mixing tube that could withstand the power of the high-pressure AWJ, and it was Boride Products' (now Kennametal) development of their ROCTEC line of ceramic tungsten carbide composite tubes that significantly increased the operational life of the AWJ nozzle.
Current work on AWJ nozzles focuses on micro abrasive waterjets, so that cutting with jets smaller than 0.015 inches (0.38 mm) in diameter can be commercialized. History: Working with Ingersoll-Rand Waterjet Systems, Michael Dixon implemented the first practical production means of cutting titanium sheets: an abrasive waterjet system very similar to those in widespread use today. By January 1989, that system was being run 24 hours a day producing titanium parts for the B-1B, largely at Rockwell's North American Aviation facility in Newark, Ohio. History: Today, there are two different types of abrasive waterjets: Abrasive Water Suspension Jet (AWSJ) cutting: The Abrasive Water Suspension Jet (AWSJ), often called a "slurry jet" or "water abrasive suspension (WAS) jet", is a specific type of abrasive water jet used for waterjet cutting. In contrast to the abrasive water injector jet (AWIJ), the abrasive water suspension jet (AWSJ) is characterised by the fact that the mixing of abrasive and water takes place before the nozzle. This has the effect that, in contrast to the AWIJ, the jet consists of only two components: the water and the abrasive. History: Since there are only two components (water and abrasive) in the AWSJ, the water accelerates the abrasive grains with significantly increased efficiency compared to the AWIJ. The abrasive grains reach higher speeds with the AWSJ than with the AWIJ for the same hydraulic power of the system. Therefore, comparatively deeper or faster cuts can be made with the AWSJ. History: AWSJ cutting, in contrast to the AWIJ cutting process described below, can also be used for mobile cutting applications and cutting underwater, in addition to machining demanding materials. Examples include bomb disposal as well as the dismantling of offshore installations or of reactor pressure vessel installations in nuclear power plants. History: Abrasive Water Injector Jet (AWIJ) cutting: The AWIJ is generated by a water jet that passes through a mixing chamber (a cavity) after exiting the water nozzle and enters a focusing tube at the exit of the mixing chamber. The interaction of the water jet with the air inside the mixing chamber creates negative pressure: the water jet entrains air particles. This negative pressure is used for the pneumatic transport of the abrasive into the chamber (the abrasive is led to a lateral opening (bore) of the mixing chamber by means of a hose). History: After the abrasive material makes contact with the water jet in the mixing chamber, the individual abrasive grains are accelerated and entrained in the direction of the focusing tube. The air used as a carrier medium for transporting the abrasive into the mixing chamber also becomes part of the AWIJ, which now consists of three components (water, abrasive, and air). In the focusing tube, whose length is (or should be) optimised for this purpose, the abrasive is further accelerated (energy transfers from the water to the abrasive grains), and the AWIJ ideally leaves the focusing tube at the maximum possible abrasive grain speed. History: Waterjet control: As waterjet cutting moved into traditional manufacturing shops, controlling the cutter reliably and accurately was essential. Early waterjet cutting systems adapted traditional systems such as mechanical pantographs and CNC systems based on John Parsons' 1952 NC milling machine and running G-code. Challenges inherent to waterjet technology revealed the inadequacies of traditional G-code.
Cutting accuracy depends on varying the speed of the nozzle as it approaches corners and fine details. Creating motion control systems to incorporate those variables became a major innovation for leading waterjet manufacturers in the early 1990s, with John Olsen of OMAX Corporation developing systems to precisely position the waterjet nozzle while accurately specifying the speed at every point along the path, and also utilizing common PCs as a controller. The largest waterjet manufacturer, Flow International (a spinoff of Flow Industries), recognized the benefits of that system and licensed the OMAX software, with the result that the vast majority of waterjet cutting machines worldwide are simple to use, fast, and accurate. Operation: All waterjets follow the same principle of using high-pressure water focused into a beam by a nozzle. Most machines accomplish this by first running the water through a high-pressure pump. There are two types of pumps used to create this high pressure: an intensifier pump and a direct drive (crankshaft) pump. A direct drive pump works much like a car engine, forcing water through high-pressure tubing using plungers attached to a crankshaft. An intensifier pump creates pressure by using hydraulic oil to move a piston that forces the water through a tiny hole. The water then travels along the high-pressure tubing to the nozzle of the waterjet. In the nozzle, the water is focused into a thin beam by a jewel orifice. This beam of water is ejected from the nozzle and cuts through the material with a jet travelling at speeds on the order of Mach 3, around 2,500 ft/s (760 m/s); a rough estimate of these speeds is sketched below. The process is the same for abrasive waterjets until the water reaches the nozzle. Here, abrasives such as garnet and aluminium oxide are fed into the nozzle via an abrasive inlet. The abrasive then mixes with the water in a mixing tube and is forced out the end at high pressure. Benefits: An important benefit of the water jet is the ability to cut material without interfering with its inherent structure, as there is no heat-affected zone (HAZ). Minimizing the effects of heat allows metals to be cut without warping, affecting tempers, or changing intrinsic properties. Sharp corners, bevels, pierce holes, and shapes with minimal inner radii are all possible. Water jet cutters are also capable of producing intricate cuts in material. With specialized software and 3-D machining heads, complex shapes can be produced. The kerf, or width, of the cut can be adjusted by swapping parts in the nozzle, as well as by changing the type and size of the abrasive. Typical abrasive cuts have a kerf in the range of 0.04 to 0.05 in (1.0–1.3 mm), but it can be as narrow as 0.02 inches (0.51 mm). Non-abrasive cuts are normally 0.007 to 0.013 in (0.18–0.33 mm), but can be as small as 0.003 inches (0.076 mm), approximately the width of a human hair. These narrow jets permit fine details in a wide range of applications. Benefits: Water jets are capable of attaining accuracy down to 0.005 inches (0.13 mm) and repeatability down to 0.001 inches (0.025 mm). Due to its relatively narrow kerf, water jet cutting can reduce the amount of scrap material produced by allowing uncut parts to be nested more closely together than with traditional cutting methods. Water jets use approximately 0.5 to 1 US gal (1.9–3.8 L) per minute (depending on the cutting head's orifice size), and the water can be recycled using a closed-loop system. Waste water usually is clean enough to filter and dispose of down a drain.
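To make the velocity figures above concrete, here is a minimal Python sketch of two standard back-of-the-envelope estimates: the ideal (Bernoulli) jet velocity obtained from pump pressure, and an upper bound on abrasive grain speed in an AWIJ from momentum conservation, which also illustrates the AWIJ-versus-AWSJ efficiency point made earlier. The pressure and feed rates are illustrative assumptions, not figures from this article, and real jets run somewhat slower because of losses.

```python
import math

RHO_WATER = 1000.0   # kg/m^3
PSI_TO_PA = 6894.76

def jet_velocity(pressure_pa: float) -> float:
    """Ideal Bernoulli estimate: all pressure head becomes kinetic energy,
    v = sqrt(2 * P / rho). Real orifices deliver somewhat less."""
    return math.sqrt(2.0 * pressure_pa / RHO_WATER)

def awij_abrasive_speed(v_jet: float, mdot_water: float, mdot_abrasive: float) -> float:
    """Momentum-conservation upper bound for an injection (AWIJ) head:
    the water jet must share its momentum with the entrained abrasive
    (air neglected). A premixed suspension (AWSJ) avoids this loss,
    which is why it accelerates the grains more efficiently."""
    return v_jet * mdot_water / (mdot_water + mdot_abrasive)

p = 60_000 * PSI_TO_PA             # an assumed ultra-high pump pressure
v = jet_velocity(p)                # ~910 m/s, the order of Mach 3 cited above
v_abr = awij_abrasive_speed(v, mdot_water=0.04, mdot_abrasive=0.008)  # assumed feed rates
print(f"water jet ~= {v:.0f} m/s, AWIJ abrasive <= {v_abr:.0f} m/s")
```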
The garnet abrasive is a non-toxic material that can mostly be recycled for repeated use; otherwise, it can usually be disposed of in a landfill. Water jets also produce fewer airborne dust particles, smoke, fumes, and contaminants, reducing operator exposure to hazardous materials. Meat cutting using waterjet technology eliminates the risk of cross-contamination, since the contact medium is discarded. Versatility: Because the nature of the cutting stream can be easily modified, the water jet can be used in nearly every industry; there are many different materials that the water jet can cut. Some of them have unique characteristics that require special attention when cutting. Versatility: Materials commonly cut with a water jet include textiles, rubber, foam, plastics, leather, composites, stone, tile, glass, metals, food, paper, and much more. "Most ceramics can also be cut on an abrasive water jet as long as the material is softer than the abrasive being used (between 7.5 and 8.5 on the Mohs scale)". Examples of materials that cannot be cut with a water jet are tempered glass and diamonds. Water jets are capable of cutting up to 6 in (150 mm) of metals and 18 in (460 mm) of most materials, though in specialized coal mining applications, water jets are capable of cutting up to 100 ft (30 m) using a 1 in (25 mm) nozzle. Specially designed water jet cutters are commonly used to remove excess bitumen from road surfaces that have become subject to binder flushing. Flushing occurs naturally during hot weather, when the aggregate becomes level with the bituminous binder layer, creating a hazardously smooth road surface in wet weather. Availability: Commercial water jet cutting systems are available from manufacturers all over the world, in a range of sizes, and with water pumps capable of a range of pressures. Typical water jet cutting machines have a working envelope as small as a few square feet, or up to hundreds of square feet. Ultra-high-pressure water pumps are available from as low as 40,000 psi (280 MPa) up to 100,000 psi (690 MPa). Process: There are six main process characteristics of water jet cutting:
- It uses a high-velocity stream of ultra-high-pressure water, 30,000–90,000 psi (210–620 MPa), produced by a high-pressure pump, with possible abrasive particles suspended in the stream.
- It is used for machining a large array of materials, including heat-sensitive, delicate, or very hard materials.
- It produces no heat damage to the workpiece surface or edges.
- Nozzles are typically made of sintered boride or composite tungsten carbide.
- It produces a taper of less than 1° on most cuts, which can be reduced or eliminated entirely by slowing down the cut process or tilting the jet.
- The distance of the nozzle from the workpiece affects the size of the kerf and the removal rate of material; a typical distance is 0.125 in (3.2 mm).
Temperature is not much of a factor, because the water used also acts as a coolant. Edge quality: Edge quality for water jet cut parts is defined with the quality numbers Q1 through Q5. Lower numbers indicate a rougher edge finish; higher numbers are smoother. For thin materials, the cutting speed at Q1 can be as much as 3 times faster than at Q5. For thicker materials, Q1 can be 6 times faster than Q5. For example, in 4 inches (100 mm) thick aluminium, Q5 would be cut at 0.72 in/min (18 mm/min) and Q1 at 4.2 in/min (110 mm/min), 5.8 times faster.
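The quality-versus-speed trade-off above lends itself to a quick feed-time estimate. The sketch below is illustrative only: the two feed rates are the aluminium figures quoted above, while the contour length is a made-up example; real feed rates depend on the machine, pump, and material.

```python
# Feed rates for 4 in (100 mm) aluminium from the example above (in/min).
FEED_RATE = {"Q1": 4.2, "Q5": 0.72}

def cut_time_minutes(contour_length_in: float, quality: str) -> float:
    """Time to traverse a contour at a given edge quality."""
    return contour_length_in / FEED_RATE[quality]

contour = 36.0  # hypothetical 36 in contour
for q in ("Q1", "Q5"):
    print(f"{q}: {cut_time_minutes(contour, q):.1f} min")
# Q1: 8.6 min, Q5: 50.0 min -- the 5.8x speed ratio noted above
```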
Multi-axis cutting: In 1987, Ingersoll-Rand Waterjet Systems offered a 5-axis pure-water waterjet cutting system called the Robotic Waterjet System. The system was an overhead gantry design, similar in overall size to the HS-1000. Multi-axis cutting: With recent advances in control and motion technology, 5-axis water jet cutting (abrasive and pure) has become a reality. Where the normal axes on a water jet are named Y (back/forth), X (left/right) and Z (up/down), a 5-axis system will typically add an A axis (angle from perpendicular) and a C axis (rotation around the Z-axis). Depending on the cutting head, the maximum cutting angle for the A axis can be 55 or 60 degrees, or in some cases even 90 degrees, from vertical. As such, 5-axis cutting opens up a wide range of applications that can be machined on a water jet cutting machine. Multi-axis cutting: A 5-axis cutting head can be used to cut 4-axis parts, where the bottom surface geometries are shifted a certain amount to produce the appropriate angle and the Z-axis remains at one height. This can be useful for applications like weld preparation, where a bevel angle needs to be cut on all sides of a part that will later be welded, or for taper compensation, where the kerf angle is transferred to the waste material – thus eliminating the taper commonly found on water jet-cut parts. A 5-axis head can also cut parts where the Z-axis moves along with all the other axes. This full 5-axis cutting can be used for cutting contours on various surfaces of formed parts. Multi-axis cutting: Because of the angles that can be cut, part programs may need additional cuts to free the part from the sheet. Attempting to slide a complex part at a severe angle out of a plate can be difficult without appropriate relief cuts.
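As a minimal illustration of the A/C axis convention described above, the Python sketch below computes the jet's direction vector for a given tilt and rotation. The rotation order and sign conventions are this sketch's own assumptions; real controllers differ.

```python
import numpy as np

def jet_direction(a_deg: float, c_deg: float) -> np.ndarray:
    """Unit vector of the jet: start pointing straight down (-Z),
    tilt A degrees from perpendicular (A axis), then rotate C degrees
    about the machine Z axis (C axis)."""
    a, c = np.radians(a_deg), np.radians(c_deg)
    tilted = np.array([np.sin(a), 0.0, -np.cos(a)])  # tilt in the XZ plane
    rot_z = np.array([[np.cos(c), -np.sin(c), 0.0],
                      [np.sin(c),  np.cos(c), 0.0],
                      [0.0,        0.0,       1.0]])
    return rot_z @ tilted

print(jet_direction(0, 0))    # [0, 0, -1]: an ordinary vertical cut
print(jet_direction(45, 90))  # 45-degree tilt, e.g. for a weld-prep bevel
```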
**Digital perm** Digital perm: A digital perm is a perm that uses hot rods whose temperature is regulated by a machine with a digital display, hence the name. The process is otherwise similar to that of a traditional perm. The name "digital perm" is trademarked by a Japanese company, Paimore Co. Hairstylists usually call it a "hot perm." A normal perm requires only the perm solution; a digital perm requires a (different) solution plus heat. This type of perm is popular in several countries, including South Korea and Japan. Difference between a normal perm and a digital perm: The biggest difference between other perms and a digital perm is the shape and the texture of the wave created by the digital process. A normal perm, or "cold perm," makes the wave most prominent when the hair is wet, and loose when it is dry. The hair tends to look moist, falling in defined locks. A digital perm makes the wave most prominent when the hair is dry, and loose when it is wet. It can therefore create the dry, curly look of a curling iron or hot curler. Difference between a normal perm and a digital perm: Digital perms thermally recondition the hair, though the chemicals and processing are similar to those of a straight perm. The hair often feels softer, smoother, and shinier after a digital perm. Cost and time of a digital perm: The price depends on the hair salon, but a digital perm is usually a little more expensive than a cold perm. Also, some salons can use the machine on only one client at a time, in which case the price can be considerably higher. The time it takes to perm the hair also depends on the salon and the hair type, but a digital perm usually takes longer than a cold perm. In some cases it takes about the same time, but different salons use different solutions and machines, so the time varies. Styling: A cold perm makes the hair most wavy when it is wet, so adding styling gel or foam while the hair is wet and letting it air-dry makes the wave most prominent. A digital perm makes the hair wavy when it is dry, so the hair can be dried with a blow dryer and the curls shaped by hand. Styling is very easy: if the curls are set in the morning, then at the end of the day, when the wave loosens, they can be revived by winding the hair around a finger.
**BAG3** BAG3: BAG family molecular chaperone regulator 3 is a protein that in humans is encoded by the BAG3 gene. BAG3 is involved in chaperone-assisted selective autophagy. Function: BAG proteins compete with Hip-1 for binding to the Hsc70/Hsp70 ATPase domain and promote substrate release. All the BAG proteins have an approximately 45-amino-acid BAG domain near the C terminus but differ markedly in their N-terminal regions. The protein encoded by this gene contains a WW domain in the N-terminal region and a BAG domain in the C-terminal region. The BAG domains of BAG1, BAG2, and BAG3 interact specifically with the Hsc70 ATPase domain in vitro and in mammalian cells. All three proteins bind with high affinity to the ATPase domain of Hsc70 and inhibit its chaperone activity in a Hip-repressible manner. Clinical significance: BAG genes have been implicated in age-related neurodegenerative diseases such as Alzheimer's disease. It has been demonstrated that BAG1 and BAG3 regulate the proteasomal and lysosomal protein elimination pathways, respectively. BAG3 has also been shown to be a cause of familial dilated cardiomyopathy. Clinical significance: That BAG3 mutations are responsible for familial dilated cardiomyopathy is confirmed by another study describing six new molecular variants (two missense and four premature stops). Moreover, the same publication reported that BAG3 polymorphisms are also associated with sporadic forms of the disease, together with the HSPB7 locus. In muscle cells, BAG3 cooperates with the molecular chaperones Hsc70 and HspB8 to induce the degradation of mechanically damaged cytoskeleton components in lysosomes. This process is called chaperone-assisted selective autophagy and is essential for maintaining muscle activity in flies, mice and men. BAG3 is able to stimulate the expression of cytoskeleton proteins in response to mechanical tension by activating the transcription regulators YAP1 and WWTR1. BAG3 thus balances protein synthesis and protein degradation under mechanical stress. Interactions: BAG3 has been shown to interact with PLCG1.
**Small nucleolar RNA Z247** Small nucleolar RNA Z247: In molecular biology, small nucleolar RNA Z247 is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and is also often referred to as a guide RNA. Small nucleolar RNA Z247: snoRNA Z247 belongs to the C/D box class of snoRNAs, which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. Plant snoRNA Z247 was identified in a screen of Oryza sativa.
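Since the C and D boxes are short literal motifs, locating candidate boxes in a sequence amounts to a simple scan. The Python sketch below is a minimal illustration; the sequence is a made-up placeholder, not the real Z247 sequence.

```python
import re

C_BOX, D_BOX = "UGAUGA", "CUGA"  # conserved box C/D motifs noted above

def find_boxes(rna: str) -> tuple[list[int], list[int]]:
    """Return start positions of candidate C and D boxes in an RNA string."""
    c_hits = [m.start() for m in re.finditer(C_BOX, rna)]
    d_hits = [m.start() for m in re.finditer(D_BOX, rna)]
    return c_hits, d_hits

# Made-up placeholder sequence (not the real Z247): in genuine box C/D
# snoRNAs the C box lies near the 5' end and the D box near the 3' end.
seq = "GGAUGAUGACCUUAGGAAUCCUGAGG"
c, d = find_boxes(seq)
print(f"C box candidates at {c}, D box candidates at {d}")  # [3], [20]
```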
**Closed (poker)** Closed (poker): In the game of poker, a betting round is said to be closed if no player will have the right to raise in the round. Normally this occurs when a player calls and the next player whose turn it is to act is the one who made the last raise, so he cannot raise further (this ends the betting round). The round can also be said to be closed before it has actually ended if there are still players remaining to act but they will not be entitled to raise, either because the last raise was a sub-minimum all-in raise (see poker table stakes rules) or because the limit ("cap") on allowed raises has been reached. Closed (poker): The term is also used to describe a category of poker game in which no cards held by individual players are visible to any other player before the showdown. Most forms of draw poker are closed games (draw games with a rollout are an exception). Most forms of stud poker, in contrast, are open games, because some players' cards are dealt face up or are exposed during play (blind stud games are an exception). Most community card poker games, like Texas hold 'em, are considered closed as well, because the only cards exposed before the showdown belong to everyone; the individual players' cards are never seen until the showdown. Strategic implications: A player who closes the betting round by calling or overcalling gains greater freedom by doing so, since he does not face the threat of subsequent raises. This is especially true when comparing limit hold 'em games with a standard cap (3 raises) to an elevated cap (4 raises) or capless games. A player can cap with as much as 80% of his flat calling range when he knows he cannot be forced out of the pot and no opponent can make his hand appear much stronger by raising. This is particularly true when closing the action on the river in Texas hold 'em or on seventh street in stud poker, where a player can call down with hands that are unlikely to win simply because of the pot odds he is getting and the fact that he cannot be bluffed out of the pot.
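The closure conditions above can be captured in a small state check. The following Python sketch is a simplified model under this article's definitions; the field names are this sketch's own, and a real poker engine would track much more state.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoundState:
    next_to_act: Optional[int]   # seat whose turn it is; None once everyone has acted
    last_raiser: Optional[int]   # seat of the most recent raiser; None if no raise yet
    raises_made: int             # raises so far this round
    raise_cap: int               # e.g. 3 in a standard capped limit game
    last_raise_was_short_all_in: bool  # sub-minimum all-in raise does not reopen action

def betting_is_closed(s: RoundState) -> bool:
    """True when no remaining player has the right to raise,
    per the definitions in the article above."""
    if s.next_to_act is None:
        return True                        # the round has actually ended
    if s.last_raiser is not None and s.next_to_act == s.last_raiser:
        return True                        # action returned to the last raiser
    if s.raises_made >= s.raise_cap:
        return True                        # the cap on raises has been reached
    if s.last_raise_was_short_all_in:
        return True                        # short all-in did not reopen the betting
    return False

# Example: cap reached in a 3-raise limit game; players may still call,
# but the round is already "closed" because no one may raise again.
state = RoundState(next_to_act=4, last_raiser=2, raises_made=3,
                   raise_cap=3, last_raise_was_short_all_in=False)
print(betting_is_closed(state))  # True
```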
**Language change** Language change: Language change is variation over time in a language's features. It is studied in several subfields of linguistics: historical linguistics, sociolinguistics, and evolutionary linguistics. Traditional theories of historical linguistics identify three main types of change: systematic change in the pronunciation of phonemes, or sound change; borrowing, in which features of a language or dialect are altered as a result of influence from another language or dialect; and analogical change, in which the shape or grammatical behavior of a word is altered to more closely resemble that of another word. Language change: All living languages are continually undergoing change. Some commentators use derogatory labels such as "corruption" to suggest that language change constitutes a degradation in the quality of a language, especially when the change originates from human error or is a prescriptively discouraged usage. Modern linguistics rejects this concept, since from a scientific point of view such innovations cannot be judged in terms of good or bad. John Lyons notes that "any standard of evaluation applied to language-change must be based upon a recognition of the various functions a language 'is called upon' to fulfil in the society which uses it". Over a sufficiently long period of time, changes in a language can accumulate to such an extent that it is no longer recognizable as the same language. For instance, modern English is the result of centuries of language change applying to Old English, even though modern English is extremely divergent from Old English in grammar, vocabulary, and pronunciation. The two may be thought of as distinct languages, but Modern English is a "descendant" of its "ancestor" Old English. When multiple languages are all descended from the same ancestor language, as the Romance languages are from Vulgar Latin, they are said to form a language family and be "genetically" related. Causes: Economy: Speech communities tend to change their utterances to be as efficient and effective as possible, with as little effort, while still reaching communicative goals. Purposeful speaking therefore involves a trade-off of costs and benefits. Causes: The principle of least effort tends to result in phonetic reduction of speech forms. See vowel reduction, cluster reduction, lenition, and elision. After some time a change may become widely accepted (it becomes a regular sound change) and may end up treated as standard. For instance: going to [ˈɡoʊ.ɪŋ.tʊ] → gonna [ˈɡɔnə] or [ˈɡʌnə], with examples of both vowel reduction [ʊ] → [ə] and elision [nt] → [n], [oʊ.ɪ] → [ʌ]. Causes: Expressiveness: Common or overused language tends to lose its emotional or rhetorical intensity over time; therefore, new words and constructions are continuously employed to revive that intensity. Analogy: Over time, speech communities unconsciously apply patterns of rules in certain words, sounds, etc. to unrelated other words, sounds, etc. Language contact: Words and constructions are borrowed from one language into another. Cultural environment: As a culture evolves, new places, situations, and objects inevitably enter its language, whether or not the culture encounters different people. Migration/Movement: Speech communities, moving into a region with a new or more complex linguistic situation, will influence, and be influenced by, language change; they sometimes even end up with entirely new languages, such as pidgins and creoles.
Imperfect learning: According to one view, children regularly learn the adult forms imperfectly, and the changed forms then turn into a new standard. Alternatively, imperfect learning occurs regularly in one part of society, such as an immigrant group, where the minority language forms a substratum, and the changed forms can ultimately influence majority usage. Causes: Social prestige: Language may change not only towards features that have more social prestige, but also away from ones with negative prestige, as in the case of the loss of rhoticity in the British Received Pronunciation accent. Such movements can go back and forth. According to Guy Deutscher, the tricky question is "Why are changes not brought up short and stopped in their tracks? At first sight, there seem to be all the reasons in the world why society should never let the changes through." He sees the reason for tolerating change in the fact that we are already used to "synchronic variation", to the extent that we are hardly aware of it. For example, when we hear the word "wicked", we automatically interpret it as either "evil" or "wonderful", depending on whether it is uttered by an elderly lady or a teenager. Deutscher speculates that "[i]n a hundred years' time, when the original meaning of 'wicked' has all but been forgotten, people may wonder how it was ever possible for a word meaning 'evil' to change its sense to 'wonderful' so quickly." Types: Phonetic and phonological changes Sound change—i.e., change in the pronunciation of phonemes—can lead to phonological change (i.e., change in the relationships between phonemes within the structure of a language). For instance, if the pronunciation of one phoneme changes to become identical to that of another phoneme, the two original phonemes can merge into a single phoneme, reducing the total number of phonemes the language contains. Types: Determining the exact course of sound change in historical languages can pose difficulties, inasmuch as the technology of sound recording dates only from the 19th century, and thus sound changes before that time must be inferred from written texts. The orthographical practices of historical writers provide the main (indirect) evidence of how language sounds have changed over the centuries. Poetic devices such as rhyme and rhythm can also provide clues to earlier phonetic and phonological patterns. Types: A principal axiom of historical linguistics, established by the linguists of the Neogrammarian school of thought in the 19th century, is that sound change is said to be "regular"—i.e., a given sound change simultaneously affects all words in which the relevant set of phonemes appears, rather than each word's pronunciation changing independently. The degree to which the Neogrammarian hypothesis is an accurate description of how sound change takes place, rather than a useful approximation, is controversial; but it has proven extremely valuable to historical linguistics as a heuristic, and it enabled the development of methodologies of comparative reconstruction and internal reconstruction that allow linguists to extrapolate backward from known languages to the properties of earlier, unattested languages and hypothesize sound changes that may have taken place in them. Types: Lexical changes The study of lexical changes forms the diachronic portion of the science of onomasiology.
Types: The ongoing influx of new words into the English language (for example) helps make it a rich field for investigation into language change, despite the difficulty of defining precisely and accurately the vocabulary available to speakers of English. Throughout its history English has not only borrowed words from other languages but has re-combined and recycled them to create new meanings, whilst losing some old words. Types: Dictionary-writers try to keep track of the changes in languages by recording (and, ideally, dating) the appearance in a language of new words, or of new usages for existing words. By the same token, they may eventually tag some words as "archaic" or "obsolete". Spelling changes Standardisation of spelling originated centuries ago. Differences in spelling often catch the eye of a reader of a text from a previous century. The pre-print era had fewer literate people: languages lacked fixed systems of orthography, and the handwritten manuscripts that survive often show words spelled according to regional pronunciation and personal preference. Types: Semantic changes Semantic changes are shifts in the meanings of existing words. Basic types of semantic change include: pejoration, in which a term's connotations become more negative; amelioration, in which a term's connotations become more positive; broadening, in which a term acquires additional potential uses; and narrowing, in which a term's potential uses are restricted. After a word enters a language, its meaning can change through a shift in the valence of its connotations. As an example, when "villain" entered English it meant 'peasant' or 'farmhand', but acquired the connotation 'low-born' or 'scoundrel', and today only the negative use survives. Thus 'villain' has undergone pejoration. Conversely, the word "wicked" is undergoing amelioration in colloquial contexts, shifting from its original sense of 'evil' to the much more positive one (as of 2009) of 'brilliant'. Types: Words' meanings may also change in terms of the breadth of their semantic domain. Narrowing a word limits its alternative meanings, whereas broadening associates new meanings with it. For example, "hound" (Old English hund) once referred to any dog, whereas in modern English it denotes only a particular type of dog. On the other hand, the word "dog" itself has been broadened from its Old English root 'dogge', the name of a particular breed, to become the general term for all domestic canines. Types: Syntactic change Syntactic change is the evolution of the syntactic structure of a natural language. Over time, syntactic change is the greatest modifier of a particular language. Massive changes – attributable either to creolization or to relexification – may occur both in syntax and in vocabulary. Syntactic change can also be purely language-internal, whether independent within the syntactic component or the eventual result of phonological or morphological change. Sociolinguistics: The sociolinguist Jennifer Coates, following William Labov, describes linguistic change as occurring in the context of linguistic heterogeneity. She explains that "[l]inguistic change can be said to have taken place when a new linguistic form, used by some sub-group within a speech community, is adopted by other members of that community and accepted as the norm." The sociolinguist William Labov recorded the change in pronunciation in a relatively short period in the American resort of Martha's Vineyard and showed how this resulted from social tensions and processes.
Sociolinguistics: Even in the relatively short time that broadcast media have recorded their work, one can observe the difference between the pronunciation of the newsreaders of the 1940s and the 1950s and the pronunciation of today. The greater acceptance and fashionability of regional accents in media may also reflect a more democratic, less formal society — compare the widespread adoption of language policies. Sociolinguistics: Can and Patton (2010) provide a quantitative analysis of twentieth-century Turkish literature using forty novels by forty authors. Using weighted least squares regression and a sliding-window approach, they show that, as time passes, words, in terms of both tokens (in text) and types (in vocabulary), have become longer; a toy version of this kind of measurement is sketched at the end of this article. They indicate that the increase in word lengths over time can be attributed to the government-initiated language "reform" of the 20th century. This reform aimed at replacing foreign words used in Turkish, especially Arabic- and Persian-based words (since they were in the majority when the reform was initiated in the early 1930s), with newly coined pure Turkish neologisms created by adding suffixes to Turkish word stems (Lewis, 1999). Sociolinguistics: Can and Patton (2010), based on their observations of the change in use of a specific word (more specifically, the preference in newer works of ama over fakat, both borrowed from Arabic and meaning "but", whose inverse usage correlation is statistically significant), also speculate that the increase in word length can influence authors' common word choice preferences. Sociolinguistics: Kadochnikov (2016) analyzes the political and economic logic behind the development of the Russian language. Ever since the emergence of the unified Russian state in the 15th and 16th centuries, the government has played a key role in standardizing the Russian language and developing its prescriptive norms, with the fundamental goal of ensuring that it can be efficiently used as a practical tool in all sorts of legal, judicial, administrative and economic affairs throughout the country. Quantification: Altintas, Can, and Patton (2007) introduce a systematic approach to language change quantification by studying unconsciously used language features in time-separated parallel translations. For this purpose, they use objective style markers such as vocabulary richness and the lengths of words, word stems and suffixes, and they employ statistical methods to measure their changes over time. Language shift and social status: Languages perceived to be "higher status" stabilise or spread at the expense of other languages perceived by their own speakers to be "lower status". Language shift and social status: Historical examples are the early Welsh and Lutheran Bible translations, which led to the liturgical languages Welsh and High German thriving today, unlike other Celtic or German variants. For prehistory, Forster and Renfrew (2011) argue that in some cases there is a correlation of language change with intrusive male Y chromosomes but not with female mtDNA. They then speculate that technological innovation (the transition from hunting-gathering to agriculture, or from stone to metal tools) or military prowess (as in the abduction of British women by Vikings to Iceland) causes immigration of at least some males, and a perceived status change.
Then, in mixed-language marriages with these males, prehistoric women would often have chosen to transmit the "higher-status" spouse's language to their children, yielding the language/Y-chromosome correlation seen today.
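As a toy version of the style-marker analysis described in the Quantification section above, the Python sketch below computes the mean token length for each dated text in a corpus and fits a least-squares linear trend. The corpus and all names are illustrative assumptions, not data or code from the cited studies.

```python
import numpy as np

# Hypothetical corpus of (year, text) pairs; the real studies use whole novels.
corpus = [
    (1930, "short words fill the page here"),
    (1960, "somewhat lengthier wording appears gradually"),
    (1990, "increasingly elaborate vocabulary characterises contemporary prose"),
]

def mean_word_length(text: str) -> float:
    words = text.split()
    return sum(len(w) for w in words) / len(words)

years = np.array([y for y, _ in corpus], dtype=float)
lengths = np.array([mean_word_length(t) for _, t in corpus])

# Least-squares linear trend: a positive slope means words are getting longer.
slope, intercept = np.polyfit(years, lengths, 1)
print(f"trend: {slope:+.4f} characters per year")
```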
**Boussinesq approximation** Boussinesq approximation: Boussinesq approximation may refer to several modelling concepts – as introduced by Joseph Valentin Boussinesq (1842–1929), a French mathematician and physicist known for advances in fluid dynamics:
- Boussinesq approximation (buoyancy), for buoyancy-driven flows with small density differences in the fluid
- Boussinesq approximation (water waves), for long waves propagating on the surface of a fluid layer under the action of gravity
- Turbulence modeling and eddy viscosity: in modelling the turbulence Reynolds stresses, the Boussinesq approximation results in the use of an eddy viscosity concept
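For the turbulence-modelling sense in the last item, the Boussinesq eddy-viscosity hypothesis is commonly written as below. This is a standard textbook form, not taken from this article, with \mu_t the eddy viscosity, k the turbulent kinetic energy, and overbars denoting mean quantities.

```latex
-\rho\,\overline{u_i' u_j'}
  = \mu_t \left( \frac{\partial \bar{u}_i}{\partial x_j}
               + \frac{\partial \bar{u}_j}{\partial x_i} \right)
  - \frac{2}{3}\,\rho\,k\,\delta_{ij}
```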
**Racine stages** Racine stages: Racine stages are a categorization of epileptic seizures proposed by Ronald J. Racine in 1972. Prior to Racine's research in epilepsy, a quantifiable means to describe seizure intensities and their causes was not readily available. Racine's work allowed epilepsy to be understood on a level previously thought impossible. Introduction: In the brain, electrical signals are spread by the firing of neurons, which leads to a desired outcome in the body, such as the release of a neurotransmitter or the voluntary contraction of a muscle. A threshold must be met in order for an action potential to be created. In epileptic patients, electrical signals reach a threshold, causing a spread of firing neurons in the brain. This causes multiple signals to spread through the nervous system, resulting in a seizure. Once a seizure has occurred, damage can be seen in the area that the action potential came from. For example, if the initial action potential came from the hippocampus, damage can be seen in the surrounding neurons. Introduction: While an EEG is able to determine the presence of a seizure and the intensity of the action potentials, the overall effect on the body is hard to quantify. In 1972, Ronald J. Racine developed a method to split the severity of seizures into stages: mouth and facial movement, head nodding, forelimb clonus, rearing with forelimb clonus, and rearing and falling with forelimb clonus. Racine stages can be used to determine at which stage the patient is experiencing a seizure and at what level of stimulus the patient reaches a certain stage. Over time, mapping of the stimulus level and the resulting seizure intensity can show damage to the stimulated area. Mappings for patients can be made by sending electrical signals at different strengths and measuring the body's reaction. Once an epileptic patient experiences a seizure, the patient becomes more susceptible to having further seizures. Racine stages were developed using an animal model to outline the five stages. Once developed, the Racine stages served as a quantitative way to categorize the intensity of a seizure that an epileptic patient experiences. Development: A seizure is described as a large amount of synchronized action potentials which cause the body to perform uncontrollable muscle contractions, resulting in involuntary movement and an incapacity to control one's actions. This synchronized activity must surpass a certain threshold, which is different for each patient, before it reverberates throughout the body. For patients with epilepsy, seizures occur repeatedly and can grow in intensity. A patient with epilepsy is always at risk of experiencing a seizure, and different environmental stimuli can trigger a seizure in different patients. The treatment method, and its success, also differs from patient to patient. Henry Molaison (HM) is known for his contribution to memory studies in neuroscience. Before he lost his ability to retain long-term memories, he had debilitating seizures. HM showed small signs of seizures while growing up. Before the age of fifteen, HM's only sign of a seizure was a lull in conversation. For a few seconds he would appear as if he were daydreaming; some described him as absent-minded for a few seconds. His first traumatic seizure happened when he was fifteen: while in the family car, HM experienced a seizure that caused his entire body to convulse.
In 1969, a deep brain stimulation experiment was developed to test the fluctuation of thresholds for patients with epilepsy. In this experiment, researchers used implanted electrodes to measure the electrographic activity during the introduction of a stimulus and the resulting seizure. While this experiment succeeded in showing that seizures happened at lower thresholds after repeated treatments, the overall severity of each seizure was not well recorded. Development: Rat model Prior to Racine's research into epilepsy, a model for the severity of a seizure was not known. Development: In 1972, Ronald J. Racine sought to develop a model that quantified the severity of a seizure. Using animal testing (a rat model), Racine was able to stimulate specific parts of the brain using slight electrical impulses. He used methods of deep brain stimulation in order to ensure the targeted areas of the brain reached the specific threshold needed to see a reaction in the rats. Rats were separated into categories by target area, stimulus duration, and overall stimulus intensity. He specifically targeted the hippocampus and the amygdala of the test animals. Each rat in the model was anesthetized, and special probes were placed into specific parts of the brain according to the target area. Applying electrical stimulation at one-second intervals and at different intensities, Racine observed a change in muscle stimulation in the rats. Once excited, the rats would demonstrate signs of a seizure. Racine was able to categorize the body's reactions to the stimuli into five categories. He also observed that with continued treatment, seizures were more easily induced. These stages of increasing severity can serve as a way to quantify a seizure. Classical stages: As the intensity of the seizure increases, the severity of the efferent actions increases. Each stage is the result of action potentials causing muscles to contract and relax, resulting in an involuntary, observable action. Racine stages:
1. Mouth and facial movement – Sometimes hard to determine; in human patients this can also be observed as a period of absentmindedness or stillness.
2. Head nodding – Uncontrollable muscle contractions in the neck cause slight to severe jarring of the head.
3. Forelimb clonus – Involuntary movement of the arms due to increased muscle stimulation.
4. Rearing with forelimb clonus – Broadening of the chest. In rat models, rearing is demonstrated by the rat standing on its hind legs.
5. Rearing and falling with forelimb clonus (generalized motor convulsions) – During this final stage, the patient is at the highest risk for injury. Falling, or situational circumstances, may threaten the life of the patient and those around them.
Classical stages: As the level of stimulus increases, the resulting involuntary movements progress further down the list of stages. Levels further down the Racine stages also include the symptoms of the previous stages; for example, a person demonstrating the actions of a stage four seizure may also demonstrate head nodding (indicative of a stage two seizure), as the sketch below illustrates. It is known that repeated exposure to a stimulus lowers the overall threshold for a seizure. The first two stages have been seen two to four days before an increase in the severity of the seizure, recorded as the patient exhibiting reactive behavior higher on the Racine scale. This is seen in 80% of patients with seizures.
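A minimal Python sketch of the cumulative property just described; the stage wording follows the list above, while the function and variable names are this sketch's own.

```python
# Classical Racine stages, as listed above.
RACINE_STAGES = {
    1: "mouth and facial movement",
    2: "head nodding",
    3: "forelimb clonus",
    4: "rearing with forelimb clonus",
    5: "rearing and falling with forelimb clonus",
}

def expected_symptoms(stage: int) -> list[str]:
    """Symptoms expected at a given stage, including all earlier stages,
    since higher stages also contain the symptoms of the previous ones."""
    if stage not in RACINE_STAGES:
        raise ValueError("classical Racine stages run from 1 to 5")
    return [RACINE_STAGES[s] for s in range(1, stage + 1)]

print(expected_symptoms(4))  # includes head nodding, as noted in the text
```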
Clinical uses: Since its development, the Racine scale has helped further research into treating epileptic patients. Racine stages are still used with rat models in laboratory settings to quantify the severity of seizures. While this model serves as the standard method of quantifying the severity of a seizure, additional stages have been added to model the more severe cases. In 1978, Pinel and Rovner developed a model that added to the traditional five stages. While these stages are based on the classic five stages, the increase in severity called for five additional stages. Clinical uses: Pinel and Rovner additional stages:
6. Multiple stage five seizures – Mostly two level-five seizures, with additional seizures possible.
7. Jumping
8. Running
9. Jumping and running
10. Two different seizures with a partial seizure in between – Two seizures higher than the classical Racine stages, separated by a lower-level seizure.
Stages 6–10 also include the symptoms seen in stages one to five. Clinical uses: Research into a cure for epilepsy is ongoing. Different levels of tolerance to outside stimuli exist for each patient: some patients experience seizures with audio or visual stimulation, and some are more sensitive to environmental factors than others. In most cases, treatment with medication or surgery can help limit the prevalence of seizures, but these treatment methods do not always cure the patient. Clinical uses: Additional adaptations The classic five Racine stages have been adapted many times since their designation in 1972. Depending on the changes in stimulus intensity and duration, researchers add or remove levels according to the reactions of the rat models. While adaptations of the Racine stages model exist, the original model has served as the backbone of methods for determining the intensity of a seizure. The use of the Racine stages can help further research into new solutions in epileptic treatment.