**Multidrug and toxin extrusion protein 1** Multidrug and toxin extrusion protein 1: Multidrug and toxin extrusion protein 1 (MATE1), also known as solute carrier family 47 member 1, is a protein that in humans is encoded by the SLC47A1 gene. SLC47A1 belongs to the MATE (multidrug and toxic compound extrusion) family of transporters that are found in bacteria, archaea and eukaryotes. Gene: The SLC47A1 gene is located within the Smith–Magenis syndrome region on chromosome 17. Function: SLC47A1 is a member of the MATE family of transporters that excrete endogenous and exogenous toxic electrolytes through urine and bile. Discovery: The multidrug efflux transporter NorM from V. parahaemolyticus, which mediates resistance to multiple antimicrobial agents (norfloxacin, kanamycin, ethidium bromide, etc.), and its homologue from E. coli were identified in 1998; these were the first members of this transporter family to be described. NorM appears to function as a drug/sodium antiporter, the first known example of a Na+-coupled multidrug efflux transporter. NorM is a prototype of a new transporter family and Brown et al. named it the multidrug and toxic compound extrusion family. The X-ray structure of the transporter NorM was determined to 3.65 Å, revealing an outward-facing conformation with two portals open to the outer leaflet of the membrane and a unique topology of the predicted 12 transmembrane helices distinct from any other known multidrug resistance transporter.
**Woonerf** Woonerf: A woonerf (Dutch pronunciation: [ˈʋoːnɛr(ə)f]) is a living street, as originally implemented in the Netherlands and in Flanders (Belgium). Techniques include shared space, traffic calming, and low speed limits. The term woonerf has been adopted directly by some English-language publications. In the United Kingdom, these areas are called home zones. Etymology: The word, of Dutch origin, literally translates as 'living yard' or 'residential grounds'. History: Since the invention of the automobile, cities have been constructed predominantly to accommodate its use. The entire locality of Emmen in the Netherlands was designed as a woonerf in the 1970s. In 1999 the Netherlands had over 6000 woonerven, and today around 2 million Dutch people live in woonerven. The benefits of the woonerf are promoted by woonERFgoed, a network of professionals and residents. In 2006 it was reported that people in Hesselterbrink, a neighborhood of Emmen, were disillusioned about how the woonerf principle had become another traffic engineering measure that "entailed precious little more than signs and uniform standards". They have now adopted the shared space principles as a way of rethinking the woonerf. They are reported to "now know that car drivers should become residents. Eye contact and human interaction are more effective means to achieve and maintain attractive and safe areas than signs and rules". Regulation: Belgium Belgian traffic regulation (art. 2.32) defines the woonerf and the generic erf, and their traffic sign. The woonerf has a residential focus; the erf can have other primary uses like “crafts, trade, tourism, education and recreation”. In art. 22bis, the Belgian traffic regulation describes what is and what isn’t allowed in a (woon)erf: Within erven and woonerven: Pedestrians can use the full width of the public road; playing is also allowed. Drivers may not endanger pedestrians or hinder them; if necessary they must stop. Furthermore, they need to be twice as careful regarding children. Pedestrians may not obstruct traffic unnecessarily. Speed is limited to 20 km per hour. Parking is forbidden, except where there are visual markings like different surface colors, a letter P or traffic signs allowing parking. Netherlands Under Article 44 of the Dutch traffic code, motorised traffic in a woonerf or "recreation area" is restricted to walking pace.
**Generalist Genes Hypothesis** Generalist Genes Hypothesis: The Generalist Genes Hypothesis of learning abilities and disabilities was originally coined in an article by Plomin & Kovas (2005). The Generalist Genes Hypothesis suggests that most genes associated with common learning disabilities and abilities are generalist in three ways. Firstly, the same genes that influence common learning abilities (e.g., high reading aptitude) are also responsible for common learning disabilities (e.g., reading disability): they are strongly genetically correlated. Secondly, many of the genes associated with one aspect of a learning disability (e.g., vocabulary problems) also influence other aspects of this learning disability (e.g., grammar problems). Thirdly, genes that influence one learning disability (e.g., reading disability) are largely the same as those that influence other learning disabilities (e.g., mathematics disability). The Generalist Genes Hypothesis has important implications for education, cognitive sciences and molecular genetics.
**Galactan 1,3-beta-galactosidase** Galactan 1,3-beta-galactosidase: Galactan 1,3-beta-galactosidase (EC 3.2.1.145, galactan (1->3)-beta-D-galactosidase) is an enzyme with systematic name galactan 3-beta-D-galactosidase. This enzyme catalyses the following chemical reaction: hydrolysis of terminal, non-reducing beta-D-galactose residues in (1->3)-beta-D-galactopyranans. This enzyme removes not only free galactose, but also 6-glycosylated residues.
**Digital Electronic Message Service** Digital Electronic Message Service: The Digital Electronic Message Service (DEMS) is a two-way wireless radio service for passing message and facsimile data using the 10.6 GHz and 24 GHz bands. As of 1997, Associated Communications was expected to use the band to create a network in 31 U.S. cities. In October 2005, the FCC moved part of the DEMS service from the 18/19 GHz band to 24 GHz.
**Platform ticket** Platform ticket: A platform ticket is a type of rail ticket issued by some railway systems, permitting the bearer to access the platforms of a railway station, but not to board and use any train services. It allows non-passengers to enter the paid area of the station, for example to walk with their friends, associates and loved ones all the way to the passenger car at stations where the general public is not admitted to platforms. Trainspotters can also purchase platform tickets to enjoy their hobby. They vary in type: some may only allow limited access and a sharply limited time of usage, while others may allow unrestricted access to the platform area. During peak usage hours or rush hours, the platforms may only be available for passengers who intend to travel. History: Platform tickets emerged in the 19th century. At that time passenger coaches had no internal corridor, as they have today. In order to inspect tickets, conductors had to move along the outside of the train while it was in motion. Although trains moved much slower than today, there were numerous accidents. Therefore, railway operators began to check the tickets on the platform before passengers boarded the train. Passing these checkpoints required either a ticket for travel or the platform ticket, which was only valid for access to the platform. After carriage design changed and internal corridors were introduced, passengers and conductors could move from carriage to carriage, so checking tickets outside the train was no longer necessary. Most railway transport systems abolished this practice in the second half of the 20th century. As soon as there were no more checks, the platform ticket was unnecessary and generally was abandoned. However, as there are now automated ticket barriers, railfans and trainspotters buy these tickets to get past the barriers and onto the platform. Usage by country: China China Railways ceased issuing platform tickets in 2014. At some major stations like Beijing West railway station, a person can still escort a passenger in need by applying for a permit with the escort's ID card. Usage by country: Germany In Germany the Royal Prussian Railway was the first carrier to introduce ticket checks outside the trains in 1893. Other railways in Germany soon followed. Platform checks and tickets were done away with in East Germany in 1970 and in West Germany in 1974. In some local transportation networks, they lasted longer; the last one in which they still apply is public transportation in Hamburg, where platform tickets must be bought to access the platforms without a travel ticket. The price is 0.10 euros. Usage by country: India A platform ticket for any railway station situated across India costs not more than ₹10 and is valid for not more than two hours. Tickets are issued from ticketing counters and ATVMs at the railway station, or from Indian Railways' UTS app. If a passenger is caught by railway ticket-checking staff at any platform without a platform ticket or travel ticket, the passenger will be charged double the fare of the last train that arrived at or departed from that platform. The fare will be worked out on the basis of the last ticket-checking station on the train's route.
Usage by country: Japan Japan Railways Group (JR Group) companies sell platform tickets (入場券, nyūjōken) priced between 120 yen and 160 yen at all staffed stations and platform passes (定期入場券, teiki nyūjōken), which allow unlimited access to the platform area for one month, priced between 3,780 yen and 4,890 yen at limited stations. They do not allow holders to board trains. All staffed stations of JR East, JR Central and JR West, and stations of JR Hokkaido with automatic ticket gates limit the validity of the ticket to two hours from issuance; an additional fee is charged if the ticket holder exits the ticket gate after the two-hour period expires. Usage by country: Taiwan The Taiwan Railways Administration stopped selling platform tickets (月臺票/月台票) on 1 June 2013, instead lending platform access certificates (月台出入證) in exchange for photo identification documents. Zhongli, Taichung, Chiayi, Tainan, Kaohsiung, Hualien and Yilan stations (listed from the north, counterclockwise) continue to sell platform tickets in addition to lending platform access passes. A platform ticket or a platform access certificate allows staying in the paid area of a station for up to one hour. Staying longer requires paying the starting fare of a Fu-Hsing Semi-Express or another train of the same class. An electronic ticket used to enter and exit the same station is charged NT$14 within one hour, NT$112 within three hours, or NT$843 beyond three hours. Usage by country: United Kingdom Platform tickets were in common use on the mainline network until the mid 20th century, and the majority of ticket offices are still equipped to issue them. The use of automated ticket barriers at stations has resulted in a renewed demand for platform tickets. Railfans, in particular, are told that they may require a platform ticket for access to platforms, but some individuals have cited difficulty in obtaining them. They are valid for one hour and cost £0.10; the last price increase was in January 1988. Some heritage railways and museums issue platform tickets for admittance or as souvenirs. Usage by country: United States While not a platform ticket per se, Bay Area Rapid Transit charges a specialty excursion fare for entering and exiting the system within three hours at the same station.
**Control panel (engineering)** Control panel (engineering): A control panel is a flat, often vertical, area where control or monitoring instruments are displayed, or an enclosed unit that is the part of a system that users can access, such as the control panel of a security system (also called a control unit). They are found in factories to monitor and control machines or production lines and in places such as nuclear power plants, ships, aircraft and mainframe computers. Older control panels are most often equipped with push buttons and analog instruments, whereas nowadays in many cases touchscreens are used for monitoring and control purposes.
**Central station (electricity)** Central station (electricity): A central station was the name given to the first generation of power stations in the late nineteenth and early twentieth century. Prior to the establishment of electricity grids, central stations were as yet unconnected with one another, each being the sole source of electrical supply to nearby consumers. Central stations played a key role in the development of electric vehicles: the Electric Vehicle Association of America (EVAA) had representatives from 10 central stations when it was founded in 1910. The New England section of the EVAA, founded in 1909, was called the “Electric Vehicle and Central Station Association”. The monthly official journal of the EVAA was called The Central Station, published by Harry Cushing Jr. and edited by Newton Harris. Cushing and Harris published Central Station Management in 1916.
**CDP-glucose 4,6-dehydratase** CDP-glucose 4,6-dehydratase: The enzyme CDP-glucose 4,6-dehydratase (EC 4.2.1.45) catalyzes the chemical reaction CDP-glucose ⇌ CDP-4-dehydro-6-deoxy-D-glucose + H2O. This enzyme belongs to the family of lyases, specifically the hydro-lyases, which cleave carbon-oxygen bonds. This enzyme participates in starch and sucrose metabolism. It employs one cofactor, NAD+. Nomenclature: The systematic name of this enzyme class is CDP-glucose 4,6-hydro-lyase (CDP-4-dehydro-6-deoxy-D-glucose-forming). Other names in common use include: cytidine diphosphoglucose oxidoreductase, and CDP-glucose 4,6-hydro-lyase. Structural studies: As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1RKX and 1WVG.
**Eleuteroschisis** Eleuteroschisis: Eleuteroschisis is asexual reproduction in dinoflagellates in which the parent organism completely sheds its theca (i.e. undergoes ecdysis) either before or immediately following cell division. Neither daughter cell inherits part of the parent theca. In terms of asexual division of motile cells, desmoschisis is generally the case in gonyaulacaleans whereas eleutheroschisis is generally the case in peridinialeans.
**Alogliptin** Alogliptin: Alogliptin, sold under the brand names Nesina and Vipidia, is an oral anti-diabetic drug in the DPP-4 inhibitor (gliptin) class. Alogliptin does not decrease the risk of heart attack and stroke. Like other members of the gliptin class, it causes little or no weight gain, exhibits relatively little risk of hypoglycemia, and has relatively modest glucose-lowering activity. Alogliptin and other gliptins are commonly used in combination with metformin in people whose diabetes cannot adequately be controlled with metformin alone. In April 2016, the U.S. Food and Drug Administration (FDA) added a warning about increased risk of heart failure. It was developed by Syrrx, a company which was acquired by Takeda Pharmaceutical Company in 2005. In 2020, it was the 295th most commonly prescribed medication in the United States, with more than 1 million prescriptions. Medical uses: Alogliptin is a dipeptidyl peptidase-4 inhibitor that decreases blood sugar in a manner similar to the other members of its class. Side effects: Adverse events include mild hypoglycemia based on clinical studies. Alogliptin is not associated with increased weight or an increased risk of cardiovascular events. It may also cause joint pain that can be severe and disabling. In April 2016, the U.S. Food and Drug Administration (FDA) added a warning about increased risk of heart failure. Market access: In December 2007, Takeda submitted a New Drug Application (NDA) for alogliptin to the United States Food and Drug Administration (USFDA), after positive results from Phase III clinical trials. In September 2008, the company also filed for approval in Japan, winning approval in April 2010. The company also filed a Marketing Authorization Application (MAA) outside the United States, which was withdrawn in June 2009 because more data were needed. The first USFDA NDA failed to gain approval and was followed by a pair of NDAs (one for alogliptin and a second for a combination of alogliptin and pioglitazone) in July 2011. In 2012, Takeda received a negative response from the USFDA on both of these NDAs, citing a need for additional data. In 2013, the FDA approved the drug in three formulations: as a stand-alone with the brand-name Nesina, combined with metformin using the name Kazano, and when combined with pioglitazone as Oseni.
**Acetyl-CoA carboxylase** Acetyl-CoA carboxylase: Acetyl-CoA carboxylase (ACC) is a biotin-dependent enzyme (EC 6.4.1.2) that catalyzes the irreversible carboxylation of acetyl-CoA to produce malonyl-CoA through its two catalytic activities, biotin carboxylase (BC) and carboxyltransferase (CT). ACC is a multi-subunit enzyme in most prokaryotes and in the chloroplasts of most plants and algae, whereas it is a large, multi-domain enzyme in the cytoplasm of most eukaryotes. The most important function of ACC is to provide the malonyl-CoA substrate for the biosynthesis of fatty acids. The activity of ACC can be controlled at the transcriptional level as well as by small molecule modulators and covalent modification. The human genome contains the genes for two different ACCs—ACACA and ACACB. Structure: Prokaryotes and plants have multi-subunit ACCs composed of several polypeptides. Biotin carboxylase (BC) activity, biotin carboxyl carrier protein (BCCP), and carboxyl transferase (CT) activity are each contained on a different subunit. The stoichiometry of these subunits in the ACC holoenzyme differs amongst organisms. Humans and most eukaryotes have evolved an ACC with CT and BC catalytic domains and BCCP domains on a single polypeptide. Most plants also have this homomeric form in the cytosol. ACC functional regions, starting from the N-terminus to the C-terminus, are the biotin carboxylase (BC), biotin binding (BB), carboxyl transferase (CT), and ATP-binding (AB) regions. AB lies within BC. Biotin is covalently attached through an amide bond to the long side chain of a lysine residue in BB. As BB is between the BC and CT regions, biotin can easily translocate to both of the active sites where it is required. Structure: In mammals where two isoforms of ACC are expressed, the main structural difference between these isoforms is the extended ACC2 N-terminus containing a mitochondrial targeting sequence. Genes: The polypeptides composing the multi-subunit ACCs of prokaryotes and plants are encoded by distinct genes. In Escherichia coli, accA encodes the alpha subunit of the acetyl-CoA carboxylase, and accD encodes its beta subunit. Mechanism: The overall reaction of ACAC(A,B) proceeds by a two-step mechanism. The first reaction is carried out by BC and involves the ATP-dependent carboxylation of biotin with bicarbonate serving as the source of CO2. The carboxyl group is transferred from biotin to acetyl CoA to form malonyl CoA in the second reaction, which is catalyzed by CT. Mechanism: In the active site, the reaction proceeds with extensive interaction of the residues Glu296 and positively charged Arg338 and Arg292 with the substrates. Two Mg2+ ions are coordinated by the phosphate groups on the ATP, and are required for ATP binding to the enzyme. Bicarbonate is deprotonated by Glu296, although in solution this proton transfer is unlikely, as the pKa of bicarbonate is 10.3. The enzyme apparently manipulates the pKa to facilitate the deprotonation of bicarbonate. The pKa of bicarbonate is decreased by its interaction with the positively charged side chains of Arg338 and Arg292. Furthermore, Glu296 interacts with the side chain of Glu211, an interaction that has been shown to cause an increase in the apparent pKa. Following deprotonation of bicarbonate, the oxygen of the bicarbonate acts as a nucleophile and attacks the gamma phosphate on ATP. The carboxyphosphate intermediate quickly decomposes to CO2 and PO43−.
The PO43− deprotonates biotin, creating an enolate, stabilized by Arg338, that subsequently attacks CO2, resulting in the production of carboxybiotin. The carboxybiotin translocates to the carboxyl transferase (CT) active site, where the carboxyl group is transferred to acetyl-CoA. In contrast to the BC domain, little is known about the reaction mechanism of CT. A proposed mechanism is the release of CO2 from biotin, which subsequently abstracts a proton from the methyl group of acetyl-CoA. The resulting enolate attacks CO2 to form malonyl-CoA. In a competing mechanism, proton abstraction is concerted with the attack of acetyl-CoA. Function: The function of ACC is to regulate the metabolism of fatty acids. When the enzyme is active, the product, malonyl-CoA, is produced, which is a building block for new fatty acids and can inhibit the transfer of the fatty acyl group from acyl-CoA to carnitine by carnitine acyltransferase, which inhibits the beta-oxidation of fatty acids in the mitochondria. Function: In mammals, two main isoforms of ACC are expressed, ACC1 and ACC2, which differ in both tissue distribution and function. ACC1 is found in the cytoplasm of all cells but is enriched in lipogenic tissue, such as adipose tissue and lactating mammary glands, where fatty acid synthesis is important. In oxidative tissues, such as the skeletal muscle and the heart, the ratio of ACC2 expressed is higher. ACC1 and ACC2 are both highly expressed in the liver, where both fatty acid oxidation and synthesis are important. The differences in tissue distribution indicate that ACC1 maintains regulation of fatty acid synthesis whereas ACC2 mainly regulates fatty acid oxidation (beta oxidation). Function: A mitochondrial isoform of ACC1 (mACC1) plays a partially redundant role in lipoic acid synthesis and thus in protein lipoylation by providing malonyl-CoA for mitochondrial fatty acid synthesis (mtFASII) in tandem with ACSF3. Regulation: The regulation of mammalian ACC is complex, in order to control two distinct pools of malonyl-CoA that direct either the inhibition of beta oxidation or the activation of lipid biosynthesis. Mammalian ACC1 and ACC2 are regulated transcriptionally by multiple promoters which mediate ACC abundance in response to the cell's nutritional status. Activation of gene expression through different promoters results in alternative splicing; however, the physiological significance of specific ACC isozymes remains unclear. The sensitivity to nutritional status results from the control of these promoters by transcription factors such as sterol regulatory element-binding protein 1, controlled by insulin at the transcriptional level, and ChREBP, which increases in expression with high-carbohydrate diets. Through a feed-forward loop, citrate allosterically activates ACC. Citrate may increase ACC polymerization to increase enzymatic activity; however, it is unclear if polymerization is citrate's main mechanism of increasing ACC activity or if polymerization is an artifact of in vitro experiments. Other allosteric activators include glutamate and other dicarboxylic acids. Long- and short-chain fatty acyl-CoAs are negative feedback inhibitors of ACC. One such negative allosteric modulator is palmitoyl-CoA. Phosphorylation can result when the hormones glucagon or epinephrine bind to cell surface receptors, but the main cause of phosphorylation is a rise in AMP levels when the energy status of the cell is low, leading to the activation of the AMP-activated protein kinase (AMPK).
AMPK is the main kinase regulator of ACC, able to phosphorylate a number of serine residues on both isoforms of ACC. On ACC1, AMPK phosphorylates Ser79, Ser1200, and Ser1215. Protein kinase A also has the ability to phosphorylate ACC, with a much greater ability to phosphorylate ACC2 than ACC1. Ser80 and Ser1263 on ACC1 may also serve as sites of phosphorylation as a regulatory mechanism. However, the physiological significance of protein kinase A in the regulation of ACC is currently unknown. Researchers hypothesize there are other ACC kinases important to its regulation, as there are many other possible phosphorylation sites on ACC. When insulin binds to its receptors on the cellular membrane, it activates a phosphatase enzyme called protein phosphatase 2A (PP2A) to dephosphorylate the enzyme, thereby removing the inhibitory effect. Furthermore, insulin induces a phosphodiesterase that lowers the level of cAMP in the cell, thus inhibiting PKA, and also inhibits AMPK directly. This protein may use the morpheein model of allosteric regulation. Clinical implications: At the juncture of lipid synthesis and oxidation pathways, ACC presents many clinical possibilities for the production of novel antibiotics and the development of new therapies for diabetes, obesity, and other manifestations of metabolic syndrome. Researchers aim to take advantage of structural differences between bacterial and human ACCs to create antibiotics specific to the bacterial ACC, in efforts to minimize side effects to patients. Promising results for the usefulness of an ACC inhibitor include the finding that mice with no expression of ACC2 have continuous fatty acid oxidation, reduced body fat mass, and reduced body weight despite an increase in food consumption. These mice are also protected from diabetes. A lack of ACC1 in mutant mice is already lethal at the embryonic stage. However, it is unknown whether drugs targeting ACCs in humans must be specific for ACC2. Firsocostat (formerly GS-976, ND-630, NDI-010976) is a potent allosteric ACC inhibitor, acting at the BC domain of ACC. Firsocostat is under development in 2019 (Phase II) by the pharmaceutical company Gilead as part of a combination treatment for non-alcoholic steatohepatitis (NASH), believed to be an increasing cause of liver failure. In addition, plant-selective ACC inhibitors are in widespread use as herbicides, which suggests clinical application against Apicomplexa parasites that rely on a plant-derived ACC isoform, including the malaria parasite. Clinical implications: The heterogeneous clinical phenotypes of the metabolic disease combined malonic and methylmalonic aciduria (CMAMMA) due to ACSF3 deficiency are thought to result from partial compensation of a mitochondrial isoform of ACC1 (mACC1) for deficient ACSF3 in mitochondrial fatty acid synthesis (mtFASII).
**Differential capacitance** Differential capacitance: Differential capacitance in physics, electronics, and electrochemistry is a measure of the voltage-dependent capacitance of a nonlinear capacitor, such as an electrical double layer or a semiconductor diode. It is defined as the derivative of charge with respect to potential. Description: In electrochemistry differential capacitance is a parameter introduced for characterizing electrical double layers: C = dσ/dψ, where σ is surface charge and ψ is electric surface potential. Description: Capacitance is usually defined as the stored charge between two conducting surfaces separated by a dielectric divided by the voltage between the surfaces. Another definition is the rate of change of the stored charge or surface charge (σ) divided by the rate of change of the voltage between the surfaces or the electric surface potential (ψ). The latter is called the "differential capacitance," but usually the stored charge is directly proportional to the voltage, making the capacitances given by the two definitions equal. Description: This type of differential capacitance may be called "parallel plate capacitance," after the usual form of the capacitor. However, the term is meaningful when applied to any two conducting bodies such as spheres, and not necessarily ones of the same size, for example, the elevated terminals of a Tesla wireless system and the earth. These are widely spaced insulated conducting bodies positioned over a spherically conducting ground plane. Description: "The differential capacitance between the spheres is obtained by assuming opposite charges ±q on them..." Another form of differential capacitance refers to single isolated conducting bodies. It is usually discussed in books under the topic of "electrostatics." This capacitance is best defined as the rate of change of charge stored in the body divided by the rate of change of the potential of the body. The definition of the absolute potential of the body depends on what is selected as a reference. This is sometimes referred to as the "self-capacitance" of a body. If the body is a conducting sphere, the self-capacitance is proportional to its radius, and is roughly 1 pF per centimetre of radius.
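The definitions above lend themselves to a short numerical illustration. The sketch below is not from the article: the cubic charge–voltage curve is a purely hypothetical nonlinear capacitor, chosen only to show C = dQ/dV varying with voltage. It also checks the stated rule of thumb that an isolated conducting sphere has a self-capacitance of roughly 1 pF per centimetre of radius.

```python
# Minimal sketch (assumptions: the Q(V) curve below is hypothetical, not a
# physical model of any particular double layer or diode).
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def differential_capacitance(charge, potential):
    """dQ/dV along a charge-potential curve, by finite differences."""
    return np.gradient(charge, potential)

def sphere_self_capacitance(radius_m):
    """Self-capacitance of an isolated conducting sphere: C = 4*pi*eps0*R."""
    return 4 * np.pi * EPS0 * radius_m

V = np.linspace(-1.0, 1.0, 201)          # volts
Q = 100e-12 * V + 30e-12 * V**3          # coulombs; illustrative nonlinear Q(V)
C_diff = differential_capacitance(Q, V)  # farads, varies with V

print(f"C at V=0: {C_diff[100] * 1e12:.1f} pF")   # ~100 pF (linear term only)
print(f"C at V=1: {C_diff[-1] * 1e12:.1f} pF")    # larger, from the cubic term
print(f"1 cm sphere: {sphere_self_capacitance(0.01) * 1e12:.2f} pF")  # ~1.11 pF
```

For a linear capacitor the cubic term vanishes and the differential capacitance reduces to the ordinary constant capacitance, as the article notes.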
**Geodetic datum** Geodetic datum: A geodetic datum or geodetic system (also: geodetic reference datum, geodetic reference system, or geodetic reference frame) is a global datum reference or reference frame for precisely representing the position of locations on Earth or other planetary bodies by means of geodetic coordinates. Datums are crucial to any technology or technique based on spatial location, including geodesy, navigation, surveying, geographic information systems, remote sensing, and cartography. A horizontal datum is used to measure a location across the Earth's surface, in latitude and longitude or another coordinate system; a vertical datum is used to measure the elevation or depth relative to a standard origin, such as mean sea level (MSL). Since the rise of the global positioning system (GPS), the ellipsoid and datum WGS 84 it uses has supplanted most others in many applications. The WGS 84 is intended for global use, unlike most earlier datums. Geodetic datum: Before GPS, there was no precise way to measure the position of a location that was far from universal reference points, such as from the Prime Meridian at the Greenwich Observatory for longitude, from the Equator for latitude, or from the nearest coast for sea level. Astronomical and chronological methods have limited precision and accuracy, especially over long distances. Even GPS requires a predefined framework on which to base its measurements, so WGS 84 essentially functions as a datum, even though it is different in some particulars from a traditional standard horizontal or vertical datum. Geodetic datum: A standard datum specification (whether horizontal or vertical) consists of several parts: a model for Earth's shape and dimensions, such as a reference ellipsoid or a geoid; an origin at which the ellipsoid/geoid is tied to a known (often monumented) location on or inside Earth (not necessarily at 0 latitude 0 longitude); and multiple control points that have been precisely measured from the origin and monumented. Then the coordinates of other places are measured from the nearest control point through surveying. Because the ellipsoid or geoid differs between datums, along with their origins and orientation in space, the relationship between coordinates referred to one datum and coordinates referred to another datum is undefined and can only be approximated. Using local datums, the disparity on the ground between a point having the same horizontal coordinates in two different datums could reach kilometers if the point is far from the origin of one or both datums. This phenomenon is called datum shift. Geodetic datum: Because Earth is an imperfect ellipsoid, local datums can give a more accurate representation of some specific area of coverage than WGS 84 can. OSGB36, for example, is a better approximation to the geoid covering the British Isles than the global WGS 84 ellipsoid. However, as the benefits of a global system outweigh the greater accuracy, the global WGS 84 datum has become widely adopted. History: The spherical nature of Earth was known by the ancient Greeks, who also developed the concepts of latitude and longitude, and the first astronomical methods for measuring them. These methods, preserved and further developed by Muslim and Indian astronomers, were sufficient for the global explorations of the 15th and 16th Centuries. However, the scientific advances of the Age of Enlightenment brought a recognition of errors in these measurements, and a demand for greater precision. 
This led to technological innovations such as the 1735 Marine chronometer by John Harrison, but also to a reconsideration of the underlying assumptions about the shape of Earth itself. Isaac Newton postulated that the conservation of momentum should make Earth oblate (wider at the equator), while the early surveys of Jacques Cassini (1720) led him to believe Earth was prolate (wider at the poles). The subsequent French geodesic missions (1735-1739) to Lapland and Peru corroborated Newton, but also discovered variations in gravity that would eventually lead to the geoid model. History: A contemporary development was the use of the trigonometric survey to accurately measure distance and location over great distances. Starting with the surveys of Jacques Cassini (1718) and the Anglo-French Survey (1784–1790), by the end of the 18th Century, survey control networks covered France and the United Kingdom. More ambitious undertakings such as the Struve Geodetic Arc across Eastern Europe (1816-1855) and the Great Trigonometrical Survey of India (1802-1871) took much longer, but resulted in more accurate estimations of the shape of the Earth ellipsoid. The first triangulation across the United States was not completed until 1899. History: The U.S. survey resulted in the North American Datum (horizontal) of 1927 (NAD27) and the Vertical Datum of 1929 (NAVD29), the first standard datums available for public use. This was followed by the release of national and regional datums over the next several decades. Improving measurements, including the use of early satellites, enabled more accurate datums in the later 20th Century, such as NAD83 in North America, ETRS89 in Europe, and GDA94 in Australia. At this time global datums were also first developed for use in satellite navigation systems, especially the World Geodetic System (WGS 84) used in the U.S. global positioning system (GPS), and the International Terrestrial Reference System and Frame (ITRF) used in the European Galileo system. Dimensions: Horizontal datum The horizontal datum is the model used to measure positions on Earth. A specific point can have substantially different coordinates, depending on the datum used to make the measurement. There are hundreds of local horizontal datums around the world, usually referenced to some convenient local reference point. Contemporary datums, based on increasingly accurate measurements of the shape of Earth, are intended to cover larger areas. The WGS 84 datum, which is almost identical to the NAD83 datum used in North America and the ETRS89 datum used in Europe, is a common standard datum. Dimensions: Vertical datum A vertical datum is a reference surface for vertical positions, such as the elevations of Earth features including terrain, bathymetry, water level, and human-made structures. Dimensions: An approximate definition of sea level is the datum WGS 84, an ellipsoid, whereas a more accurate definition is Earth Gravitational Model 2008 (EGM2008), using at least 2,159 spherical harmonics. Other datums are defined for other areas or at other times; ED50 was defined in 1950 over Europe and differs from WGS 84 by a few hundred meters depending on where in Europe you look. Mars has no oceans and so no sea level, but at least two martian datums have been used to locate places there. Geodetic coordinates: In geodetic coordinates, Earth's surface is approximated by an ellipsoid, and locations near the surface are described in terms of geodetic latitude ( ϕ ), longitude ( λ ), and ellipsoidal height ( h ). 
Earth reference ellipsoid: Defining and derived parameters The ellipsoid is completely parameterised by the semi-major axis a and the flattening f. From a and f it is possible to derive the semi-minor axis b, the first eccentricity e and the second eccentricity e′ of the ellipsoid (a short numerical sketch of these derived quantities is given at the end of this entry). Parameters for some geodetic systems The two main reference ellipsoids used worldwide are the GRS80 and the WGS 84. A more comprehensive list of geodetic systems can be found here. Earth reference ellipsoid: Geodetic Reference System 1980 (GRS80) World Geodetic System 1984 (WGS 84) The Global Positioning System (GPS) uses the World Geodetic System 1984 (WGS 84) to determine the location of a point near the surface of Earth. Datum transformation: The difference in co-ordinates between datums is commonly referred to as datum shift. The datum shift between two particular datums can vary from one place to another within one country or region, and can be anything from zero to hundreds of meters (or several kilometers for some remote islands). The North Pole, South Pole and Equator will be in different positions on different datums, so True North will be slightly different. Different datums use different interpolations for the precise shape and size of Earth (reference ellipsoids). For example, in Sydney there is a 200 metres (700 feet) difference between GPS coordinates configured in GDA (based on global standard WGS 84) and AGD (used for most local maps), which is an unacceptably large error for some applications, such as surveying or site location for scuba diving. Datum conversion is the process of converting the coordinates of a point from one datum system to another. Because the survey networks upon which datums were traditionally based are irregular, and the error in early surveys is not evenly distributed, datum conversion cannot be performed using a simple parametric function. For example, converting from NAD27 to NAD83 is performed using NADCON (later improved as HARN), a raster grid covering North America, with the value of each cell being the average adjustment distance for that area in latitude and longitude. Datum conversion may frequently be accompanied by a change of map projection. Discussion and examples: A geodetic reference datum is a known and constant surface which is used to describe the location of unknown points on Earth. Since reference datums can have different radii and different center points, a specific point on Earth can have substantially different coordinates depending on the datum used to make the measurement. There are hundreds of locally developed reference datums around the world, usually referenced to some convenient local reference point. Contemporary datums, based on increasingly accurate measurements of the shape of Earth, are intended to cover larger areas. The most common reference datums in use in North America are NAD27, NAD83, and WGS 84. Discussion and examples: The North American Datum of 1927 (NAD 27) is "the horizontal control datum for the United States that was defined by a location and azimuth on the Clarke spheroid of 1866, with origin at (the survey station) Meades Ranch (Kansas)." ... The geoidal height at Meades Ranch was assumed to be zero, as sufficient gravity data was not available, and this was needed to relate surface measurements to the datum.
"Geodetic positions on the North American Datum of 1927 were derived from the (coordinates of and an azimuth at Meades Ranch) through a readjustment of the triangulation of the entire network in which Laplace azimuths were introduced, and the Bowie method was used." (http://www.ngs.noaa.gov/faq.shtml#WhatDatum) NAD27 is a local referencing system covering North America. Discussion and examples: The North American Datum of 1983 (NAD 83) is "The horizontal control datum for the United States, Canada, Mexico, and Central America, based on a geocentric origin and the Geodetic Reference System 1980 (GRS80). "This datum, designated as NAD 83 ... is based on the adjustment of 250,000 points including 600 satellite Doppler stations which constrain the system to a geocentric origin." NAD83 may be considered a local referencing system. Discussion and examples: WGS 84 is the World Geodetic System of 1984. It is the reference frame used by the U.S. Department of Defense (DoD) and is defined by the National Geospatial-Intelligence Agency (NGA) (formerly the Defense Mapping Agency, then the National Imagery and Mapping Agency). WGS 84 is used by DoD for all its mapping, charting, surveying, and navigation needs, including its GPS "broadcast" and "precise" orbits. WGS 84 was defined in January 1987 using Doppler satellite surveying techniques. It was used as the reference frame for broadcast GPS Ephemerides (orbits) beginning January 23, 1987. At 0000 GMT January 2, 1994, WGS 84 was upgraded in accuracy using GPS measurements. The formal name then became WGS 84 (G730), since the upgrade date coincided with the start of GPS Week 730. It became the reference frame for broadcast orbits on June 28, 1994. At 0000 GMT September 30, 1996 (the start of GPS Week 873), WGS 84 was redefined again and was more closely aligned with the International Earth Rotation Service (IERS) frame ITRF 94. It was then formally called WGS 84 (G873). WGS 84 (G873) was adopted as the reference frame for broadcast orbits on January 29, 1997. Another update brought it to WGS 84 (G1674). Discussion and examples: The WGS 84 datum, within two meters of the NAD83 datum used in North America, is the only world referencing system in place today. WGS 84 is the default standard datum for coordinates stored in recreational and commercial GPS units. Users of GPS are cautioned that they must always check the datum of the maps they are using. To correctly enter, display, and store map-related coordinates, the datum of the map must be entered into the GPS map datum field. Discussion and examples: Examples Examples of map datums are: WGS 84, 72, 66 and 60 of the World Geodetic System NAD83, the North American Datum which is very similar to WGS 84 NAD27, the older North American Datum, of which NAD83 was basically a readjustment [1] OSGB36 of the Ordnance Survey of Great Britain ETRS89, the European Datum, related to ITRS ED50, the older European Datum GDA94, the Australian Datum JGD2011, the Japanese Datum, adjusted for changes caused by the 2011 Tōhoku earthquake and tsunami Tokyo97, the older Japanese Datum KGD2002, the Korean Datum TWD67 and TWD97, different datums currently used in Taiwan. Discussion and examples: BJS54 and XAS80, old geodetic datums used in China GCJ-02 and BD-09, Chinese encrypted geodetic datums.
PZ-90.11, the current geodetic reference used by GLONASS GTRF, the geodetic reference used by Galileo; currently defined as ITRF2005 CGCS2000, or CGS-2000, the geodetic reference used by BeiDou Navigation Satellite System; based on ITRF97 International Terrestrial Reference Frames (ITRF88, 89, 90, 91, 92, 93, 94, 96, 97, 2000, 2005, 2008, 2014), different realizations of the ITRS. Hong Kong Principal Datum, a vertical datum used in Hong Kong. SAD69 - South American Datum 1969 Plate movement: The Earth's tectonic plates move relative to one another in different directions at speeds on the order of 50 to 100 mm (2.0 to 3.9 in) per year. Therefore, locations on different plates are in motion relative to one another. For example, the longitudinal difference between a point on the equator in Uganda, on the African Plate, and a point on the equator in Ecuador, on the South American Plate, increases by about 0.0014 arcseconds per year. These tectonic movements likewise affect latitude. Plate movement: If a global reference frame (such as WGS84) is used, the coordinates of a place on the surface generally will change from year to year. Most mapping, such as within a single country, does not span plates. To minimize coordinate changes for that case, a different reference frame can be used, one whose coordinates are fixed to that particular plate. Examples of these reference frames are "NAD83" for North America and "ETRS89" for Europe.
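As promised under "Earth reference ellipsoid" above, here is a short numerical sketch of the derived ellipsoid quantities. The relations b = a(1 − f), e² = 2f − f² and e′² = e²/(1 − e²) are the standard ones implied by that section; the WGS 84 defining constants used below (a = 6378137 m, 1/f = 298.257223563) are the commonly published values, included here only as a worked example.

```python
# Sketch of the derived reference-ellipsoid parameters (semi-minor axis and
# eccentricities) from the defining parameters a and f.
import math

def derived_ellipsoid_parameters(a, f):
    """Return (b, e, e') for an ellipsoid with semi-major axis a and flattening f."""
    b = a * (1.0 - f)            # semi-minor axis
    e2 = f * (2.0 - f)           # first eccentricity squared: e^2 = 2f - f^2
    ep2 = e2 / (1.0 - e2)        # second eccentricity squared
    return b, math.sqrt(e2), math.sqrt(ep2)

# WGS 84 defining parameters
a_wgs84 = 6378137.0              # metres
f_wgs84 = 1.0 / 298.257223563

b, e, ep = derived_ellipsoid_parameters(a_wgs84, f_wgs84)
print(f"b  = {b:.3f} m")     # ~6356752.314 m
print(f"e  = {e:.10f}")      # ~0.0818191908
print(f"e' = {ep:.10f}")     # ~0.0820944379
```

The same function applied to the GRS80 defining constants gives values that differ from WGS 84 only beyond the millimetre level, which is why the two ellipsoids are treated as nearly identical in practice.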
**Clostridium perfringens alpha toxin** Clostridium perfringens alpha toxin: Clostridium perfringens alpha toxin is a toxin produced by the bacterium Clostridium perfringens (C. perfringens) and is responsible for gas gangrene and myonecrosis in infected tissues. The toxin also possesses hemolytic activity. Clinical significance: This toxin has been shown to be the key virulence factor in infection with C. perfringens; the bacterium is unable to cause disease without this toxin. Further, vaccination against the alpha toxin toxoid protects mice against C. perfringens gas gangrene. As a result, knowledge about the function of this particular protein greatly aids understanding of myonecrosis. Structure and homology: The alpha toxin has remarkable similarity to toxins produced by other bacteria as well as natural enzymes. There is significant homology with phospholipase C enzymes from Bacillus cereus, C. bifermentans, and Listeria monocytogenes. The C-terminal domain shows similarity with non-bacterial enzymes such as pancreatic lipase, soybean lipoxygenase, and synaptotagmin I. The alpha toxin is a zinc metallophospholipase, requiring zinc for activation. First, the toxin binds to a binding site on the cell surface. The C-terminal C2-like PLAT domain binds calcium and allows the toxin to bind to the phospholipid head-groups on the cell surface. The C-terminal domain enters the phospholipid bilayer. The N-terminal domain has phospholipase activity. This property allows hydrolysis of phospholipids such as phosphatidyl choline, mimicking endogenous phospholipase C. The hydrolysis of phosphatidyl choline produces diacylglycerol, which activates a variety of second messenger pathways. The end-result includes activation of the arachidonic acid pathway and production of thromboxane A2, production of IL-8, platelet-activating factor, and several intercellular adhesion molecules. These actions combine to cause edema due to increased vascular permeability.
**Kernel Normal Form** Kernel Normal Form: Kernel normal form, or KNF, is the coding style used in the development of code for the BSD operating systems. Based on the original KNF concept from the Computer Systems Research Group, it dictates a programming style to which contributed code should adhere prior to its inclusion into the codebase. KNF started out as a codification of how Ken Thompson and Dennis Ritchie formatted the original UNIX C source code. It describes such things as how to name variables, how to use indentation, and the use of ANSI C or K&R C code styles. Each BSD variant has its own KNF rules, which have evolved over time to differ from each other in small ways. Kernel Normal Form: The SunOS kernel and userland also use a similar indentation style that was derived from AT&T style documents and that is sometimes known as Bill Joy Normal Form. The correctness of the indentation of a list of source files can be verified by cstyle, a style checker program written by Bill Shannon.
**Ankyrin-3** Ankyrin-3: Ankyrin-3 (ANK-3), also known as ankyrin-G, is a protein from the ankyrin family that in humans is encoded by the ANK3 gene. Function: The protein encoded by this gene, ankyrin-3, is an immunologically distinct gene product from ankyrins ANK1 and ANK2, and was originally found at the axonal initial segment and nodes of Ranvier of neurons in the central and peripheral nervous systems. Alternatively spliced variants may be expressed in other tissues. Although multiple transcript variants encoding several different isoforms have been found for this gene, the full-length nature of only two has been characterized. Within the nervous system, ankyrin-G is specifically localized to the neuromuscular junction, the axon initial segment and the nodes of Ranvier. Within the nodes of Ranvier, where action potentials are actively propagated, ankyrin-G has long been thought to be the intermediate binding partner to neurofascin and voltage-gated sodium channels. The genetic deletion of ankyrin-G from multiple neuron types has shown that ankyrin-G is required for the normal clustering of voltage-gated sodium channels at the axon hillock and for action potential firing. Disease linkage: The ANK3 protein associates with the cardiac sodium channel Nav1.5 (SCN5A). Both proteins are highly expressed at ventricular intercalated disc and T-tubule membranes in cardiomyocytes. A mutation in the Nav1.5 protein blocks interaction with ANK3 and therefore disrupts surface expression of Nav1.5 in cardiomyocytes, resulting in Brugada syndrome, a type of cardiac arrhythmia. Other mutations in the ANK3 gene may be involved in bipolar disorder and intellectual disability. Ankyrin family: The protein encoded by the ANK3 gene is a member of the ankyrin family of proteins that link integral membrane proteins to the underlying spectrin-actin cytoskeleton. Ankyrins play key roles in activities such as cell motility, activation, proliferation, contact and the maintenance of specialized membrane domains. Most ankyrins are typically composed of three structural domains: an amino-terminal domain containing multiple ankyrin repeats; a central region with a highly conserved spectrin binding domain; and a carboxy-terminal regulatory domain which is the least conserved and subject to variation.
**Clean-burning stove** Clean-burning stove: A clean-burning stove is a stove with reduced toxic and polluting emissions. The term refers to solid-fuel stoves such as wood-burning stoves for either domestic heating, domestic cooking or both. In the context of a cooking stove, especially in lower-income countries, such a stove is distinct from a clean-burning-fuel stove, which typically burns clean fuels such as ethanol, biogas, LPG, or kerosene. Studies into clean-burning cooking stoves in lower-income countries have shown that they reduce the emissions of dangerous particulates and carbon monoxide significantly, use less fuel than regular stoves, and result in fewer burn injuries. However, the emissions some supposedly clean-burning cookstoves produce are still much greater than safe limits, and in several studies in lower income countries they did not appear to be effective at reducing illnesses such as pneumonia induced by breathing polluted air, which may have many sources. Use: Solid fuel stoves designed to be used for domestic heating, those designed to be used for cooking, and those designed for both, can all be described as being clean-burning. In all cases these types of stove are designed to produce lower emissions of particulates and other pollutants than the open fires, traditional stoves, or other appliances they replace. They have been proposed for introduction to developing countries, particularly the cooking type in order to improve air quality, and where they replace open fires they have other advantages such as a reduction in accidents due to burn injuries and house fires. Development: A research summary of the development of clean-burning, domestic, heating stoves was published in 1982 by Flow Research Inc. Such stoves introduced in the 1980s burnt wood pellets rather than logs. By 1986, a directory was available listing 75 such stoves which had satisfied U.S. emission testing. Operation: Clean-burning stoves can be catalytic (using catalytic converters) or noncatalytic. The noncatalytic designs recirculate smoke to achieve fuller combustion. Once the stove is warmed to within operating temperatures, it produces no visible smoke, emitting mostly water and carbon dioxide. Non-catalytic stoves have higher emissions than the new catalytic stoves do when the latter are operated correctly (as of 2003). A conventional domestic heating stove in 1984 emitted particulates amounting to approximately 20 g per kg of fuel (0.3 oz/lb). Research by the United States Environmental Protection Agency (EPA) was reported in 1986 to show that conditions such as asthma, bronchitis and emphysema may be aggravated by the use of conventional heating stoves. Regulation: The EPA was reported as announcing plans in 1987 to encourage manufacturers to design heating stoves with reduced emissions. Clean-burning stoves are authorised for use in smoke control areas in some countries by organisations such as the EPA.
**Widget Workshop** Widget Workshop: Widget Workshop: A Mad Scientist's Laboratory is a hands-on science kit, for use on the computer and off. It was released in 1995 and is one of the more obscure Maxis products. It was designed by Lauren Elliott, co-author of the Where in the World is Carmen SanDiego game series. The game has two main modes. Much like in The Incredible Machine, users can solve a variety of puzzles using a limited selection of parts, or tinker with the freeform mode. Widget Workshop focuses more on the freeform mode than The Incredible Machine does. Widget Workshop: Unlike the Rube Goldberg nature of The Incredible Machine, the parts in Widget Workshop are not restricted to the mechanical or physical. Items include display boxes, graphing windows, random number generators, and mathematical tools ranging from addition and subtraction to Boolean logic gates and trigonometric functions. The items can be connected in a manner similar to dataflow programming. While the arrangement of the items on screen does not matter, the connections do: a numerical constant box could be connected to a mathematical function; connected to a graph, which would display a horizontal line; input as a color value on an RGB monitor; or even used to trigger a sound effect.
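To make the dataflow comparison concrete, here is a minimal sketch. It is illustrative only and not the game's actual implementation: parts are modelled as nodes, connections carry values downstream, and a node's on-screen placement is irrelevant because only the wiring determines the result.

```python
# Tiny dataflow sketch (assumption: node/function names are invented for
# illustration; they do not correspond to anything in Widget Workshop itself).
class Node:
    """A 'part' with a function and zero or more upstream inputs."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def value(self):
        # Pull values from upstream nodes, then apply this node's function.
        return self.fn(*(node.value() for node in self.inputs))

constant = Node(lambda: 42)                           # a numerical constant box
halved   = Node(lambda x: x / 2, constant)            # a mathematical function
display  = Node(lambda x: f"display shows {x}", halved)

print(display.value())   # display shows 21.0
```

Rewiring the constant into a different function, or into a sound-effect trigger, would change only the connections, not the layout, which mirrors the behaviour the article describes.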
**ISO 9362** ISO 9362: ISO 9362 is an international standard for Business Identifier Codes (BIC), a unique identifier for business institutions, approved by the International Organization for Standardization (ISO). BIC is also known as SWIFT-BIC, SWIFT ID, or SWIFT code, after the Society for Worldwide Interbank Financial Telecommunication (SWIFT), which is designated by ISO as the BIC registration authority. BIC was defined originally as Bank Identifier Code and is most often assigned to financial organizations; when it is assigned to a non-financial organization, the code may also be known as a Business Entity Identifier (BEI). These codes are used when transferring money between banks, particularly for international wire transfers, and also for the exchange of other messages between banks. The codes can sometimes be found on account statements. ISO 9362: The overlapping issue between ISO 9362 and ISO 13616 is discussed in the article International Bank Account Number (also called IBAN). The SWIFT network does not require a specific format for the transaction, so the identification of accounts and transaction types is left to agreements of the transaction partners. In the process of the Single Euro Payments Area the European central banks have agreed on a common format based on IBAN and BIC, including an XML-based transmission format for standardized transactions. TARGET2 is a joint gross clearing system in the European Union that does not require the SWIFT network for transmission (see EBICS). The TARGET directory lists all the BICs of the banks that are attached to the TARGET2 network, being a subset of the SWIFT directory of BICs. History: There are five versions: ISO 9362:1987 (withdrawn), ISO 9362:1994 (withdrawn), ISO 9362:2009 (withdrawn), ISO 9362:2014 (withdrawn), and ISO 9362:2022 (valid). ISO 9362 is based on the industry standard created by SWIFT around 1975. Structure: The previous edition is ISO 9362:2009 (dated 2009-10-01). The SWIFT code is 8 or 11 characters, made up of: 4 letters: institution code or bank code. 2 letters: ISO 3166-1 alpha-2 country code (exceptionally, SWIFT has assigned the code XK to the Republic of Kosovo, which does not have an ISO 3166-1 country code). 2 letters or digits: location code. If the second character is "0", then it is typically a test BIC as opposed to a BIC used on the live network. If the second character is "1", then it denotes a passive participant in the SWIFT network. If the second character is "2", then it typically indicates a reverse billing BIC, where the recipient pays for the message as opposed to the more usual mode whereby the sender pays for the message. 3 letters or digits: branch code, optional ('XXX' for primary office). Where an eight-character code is given, it may be assumed that it refers to the primary office. SWIFT Standards, a division of The Society for Worldwide Interbank Financial Telecommunication (SWIFT), handles the registration of these codes. Because SWIFT originally introduced what was later standardized as Business Identifier Codes (BICs), they are still often called SWIFT addresses or codes. The 2009 update of ISO 9362 broadened the scope to include non-financial institutions; before then BIC was commonly understood to be an acronym for Bank Identifier Code. There are over 7,500 "live" codes (for partners actively connected to the SWIFT network) and an estimated 10,000 additional BIC codes which can be used for manual transactions.
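As an illustration of the 8-or-11-character layout just described, here is a small parsing sketch. It is not part of the standard: the helper name and the regular expression are assumptions made for this example, and the real registration rules go beyond this simple format check. It splits a BIC into institution, country, location and branch fields, defaulting the branch to 'XXX' for a primary office.

```python
# Sketch only: splits a BIC into the fields described above (4-letter
# institution code, 2-letter country code, 2-character location code,
# optional 3-character branch code).  Not an official validator.
import re

BIC_RE = re.compile(r"^([A-Z]{4})([A-Z]{2})([A-Z0-9]{2})([A-Z0-9]{3})?$")

def parse_bic(bic):
    """Return (institution, country, location, branch); branch defaults to 'XXX'."""
    m = BIC_RE.match(bic.strip().upper())
    if not m:
        raise ValueError(f"not a well-formed 8- or 11-character BIC: {bic!r}")
    institution, country, location, branch = m.groups()
    return institution, country, location, branch or "XXX"

# Examples taken from this article's Examples section:
print(parse_bic("DEUTDEFF"))      # ('DEUT', 'DE', 'FF', 'XXX')
print(parse_bic("DSBACNBXSHA"))   # ('DSBA', 'CN', 'BX', 'SHA')
```

Note that a format check like this says nothing about whether a code is actually registered or "live"; that information comes from the SWIFT directories mentioned above.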
The 2009 version was replaced by ISO 9362:2014 (dated 2014-12-01), which has itself since been superseded by ISO 9362:2022. Examples: Deutsche Bank is an international bank with its head office in Frankfurt, Germany. The SWIFT code for its primary office is DEUTDEFF: DEUT identifies Deutsche Bank, DE is the country code for Germany, and FF is the code for Frankfurt. Deutsche Bank uses an extended code of 11 characters and has assigned individual extended codes to branches or processing areas. This allows a payment to be directed to a specific office. For example, DEUTDEFF500 would direct the payment to an office of Deutsche Bank in Bad Homburg. Examples: Nedbank is a primarily South African bank with its head office in Johannesburg. The SWIFT code for its primary office is NEDSZAJJ: NEDS identifies Nedbank, ZA is the country code for South Africa, and JJ is the code for Johannesburg. Nedbank has not implemented the extended 11-character code, and all SWIFT transfers to its accounts are directed to the primary office for processing. Transfer interfaces that require an 11-character code would enter NEDSZAJJXXX. Examples: Danske Bank is a primarily Danish bank with its head office in Copenhagen. The SWIFT code for its primary office is DABADKKK: DABA identifies Danske Bank, DK is the country code for Denmark, and KK is the code for Copenhagen. UniCredit Banca is a primarily Italian bank with its head office in Milan. The SWIFT code for its primary office is UNCRITMM: UNCR identifies UniCredit Banca, IT is the country code for Italy, and MM is the code for Milan. Dah Sing Bank is a bank based in Hong Kong that has five branches in mainland China (its primary mainland China branch is in Shenzhen). The SWIFT code for the branch in Shanghai is DSBACNBXSHA: DSBA identifies Dah Sing Bank, CN is the country code for China, and BX is the location code. It uses the 11-character extended code, and SHA identifies the Shanghai branch. BDO Unibank is the biggest bank in the Philippines, with its head office in Makati. The SWIFT code for BDO is BNORPHMM, and all BDO branches share the same SWIFT code. Examples: BNOR identifies BDO Unibank, PH is the country code for the Philippines, and MM is the code for Metro Manila, of which Makati is a part. Note that one bank can appear to have more than one bank identifier in a given country for separation purposes. Bank of East Asia separates its representative branch in the US and its US-based operations for local customers into BEASUS33xxx (following the code used in its home country) and BEAKUS33xxx respectively. This differs from its local mainland China operations, which are also BEASCNxxxxx, following Hong Kong rather than having a separate identifier code. Examples: Another example of a bank using more than one code is Bank of America in the United States. For US dollar denominated wires, its SWIFT code is BOFAUS3N. The SWIFT code for wires sent to Bank of America in the United States in foreign (non-US dollar) currencies is BOFAUS6S. In the past, SEPA payments required both BIC and IBAN. Since 2016-02-01, only the IBAN is needed inside SEPA (the European Union and some other countries). Twelve-character SWIFTNet FIN address based on BIC: To identify endpoints on its network, SWIFT also uses twelve-character codes derived from the BIC of the institution. Such a code consists of the 'BIC8', followed by a one-character code that identifies the Logical Terminal (LT), also referred to as the "local destination" or "Logical Terminal address", and the three-character branch code.
While 'BIC12's are not part of the ISO standard and are only relevant in the context of the messaging platform, they play a role in FIN system messaging. According to SWIFT, Logical Terminals are the "entity through which users send and receive FIN messages", and thus may play a role in the routing of a message. Usage: Business Identifier Codes are primarily used to identify the financial and non-financial institutions involved in day-to-day business transactions, across the transaction lifecycle, whether one or several institutions take part. Example: BICs are embedded within SWIFT messages themselves. In MT103, the message type used for customer cash transfers, BICs can be found under tags such as 50a (ordering customer), 56a (intermediary) and 57a (account with institution).
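Referring back to the twelve-character SWIFTNet FIN address described above, the following small sketch simply assembles a 'BIC12' from a BIC8, a one-character Logical Terminal code and a branch code. The helper name, the example LT code 'A' and the defaulting of the branch to "XXX" are assumptions for illustration; real addresses are assigned by SWIFT.

```python
def fin_address(bic8: str, lt_code: str, branch: str = "XXX") -> str:
    """Assemble a 12-character SWIFTNet FIN address: BIC8 + LT code + branch (sketch only)."""
    if len(bic8) != 8 or len(lt_code) != 1 or len(branch) != 3:
        raise ValueError("expected BIC8 (8 chars), LT code (1 char), branch (3 chars)")
    return f"{bic8}{lt_code}{branch}".upper()

# Example with a hypothetical Logical Terminal code 'A':
print(fin_address("DEUTDEFF", "A"))  # -> DEUTDEFFAXXX
```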
**Hairy root culture** Hairy root culture: Hairy root culture, also called transformed root culture, is a type of plant tissue culture that is used to study plant metabolic processes or to produce valuable secondary metabolites or recombinant proteins, often with plant genetic engineering. A naturally occurring soil bacterium, Agrobacterium rhizogenes, that contains root-inducing plasmids (also called Ri plasmids) can infect plant roots and cause them to produce opines (a food source for the bacterium) and to grow abnormally. The abnormal roots are particularly easy to culture in artificial media because, in contrast to adventitious roots, no hormones are needed, and they are neoplastic, with indefinite growth. The neoplastic roots produced by A. rhizogenes infection have a high growth rate (compared to untransformed adventitious roots), as well as genetic and biochemical stability. Hairy root culture: Currently the main constraint on commercial utilization of hairy root culture is the development and up-scaling of appropriate vessels (bioreactors) for the delicate and sensitive hairy roots. Some of the applied research on the utilization of hairy root cultures has been, and is being, conducted at VTT Technical Research Centre of Finland Ltd. Other labs working on hairy roots are the phytotechnology lab of Amiens University and the Arkansas Biosciences Institute. Metabolic studies: Hairy root cultures can be used for phytoremediation, and are particularly valuable for studies of the metabolic processes involved in phytoremediation. Further applications include detailed studies of fundamental molecular, genetic and biochemical aspects of genetic transformation and of hairy root induction. Genetically transformed cultures: The Ri plasmids can be engineered to also contain T-DNA, used for genetic transformation (biotransformation) of the plant cells. The resulting genetically transformed root cultures can produce high levels of secondary metabolites, comparable to or even higher than those of intact plants. Use in plant propagation: Hairy root culture can also be used for the regeneration of whole plants and for the production of artificial seeds.
**Endonuclease** Endonuclease: In molecular biology, endonucleases are enzymes that cleave the phosphodiester bond within a polynucleotide chain (namely DNA or RNA). Some, such as deoxyribonuclease I, cut DNA relatively nonspecifically (without regard to sequence), while many, typically called restriction endonucleases or restriction enzymes, cleave only at very specific nucleotide sequences. Endonucleases differ from exonucleases, which cleave nucleotides from the ends of a polynucleotide chain rather than from its interior (endo) portion. Some enzymes known as "exo-endonucleases", however, are not limited to either nuclease function, displaying qualities that are both endo- and exo-like. Evidence suggests that endonuclease activity experiences a lag compared to exonuclease activity. Restriction enzymes are endonucleases from eubacteria and archaea that recognize a specific DNA sequence. The nucleotide sequence recognized for cleavage by a restriction enzyme is called the restriction site. Typically, a restriction site is a palindromic sequence about four to six nucleotides long. Most restriction endonucleases cleave the DNA strand unevenly, leaving complementary single-stranded ends. These ends can reconnect through hybridization and are termed "sticky ends". Once paired, the phosphodiester bonds of the fragments can be joined by DNA ligase. Hundreds of restriction endonucleases are known, each attacking a different restriction site. DNA fragments cleaved by the same endonuclease can be joined together regardless of the origin of the DNA. Such DNA is called recombinant DNA: DNA formed by the joining of genes into new combinations. Restriction endonucleases (restriction enzymes) are divided into three categories, Type I, Type II, and Type III, according to their mechanism of action. These enzymes are often used in genetic engineering to make recombinant DNA for introduction into bacterial, plant, or animal cells, as well as in synthetic biology. One of the more famous endonucleases is Cas9. Categories: There are three categories of restriction endonucleases, distinguished by how they cleave their specific sequences. Types I and III are large multisubunit complexes that include both endonuclease and methylase activities. Type I enzymes can cleave at random sites about 1000 base pairs or more from the recognition sequence and require ATP as an energy source. Type II enzymes behave slightly differently and were first isolated by Hamilton Smith in 1970. They are simpler versions of the endonucleases and require no ATP in their degradation processes. Some examples of Type II restriction endonucleases include BamHI, EcoRI, EcoRV, HindIII, and HaeIII. Type III enzymes, however, cleave the DNA about 25 base pairs from the recognition sequence and also require ATP in the process. Notations: The commonly used notation for restriction endonucleases is of the form "VwxyZ", where "Vwx" are, in italics, the first letter of the genus and the first two letters of the species in which the restriction endonuclease is found, for example, Escherichia coli, Eco, and Haemophilus influenzae, Hin. This is followed by the optional, non-italicized symbol "y", which indicates the type or strain identification, for example, EcoR for E. coli strains bearing the drug resistance transfer factor RTF-1, EcoB for E. coli strain B, and Hind for H. influenzae strain d.
Finally, when a particular type or strain has several different restriction endonucleases, these are identified by Roman numerals; thus, the restriction endonucleases from H. influenzae strain d are named HindI, HindII, HindIII, etc. Another example: "HaeII" and "HaeIII" refer to the bacterium Haemophilus aegyptius (strain not specified), restriction endonucleases number II and number III, respectively. The restriction enzymes used in molecular biology usually recognize short target sequences of about 4–8 base pairs. For instance, the EcoRI enzyme recognizes and cleaves the sequence 5'–GAATTC–3'. Notations: Restriction endonucleases come in several types. A restriction endonuclease typically has a recognition site and a cleavage pattern (typically of nucleotide bases: A, C, G, T). If the recognition site is outside the region of the cleavage pattern, the restriction endonuclease is referred to as Type I. If the recognition sequence overlaps with the cleavage sequence, the restriction endonuclease is Type II. Further discussion: Restriction endonucleases may be found that cleave standard dsDNA (double-stranded DNA), ssDNA (single-stranded DNA), or even RNA. This discussion is restricted to dsDNA; however, it can be extended to the following: standard dsDNA; non-standard DNA, such as Holliday junctions, triple-stranded DNA and quadruple-stranded DNA (G-quadruplex); double-stranded hybrids of DNA and RNA (one strand is DNA, the other strand is RNA); and synthetic or artificial DNA (for example, containing bases other than A, C, G, T; refer to the work of Eric T. Kool). For research with synthetic codons, refer to the research by S. Benner, and for enlarging the amino acid set in polypeptides, and thus the proteome, see the research by P. Schultz. In addition, research is now underway to construct synthetic or artificial restriction endonucleases, especially ones with recognition sites that are unique within a genome. Restriction endonucleases or restriction enzymes typically cleave in one of two ways: blunt-ended or sticky-ended patterns. Furthermore, there exist DNA/RNA non-specific endonucleases, such as those found in Serratia marcescens, which act on dsDNA, ssDNA, and RNA. DNA repair: Endonucleases play a role in DNA repair. AP endonuclease, specifically, catalyzes the incision of DNA exclusively at AP sites, and therefore prepares DNA for subsequent excision, repair synthesis and DNA ligation. For example, when depurination occurs, the lesion leaves a deoxyribose sugar with a missing base. The AP endonuclease recognizes this sugar, cuts the DNA at this site, and then allows DNA repair to continue. E. coli cells contain two AP endonucleases, endonuclease IV (endoIV) and exonuclease III (exoIII), while in eukaryotes there is only one AP endonuclease. DNA repair: DNA crosslink repair Repair of DNA in which the two complementary strands are joined by an interstrand covalent crosslink requires multiple incisions in order to disengage the strands and remove the damage. Incisions are required on both sides of the crosslink and on both strands of the duplex DNA. In mouse embryonic stem cells, an intermediate stage of crosslink repair involves production of double-strand breaks. MUS81/EME1 is a structure-specific endonuclease involved in converting interstrand crosslinks to double-strand breaks in a DNA replication-dependent manner.
After the introduction of a double-strand break, further steps are required to complete the repair process. If a crosslink is not properly repaired it can block DNA replication. DNA repair: Thymine dimer repair Exposure of bacteriophage (phage) T4 to ultraviolet irradiation induces thymine dimers in the phage DNA. The phage T4 denV gene encodes endonuclease V, which catalyzes the initial steps in the repair of these UV-induced thymine dimers. Endonuclease V first cleaves the glycosylic bond on the 5' side of a pyrimidine dimer and then catalyzes cleavage of the DNA phosphodiester bond that originally linked the two nucleotides of the dimer. Subsequent steps in the repair process involve removal of the dimer remnants and repair synthesis to fill in the resulting single-strand gap, using the undamaged strand as template. Common endonucleases: Below are tables of common prokaryotic and eukaryotic endonucleases. Mutations: Xeroderma pigmentosum is a rare, autosomal recessive disease caused by a defective UV-specific endonuclease. Patients with mutations are unable to repair DNA damage caused by sunlight. Sickle cell anemia is a disease caused by a point mutation; the sequence altered by the mutation eliminates the recognition site for the restriction endonuclease MstII. Mutations in the tRNA splicing endonuclease cause pontocerebellar hypoplasia. Pontocerebellar hypoplasias (PCH) represent a group of neurodegenerative autosomal recessive disorders that are caused by mutations in three of the four different subunits of the tRNA-splicing endonuclease complex.
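Returning to the recognition sites discussed above, here is a minimal sketch of how a restriction site search and cut might be modelled, using the EcoRI sequence 5'-GAATTC-3' as the example. The cut offset (between G and AATTC on the top strand), the function names and the sample sequence are simplifications chosen for illustration.

```python
RECOGNITION_SITE = "GAATTC"  # EcoRI recognition sequence (palindromic)
CUT_OFFSET = 1               # EcoRI cuts between G and AATTC, leaving sticky ends

def find_sites(seq: str, site: str = RECOGNITION_SITE) -> list:
    """Return 0-based start positions of every recognition site in seq."""
    seq = seq.upper()
    positions, start = [], seq.find(site)
    while start != -1:
        positions.append(start)
        start = seq.find(site, start + 1)
    return positions

def digest(seq: str) -> list:
    """Cut seq at each EcoRI site (top strand only), returning the resulting fragments."""
    cuts = [p + CUT_OFFSET for p in find_sites(seq)]
    bounds = [0] + cuts + [len(seq)]
    return [seq[a:b] for a, b in zip(bounds, bounds[1:])]

if __name__ == "__main__":
    dna = "ATGAATTCGGCTAGAATTCAA"      # toy sequence with two EcoRI sites
    print(find_sites(dna))              # [2, 13]
    print(digest(dna))                  # ['ATG', 'AATTCGGCTAG', 'AATTCAA']
```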
**Checkdown** Checkdown: In American football, a checkdown pass is a short, accurate pass, typically to a running back or tight end, thrown as a last option when the primary option(s) designed by the play call are covered. The term means that the quarterback has "checked down" his list of receivers. Because the quarterback does not look for the checkdown pass until after scanning for open receivers down the field for about 3–4 seconds, the defensive line has had time to enter the backfield, so a checkdown pass is often thrown in the face of pressure from the defensive line. Alternatively, if the quarterback is inexperienced or the defensive team has sent a blitz, with linebackers and/or defensive backs also looking to sack the quarterback, the checkdown may turn out to be the quarterback's second or even first look. A screen pass and a checkdown are different, because a screen pass is designed primarily to go to the short route after drawing in pass rushers.
**FMN reductase (NAD(P)H)** FMN reductase (NAD(P)H): FMN reductase (NAD(P)H) (EC 1.5.1.39, FRG) is an enzyme with systematic name FMNH2:NAD(P)+ oxidoreductase. This enzyme catalyses the following chemical reaction: FMNH2 + NAD(P)+ ⇌ FMN + NAD(P)H + H+. This enzyme contains FMN.
**Kūdō** Kūdō: Kūdō (空道, Kūdō) is a Japanese hybrid martial art. It is a full-contact combat sport that aims to achieve both safety and practicality, a style of mixed martial arts practised with headgear and gloves. It features stand-up striking, with throwing and grappling techniques also allowed in competition, including restraints, locks and chokeholds. Kūdō is a budo martial art that originated in the Daido Juku school. Daido Juku is an organization founded by Azuma Takashi in 1981. The relationship between the Daido Juku school and kudo is similar to that between the Kodokan school and judo. Kūdō: Kudo is found in more than 100 locations in Japan and is practised in more than 50 countries around the world. Although it is a martial art created by the Japanese, Russia currently has the largest number of Kudo athletes, eclipsing the number of Japanese practitioners. History: Takashi Azuma and conception of Daido Juku Takashi Azuma (東 孝, Azuma Takashi) (born 1949 in Kesennuma, Japan; died 3 April 2021) was the founder of Kūdō and the President of the Kudo International Federation. He held a 9th degree black belt in Kyokushin Budokai (awarded by Jon Bluming), a 3rd degree black belt in judo, and a 9th degree black belt in Kūdō. History: Azuma came into contact with budo for the first time when he joined the judo club of his school in Kesennuma at the age of 16 in 1965. In 1972, after his service in the Japanese armed forces, he took up Kyokushin karate. That same year he founded a Kyokushin club at Waseda University. In 1981, Azuma founded his own martial art because he was unhappy with some characteristics of Kyokushin. Azuma was bothered that serious head injuries are common in Kyokushin. He was also of the opinion that physically smaller fighters are at a disadvantage compared to bigger fighters. In particular, he had his own experience of receiving so many heavy blows that his nose was bent out of place. In his book, he writes that he was "good at grabbing the collar and head-butting in a fight" and felt the full-contact rules of Kyokushin to be very limiting. One of the fundamental precepts of Daidojuku was the creation of a realistic and versatile fighting style that encompassed effective offensive and defensive techniques, including head punches, elbows, headbutts, and throws and joint-locks from judo combined with other ground fighting techniques. Azuma's early development of the martial art was at first a hybrid of Kyokushin karate and judo. Kyokushin was the basis; however, the regulations changed dramatically. The style was not to be limited by the boundaries of a single style, but would use techniques from different martial arts, not just the initial judo and karate mix. Later, in the 1980s and 1990s, the style began to incorporate techniques from boxing, muay thai, jujitsu, wrestling and others, all merged into the style of Daidojuku. Protective equipment was introduced, which allowed hand techniques to the head and provided sufficient protection to the head during kicking techniques. History: Early Daido Juku and Kakutō karate The Daido Juku organization became operational on February 17, 1981. The first dojo was opened in Miyagi prefecture under the name "Karate-do Daidojuku". The in-house martial arts style was also known as Kakutō karate (格闘空手, "fighting karate") and/or Combat Karate Daidojuku. In the same year, Daidojuku's students made their competition debut at the 1981 Hokutoki Karate Championships.
History: Daidojuku played a part in the late-'80s and early-'90s martial arts boom in Japan, being one of the few mixed martial arts organizations in the martial arts industry at the time. It is credited with helping K-1 and the "U-series" promotions reach the Japanese mainstream. Minoki Ichihara was a Kakuto Karate practitioner from Daidojuku who fought in UFC 2, becoming the first Japanese fighter to participate in the UFC at a time when Japanese martial arts organizations were reluctant to take on the challenge of the UFC. However, Ichihara would lose to Royce Gracie. History: In the 1990s, Daidojuku held kickboxing events known as THE WARS, which were centred on a "gloved" ruleset of full-contact karate and showcased Daido Juku's top talents. In the media, there were many voices calling for a dream confrontation between Kenichi Osada, who was the ace of Daido Juku, and Masaaki Satake of Seidokaikan. Athletes belonging to Daido Juku appeared on the covers of various martial arts magazines, and in the martial arts world of the time Daidojuku, along with Seidokaikan, stood at the forefront of Japanese martial arts. History: In 1995 the name of the "Karate Do Daidojuku" association officially changed to "Kakuto Karate International Federation Daidojuku" (KKIF). Kudo, the new direction and present: From the mid-1990s, Daidojuku moved away from media-centric promotion and returned to its original course of developing the "safe yet practical" style that Daido Juku had been aiming for since its establishment. History: In 2001, Takashi Azuma, founder and president of Daidojuku, held an official press conference where he announced that the style promoted by Daidojuku would now be referred to as Kudo, becoming its own budō martial art. The relationship between the Daido Juku school and kudo is similar to that between the Kodokan school and judo. In the same year, Daidojuku held the first world championship competition to great success, launching Kudo onto the international stage. History: Based on the philosophy of budō, Kudo has spread worldwide, and all its instructors and leaders are certified and registered under the Kudo International Federation, also known as K.I.F. It is unique among fighting organizations in positioning itself as a social and physical education body, operating with the support of the Japanese Ministry of Education, Culture, Sports, Science and Technology. History: On April 3, 2021, Azuma died of stomach cancer, leaving the position of president of Daido Juku to Kenichi Osada. Exchange with other organizations: In the 1990s, Daidojuku exchanged talent with numerous martial arts organizations, ceasing the activity after the conception of Kudo. In the 1990s Daidojuku had agreements with Submission Arts Wrestling (SAW), and after that would interact with entities from Wushu, Sanshou, Aikido S.A., Paraestra and Hatenkai. In addition, certain fighters from Daidojuku would go on to fight in other martial arts organizations, such as RISE. Exchange with other organizations: Daidojuku used to compete against other martial arts bodies, such as Nippon Kempo and Shooto. In the past, the organization has also had clashes with practitioners of Muay Thai, Sanshou and Taekwondo. International spread: Kudo has more than 100 locations in Japan and is practised in more than 50 countries around the world. International spread: Kudo in Russia In 1991, the first Russian section of Daido Juku karate-do was opened in Vladivostok.
The founder of the style, Azuma Takashi, visited Moscow, after which a foreign branch of the Kudo Federation was opened there. On July 7, 1994, the Moscow Federation of Daido Juku Karate-do was registered by the Moscow Justice Department. In May 1994, the Moscow Cup, the first international Daido Juku tournament in Russia, was organized and held in Moscow. The first victory of Russian athletes in Japan took place in 1996, when Alexey Kononenko took 1st place in his weight category. In 2004, the Russian Kudo Federation was established. The official championship of Russia in kudo has been held since 2001; in that first year, Russian athletes won two gold, three silver and two bronze medals. Russian kudo fighters headed the refereeing team at the 2nd international tournament, the "Baltic States Open Cup", which took place in 2003 and brought together athletes from Russia, Japan, the Baltic countries, Azerbaijan, Italy, Germany and Poland. International spread: The 1st Kudo World Cup was held in 2011 in Moscow. On January 13, 2013, Roman Anashkin qualified for 6th dan in kudo, becoming the first non-Japanese to receive such a degree. Kudo in Iran Iran has a strong position in world karate. Kudo was founded in Iran by a person named Baghdarnia, who later claimed that Daido Juku was his own style. Tensions between Azuma and Baghdarnia escalated, to the extent that Azuma denounced him in a speech and cancelled his representation. After that, Mohammad Shahriari represented Iran. Shahriari resigned from the representation in 2021, and one of Azuma's disciples, Ardalan, assumed the presidency of Kudo in Iran. Overview: The goal of Kūdō is to come as close as possible to realistic fighting, with appropriate protective equipment. To achieve this, Kūdō is fought with very few regulations and has specialised techniques and actions. The techniques of Kūdō cover the entire spectrum of a real fight: stand-up fighting, throwing techniques, grappling and ground fighting. The training of Kūdō consists primarily of kihon, general fitness training and combat. The kata of Kyokushin were eliminated without replacement. Kūdō is a comprehensive martial art and philosophy, in which both physical and mental development are considered. Traditional Japanese etiquette in budō (reigi) is followed: there are certain Japanese greeting rituals, a traditional training keikogi is worn, the names of the techniques are in Japanese, and so on. Dojo kun: Dōjō kun is a Japanese martial arts term literally meaning (training hall) rules. They are generally posted at the entrance to a dōjō or at the "front" of the dojo (shomen) and outline behaviour that is expected and disallowed; kudo has its own dojo kun. Equipment: Kudo athletes, or kudoka, wear an official uniform, the "dogi" or "kudogi" (similar to a judo gi and resistant to throwing, but with shorter sleeves than a traditional karate gi). This design is well suited to gripping and throwing techniques. Kudo practitioners use white and blue gi colors for easy identification. All athletes must wear the dogi, headgear, kudo bandages (hand wraps), a mouthguard, K.I.F.-approved gloves (which protect the knuckles but leave the fingers free and uncovered to allow grappling) and a special K.I.F.-approved Plexiglas visor to protect fighters from severe facial damage and brain trauma. Underage athletes, in addition to the kudo gi, the Plexiglas helmet and the gloves, must wear shin guards and a chest protector. Regulations on the protection of underage athletes may vary from tournament to tournament.
Combat Categories: Athletes are not divided into categories by weight alone, but by physical index. The physical index (PI) is the sum of the athlete's weight, in kilograms, and height, in centimeters. Combat Categories: This system of defining competition categories is the only one of its kind. Usually, in other combat sports and martial arts, the categories are classified according to weight in kilograms alone. This category system tries to take into account not only weight but also height, which generally means longer reach and therefore an advantage at long range. Regulations: There are definitive base rules in Kudo. Although each tournament uses its own rules, they too are rooted in the base rules. The regulations used at Kudo world championships state that ground fighting is allowed only twice per match and for no more than thirty seconds at a time, and that blows to the back and/or groin are prohibited. Competitions are held on a 13×13 meter tatami mat with an internal 9×9 meter square that forms the fighting area. At the four corners of the contest area there are four judges, plus one referee inside the tatami. The principle by which points are awarded is based on the force of the blows, since that is a consequence of technique and of one's physical abilities. Points are awarded not for technique as such but for effectiveness, based on how much the opponent has felt the blow. The rating runs from 1 to 8. The points, in Japanese, are called koka, yuko, wazari and ippon; they are worth 1 point, 2 points, 4 points and 8 points respectively (a fighter who scores 8 points is awarded the victory); a short illustrative sketch of the physical index and this scoring appears at the end of this article. In addition, victory can come by submission or choke-out, by knock-out, or by whichever fighter has scored more points at the end of the match. In the event of a tie, either a decision is made or another match takes place. Famous practitioners: Semmy Schilt, mixed martial artist and kickboxer; Hokutoki champion in 1996 and 1997. Roman Anashkin, prestigious Russian martial artist, holds a 7th dan black belt in Kudo. Yoshinori Nishi, mixed martial arts fighter and founder of the Wajyutsu Keishukai gym; Hokutoki champion in 1984 and 1985. Lee Hasdell, mixed martial artist and kickboxer. Katsumasa Kuroki, professional wrestler. Minoki Ichihara, mixed martial artist and Ultimate Fighting Championship competitor. Famous practitioners: Kolyan Edgar, Honored Master of Sports of Russia in Kudo, Candidate for Master of Sports of Russia in Army hand-to-hand fighting, Candidate for Master of Sports of Russia in Combat Sambo, two-time world champion in Kudo (2005, 2009), silver medalist of the World Kudo Championship (2014), bronze medalist of the World Kudo Championship (2018), winner of the Kudo World Cup (2011), European Kudo champion (2008), five-time Russian Kudo champion. Famous practitioners: Hisaki Kato, mixed martial artist who competes in Bellator MMA and kickboxing. Akshay Kumar, Bollywood superstar and martial artist. Taapsee Pannu, Bollywood actress and martial artist. Mehul Vora, 5th dan black belt in Kudo and Indian celebrity coach. Isaac Almeida, writer and actor, Judd Reid Shihan's first Mexican student. Vladimir Zorin, coach of the Russian national Kudo team, judge of the international category in eastern martial arts (kudo), vice-president of the Russian Kudo Federation, black belt, 6th dan in kudo, author of the book "Fundamentals of Kudo". Together with Roman Anashkin he is one of the progenitors of Kudo in Russia.
Irina Bykova - World Kudo Champion 2005 in the women's absolute weight category, 18-time Russian Kudo champion, European Kudo champion 2008, holds 4th Dan in Kudo.
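The physical index and the point values described above are simple arithmetic, so here is the small sketch referred to earlier. The athlete data, function names and the tally logic are invented for illustration; only the index definition (weight in kg plus height in cm) and the point values (koka 1, yuko 2, wazari 4, ippon 8, with 8 points ending the bout) come from the text.

```python
POINT_VALUES = {"koka": 1, "yuko": 2, "wazari": 4, "ippon": 8}  # per the ruleset above

def physical_index(weight_kg: float, height_cm: float) -> float:
    """Kudo physical index: weight in kilograms plus height in centimetres."""
    return weight_kg + height_cm

def match_score(awards: list) -> int:
    """Tally the points awarded in a bout; reaching 8 points ends the match."""
    total = 0
    for name in awards:
        total += POINT_VALUES[name]
        if total >= 8:
            break  # victory by points
    return total

# A hypothetical 75 kg, 178 cm athlete has a physical index of 253.
print(physical_index(75, 178))                           # 253.0 -> 253
print(match_score(["yuko", "wazari", "koka", "yuko"]))   # 9 (bout ends once 8 is reached)
```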
**Option style** Option style: In finance, the style or family of an option is the class into which the option falls, usually defined by the dates on which the option may be exercised. The vast majority of options are either European or American (style) options. These options—as well as others where the payoff is calculated similarly—are referred to as "vanilla options". Options where the payoff is calculated differently are categorized as "exotic options". Exotic options can pose challenging problems in valuation and hedging. American and European options: The key difference between American and European options relates to when the options can be exercised: A European option may be exercised only at the expiration date of the option, i.e. at a single pre-defined point in time. An American option, on the other hand, may be exercised at any time before the expiration date. For both, the payoff—when it occurs—is given by max{S − K, 0} for a call option and max{K − S, 0} for a put option, where K is the strike price and S is the spot price of the underlying asset. Option contracts traded on futures exchanges are mainly American-style, whereas those traded over-the-counter are mainly European. Most stock and equity options are American options, while indexes are generally represented by European options. Commodity options can be either style. American and European options: Expiration date Traditional monthly American options expire the third Saturday of every month (or the third Friday if the first of the month begins on a Saturday). They are closed for trading the Friday prior. European options traditionally expire the Friday prior to the third Saturday of every month. Therefore, they are closed for trading the Thursday prior to the third Saturday of every month. American and European options: Difference in value Assuming an arbitrage-free market, a partial differential equation known as the Black-Scholes equation can be derived to describe the prices of derivative securities as a function of a few parameters. Under the simplifying assumptions of the widely adopted Black–Scholes model, the Black-Scholes equation for European options has a closed-form solution known as the Black-Scholes formula. In general, no corresponding formula exists for American options, but a choice of methods to approximate the price is available (for example Roll-Geske-Whaley, Barone-Adesi and Whaley, Bjerksund and Stensland, the binomial options model by Cox-Ross-Rubinstein, Black's approximation and others; there is no consensus on which is preferable). Obtaining a general formula for American options without assuming constant volatility is one of finance's unsolved problems. American and European options: An investor holding an American-style option and seeking optimal value will only exercise it before maturity under certain circumstances. Owners who wish to realise the full value of their option will mostly prefer to sell it as late as possible, rather than exercise it immediately, which sacrifices the time value. See early exercise consideration for a discussion of when it makes sense to exercise early. American and European options: Where an American and a European option are otherwise identical (having the same strike price, etc.), the American option will be worth at least as much as the European (which it entails). If it is worth more, then the difference is a guide to the likelihood of early exercise.
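As a sketch of the closed-form valuation just mentioned (under the usual assumptions of constant volatility and interest rate and no dividends), the following computes the Black-Scholes price of a European call; the parameter names are conventional choices, not taken from the text.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_european_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call on a non-dividend-paying asset.

    S: spot price, K: strike, T: time to expiry in years,
    r: continuously compounded risk-free rate, sigma: volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Example: at-the-money one-year call.
print(round(bs_european_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 4))
```

For an American call on a non-dividend-paying stock, early exercise is never optimal, so its value coincides with this European price; in other cases the approximation methods listed above (binomial trees and the like) are needed.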
In practice, one can calculate the Black–Scholes price of a European option that is equivalent to the American option (except for the exercise dates of course). The difference between the two prices can then be used to calibrate the more complex American option model. American and European options: To account for the American's higher value there must be some situations in which it is optimal to exercise the American option before the expiration date. This can arise in several ways, such as: An in the money (ITM) call option on a stock is often exercised just before the stock pays a dividend that would lower its value by more than the option's remaining time value. American and European options: A put option will usually be exercised early if the underlying asset files for bankruptcy. A deep ITM currency option (FX option) where the strike currency has a lower interest rate than the currency to be received will often be exercised early because the time value sacrificed is less valuable than the expected depreciation of the received currency against the strike. An American bond option on the dirty price of a bond (such as some convertible bonds) may be exercised immediately if ITM and a coupon is due. A put option on gold will be exercised early when deep ITM, because gold tends to hold its value whereas the currency used as the strike is often expected to lose value through inflation if the holder waits until final maturity to exercise the option (they will almost certainly exercise a contract deep ITM, minimizing its time value). Less common exercise rights: There are other, more unusual exercise styles in which the payoff value remains the same as a standard option (as in the classic American and European options above) but where early exercise occurs differently: Bermudan option A Bermudan option is an option where the buyer has the right to exercise at a set (always discretely spaced) number of times. This is intermediate between a European option—which allows exercise at a single time, namely expiry—and an American option, which allows exercise at any time (the name is jocular: Bermuda, a British overseas territory, is somewhat American and somewhat European—in terms of both option style and physical location—but is nearer to American in terms of both). For example, a typical Bermudian swaption might confer the opportunity to enter into an interest rate swap. The option holder might decide to enter into the swap at the first exercise date (and so enter into, say, a ten-year swap) or defer and have the opportunity to enter in six months time (and so enter a nine-year and six-month swap); see Swaption: Valuation. Most exotic interest rate options are of Bermudan style. Less common exercise rights: Canary option A Canary option is an option whose exercise style lies somewhere between European options and Bermudian options. (The name refers to the relative geography of the Canary Islands.) Typically, the holder can exercise the option at quarterly dates, but not before a set time period (typically one year) has elapsed. The ability to exercise the option ends prior to the maturity date of the product. The term was coined by Keith Kline, who at the time was an agency fixed income trader at the Bank of New York. Less common exercise rights: Capped-style option A capped-style option is not an interest rate cap but a conventional option with a pre-defined profit cap written into the contract. 
A capped-style option is automatically exercised when the underlying security closes at a price making the option's mark to market match the specified amount. Compound option A compound option is an option on another option, and as such presents the holder with two separate exercise dates and decisions. If the first exercise date arrives and the 'inner' option's market price is below the agreed strike, the first option will be exercised (European style), giving the holder a further option at final maturity. Less common exercise rights: Shout option A shout option allows the holder effectively two exercise dates: during the life of the option they can (at any time) "shout" to the seller that they are locking in the current price, and if this gives them a better deal than the payoff at maturity they will use the underlying price on the shout date rather than the price at maturity to calculate their final payoff. Less common exercise rights: Double option A double option gives the purchaser a composite call-and-put option (an option to either buy or sell) in a single contract. This has only ever been available in commodities markets and has never been traded on exchange. Less common exercise rights: Swing option A swing option gives the purchaser the right to exercise one and only one call or put on any one of a number of specified exercise dates (this latter aspect is Bermudan). Penalties are imposed on the buyer if the net volume purchased exceeds or falls below specified upper and lower limits. It allows the buyer to "swing" the price of the underlying asset, and is primarily used in energy trading. Less common exercise rights: Evergreen option An evergreen option is an option where the buyer has the right to exercise by providing a pre-determined period of notice. This option could be either American or European in nature, or alternatively it could be combined with option styles that have non-vanilla exercise rights. For example, an 'Evergreen-Bermudan' option provides the buyer of the option with the right to exercise at set specific points in time after providing the other counterparty with a pre-determined period of notice of their intent to exercise the option. Evergreen options provide sellers with a period of time to prepare for settlement once the buyer has exercised their rights under the option. Embedding evergreen optionality within on- and off-balance-sheet products can enable counterparties (such as banks that must adhere to Basel III) to lengthen their inflow or outflow obligations. "Exotic" options with standard exercise styles: These options can be exercised either European style or American style; they differ from the plain vanilla option only in the calculation of their payoff value: Composite option A cross option (or composite option) is an option on some underlying asset in one currency with a strike denominated in another currency. For example, a standard call option on IBM, which is denominated in dollars, pays max(S − K, 0) in USD (where S is the stock price at maturity and K is the strike price). A composite stock option might instead pay max(S·FXT − K, 0) in JPY, where FXT is the prevailing exchange rate (JPY per USD) on the exercise date. The pricing of such options naturally needs to take into account exchange rate volatility and the correlation between the exchange rate of the two currencies involved and the underlying stock price. "Exotic" options with standard exercise styles: Quanto option A quanto option is a cross option in which the exchange rate is fixed at the outset of the trade, typically at 1. These options are often used by traders to gain exposure to foreign markets without exposure to the exchange rate. Continuing the example from the composite option, the payoff of an IBM quanto call option would then be max(S − K, 0)·FX0 in JPY, where FX0 is the exchange rate fixed at the outset of the trade. This would be useful for traders in Japan who wish to be exposed to the IBM stock price without exposure to the JPY/USD exchange rate. "Exotic" options with standard exercise styles: Exchange option An exchange option is the right to exchange one asset for another (such as a sugar future for a corporate bond). Basket option A basket option is an option on the weighted average of several underlyings. Rainbow option A rainbow option is a basket option where the weightings depend on the final performances of the components. A common special case is an option on the worst-performing of several stocks. Low Exercise Price Option A Low Exercise Price Option (LEPO) is a European-style call option with a low exercise price of $0.01. Boston option A Boston option is an American option but with the premium deferred until the option expiration date. Non-vanilla path-dependent "exotic" options: The following "exotic options" are still options, but have payoffs calculated quite differently from those above. Although these instruments are far more unusual, they can also vary in exercise style (at least theoretically) between European and American: Lookback option A lookback option is a path-dependent option where the option owner has the right to buy (sell) the underlying instrument at its lowest (highest) price over some preceding period. Non-vanilla path-dependent "exotic" options: Asian option An Asian option (or average option) is an option where the payoff is not determined by the underlying price at maturity but by the average underlying price over some pre-set period of time. For example, an Asian call option might pay MAX(DAILY_AVERAGE_OVER_LAST_THREE_MONTHS(S) − K, 0). Asian options originated in commodity markets to prevent option traders from attempting to manipulate the price of the underlying security on the exercise date. They were named 'Asian' because their creators were in Tokyo when they created the first pricing model. A Russian option is a lookback option that runs in perpetuity; that is, there is no end to the period into which the owner can look back. Non-vanilla path-dependent "exotic" options: Game option A game option or Israeli option is an option where the writer has the opportunity to cancel the option she has offered, but must pay the payoff at that point plus a penalty fee. Cumulative Parisian option The payoff of a cumulative Parisian option is dependent on the total amount of time the underlying asset value has spent above or below a strike price. Standard Parisian option The payoff of a standard Parisian option is dependent on the maximum amount of time the underlying asset value has spent consecutively above or below a strike price. Barrier option A barrier option involves a mechanism where if a 'limit price' is crossed by the underlying, the option either can be exercised or can no longer be exercised. Double barrier option A double barrier option involves a mechanism where if either of two 'limit prices' is crossed by the underlying, the option either can be exercised or can no longer be exercised.
Cumulative Parisian barrier option A cumulative Parisian barrier option involves a mechanism, in which if the total amount of time the underlying asset value has spent above or below a 'limit price' exceeds a certain threshold, then the option can be exercised or can no longer be exercised. Standard Parisian barrier option A standard Parisian barrier option involves a mechanism, in which if the maximum amount of time the underlying asset value has spent consecutively above or below a 'limit price' exceeds a certain threshold, the option can be exercised or can no longer be exercised. Reoption A reoption occurs when a contract has expired without having been exercised. The owner of the underlying security may then reoption the security. Binary option A binary option (also known as a digital option) pays a fixed amount, or nothing at all, depending on the price of the underlying instrument at maturity. Chooser option A chooser option gives the purchaser a fixed period of time to decide whether the derivative will be a vanilla call or put. Forward start option A forward start option is an option whose strike price is determined in the future. Cliquet option A cliquet option is a sequence of forward start options.
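The Asian (average-price) payoff described above lends itself to Monte Carlo valuation. The sketch below prices an arithmetic-average Asian call under geometric Brownian motion; the model choice, parameter names, observation count and path count are assumptions made for illustration, not a statement of how such options are priced in practice.

```python
import random
from math import exp, sqrt

def asian_call_mc(S0, K, T, r, sigma, n_steps=63, n_paths=20000, seed=42):
    """Monte Carlo price of an arithmetic-average Asian call under GBM.

    The payoff max(average(S) - K, 0) uses the average of the simulated
    prices over the n_steps observation dates, then discounts at rate r.
    """
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma**2) * dt
    vol = sigma * sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, total = S0, 0.0
        for _ in range(n_steps):
            s *= exp(drift + vol * rng.gauss(0.0, 1.0))
            total += s
        payoff_sum += max(total / n_steps - K, 0.0)
    return exp(-r * T) * payoff_sum / n_paths

# Example: averaging over roughly three months of daily observations.
print(round(asian_call_mc(S0=100, K=100, T=0.25, r=0.05, sigma=0.2), 4))
```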
**Hope (cigarette)** Hope (cigarette): Hope is a tobacco brand name that refers to two unrelated cigarette brands, one produced in Japan by Japan Tobacco and the other produced in the Philippines by PMFTC. Hope (Japan): Hope is a long-selling product familiarly known as "Short Hope" and "Shoppa" (officially, the product name includes the number of cigarettes per pack in parentheses). A similarly named Hope brand was introduced in 1931 and existed until September 1940, when Emperor Hirohito forbade any foreign-named brands; the Hope relaunched in 1957 is not related to the pre-war Hope. It is the counterpart to Peace cigarettes. Package: The bow and arrow of the package design reference the bow and arrow used by the Roman mythical figure Cupid. The color of the bow and arrow and of the brand name is navy blue; the Light uses red, the Super Light uses monotone, and the Menthol uses green. Package: Compared with the originally released packs, the current design of Hope packages uses a slightly smaller "HOPE" logotype with thicker serifs (a minor change made in mid-November 1995, at the same time the tar/nicotine values were changed from 15 mg and 1.3 mg to 14 mg and 1.2 mg). Although warning texts have since been added, the package has kept the same image since the introduction of the brand in 1957. The package was designed by Shirozu Shiozuka. Package: Hope Light was released with a different design at the time of its launch, which was unified with the Hope design at the minor change of November 1995 (at the same time its tar/nicotine values went from 11 mg of tar and 1.0 mg of nicotine to 9 mg of tar and 0.8 mg of nicotine); the Super Lights and Menthol variants released afterwards used the same design as Hope, except that the color of the bow and arrow was different. Later, in September 2009, the Super Light was changed: instead of featuring the traditional silver arrows, it featured monotone craft tones, and in October of that year the Light and Menthol variants followed the Super Light as they were renewed to a craft-style package. At the same time, the Light adopted a charcoal filter and its taste was changed. Package: The pack design was changed once again in February 2014: based on the design of Hope, the bow and arrow were rendered in three dimensions and a shadow was placed behind the "HOPE" lettering. The design was once again unified across all four variants. Package: "Hope Dry Gold" (an all-gold pack) was released for a limited time from April 2014, as was "Hope Sour Red" (an all-red pack) from November 2014. "Hope Hot Black" (an all-black pack) and "Hope Passion Yellow" (an all-yellow pack) were also released for limited times. The basic design is the same as for the other models; however, on "Hot Black" and "Passion Yellow" the design is placed on the left side. Package: In addition, before the February 2014 renewal, the inner packs of variants other than Hope carried a different illustration depending on the issue. For example, there were playful hidden designs such as a samurai drawing a bow and arrow, or a hand making a scissors gesture. The initial packages used embossed cardboard (the wrapping paper also varied depending on the brand name). Products: Below are all the variants of Hope cigarettes, with the levels of tar and nicotine included.
Products: Hope (20), which was once sold, came in a soft pack in a long size (later king size). It is known as "Long Hope". The bow-and-arrow mark on that package is red, close to vermillion, and slightly smaller than on the current Hope. In addition, the logotype on that package differs from the normal one: instead of the roman-style "HOPE", it uses a Century Gothic-style "hope". Although all current Hope variants are of the same regular size, the brand is called "Short Hope" because of the existence of this former Hope (20). The writer Nakajima was also fond of it, and it appears in his works. Hope (Philippines): In the Philippines, Hope is a brand owned by Fortune Tobacco Corporation and is manufactured and distributed by PMFTC, Inc. It is unrelated to Japan Tobacco's Hope brand, although the Philippine brand renders the Hope brand name in a similar typeface. It is sold as a mentholated cigarette in 100-mm and 85-mm sticks. It is labeled with the word "Luxury" beneath the Hope brand name. Hope (Philippines): The brand was advertised on the basis of "mentholated freshness". The television commercials showed foreign talents engaged in exhilarating Western leisure activities like skydiving, wakeboarding and boat racing to drive home the "freshness" story. The commercials were made even more popular by their jingle, sung by a 21-year-old Claire de la Fuente, a Karen Carpenter sound-alike. The advertisements ran from 1975 until 2006. Since January 1, 2007, all tobacco advertising on radio and television has been banned.
**Charlotte spiral** Charlotte spiral: The Charlotte (pronounced shar-lot) spiral, also known as the candle stick or fadeout, is a figure skating spiral. The skater bends forward and glides on one leg with the other lifted into the air. In an ordinary spiral the skater's torso is upright, but during the Charlotte the torso is brought as close to the grounded leg as possible. When performed well, the skater's legs are almost in a straight vertical split position. The Charlotte requires great flexibility and balance. A Charlotte can be performed either forward or backward. It is usually performed backwards, although some skaters have performed it forwards in competition. Charlotte spiral: The Charlotte is named for German skater Charlotte Oelschlägel, who first performed the move in the early 1900s. Sonja Henie performed the move in some of her films, as well as in her Olympic program in 1936. Michelle Kwan and Sasha Cohen picked up the move several decades later and are generally credited with bringing it back into popularity during the late 1990s and early 2000s. Charlotte spiral: The position is rarely performed by men, notable exceptions being John Curry, Rohene Ward, Michael Christian Martinez and a few others.
**Open biopsy** Open biopsy: An open biopsy is a procedure in which a surgical incision (cut) is made through the skin to expose and remove tissues. The biopsy tissue is examined under a microscope by a pathologist. An open biopsy may be done in the doctor's office or hospital, and may use local anesthesia or general anesthesia. A lumpectomy to remove a breast tumor is a type of open biopsy.
**W00t** W00t: The term w00t (spelled with double-zero, "00"), or woot, is a slang interjection used to express happiness or excitement, usually in online conversation. The expression is most popular on forums, Usenet posts, multiplayer computer games (especially first-person shooters), IRC chats, and instant messages, though use on webpages of the World Wide Web is by no means uncommon. The w00t spelling (with double-zero "00") is a leetspeak variant of woot; alternative spellings include whoot, wOOt, wh00t, wewt, wought, etc. Etymology: See the Wiktionary article w00t for details of etymology and citations; while origins are never certain, the account below is supported by contemporary written references and is credited by American lexicographer Grant Barrett. The term woot was recalled by a Canadian in the early 2000s to have been used in the '80s and '90s on an RPG BBS as a contraction of "what a hoot". w00t (1996) is a leetspeak form of the earlier whoot (1993), which in turn was popularized by the rap song "Whoot, There It Is" (single released March 22, 1993) by the group 95 South; this is often confused with "Whoomp! (There It Is)" (single released May 7, 1993) by the group Tag Team. Both songs appeared in the same year, in the Miami bass genre. The terms whoot and whoomp (and the less common form "Whoops, there it is") are standardizations of earlier oral use of hooting sounds variously rendered as whoo, whoof, woo, woof (compare the standard woohoo), notably by the studio audience of The Arsenio Hall Show (1989–94) and in the movie Pretty Woman (1990). The use by the "dog pound" section of The Arsenio Hall Show audience was based on a dog's woof, from chants used by football fans of the Cleveland Browns in Hall's home town. Many folk etymologies exist, but the written record is clear: the term appears widely in popular print only from 1993, used particularly in dancehalls and at sporting events, and is credited to the songs. The "w00t" form gained popularity on the Internet from 1996, especially in massively multiplayer online role-playing games (MMORPGs). Etymology: Folk etymologies Many folk etymologies and backronyms exist, none supported by the written record; these often credit the term to games that appeared years after whoot had been popularized (1993) or w00t had appeared in common Internet usage (1996). Etymology: One such incorrect etymology derives w00t as a contraction of a phrase like "wow, loot!", "woo, loot!", "wondrous loot", "Wonderful Loot", etc., used in an MMORPG when a player found large quantities of, or rare, valuable items in game, or as an acronym for "We Owned the Other Team". These games appeared after w00t was already common. Another supposed origin is as an expression used by a cracker (see security cracking) who has just broken into a computer system, obtaining "root" access: "woot, I have root!". Some people say it was just a parody of a child with a speech defect trying to say "loot" and saying "woot" instead. Etymology: Other etymologies relate it to "hoot" or "toot", as made by trains in children's books that went "Woot! Woot!", as a statement of victory or applauding good news. (Some people today say "Woot! Woot!" while making the hand gesture of pulling a train's horn cord.) Alternatively, attempts are made to relate it to the Scots word "hoots", which is used in a somewhat similar manner: an exclamation signifying surprise, disbelief, or a kindred reaction, though not for positive feelings (delight, joy) as w00t is.
This is also along the lines of people's use of "w00t?", replacing "wot?" or "what?" as a response to a happy surprise. In popular culture: The word was featured on Merriam-Webster's list of Words of the Year for 2007; the publisher said it "reflects a new direction in the American language led by a generation raised on video games and cell phone text-messaging". Apart from the British digital sales house w00t!media, the expression also made it into a URL shortener. Garaj Mahal named their 2008 album w00t. In 2011, "woot" was added to the Concise Oxford English Dictionary. The word is officially recognized in the dictionary without zeroes, and is instead spelled with two Os.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Report generator** Report generator: A report generator is a computer program whose purpose is to take data from a source such as a database, XML stream or a spreadsheet, and use it to produce a document in a format which satisfies a particular human readership. Report generation functionality is almost always present in database systems, where the source of the data is the database itself. It can also be argued that report generation is part of the purpose of a spreadsheet. Standalone report generators may work with multiple data sources and export reports to different document formats. Information systems theory specifies that information delivered to a target human reader must be timely, accurate and relevant. Report generation software targets the final requirement by making sure that the information delivered is presented in the way most readily understood by the target reader. History: An early report writer was part of the Nomad software, which was developed in the 1970s and saw its widest use in the 1970s and 1980s.
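To make the core idea concrete (pull rows from a data source, then format them for a human reader), here is a minimal Python sketch; it is only an illustration, and the database file, table and column names (example.db, sales, region, amount) are invented for the example.

```python
import sqlite3

def generate_report(db_path: str, out_path: str) -> None:
    """Minimal report generator: query a database, then render the
    result as a plain-text document for a human reader."""
    con = sqlite3.connect(db_path)
    try:
        # 'sales', 'region' and 'amount' are hypothetical example names.
        rows = con.execute(
            "SELECT region, SUM(amount) FROM sales GROUP BY region"
        ).fetchall()
    finally:
        con.close()

    lines = ["Sales by region", "=" * 16]
    lines += [f"{region:<12} {total:>12.2f}" for region, total in rows]
    with open(out_path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Usage (assuming example.db contains a 'sales' table):
# generate_report("example.db", "sales_report.txt")
```

The same separation, one step that gathers data and one step that lays it out for the reader, is what standalone report generators provide across multiple data sources and output formats.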
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Star domain** Star domain: In geometry, a set S in the Euclidean space Rn is called a star domain (or star-convex set, star-shaped set or radially convex set) if there exists an s0∈S such that for all s∈S, the line segment from s0 to s lies in S. This definition is immediately generalizable to any real, or complex, vector space. Intuitively, if one thinks of S as a region surrounded by a wall, S is a star domain if one can find a vantage point s0 in S from which any point s in S is within line-of-sight. A similar, but distinct, concept is that of a radial set. Definition: Given two points x and y in a vector space X (such as Euclidean space Rn), the convex hull of {x,y} is called the closed interval with endpoints x and y and is denoted by [x,y], where [x,y] := {(1−t)x + ty : 0≤t≤1} = x + [0,1](y−x), with [0,1]z := {tz : 0≤t≤1} for every vector z. A subset S of a vector space X is said to be star-shaped at s0∈S if for every s∈S, the closed interval [s0,s]⊆S. A set S is star-shaped, and is called a star domain, if there exists some point s0∈S such that S is star-shaped at s0. A set that is star-shaped at the origin is sometimes called a star set. Such sets are closely related to Minkowski functionals. Examples: Any line or plane in Rn is a star domain. A line or a plane with a single point removed is not a star domain. If A is a set in Rn, the set B={ta:a∈A,t∈[0,1]} obtained by connecting all points in A to the origin is a star domain. Any non-empty convex set is a star domain. A set is convex if and only if it is a star domain with respect to any point in that set. A cross-shaped figure is a star domain but is not convex. A star-shaped polygon is a star domain whose boundary is a sequence of connected line segments. Properties: The closure of a star domain is a star domain, but the interior of a star domain is not necessarily a star domain. Every star domain is a contractible set, via a straight-line homotopy. In particular, any star domain is a simply connected set. Every star domain, and only a star domain, can be "shrunken into itself"; that is, for every dilation ratio r<1, the star domain can be dilated by a ratio r such that the dilated star domain is contained in the original star domain. The union and intersection of two star domains are not necessarily star domains. A non-empty open star domain S in Rn is diffeomorphic to Rn. Given W⊆X, the set ⋂|u|=1uW (where u ranges over all unit length scalars) is a balanced set whenever W is star-shaped at the origin (meaning that 0∈W and rw∈W for all 0≤r≤1 and w∈W).
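The contractibility property stated above can be written out explicitly; this is the standard straight-line homotopy, a routine verification added here for concreteness rather than taken from the source text:

```latex
% S star-shaped at s_0: the segment [s, s_0] lies in S for every s in S,
% so the map below is well defined, continuous, and contracts S onto s_0.
H : S \times [0,1] \to S, \qquad H(s,t) = (1-t)\,s + t\,s_0,
\qquad H(s,0) = s, \quad H(s,1) = s_0 .
```

Since H(s,t) always lies on the closed interval [s, s0] ⊆ S, the homotopy stays inside S, which is exactly why every star domain is contractible and hence simply connected.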
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bismuth oxychloride** Bismuth oxychloride: Bismuth oxychloride is an inorganic compound of bismuth with the formula BiOCl. It is a lustrous white solid used since antiquity, notably in ancient Egypt. Light wave interference from its plate-like structure gives a pearly iridescent light reflectivity similar to nacre. It is also known as pearl white. Structure: The structure of bismuth oxychloride can be thought of as consisting of layers of Cl−, Bi3+ and O2− ions (in the image Bi = grey, O = red, Cl = green). These ions are ordered as Cl–Bi–O–Bi–Cl–Cl–Bi–O–Bi–Cl, i.e., with alternating anions (Cl−, O2−) and cations (Bi3+). The layered structure gives rise to the pearlescent properties of this material. Structure: Focusing on the coordination environment of the individual ions, the bismuth centers adopt a distorted square antiprismatic coordination geometry. The Bi atom is coordinated to four Cl atoms, forming one of the square faces, each at a distance of 3.06 Å from Bi, and four O atoms forming the other square face, each at a distance of 2.32 Å from Bi. The O atoms are tetrahedrally coordinated by four Bi atoms. Synthesis and reactions: BiOCl is formed during the reaction of bismuth chloride with water, i.e. the hydrolysis: BiCl3 + H2O → BiOCl + 2 HCl. When heated above 600 °C, BiOCl converts to Bi24O31Cl10, called the "Arppe compound", which has a complex layer structure. Use and occurrence: It has been used in cosmetics since the days of ancient Egypt. It is part of the "pearly pigment found in eye shadow, hair sprays, powders, nail polishes, and other cosmetic products". Owing to the plate-like structure of BiOCl, its suspensions exhibit optical properties like nacre. In cosmetics its name is C.I. 77163. BiOCl exists in nature as the rare mineral bismoclite, which is part of the matlockite mineral group. An analogous compound, bismuth oxynitrate, is used as a white pigment.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Martyrdom video** Martyrdom video: Martyrdom videos are video recordings, generally from Islamist jihadists who are about to take part in a suicide attack and expect to die during their intended actions. They typically include a statement by the person preparing to be a martyr for their cause. They can be of amateur or professional quality and often incorporate text, music, and sentimental clips. The people in these videos typically sit or stand in front of a black Islamic flag, in their explosive-rigged vehicles (in cases of ISIS, Al-Nusra Front, Imam Bukhari Jamaat, and other Islamist groups), or media (in case of ISIS showing two teen suicide bombers by Al-Jazeera in Afghanistan) or other symbol of their allegiance. Suicide bombers considered themselves religiously justified by sharia and consider themselves to be shahid. Martyrdom video: Such videos are widely circulated for propaganda purposes following the event by the groups behind them. Martyrdom videos are psychological weapons as their primary purpose is to establish validity for their actions, inspire fear in enemies, or spread their ideology for political or religious ambitions. Religious justification of suicide bombings: Martyrdom videos base their product on the religious justifiability of their actions; namely martyrdom operations, more commonly known to non-extreamists as suicide attacks. Verses from the Qur'an are regularly cited in martyrdom videos to provide religious justification. Suicide is condemned in Islam according to the Qur'an, whereas martyrdom is praised. Also, fellow Muslims are not to be attacked or they commit fitna. Religious justification of suicide bombings: "You shall spend in the cause of Allah; do not throw yourselves with your own hands into destruction. You shall be charitable; Allah loves the charitable." —Qur'an 2:195"Let those fight in the way of Allah who sell the life of this world for the other. Whoso fight in the way of Allah, be he slain or be he victorious, on him We shall bestow a vast reward." —Qur'an 4:74 Suicide bombers do not see their act as an act of suicide but of martyrdom. This point is clearly stated in a martyrdom video by Dani Dwi Permana, a suicide bomber in the July 2009 attack in Jakarta, who saysThis is not suicide. This is what our enemies fear. It is an obligation for all [Muslims]. Those who do not execute this obligation are sinners. By declaring his action to be obligatory for Muslims, he is using his martyrdom video to present the religious justification for his action. This view is upheld by many radical scholars’ fatwas. The call against fitna is suspended in radical fatwas as proclaimed by the martyrdom video of Tanvir HussainI don’t hold any person to be innocent for the slaughter of the Muslim. Collateral damage is going to be inevitable. People are going to die. Religious justification of suicide bombings: The term martyrdom operations is favoured by extremists, and invariably used in martyrdom videos, as it links the suicide bombing with the Islamic call to jihad and thus justifies them religiously. The religious justifiability of suicide bombing hinges on its applicability to legitimate jihad, which has changed throughout time. Suicide bombings are outlined as acceptable jihad in extremist fatwas, therefore, are acceptable in Islamic law according to extremists. However, suicide bombings are not considered religiously justified by the great majority of the world's Muslims. 
As propaganda: A type of propaganda by deed though not necessarily a form of anarchism, martyrdom videos have two propaganda purposes: externally (the enemy) and internally (the Islamic community). The language, style, and messages will differ depending on what audience they are trying to reach. The videos are intended to preserve the memory of their subjects, and to justify and glorify their actions. They may also serve the function of committing their makers to their actions, by making a public statement of commitment that they feel they cannot go back on. Martyrdom videos tend to be more elaborate and show different stages of activity than other terrorist media. Subsequently, military personnel and other victims can piece together informants, collaborators, and techniques based on the propaganda element of the videos. As propaganda: Internal As Robert Pape points out, "Only a community can make a martyr. Using elaborate ceremonies… to identify the death of a suicide attacker with the good of the community, suicide terrorist organizations can promote the idea that their members should be accorded martyr status." In his view, the goal of terrorists employing suicide tactics is to redefine acts of suicide as acts of martyrdom.Recruitment is a main objective for martyrdom videos. Martyrs videotape themselves giving their reasons for their attack in the hopes of affecting the people of their community or family. Hezbollah is particularly prolific in their production of martyrdom videos to generate support locally. As propaganda: External Martyrdom videos seek to humiliate and demoralize opposition forces. Videos also try to convince would-be martyrs to launch their attacks. Often cited in martyr testimonials is the desire to be freed from the oppression of some external force. Despite massive losses to Al-Qaeda since 2002, they managed to distribute a seven-minute, professionally edited martyrdom video in the hopes of seeking recruits and snubbing the American war effort. Many of the sites that distribute anti-Western or anti-Israeli messages are in Arabic, though it is fairly common to find videos in English, or dubbed into English, to reach a Western audience. Stages: Training stage Martyrdom recruits are often filmed throughout their training. One such video in Pakistan shows a young man telling the camera, "If I die, do not cry for me. I will be in Heaven waiting for you"; he later died in a suicide attack. Humam Khalil Abu Mulal al-Balawi had his martyrdom operation postponed for several days in order to get the necessary footage. The training stage is typically immediately before the event, but some suicide bombers are trained from a very young age. Stages: The attack A cameraman is usually near the location of the attack to record the event. Video taped terrorist attacks are distinct from martyrdom videos in they show the action and sequelae, but not the intention. Martyrdom videos are intended to provide justification and reasoning behind the action. IED attacks are likely to be videotaped, as the recording provides evidence of their success. Camera operators have been killed while documenting attacks, with an example being the assassination of Rajiv Gandhi, whereat a camera operator at the scene was killed in the process. 
Stages: Martyr biography Martyrdom videos typically include the name of the insurgent group, a religious message or passage from the Qur’an, a sermon from a notable extremist figurehead or song, and a message from the "martyr" describing their reason(s) and motivation(s) for the attack. Although filmed insurgents engaged in military actions are usually masked, individuals in martyrdom videos are intentionally identifiable. Often, if the attack is not a lone wolf operation, the martyr will appear with a notable extremist. Stages: Producers Although many martyrdom videos are self-produced and of low quality, professional-grade martyrdom videos are becoming more common. Local TV stations may air martyrdom videos, although in Pakistan it is illegal to broadcast them. “Filmmaker” has become an explicit job in any major terrorist organizations. Al-Jazeera and the Global Islamic Media Forum (GIMF) are known to broadcast martyrdom videos. Cases: Abdulmutallab case Umar Farouk Abdulmutallab was inspired by Al-Awlaki’s internet writings and traveled to Yemen to meet with Al-Awlaki. After he was deemed to be a suitable suicide bomber, Abdulmutallab was given two weeks of weapons training. He was then taught to use the bomb, which was sewed into his underwear. After the operation was planned, Abdulmutallab was filmed in a 5-minute martyrdom video, which was produced by a professional team. Cases: Humam Khalil Abu Mulal al-Balawi case Balawi was caught by Jordanian intelligence and used as a double agent against Baitullah Mehsud. He supposedly made contact with Al-Zawahiri and proposed to meet with the CIA at Camp Chapman. He trained for the attack for a few days and was videotaped. Upon meeting the CIA agents in Afghanistan, he detonated his explosive vest, killing seven. After the attack, Hakimullah Mehsud released his martyrdom video where Balawi proclaimed, "We never forget our martyrs, we never forget our prisoners." Arshad Ali case His martyrdom video is straightforward. He cradles an AK-47, stares into the camera and proclaims that, "some hypocrites say that we are doing this for money- or because of brainwashing- but we are told by Allah to target these pagans," in order to disprove the belief that the families of suicide bombers were paid after the attacks. He then pleads with his father to quit working at the bank as it practices usury, which is against Islam. He finishes by saying, "I invite my fellows to sacrifice themselves." The film switches to the devastation caused by his attack on a polling station in Pakistan. He was 15 years old. Reactions: Negative Eliciting a reaction is the primary ambition for a suicide bombing; videos make the mission objective. Many Islamic scholars refuse to accept the religious justifiability of such operations and thus have negative reactions to the videos. Advocate/scholars promote a wider anti-martyrdom operation and desire to get their message out using the same media as the terrorists. Some people who oppose martyrdom videos have begun making their own videos on YouTube in protest. Reactions: Encouragement for Murder-Suicide Islamists who seek to become martyrs find motivation and courage to carry out actions from martyrdom videos. Suicide bombings have high symbolic value, which is represented in their martyrdom videos, and serve as symbols of a just struggle, galvanize popular support, generate financial support for the organization and become a source of new recruits for future suicide missions. 
Some scholars believe the whole point of suicide bombing is to gain acceptance by the community; the videos thus aid in gaining the sought-after recognition. If unsuccessful, suicide bombers will likely be open to trying again even after seeing footage of the people they would have killed. Reactions: Criminally sympathetic community reaction example: Mahmoud al-Obeid attacked a Jewish settlement in June 2002. His mother, Naima, appeared in her son's martyrdom video and proclaimed, "God willing, you will succeed. May every bullet hit its target, and may God give you martyrdom. This is the best day of my life."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vertically transmitted infection** Vertically transmitted infection: A vertically transmitted infection is an infection caused by pathogenic bacteria or viruses that use mother-to-child transmission, that is, transmission directly from the mother to an embryo, fetus, or baby during pregnancy or childbirth. It can occur when the mother has a pre-existing disease or becomes infected during pregnancy. Nutritional deficiencies may exacerbate the risks of perinatal infections. Vertical transmission is important for the mathematical modelling of infectious diseases, especially for diseases of animals with large litter sizes, as it causes a wave of new infectious individuals. Types of infections: Bacteria, viruses, and other organisms are able to be passed from mother to child. Several vertically transmitted infections are included in the TORCH complex: T – toxoplasmosis from Toxoplasma gondii O – other infections (see below) R – rubella C – cytomegalovirus H – herpes simplex virus-2 or neonatal herpes simplexOther infections include: Parvovirus B19 Coxsackievirus Chickenpox (caused by varicella zoster virus) Chlamydia HIV Human T-lymphotropic virus Syphilis Zika fever, caused by Zika virus, can cause microcephaly and other brain defects in the child. Types of infections: COVID-19 in pregnancy is associated with an increased risk of stillbirth with an odds ratio of approximately 2.Hepatitis B may also be classified as a vertically transmitted infection. The hepatitis B virus is large and does not cross the placenta. Hence, it cannot infect the fetus unless breaks in the maternal-fetal barrier have occurred, but such breaks can occur in bleeding during childbirth or amniocentesis.The TORCH complex was originally considered to consist of the four conditions mentioned above, with the "TO" referring to Toxoplasma. The four-term form is still used in many modern references, and the capitalization "ToRCH" is sometimes used in these contexts. The acronym has also been listed as TORCHES, for TOxoplasmosis, Rubella, Cytomegalovirus, HErpes simplex, and Syphilis.A further expansion of this acronym, CHEAPTORCHES, was proposed by Ford-Jones and Kellner in 1995: C – chickenpox and shingles H – hepatitis, C (D), E E – enteroviruses A – AIDS (HIV infection) P – parvovirus B19 (produces hydrops fetalis secondary to aplastic anemia) T – toxoplasmosis O – other (group B streptococci, Listeria, Candida, and Lyme disease) R – rubella C – cytomegalovirus H – herpes simplex E – everything else sexually transmitted (gonorrhea, Chlamydia infection, Ureaplasma urealyticum, and human papillomavirus) S – Syphilis Signs and symptoms: The signs and symptoms of a vertically transmitted infection depend on the individual pathogen. In the mother, it may cause subtle signs such as an influenza-like illness, or possibly no symptoms at all. In such cases, the effects may be seen first at birth.Symptoms of a vertically transmitted infection may include fever and flu-like symptoms. The newborn is often small for gestational age. A petechial rash on the skin may be present, with small reddish or purplish spots due to bleeding from capillaries under the skin. An enlarged liver and spleen (hepatosplenomegaly) is common, as is jaundice. However, jaundice is less common in hepatitis B because a newborn's immune system is not developed well enough to mount a response against liver cells, as would normally be the cause of jaundice in an older child or adult. 
Hearing impairment, eye problems, mental retardation, autism, and death can be caused by vertically transmitted infections.The genetic conditions of Aicardi-Goutieres syndrome are possibly present in a similar manner. Causal routes: The main routes of transmission of vertically transmitted infections are across the placenta (transplacental) and across the female reproductive tract during childbirth. Transmission is also possible by breaks in the maternal-fetal barrier such by amniocentesis or major trauma. Causal routes: Transplacental The embryo and fetus have little or no immune function. They depend on the immune function of their mother. Several pathogens can cross the placenta and cause perinatal infection. Often, microorganisms that produce minor illness in the mother are very dangerous for the developing embryo or fetus. This can result in spontaneous abortion or major developmental disorders. For many infections, the baby is more at risk at particular stages of pregnancy. Problems related to perinatal infection are not always directly noticeable.Apart from infecting the fetus, transplacental pathogens may cause placentitis (inflammation of the placenta) and/or chorioamnionitis (inflammation of the fetal membranes). Causal routes: During childbirth Babies can also become infected by their mothers during birth. Some infectious agents may be transmitted to the embryo or fetus in the uterus, while passing through the birth canal, or even shortly after birth. The distinction is important because when transmission is primarily during or after birth, medical intervention can help prevent infections in the infant.During birth, babies are exposed to maternal blood, body fluids, and to the maternal genital tract without the placental barrier intervening. Because of this, blood-borne microorganisms (hepatitis B, HIV), organisms associated with sexually transmitted diseases (e.g., Neisseria gonorrhoeae and Chlamydia trachomatis), and normal fauna of the genitourinary tract (e.g., Candida albicans) are among those commonly seen in infection of newborns. Pathophysiology: Virulence versus symbiosis In the spectrum of optimal virulence, vertical transmission tends to evolve benign symbiosis, so is a critical concept for evolutionary medicine. Because a pathogen's ability to pass from mother to child depends significantly on the hosts' ability to reproduce, pathogens' transmissibility tends to be inversely related to their virulence. In other words, as pathogens become more harmful to, and thus decrease the reproduction rate of, their host organism, they are less likely to be passed on to the hosts' offspring since they will have fewer offspring.Although HIV is sometimes transmitted through perinatal transmission, its virulence can be accounted for because its primary mode of transmission is not vertical. Moreover, medicine has further decreased the frequency of vertical transmission of HIV. The incidence of perinatal HIV cases in the United States has declined as a result of the implementation of recommendations on HIV counselling and voluntary testing practices and the use of zidovudine therapy by providers to reduce perinatal HIV transmission.The price paid in the evolution of symbiosis is, however, great: for many generations, almost all cases of vertical transmission continue to be pathological—in particular if any other routes of transmission exist. Many generations of random mutation and selection are needed to evolve symbiosis. 
During this time, the vast majority of vertical transmission cases exhibit the initial virulence.In dual inheritance theory, vertical transmission refers to the passing of cultural traits from parents to children. Diagnosis: When physical examination of the newborn shows signs of a vertically transmitted infection, the examiner may test blood, urine, and spinal fluid for evidence of the infections listed above. Diagnosis can be confirmed by culture of one of the specific pathogens or by increased levels of IgM against the pathogen. Classification A vertically transmitted infection can be called a perinatal infection if it is transmitted in the perinatal period, which starts at gestational ages between 22 and 28 weeks (with regional variations in the definition) and ending seven completed days after birth.The term congenital infection can be used if the vertically transmitted infection persists after childbirth. Treatment: Some vertically transmitted infections, such as toxoplasmosis and syphilis, can be effectively treated with antibiotics if the mother is diagnosed early in her pregnancy. Many viral vertically transmitted infections have no effective treatment, but some, notably rubella and varicella-zoster, can be prevented by vaccinating the mother prior to pregnancy.Pregnant women living in malaria-endemic areas are candidates for malaria prophylaxis. It clinically improves the anemia and parasitemia of the pregnant women, and birthweight in their infants.If the mother has active herpes simplex (as may be suggested by a pap test), delivery by Caesarean section can prevent the newborn from contact, and consequent infection, with this virus.IgG2 antibody may play a crucial role in prevention of intrauterine infections and extensive research is going on for developing IgG2-based therapies for treatment and vaccination. Prognosis: Each type of vertically transmitted infection has a different prognosis. The stage of the pregnancy at the time of infection also can change the effect on the newborn.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Union type** Union type: In computer science, a union is a value that may have any of several representations or formats within the same position in memory; the corresponding data structure consists of a variable that may hold such a value. Some programming languages support special data types, called union types, to describe such values and variables. In other words, a union type definition will specify which of a number of permitted primitive types may be stored in its instances, e.g., "float or long integer". In contrast with a record (or structure), which could be defined to contain both a float and an integer, a union holds only one value at any given time. Union type: A union can be pictured as a chunk of memory that is used to store variables of different data types. Once a new value is assigned to a field, the existing data is overwritten with the new data. The memory area storing the value has no intrinsic type (other than just bytes or words of memory), but the value can be treated as one of several abstract data types, having the type of the value that was last written to the memory area. Union type: In type theory, a union has a sum type; this corresponds to disjoint union in mathematics. Depending on the language and type, a union value may be used in some operations, such as assignment and comparison for equality, without knowing its specific type. Other operations may require that knowledge, either by some external information, or by the use of a tagged union. Untagged unions: Because of the limitations of their use, untagged unions are generally only provided in untyped languages or in a type-unsafe way (as in C). They have the advantage over simple tagged unions of not requiring space to store a data type tag. Untagged unions: The name "union" stems from the type's formal definition. If a type is considered as the set of all values that that type can take on, a union type is simply the mathematical union of its constituting types, since it can take on any value any of its fields can. Also, because a mathematical union discards duplicates, if more than one field of the union can take on a single common value, it is impossible to tell from the value alone which field was last written. Untagged unions: However, one useful programming function of unions is to map smaller data elements to larger ones for easier manipulation. A data structure consisting, for example, of 4 bytes and a 32-bit integer, can form a union with an unsigned 64-bit integer, and thus be more readily accessed for purposes of comparison etc. Unions in various programming languages: ALGOL 68 ALGOL 68 has tagged unions, and uses a case clause to distinguish and extract the constituent type at runtime. A union containing another union is treated as the set of all its constituent possibilities, and if the context requires it a union is automatically coerced into the wider union. A union can explicitly contain no value, which can be distinguished at runtime. An example is: mode node = union (real, int, string, void); node n := "abc"; case n in (real r): print(("real:", r)), (int i): print(("int:", i)), (string s): print(("string:", s)), (void): print(("void:", "EMPTY")), out print(("?:", n)) esac The syntax of the C/C++ union type and the notion of casts was derived from ALGOL 68, though in an untagged form. Unions in various programming languages: C/C++ In C and C++, untagged unions are expressed nearly exactly like structures (structs), except that each data member begins at the same location in memory.
The data members, as in structures, need not be primitive values, and in fact may be structures or even other unions. C++ (since C++11) also allows for a data member to be any type that has a full-fledged constructor/destructor and/or copy constructor, or a non-trivial copy assignment operator. For example, it is possible to have the standard C++ string as a member of a union. Unions in various programming languages: The primary use of a union is allowing access to a common location by different data types, for example hardware input/output access, bitfield and word sharing, or type punning. Unions can also provide low-level polymorphism. However, there is no checking of types, so it is up to the programmer to be sure that the proper fields are accessed in different contexts. The relevant field of a union variable is typically determined by the state of other variables, possibly in an enclosing struct. Unions in various programming languages: One common C programming idiom uses unions to perform what C++ calls a reinterpret_cast, by assigning to one field of a union and reading from another, as is done in code which depends on the raw representation of the values. A practical example is the method of computing square roots using the IEEE representation. This is not, however, a safe use of unions in general. Unions in various programming languages: Structure and union specifiers have the same form. [ . . . ] The size of a union is sufficient to contain the largest of its members. The value of at most one of the members can be stored in a union object at any time. A pointer to a union object, suitably converted, points to each of its members (or if a member is a bit-field, then to the unit in which it resides), and vice versa. Unions in various programming languages: Anonymous union In C++, C11, and as a non-standard extension in many compilers, unions can also be anonymous. Their data members do not need to be referenced through a union name; they are instead accessed directly. They have some restrictions as opposed to traditional unions: in C11, they must be a member of another structure or union, and in C++, they cannot have methods or access specifiers. Unions in various programming languages: Simply omitting the class-name portion of the syntax does not make a union an anonymous union. For a union to qualify as an anonymous union, the declaration must not declare an object. Example: Anonymous unions are also useful in C struct definitions to provide a sense of namespacing. Unions in various programming languages: Transparent union In compilers such as GCC, Clang, and IBM XL C for AIX, a transparent_union attribute is available for union types. Types contained in the union can be converted transparently to the union type itself in a function call, provided that all types have the same size. It is mainly intended for functions with multiple parameter interfaces, a use necessitated by early Unix extensions and later re-standardisation. Unions in various programming languages: COBOL In COBOL, union data items are defined in two ways. The first uses the RENAMES (66 level) keyword, which effectively maps a second alphanumeric data item on top of the same memory location as a preceding data item. In the example code below, data item PERSON-REC is defined as a group containing another group and a numeric data item. PERSON-DATA is defined as an alphanumeric data item that renames PERSON-REC, treating the data bytes contained within it as character data.
Unions in various programming languages: The second way to define a union type is by using the REDEFINES keyword. In the example code below, data item VERS-NUM is defined as a 2-byte binary integer containing a version number. A second data item VERS-BYTES is defined as a two-character alphanumeric variable. Since the second item is redefined over the first item, the two items share the same address in memory, and therefore share the same underlying data bytes. The first item interprets the two data bytes as a binary value, while the second item interprets the bytes as character values. Unions in various programming languages: Pascal In Pascal, there are two ways to create unions. One is the standard way through a variant record. The second is a nonstandard means of declaring a variable as absolute, meaning it is placed at the same memory location as another variable or at an absolute address. While all Pascal compilers support variant records, only some support absolute variables. Unions in various programming languages: For the purposes of this example, the following are all integer types: a byte consists of 8 bits, a word is 16 bits, and an integer is 32 bits. The following example shows the non-standard absolute form: In the first example, each of the elements of the array B maps to one of the specific bytes of the variable A. In the second example, the variable C is assigned to the exact machine address 0. Unions in various programming languages: In the following example, a record has variants, some of which share the same location as others: PL/I In PL/I, the original term for a union was cell, which is still accepted as a synonym for union by several compilers. The union declaration is similar to the structure definition, where elements at the same level within the union declaration occupy the same storage. Elements of the union can be any data type, including structures and arrays.: pp. 192–193 Here vers_num and vers_bytes occupy the same storage locations. Unions in various programming languages: An alternative to a union declaration is the DEFINED attribute, which allows alternative declarations of storage; however, the data types of the base and defined variables must match.: pp. 289–293 Rust Rust implements both tagged and untagged unions. In Rust, tagged unions are implemented using the enum keyword. Unlike enumerated types in most other languages, enum variants in Rust can contain additional data in the form of a tuple or struct, making them tagged unions rather than simple enumerated types. Rust also supports untagged unions using the union keyword. The memory layout of unions in Rust is undefined by default, but a union with the #[repr(C)] attribute will be laid out in memory exactly like the equivalent union in C. Reading the fields of a union can only be done within an unsafe function or block, as the compiler cannot guarantee that the data in the union will be valid for the type of the field; if this is not the case, it will result in undefined behavior. Syntax and example: C/C++ In C and C++, the syntax is: A structure can also be a member of a union, as the following example shows: This example defines a variable uvar as a union (tagged as name1), which contains two members, a structure (tagged as name2) named svar (which in turn contains three members), and an integer variable named d.
Unions may occur within structures and arrays, and vice versa: The number ival is referred to as symtab[i].u.ival and the first character of the string sval by either of *symtab[i].u.sval or symtab[i].u.sval[0]. PHP Union types were introduced in PHP 8.0. The values are implicitly "tagged" with a type by the language, and may be retrieved by "gettype()". Python Support for typing was introduced in Python 3.5. The new syntax for union types was introduced in Python 3.10. TypeScript Union types are supported in TypeScript. The values are implicitly "tagged" with a type by the language, and may be retrieved by "typeof()". Rust Tagged unions in Rust use the enum keyword, and can contain tuple and struct variants: Untagged unions in Rust use the union keyword: Reading from the fields of an untagged union results in undefined behavior if the data in the union is not valid as the type of the field, and thus requires an unsafe block:
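The language-specific snippets referred to throughout this article (C, COBOL, Pascal, PL/I, Rust, PHP, Python, TypeScript) are not reproduced in this text. As a rough, purely illustrative stand-in, the following Python sketch uses the standard ctypes module, whose Union class mirrors a C untagged union, to show the type-punning behaviour discussed in the C/C++ section; the class and field names are invented for the example.

```python
import ctypes

class IntFloat(ctypes.Union):
    """Untagged union: both fields share the same 4 bytes of storage."""
    _fields_ = [("i", ctypes.c_uint32),   # view the bytes as an unsigned integer
                ("f", ctypes.c_float)]    # view the same bytes as an IEEE-754 float

v = IntFloat()
v.f = 1.0          # write through the float member...
print(hex(v.i))    # ...and read back the raw bit pattern: 0x3f800000
```

As in C, which member was written last is not recorded anywhere; the program has to keep track of it, which is exactly the book-keeping a tagged union automates.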
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shell (structure)** Shell (structure): A shell is a three-dimensional solid structural element whose thickness is very small compared to its other dimensions. It is characterized in structural terms by mid-plane stress which is both coplanar and normal to the surface. A shell can be derived from a plate in two steps: by initially forming the middle surface as a singly or doubly curved surface, then by applying loads which are coplanar to the plate's plane, thus generating significant stresses. Shell (structure): Materials range from concrete (a concrete shell) to fabric (as in fabric structures). Thin-shell structures (also called plate and shell structures) are lightweight constructions using shell elements. These elements, typically curved, are assembled to make large structures. Typical applications include aircraft fuselages, boat hulls, and the roofs of large buildings. Definition: A thin shell is defined as a shell with a thickness which is small compared to its other dimensions and in which deformations are not large compared to thickness. A primary difference between a shell structure and a plate structure is that, in the unstressed state, the shell structure has curvature, as opposed to the plate structure, which is flat. Membrane action in a shell is primarily caused by in-plane forces (plane stress), but there may be secondary forces resulting from flexural deformations. Where a flat plate acts similarly to a beam with bending and shear stresses, shells are analogous to a cable which resists loads through tensile stresses. The ideal thin shell must be capable of developing both tension and compression. Types: The most popular types of thin-shell structures are: concrete shell structures, often cast as a monolithic dome, stressed ribbon bridge or saddle roof; lattice shell structures, also called gridshell structures, often in the form of a geodesic dome or a hyperboloid structure; and membrane structures, which include fabric structures and other tensile structures, cable domes, and pneumatic structures.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**NZ COVID Tracer** NZ COVID Tracer: NZ COVID Tracer is a mobile software application that enables a person to record places they have visited, in order to facilitate tracing who may have been in contact with a person infected with the COVID-19 virus. The app allows users to scan official QR codes at the premises of businesses and other organisations they visit, to create a digital diary. It was launched by New Zealand's Ministry of Health on 20 May 2020, during the ongoing COVID-19 pandemic. It can be downloaded from the App Store and Google Play. History: 2020 The NZ COVID Tracer app was developed for the Ministry of Health by New Zealand company Rush Digital and partially relies on an Amazon Web Services platform. It was formally launched on 20 May 2020. Some people were able to download it from App Store on 19 May and a Health Ministry spokesperson said later that it had been submitted to App Store and Google Play that evening and that there can be a variation in time between submitting an app and it going live. Prime Minister Jacinda Ardern described the NZ COVID Tracer app as a "digital diary". It worked on mobile devices using Android 7.0 or above, and Apple iOS 12 or above, with future updates to include support for older systems.Immediately following its release, several users encountered difficulties with logging into the app or using it. There were also complaints from the public about iPhones and Android phones running the older, incompatible versions of operating systems being unable to download the app. The Ministry advised people without a compatible mobile device to keep a manual record of the people and places they have visited for contact tracing purposes. The app received an average rating of 2.6 on Google Play Store, with most complaints relating to its poor design and difficulty with finding the app. As of 20 May, more than 92,000 people had downloaded the app and over 1,000 businesses had signed up to it.On 12 June, Stuff reported that many businesses were finding the app clunky to use and had to rely upon secondary apps. A poll conducted on Neighbourly found that less than 37% of respondents had downloaded the COVID Tracer app due to privacy concerns, lack of access to a smartphone, and confusion about how to use it. By 17 June, the Ministry of Health reported that 562,000 people had registered with the COVID Tracer app. 56,552 posters displaying the official QR codes had been created. There have been 1,035,154 poster scans to date.The Ministry of Health released a new version of the app on 30 July 2020 that could be used on phones running Android 6 or iOS 11. Additionally, the update added the ability for users to manually enter location data for places they visit that do not have a QR code displayed.It became compulsory for businesses to display the official QR codes at their doors or reception areas from noon 19 August. By 18 August, more than 234,000 QR posters had been generated but not all clinics and retirement villages had displayed them. From 11:59 pm on 3 September, it became compulsory for all public transport providers, including buses, trains, ferries, ride-share vehicles and train operators, to provide the QR codes for passengers to use.On 23 November 2020, the Ministry of Health released an update to the app that would no longer require users to sign up for an account or set a password. Existing users will also no longer be required to periodically sign in. 
The dashboard user interface was also improved to make the layout and navigation simpler.As of 25 November, there are 2,381,200 registered users.On 10 December, the COVID Tracer app received an update that would allow contact tracing via Bluetooth. At the same time, the app's source code was released on GitHub. History: 2021 On 30 January, the Ministry of Health recommended that people with an Android 5 device use the similar Rippl app, as NZ COVID Tracer does not work on Android 5.By 8 February 2021, the Health Minister reported that the COVID Tracer app had 2,559,151 users following a Government plea for New Zealanders to keep scanning over summer in order to avoid another outbreak. In addition, there were a total of 173,583,645 poster scans and users have created 7,100,714 manual diary entries.On 22 August, the Government announced that record-keeping including scanning with the COVID Tracer app or manual signing will now be mandatory for most events and businesses at all alert levels in response to the detection of the Delta variant on 17 August 2021.Following the delta outbreak in August 2021, it was noted that the Bluetooth tracking feature was being used sparsely by government contact tracers. History: 2022 On 23 March, Prime Minister Jacinda Ardern announced that people would not need to use their COVID Tracer apps to scan QR codes at businesses and venues from 11:59 pm on 4 April.On 1 April, 1 News reported a decline in NZ COVID Tracer app usage, citing data from the Ministry of Health. Between 1 December 2020 and 1 April 2022, COVID Tracer app usage declined from a peak of 1.5 million devices to below 200,000 on 1 April. However, devices using the associated Bluetooth tracing rose from below 400,000 in December 2020 to below 2.4 million by early April 2022. Contact tracing: The NZ COVID Tracer app allows users to scan official Ministry of Health QR codes at businesses, public buildings and other organisations to track where they have been, for contact tracing purposes. People can also register their contact tracing details on the official NZ COVID Tracer website. Information on people's movements will be stored for 60 days by the Ministry of Health securely on the user's device before being automatically deleted. Initially the period of retention for scan data was 31 days, however an update to the app on 9 September increased this to 60 days to make it easier for contact tracers to establish links between cases of COVID-19.First-time users are required to enter their name, phone number, email address and create a password. They will then receive a six digit code that allows them to complete the registration process. The app also allows users to store their contact details and physical address, which is provided to the National Close Contact Service (NCCS) in order to facilitate contact tracing in the event that the user is identified as a close contact of someone who has contracted COVID-19. 
The COVID Tracer app also supports two-factor authentication.While the NZ COVID Tracer app is currently only available in English, the New Zealand Government intends to include support for the Māori, Chinese, and several unspecified Pacific languages.On 10 June, the Government announced that it would be updating the NZ COVID Tracer app to allow the app to contact users who may have been exposed to COVID-19 and giving them the option of voluntarily sending their location history to public health officials.On 10 December, the COVID Tracer app received a Bluetooth update that would create an anonymised record of every person the user has ever been near, bringing New Zealand in line with other countries such as Singapore which have been using Bluetooth contact tracing since March 2020. Using technology developed by Google and Apple, the Bluetooth upgrade also allows the swapping of "randomised keys" with other phones carrying the updated COVID Tracer app. Privacy and oversight: The NZ COVID Tracer app was developed by the Ministry of Health in consultation with the Privacy Commissioner and has also undergone independent security testing. Any personal information and contact details registered on the Tracer app are provided to the National Close Contact Service. This information is retained for public health purposes only and is not shared with agencies outside the health sector. Any information entered through the Tracer app including the locations that users sign into is stored securely on the phone and automatically deleted after 60 days. Users have the right to share information with contact tracers.
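As a toy illustration of the digital-diary behaviour described above (scan entries retained for 60 days and then automatically deleted), here is a minimal Python sketch; it models only the retention rule, is not the app's actual implementation, and every name in it is invented.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=60)  # entries older than this are dropped

class DigitalDiary:
    """Toy model of a scan diary with automatic 60-day expiry."""
    def __init__(self):
        self.entries = []  # list of (timestamp, location_id)

    def scan(self, location_id, now=None):
        now = now or datetime.now()
        self.entries.append((now, location_id))
        self.prune(now)

    def prune(self, now):
        cutoff = now - RETENTION
        self.entries = [(t, loc) for t, loc in self.entries if t >= cutoff]

diary = DigitalDiary()
diary.scan("cafe-qr-poster-1234")   # hypothetical QR poster identifier
print(len(diary.entries))
```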
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Generalized vaccinia** Generalized vaccinia: Generalized vaccinia is a cutaneous condition that occurs 6–9 days after vaccination, characterized by a generalized eruption of skin lesions, and caused by the vaccinia virus.: 391
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**First passage percolation** First passage percolation: First passage percolation is a mathematical method used to describe the paths reachable in a random medium within a given amount of time. Introduction: First passage percolation is one of the most classical areas of probability theory. It was first introduced by John Hammersley and Dominic Welsh in 1965 as a model of fluid flow in a porous medium. It is part of percolation theory, and classical Bernoulli percolation can be viewed as a subset of first passage percolation. Introduction: Most of the beauty of the model lies in its simple definition (as a random metric space) and the property that several of its fascinating conjectures do not require much effort to be stated. Most of the time, the goal of first passage percolation is to understand a random distance on a graph, where weights are assigned to edges. Most questions are tied either to finding the path with the least weight between two points, known as a geodesic, or to understanding how the random geometry behaves at large scales. Mathematics: As is the case in percolation theory in general, many of the problems related to first passage percolation involve finding optimal routes or optimal times. The model is defined as follows. Let G be a graph. We place a non-negative random variable t(e), called the passage time of the edge e, at each nearest-neighbor edge of the graph G. The collection t(e) is usually assumed to be independent and identically distributed, but there are variants of the model. The random variable t(e) is interpreted as the time or the cost needed to traverse the edge e. Since each edge in first passage percolation has its own individual weight (or time), we can write the total time of a path γ as the sum of the weights of its edges, T(γ) = Σe∈γ t(e). Mathematics: Given two vertices x, y of G, one then sets T(x,y) := infγ T(γ), where the infimum is over all finite paths γ that start at x and end at y. The function T induces a random pseudo-metric on G. The most famous model of first passage percolation is on the lattice Zd. One of its most notorious questions is "What does a ball of large radius look like?". This question was raised in the original paper of Hammersley and Welsh in 1969 and gave rise to the Cox-Durrett limit shape theorem in 1981. Although the Cox-Durrett theorem provides existence of the limit shape, not many properties of this set are known. For instance, it is expected that under mild assumptions this set should be strictly convex. As of 2016, the best result is the existence of the Auffinger-Damron differentiability point in the Cox-Liggett flat edge case. There are also some specific examples of first passage percolation that can be modeled using Markov chains. For example, a complete graph can be described using Markov chains and recursive trees, and 2-width strips can be described using a Markov chain and solved using a Harris chain. Applications: First passage percolation is well known for giving rise to other tools of mathematics, including the Subadditive Ergodic Theorem, a fundamental result in ergodic theory. Applications: Outside mathematics, the Eden growth model is used to model bacterial growth and deposition of material. Another example is comparing a minimized cost from the Vickrey–Clarke–Groves auction (VCG-auction) to a minimized path from first passage percolation to gauge how pessimistic the VCG-auction is at its lower limit. Both problems are solved similarly and one can find distributions to use in auction theory.
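For readers who want to experiment, the passage-time pseudo-metric on a finite piece of the lattice Z2 can be sampled directly: draw i.i.d. edge weights and run Dijkstra's algorithm from a source vertex. The Python sketch below is only an illustration of the definition above; the grid size, the Exp(1) weight distribution and all names are choices made for the example, not part of the model's definition.

```python
import heapq
import random

def first_passage_times(n, source=(0, 0), seed=0):
    """Sample i.i.d. Exp(1) passage times t(e) on the edges of an n x n grid
    and return T(source, v) for every vertex v, via Dijkstra's algorithm."""
    rng = random.Random(seed)
    weights = {}  # canonical edge -> sampled passage time t(e)

    def t(u, v):
        edge = (min(u, v), max(u, v))
        if edge not in weights:
            weights[edge] = rng.expovariate(1.0)
        return weights[edge]

    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, (x, y) = heapq.heappop(heap)
        if d > dist.get((x, y), float("inf")):
            continue  # stale heap entry
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nb[0] < n and 0 <= nb[1] < n:
                nd = d + t((x, y), nb)
                if nd < dist.get(nb, float("inf")):
                    dist[nb] = nd
                    heapq.heappush(heap, (nd, nb))
    return dist

# T(origin, opposite corner) for one sample of the random environment:
print(first_passage_times(50)[(49, 49)])
```

Plotting the set of vertices v with T(0, v) ≤ r for a large radius r gives a picture of the random ball whose limit shape is the subject of the Cox-Durrett theorem mentioned above.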
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alper Demir** Alper Demir: Alper Demir is a Professor of Electrical Engineering at Koç University in Istanbul, Turkey. He was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2012 for his contributions to stochastic modeling and analysis of phase noise. Education and career: After graduating from Ankara Science High School in 1987, Demir studied electrical engineering at Bilkent University, where he got his B.S. in 1991, and then immigrated to the United States where he received his M.S. and Ph.D. degrees from the University of California, Berkeley in 1994 and 1997, respectively.Prior to getting to a full career at Koç University, Demir worked on some summer jobs in Motorola (1995) and Cadence Design Systems (1996). He then joined Bell Labs, serving there from 1997 to 2000 and until 2002 worked at CeLight. While there, he was responsible for creation of as many as six inventions and was co-author of two books about nonlinear noise analysis and analog design methodologies. Between summer of 2002 and August 2005, he worked at the Research Laboratory of Electronics at MIT, and from 2009 to 2010, served as visiting professor at the University of California, Berkeley. Following it, Demir moved to Koç University, where, after serving as assistant professor at their Department of Electrical and Electronics Engineering from 2002 to 2007, he was promoted to associate the following year.From February to June 2017, Demir was a visiting scientist at Massachusetts Institute of Technology and between 2011 and 2017, served as an associate editor of the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. Demir's research interests are in the fields of computational and quantitative biology, stochastic and nonlinear dynamical systems in electronics and biology, and the study of noises in electronic, optical, communication and biological systems.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DDX23** DDX23: Probable ATP-dependent RNA helicase DDX23 is an enzyme that in humans is encoded by the DDX23 gene.This gene encodes a member of the DEAD box protein family. DEAD box proteins, characterized by the conserved motif Asp-Glu-Ala-Asp (DEAD), are putative RNA helicases. They are implicated in a number of cellular processes involving alteration of RNA secondary structure, such as translation initiation, nuclear and mitochondrial splicing, and ribosome and spliceosome assembly. Based on their distribution patterns, some members of this family are believed to be involved in embryogenesis, spermatogenesis, and cellular growth and division. The protein encoded by this gene is a component of the U5 snRNP complex; it may facilitate conformational changes in the spliceosome during nuclear pre-mRNA splicing. An alternatively spliced transcript variant has been found for this gene, but its biological validity has not been determined.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GalP (protein)** GalP (protein): The galactose permease or GalP found in Escherichia coli is an integral membrane protein involved in the transport of monosaccharides, primarily hexoses, for utilization by E. coli in glycolysis and other metabolic and catabolic pathways (3,4). It is a member of the Major Facilitator Super Family (MFS) and is homologue of the human GLUT1 transporter (4). Below you will find descriptions of the structure, specificity, effects on homeostasis, expression, and regulation of GalP along with examples of several of its homologues. Structure: Galactose Permease (GalP), is a member of the Major Facilitator Super Family (MFS) and therefore has structural similarities to the other members of this super family such as GLUT1 (4). All members of the MFS have 12 membrane spanning alpha(α)-helices with both the C- and N-termini located on the cytoplasmic side of the membrane (4). Figure 1a (3) depicts how the 12 helices are divided into two halves, that are pseudo-symmetric, of 6 helices which are attached by a long hydrophilic cytoplasmic loop between helix 6 and helix 7 (2,3,4). These two halves come together to form a pore for substrate transport, in GalP, the substrates are primarily galactose, glucose, and H+. GalP monomers have a pore of approximately 10Å in diameter, which is consistent with the pore sizes found in other members of the MFS, between 10-15Å (4). GalP has been found as an oligomer formed by a homotrimer of GalP monomers that exhibits p3 or 3-fold rotational symmetry (Figure 1b-c) (4). GalP is the first member of the MFS that has been found as a trimer and to be biologically active in its trimeric form; it is thought that the GalP oligomer is formed for stability (4). Specificity: GalP is a monosaccharide transporter that uses a chemiosmotic mechanism to transport its substrates into the cytoplasm of E. coli (1). Glucose, galactose and other hexoses are transported by GalP by the use of the proton gradient produced by the electron transport chain and reversible ATPase (1). GalP can bind specifically to the hexoses with preferential binding of galactose and glucose through the pores in each monomer (2,3). It transports these sugars at faster rates with a proton gradient but can still transport them in a leaky fashion without a proton gradient present (4). As stated before GalP shares similarities with GLUT1 and other members of the MFS and like GLUT1, GalP can be inhibited by the antibiotics cytochalasin B and forskolin (Figure 1a) (3), which competitively bind to the pore blocking sugar transport into the cell (2,3,4). Forskolin is a structural homologue of D-galactose (Figure 1a) (3) and therefore can bind with a similar affinity to the transporter. Cytochalasin B may bind to an asparagine residue (Asn394) in the pore, blocking saccharide uptake, which is also found in the GLUT1 transporter (2,3). GalP can transport lactose or fructose but with low affinity, only allowing these sugars to "leak" across the membrane when glucose, galactose, or other hexoses aren't present for transport (4). Homeostasis: The GalP symporter links galactose and proton import, using the favorable proton concentration gradient to move galactose against its concentration gradient. However, this mechanism, if in isolation, would result in acidification of the cytoplasm and cessation of galactose import(14). To prevent this, E. coli utilizes ion pumps designed to raise intracellular pH (13,14). 
During electron transport (a key step in ATP production in respiration), energy harnessed from electrons is used to pump protons into the periplasmic space to build a proton motive force. Primary proton pumps, responsible for pumping protons out of the cytoplasm, can be active without the synthesis of ATP and are the primary mechanism through which protons are exported (13,14). Coupling galactose/proton import with proton export therefore maintains pH homeostasis. Because protons are charged particles, their import or export could disrupt the membrane potential of the cell (14). However, simultaneous import and export of protons results in no change in the net charge of the cell, and thus no net change in membrane potential. Regulation/Expression: The GalP/H+ symporter is the galactose permease encoded by the galP gene of the Escherichia coli genome. Galactose is an alternative carbon source to the preferred glucose. The cAMP/CRP catabolite repression regulator is most likely involved in the regulation of GalP expression (Figure 2) (9). The two proteins responsible for inhibiting transcription from the gal regulon are GalR and GalS (Figure 4) (11). GalR and GalS have very similar primary structure sequences and have the same binding sites on the operator (11). In the presence of D-galactose, the repressors GalR and GalS are inhibited (5, 11). However, when GalP is not required (i.e. when glucose is available), GalR/GalS bind the promoter operator site, blocking transcription and preventing cAMP-CRP activation (11). GalS is seen to bind only in the presence of GalR, so both of these proteins are required for repression (11). cAMP modulates CRP activity at the promoter. The cAMP-CRP complex activates the gal regulon and is responsible for upregulation of GalP (Figure 2) (9,11). GalP is also repressed in the presence of glucose, since the cell prefers glucose over galactose (7). One study also implicates NagC, a protein encoded by the nagC gene that is responsible for N-acetylglucosamine repression, in this regulation (5). This study suggests that NagC cooperates with GalR and GalS by binding to a single high-affinity site upstream of the galP promoter in order to suppress gal regulon transcription (5). Other Bacteria Symporters: Several other symporters have been identified in E. coli and in other bacteria. E. coli has a well-studied GltS glutamate/Na+ symporter that aids in the uptake of glutamate into the cell along with an influx of sodium ions. It also has a serine-threonine symporter, SstT, that likewise uses an influx of sodium ions for solute uptake. Other Bacteria Symporters: A Na+/glucose symporter (SglT) has been identified in Vibrio parahaemolyticus (10). Sodium ions induced the cells' uptake of glucose in a study of phosphotransferase-system (PTS) mutants (10). Clostridium difficile has a symporter homologous to the V. parahaemolyticus SglT (6). A citrate/Na+ symporter, CitS, appears to be common to Vibrio cholerae, Salmonella Typhi, and Klebsiella pneumoniae (6). This symporter uses the influx of sodium ions to bring citrate into the cell, an important substrate for metabolic processes such as the decarboxylation of oxaloacetate (6). An H+/amino acid symporter, BrnQ, can be found in Lactobacillus delbrueckii, and Pseudomonas aeruginosa has the BraB symporter for substrates such as glutamate (6). Other Bacteria Symporters: Solute/ion symporters are commonly found in bacteria because they are so important.
Homeostasis and regulated uptake for metabolic pathways are essential for bacterial survival. GLUT-1: A Eukaryotic Homolog: GalP is homologous to GLUT-1 found in mammalian cells (12). Both transporters are MFS transporters and share 29% sequence identity (4). GLUT-1 is a glucose transporter present in most mammalian cells (Figure 5) (12). Its structure is nearly identical to that of GalP, possessing cytoplasmic amino and carboxy termini, twelve membrane-spanning α helices, a periplasmic glycosylation site between helices 1 and 2, and a cytoplasmic α-helical loop between helices 6 and 7 (12). GLUT-1 ranges from 45 to 55 kDa; the size variation depends upon the extent of glycosylation (12). GLUT-1: A Eukaryotic Homolog: While GLUT-1 is found in most mammalian cells, certain tissue types express this transporter more than others. GLUT-1 is expressed at high levels on erythrocytes, embryonic cells, fibroblasts, and endothelial cells (12). GLUT-1 is also one of the main transporters involved in moving glucose across the blood-brain barrier (12). GLUT-1: A Eukaryotic Homolog: Generally, GLUT-1 acts as a facilitative transporter of glucose, transporting glucose along its concentration gradient. When glucose binds to GLUT-1, it stimulates a conformational change, allowing glucose to be released on the opposite side of the membrane (4,12). GLUT-1 is a bidirectional transporter and possesses glucose binding sites accessible on both the cytoplasmic and extracellular faces (4,12). On the rare occasion that GLUT-1 transports glucose against its concentration gradient, GLUT-1 uses an energy source, typically ATP, to move the glucose. Like GalP, GLUT-1 is inhibited by the binding of cytochalasin B and forskolin (12).
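The energetics described in the Homeostasis section above can be made concrete with a back-of-the-envelope calculation. The sketch below assumes a 1:1 H+/galactose stoichiometry and uses illustrative, typical textbook values for the membrane potential and pH gradient; these numbers are assumptions for the example, not measurements from the references cited in this article.

```python
import math

# Minimal sketch of 1:1 H+/galactose symport energetics (illustrative values only).
R = 8.314          # gas constant, J / (mol K)
T = 298.0          # temperature, K
F = 96485.0        # Faraday constant, C / mol
delta_psi = -0.150             # membrane potential in volts (inside negative), assumed
pH_in, pH_out = 7.6, 7.1       # assumed cytoplasmic and periplasmic pH

# Proton motive force (volts): delta_p = delta_psi - (2.303 RT / F) * (pH_in - pH_out)
delta_p = delta_psi - (2.303 * R * T / F) * (pH_in - pH_out)

# At equilibrium for a 1:1 symporter, RT * ln([gal]_in / [gal]_out) = -F * delta_p,
# so the largest galactose accumulation ratio the gradient can support is:
max_ratio = math.exp(-F * delta_p / (R * T))

print(f"proton motive force ~ {delta_p * 1000:.0f} mV")
print(f"max galactose accumulation ratio ~ {max_ratio:.0f}x")
```

With these assumed values the proton motive force comes out near -180 mV, enough in principle to accumulate galactose roughly a thousandfold inside the cell, which is why coupling sugar uptake to proton import is worthwhile for the bacterium.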
**Flying qualities** Flying qualities: Flying qualities is one of the three principal regimes in the science of flight test, which also includes performance and systems. Flying qualities involves the study and evaluation of the stability and control characteristics of an aircraft. They have a critical bearing on the safety of flight and on the ease of controlling an airplane in steady flight and in maneuvers. Relation to stability: To understand the discipline of flying qualities, the concept of stability should be understood. Stability can be defined only when the vehicle is in trim; that is, there are no unbalanced forces or moments acting on the vehicle to cause it to deviate from steady flight. If this condition exists, and if the vehicle is disturbed, stability refers to the tendency of the vehicle to return to the trimmed condition. If the vehicle initially tends to return to a trimmed condition, it is said to be statically stable. If it continues to approach the trimmed condition without overshooting, the motion is called a subsidence. If the motion causes the vehicle to overshoot the trimmed condition, it may oscillate back and forth. If this oscillation damps out, the motion is called a damped oscillation and the vehicle is said to be dynamically stable. On the other hand, if the motion increases in amplitude, the vehicle is said to be dynamically unstable. Relation to stability: The theory of stability of airplanes was worked out by G. H. Bryan in England in 1904. This theory is essentially equivalent to the theory taught to aeronautical students today and was a remarkable intellectual achievement considering that at the time Bryan developed the theory, he had not even heard of the Wright brothers' first flight. Because of the complication of the theory and the tedious computations required in its use, it was rarely applied by airplane designers. Obviously, to fly successfully, pilotless airplanes had to be dynamically stable. The airplane flown by the Wright brothers, and most airplanes flown thereafter, were not stable, but by trial and error, designers developed a few planes that had satisfactory flying qualities. Many other airplanes, however, had poor flying qualities, which sometimes resulted in crashes. Relation to stability: Handling qualities are those characteristics of a flight vehicle that govern the ease and precision with which a pilot is able to perform a flying task. This includes the human-machine interface. The way in which particular vehicle factors affect flying qualities has been studied in aircraft for decades, and reference standards for the flying qualities of both fixed-wing aircraft and rotary-wing aircraft have been developed and are now in common use. These standards define a subset of the dynamics and control design space that provides good handling qualities for a given vehicle type and flying task. Historical development: Bryan showed that the stability characteristics of airplanes could be separated into longitudinal and lateral groups with the corresponding motions called modes of motion. These modes of motion were either aperiodic, which means that the airplane steadily approaches or diverges from a trimmed condition, or oscillatory, which means that the airplane oscillates about the trim condition. 
The longitudinal modes of a statically stable airplane following a disturbance were shown to consist of a long-period oscillation called the phugoid oscillation, usually with a period in seconds about one-quarter of the airspeed in miles per hour and a short-period oscillation with a period of only a few seconds. The lateral motion had three modes of motion: an aperiodic mode called the spiral mode that could be a divergence or subsidence, a heavily damped aperiodic mode called the roll subsidence, and a short-period oscillation, usually poorly damped, called the Dutch roll mode. Historical development: Some early airplane designers attempted to make airplanes that were dynamically stable, but it was found that the requirements for stability conflicted with those for satisfactory flying qualities. Meanwhile, no information was available to guide the designer as to just what characteristics should be incorporated to provide satisfactory flying qualities. Historical development: By the 1930s, there was a general feeling that airplanes should be dynamically stable, but some aeronautical engineers were starting to recognize the conflict between the requirements for stability and flying qualities. To resolve this question, Edward Warner, who was working as a consultant to the Douglas Aircraft Company on the design of the DC-4, a large four-engine transport airplane, made the first effort in the United States to write a set of requirements for satisfactory flying qualities. Dr. Warner, a member of the main committee of the NACA, also requested that a flight study be made to determine the flying qualities of an airplane along the lines of the suggested requirements. This study was conducted by Hartley A. Soulé of Langley. Entitled Preliminary Investigation of the Flying Qualities of Airplanes, Soulé's report showed several areas in which the suggested requirements needed revision and showed the need for more research on other types of airplanes. As a result, a program was started by Robert R. Gilruth with Melvin N. Gough as the chief test pilot. Evaluation of flying qualities: The technique for the study of flying qualities requirements used by Gilruth was first to install instruments to record relevant quantities such as control positions and forces, airplane angular velocities, linear accelerations, airspeed, and altitude. Then a program of specified flight conditions and maneuvers was flown by an experienced test pilot. After the flight, data were transcribed from the records and the results were correlated with pilot opinion. This approach would be considered routine today, but it was a notable original contribution by Gilruth that took advantage of the flight recording instruments already available at Langley and the variety of airplanes available for tests under comparable conditions. Evaluation of flying qualities: An important quantity in flying qualities measurements in turns or pull-ups is the variation of control force on the control stick or wheel with the value of acceleration normal to the flight direction expressed in g units. This quantity is usually called the force per g. Relation to Spacecraft: A new generation of spacecraft now under development by NASA to replace the Space Shuttle and return astronauts to the Moon will have a manual control capability for several mission tasks, and the ease and precision with which pilots can execute these tasks will have an important effect on performance, mission risk and training costs. 
No reference standards currently exist for flying qualities of piloted spacecraft.
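As a rough illustration of the two quantities discussed above, the phugoid-period rule of thumb and the "force per g", here is a minimal Python sketch. The quarter-of-airspeed relation is only the approximation quoted in the text, and the numerical example values are hypothetical.

```python
def phugoid_period_rule_of_thumb(airspeed_mph: float) -> float:
    """Rough phugoid period in seconds: about one quarter of the airspeed in mph,
    as quoted above. A rule of thumb, not a substitute for the full dynamics."""
    return airspeed_mph / 4.0

def force_per_g(stick_force_change_lb: float, normal_accel_change_g: float) -> float:
    """Stick-force gradient: change in control force divided by the change in
    normal acceleration (in g units), the 'force per g' used in flying-qualities work."""
    return stick_force_change_lb / normal_accel_change_g

# Hypothetical numbers for illustration:
print(phugoid_period_rule_of_thumb(200.0))   # ~50 s phugoid period at 200 mph
print(force_per_g(20.0, 2.0))                # 10 lb per g for a 1 g -> 3 g pull-up
```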
**Geomagnetic secular variation** Geomagnetic secular variation: Geomagnetic secular variation refers to changes in the Earth's magnetic field on time scales of about a year or more. These changes mostly reflect changes in the Earth's interior, while more rapid changes mostly originate in the ionosphere or magnetosphere. The geomagnetic field changes on time scales from milliseconds to millions of years. Shorter time scales mostly arise from currents in the ionosphere and magnetosphere, and some changes can be traced to geomagnetic storms or daily variations in currents. Changes over time scales of a year or more mostly reflect changes in the Earth's interior, particularly the iron-rich core. These changes are referred to as secular variation. In most models, the secular variation is the first time derivative of the magnetic field B, often written Ḃ; the second time derivative, B̈, is the secular acceleration. Recent changes: Secular variation can be observed in measurements at magnetic observatories, some of which have been operating for hundreds of years (the Kew Observatory, for example). Over such a time scale, magnetic declination is observed to vary over tens of degrees. To analyze global patterns of change in the geomagnetic field, geophysicists fit the field data to a spherical harmonic expansion (see International Geomagnetic Reference Field). The terms in this expansion can be divided into a dipolar part, like the field around a bar magnet, and a nondipolar part. The dipolar part dominates the geomagnetic field and determines the direction of the geomagnetic poles. The direction and intensity of the dipole change over time. Over the last two centuries the dipole strength has been decreasing at a rate of about 6.3% per century. At this rate of decrease, the field would reach zero in about 1600 years. However, this strength is about average for the last 7,000 years, and the current rate of change is not unusual. A prominent feature in the non-dipolar part of the secular variation is a westward drift at a rate of about 0.2 degrees per year. This drift is not the same everywhere and has varied over time. The globally averaged drift has been westward since about 1400 AD but eastward between about 1000 AD and 1400 AD. Paleomagnetic secular variation: Changes that predate magnetic observatories are recorded in archaeological and geological materials. Such changes are referred to as paleomagnetic secular variation or paleosecular variation (PSV). The records typically include long periods of small change with occasional large changes reflecting geomagnetic excursions and geomagnetic reversals.
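The two rates quoted above, a dipole decrease of about 6.3% per century and a westward drift of about 0.2° per year, can be turned into the back-of-the-envelope figures mentioned in the text. A small sketch, assuming simple linear extrapolation of the dipole decay:

```python
# Back-of-the-envelope checks of the rates quoted above (linear extrapolation only).
decay_per_century = 0.063                  # fraction of dipole strength lost per century
centuries_to_zero = 1.0 / decay_per_century
years_to_zero = 100.0 * centuries_to_zero  # ~1587 years, i.e. "about 1600 years"

drift_deg_per_year = 0.2                   # average westward drift of non-dipole features
years_per_circuit = 360.0 / drift_deg_per_year   # ~1800 years for one full circuit

print(f"linear extrapolation: dipole reaches zero in ~{years_to_zero:.0f} years")
print(f"westward drift: a full 360-degree circuit takes ~{years_per_circuit:.0f} years")
```

The ~1600-year figure is purely an extrapolation of the present trend; as the article notes, the current strength and rate of change are not unusual over the last several thousand years.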
**Programming by demonstration** Programming by demonstration: In computer science, programming by demonstration (PbD) is an end-user development technique for teaching a computer or a robot new behaviors by demonstrating the task to be transferred directly, instead of programming it through machine commands. Programming by demonstration: The terms programming by example (PbE) and programming by demonstration (PbD) appeared in software development research as early as the mid 1980s as ways to define a sequence of operations without having to learn a programming language. The usual distinction in the literature between these terms is that in PbE the user gives a prototypical product of the computer execution, such as a row in the desired results of a query, while in PbD the user performs a sequence of actions that the computer must repeat, generalizing it so it can be used on different data sets. Programming by demonstration: These two terms were at first undifferentiated, but PbE then tended to be adopted mostly by software development researchers while PbD tended to be adopted by robotics researchers. Today, PbE refers to an entirely different concept, supported by new programming languages that are similar to simulators. This framework can be contrasted with Bayesian program synthesis. Robot programming by demonstration: The PbD paradigm is attractive to the robotics industry first of all because of the costs involved in the development and maintenance of robot programs. In this field, the operator often has implicit knowledge of the task to achieve (he or she knows how to do it) but does not usually have the programming skills (or the time) required to reconfigure the robot. Demonstrating how to achieve the task through examples thus allows the skill to be learned without explicitly programming each detail. Robot programming by demonstration: The first PbD strategies proposed in robotics were based on teach-in, guiding or play-back methods that consisted basically of moving the robot (through a dedicated interface or manually) through a set of relevant configurations that the robot should adopt sequentially (position, orientation, state of the gripper). The method was then progressively improved, focusing principally on teleoperation control and on the use of different interfaces such as vision. Robot programming by demonstration: However, these PbD methods still used direct repetition, which was useful in industry only when conceiving an assembly line using exactly the same product components. To apply this concept to products with different variants, or to apply the programs to new robots, generalization became a crucial issue. To address it, the first attempts at generalizing the skill were mainly based on the help of the user through queries about the user's intentions. Then, different levels of abstraction were proposed to resolve the generalization issue, basically dichotomized into learning methods at a symbolic level or at a trajectory level. Robot programming by demonstration: The development of humanoid robots naturally brought a growing interest in robot programming by demonstration. As a humanoid robot is by its nature expected to adapt to new environments, not only is a human-like appearance important, but the algorithms used for its control require flexibility and versatility.
Due to the continuously changing environments and to the huge variety of tasks that a robot is expected to perform, the robot requires the ability to continuously learn new skills and adapt existing skills to new contexts. Robot programming by demonstration: Research in PbD also progressively departed from its original, purely engineering perspective to adopt an interdisciplinary approach, taking insights from neuroscience and the social sciences to emulate the process of imitation in humans and animals. With the increasing consideration of this body of work in robotics, the notion of robot programming by demonstration (also known as RPD or RbD) was progressively replaced by the more biological label of learning by imitation. Robot programming by demonstration: Neurally-imprinted Stable Vector Fields (NiVF) Neurally-imprinted Stable Vector Fields (NiVF) were introduced as a novel learning scheme during ESANN 2013 and show how to imprint vector fields into neural networks such as Extreme Learning Machines (ELMs) in a guaranteed stable manner; the paper won the best student paper award. The networks represent movements, where asymptotic stability is incorporated through constraints derived from Lyapunov stability theory. It is shown that this approach successfully performs stable and smooth point-to-point movements learned from human handwriting movements. Robot programming by demonstration: It is also possible to learn the Lyapunov candidate that is used for stabilization of the dynamical system. For this reason, a neural learning scheme is needed that estimates stable dynamical systems from demonstrations in a two-stage process: first, a data-driven Lyapunov function candidate is estimated; second, stability is incorporated by means of a novel method to respect local constraints in the neural learning. This allows stable dynamics to be learned while simultaneously sustaining the accuracy of the dynamical system and robustly generating complex movements. Robot programming by demonstration: Diffeomorphic Transformations Diffeomorphic transformations turn out to be particularly suitable for substantially increasing the learnability of dynamical systems for robotic motions. The stable estimator of dynamical systems (SEDS) is an interesting approach to learning time-invariant systems to control robotic motions. However, it is restricted to dynamical systems with only quadratic Lyapunov functions. The newer approach Tau-SEDS overcomes this limitation in a mathematically elegant manner. Robot programming by demonstration: Parameterized skills After a task has been demonstrated by a human operator, the trajectory is stored in a database. Parameterized skills provide easier access to this raw data. A skill queries the database and generates a trajectory. For example, the skill "opengripper(slow)" is first sent to the motion database and, in response, the stored movement of the robot arm is provided. The parameters of a skill allow the policy to be modified to fulfill external constraints. Robot programming by demonstration: A skill is an interface between task names, given in natural language, and the underlying spatiotemporal movement in 3D space, which consists of points. Single skills can be combined into a task to define longer motion sequences from a high-level perspective. For practical applications, different actions are stored in a skill library. To increase the abstraction level further, skills can be converted into dynamic movement primitives (DMPs).
They generate a robot trajectory on the fly, one that was not known at the time of the demonstration. This helps to increase the flexibility of the solver. Non-robotic use: For end users who want to automate a workflow in a complex tool (e.g. Photoshop), the simplest case of PbD is the macro recorder.
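To make the dynamic movement primitives mentioned above more concrete, here is a minimal one-dimensional sketch in the spirit of the standard DMP formulation (a canonical system plus a learned forcing term fitted to a demonstration). The gains, basis-function count and the synthetic demonstration signal are illustrative choices, not values from any particular system described in this article.

```python
import numpy as np

# Minimal 1-D discrete dynamic movement primitive (DMP) sketch; parameters are illustrative.
dt, tau = 0.01, 1.0
alpha_z, beta_z, alpha_x = 25.0, 25.0 / 4.0, 3.0
n_basis = 20

# A smooth synthetic curve stands in for a recorded human demonstration.
t = np.arange(0.0, 1.0 + dt, dt)
y_demo = np.sin(0.5 * np.pi * t)            # moves from 0 to 1
yd_demo = np.gradient(y_demo, dt)
ydd_demo = np.gradient(yd_demo, dt)
y0, g = y_demo[0], y_demo[-1]

# Canonical system x(t) and Gaussian basis functions placed along it.
x = np.exp(-alpha_x * t / tau)
centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
widths = 1.0 / (np.diff(centers, append=centers[-1] / 2.0) ** 2 + 1e-8)
psi = np.exp(-widths[None, :] * (x[:, None] - centers[None, :]) ** 2)

# Target forcing term from the demonstration, then per-basis weighted regression.
f_target = tau ** 2 * ydd_demo - alpha_z * (beta_z * (g - y_demo) - tau * yd_demo)
s = x * (g - y0)
w = np.array([np.sum(s * psi[:, i] * f_target) / (np.sum(s ** 2 * psi[:, i]) + 1e-10)
              for i in range(n_basis)])

# Roll the DMP out again (Euler integration); changing g or tau re-targets the motion.
y, z, xs = y0, 0.0, 1.0
rollout = []
for _ in t:
    psi_x = np.exp(-widths * (xs - centers) ** 2)
    f = (psi_x @ w) / (psi_x.sum() + 1e-10) * xs * (g - y0)
    z += (alpha_z * (beta_z * (g - y) - z) + f) / tau * dt
    y += z / tau * dt
    xs += (-alpha_x * xs / tau) * dt
    rollout.append(y)

print(f"demonstrated endpoint {y_demo[-1]:.3f}, reproduced endpoint {rollout[-1]:.3f}")
```

The point attractor guarantees convergence to the goal, while the learned forcing term shapes the path taken, which is what makes DMPs a convenient intermediate representation between stored demonstrations and executable trajectories.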
**Mycoplasma laboratorium** Mycoplasma laboratorium: Mycoplasma laboratorium or Synthia refers to a synthetic strain of bacterium. The project to build the new bacterium has evolved since its inception. Initially the goal was to identify a minimal set of genes that are required to sustain life from the genome of Mycoplasma genitalium, and rebuild these genes synthetically to create a "new" organism. Mycoplasma genitalium was originally chosen as the basis for this project because at the time it had the smallest number of genes of all organisms analyzed. Later, the focus switched to Mycoplasma mycoides and took a more trial-and-error approach. To identify the minimal genes required for life, each of the 482 genes of M. genitalium was individually deleted and the viability of the resulting mutants was tested. This resulted in the identification of a minimal set of 382 genes that theoretically should represent a minimal genome. In 2008 the full set of M. genitalium genes was constructed in the laboratory with watermarks added to identify the genes as synthetic. However, M. genitalium grows extremely slowly, and M. mycoides was chosen as the new focus to accelerate experiments aimed at determining the set of genes actually needed for growth. In 2010, the complete genome of M. mycoides was successfully synthesized from a computer record and transplanted into an existing cell of Mycoplasma capricolum that had had its DNA removed. It is estimated that the synthetic genome used for this project cost US$40 million and 200 man-years to produce. The new bacterium was able to grow and was named JCVI-syn1.0, or Synthia. After additional experimentation to identify a smaller set of genes that could produce a functional organism, JCVI-syn3.0 was produced, containing 473 genes. Of these, 149 genes are of unknown function. Since the genome of JCVI-syn3.0 is novel, it is considered the first truly synthetic organism. Minimal genome project: The production of Synthia is an effort in synthetic biology at the J. Craig Venter Institute by a team of approximately 20 scientists headed by Nobel laureate Hamilton Smith and including DNA researcher Craig Venter and microbiologist Clyde A. Hutchison III. The overall goal is to reduce a living organism to its essentials and thus understand what is required to build a new organism from scratch. The initial focus was the bacterium M. genitalium, an obligate intracellular parasite whose genome consists of 482 genes comprising 582,970 base pairs, arranged on one circular chromosome (at the time the project began, this was the smallest genome of any known natural organism that can be grown in free culture). They used transposon mutagenesis to identify genes that were not essential for the growth of the organism, resulting in a minimal set of 382 genes. This effort was known as the Minimal Genome Project. Choice of organism: Mycoplasma Mycoplasma is a genus of bacteria of the class Mollicutes in the division Mycoplasmatota (formerly Tenericutes), characterised by the lack of a cell wall (making it Gram negative) due to its parasitic or commensal lifestyle. Choice of organism: In molecular biology, the genus has received much attention, both for being a notoriously difficult-to-eradicate contaminant in mammalian cell cultures (it is resistant to beta-lactams and other antibiotics), and for its potential uses as a model organism due to its small genome size. The choice of genus for the Synthia project dates to 2000, when Karl Reich coined the phrase Mycoplasma laboratorium.
Choice of organism: Other organisms with small genomes As of 2005, Pelagibacter ubique (an α-proteobacterium of the order Rickettsiales) has the smallest known genome (1,308,759 base pairs) of any free-living organism and is one of the smallest self-replicating cells known. It is possibly the most numerous bacterium in the world (perhaps 10^28 individual cells) and, together with other members of the SAR11 clade, is estimated to make up between a quarter and a half of all bacterial or archaeal cells in the ocean. It was identified in 2002 by rRNA sequences and was fully sequenced in 2005. However, the species is extremely hard to cultivate and does not reach a high growth density in lab culture. Choice of organism: Several newly discovered species have fewer genes than M. genitalium, but are not free-living: many essential genes that are missing in Hodgkinia cicadicola, Sulcia muelleri, Baumannia cicadellinicola (symbionts of cicadas) and Carsonella ruddii (symbiont of the hackberry petiole gall psyllid, Pachypsylla venusta) may be encoded in the host nucleus. The organism with the smallest known set of genes as of 2013 is Nasuia deltocephalinicola, an obligate symbiont. It has only 137 genes and a genome size of 112 kb. Techniques: Several laboratory techniques had to be developed or adapted for the project, since it required synthesis and manipulation of very large pieces of DNA. Techniques: Bacterial genome transplantation In 2007, Venter's team reported that they had managed to transfer the chromosome of the species Mycoplasma mycoides to Mycoplasma capricolum by: isolating the genome of M. mycoides: gentle lysis of cells trapped in agar (molten agar mixed with cells and left to form a gel), followed by pulsed-field gel electrophoresis and isolation of the band of the correct size (circular 1.25Mbp); making the recipient cells of M. capricolum competent: growth in rich media followed by starvation in poor media, where the nucleotide starvation results in inhibition of DNA replication and a change of morphology; and polyethylene glycol-mediated transformation of the circular chromosome into the DNA-free cells, followed by selection. The term transformation is used to refer to insertion of a vector into a bacterial cell (by electroporation or heat shock). Here, transplantation is used akin to nuclear transplantation. Techniques: Bacterial chromosome synthesis In 2008 Venter's group described the production of a synthetic genome, a copy of M. genitalium G37 sequence L43967, by means of a hierarchical strategy: Synthesis → 1kbp: The genome sequence was synthesized by Blue Heron in 1,078 cassettes of 1,080 bp with 80 bp overlaps and NotI restriction sites (an inefficient but infrequent cutter). Ligation → 10kbp: 109 groups of 10 consecutive cassettes were ligated and cloned in E. coli on a plasmid, and the correct permutation was checked by sequencing. Multiplex PCR → 100kbp: 11 groups of 10 consecutive 10kbp assemblies (grown in yeast) were joined by multiplex PCR, using a primer pair for each 10kbp assembly. Isolation and recombination → secondary assemblies were isolated, joined and transformed into yeast spheroplasts without a vector sequence (present in assembly 811-900). The genome of this 2008 result, M. genitalium JCVI-1.0, is published on GenBank as CP001621.1. It is not to be confused with the later synthetic organisms, labelled JCVI-syn, based on M. mycoides. Synthetic genome: In 2010 Venter and colleagues created Mycoplasma mycoides strain JCVI-syn1.0 with a synthetic genome.
Initially the synthetic construct did not work, so to pinpoint the error (which caused a delay of 3 months in the whole project), a series of semi-synthetic constructs was created. The cause of the failure was a single frameshift mutation in DnaA, a replication initiation factor. The purpose of constructing a cell with a synthetic genome was to test the methodology, as a step to creating modified genomes in the future. Using a natural genome as a template minimized the potential sources of failure. Several differences are present in Mycoplasma mycoides JCVI-syn1.0 relative to the reference genome, notably an E. coli transposon IS1 (an infection from the 10kb stage) and an 85bp duplication, as well as elements required for propagation in yeast and residues from restriction sites. There has been controversy over whether JCVI-syn1.0 is a true synthetic organism. While the genome was synthesized chemically in many pieces, it was constructed to match the parent genome closely and transplanted into the cytoplasm of a natural cell. DNA alone cannot create a viable cell: proteins and RNAs are needed to read the DNA, and lipid membranes are required to compartmentalize the DNA and cytoplasm. In JCVI-syn1.0 the two species used as donor and recipient are of the same genus, reducing potential problems of mismatches between the proteins in the host cytoplasm and the new genome. Paul Keim (a molecular geneticist at Northern Arizona University in Flagstaff) noted that "there are great challenges ahead before genetic engineers can mix, match, and fully design an organism's genome from scratch". Synthetic genome: Watermarks A much publicized feature of JCVI-syn1.0 is the presence of watermark sequences. The 4 watermarks (shown in Figure S1 in the supplementary material of the paper) are coded messages written into the DNA, of length 1246, 1081, 1109 and 1222 base pairs respectively. These messages did not use the standard genetic code, in which sequences of 3 DNA bases encode amino acids, but a new code invented for this purpose, which readers were challenged to solve. The content of the watermarks is as follows: Watermark 1: an HTML script which renders in a browser as text congratulating the decoder, and instructions on how to email the authors to prove the decoding. Synthetic genome: Watermark 2: a list of authors and a quote from James Joyce: "To live, to err, to fall, to triumph, to recreate life out of life". Watermark 3: more authors and a quote from Robert Oppenheimer (uncredited): "See things not as they are, but as they might be". Watermark 4: more authors and a quote from Richard Feynman: "What I cannot build, I cannot understand". Synthetic genome: JCVI-syn3.0 In 2016, the Venter Institute used genes from JCVI-syn1.0 to synthesize a smaller genome they call JCVI-syn3.0, which contains 531,560 base pairs and 473 genes. In 1996, after comparing M. genitalium with another small bacterium, Haemophilus influenzae, Arcady Mushegian and Eugene Koonin proposed that there might be a common set of 256 genes which could be a minimal set of genes needed for viability. In this new organism, the number of genes can only be pared down to 473, 149 of which have functions that are completely unknown. As of 2022 the unknown set has been narrowed to about 100. In 2019 a complete computational model of all pathways in the Syn3.0 cell was published, representing the first complete in silico model for a living minimal organism.
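The arithmetic behind the hierarchical assembly strategy described in the Techniques section above is easy to check. The small sketch below is just bookkeeping over the cassette and overlap sizes quoted in the text, not a description of the actual assembly protocol:

```python
def assembled_length(n_pieces: int, piece_bp: int, overlap_bp: int) -> int:
    """Approximate length of a linear assembly of n overlapping pieces:
    every junction between neighbours 'loses' one overlap's worth of sequence."""
    return n_pieces * piece_bp - (n_pieces - 1) * overlap_bp

# Figures quoted above: 1,080 bp cassettes with 80 bp overlaps, joined 10 at a time.
step1 = assembled_length(10, 1080, 80)    # ~10 kbp sub-assemblies
step2 = assembled_length(10, step1, 80)   # ~100 kbp secondary assemblies
print(step1, step2)                        # 10080, 100080
```

Each round of joining ten pieces therefore multiplies the assembly size by roughly ten, which is what makes the 1 kbp → 10 kbp → 100 kbp → genome hierarchy practical.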
Concerns and controversy: Reception On Oct 6, 2007, Craig Venter announced in an interview with UK's The Guardian newspaper that the same team had synthesized a modified version of the single chromosome of Mycoplasma genitalium chemically. The synthesized genome had not yet been transplanted into a working cell. The next day the Canadian bioethics group ETC Group issued a statement through its representative, Pat Mooney, saying Venter's "creation" was "a chassis on which you could build almost anything. It could be a contribution to humanity such as new drugs or a huge threat to humanity such as bio-weapons". Venter commented "We are dealing in big ideas. We are trying to create a new value system for life. When dealing at this scale, you can't expect everybody to be happy." On May 21, 2010, Science reported that the Venter group had successfully synthesized the genome of the bacterium Mycoplasma mycoides from a computer record and transplanted the synthesized genome into the existing cell of a Mycoplasma capricolum bacterium that had had its DNA removed. The "synthetic" bacterium was viable, i.e. capable of replicating. Venter described it as "the first species ... to have its parents be a computer". The creation of a new synthetic bacterium, JCVI-syn3.0, was announced in Science on March 25, 2016. It has only 473 genes. Venter called it “the first designer organism in history” and argued that the fact that 149 of the genes required have unknown functions means that "the entire field of biology has been missing a third of what is essential to life". Concerns and controversy: Press coverage The project received a large amount of coverage from the press due to Venter's showmanship, to the degree that Jay Keasling, a pioneering synthetic biologist and founder of Amyris, commented that "The only regulation we need is of my colleague's mouth". Concerns and controversy: Utility Venter has argued that synthetic bacteria are a step towards creating organisms to manufacture hydrogen and biofuels, and also to absorb carbon dioxide and other greenhouse gases. George M. Church, another pioneer in synthetic biology, has expressed the contrasting view that creating a fully synthetic genome is not necessary, since E. coli grows more efficiently than M. genitalium even with all its extra DNA; he commented that synthetic genes have been incorporated into E. coli to perform some of the above tasks. Concerns and controversy: Intellectual property The J. Craig Venter Institute filed patents for the Mycoplasma laboratorium genome (the "minimal bacterial genome") in the U.S. and internationally in 2006. The ETC Group, a Canadian bioethics group, protested on the grounds that the patent was too broad in scope. Similar projects: From 2002 to 2010, a team at the Hungarian Academy of Sciences created a strain of Escherichia coli called MDS42, which is now sold by Scarab Genomics of Madison, WI under the name "Clean Genome E. coli", in which 15% of the genome of the parental strain (E. coli K-12 MG1655) was removed to aid molecular biology efficiency, removing IS elements, pseudogenes and phages, resulting in better maintenance of plasmid-encoded toxic genes, which are often inactivated by transposons. Biochemistry and replication machinery were not altered.
**Shiftability theory** Shiftability theory: In banking, shiftability is an approach to keeping banks liquid by supporting the shifting of assets. When a bank is short of ready money, it is able to sell or repo its assets to a more liquid bank. Commercial loan theory: Prior to the concept of shiftability, the orthodox theory of banking limited banks to making short-term commercial loans to help producers of goods during their business cycles. For example, apple farmers may require short-term financing until the crop is ready for sale. The theory postulates that making short-term commercial loans that mature in a timely manner will keep banks in a ready state to meet the demands of their depositors. Shiftability: Although banking at the time was not a new concept, what had changed was that deposits had become the primary liability of banks. In 1830 the capital of banks was about three times the deposits, but less than one hundred years later depositors had come to represent approximately 68 percent of the equity in banks. This increase in the proportion of deposits had many worried about the possibility of a run on the banks and the inability to get much-needed cash. Shiftability: It was shown that short-term commercial lending often did not mature or liquidate at maturity because of changing business cycles. Growing opposition highlighted the need for an improved banking system that could avoid the forced liquidation of this short-term paper, which came about more or less periodically. It was proposed that banks, rather than relying on the liquidity of these assets in a crisis, should be able to shift these earning assets to another institution with a better cash position, thereby creating the reserves needed. This ability to shift assets provides liquidity to otherwise non-liquid assets. Shiftability: The key piece of legislation that led to this reality was the Banking Act of 1935. One of its amendments provided that a Federal Reserve bank may discount any commercial, agricultural or industrial paper for liquidity purposes. It also allowed necessary advances to its member banks secured by "any sound asset" that would otherwise be described as ineligible under the orthodox theory to provide bank reserves. Shiftability: Although there was much resistance to this idea and many believed it would be better to return to pre-war practices, it was Marriner Stoddard Eccles, an author of the Banking Act of 1935, who continued to argue that bank asset liquidity in times of stress depended on the ability of a central bank to exchange those assets for currency or credit. During a crisis: One shortcoming of the shiftability theory, similar to the one that led the banking system away from the orthodox theory, was that in times of stress or crisis, the effectiveness of these assets for liquidity purposes disappears because there is no market for them. If all banks are looking to liquidate assets, they are doing so at a cost, because it would be difficult to find buyers, meaning lower prices for the assets; ultimately this would not leave the banking system as a whole in a more liquid condition.
**UPd2Al3** UPd2Al3: UPd2Al3 is a heavy-fermion superconductor with a hexagonal crystal structure and critical temperature Tc=2.0K that was discovered in 1991. Furthermore, UPd2Al3 orders antiferromagnetically at TN=14K, and UPd2Al3 thus shows the unusual behavior of being simultaneously superconducting and magnetically ordered at temperatures below 2K. Later experiments demonstrated that superconductivity in UPd2Al3 is magnetically mediated, and UPd2Al3 therefore serves as a prime example of a non-phonon-mediated superconductor. Discovery: Heavy-fermion superconductivity had already been discovered in the late 1970s (with CeCu2Si2 being the first example), but the number of heavy-fermion compounds known to superconduct was still very small in the early 1990s, when Christoph Geibel in the group of Frank Steglich found two closely related heavy-fermion superconductors, UNi2Al3 (Tc=1K) and UPd2Al3 (Tc=2K), which were published in 1991. At that point, the Tc=2.0K of UPd2Al3 was the highest critical temperature amongst all known heavy-fermion superconductors, and this record would stand for 10 years until CeCoIn5 was discovered in 2001. Metallic state: The overall metallic behavior of UPd2Al3, e.g. as deduced from the dc resistivity, is typical for a heavy-fermion material and can be explained as follows: incoherent Kondo scattering above approximately 80 K and a coherent heavy-fermion state (in a Kondo lattice) at lower temperatures. Upon cooling below 14 K, UPd2Al3 orders antiferromagnetically in a commensurate fashion (ordering wave vector (0,0,1/2)) and with a sizable ordered magnetic moment of approximately 0.85 µB per uranium atom, as determined from neutron scattering. The metallic heavy-fermion state is characterized by a strongly enhanced effective mass, which is connected to a reduced Fermi velocity, which in turn brings about a strongly suppressed transport scattering rate. Indeed, for UPd2Al3 optical Drude behavior with an extremely low scattering rate was observed at microwave frequencies. This is the 'slowest Drude relaxation' observed for any three-dimensional metallic system so far. Superconducting state: Superconductivity in UPd2Al3 has a critical temperature of 2.0K and a critical field around 3T. The critical field does not show anisotropy despite the hexagonal crystal structure. For heavy-fermion superconductors it is generally believed that the coupling mechanism cannot be phononic in nature. In contrast to many other unconventional superconductors, for UPd2Al3 there actually exists strong experimental evidence (namely from neutron scattering and tunneling spectroscopy) that superconductivity is magnetically mediated. In the first years after the discovery of UPd2Al3 it was actively discussed whether its superconducting state can support a Fulde–Ferrell–Larkin–Ovchinnikov (FFLO) phase, but this suggestion was later refuted.
**Hedge** Hedge: A hedge or hedgerow is a line of closely spaced shrubs and sometimes trees, planted and trained to form a barrier or to mark the boundary of an area, such as between neighbouring properties. Hedges that are used to separate a road from adjoining fields or one field from another, and are of sufficient age to incorporate larger trees, are known as hedgerows. Often they serve as windbreaks to improve conditions for the adjacent crops, as in bocage country. When clipped and maintained, hedges are also a simple form of topiary. Hedge: A hedge often operates as, and sometimes is called, a "live fence". This may either consist of individual fence posts connected with wire or other fencing material, or it may be in the form of densely planted hedges without interconnecting wire. This is common in tropical areas where low-income farmers can demarcate properties and reduce maintenance of fence posts that otherwise deteriorate rapidly. Many other benefits can be obtained depending on the species chosen. History: The development of hedges over the centuries is preserved in their structure. The first hedges enclosed land for cereal crops during the Neolithic Age (4000–6000 years ago). The farms were of about 5 to 10 hectares (12 to 25 acres), with fields about 0.1 hectares (0.25 acres) for hand cultivation. Some hedges date from the Bronze and Iron Ages, 2000–4000 years ago, when traditional patterns of landscape became established. Others were built during the Medieval field rationalisations; more originated in the industrial boom of the 18th and 19th centuries, when heaths and uplands were enclosed. History: Many hedgerows separating fields from lanes in the United Kingdom, Ireland and the Low Countries are estimated to have been in existence for more than seven hundred years, originating in the medieval period. The root word of 'hedge' is much older: it appears in the Old English language, in German (Hecke), and Dutch (haag) to mean 'enclosure', as in the name of the Dutch city The Hague, or more formally 's Gravenhage, meaning The Count's hedge. Charles the Bald is recorded as complaining in 864, at a time when most official fortifications were constructed of wooden palisades, that some unauthorized men were constructing haies et fertés: tightly interwoven hedges of hawthorns. In parts of Britain, early hedges were destroyed to make way for the manorial open-field system. Many were replaced after the Enclosure Acts, then removed again during modern agricultural intensification, and now some are being replanted for wildlife. Composition: A hedge may consist of a single species or several, typically mixed at random. In many newly planted British hedges, at least 60 per cent of the shrubs are hawthorn, blackthorn, and (in the southwest) hazel, alone or in combination. The first two are particularly effective barriers to livestock. In North America, Maclura pomifera (i.e., hedge apple) was grown to form a barrier to exclude free-range livestock from vegetable gardens and corn fields. Other shrubs and trees used include holly, beech, oak, ash, and willow; the last three can become very tall. Of the hedgerows in the Normandy region of France, Martin Blumenson said: "The hedgerow is a fence, half earth, half hedge. The wall at the base is a dirt parapet that varies in thickness from one to four or more feet and in height from three to twelve feet. Growing out of the wall is a hedge of hawthorn, brambles, vines, and trees, in thickness from one to three feet.
Originally property demarcations, hedgerows protect crops and cattle from the ocean winds that sweep across the land." Composition: The hedgerows of Normandy became barriers that slowed the advance of Allied troops following the D-Day invasion during World War II. Allied armed forces modified their armored vehicles to facilitate breaking out of their beachheads into the Normandy bocage. Composition: Species Formal or modern garden hedges are grown in many varieties, including the following species: Berberis thunbergii, Buxus sempervirens (box), Carpinus betulus (hornbeam), Crataegus monogyna (hawthorn), Fagus sylvatica (green beech), Fagus sylvatica 'Purpurea' (purple beech), Ilex aquifolium (holly), Ligustrum ovalifolium (privet), Ligustrum × ibolium (north privet), Photinia × fraseri (red robin), Prunus laurocerasus (common laurel), Prunus lusitanica (Portuguese laurel), Quercus ilex (holm oak), Taxus baccata (yew), Thuja occidentalis (yellow ribbon), and Thuja plicata (western red cedar). Hedgerow trees Hedgerow trees are trees that grow in hedgerows but have been allowed to reach their full height and width. There are thought to be around 1.8 million hedgerow trees in Britain (counting only those whose canopies do not touch others) with perhaps 98% of these being in England and Wales. Hedgerow trees are both an important part of the English landscape and valuable habitats for wildlife. Many hedgerow trees are veteran trees and therefore of great wildlife interest. Composition: The most common species are English oak (Quercus robur) and ash (Fraxinus excelsior), though in the past field elm (Ulmus minor 'Atinia') would also have been common. Around 20 million elm trees, most of them hedgerow trees, were felled or died through Dutch elm disease in the late 1960s. Many other species are used, notably including common beech (Fagus sylvatica) and various nut and fruit trees. The age structure of British hedgerow trees is old because the number of new trees is not sufficient to replace the number of trees that are lost through age or disease. New trees can be established by planting but it is generally more successful to leave standard trees behind when laying hedges. Trees should be left no closer than 10 metres (33 ft) apart and the distances should vary so as to create a more natural landscape. The distance allows the young trees to develop full crowns without competing or producing too much shade. It is suggested that hedgerow trees cause gaps in hedges, but it has been found that cutting some lower branches off lets sufficient light through to the hedge below to allow it to grow. Importance of hedgerows: Hedges are recognised as part of a cultural heritage and historical record and for their great value to wildlife and the landscape. Increasingly, they are valued too for the major role they have to play in preventing soil loss and reducing pollution, and for their potential to regulate water supply and to reduce flooding. There is increased earthworm diversity in the soils under hedgerows, which also help to store organic carbon and support distinct communities of arbuscular mycorrhizal (AM) fungi. In addition to maintaining the health of the environment, hedgerows also play a huge role in providing shelter for smaller animals like birds and insects. A recent study by Emma Coulthard mentioned the possibility that hedgerows may act as guides for moths, like Acronicta rumicis, when flying from one location to another.
As moths are nocturnal, it is highly unlikely that they use visual aids as guides, but rather are following sensory or olfactory markers on the hedgerows. Larkin et al. (2013) found that 100% of northwest European farms have hedges, providing 43% of the wildlife habitat there. Historically, hedges were used as a source of firewood, and for providing shelter from wind, rain and sun for crops, farm animals and people. Today, mature hedges' uses include screening unsightly developments. In England and Wales agricultural hedgerow removal is controlled by the Hedgerows Regulations 1997, administered by the local planning authority. Importance of hedgerows: Dating Hedges that have existed for hundreds of years are colonised by additional species. This may be useful as a means of determining the age of the hedge. Hooper's rule (named for Dr. Max Hooper) is based on ecological data obtained from hedges of known age, and suggests that the age of a hedge can be roughly estimated by counting the number of woody species in a thirty-yard stretch and multiplying by 110 years. Max Hooper published his original formula in the book Hedges in 1974. This method is only a rule of thumb, and can be off by a couple of centuries; it should always be backed up by documentary evidence, if possible, and take into account other factors. Caveats include the fact that planted hedgerows, hedgerows with elm, and hedgerows in the north of England tend not to follow the rule as closely. The formula also does not work on hedges more than a thousand years old. Importance of hedgerows: Hooper's scheme is important not least for its potential use in determining what an important hedgerow is, given their protection in The Hedgerows Regulations (1997; No. 1160) of the Department of the Environment, based on age and other factors. Removal Hedgerow removal is part of the transition of arable land from low-intensity to high-intensity farming. The removal of hedgerows gives larger fields, making the sowing and harvesting of crops easier, faster and cheaper, and giving a larger area to grow the crops, increasing yield and profits. Importance of hedgerows: Hedgerows serve as important wildlife corridors, especially in the United Kingdom where they link the country's fractured ancient woodland. They also serve as a habitat for birds and other animals. As the land within a few metres of hedges is difficult to plough, sow, or spray with herbicides, the land around hedges also typically includes high plant biodiversity. Hedges also serve to stabilise the soil and on slopes help prevent soil creep and leaching of minerals and plant nutrients. Removal thus weakens the soil and leads to erosion. Importance of hedgerows: In the United Kingdom hedgerow removal has been occurring since World War I as technology made intensive farming possible, and the increasing population demanded more food from the land. The trend has slowed down somewhat since the 1980s when cheap food imports reduced the demand on British farmland, and as the European Union Common Agricultural Policy made environmental projects financially viable. Under reforms to national and EU agricultural policies the environmental impact of farming features more highly and in many places hedgerow conservation and replanting is taking place.
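Hooper's rule, as described in the Dating paragraph above, reduces to a single multiplication. A minimal sketch is shown below; the sampling helper is a hypothetical convenience for averaging several 30-yard counts, and all of the caveats in the text (planted hedges, elm, northern England, hedges over a thousand years old) still apply.

```python
def hooper_age_estimate(woody_species_count: int) -> int:
    """Rough hedge age in years under Hooper's rule as quoted above:
    number of woody species in a thirty-yard stretch multiplied by 110."""
    return woody_species_count * 110

def hooper_age_from_samples(counts_per_30_yards) -> float:
    """Average the estimate over several 30-yard samples (hypothetical helper).
    Only a rule of thumb: it can be centuries off and should be backed by documents."""
    return sum(hooper_age_estimate(c) for c in counts_per_30_yards) / len(counts_per_30_yards)

print(hooper_age_estimate(5))               # ~550 years for 5 woody species
print(hooper_age_from_samples([4, 5, 6]))   # ~550 years averaged over three stretches
```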
Hedge laying: If hedges are not maintained and trimmed regularly, gaps tend to form at the base over many years. In essence, hedgelaying consists of cutting most of the way through the stem of each plant near the base, bending it over and interweaving or pleaching it between wooden stakes. This also encourages new growth from the base of each plant. Originally, the main purpose of hedgelaying was to ensure the hedge remained stock-proof. Some side branches were also removed and used as firewood. Hedge laying: The maintenance and laying of hedges to form an impenetrable barrier for farm animals is a skilled art. In Britain there are many local hedgelaying traditions, each with a distinct style. Hedges are still being laid today not only for aesthetic and functional purposes but also for their ecological role in helping wildlife and protecting against soil erosion. Hedge laying: Hedge trimming An alternative to hedge laying is trimming using a tractor-mounted flail cutter or circular saw, or a hedge trimmer. The height of the cutting can be increased a little every year. Trimming a hedge helps to promote bushy growth. If a flail cutter is used, the flail must be kept sharp to ensure that the cutting is effective on the hedge. The disadvantage of trimming is that the hedge species takes a number of years before it will flower again and subsequently bear fruit for wildlife and people. If the hedge is trimmed repeatedly at the same height, a 'hard knuckle' will start to form at that height, similar to the shape of a pollarded tree. Additionally, hedge trimming causes habitat destruction for species like the small eggar moth, which spends nearly its entire life cycle in blackthorn and hawthorn hedgerows. This has led to a decline in the moth's population, and it is now nationally scarce in Britain. Hedge laying: General hedge management A 'hedgerow management' scale ranging from 1 to 10 has been devised by an organisation called Hedgelink UK. '1' describes the action to take for a heavily over-trimmed hedge, '5' is a healthy dense hedgerow more than 2 metres in height, and '10' is a hedge that has not been managed at all and has become a line of trees. Hedge laying: The RSPB suggest that hedges in Britain not be cut between March and August. This is to protect nesting birds, which are protected by law. Coppicing The techniques of coppicing and hard pollarding can be used to rejuvenate a hedge where hedge-laying is not appropriate. Types: Instant hedge The term instant hedge has become known since early this century for hedging plants that are planted collectively in such a way as to form a mature hedge from the moment they are planted together, with a height of at least 1.2 metres. They are usually created from hedging elements or individual plants, which means very few are actually hedges from the start, as the plants need time to grow and entwine to form a real hedge. Types: An example of an instant hedge can be seen at the Elveden Hall Estate in East Anglia, where fields of hedges have been growing in cultivated rows since 1998. The development of this type of mature hedge has led to such products being specified by landscape architects, garden designers, property developers, insurance companies, sports clubs, schools and local councils, as well as many private home owners. Demand has also increased from planning authorities specifying to developers that mature hedges be planted rather than just whips (a slender, unbranched shoot or plant).
Types: A 'real' instant hedge could be defined as having a managed root growth system allowing the hedge to be sold with a continuous rootstrip (rather than individual plants), which then enables year-round planting. During its circa 8-year production time, all stock should be irrigated, clipped and treated with controlled-release nutrients to optimise health. Types: Quickset hedge A quickset hedge is a type of hedge created by planting live whitethorn (common hawthorn) cuttings directly into the earth (hazel does not sprout from cuttings). Once planted, these cuttings root and form new plants, creating a dense barrier. The technique is ancient, and the term quickset hedge is first recorded in 1484. The word quick in the name refers to the fact that the cuttings are living (as in "the quick and the dead"), and not to the speed at which the hedge grows, although it will establish quite rapidly. An alternative meaning of quickset hedging is any hedge formed of living plants or of living plants combined with a fence. The technique of quicksetting can also be used for many other shrubs and trees. Types: Devon hedge A Devon hedge is an earth bank topped with shrubs. The bank may be faced with turf or stone. When stone-faced, the stones are generally placed on edge, often laid flat around gateways. A quarter of Devon's hedges are thought to be over 800 years old. There are approximately 33,000 miles (53,000 km) of Devon hedge, which is more than in any other county. Traditional farming throughout the county has meant that fewer Devon hedges have been removed than elsewhere. Types: Devon hedges are particularly important for wildlife habitat. Around 20% of the UK's species-rich hedges occur within Devon. Over 600 species of flowering plants, 1500 species of insects, 65 species of birds and 20 species of mammals have been recorded living or feeding in Devon hedges. Hedge laying in Devon is usually referred to as steeping and involves cutting and laying steepers (the stems) along the top of the bank and securing them with crooks (forked sticks). Types: Cornish hedge A Cornish hedge is an earth bank with stones. It normally consists of large stone blocks constructed either side of a narrow earth bank, and held in place with interlocking stones. The neat rows of square stones at the top are called "edgers". The top of the hedge is planted with grass turf. Sometimes hedging plants or trees are planted on the hedge to increase its windbreaking height. A rich flora develops over the lifespan of a Cornish hedge. The Cornish hedge contributes to the distinctive field-pattern of the Cornish landscape and its semi-natural wildlife habitat. There are about 30,000 miles (48,000 km) of hedges in Cornwall today. Hedges suffer from the effects of tree roots, burrowing rabbits, rain, wind, farm animals and people. How often repairs are needed depends on how well the hedge was built, its stone, and what has happened to it since it was last repaired. Typically a hedge needs a cycle of repair every 150 years or so, or less often if it is fenced. Building new hedges, and repairing existing hedges, is a skilled craft, and there are professional hedgers in Cornwall. The Cornish Hedge Research and Education Group (CHREG) supports the development of traditional skills and works with Cornwall Council, FWAG (Farming and Wildlife Advisory Group), Stone Academy Bodmin, Cornwall AONB, Country Trust and professional hedgers to ensure the future of Cornish hedges in the landscape.
In gardening: Hedges, both clipped and unclipped, are often used as ornament in the layout of gardens. Typical woody plants for clipped hedges include privet, hawthorn, beech, yew, leyland cypress, hemlock, arborvitae, barberry, box, holly, oleander and lavender, among others. An early 20th-century fashion was for tapestry hedges, using a mix of golden, green and glaucous dwarf conifers, or beech and copper beech. Unclipped hedges take up more space, generally at a premium in modern gardens, but compensate by flowering. Rosa multiflora is widely used as a dense hedge along the central reservation of dual-carriageway roads, such as parkways in the United States. In mild climates, more exotic flowering hedges are formed, using Ceanothus, Hibiscus, Camellia, orange jessamine (Murraya paniculata),[1] or lillypilly (Syzygium species). Dense hedges can also be formed from other deciduous plants, although these lack the decorative flowers of the shrubs mentioned above. Hedges of clipped trees forming avenues are a feature of 16th-century Italian gardens such as the Boboli Gardens in Florence, and of formal French gardens in the manner of André Le Nôtre, e.g. in the Gardens of Versailles, where they surround bosquets, or areas of formalized woodland. The English version of this was the wilderness, normal in large gardens until the English landscape garden style and the rise of the shrubbery began to sweep them away from about 1750. The 'hedge on stilts' of clipped hornbeams at Hidcote Manor Garden, Gloucestershire, is famous and has sometimes been imitated; it is in fact a standard French and Italian style of the bosquet. In gardening: Hedges below knee height are generally thought of as borders. Elaborately shaped and interlaced borders forming knot gardens or parterres were fashionable in Europe during the 16th and early 17th centuries. Generally they were appreciated from a raised position, either the windows of a house or a terrace. Clipped hedges above eye level may be laid out in the form of a labyrinth or garden maze. Few such mazes survived the change of fashion towards more naturalistic plantings in the 18th and 19th centuries, but many were replanted in 20th-century restorations of older gardens. An example is behind the Governor's Palace, Colonial Williamsburg, Virginia. Hedges and pruning can both be used to enhance a garden's privacy, as a buffer to visual pollution and to hide fences. A hedge can be aesthetically pleasing, as in a tapestry hedge, where alternate species are planted at regular intervals to present different colours or textures. In gardening: In America, fences have always been more common than hedges to mark garden boundaries. The English radical William Cobbett was already complaining about this in 1819: And why should America not possess this most beautiful and useful plant [the Haw-Thorn]? She has English gew-gaws, English Play-Actors, English Cards and English Dice and Billiards; English fooleries and English vices enough in all conscience; and why not English Hedges, instead of post-and-rail and board fences? If, instead of these steril-looking and cheerless enclosures the gardens and meadows and fields, in the neighbourhood of New York and other cities and towns, were divided by quick-set hedges, what a difference would the alteration make in the look, and in the real value too, of those gardens, meadows and fields!
Regulation: In the US, some local jurisdictions may strictly regulate the placement or height of a hedge, such as the case where a Palo Alto city resident was arrested for allowing her xylosma hedge to grow above two feet.In the UK the owner of a large hedge that is adversely affecting the reasonable enjoyment of neighbouring domestic property can be made to reduce it in height. In England and Wales, high hedges are covered under Part 8 of the Anti-Social Behaviour Act 2003. For a hedge to qualify for reduction, it must be made up wholly or mainly of a line of two or more evergreen or semi-evergreen trees or shrubs and be over 2 metres high. To some degree, it must be a barrier to light or access. It must be adversely affecting the complainant's reasonable enjoyment of their domestic property (either their house or garden) because of its height. Later legislation with similar effect was introduced in Northern Ireland, Isle of Man and Scotland. Significant hedges: The 19th-century Great Hedge of India was probably the largest example of a hedge used as a barrier. It was planted and used to collect taxes by the British. The Willow Palisade, constructed during the early Qing dynasty (17th century) to control people's movement and to collect taxes on ginseng and timber in southern Manchuria, also had hedge-like features. The palisade included two dikes and a moat between them, the dikes topped by rows of willow trees, tied to one another with their branches. Significant hedges: Gradually decaying throughout the late 18th and 19th centuries, the palisade disappeared in the early 20th century, its remaining willows cut during the Russo-Japanese War of 1904–1905 by the two countries' soldiers.The Meikleour Beech Hedges, located near Meikleour in Scotland, are noted in the Guinness World Records as the tallest and longest hedge on earth, reaching 30 metres (98 ft) in height and 530 metres (0.33 mi) in length. The beech trees were planted in 1745 by Jean Mercer on the Marquess of Lansdowne's Meikleour estate. Significant hedges: The hedgerows and sunken lanes in Normandy, France posed a problem to Allied tanks after Operation Overlord, the invasion of Europe, in World War 2. The hedgerows prevented the tanks from freely moving about the area, until they were fitted with tusks.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Butyltin trichloride** Butyltin trichloride: Monobutyltin trichloride, also known as MBTC, is an organotin compound. It is a colorless oil that is soluble in organic solvents. Relative to other organotin compounds, MBTC is obscure and not widely used. Applications: Glass coating Monobutyltin trichloride has been examined as a precursor to tin dioxide coatings on glass. Such coatings, especially when doped, are low-emissivity and transparent to visible light, reflect infrared light, and provide a high conductance and a low sheet resistance. For example, MBTC is used in the manufacturing process of glass containers such as those used for beers, spirits, and juices. These glass-making processes heat raw materials (sand, soda ash, limestone, and recycled glass) to produce molten glass. The molten glass is cut into smaller gobs of uniform size, which are then pressed in a mold. MBTC is applied to the external surface of these containers, and the glass is then annealed and coated with polyethylene. MBTC is a commonly used organotin compound for on-line chemical vapor deposition because it readily decomposes at or close to the hot glass surface. The tin dioxide coatings formed are transparent to visible light, reflect infrared light, and are highly conductive. If these coatings are doped with fluorine from a source such as trifluoroacetic acid (TFA), the coating will also have a lowered emissivity. Applications: PVC stabilizer Monobutyltin trichloride is used as a polyvinyl chloride (PVC) stabilizer. PVC is mass-produced for a wide range of objects. One such object is a PVC-based container for various wines and brandies (especially those produced in Canada). Consequently, MBTC leaches into the wine along with other organotin compounds (some of which are used as wood preservatives for the wine barrels). These compounds are toxic to the human body, and the levels of organotin compounds, especially MBTC, have been the subject of considerable food-safety research. PVC is also used to produce pipes; since these pipes carry drinking water, MBTC can leach into the drinking water supply. Safety: Monobutyltin trichloride releases corrosive hydrogen chloride upon hydrolysis. Unlike some organotin compounds, it has relatively low toxicity.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**XPNPEP2** XPNPEP2: Xaa-Pro aminopeptidase 2 is an enzyme that in humans is encoded by the XPNPEP2 gene. Aminopeptidase P is a hydrolase specific for N-terminal imido bonds, which are common to several collagen degradation products, neuropeptides, vasoactive peptides, and cytokines. Structurally, the enzyme is a member of the 'pita bread fold' family and occurs in mammalian tissues in both soluble and GPI-anchored membrane-bound forms. The membrane-bound and soluble forms of this enzyme have been identified as products of two separate genes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Centromere** Centromere: The centromere links a pair of sister chromatids together during cell division. This constricted region of chromosome connects the sister chromatids, creating a short arm (p) and a long arm (q) on the chromatids. During mitosis, spindle fibers attach to the centromere via the kinetochore. Centromere: The physical role of the centromere is to act as the site of assembly of the kinetochores – a highly complex multiprotein structure that is responsible for the actual events of chromosome segregation – i.e. binding microtubules and signaling to the cell cycle machinery when all chromosomes have adopted correct attachments to the spindle, so that it is safe for cell division to proceed to completion and for cells to enter anaphase. Centromere: There are, broadly speaking, two types of centromeres. "Point centromeres" bind to specific proteins that recognize particular DNA sequences with high efficiency. Any piece of DNA with the point centromere DNA sequence on it will typically form a centromere if present in the appropriate species. The best characterized point centromeres are those of the budding yeast, Saccharomyces cerevisiae. "Regional centromeres" is the term coined to describe most centromeres, which typically form on regions of preferred DNA sequence, but which can form on other DNA sequences as well. The signal for formation of a regional centromere appears to be epigenetic. Most organisms, ranging from the fission yeast Schizosaccharomyces pombe to humans, have regional centromeres. Centromere: Regarding mitotic chromosome structure, centromeres represent a constricted region of the chromosome (often referred to as the primary constriction) where two identical sister chromatids are most closely in contact. When cells enter mitosis, the sister chromatids (the two copies of each chromosomal DNA molecule resulting from DNA replication in chromatin form) are linked along their length by the action of the cohesin complex. It is now believed that this complex is mostly released from chromosome arms during prophase, so that by the time the chromosomes line up at the mid-plane of the mitotic spindle (also known as the metaphase plate), the last place where they are linked with one another is in the chromatin in and around the centromere. Position: In humans, centromere positions define the chromosomal karyotype, in which each chromosome has two arms, p (the shorter of the two) and q (the longer). The short arm 'p' is reportedly named for the French word "petit" meaning 'small'. The position of the centromere relative to any particular linear chromosome is used to classify chromosomes as metacentric, submetacentric, acrocentric, telocentric, or holocentric. Position: Metacentric Metacentric means that the centromere is positioned midway between the chromosome ends, resulting in the arms being approximately equal in length. When the centromeres are metacentric, the chromosomes appear to be "x-shaped." Submetacentric Submetacentric means that the centromere is positioned below the middle, with one chromosome arm shorter than the other, often resulting in an L shape. Acrocentric An acrocentric chromosome's centromere is situated so that one of the chromosome arms is much shorter than the other. The "acro-" in acrocentric refers to the Greek word for "peak." The human genome has six acrocentric chromosomes, including five autosomal chromosomes (13, 14, 15, 21, 22) and the Y chromosome. 
Position: Short acrocentric p-arms contain little genetic material and can be translocated without significant harm, as in a balanced Robertsonian translocation. In addition to some protein-coding genes, human acrocentric p-arms also contain nucleolus organizer regions (NORs), from which ribosomal RNA is transcribed. However, a proportion of acrocentric p-arms in cell lines and tissues from normal human donors do not contain detectable NORs. The domestic horse genome includes one metacentric chromosome that is homologous to two acrocentric chromosomes in the conspecific but undomesticated Przewalski's horse. This may reflect either fixation of a balanced Robertsonian translocation in domestic horses or, conversely, fixation of the fission of one metacentric chromosome into two acrocentric chromosomes in Przewalski's horses. A similar situation exists between the human and great ape genomes, with a reduction of two acrocentric chromosomes in the great apes to one metacentric chromosome in humans (see aneuploidy and the human chromosome 2). Position: Many diseases that result from unbalanced translocations more frequently involve acrocentric chromosomes than non-acrocentric chromosomes. Acrocentric chromosomes are usually located in and around the nucleolus. As a result, these chromosomes tend to be less densely packed than chromosomes in the nuclear periphery. Consistent with this, chromosomal regions that are less densely packed are also more prone to chromosomal translocations in cancers. Position: Telocentric Telocentric chromosomes have a centromere at one end of the chromosome and therefore exhibit only one arm at the cytological (microscopic) level. They are not present in humans but can form through cellular chromosomal errors. Telocentric chromosomes occur naturally in many species, such as the house mouse, in which all chromosomes except the Y are telocentric. Subtelocentric Subtelocentric chromosomes' centromeres are located between the middle and the end of the chromosomes, but closer to the end. Centromere types: Acentric An acentric chromosome is a fragment of a chromosome that lacks a centromere. Since centromeres are the attachment point for spindle fibers in cell division, acentric fragments are not evenly distributed to daughter cells during cell division. As a result, a daughter cell will lack the acentric fragment and deleterious consequences could occur. Chromosome-breaking events can also generate acentric chromosomes or acentric fragments. Centromere types: Dicentric A dicentric chromosome is an abnormal chromosome with two centromeres, which can be unstable through cell divisions. It can form through translocation between or fusion of two chromosome segments, each with a centromere. Some rearrangements produce both dicentric chromosomes and acentric fragments, which cannot attach to spindles at mitosis. The formation of dicentric chromosomes has been attributed to genetic processes, such as Robertsonian translocation and paracentric inversion. Dicentric chromosomes can have a variety of fates, including mitotic stability. In some cases, their stability comes from inactivation of one of the two centromeres to make a functionally monocentric chromosome capable of normal transmission to daughter cells during cell division.[1] Monocentric A monocentric chromosome is a chromosome with a single centromere, which forms a narrow constriction.
Centromere types: Monocentric centromeres are the most common structure on highly repetitive DNA in plants and animals. Centromere types: Holocentric Unlike monocentric chromosomes, holocentric chromosomes have no distinct primary constriction when viewed at mitosis. Instead, spindle fibers attach along almost the entire (Greek: holo-) length of the chromosome. In holocentric chromosomes centromeric proteins, such as CENPA (CenH3) are spread over the whole chromosome. The nematode, Caenorhabditis elegans, is a well-known example of an organism with holocentric chromosomes, but this type of centromere can be found in various species, plants, and animals, across eukaryotes. Holocentromeres are actually composed of multiple distributed centromere units that form a line-like structure along the chromosomes during mitosis. Alternative or nonconventional strategies are deployed at meiosis to achieve the homologous chromosome pairing and segregation needed to produce viable gametes or gametophytes for sexual reproduction. Centromere types: Different types of holocentromeres exist in different species, namely with or without centromeric repetitive DNA sequences and with or without CenH3. Holocentricity has evolved at least 13 times independently in various green algae, protozoans, invertebrates, and different plant families. Contrary to monocentric species where acentric fragments usually become lost during cell division, the breakage of holocentric chromosomes creates fragments with normal spindle fiber attachment sites. Because of this, organisms with holocentric chromosomes can more rapidly evolve karyotype variation, able to heal fragmented chromosomes through subsequent addition of telomere caps at the sites of breakage. Centromere types: Polycentric Human chromosomes Based on the micrographic characteristics of size, position of the centromere and sometimes the presence of a chromosomal satellite, the human chromosomes are classified into the following groups: Sequence: There are two types of centromeres. In regional centromeres, DNA sequences contribute to but do not define function. Regional centromeres contain large amounts of DNA and are often packaged into heterochromatin. In most eukaryotes, the centromere's DNA sequence consists of large arrays of repetitive DNA (e.g. satellite DNA) where the sequence within individual repeat elements is similar but not identical. In humans, the primary centromeric repeat unit is called α-satellite (or alphoid), although a number of other sequence types are found in this region. Centromere satellites are hypothesized to evolve by a process called layered expansion. They evolve rapidly between species, and analyses in wild mice show that satellite copy number and heterogeneity relates to population origins and subspecies. Additionally, satellite sequences may be affected by inbreeding.Point centromeres are smaller and more compact. DNA sequences are both necessary and sufficient to specify centromere identity and function in organisms with point centromeres. In budding yeasts, the centromere region is relatively small (about 125 bp DNA) and contains two highly conserved DNA sequences that serve as binding sites for essential kinetochore proteins. Inheritance: Since centromeric DNA sequence is not the key determinant of centromeric identity in metazoans, it is thought that epigenetic inheritance plays a major role in specifying the centromere. 
The daughter chromosomes will assemble centromeres in the same place as the parent chromosome, independent of sequence. It has been proposed that histone H3 variant CENP-A (Centromere Protein A) is the epigenetic mark of the centromere. The question arises whether there must be still some original way in which the centromere is specified, even if it is subsequently propagated epigenetically. If the centromere is inherited epigenetically from one generation to the next, the problem is pushed back to the origin of the first metazoans. Inheritance: On the other hand, thanks to comparisons of the centromeres in the X chromosomes, epigenetic and structural variations have been seen in these regions. In addition, a recent assembly of the human genome has detected a possible mechanism of how pericentromeric and centromeric structures evolve, through a layered expansion model for αSat sequences. This model proposes that different αSat sequence repeats emerge periodically and expand within an active vector, displacing old sequences, and becoming the site of kinetochore assembly. The αSat can originate from the same, or from different vectors. As this process is repeated over time, the layers that flank the active centromere shrink and deteriorate. This process raises questions about the relationship between this dynamic evolutionary process and the position of the centromere. Structure: The centromeric DNA is normally in a heterochromatin state, which is essential for the recruitment of the cohesin complex that mediates sister chromatid cohesion after DNA replication as well as coordinating sister chromatid separation during anaphase. In this chromatin, the normal histone H3 is replaced with a centromere-specific variant, CENP-A in humans. The presence of CENP-A is believed to be important for the assembly of the kinetochore on the centromere. CENP-C has been shown to localise almost exclusively to these regions of CENP-A associated chromatin. In human cells, the histones are found to be most enriched for H4K20me3 and H3K9me3 which are known heterochromatic modifications. In Drosophila, Islands of retroelements are major components of the centromeres.In the yeast Schizosaccharomyces pombe (and probably in other eukaryotes), the formation of centromeric heterochromatin is connected to RNAi. In nematodes such as Caenorhabditis elegans, some plants, and the insect orders Lepidoptera and Hemiptera, chromosomes are "holocentric", indicating that there is not a primary site of microtubule attachments or a primary constriction, and a "diffuse" kinetochore assembles along the entire length of the chromosome. Centromeric aberrations: In rare cases, neocentromeres can form at new sites on a chromosome as a result of a repositioning of the centromere. This phenomenon is most well known from human clinical studies and there are currently over 90 known human neocentromeres identified on 20 different chromosomes. The formation of a neocentromere must be coupled with the inactivation of the previous centromere, since chromosomes with two functional centromeres (Dicentric chromosome) will result in chromosome breakage during mitosis. In some unusual cases human neocentromeres have been observed to form spontaneously on fragmented chromosomes. Some of these new positions were originally euchromatic and lack alpha satellite DNA altogether. Neocentromeres lack the repetitive structure seen in normal centromeres which suggest that centromere formation is mainly controlled epigenetically. 
Over time a neocentromere can accumulate repetitive elements and mature into what is known as an evolutionary new centromere. There are several well known examples in primate chromosomes where the centromere position is different from the human centromere of the same chromosome and is thought to be evolutionary new centromeres. Centromere repositioning and the formation of evolutionary new centromeres has been suggested to be a mechanism of speciation.Centromere proteins are also the autoantigenic target for some anti-nuclear antibodies, such as anti-centromere antibodies. Dysfunction and disease: It has been known that centromere misregulation contributes to mis-segregation of chromosomes, which is strongly related to cancer and miscarriage. Notably, overexpression of many centromere genes have been linked to cancer malignant phenotypes. Overexpression of these centromere genes can increase genomic instability in cancers. Elevated genomic instability on one hand relates to malignant phenotypes; on the other hand, it makes the tumor cells more vulnerable to specific adjuvant therapies such as certain chemotherapies and radiotherapy. Instability of centromere repetitive DNA was recently shown in cancer and aging. Repair of centromeric DNA: When DNA breaks occur at centromeres in the G1 phase of the cell cycle, the cells are able to recruit the homologous recombinational repair machinery to the damaged site, even in the absence of a sister chromatid. It appears that homologous recombinational repair can occur at centromeric breaks throughout the cell cycle in order to prevent the activation of inaccurate mutagenic DNA repair pathways and to preserve centromeric integrity. Etymology and pronunciation: The word centromere () uses combining forms of centro- and -mere, yielding "central part", describing the centromere's location at the center of the chromosome.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Herkie** Herkie: The herkie (aka hurkie) is a cheerleading jump named after Lawrence Herkimer, the founder of the National Cheerleaders Association and a former cheerleader at Southern Methodist University. It is similar to a side-hurdler and to the abstract double hook, except that instead of the bent leg's knee pointing downward, it should be flat while the other leg is straight in a straddle jump (toe touch) position. The jump was invented accidentally, because Herkimer was not able to do an actual side-hurdler. Common misspellings include "hurky" and "herky". Jump position: In a left herkie, the jumper has the left leg straight in a half-straddle position and the right leg bent flat beneath them. In a right herkie, it is the opposite. When used as a "signature" at the end of an organized cheer, the jumper typically bends their weaker leg. Arm positions: Herkie arm positions depend on how the legs are positioned. A left herkie has the left arm in a straight-up high V motion and the right arm on the right hip. In a right herkie, the arm positions are flipped.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Code property graph** Code property graph: In computer science, a code property graph (CPG) is a computer program representation that captures syntactic structure, control flow, and data dependencies in a property graph. The concept was originally introduced to identify security vulnerabilities in C and C++ system code, but has since been employed to analyze web applications, cloud deployments, and smart contracts. Beyond vulnerability discovery, code property graphs find applications in code clone detection, attack-surface detection, exploit generation, measuring code testability, and backporting of security patches. Definition: A code property graph of a program is a graph representation of the program obtained by merging its abstract syntax trees (AST), control-flow graphs (CFG) and program dependence graphs (PDG) at statement and predicate nodes. The resulting graph is a property graph, which is the underlying graph model of graph databases such as Neo4j, JanusGraph and OrientDB, where data is stored on the nodes and edges as key-value pairs. In effect, code property graphs can be stored in graph databases and queried using graph query languages. Example: For a given function of a C program, its code property graph is obtained by merging the function's abstract syntax tree, control-flow graph, and program dependence graph at its statements and predicates (a toy sketch of such a merged graph is given after this entry). Implementations: Joern CPG. The original code property graph was implemented for C/C++ in 2013 at the University of Göttingen as part of the open-source code analysis tool Joern. This original version has been discontinued and superseded by the open-source Joern Project, which provides a formal code property graph specification applicable to multiple programming languages. The project provides code property graph generators for C/C++, Java, Java bytecode, Kotlin, Python, JavaScript, TypeScript, LLVM bitcode, and x86 binaries (via the Ghidra disassembler). Implementations: Plume CPG. Developed at Stellenbosch University in 2020 and sponsored by Amazon Science, the open-source Plume project provides a code property graph for Java bytecode compatible with the code property graph specification provided by the Joern project. The two projects merged in 2021. Fraunhofer AISEC CPG. The Fraunhofer Institute for Applied and Integrated Security provides open-source code property graph generators for C/C++, Java, Golang, and Python, albeit without a formal schema specification. It also provides the Cloud Property Graph, an extension of the code property graph concept that models details of cloud deployments. Galois' CPG for LLVM. Galois Inc. provides a code property graph based on the LLVM compiler. The graph represents code at different stages of the compilation and a mapping between these representations. It follows a custom schema that is defined in its documentation. Machine learning on code property graphs: Code property graphs provide the basis for several machine-learning-based approaches to vulnerability discovery. In particular, graph neural networks (GNN) have been employed to derive vulnerability detectors.
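As a rough illustration of the definition above, the following sketch builds a toy property graph for the C statement `if (x < 10) y = x + 1;` and runs a tiny traversal over it. The node kinds, property keys, edge labels, and the CPG class itself are illustrative assumptions made for this example; they are not the Joern schema or any of the implementations listed above.

```python
# Toy code property graph for:  if (x < 10) y = x + 1;
# Nodes carry key-value properties; edges are labelled AST, CFG, or PDG.
# The schema used here is illustrative only.

class CPG:
    def __init__(self):
        self.nodes = {}    # node id -> dict of key-value properties
        self.edges = []    # (source id, label, target id)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props
        return node_id

    def add_edge(self, src, label, dst):
        self.edges.append((src, label, dst))


cpg = CPG()
entry  = cpg.add_node("entry",  kind="METHOD", name="f")
pred   = cpg.add_node("pred",   kind="PREDICATE", code="x < 10")
assign = cpg.add_node("assign", kind="STATEMENT", code="y = x + 1")
exit_  = cpg.add_node("exit",   kind="METHOD_RETURN")

# AST edges: the method contains the predicate, which contains the assignment.
cpg.add_edge(entry, "AST", pred)
cpg.add_edge(pred,  "AST", assign)

# CFG edges: execution order, including the branch that skips the assignment.
cpg.add_edge(entry,  "CFG", pred)
cpg.add_edge(pred,   "CFG", assign)   # true branch
cpg.add_edge(pred,   "CFG", exit_)    # false branch
cpg.add_edge(assign, "CFG", exit_)

# PDG edge: the assignment is control-dependent on the predicate.
cpg.add_edge(pred, "PDG", assign)

# A toy "query": statements whose execution depends on a predicate mentioning x.
hits = [cpg.nodes[d]["code"]
        for s, label, d in cpg.edges
        if label == "PDG" and "x" in cpg.nodes[s].get("code", "")]
print(hits)   # ['y = x + 1']
```

In a real graph database the same structure would be stored as labelled nodes and edges and queried with a graph query language such as Cypher (Neo4j) or Gremlin.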
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Compton telescope** Compton telescope: A Compton telescope (also known as a Compton camera or Compton imager) is a gamma-ray detector which utilizes Compton scattering to determine the origin of the observed gamma rays. Compton cameras are usually applied to detect gamma rays in the energy range where Compton scattering is the dominant interaction process, from a few hundred keV to several MeV. They are applied in fields such as astrophysics, nuclear medicine, and nuclear threat detection. In astrophysics, the most famous Compton telescope was COMPTEL aboard the Compton Gamma Ray Observatory, which pioneered the observation of the gamma-ray sky in the energy range between 0.75 and 30 MeV. A potential successor is NCT, the Nuclear Compton Telescope.
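To make the reconstruction principle concrete: in a two-stage Compton camera the photon Compton-scatters in the first detector, depositing energy E1, and the scattered photon is absorbed in the second detector, depositing E2. Standard Compton kinematics, cos θ = 1 − m_e c² (1/E2 − 1/(E1 + E2)), then gives the scattering angle, which constrains the source direction to a cone around the scatter axis. The sketch below is a minimal illustration under the assumption that the scattered photon is fully absorbed; the function and variable names are my own, not drawn from the article.

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle(e_scatter_kev, e_absorb_kev):
    """Scattering angle (radians) from the two measured energy deposits.

    e_scatter_kev: energy given to the recoil electron in the scatterer (E1)
    e_absorb_kev:  energy of the scattered photon, fully absorbed downstream (E2)
    Compton kinematics: cos(theta) = 1 - m_e c^2 * (1/E2 - 1/(E1 + E2)).
    """
    e_total = e_scatter_kev + e_absorb_kev          # initial photon energy
    cos_theta = 1.0 - M_E_C2_KEV * (1.0 / e_absorb_kev - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies are kinematically inconsistent")
    return math.acos(cos_theta)

# Example: a 662 keV photon (Cs-137) deposits 200 keV in the scatterer.
theta = compton_cone_angle(200.0, 462.0)
print(f"cone opening angle: {math.degrees(theta):.1f} degrees")
```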
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Axial multipole moments** Axial multipole moments: Axial multipole moments are a series expansion of the electric potential of a charge distribution localized close to the origin along one Cartesian axis, denoted here as the z-axis. However, the axial multipole expansion can also be applied to any potential or field that varies inversely with the distance to the source, i.e., as $1/R$. For clarity, we first illustrate the expansion for a single point charge, then generalize to an arbitrary charge density $\lambda(z)$ localized to the z-axis.
Axial multipole moments of a point charge: The electric potential of a point charge $q$ located on the z-axis at $z = a$ equals
$$\Phi(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0}\,\frac{1}{\sqrt{r^{2} + a^{2} - 2 a r \cos\theta}}.$$
If the radius $r$ of the observation point is greater than $a$, we may factor out $1/r$ and expand the square root in powers of $(a/r) < 1$ using Legendre polynomials:
$$\Phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \sum_{k=0}^{\infty} \frac{M_k}{r^{k+1}}\, P_k(\cos\theta),$$
where the axial multipole moments $M_k \equiv q a^{k}$ contain everything specific to a given charge distribution; the other parts of the electric potential depend only on the coordinates of the observation point P. Special cases include the axial monopole moment $M_0 = q$, the axial dipole moment $M_1 = q a$ and the axial quadrupole moment $M_2 \equiv q a^{2}$. This illustrates the general theorem that the lowest non-zero multipole moment is independent of the origin of the coordinate system, but higher multipole moments are not (in general).
Axial multipole moments of a point charge: Conversely, if the radius $r$ is less than $a$, we may factor out $1/a$ and expand in powers of $(r/a) < 1$, once again using Legendre polynomials:
$$\Phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \sum_{k=0}^{\infty} I_k\, r^{k}\, P_k(\cos\theta),$$
where the interior axial multipole moments $I_k \equiv q / a^{k+1}$ contain everything specific to a given charge distribution; the other parts depend only on the coordinates of the observation point P.
General axial multipole moments: To get the general axial multipole moments, we replace the point charge of the previous section with an infinitesimal charge element $\lambda(\zeta)\, d\zeta$, where $\lambda(\zeta)$ represents the charge density at position $z = \zeta$ on the z-axis. If the radius $r$ of the observation point P is greater than the largest $|\zeta|$ for which $\lambda(\zeta)$ is significant (denoted $\zeta_{\text{max}}$), the electric potential may be written
$$\Phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \sum_{k=0}^{\infty} \frac{M_k}{r^{k+1}}\, P_k(\cos\theta),$$
where the axial multipole moments $M_k$ are defined
$$M_k \equiv \int d\zeta\, \lambda(\zeta)\, \zeta^{k}.$$
Special cases include the axial monopole moment (= total charge) $M_0 \equiv \int d\zeta\, \lambda(\zeta)$, the axial dipole moment $M_1 \equiv \int d\zeta\, \lambda(\zeta)\, \zeta$, and the axial quadrupole moment $M_2 \equiv \int d\zeta\, \lambda(\zeta)\, \zeta^{2}$. Each successive term in the expansion varies inversely with a greater power of $r$, e.g., the monopole potential varies as $1/r$, the dipole potential varies as $1/r^{2}$, the quadrupole potential varies as $1/r^{3}$, etc. Thus, at large distances ($\zeta_{\text{max}}/r \ll 1$), the potential is well-approximated by the leading nonzero multipole term.
General axial multipole moments: The lowest non-zero axial multipole moment is invariant under a shift $b$ in origin, but higher moments generally depend on the choice of origin. The shifted multipole moments $M_k'$ would be
$$M_k' = \int d\zeta\, \lambda(\zeta)\, (\zeta + b)^{k}.$$
Expanding the polynomial under the integral leads to the equation
$$M_k' = \sum_{n=0}^{k} \binom{k}{n}\, b^{\,k-n}\, M_n.$$
If the lower moments $M_{k-1}, M_{k-2}, \ldots, M_1, M_0$ are zero, then $M_k' = M_k$. The same equation shows that multipole moments higher than the first non-zero moment do depend on the choice of origin (in general).
Interior axial multipole moments: Conversely, if the radius $r$ is smaller than the smallest $|\zeta|$ for which $\lambda(\zeta)$ is significant (denoted $\zeta_{\text{min}}$), the electric potential may be written
$$\Phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \sum_{k=0}^{\infty} I_k\, r^{k}\, P_k(\cos\theta),$$
where the interior axial multipole moments $I_k$ are defined
$$I_k \equiv \int d\zeta\, \frac{\lambda(\zeta)}{\zeta^{\,k+1}}.$$
Special cases include the interior axial monopole moment ($\neq$ the total charge) $I_0 \equiv \int d\zeta\, \lambda(\zeta)/\zeta$, the interior axial dipole moment $I_1 \equiv \int d\zeta\, \lambda(\zeta)/\zeta^{2}$, etc. Each successive term in the expansion varies with a greater power of $r$, e.g., the interior monopole potential is constant (varies as $r^{0}$), the dipole potential varies as $r$, the quadrupole potential as $r^{2}$, etc. At short distances ($r/\zeta_{\text{min}} \ll 1$), the potential is well-approximated by the leading nonzero interior multipole term.
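As a hedged numerical check of the exterior expansion above, the sketch below discretizes an assumed line-charge density λ(ζ) on the z-axis, computes the moments M_k by simple quadrature, and compares the truncated series with a direct evaluation of the Coulomb potential (in units where 4πε₀ = 1). The density, truncation order, and observation point are illustrative choices, not taken from the article.

```python
import numpy as np
from scipy.special import eval_legendre

# Illustrative line-charge density on the z-axis (units with 4*pi*eps0 = 1).
zeta = np.linspace(-0.5, 0.5, 2001)      # charge confined to |zeta| <= 0.5
dz = zeta[1] - zeta[0]
lam = 1.0 + zeta                          # assumed density lambda(zeta)

# Exterior axial multipole moments  M_k = integral of lambda(zeta) * zeta^k dzeta
k_max = 8
M = [np.sum(lam * zeta**k) * dz for k in range(k_max + 1)]

def potential_multipole(r, theta):
    """Truncated exterior expansion: sum_k M_k P_k(cos theta) / r^(k+1)."""
    c = np.cos(theta)
    return sum(M[k] * eval_legendre(k, c) / r**(k + 1) for k in range(k_max + 1))

def potential_direct(r, theta):
    """Direct quadrature of the Coulomb potential of the same line charge."""
    dist = np.sqrt(r**2 + zeta**2 - 2.0 * r * zeta * np.cos(theta))
    return np.sum(lam / dist) * dz

r, theta = 2.0, 0.7   # observation point with r > zeta_max, so the series converges
print(potential_multipole(r, theta), potential_direct(r, theta))
# The two values agree to several digits, and the agreement improves as r grows.
```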
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Smooth scheme** Smooth scheme: In algebraic geometry, a smooth scheme over a field is a scheme which is well approximated by affine space near any point. Smoothness is one way of making precise the notion of a scheme with no singular points. A special case is the notion of a smooth variety over a field. Smooth schemes play the role in algebraic geometry of manifolds in topology. Definition: First, let X be an affine scheme of finite type over a field k. Equivalently, X has a closed immersion into affine space An over k for some natural number n. Then X is the closed subscheme defined by some equations g1 = 0, ..., gr = 0, where each gi is in the polynomial ring k[x1,..., xn]. The affine scheme X is smooth of dimension m over k if X has dimension at least m in a neighborhood of each point, and the matrix of derivatives (∂gi/∂xj) has rank at least n−m everywhere on X. (It follows that X has dimension equal to m in a neighborhood of each point.) Smoothness is independent of the choice of immersion of X into affine space. Definition: The condition on the matrix of derivatives is understood to mean that the closed subset of X where all (n−m) × (n − m) minors of the matrix of derivatives are zero is the empty set. Equivalently, the ideal in the polynomial ring generated by all gi and all those minors is the whole polynomial ring. Definition: In geometric terms, the matrix of derivatives (∂gi/∂xj) at a point p in X gives a linear map Fn → Fr, where F is the residue field of p. The kernel of this map is called the Zariski tangent space of X at p. Smoothness of X means that the dimension of the Zariski tangent space is equal to the dimension of X near each point; at a singular point, the Zariski tangent space would be bigger. Definition: More generally, a scheme X over a field k is smooth over k if each point of X has an open neighborhood which is a smooth affine scheme of some dimension over k. In particular, a smooth scheme over k is locally of finite type. There is a more general notion of a smooth morphism of schemes, which is roughly a morphism with smooth fibers. In particular, a scheme X is smooth over a field k if and only if the morphism X → Spec k is smooth. Properties: A smooth scheme over a field is regular and hence normal. In particular, a smooth scheme over a field is reduced. Define a variety over a field k to be an integral separated scheme of finite type over k. Then any smooth separated scheme of finite type over k is a finite disjoint union of smooth varieties over k. For a smooth variety X over the complex numbers, the space X(C) of complex points of X is a complex manifold, using the classical (Euclidean) topology. Likewise, for a smooth variety X over the real numbers, the space X(R) of real points is a real manifold, possibly empty. Properties: For any scheme X that is locally of finite type over a field k, there is a coherent sheaf Ω1 of differentials on X. The scheme X is smooth over k if and only if Ω1 is a vector bundle of rank equal to the dimension of X near each point. In that case, Ω1 is called the cotangent bundle of X. The tangent bundle of a smooth scheme over k can be defined as the dual bundle, TX = (Ω1)*. Properties: Smoothness is a geometric property, meaning that for any field extension E of k, a scheme X is smooth over k if and only if the scheme XE := X ×Spec k Spec E is smooth over E. For a perfect field k, a scheme X is smooth over k if and only if X is locally of finite type over k and X is regular. 
Generic smoothness: A scheme X is said to be generically smooth of dimension n over k if X contains an open dense subset that is smooth of dimension n over k. Every variety over a perfect field (in particular an algebraically closed field) is generically smooth. Examples: Affine space and projective space are smooth schemes over a field k. An example of a smooth hypersurface in projective space P^n over k is the Fermat hypersurface x_0^d + ... + x_n^d = 0, for any positive integer d that is invertible in k. An example of a singular (non-smooth) scheme over a field k is the closed subscheme x^2 = 0 in the affine line A^1 over k. An example of a singular (non-smooth) variety over k is the cuspidal cubic curve x^2 = y^3 in the affine plane A^2, which is smooth outside the origin (x, y) = (0, 0). Examples: A 0-dimensional variety X over a field k is of the form X = Spec E, where E is a finite extension field of k. The variety X is smooth over k if and only if E is a separable extension of k. Thus, if E is not separable over k, then X is a regular scheme but is not smooth over k. For example, let k be the field of rational functions F_p(t) for a prime number p, and let E = F_p(t^(1/p)); then Spec E is a variety of dimension 0 over k which is a regular scheme, but not smooth over k. Examples: Schubert varieties are in general not smooth.
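The Jacobian criterion from the definition above can be checked mechanically for the cuspidal cubic mentioned in the examples. The sketch below uses the sympy library over the rationals; the helper name is_smooth_at and the choice of test points are my own, for illustration only.

```python
import sympy as sp

x, y = sp.symbols('x y')
g = x**2 - y**3            # the cuspidal cubic: a curve (m = 1) in the plane (n = 2)

# Jacobian criterion: the curve is smooth at a point of the curve iff the matrix
# of derivatives (dg/dx, dg/dy) has rank n - m = 1 there, i.e. is not the zero row.
jac = sp.Matrix([g]).jacobian([x, y])     # Matrix([[2*x, -3*y**2]])

def is_smooth_at(px, py):
    row = jac.subs({x: px, y: py})
    return row.rank() == 1

print(is_smooth_at(0, 0))   # False: the origin is the singular (cusp) point
print(is_smooth_at(1, 1))   # True: (1, 1) lies on the curve and is a smooth point
```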
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Classical Mechanics (Kibble and Berkshire)** Classical Mechanics (Kibble and Berkshire): Classical Mechanics is a well-established textbook written by Thomas Walter Bannerman Kibble and Frank Berkshire of the Imperial College Mathematics Department. The book provides a thorough coverage of the fundamental principles and techniques of classical mechanics, a long-standing subject which is at the base of all of physics. Publication history: The English-language editions were published as follows: The first edition was published by Kibble as Kibble, T. W. B. Classical Mechanics. London: McGraw–Hill, 1966. 296 p. The second edition, also by Kibble alone, appeared in 1973. The fourth, jointly with F. H. Berkshire, appeared in 1996, and the fifth, jointly with F. H. Berkshire, in 2004. The book has been translated into several languages: French, by Michel Le Ray and Françoise Guérin, as Mécanique classique; Modern Greek, by Δ. Σαρδελής and Π. Δίτσας, edited by Γ. Ι. Παπαδόπουλος, as Κλασσική μηχανική; German; Turkish, by Kemal Çolakoğlu, as Klasik mekanik; Spanish, as Mecánica clásica; and Portuguese, as Mecanica classica. Reception: The various editions are held in 1789 libraries. In comparison, the various (2011) editions of Herbert Goldstein's Classical Mechanics are held in 1772 libraries. The original edition was reviewed in Current Science. The fourth edition was reviewed by C. Isenberg in 1997 in the European Journal of Physics, and the fifth edition was reviewed in Contemporary Physics. Contents (5th edition): Preface Useful Constants and Units Chapter 1: Introduction Chapter 2: Linear motion Chapter 3: Energy and Angular momentum Chapter 4: Central Conservative Forces Chapter 5: Rotating Frames Chapter 6: Potential Theory Chapter 7: The Two-Body Problem Chapter 8: Many-Body Systems Chapter 9: Rigid Bodies Chapter 10: Lagrangian mechanics Chapter 11: Small oscillations and Normal modes Chapter 12: Hamiltonian mechanics Chapter 13: Dynamical systems and their geometry Chapter 14: Order and Chaos in Hamiltonian systems Appendix A: Vectors Appendix B: Conics Appendix C: Phase plane Analysis near Critical Points Appendix D: Discrete Dynamical Systems – Maps Answers to Problems Bibliography Index
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Methexis** Methexis: In theatre, methexis (Ancient Greek: μέθεξις; also methectics), is "group sharing". Originating from Greek theatre, the audience participates, creates and improvises the action of the ritual. Methexis: In philosophy, methexis is the relation between a particular and a form (in Plato's sense), e.g. a beautiful object is said to partake of the form of beauty.Methexis is sometimes contrasted with mimesis. The latter "connotes emphasis on the solo performer (the hero) separate from the audience," in direct contrast to the communal methectic theatrical experience which has "little or no 'fourth wall'".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Potty chair** Potty chair: A potty chair, or simply a potty, is a proportionately small chair or enclosure with an opening for seating very young children to "go potty." It is a variant of the close stool which was used by adults before the widespread adoption of water flushed toilets. There are a variety of designs, some placed directly over the toilet called "Toilet Training Seats" so the egested fecal material drops directly into the toilet bowl thereby eliminating manual removal and disposal of the said waste from a receptacle beneath the hole which is often a bag or receptacle similar to a chamber pot. Potty chairs are used during potty training, a.k.a. toilet training. These are very useful for young babies. Potty chair: Usage of the potty chair greatly varies across cultures.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Solothurn Madonna** Solothurn Madonna: The Solothurn Madonna is a 1522 painting produced by Hans Holbein the Younger in Basel. It shows the Virgin Mary and Christ enthroned, flanked by Martin of Tours (shown as a bishop giving alms to a beggar) and Ursus of Solothurn (shown as a soldier in armour). Holbein used his wife Elsbeth as his model for the Madonna, and the baby "may well have been modelled on Holbein and Elsbeth's baby son Philipp."The church which originally commissioned it is unknown, but it resurfaced in 1864 in poor condition in the Allerheiligenkapelle in the Grenchen district of Solothurn. It has been owned by the town of Solothurn since 1879, and it has been named after the town since the late 19th century. It is kept in the Solothurn Art Museum. After the Darmstadt Madonna, the Solothurn Madonna is the second largest surviving Madonna by Hans Holbein the Younger. Bibliography (in German): Jacob Amiet: Hans Holbein's Madonna von Solothurn Und der Stifter Nicolaus Conrad, Solothurn, 1879. Reprint: Bibliolife, LaVergne, 2011. Oskar Bätschmann, Pascal Griener: Hans Holbein d.J. – Die Solothurner Madonna. Eine Sacra Conversazione im Norden, Basel, 1998. ISBN 3-7965-1050-7 Jochen Sander: Hans Holbein d. J. und die niederländische Kunst, am Beispiel der "Solothurner Madonna" in: Zeitschrift für Schweizerische Archäologie und Kunstgeschichte 55 (1998), S. 123–130.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Charting application** Charting application: A charting application is a computer program used to create a graphical representation (a chart) from non-graphical data entered by a user. The data most often comes from a spreadsheet application, but it may also come from a dedicated scientific application (such as a symbolic mathematics computing system or a proprietary data collection application) or from an online spreadsheet service. Charting application: There are several online charting services available, one of the most popular being the NCES chart tool of the U.S. Department of Education's Institute of Education Sciences.
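As a minimal illustration of what such a program does, the sketch below uses the matplotlib plotting library to turn a small table of non-graphical data into a bar chart; the data, labels, and output filename are invented for the example.

```python
import matplotlib.pyplot as plt

# Non-graphical data as a user might enter it in a spreadsheet (invented example).
categories = ["Q1", "Q2", "Q3", "Q4"]
sales = [120, 95, 140, 160]

# The "charting application" step: turn the table into a graphical representation.
fig, ax = plt.subplots()
ax.bar(categories, sales)
ax.set_xlabel("Quarter")
ax.set_ylabel("Units sold")
ax.set_title("Sales by quarter")
fig.savefig("chart.png")   # or plt.show() for interactive use
```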
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glossary of board games** Glossary of board games: This glossary of board games explains commonly used terms in board games, in alphabetical order. For a list of board games, see List of board games; for terms specific to chess, see Glossary of chess; for terms specific to chess problems, see Glossary of chess problems. B: bear off To remove game piece(s) from the board and out of play. Past tense: borne off. bit See piece. Black Used often to refer to one of the players in two-player games. Black's pieces are typically a dark color but not necessarily black (e.g. in English draughts official play they are red). Cf. White. See also White and Black in chess. board Short for gameboard. C: capture A method that removes another player's piece(s) from the board. For example: in checkers, if a player jumps an opponent's piece, that piece is captured. Captured pieces are typically removed from the game. In some games, captured pieces remain in hand and can be reentered into active play (e.g. shogi, Bughouse chess). See also Game mechanics § Capture/eliminate. card A piece of cardboard often bearing instructions, and usually chosen randomly from a deck by shuffling. cell See hex and space. checker See piece. checkerboard A square gameboard with alternating dark and light-colored squares. chessboard The square gameboard used in chess, having 64 squares of alternating dark and light-colors. column See file. component A physical item included in the game. E.g. the box itself, the board, the cards, the tokens, zipper-lock bags, inserts, rule books, etc. See also equipment. counter See piece. currency A scoring mechanic used by some games to determine the winner, e.g. money (Monopoly) or counters (Zohn Ahl). C: custodian capture A capture method whereby an enemy piece is captured by being blocked on adjacent sides by opponent pieces. (Typically laterally on two sides as in Tablut and Hasami shogi, or laterally on four sides as in Go. Capture by blocking on two sides diagonally is done in Stone Warriors, and surrounding on three sides is required in Bizingo.) Also called escort capture and interception capture. D: deck A stack of cards. die sing. of dice. D: dice Modern cubic dice are used to generate random numbers in many games – e.g. a single die in Trivial Pursuit, or two dice per player in backgammon. Role-playing games typically use one or more polyhedral dice. Games such as Pachisi and chaupur traditionally use cowrie shells. The games Zohn Ahl and Hyena chase use dice sticks. The game yut uses yut sticks. D: direction of play The order of turns in a multiplayer game, e.g. clockwise around the board means the player to the left has the next turn. disc See piece. displacement capture A capture method whereby a capturing piece replaces the captured piece on its square, cell, or point on the gameboard. doublet 1. The same number displayed by two dice. 2. The number displayed by one or more die is doubled. 3. The union of two game pieces to move as one. E: empty board Many games start with all pieces out of play; for example, Nine men's morris, Conspirateurs, Entropy, and Go (if a handicap is not employed). Some gameboards feature staging areas for the pieces before any are put into play; for example, Ludo and Malefiz. enemy An enemy piece is a piece in the same army or set of pieces controlled by the opponent; or, in a multiplayer game, a piece controlled by the partner of an opponent. 
Engine-building A board game genre and gameplay mechanic that involves adding and modifying combinations of abilities or resources to assemble a virtuous circle of increasingly powerful and productive outcomes. A successfully built engine can create a snowball or domino effect. equipment Refers to physical components required to play a game, e.g. pieces, gameboard, dice. escort capture See custodian capture. exchange For games featuring captures, the capture of a piece followed immediately by the opponent's recapture. F: file A straight line of spaces running from top to bottom of a gameboard at right angle to a rank. Also called column. friendly A piece in the same army or set of pieces controlled by a player; or, in a multiplayer game, a piece controlled by a player's partner. G: gameboard Or game board. The (usually quadrilateral) marked surface on which one plays a board game. The namesake of the board game, gameboards would seem to be a necessary and sufficient condition of the genre, though card games that do not use a standard deck of cards (as well as games that use neither cards nor a gameboard) are often colloquially included. Most games use a standardized and unchanging board (chess, Go, and backgammon each have such a board), but some games use a modular board whose component tiles or cards can assume varying layouts from one session to another, or even during gameplay. G: game component See component. game equipment See equipment. game piece See piece. gameplay The execution of a game; or specifically its strategy, tactics, conventions, or mechanics. gamer A person who plays board game(s). See also player. gamespace A gameboard for a three-dimensional game (e.g., the 5×5×5 cubic board for Raumschach). grace An extra turn. H: handicap An advantage given to a weaker side at the start of a game to level the winning chances against a stronger opponent. Go has formal handicap systems (see Go handicaps); chess has traditional handicap methods not used in rated competitions (see Chess handicap). hex In hexagon-based board games, this is the common term for a standard space on the board. This is most often used in wargaming, though many abstract strategy games such as Abalone, Agon, hexagonal chess, GIPF project games, and connection games use hexagonal layouts. huff The forfeiture of a piece as a penalty for infringing a rule. I: in hand A piece in hand is one currently not in play on the gameboard, but may be entered into play on a turn. Examples are captured pieces in shogi or Bughouse chess, able to be dropped into play as a move; or pieces that begin the game in a staging area off the main board, as in Ludo or Chessence. I: in play A piece active on the main board, not in hand or in a staging area. Antonym: out of play. interception capture See custodian capture. intervention capture A capture method the reverse of the custodian method: a player captures two opponent pieces by moving to occupy the empty space between them. J: jump To move a piece over one or more pieces or spaces on the gameboard. Depending on the context, jumping may include capturing an opponent's piece. See also Game mechanics § Capture/eliminate. M: man In chess, a piece or a pawn. In draughts, an uncrowned (i.e. not a king) piece. meeple A game piece that represents a person in concept, shaped like an approximation of a person. mill Three or more pieces in a line of adjacent spaces. move See turn. O: odds See handicap. open board A gameboard with no pieces, or one piece, in play. 
Typically for demonstration or instruction. order of play See direction of play. orthogonal A horizontal (straight left or right) or vertical (straight forward or backward) direction a piece moves on a gameboard. out of play A piece not active on the main board, it might be in hand or in a staging area. Antonym: in play. over the board A game played face to face with the opponent, as opposed to playing remotely (online or other means, for e.g. correspondence chess). P: pass The voluntary or involuntary forfeiture of a turn by a player. pie rule Used in some two-player games to eliminate any advantage of moving first. After the first player's opening move, the second player may optionally swap sides. P: piece Or bit, checker, chip, counter, disc, draughtsman, game piece, man, meeple, mover, pawn, player piece, playing piece, singleton, stone, token, unit. A player's representative on the gameboard made of a piece of material made to look like a known object (such as a scale model of a person, animal, or inanimate object) or otherwise general symbol. Each player may control one or more pieces. Some games involve commanding multiple pieces, such as chess pieces or Monopoly houses and hotels, that have unique designations and capabilities within the parameters of the game; in other games, such as Go, all pieces controlled by a player have the same capabilities. In some modern board games, such as Clue, there are other pieces that are not a player's representative (i.e. weapons). In some games, such as mancala games, pieces may not represent or belong to any particular player. Mancala pieces are undifferentiated and typically seeds but sometimes beans, coins, cowry shells, ivory balls, or pebbles. Note that in chess usage the term piece in some contexts only refers to some of the pieces, which are also known as chessmen. See also Counter (board wargames). P: playboard See gameboard. player The participant(s) in the game. See also gamer. playing area The spaces on a gameboard for use by pieces in play. playspace See playing area. point See space. polyhedral dice Dice that are not cubes, usually some kind of Platonic solid. Polyhedral dice are generally referred to through the construction "d + number of sides" (ex. d4, d8, d12, d20). See also dice. R: rank A straight line of spaces running from one side to the other across a gameboard at right angle to a file. Also called row. replacement capture See displacement capture. row See rank. rule A condition or stipulation by which a game is played. ruleset The comprehensive set of rules which define and govern a game. S: singleton A game piece that is isolated and often prone to attack. S: space A physical unit of progress on a gameboard delimited by a distinct border, and not further divisible according to the game's rules. Alternatively, a unique position on the board on which a piece in play may be located. For example, in Go, the pieces are placed on grid line intersections called points, and not in the areas bounded by the borders, as in chess. The bounded area geometries can be square (e.g. chess), rectangular (e.g. shogi), hexagonal (e.g. Chinese Checkers), triangular (e.g. Bizingo), quadrilateral (e.g. three-player chess), cubic (e.g. Raumschach), or other shapes (e.g. Circular chess). Cf. gamespace. See also Game mechanics § Movement. S: square See space. staging area A space set aside from the main gameboard to contain pieces in hand. In Ludo, the staging areas are called yards. In shogi, pieces in hand are placed on komadai. 
starting area See staging area. stone See piece. swap See exchange. T: take See capture. token See piece. trade See exchange. triplet The same number displayed by three dice. turn A player's opportunity to move a piece or make a decision that influences gameplay. Turns to move usually alternate equally between competing players or teams. See also Turn-based game. W: White Used often to refer to one of the players in two-player games. White's pieces are typically a light color but not necessarily white (e.g. backgammon sets use various colors for White; shogi sets have no color distinction between sides). White often moves first but not always (e.g. Black moves first in English draughts, shogi, and Go). Cf. Black. See also White and Black in chess. W: Worker Placement A genre of board games in which players take turns selecting an action while optimizing their resources and making meaningful decisions.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vodka tonic** Vodka tonic: A vodka tonic is a long drink made with varying proportions of vodka and tonic water. Vodka tonics are frequently garnished with a slice of lime or lemon. One commonly used recipe is one part vodka and one part tonic water in a tumbler, often a highball glass over ice, with a generous lime wedge squeezed into it.The drink is referenced in the lyrics of the song "Goodbye Yellow Brick Road" by Elton John.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Opinion piece** Opinion piece: An opinion piece is an article, usually published in a newspaper or magazine, that mainly reflects the author's opinion about a subject. Opinion pieces are featured in many periodicals. Editorials: Opinion pieces may take the form of an editorial, usually written by the senior editorial staff or publisher of the publication, in which case the opinion piece is usually unsigned and may be supposed to reflect the opinion of the periodical. In major newspapers, such as the New York Times and the Boston Globe, editorials are classified under the heading "opinion." Columns: Other opinion pieces may be written by a (regular or guest) columnist. Such pieces, referred to as "columns", may be strongly opinionated, and the opinion expressed is that of the writer (and not the periodical). However, not all columns are opinion pieces; for example, columnists may write columns that are nonsensical and solely intended for their humouristic effect. Op-eds: An op-ed (abbreviated from "opposite the editorial page") is an opinion piece that appears on a page in the newspaper dedicated solely to them, often written by a subject-matter expert, a person with a unique perspective on an issue, or a regular columnist employed by the paper. Op-eds may be solicited by the editorial staff, but may also be submitted by the author for publication. Although the decision to publish such a piece rests with the editorial board, any opinions expressed are those of the author. A letter to the editor is a common example of this.
**Signalling Connection Control Part** Signalling Connection Control Part: The Signalling Connection Control Part (SCCP) is a network layer protocol that provides extended routing, flow control, segmentation, connection-orientation, and error correction facilities in Signaling System 7 telecommunications networks. SCCP relies on the services of MTP for basic routing and error detection. Published specification: The base SCCP specification is defined by the ITU-T, in recommendations Q.711 to Q.714, with additional information to implementors provided by Q.715 and Q.716. There are, however, regional variations defined by local standards bodies. In the United States, ANSI publishes its modifications to Q.713 as ANSI T1.112. The TTC publishes as JT-Q.711 to JT-Q.714, and in Europe ETSI publishes ETSI EN 300-009-1, both of which document their modifications to the ITU-T specifications. Routing facilities beyond MTP: Although MTP provides routing capabilities based upon the Point Code, SCCP allows routing using a Point Code and Subsystem number or a Global Title. A Point Code is used to address a particular node on the network, whereas a Subsystem number addresses a specific application available on that node. SCCP employs a process called Global Title Translation to determine Point Codes from Global Titles so as to instruct MTP on where to route messages. Routing facilities beyond MTP: SCCP messages contain an Address Indicator whose fields describe the type of addressing used and how the message should be routed:
Routing indicator: Route on Global Title, or Route on Point Code/Subsystem Number.
Global title indicator: No Global Title; Global Title includes Translation Type (TT), Numbering Plan Indicator (NPI) and Type of Number (TON); or Global Title includes Translation Type only.
Subsystem indicator: Subsystem Number present, or Subsystem Number not present.
Point Code indicator: Point Code present, or Point Code not present.
Global Title Address Indicator coding: Address Indicator coded as national (the Address Indicator is treated as international if not specified).
Protocol classes: SCCP provides 4 classes of protocol to its applications:
Class 0: Basic connectionless.
Class 1: Sequenced connectionless.
Class 2: Basic connection-oriented.
Class 3: Flow control connection-oriented.
The connectionless protocol classes provide the capabilities needed to transfer one Network Service Data Unit (NSDU) in the "data" field of an XUDT, LUDT or UDT message. When one connectionless message is not sufficient to convey the user data contained in one NSDU, a segmenting/reassembly function for protocol classes 0 and 1 is provided. In this case, the SCCP at the originating node or in a relay node provides segmentation of the information into multiple segments prior to transfer in the "data" field of XUDT (or as a network option LUDT) messages. At the destination node, the NSDU is reassembled. Protocol classes: The connection-oriented protocol classes (protocol classes 2 and 3) provide the means to set up signalling connections in order to exchange a number of related NSDUs. The connection-oriented protocol classes also provide a segmenting and reassembling capability. If an NSDU is longer than 255 octets, it is split into multiple segments at the originating node, prior to transfer in the "data" field of DT messages. Each segment is less than or equal to 255 octets. At the destination node, the NSDU is reassembled.
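The 255-octet segmentation rule described above lends itself to a short illustration. The following Python fragment is only a sketch of that rule, not part of any real SCCP implementation; the function names and the byte-string representation of an NSDU are invented for the example.

```python
def segment_nsdu(nsdu, max_len=255):
    """Split an NSDU into segments of at most max_len octets, as is done
    before transfer in the "data" field of DT (or XUDT/LUDT) messages."""
    if len(nsdu) <= max_len:
        return [nsdu]                       # fits in a single message
    return [nsdu[i:i + max_len] for i in range(0, len(nsdu), max_len)]

def reassemble(segments):
    """Reassemble the original NSDU at the destination node."""
    return b"".join(segments)

# Example: a 600-octet NSDU becomes three segments of 255, 255 and 90 octets.
nsdu = bytes(600)
parts = segment_nsdu(nsdu)
assert [len(p) for p in parts] == [255, 255, 90]
assert reassemble(parts) == nsdu
```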
Protocol classes: Class 0: Basic connectionless The SCCP Class 0 protocol class is the most basic of the SCCP protocol classes. Network Service Data Units passed by higher layers to the SCCP in the originating node are delivered by the SCCP to higher layers in the destination node. They are transferred independently of each other. Therefore, they may be delivered to the SCCP user out-of-sequence. Thus, this protocol class corresponds to a pure connectionless network service. As a connectionless protocol, no network connection is established between the sender and the receiver. Protocol classes: Class 1: Sequenced connectionless SCCP Class 1 builds on the capabilities of Class 0, with the addition of a sequence control parameter in the NSDU which allows the SCCP User to instruct the SCCP that a given stream of messages should be delivered in sequence. Therefore, Protocol Class 1 corresponds to an enhanced connectionless protocol with assurances of in-sequence delivery. Protocol classes: Class 2: Basic connection-oriented SCCP Class 2 provides the facilities of Class 1, but also allows for an entity to establish a two-way dialog with another entity using SCCP. Class 3: Flow control connection-oriented Class 3 service builds upon Class 2, but also allows for expedited (urgent) messages to be sent and received, and for errors in sequencing (segment re-assembly) to be detected and for SCCP to restart a connection should this occur. Transport over IP Networks: In the SIGTRAN suite of protocols, there are two primary methods of transporting SCCP applications across Internet Protocol networks: SCCP can be transported indirectly using the MTP level 3 User Adaptation protocol (M3UA), a protocol which provides support for users of MTP-3, including SCCP. Alternatively, SCCP applications can operate directly over the SCCP User Adaptation protocol (SUA), which is a form of modified SCCP designed specifically for use in IP networking. Transport over IP Networks: ITU-T also provides for the transport of SCCP users over Internet Protocol using the Generic Signalling Transport service specified in Q.2150.0, the signalling transport converter for SCTP specified in Q.2150.3 and a specialized Transport-Independent Signalling Connection Control Part (TI-SCCP) specified in T-REC-Q.2220. TI-SCCP can also be used with the Generic Signalling Transport adapted for MTP3 and MTP3b as described in Q.2150.1, or adapted for SSCOP or SSCOPMCE as described in Q.2150.2.
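As a rough illustration of the difference between protocol classes 0 and 1, the toy model below treats class 0 traffic as having no ordering guarantee and class 1 traffic as grouped by a caller-supplied sequence control value, with messages sharing that value delivered in the order they were sent. This is not SCCP code; the class name, method names and data structures are invented purely for illustration.

```python
from collections import defaultdict

class ToyConnectionlessService:
    """Toy model: class 0 gives no ordering guarantee; class 1 preserves the
    order of messages that share the same sequence control value."""

    def __init__(self):
        self._class1_streams = defaultdict(list)   # sequence control -> NSDUs
        self._class0_pool = []

    def send(self, nsdu, protocol_class=0, sequence_control=None):
        if protocol_class == 1:
            self._class1_streams[sequence_control].append(nsdu)
        else:
            self._class0_pool.append(nsdu)          # may arrive in any order

    def deliver(self):
        for stream in self._class1_streams.values():
            yield from stream                        # in-sequence per stream
        yield from reversed(self._class0_pool)       # arbitrary order here

svc = ToyConnectionlessService()
svc.send(b"first", protocol_class=1, sequence_control=7)
svc.send(b"second", protocol_class=1, sequence_control=7)
print(list(svc.deliver()))   # b"first" before b"second": guaranteed only for class 1
```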
**Iptables** Iptables: iptables is a user-space utility program that allows a system administrator to configure the IP packet filter rules of the Linux kernel firewall, implemented as different Netfilter modules. The filters are organized in different tables, which contain chains of rules for how to treat network traffic packets. Different kernel modules and programs are currently used for different protocols; iptables applies to IPv4, ip6tables to IPv6, arptables to ARP, and ebtables to Ethernet frames. Iptables: iptables requires elevated privileges to operate and must be executed by user root, otherwise it fails to function. On most Linux systems, iptables is installed as /usr/sbin/iptables and documented in its man pages, which can be opened using man iptables when installed. It may also be found in /sbin/iptables, but since iptables is more like a service rather than an "essential binary", the preferred location remains /usr/sbin. Iptables: The term iptables is also commonly used to inclusively refer to the kernel-level components. x_tables is the name of the kernel module carrying the shared code portion used by all four modules that also provides the API used for extensions; subsequently, Xtables is more or less used to refer to the entire firewall (v4, v6, arp, and eb) architecture. iptables superseded ipchains; and the successor of iptables is nftables, which was released on 19 January 2014 and was merged into the Linux kernel mainline in kernel version 3.13. Overview: iptables allows the system administrator to define tables containing chains of rules for the treatment of packets. Each table is associated with a different kind of packet processing. Packets are processed by sequentially traversing the rules in chains. A rule in a chain can cause a goto or jump to another chain, and this can be repeated to whatever level of nesting is desired. (A jump is like a “call”, i.e. the point that was jumped from is remembered.) Every network packet arriving at or leaving from the computer traverses at least one chain. Overview: The origin of the packet determines which chain it traverses initially. There are five predefined chains (mapping to the five available Netfilter hooks), though a table may not have all chains. Predefined chains have a policy, for example DROP, which is applied to the packet if it reaches the end of the chain. The system administrator can create as many other chains as desired. These chains have no policy; if a packet reaches the end of the chain it is returned to the chain which called it. A chain may be empty. Overview: PREROUTING: Packets will enter this chain before a routing decision is made. INPUT: Packet is going to be locally delivered. It does not have anything to do with processes having an opened socket; local delivery is controlled by the "local-delivery" routing table: ip route show table local. FORWARD: All packets that have been routed and were not for local delivery will traverse this chain. OUTPUT: Packets sent from the machine itself will be visiting this chain. Overview: POSTROUTING: Routing decision has been made. Packets enter this chain just before handing them off to the hardware.A chain does not exist by itself; it belongs to a table. There are three tables: nat, filter, and mangle. Unless preceded by the option -t, an iptables command concerns the filter table by default. For example, the command iptables -L -v -n, which shows some chains and their rules, is equivalent to iptables -t filter -L -v -n. 
To show the chains of table nat, use the command iptables -t nat -L -v -n. Each rule in a chain contains the specification of which packets it matches. It may also contain a target (used for extensions) or verdict (one of the built-in decisions). As a packet traverses a chain, each rule in turn is examined. If a rule does not match the packet, the packet is passed to the next rule. If a rule does match the packet, the rule takes the action indicated by the target/verdict, which may or may not allow the packet to continue along the chain. Matches make up the bulk of rulesets, as they contain the conditions packets are tested for. These conditions can apply to almost any layer of the OSI model, as with e.g. the --mac-source and -p tcp --dport parameters, and there are also protocol-independent matches, such as -m time. Overview: The packet continues to traverse the chain until either a rule matches the packet and decides its ultimate fate (for example by calling one of the ACCEPT or DROP verdicts, or a module returning such an ultimate fate); or a rule calls the RETURN verdict, in which case processing returns to the calling chain; or the end of the chain is reached, in which case traversal either continues in the parent chain (as if RETURN had been used) or, for a base chain, the chain policy (an ultimate fate) is applied. Targets also return a verdict like ACCEPT (NAT modules will do this) or DROP (e.g. the REJECT module), but may also imply CONTINUE (e.g. the LOG module; CONTINUE is an internal name) to continue with the next rule as if no target/verdict was specified at all. Userspace utilities: Front-ends There are numerous third-party software applications for iptables that try to facilitate setting up rules. Front-ends in textual or graphical fashion allow users to click-generate simple rulesets; scripts usually refer to shell scripts (but other scripting languages are possible too) that call iptables or (the faster) iptables-restore with a set of predefined rules, or rules expanded from a template with the help of a simple configuration file. Linux distributions commonly employ the latter scheme of using templates. Such a template-based approach is practically a limited form of a rule generator, and such generators also exist in standalone fashion, for example as PHP web pages. Userspace utilities: Such front-ends, generators and scripts are often limited by their built-in template systems and by where the templates offer substitution spots for user-defined rules. Also, the generated rules are generally not optimized for the particular firewalling effect the user wishes, as doing so would likely increase the maintenance cost for the developer. Users who reasonably understand iptables and want their ruleset optimized are advised to construct their own ruleset. Userspace utilities: Other notable tools:
FireHOL – a shell script wrapping iptables with an easy-to-understand plain-text configuration file
NuFW – an authenticating firewall extension to Netfilter
Shorewall – a gateway/firewall configuration tool, making it possible to use easier rules and have them mapped to iptables
Literature: Gregor N. Purdy (25 August 2004). Linux iptables Pocket Reference: Firewalls, NAT & Accounting. O'Reilly Media, Inc. ISBN 978-1-4493-7898-1.
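The traversal behaviour summarized in the Overview above (rule matching, ACCEPT/DROP verdicts, RETURN, and the base chain policy) can be modelled in a few lines. The sketch below is a simplified illustration of that description, not Netfilter or iptables code; the chain and rule representations are invented for the example.

```python
def traverse(chain, packet, chains, policy="ACCEPT"):
    """Simplified model of chain traversal:
    - a matching rule with ACCEPT/DROP decides the packet's ultimate fate;
    - a jump to a user chain is like a call, and RETURN comes back;
    - falling off the end of a user chain returns to the caller (policy=None);
    - falling off the end of a base chain applies the chain policy."""
    for match, target in chains[chain]:
        if not match(packet):
            continue                        # rule does not match: next rule
        if target in ("ACCEPT", "DROP"):
            return target                   # ultimate fate decided
        if target == "RETURN":
            return None                     # back to the calling chain
        verdict = traverse(target, packet, chains, policy=None)
        if verdict in ("ACCEPT", "DROP"):
            return verdict                  # user chain decided the fate
    return policy                           # end of chain reached

# Example: a base INPUT chain with DROP policy and a user-defined "ssh" chain.
chains = {
    "INPUT": [(lambda p: p["proto"] == "tcp" and p["dport"] == 22, "ssh")],
    "ssh":   [(lambda p: p["src"].startswith("10."), "ACCEPT")],
}
packet = {"proto": "tcp", "dport": 22, "src": "10.0.0.5"}
print(traverse("INPUT", packet, chains, policy="DROP"))   # -> ACCEPT
```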
**Standard normal table** Standard normal table: In statistics, a standard normal table, also called the unit normal table or Z table, is a mathematical table for the values of Φ, the cumulative distribution function of the normal distribution. It is used to find the probability that a statistic is observed below, above, or between values on the standard normal distribution, and by extension, any normal distribution. Since probability tables cannot be printed for every normal distribution, as there are an infinite variety of normal distributions, it is common practice to convert a normal to a standard normal (known as a z-score) and then use the standard normal table to find probabilities. Normal and standard normal distribution: Normal distributions are symmetrical, bell-shaped distributions that are useful in describing real-world data. The standard normal distribution, represented by Z, is the normal distribution having a mean of 0 and a standard deviation of 1. Normal and standard normal distribution: Conversion If X is a random variable from a normal distribution with mean μ and standard deviation σ, its Z-score may be calculated from X by subtracting μ and dividing by the standard deviation: $Z = \dfrac{X - \mu}{\sigma}$. If $\bar{X}$ is the mean of a sample of size n from some population in which the mean is μ and the standard deviation is σ, the standard error is $\sigma/\sqrt{n}$: $Z = \dfrac{\bar{X} - \mu}{\sigma/\sqrt{n}}$. If $\sum X$ is the total of a sample of size n from some population in which the mean is μ and the standard deviation is σ, the expected total is nμ and the standard error is $\sigma\sqrt{n}$: $Z = \dfrac{\sum X - n\mu}{\sigma\sqrt{n}}$. Reading a Z table: Formatting / layout Z tables are typically composed as follows: The label for rows contains the integer part and the first decimal place of Z. The label for columns contains the second decimal place of Z. Reading a Z table: The values within the table are the probabilities corresponding to the table type. These probabilities are calculations of the area under the normal curve from the starting point (0 for cumulative from mean, negative infinity for cumulative and positive infinity for complementary cumulative) to Z. Example: To find 0.69, one would look down the rows to find 0.6 and then across the columns to 0.09, which would yield a probability of 0.25490 for a cumulative from mean table or 0.75490 from a cumulative table. Reading a Z table: To find a negative value such as -0.83, one could use a cumulative table for negative z-values, which yields a probability of 0.20327. But since the normal distribution curve is symmetrical, probabilities for only positive values of Z are typically given. The user might have to use a complementary operation on the absolute value of Z, as in the example below. Reading a Z table: Types of tables Z tables use at least three different conventions: Cumulative from mean gives a probability that a statistic is between 0 (mean) and Z. Example: Prob(0 ≤ Z ≤ 0.69) = 0.2549. Cumulative gives a probability that a statistic is less than Z. This equates to the area of the distribution below Z. Example: Prob(Z ≤ 0.69) = 0.7549. Complementary cumulative gives a probability that a statistic is greater than Z. This equates to the area of the distribution above Z. Example: Find Prob(Z ≥ 0.69). Since this is the portion of the area above Z, the proportion that is greater than Z is found by subtracting the cumulative probability from 1. That is Prob(Z ≥ 0.69) = 1 − Prob(Z ≤ 0.69) or Prob(Z ≥ 0.69) = 1 − 0.7549 = 0.2451.
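A table entry such as the 0.69 example above can be checked numerically: the standard normal CDF Φ is available via the error function in Python's standard library. The snippet below is only an illustration of the three table conventions, not a replacement for the tables themselves.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function Phi(z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

z = 0.69
print(round(phi(z), 5))         # cumulative table value, about 0.75490
print(round(phi(z) - 0.5, 5))   # cumulative-from-mean value, about 0.25490
print(round(1.0 - phi(z), 5))   # complementary cumulative, about 0.24510
print(round(phi(-0.83), 5))     # negative z example, about 0.20327
```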
Table examples: Cumulative from minus infinity to Z This table gives a probability that a statistic is between minus infinity and Z. $f(z) = \Phi(z)$ The values are calculated using the cumulative distribution function of a standard normal distribution with mean of zero and standard deviation of one, usually denoted with the capital Greek letter Φ (phi), which is the integral $\Phi(z) = \dfrac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-t^{2}/2}\, dt$. Φ(z) is related to the error function erf(z) by $\Phi(z) = \dfrac{1}{2}\left[1 + \operatorname{erf}\!\left(\dfrac{z}{\sqrt{2}}\right)\right]$. Note that for z = 1, 2, 3, one obtains (after multiplying by 2 to account for the [−z, z] interval) the results f(z) = 0.6827, 0.9545, 0.9974, characteristic of the 68–95–99.7 rule. Cumulative (less than Z) This table gives a probability that a statistic is less than Z (i.e. between negative infinity and Z). Complementary cumulative This table gives a probability that a statistic is greater than Z. $f(z) = 1 - \Phi(z)$ This table gives a probability that a statistic is greater than Z, for large integer Z values. Examples of use: A professor's exam scores are approximately distributed normally with mean 80 and standard deviation 5. Only a cumulative from mean table is available. Examples of use: What is the probability that a student scores an 82 or less? What is the probability that a student scores a 90 or more? What is the probability that a student scores a 74 or less? Since this table does not include negatives, the process involves the following additional step: What is the probability that a student scores between 74 and 82? What is the probability that an average of three scores is 82 or less?
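For the exam questions above (scores normal with mean 80 and standard deviation 5), each score can be converted to a z-score and looked up, or Φ can be evaluated directly as in the previous sketch. The values printed below are approximate and are given only as an illustrative check.

```python
from math import erf, sqrt

def phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 80.0, 5.0
z = lambda x: (x - mu) / sigma

print(round(phi(z(82)), 4))               # P(score <= 82), z = 0.4,  about 0.655
print(round(1.0 - phi(z(90)), 4))         # P(score >= 90), z = 2.0,  about 0.023
print(round(phi(z(74)), 4))               # P(score <= 74), z = -1.2, about 0.115
print(round(phi(z(82)) - phi(z(74)), 4))  # P(74 <= score <= 82),     about 0.540
z_avg = (82 - mu) / (sigma / sqrt(3))     # average of three scores, z ~ 0.69
print(round(phi(z_avg), 4))               # P(average of 3 <= 82),    about 0.756
```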
**Double diode triode** Double diode triode: A double diode triode is a type of electronic vacuum tube once widely used in radio receivers. The tube has a triode for amplification, along with two diodes, one typically for use as a detector and the other as a rectifier for automatic gain control, in one envelope. In practice the two diodes usually share a common cathode. Multiple tube sections in one envelope minimized the number of tubes required in a radio or other apparatus. Double diode triode: In European nomenclature, a first letter "E" identifies tubes with heaters to be connected in parallel to a transformer winding of 6.3 V; "A" identifies similar 4 V tubes; "U" identifies tubes with heaters to be connected in series across the mains supply, drawing 100 mA; "H" identifies similar 150 mA, "C" similar 200 mA, and "P" similar 300 mA series-connected tubes. Following the voltage letter, "A" stands for a low-current (signal) diode section, "B" for a double diode with common cathode section, "C" for a triode section, "F" for a pentode section, "H" for a hexode or heptode section, and "L" for a power tetrode or pentode section. The first number identified the base type, for example 3 for Octal base; 9 for B7G sub-miniature 7 pin. The remaining numbers identified a particular tube type; tubes with all characters except the first identical had identical electrodes but a different heater, e.g. the EBC81 and UBC81. Generally, odd numbers identified tubes/valves with variable-mu characteristics and even numbers straight, or sharp cut-off, types. American nomenclature, also used in Europe, used a number to identify the heater voltage, then one or two sequentially assigned letters, then a number specifying the total number of electrodes plus one. The 6.3 V EABC80 has 7 electrodes; the US equivalents are 6AK8 and 6T8, where the "AK" and "T" have no particular meaning; the 6N8 (EBF80) is a dual diode + pentode with 7 electrodes. Double diode triode: There are many double diode triode tubes, including EBC81 (6BD7), EBC90 (6AT6), EBC91 (6AV6) and the older EBC1, EBC2, EBC11, EBC21, EBC33, EBC41 (identical to EBC81 but with a Rimlock (B8A) socket instead of noval), ABC1 (EBC1 with a 4 V heater), CBC1 (EBC1 with a 200 mA heater). The more common tube line-ups of an AM-only radio set with a mains transformer and a double diode triode were one of the following:
ECH11 + EF11 + EBC11 + EL11 (Y8A base), or
ECH42 (or 41) + EF42 (or 41) + EBC41 + EL41 (or 42) (Rimlock base), or
ECH81 + EF80 (or 85 or 89) + EBC81 (or 91) + EL84 (noval socket),
plus rectifier and magic eye indicator (depending on the radio class and manufacturer). AC/DC sets without a mains transformer would use "U" tubes of the same types, e.g. UCH42 + UF41 + UBC41 + UL41 + UY41 rectifier. Double diode triode: There was also a tube with a double diode and a triode sharing a common cathode, and an additional, independent single diode section, named EABC80 or 6AK8 or 6T8 (with a shorter glass envelope) and its versions for AC/DC transformerless receivers with series heater chains, named PABC80 (9AK8, 300 mA for TV sets), HABC80 (19T8, 150 mA for radios) and UABC80 (27AK8, 100 mA for radios). This tube was designed for early AM/FM (MW/VHF) radio sets and was widely used until the end of the tube era; the double diode was used for FM demodulation, the third, independent diode for AM detection and/or automatic gain control (AGC).
Double diode triode: The main configurations for an early tube AM/FM set using the EABC80 in the 1950s and '60s were:
EC92 + EF80 (or 85 or 89) + ECH81 + EF80 (or 85 or 89) + EABC80 + EL84 (or 95), or
ECC85 + EF80 (or 85 or 89) + ECH81 + EABC80 + EL84 (or 95),
plus rectifier (tube or solid state) and indicator, depending on the radio class and manufacturer. For AC/DC radios: UCC85 + UCH81 + UF80 (or 85 or 89) + UABC80 + UL84, plus rectifier and indicator. These configurations were kept until semiconductor (germanium) diodes became available, making this type of tube obsolete.
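The European naming scheme described above is regular enough to decode mechanically. The sketch below is only an illustration based on the letter meanings listed in this article; the dictionaries are a partial, partly inferred mapping (for instance, 4 for Rimlock and 8 for noval are inferred from the EBC41/EBC81 example), not an official registry.

```python
# Illustrative decoder for European tube designations such as "EBC81".
HEATER = {"E": "6.3 V parallel", "A": "4 V parallel", "U": "100 mA series",
          "H": "150 mA series", "C": "200 mA series", "P": "300 mA series"}
SECTION = {"A": "signal diode", "B": "double diode (common cathode)",
           "C": "triode", "F": "pentode", "H": "hexode/heptode",
           "L": "power tetrode/pentode"}
BASE = {"3": "Octal", "4": "Rimlock (B8A)",      # 4/8 inferred from EBC41/EBC81
        "8": "noval", "9": "B7G sub-miniature"}

def decode(designation):
    heater, rest = designation[0], designation[1:]
    sections = [SECTION[ch] for ch in rest if ch.isalpha()]
    digits = "".join(ch for ch in rest if ch.isdigit())
    return {"heater": HEATER[heater],
            "sections": sections,
            "base": BASE.get(digits[:1], "unknown"),
            "type number": digits}

print(decode("EBC81"))   # 6.3 V heater, double diode + triode, noval base
print(decode("UABC80"))  # 100 mA series heater, diode + double diode + triode
```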
**Lichnerowicz conjecture** Lichnerowicz conjecture: In mathematics, the Lichnerowicz conjecture is a generalization of a conjecture introduced by Lichnerowicz (1944). Lichnerowicz's original conjecture was that locally harmonic 4-manifolds are locally symmetric, and was proved by Walker (1949). The Lichnerowicz conjecture usually refers to the generalization that locally harmonic manifolds are flat or rank-1 locally symmetric. It has been proven true for compact manifolds with fundamental groups that are finite groups (Szabó 1990) but counterexamples exist in seven or more dimensions in the non-compact case (Damek & Ricci 1992)
**History of psychology** History of psychology: Psychology is defined as "the scientific study of behavior and mental processes". Philosophical interest in the human mind and behavior dates back to the ancient civilizations of Egypt, Persia, Greece, China, and India.Psychology as a field of experimental study began in 1854 in Leipzig, Germany when Gustav Fechner created the first theory of how judgments about sensory experiences are made and how to experiment on them. Fechner's theory, recognized today as Signal Detection Theory foreshadowed the development of statistical theories of comparative judgment and thousands of experiments based on his ideas (Link, S. W. Psychological Science, 1995). Later, 1879, Wilhelm Wundt founded in Leipzig, Germany, the first Psychological laboratory dedicated exclusively to psychological research. Wundt was also the first person to refer to himself as a psychologist. A notable precursor of Wundt was Ferdinand Ueberwasser (1752-1812) who designated himself Professor of Empirical Psychology and Logic in 1783 and gave lectures on empirical psychology at the Old University of Münster, Germany. Other important early contributors to the field include Hermann Ebbinghaus (a pioneer in the study of memory), William James (the American father of pragmatism), and Ivan Pavlov (who developed the procedures associated with classical conditioning). History of psychology: Soon after the development of experimental psychology, various kinds of applied psychology appeared. G. Stanley Hall brought scientific pedagogy to the United States from Germany in the early 1880s. John Dewey's educational theory of the 1890s was another example. Also in the 1890s, Hugo Münsterberg began writing about the application of psychology to industry, law, and other fields. Lightner Witmer established the first psychological clinic in the 1890s. James McKeen Cattell adapted Francis Galton's anthropometric methods to generate the first program of mental testing in the 1890s. In Vienna, meanwhile, Sigmund Freud independently developed an approach to the study of the mind called psychoanalysis, which would go on to become highly influential.The 20th century saw a reaction to Edward Titchener's critique of Wundt's empiricism. This contributed to the formulation of behaviorism by John B. Watson, which was popularized by B. F. Skinner. Behaviorism proposed emphasizing the study of overt behavior, because that could be quantified and easily measured. Early behaviorists considered the study of the "mind" too vague for productive scientific study. However, Skinner and his colleagues did study thinking as a form of covert behavior to which they could apply the same principles as overt (publicly observable) behavior. History of psychology: The final decades of the 20th century saw the rise of cognitive science, an interdisciplinary approach to studying the human mind. Cognitive science again considers the "mind" as a subject for investigation, using the tools of cognitive psychology, linguistics, computer science, philosophy, behaviorism, and neurobiology. This form of investigation has proposed that a wide understanding of the human mind is possible, and that such an understanding may be applied to other research domains, such as artificial intelligence. History of psychology: There are conceptual divisions of psychology in so-called "forces" or "waves," based on its schools and historical trends. 
This terminology is popularized among the psychologists to differentiate a growing humanism in therapeutic practice from the 1930s onwards, called the "third force," in response to the deterministic tendencies of Watson's behaviourism and Freud's psychoanalysis. Humanistic psychology has as important proponents Carl Rogers, Abraham Maslow, Gordon Allport, Erich Fromm, and Rollo May. Their humanistic concepts are also related to existential psychology, Viktor Frankl's logotherapy, positive psychology (which has Martin Seligman as one of the leading exponents), C. R. Cloninger's approach to well-being and character development, as well as to transpersonal psychology, incorporating such concepts as spirituality, self-transcendence, self-realization, self-actualization, and mindfulness. In cognitive behavioral psychotherapy, similar terms have also been incorporated, by which "first wave" is considered the initial behavioral therapy; a "second wave", Albert Ellis's cognitive one; and a "third wave", with the acceptance and commitment therapy, which emphasizes one's pursuit of values, methods of self-awareness, acceptance and psychological flexibility, instead of challenging negative thought schemes. A "fourth wave" would be the one that incorporates transpersonal concepts and positive flourishing, in a way criticized by some researchers for its heterogeneity and theoretical direction dependent on the therapist's view. A "fifth wave" has now been proposed by a group of researchers seeking to integrate earlier concepts into a unifying theory. Early psychological thought: Many cultures throughout history have speculated on the nature of the mind, heart, soul, spirit, brain, etc. For instance, in Ancient Egypt, the Edwin Smith Papyrus contains an early description of the brain, and some speculations on its functions (described in a medical/surgical context) and the descriptions could be related to Imhotep who was the first Egyptian physician who anatomized and discovered the body of the human being. Though other medical documents of ancient times were full of incantations and applications meant to turn away disease-causing demons and other superstition, the Edwin Smith Papyrus gives remedies to almost 50 conditions and only two contain incantations to ward off evil. Early psychological thought: Ancient Greek philosophers, from Thales (fl. 550 BC) through even to the Roman period, developed an elaborate theory of what they termed the psuchẽ (psyche) (from which the first half of "psychology" is derived), as well as other "psychological" terms – nous, thumos, logistikon, etc. Classical Greece (fifth century BCE), philosophers taught "naturalism", the belief that laws of nature shape our world, as opposed to gods and demons determining human fate. Alcmaeon, for example, believed the brain, not the heart, was the "organ of thought."He tracked the ascending sensory nerves from the body to the brain, theorizing that mental activity originated in the CNS and that the cause of mental illness resided within the brain. He applied this understanding to classify mental diseases and treatments.The most influential of these psychologists are the accounts of Plato (especially in the Republic), Pythagoras and of Aristotle (esp. Peri Psyches, better known under its Latin title, De Anima).Plato's tripartite theory of the soul, Chariot Allegory and concepts such as eros defined the subsequent Western Philosophy views of the psyche and anticipated modern psychological proposals. 
For example, concepts such as id, ego and super-ego and libido were interpreted by psychoanalysts as having been anticipated by Plato, to the extent that "in 1920, Freud decided to present Plato as the precursor of his own theory, as part of a strategy directed to define the scientific and cultural collocation of psychoanalysis".Other Hellenistic philosophers, namely the Stoics and Epicurians, diverged from the Classical Greek tradition in several important ways, especially in their concern with questions of the physiological basis of the mind. The Roman physician Galen addressed these issues most elaborately and influentially of all. The Greek tradition influenced some Christian and Islamic thought on the topic. Early psychological thought: In the Judeo-Christian tradition, the Manual of Discipline (from the Dead Sea Scrolls, c. 21 BC–61 AD) notes the division of human nature into two temperaments or opposing spirits of either veracity or perversity Walter M Freeman proposes that Thomism is the philosophical system explaining cognition that is most compatible with neurodynamics, in a 2008 article in the journal Mind and Matter entitled "Nonlinear Brain Dynamics and Intention According to Aquinas".In Asia, China had a long history of administering tests of ability as part of its education system. Chinese texts from 2500 years ago mention neuropsychiatric illness, including descriptions of mania and psychosis with or without epilepsy. "Imbalance" was the mechanism of psychosis. Other conditions described include confusion, visual illusions, intoxication, stress, and even malingering. Psychological theories about stages of human development can be traced to the time of Confucius, about 2500 years ago.In the 6th century AD, Lin Xie carried out an early experiment, in which he asked people to draw a square with one hand and at the same time draw a circle with the other (ostensibly to test people's vulnerability to distraction). It has been cited that this was the first psychology experiment.India had a theory of "the self" in its Vedanta philosophical writings. Additionally, Indians thought about the individual's self as being enclosed by different levels known as koshas. Additionally, the Sankya philosophy said that the mind has 5 components, including manas (lower mind), ahankara (sense of I-ness), chitta (memory bank of mind), buddhi (intellect), and atman (self/soul). Patanjali was one of the founders of the yoga tradition, sometime between 200 and 400 BC (pre-dating Buddhist psychology) and a student of the Vedas. He developed the science of breath and mind and wrote his knowledge in the form of between 194 and 196 aphorisms called the Yoga Sutras of Patanjali. He developed modern Yoga for psychological resilience and balance . He is reputed to have used yoga therapeutically for anxiety, depression and mental disorders as common then as now. Buddhist philosophies have developed several psychological theories (see Buddhism and psychology), formulating interpretations of the mind and concepts such as aggregates (skandhas), emptiness (sunyata), non-self (anatta), mindfulness and Buddha-nature, which are addressed today by theorists of humanistic and transpersonal psychology. Several Buddhist lineages have developed notions analogous to those of modern Western psychology, such as the unconscious, personal development and character improvement, the latter being part of the Noble Eightfold Path and expressed, for example, in the Tathagatagarbha Sutra. 
Hinayana traditions, such as the Theravada, focus more on individual meditation, while Mahayana traditions also emphasize the attainment of a Buddha nature of wisdom (prajña) and compassion (karuṇā) in the realization of the bodhisattva ideal, but affirm it more metaphysically, in which charity and helping sentient beings is cosmically fundamental. Buddhist monk and scholar D. T. Suzuki describes the importance of the individual's inner enlightenment and the self-realization of the mind. Researcher David Germano, in his thesis on Longchenpa, also shows the importance of self-actualization in the dzogchen teaching lineage. Medieval Muslim physicians also developed practices to treat patients with a variety of "diseases of the mind". Ahmed ibn Sahl al-Balkhi (850–934) was among the first, in this tradition, to discuss disorders related to both the body and the mind. Al-Balkhi recognized that the body and the soul can be healthy or sick, or "balanced or imbalanced". He wrote that imbalance of the body can result in fever, headaches and other bodily illnesses, while imbalance of the soul can result in anger, anxiety, sadness and other nafs-related symptoms. Avicenna, similarly, did early work in the treatment of nafs-related illnesses, and developed a system for associating changes in the mind with inner feelings. Avicenna also described phenomena we now recognize as neuropsychiatric conditions, including hallucination, mania, nightmare, melancholia, dementia, epilepsy and tremor. Ancient and medieval thinkers who discussed issues related to psychology included: Socrates of Athens (c. 470 – 399 BCE), who emphasized virtue ethics and, in epistemology, understood dialectic to be central to the pursuit of truth. Early psychological thought: As early as the 4th century BC, the Greek physician Hippocrates theorized that mental disorders had physical rather than supernatural causes. Plato's tripartite theory of the soul, Chariot Allegory and concepts such as eros defined the subsequent Western philosophical views of the psyche and anticipated modern psychological proposals. Alcmaeon theorized that the brain is the seat of the mind. In 387 BCE, Plato suggested that the brain is where mental processes take place. Boethius's work takes the form of an imaginary psychological dialogue between himself and Philosophy, personified as a woman, arguing that, despite the apparent inequality of the world, a higher order prevails. In the 6th century AD, Lin Xie carried out an early psychological analysis experiment. It has been cited that this was the first psychology experiment. Ali ibn Sahl Rabban al-Tabari developed al-‘ilaj al-nafs (sometimes translated as "psychotherapy"). Padmasambhava was the 8th-century medicine Buddha of Tibet, called from the then Buddhist India to tame the Tibetans, and was instrumental in developing Tibetan psychiatric medicine. Patanjali founded Yoga and the method of psychological balance and resilience through breathing exercises and inner peace. Abu al-Qasim al-Zahrawi (Abulcasis) described head surgery; Ibn Tufail anticipated the tabula rasa argument and the nature versus nurture debate. William of Ockham wrote extensively on logic and is remembered for Occam's razor. Thomas Aquinas's works addressed notions regarding the emotions, and Albertus Magnus described metaphysical and moral questions in psychology and philosophical theories. Maimonides described rabies and belladonna intoxication. Witelo is considered a precursor of perception psychology.
His Perspectiva contains much material in psychology, outlining views that are close to modern notions on the association of idea and on the subconscious. Further development: Many of the Ancients' writings would have been lost without the efforts of Muslim, Christian, and Jewish translators in the House of Wisdom, the House of Knowledge, and other such institutions in the Islamic Golden Age, whose glosses and commentaries were later translated into Latin in the 12th century. However, it is not clear how these sources first came to be used during the Renaissance, and their influence on what would later emerge as the discipline of psychology is a topic of scholarly debate. Further development: Etymology and the early usage of the word The first print use of the term "psychology", that is, Greek-inspired neo-Latin psychologia, is dated to multiple works dated 1525. Etymology has long been attributed to the German scholastic philosopher Rudolf Göckel (1547–1628, often known under the Latin form Rodolphus Goclenius), who published the Psychologia hoc est: de hominis perfectione, animo et imprimis ortu hujus... in Marburg in 1590. Croatian humanist Marko Marulić (1450–1524) likely used the term in the title of a Latin treatise entitled Psichiologia de ratione animae humanae (c.1520?). Although the treatise itself has not been preserved, its title appears in a list of Marulic's works compiled by his younger contemporary, Franjo Bozicevic-Natalis in his "Vita Marci Maruli Spalatensis" (Krstić, 1964). Further development: The term did not come into popular usage until the German Rationalist philosopher, Christian Wolff (1679–1754) used it in his works Psychologia empirica (1732) and Psychologia rationalis (1734). This distinction between empirical and rational psychology was picked up in Denis Diderot's (1713–1780) and Jean le Rond d'Alembert's (1717–1783) Encyclopédie (1751–1784) and was popularized in France by Maine de Biran (1766–1824). In England, the term "psychology" overtook "mental philosophy" in the middle of the 19th century, especially in the work of William Hamilton (1788–1856). Further development: Enlightenment psychological thought Early psychology was regarded as the study of the soul (in the Christian sense of the term). The modern philosophical form of psychology was heavily influenced by the works of René Descartes (1596–1650), and the debates that he generated, of which the most relevant were the objections to his Meditations on First Philosophy (1641), published with the text. Also important to the later development of psychology were his Passions of the Soul (1649) and Treatise on Man (completed in 1632 but, along with the rest of The World, withheld from publication after Descartes heard of the Catholic Church's condemnation of Galileo; it was eventually published posthumously, in 1664). Further development: Although not educated as a physician, Descartes did extensive anatomical studies of bulls' hearts and was considered important enough that William Harvey responded to him. Descartes was one of the first to endorse Harvey's model of the circulation of the blood, but disagreed with his metaphysical framework to explain it. Descartes dissected animals and human cadavers and as a result was familiar with the research on the flow of blood leading to the conclusion that the body is a complex device that is capable of moving without the soul, thus contradicting the "Doctrine of the Soul". 
The emergence of psychology as a medical discipline was given a major boost by Thomas Willis, not only in his reference to psychology (the "Doctrine of the Soul") in terms of brain function, but through his detailed 1672 anatomical work, and his treatise De anima brutorum quae hominis vitalis ac sentitiva est: exercitationes duae ("Two Discourses on the Souls of Brutes"—meaning "beasts"). However, Willis acknowledged the influence of Descartes's rival, Pierre Gassendi, as an inspiration for his work. Further development: The philosophers of the British Empiricist and Associationist schools had a profound impact on the later course of experimental psychology. John Locke's An Essay Concerning Human Understanding (1689), George Berkeley's Treatise Concerning the Principles of Human Knowledge (1710), and David Hume's A Treatise of Human Nature (1739–1740) were particularly influential, as were David Hartley's Observations on Man (1749) and John Stuart Mill's A System of Logic. (1843). Also notable was the work of some Continental Rationalist philosophers, especially Baruch Spinoza's (1632–1677) On the Improvement of the Understanding (1662) and Gottfried Wilhelm Leibniz's (1646–1716) New Essays on Human Understanding (completed 1705, published 1765). Another important contribution was Friedrich August Rauch's (1806–1841) book Psychology: Or, A View of the Human Soul; Including Anthropology (1840), the first English exposition of Hegelian philosophy for an American audience.German idealism pioneered the proposition of the unconscious, which Jung considered to have been described psychologically for the first time by physician and philosopher Carl Gustav Carus. Also notable was its use by Friedrich Wilhelm Joseph von Schelling (1775-1835), and by Eduard von Hartmann in Philosophy of the Unconscious (1869); psychologist Hans Eysenck writes in Decline and Fall of the Freudian Empire (1985) that Hartmann's version of the unconscious is very similar to Freud's.The Danish philosopher Søren Kierkegaard also influenced the humanistic, existential, and modern psychological schools with his works The Concept of Anxiety (1844) and The Sickness Unto Death (1849). Further development: Transition to contemporary psychology Also influential on the emerging discipline of psychology were debates surrounding the efficacy of Mesmerism (a precursor to hypnosis) and the value of phrenology. The former was developed in the 1770s by Austrian physician Franz Mesmer (1734–1815) who claimed to use the power of gravity, and later of "animal magnetism", to cure various physical and mental ills. As Mesmer and his treatment became increasingly fashionable in both Vienna and Paris, it also began to come under the scrutiny of suspicious officials. In 1784, an investigation was commissioned in Paris by King Louis XVI which included American ambassador Benjamin Franklin, chemist Antoine Lavoisier and physician Joseph-Ignace Guillotin (later the popularizer of the guillotine). They concluded that Mesmer's method was useless. Abbé Faria, an Indo-Portuguese priest, revived public attention in animal magnetism. Unlike Mesmer, Faria claimed that the effect was 'generated from within the mind' by the power of expectancy and cooperation of the patient. 
Further development: Although disputed, the "magnetic" tradition continued among Mesmer's students and others, resurfacing in England in the 19th century in the work of the physician John Elliotson (1791–1868), and the surgeons James Esdaile (1808–1859), and James Braid (1795–1860) (who reconceptualized it as property of the subject's mind rather than a "power" of the Mesmerist's, and relabeled it "hypnotism"). Mesmerism also continued to have a strong social (if not medical) following in England through the 19th century (see Winter, 1998). Faria's approach was significantly extended by the clinical and theoretical work of Ambroise-Auguste Liébeault and Hippolyte Bernheim of the Nancy School. Faria's theoretical position, and the subsequent experiences of those in the Nancy School made significant contributions to the later autosuggestion techniques of Émile Coué. It was adopted for the treatment of hysteria by the director of Paris's Salpêtrière Hospital, Jean-Martin Charcot (1825–1893). Further development: Phrenology began as "organology", a theory of brain structure developed by the German physician, Franz Joseph Gall (1758–1828). Gall argued that the brain is divided into a large number of functional "organs", each responsible for particular human mental abilities and dispositions – hope, love, spirituality, greed, language, the abilities to detect the size, form, and color of objects, etc. He argued that the larger each of these organs are, the greater the power of the corresponding mental trait. Further, he argued that one could detect the sizes of the organs in a given individual by feeling the surface of that person's skull. Gall's ultra-localizationist position with respect to the brain was soon attacked, most notably by French anatomist Pierre Flourens (1794–1867), who conducted ablation studies (on chickens) which purported to demonstrate little or no cerebral localization of function. Although Gall had been a serious (if misguided) researcher, his theory was taken by his assistant, Johann Gaspar Spurzheim (1776–1832), and developed into the profitable, popular enterprise of phrenology, which soon spawned, especially in Britain, a thriving industry of independent practitioners. In the hands of Scottish religious leader George Combe (1788–1858) (whose book The Constitution of Man was one of the best-sellers of the century), phrenology became strongly associated with political reform movements and egalitarian principles (see, e.g., Shapin, 1975; but also see van Wyhe, 2004). Spurzheim soon spread phrenology to America as well, where itinerant practical phrenologists assessed the mental well-being of willing customers (see Sokal, 2001; Thompson 2021). Further development: The development of modern psychology was closely linked to psychiatry in the eighteenth and nineteenth centuries (see History of psychiatry), when the treatment of the mentally ill in hospices was revolutionized after Europeans first considered their pathological conditions. In fact, there was no distinction between the two areas in psychotherapeutic practice, in an era when there was still no drug treatment (of the so-called psychopharmacologicy revolution from 1950) for mental disorders, and its early theorists and pioneering clinical psychologists generally had medical background. 
The first to implement in the West a humanitarian and scientific treatment of mental health, based on Enlightenment ideas, were the French alienists, who developed the empirical observation of psychopathology, describing the clinical conditions and their physiological relationships and classifying them. It was called the rationalist-empirical school, whose best-known exponents were Pinel, Esquirol, Falret, Morel and Magnan. In the late nineteenth century, the French current was gradually overtaken by the German field of study. At first, the German school was influenced by romantic ideals and gave rise to a line of speculators about mental processes, based more on empathy than reason. They became known as Psychiker, mentalists or psychologists, with different currents being highlighted by Reil (creator of the word "psychiatry"), Heinroth (first to use the term "psychosomatic"), Ideler and Carus. In the middle of the century, a "somatic reaction" (Somatiker) formed against the speculative doctrines of mentalism, and it was based on neuroanatomy and neuropathology. In it, those who made important contributions to psychopathological classification were Griesinger, Westphal, Krafft-Ebing and Kahlbaum, who, in their turn, would influence Wernicke and Meynert. Kraepelin revolutionized the field as the first to define the diagnostic aspects of mental disorders in syndromes, and the work of psychological classification was carried into the contemporary field by contributions from Schneider, Kretschmer, Leonhard, and Jaspers. In Great Britain, notable nineteenth-century figures include Alexander Bain, founder of the first journal of psychology, Mind, and writer of reference books on the subject at the time, such as Mental Science: The Compendium of Psychology, and the History of Philosophy (1868), and Henry Maudsley. In Switzerland, Bleuler coined the terms "depth psychology", "schizophrenia", "schizoid" and "autism". In the United States, the Swiss psychiatrist Adolf Meyer maintained that the patient should be regarded as an integrated "psychobiological" whole, emphasizing psychosocial factors, concepts that paved the way for so-called psychosomatic medicine. Emergence of German experimental psychology: Until the middle of the 19th century, psychology was widely regarded as a branch of philosophy. Whether it could become an independent scientific discipline had been questioned even earlier: Immanuel Kant (1724–1804) declared in his Metaphysical Foundations of Natural Science (1786) that psychology might perhaps never become a "proper" natural science because its phenomena cannot be quantified, among other reasons. Kant proposed an alternative conception of an empirical investigation of human thought, feeling, desire, and action, and lectured on these topics for over twenty years (1772/73-1795/96). His Anthropology from a Pragmatic Point of View (1798), which resulted from these lectures, looks like an empirical psychology in many respects. Johann Friedrich Herbart (1776–1841) took issue with what he viewed as Kant's conclusion and attempted to develop a mathematical basis for a scientific psychology. Although he was unable to empirically realize the terms of his psychological theory, his efforts did lead scientists such as Ernst Heinrich Weber (1795–1878) and Gustav Theodor Fechner (1801–1887) to attempt to measure the mathematical relationships between the physical magnitudes of external stimuli and the psychological intensities of the resulting sensations.
Fechner (1860) is the originator of the term psychophysics. Emergence of German experimental psychology: Meanwhile, individual differences in reaction time had become a critical issue in the field of astronomy, under the name of the "personal equation". Early researches by Friedrich Wilhelm Bessel (1784–1846) in Königsberg and Adolf Hirsch led to the development of a highly precise chronoscope by Matthäus Hipp that, in turn, was based on a design by Charles Wheatstone for a device that measured the speed of artillery shells (Edgell & Symes, 1906). Other timing instruments were borrowed from physiology (e.g., Carl Ludwig's kymograph) and adapted for use by the Utrecht ophthalmologist Franciscus Donders (1818–1899) and his student Johan Jacob de Jaager in measuring the duration of simple mental decisions. Emergence of German experimental psychology: The 19th century was also the period in which physiology, including neurophysiology, professionalized and saw some of its most significant discoveries. Among its leaders were Charles Bell (1774–1843) and François Magendie (1783–1855) who independently discovered the distinction between sensory and motor nerves in the spinal column, Johannes Müller (1801–1855) who proposed the doctrine of specific nerve energies, Emil du Bois-Reymond (1818–1896) who studied the electrical basis of muscle contraction, Pierre Paul Broca (1824–1880) and Carl Wernicke (1848–1905) who identified areas of the brain responsible for different aspects of language, as well as Gustav Fritsch (1837–1927), Eduard Hitzig (1839–1907), and David Ferrier (1843–1924) who localized sensory and motor areas of the brain. One of the principal founders of experimental physiology, Hermann Helmholtz (1821–1894), conducted studies of a wide range of topics that would later be of interest to psychologists – the speed of neural transmission, the natures of sound and color, and of our perceptions of them, etc. In the 1860s, while he held a position in Heidelberg, Helmholtz engaged as an assistant a young physician named Wilhelm Wundt. Wundt employed the equipment of the physiology laboratory – chronoscope, kymograph, and various peripheral devices – to address more complicated psychological questions than had, until then, been investigated experimentally. In particular he was interested in the nature of apperception – the point at which a perception occupies the central focus of conscious awareness. Emergence of German experimental psychology: In 1864 Wundt took up a professorship in Zürich, where he published his landmark textbook, Grundzüge der physiologischen Psychologie (Principles of Physiological Psychology, 1874). Moving to a more prestigious professorship in Leipzig in 1875, Wundt founded a laboratory specifically dedicated to original research in experimental psychology in 1879, the first laboratory of its kind in the world. In 1883, he launched a journal in which to publish the results of his, and his students', research, Philosophische Studien (Philosophical Studies) (For more on Wundt, see, e.g., Bringmann & Tweney, 1980; Rieber & Robinson, 2001). Wundt attracted a large number of students not only from Germany, but also from abroad. Among his most influential American students were G. Stanley Hall (who had already obtained a PhD from Harvard under the supervision of William James), James McKeen Cattell (who was Wundt's first assistant), and Frank Angell (who founded laboratories at both Cornell and Stanford). 
The most influential British student was Edward Bradford Titchener (who later became professor at Cornell). Emergence of German experimental psychology: Experimental psychology laboratories were soon also established at Berlin by Carl Stumpf (1848–1936) and at Göttingen by Georg Elias Müller (1850–1934). Another major German experimental psychologist of the era, though he did not direct his own research institute, was Hermann Ebbinghaus (1850–1909). Psychoanalysis: Experimentation was not the only approach to psychology in the German-speaking world at this time. Starting in the 1890s, employing the case study technique, the Viennese physician Sigmund Freud developed and applied the methods of hypnosis, free association, and dream interpretation to reveal putatively unconscious beliefs and desires that he argued were the underlying causes of his patients' "hysteria". He dubbed this approach psychoanalysis. Freudian psychoanalysis is particularly notable for the emphasis it places on the course of an individual's sexual development in pathogenesis. Psychoanalytic concepts have had a strong and lasting influence on Western culture, particularly on the arts. Although its scientific contribution is still a matter of debate, both Freudian and Jungian psychology revealed the existence of compartmentalized thinking, in which some behavior and thoughts are hidden from consciousness – yet operative as part of the complete personality. Hidden agendas, a bad conscience, or a sense of guilt, are examples of the existence of mental processes in which the individual is not conscious, through choice or lack of understanding, of some aspects of their personality and subsequent behavior. Psychoanalysis: Psychoanalysis examines mental processes which affect the ego. An understanding of these theoretically allows the individual greater choice and consciousness with a healing effect in neurosis and occasionally in psychosis, both of which Richard von Krafft-Ebing defined as "diseases of the personality". Psychoanalysis: Freud founded the International Psychoanalytic Association in 1910, inspired also by Ferenczi. Main theoretical successors were Anna Freud (his daughter) and Melane Klein, particularly in child psychoanalysis, both inaugurating competing concepts; in addition to those who became dissidents and developed interpretations different from Freud's psychoanalytic one, thus called by some neo-freudians, or more correctly post-freudians: the most known are Alfred Adler (individual psychology), Carl Gustav Jung (analytical psychology), Otto Rank, Karen Horney, Erik Erikson and Erich Fromm. Psychoanalysis: Jung was an associate of Freud's who later broke with him over Freud's emphasis on sexuality. Working with concepts of the unconscious first noted during the 1800s (by John Stuart Mill, Krafft-Ebing, Pierre Janet, Théodore Flournoy and others), Jung defined four mental functions which relate to and define the ego, the conscious self: Sensation, which tell consciousness that something is there. Psychoanalysis: Feelings, which consist of value judgments, and motivate our reaction to what we have sensed. Intellect, an analytic function that compares the sensed event to all known others and gives it a class and category, allowing us to understand a situation within a historical process, personal or public. 
And intuition, a mental function with access to deep behavioral patterns, being able to suggest unexpected solutions or predict unforeseen consequences, "as if seeing around corners" as Jung put it.Jung insisted on an empirical psychology on which theories must be based on facts and not on the psychologist's projections or expectations. Early American: Around 1875 the Harvard physiology instructor (as he then was), William James, opened a small experimental psychology demonstration laboratory for use with his courses. The laboratory was never used, at that time, for original research, and so controversy remains as to whether it is to be regarded as the "first" experimental psychology laboratory or not. In 1878, James gave a series of lectures at Johns Hopkins University entitled "The Senses and the Brain and their Relation to Thought" in which he argued, contra Thomas Henry Huxley, that consciousness is not epiphenomenal, but must have an evolutionary function, or it would not have been naturally selected in humans. The same year James was contracted by Henry Holt to write a textbook on the "new" experimental psychology. If he had written it quickly, it would have been the first English-language textbook on the topic. It was twelve years, however, before his two-volume The Principles of Psychology would be published. In the meantime textbooks were published by George Trumbull Ladd of Yale (1887) and James Mark Baldwin then of Lake Forest College (1889). Early American: William James was one of the founders of the American Society for Psychical Research in 1885, which studied psychic phenomena (parapsychology), before the creation of the American Psychological Association in 1892. James was also president of the British society that inspired the United States' one, the Society for Psychical Research, founded in 1882, which investigated psychology and the paranormal on topics such as mediumship, dissociation, telepathy and hypnosis, and it innovated research in psychology, by which, according to science historian Andreas Sommer, were "devised methodological innovations such as randomized study designs" and conducted "the first experiments investigating the psychology of eyewitness testimony (Hodgson and Davey, 1887), [and] empirical and conceptual studies illuminating mechanisms of dissociation and hypnotism"; Its members also initiated and organised the International Congresses of Physiological/Experimental psychology.In 1879 Charles Sanders Peirce was hired as a philosophy instructor at Johns Hopkins University. Although better known for his astronomical and philosophical work, Peirce also conducted what are perhaps the first American psychology experiments, on the subject of color vision, published in 1877 in the American Journal of Science (see Cadwallader, 1974). Peirce and his student Joseph Jastrow published "On Small Differences in Sensation" in the Memoirs of the National Academy of Sciences, in 1884. In 1882, Peirce was joined at Johns Hopkins by G. Stanley Hall, who opened the first American research laboratory devoted to experimental psychology in 1883. Peirce was forced out of his position by scandal and Hall was awarded the only professorship in philosophy at Johns Hopkins. In 1887 Hall founded the American Journal of Psychology, which published work primarily emanating from his own laboratory. In 1888 Hall left his Johns Hopkins professorship for the presidency of the newly founded Clark University, where he remained for the rest of his career. 
Early American: Soon, experimental psychology laboratories were opened at the University of Pennsylvania (in 1887, by James McKeen Cattell), Indiana University (1888, William Lowe Bryan), the University of Wisconsin (1888, Joseph Jastrow), Clark University (1889, Edmund Sanford), the McLean Asylum (1889, William Noyes), and the University of Nebraska (1889, Harry Kirke Wolfe). Early American: However, it was Princeton University's Eno Hall, built in 1924, that became the first university building in the United States to be devoted entirely to experimental psychology when it became the home of the university's Department of Psychology.In 1890, William James' The Principles of Psychology finally appeared, and rapidly became the most influential textbook in the history of American psychology. It laid many of the foundations for the sorts of questions that American psychologists would focus on for years to come. The book's chapters on consciousness, emotion, and habit were particularly agenda-setting. Early American: One of those who felt the impact of James' Principles was John Dewey, then professor of philosophy at the University of Michigan. With his junior colleagues, James Hayden Tufts (who founded the psychology laboratory at Michigan) and George Herbert Mead, and his student James Rowland Angell, this group began to reformulate psychology, focusing more strongly on the social environment and on the activity of mind and behavior than the psychophysics-inspired physiological psychology of Wundt and his followers had heretofore. Tufts left Michigan for another junior position at the newly founded University of Chicago in 1892. A year later, the senior philosopher at Chicago, Charles Strong, resigned, and Tufts recommended to Chicago president William Rainey Harper that Dewey be offered the position. After initial reluctance, Dewey was hired in 1894. Dewey soon filled out the department with his Michigan companions Mead and Angell. These four formed the core of the Chicago School of psychology. Early American: In 1892, G. Stanley Hall invited 30-some psychologists and philosophers to a meeting at Clark with the purpose of founding a new American Psychological Association (APA). (On the history of the APA, see Evans, Staudt Sexton, & Cadwallader, 1992.) The first annual meeting of the APA was held later that year, hosted by George Stuart Fullerton at the University of Pennsylvania. Almost immediately tension arose between the experimentally and philosophically inclined members of the APA. Edward Bradford Titchener and Lightner Witmer launched an attempt to either establish a separate "Section" for philosophical presentations, or to eject the philosophers altogether. After nearly a decade of debate, a Western Philosophical Association was founded and held its first meeting in 1901 at the University of Nebraska. The following year (1902), an American Philosophical Association held its first meeting at Columbia University. These ultimately became the Central and Eastern Divisions of the modern American Philosophical Association. Early American: In 1894, a number of psychologists, unhappy with the parochial editorial policies of the American Journal of Psychology approached Hall about appointing an editorial board and opening the journal out to more psychologists not within Hall's immediate circle. 
Hall refused, so James McKeen Cattell (then of Columbia) and James Mark Baldwin (then of Princeton) co-founded a new journal, Psychological Review, which rapidly grew to become a major outlet for American psychological researchers.Beginning in 1895, James Mark Baldwin (Princeton, Hopkins) and Edward Bradford Titchener (Cornell) entered into an increasingly acrimonious dispute over the correct interpretation of some anomalous reaction time findings that had come from the Wundt laboratory (originally reported by Ludwig Lange and James McKeen Cattell). In 1896, James Rowland Angell and Addison W. Moore (Chicago) published a series of experiments in Psychological Review appearing to show that Baldwin was the more correct of the two. However, they interpreted their findings in light of John Dewey's new approach to psychology, which rejected the traditional stimulus-response understanding of the reflex arc in favor of a "circular" account in which what serves as "stimulus" and what as "response" depends on how one views the situation. The full position was laid out in Dewey's landmark article "The Reflex Arc Concept in Psychology" which also appeared in Psychological Review in 1896. Early American: Titchener responded in Philosophical Review (1898, 1899) by distinguishing his austere "structural" approach to psychology from what he termed the Chicago group's more applied "functional" approach, and thus began the first major theoretical rift in American psychology between Structuralism and Functionalism. The group at Columbia, led by James McKeen Cattell, Edward L. Thorndike, and Robert S. Woodworth, was often regarded as a second (after Chicago) "school" of American Functionalism (see, e.g., Heidbredder, 1933), although they never used that term themselves, because their research focused on the applied areas of mental testing, learning, and education. Dewey was elected president of the APA in 1899, while Titchener dropped his membership in the association. (In 1904, Titchener formed his own group, eventually known as the Society of Experimental Psychologists.) Jastrow promoted the functionalist approach in his APA presidential address of 1900, and Angell adopted Titchener's label explicitly in his influential textbook of 1904 and his APA presidential address of 1906. In reality, Structuralism was, more or less, confined to Titchener and his students. (It was Titchener's former student E. G. Boring, writing A History of Experimental Psychology [1929/1950, the most influential textbook of the 20th century about the discipline], who launched the common idea that the structuralism/functionalism debate was the primary fault line in American psychology at the turn of the 20th century.) Functionalism, broadly speaking, with its more practical emphasis on action and application, better suited the American cultural "style" and, perhaps more important, was more appealing to pragmatic university trustees and private funding agencies. Early French: Jules Baillarger founded the Société Médico-Psychologique in 1847, one of the first associations of its kind and which published the Annales Medico-Psychologiques. 
France already had a pioneering tradition in psychological study, notably the publication in 1831 of Adolphe Garnier's Précis d'un cours de psychologie ("Summary of a Psychology Course"); Garnier also published the Traité des facultés de l'âme, comprenant l'histoire des principales théories psychologiques ("Treatise of the Faculties of the Soul, comprising the history of major psychological theories") in 1852. Garnier's treatise was called "the best monument of psychological science of our time" by the Revue des Deux Mondes in 1864. In no small measure because of the conservatism of the reign of Louis Napoléon (president, 1848–1852; emperor as "Napoléon III", 1852–1870), academic philosophy in France through the middle part of the 19th century was controlled by members of the eclectic and spiritualist schools, led by figures such as Victor Cousin (1792–1867), Théodore Jouffroy (1796–1842), and Paul Janet (1823–1899). These were traditional metaphysical schools, opposed to regarding psychology as a natural science. With the ouster of Napoléon III after the débâcle of the Franco-Prussian War, new paths, both political and intellectual, became possible. From 1870 forward, a steadily increasing interest in positivist, materialist, evolutionary, and deterministic approaches to psychology developed, influenced by, among others, the work of Hippolyte Taine (1828–1893) (e.g., De L'Intelligence, 1870) and Théodule Ribot (1839–1916) (e.g., La Psychologie Anglaise Contemporaine, 1870). Early French: In 1876, Ribot founded Revue Philosophique (the same year as Mind was founded in Britain), which for the next generation would be virtually the only French outlet for the "new" psychology (Plas, 1997). Although not a working experimentalist himself, Ribot's many books were to have profound influence on the next generation of psychologists. These included especially his L'Hérédité Psychologique (1873) and La Psychologie Allemande Contemporaine (1879). In the 1880s, Ribot's interests turned to psychopathology, writing books on disorders of memory (1881), will (1883), and personality (1885), in which he attempted to bring to these topics the insights of general psychology. Although in 1881 he lost a Sorbonne professorship in the History of Psychological Doctrines to traditionalist Jules Soury (1842–1915), from 1885 to 1889 he taught experimental psychology at the Sorbonne. In 1889 he was awarded a chair at the Collège de France in Experimental and Comparative Psychology, which he held until 1896 (Nicolas, 2002). Early French: France's primary psychological strength lay in the field of psychopathology. The chief neurologist at the Salpêtrière Hospital in Paris, Jean-Martin Charcot (1825–1893), had been using the recently revived and renamed (see above) practice of hypnosis to "experimentally" produce hysterical symptoms in some of his patients. Two of his students, Alfred Binet (1857–1911) and Pierre Janet (1859–1947), adopted and expanded this practice in their own work. Early French: In 1889, Binet and his colleague Henri Beaunis (1830–1921) co-founded, at the Sorbonne, the first experimental psychology laboratory in France. Just five years later, in 1894, Beaunis, Binet, and a third colleague, Victor Henri (1872–1940), co-founded the first French journal dedicated to experimental psychology, L'Année Psychologique.
In the first years of the 20th century, Binet was requested by the French government to develop a method for the newly founded universal public education system to identify students who would require extra assistance to master the standardized curriculum. In response, with his collaborator Théodore Simon (1873–1961), he developed the Binet-Simon Intelligence Test, first published in 1905 (revised in 1908 and 1911). Early French: Although the test was used to some effect in France, it would find its greatest success (and controversy) in the United States, where it was translated into English by Henry H. Goddard (1866–1957), the director of the Training School for the Feebleminded in Vineland, New Jersey, and his assistant, Elizabeth Kite (a translation of the 1905 edition appeared in the Vineland Bulletin in 1908, but much better known was Kite's 1916 translation of the 1908 edition, which appeared in book form). The translated test was used by Goddard to advance his eugenics agenda with respect to those he deemed congenitally feeble-minded, especially immigrants from non-Western European countries. Binet's test was revised by Stanford professor Lewis M. Terman (1877–1956) into the Stanford-Binet IQ test in 1916. Early French: With Binet's death in 1911, the Sorbonne laboratory and L'Année Psychologique fell to Henri Piéron (1881–1964). Piéron's orientation was more physiological than Binet's had been. Early French: Pierre Janet became the leading psychiatrist in France, being appointed to the Salpêtrière (1890–1894), the Sorbonne (1895–1920), and the Collège de France (1902–1936). In 1904, he co-founded the Journal de Psychologie Normale et Pathologique with fellow Sorbonne professor Georges Dumas (1866–1946), a student and faithful follower of Ribot. Whereas Janet's teacher, Charcot, had focused on the neurological bases of hysteria, Janet was concerned to develop a scientific approach to psychopathology as a mental disorder. His theory that mental pathology results from conflict between unconscious and conscious parts of the mind, and that unconscious mental contents may emerge as symptoms with symbolic meanings, led to a public priority dispute with Sigmund Freud. Early British: Although the British had the first scholarly journal dedicated to the topic of psychology – Mind, founded in 1876 by Alexander Bain and edited by George Croom Robertson – it was quite a long while before experimental psychology developed there to challenge the strong tradition of "mental philosophy". The experimental reports that appeared in Mind in the first two decades of its existence were almost entirely authored by Americans, especially G. Stanley Hall and his students (notably Henry Herbert Donaldson) and James McKeen Cattell. Early British: Francis Galton's (1822–1911) anthropometric laboratory opened in 1884. There people were tested on a wide variety of physical (e.g., strength of blow) and perceptual (e.g., visual acuity) attributes. In 1886 Galton was visited by James McKeen Cattell, who would later adapt Galton's techniques in developing his own mental testing research program in the United States. Galton was not primarily a psychologist, however. The data he accumulated in the anthropometric laboratory primarily went toward supporting his case for eugenics. To help interpret the mounds of data he accumulated, Galton developed a number of important statistical techniques, including the precursors to the scatterplot and the product-moment correlation coefficient (later perfected by Karl Pearson, 1857–1936).
Early British: Soon after, Charles Spearman (1863–1945) developed the correlation-based statistical procedure of factor analysis in the process of building a case for his two-factor theory of intelligence, published in 1901. Spearman believed that people have an inborn level of general intelligence or g which can be crystallized into a specific skill in any of a number of narrow content areas (s, or specific intelligence). Early British: Laboratory psychology of the kind practiced in Germany and the United States was slow in coming to Britain. Although the philosopher James Ward (1843–1925) urged Cambridge University to establish a psychophysics laboratory from the mid-1870s forward, it was not until 1891 that they put so much as £50 toward some basic apparatus (Bartlett, 1937). A laboratory was established through the assistance of the physiology department in 1897, and a lectureship in psychology was established which first went to W. H. R. Rivers (1864–1922). Soon Rivers was joined by C. S. Myers (1873–1946) and William McDougall (1871–1938). This group showed as much interest in anthropology as psychology, going with Alfred Cort Haddon (1855–1940) on the famed Torres Straits expedition of 1898. Early British: In 1901 the Psychological Society was established (which renamed itself the British Psychological Society in 1906), and in 1904 Ward and Rivers co-founded the British Journal of Psychology. Early Russian: Insofar as psychology was regarded as the science of the soul and institutionally part of philosophy courses in theology schools, psychology was present in Russia from the second half of the 18th century. By contrast, if by psychology we mean a separate discipline, with university chairs and people employed as psychologists, then it appeared only after the October Revolution. All the same, by the end of the 19th century, many different kinds of activities called psychology had spread in philosophy, natural science, literature, medicine, education, legal practice, and even military science. Psychology was as much a cultural resource as it was a defined area of scholarship. The question, "Who Is to Develop Psychology and How?", was of such importance that Ivan Sechenov, a physiologist and doctor by training and a teacher in institutions of higher education, chose it as the title for an essay in 1873. His question was rhetorical, for he was already convinced that physiology was the scientific basis on which to build psychology. The response to Sechenov's popular essay included one, in 1872–1873, from a liberal professor of law, Konstantin Kavelin. He supported a psychology drawing on ethnographic materials about national character, a program that had existed since 1847, when the ethnographic division of the recently founded Russian Geographical Society circulated a request for information on the people's way of life, including "intellectual and moral abilities." This was part of a larger debate about national character, national resources, and national development, in the context of which a prominent linguist, Alexander Potebnja, began, in 1862, to publish studies of the relation between mentality and language. Early Russian: Although it was the history and philology departments that traditionally taught courses in psychology, it was the medical schools that first introduced psychological laboratories and courses on experimental psychology. As early as the 1860s and 1870s, I. M.
Balinskii (1827–1902) at the Military-Surgical Academy (which changed its name in the 1880s to the Military Medical Academy) in St. Petersburg and Sergey Korsakov, a psychiatrist at Moscow university, began to purchase psychometric apparatus. Vladimir Bekhterev created the first laboratory—a special space for psychological experiments—in Kazan' in 1885. At a meeting of the Moscow Psychological Society in 1887, the psychiatrists Grigory Rossolimo and Ardalion Tokarskii (1859–1901) demonstrated both Wundt's experiments and hypnosis. In 1895, Tokarskii set up a psychological laboratory in the psychiatric clinic of Moscow university with the support of its head, Korsakov, to teach future psychiatrists about what he promoted as new and necessary techniques. Early Russian: in January 1884, the philosophers Matvei Troitskii and Iakov Grot founded the Moscow Psychological Society. They wished to discuss philosophical issues, but because anything called "philosophical" could attract official disapproval, they used "psychological" as a euphemism. In 1907, Georgy Chelpanov announced a 3-year course in psychology based on laboratory work and a well-structured teaching seminar. In the following years, Chelpanov traveled in Europe and the United States to see existing institutes; the result was a luxurious four-story building for the Psychological Institute of Moscow with well-equipped laboratories, opening formally on March 23, 1914. Second generation German: Würzburg School In 1896, one of Wilhelm Wundt's former Leipzig laboratory assistants, Oswald Külpe (1862–1915), founded a new laboratory in Würzburg. Külpe soon surrounded himself with a number of younger psychologists, the so-called Würzburg School, most notably Narziß Ach (1871–1946), Karl Bühler (1879–1963), Ernst Dürr (1878–1913), Karl Marbe (1869–1953), and Henry Jackson Watt (1879–1925). Collectively, they developed a new approach to psychological experimentation that flew in the face of many of Wundt's restrictions. Wundt had drawn a distinction between the old philosophical style of self-observation (Selbstbeobachtung) in which one introspected for extended durations on higher thought processes, and inner perception (innere Wahrnehmung) in which one could be immediately aware of a momentary sensation, feeling, or image (Vorstellung). The former was declared to be impossible by Wundt, who argued that higher thought could not be studied experimentally through extended introspection, but only humanistically through Völkerpsychologie (folk psychology). Only the latter was a proper subject for experimentation. Second generation German: The Würzburgers, by contrast, designed experiments in which the experimental subject was presented with a complex stimulus (for example a Nietzschean aphorism or a logical problem) and after processing it for a time (for example interpreting the aphorism or solving the problem), retrospectively reported to the experimenter all that had passed through his consciousness during the interval. In the process, the Würzburgers claimed to have discovered a number of new elements of consciousness (over and above Wundt's sensations, feelings, and images) including Bewußtseinslagen (conscious sets), Bewußtheiten (awarenesses), and Gedanken (thoughts). In the English-language literature, these are often collectively termed "imageless thoughts", and the debate between Wundt and the Würzburgers, the "imageless thought controversy". 
Second generation German: Wundt referred to the Würzburgers' studies as "sham" experiments and criticized them vigorously. Wundt's most significant English student, Edward Bradford Titchener, then working at Cornell, intervened in the dispute, claiming to have conducted extended introspective studies in which he was able to resolve the Würzburgers' imageless thoughts into sensations, feelings, and images. He thus, paradoxically, used a method of which Wundt did not approve in order to affirm Wundt's view of the situation.The imageless thought debate is often said to have been instrumental in undermining the legitimacy of all introspective methods in experimental psychology and, ultimately, in bringing about the behaviorist revolution in American psychology. It was not without its own delayed legacy, however. Herbert A. Simon (1981) cites the work of one Würzburg psychologist in particular, Otto Selz (1881–1943), for having inspired him to develop his famous problem-solving computer algorithms (such as Logic Theorist and General Problem Solver) and his "thinking out loud" method for protocol analysis. In addition, Karl Popper studied psychology under Bühler and Selz in the 1920s, and appears to have brought some of their influence, unattributed, to his philosophy of science. Second generation German: Gestalt psychology Whereas the Würzburgers debated with Wundt mainly on matters of method, another German movement, centered in Berlin, took issue with the widespread assumption that the aim of psychology should be to break consciousness down into putative basic elements. Instead, they argued that the psychological "whole" has priority and that the "parts" are defined by the structure of the whole, rather than vice versa. Thus, the school was named Gestalt, a German term meaning approximately "form" or "configuration". It was led by Max Wertheimer (1880–1943), Wolfgang Köhler (1887–1967), and Kurt Koffka (1886–1941). Wertheimer had been a student of Austrian philosopher, Christian von Ehrenfels (1859–1932), who claimed that in addition to the sensory elements of a perceived object, there is an extra element which, though in some sense derived from the organization of the standard sensory elements, is also to be regarded as being an element in its own right. He called this extra element Gestalt-qualität or "form-quality". For instance, when one hears a melody, one hears the notes plus something in addition to them which binds them together into a tune – the Gestalt-qualität. It is the presence of this Gestalt-qualität which, according to Ehrenfels, allows a tune to be transposed to a new key, using completely different notes, but still retain its identity. Wertheimer took the more radical line that "what is given me by the melody does not arise ... as a secondary process from the sum of the pieces as such. Instead, what takes place in each single part already depends upon what the whole is", (1925/1938). In other words, one hears the melody first and only then may perceptually divide it up into notes. Similarly in vision, one sees the form of the circle first – it is given "im-mediately" (i.e. its apprehension is not mediated by a process of part-summation). Only after this primary apprehension might one notice that it is made up of lines or dots or stars. 
Second generation German: Gestalt-Theorie (Gestalt psychology) was officially initiated in 1912 in an article by Wertheimer on the phi-phenomenon; a perceptual illusion in which two stationary but alternately flashing lights appear to be a single light moving from one location to another. Contrary to popular opinion, his primary target was not behaviorism, as it was not yet a force in psychology. The aim of his criticism was, rather, the atomistic psychologies of Hermann von Helmholtz (1821–1894), Wilhelm Wundt (1832–1920), and other European psychologists of the time. Second generation German: The two men who served as Wertheimer's subjects in the phi experiment were Köhler and Koffka. Köhler was an expert in physical acoustics, having studied under physicist Max Planck (1858–1947), but had taken his degree in psychology under Carl Stumpf (1848–1936). Koffka was also a student of Stumpf's, having studied movement phenomena and psychological aspects of rhythm. In 1917 Köhler (1917/1925) published the results of four years of research on learning in chimpanzees. Köhler showed, contrary to the claims of most other learning theorists, that animals can learn by "sudden insight" into the "structure" of a problem, over and above the associative and incremental manner of learning that Ivan Pavlov (1849–1936) and Edward Lee Thorndike (1874–1949) had demonstrated with dogs and cats, respectively. Second generation German: The terms "structure" and "organization" were focal for the Gestalt psychologists. Stimuli were said to have a certain structure, to be organized in a certain way, and that it is to this structural organization, rather than to individual sensory elements, that the organism responds. When an animal is conditioned, it does not simply respond to the absolute properties of a stimulus, but to its properties relative to its surroundings. To use a favorite example of Köhler's, if conditioned to respond in a certain way to the lighter of two gray cards, the animal generalizes the relation between the two stimuli rather than the absolute properties of the conditioned stimulus: it will respond to the lighter of two cards in subsequent trials even if the darker card in the test trial is of the same intensity as the lighter one in the original training trials. Second generation German: In 1921 Koffka published a Gestalt-oriented text on developmental psychology, Growth of the Mind. With the help of American psychologist Robert Ogden, Koffka introduced the Gestalt point of view to an American audience in 1922 by way of a paper in Psychological Bulletin. It contains criticisms of then-current explanations of a number of problems of perception, and the alternatives offered by the Gestalt school. Koffka moved to the United States in 1924, eventually settling at Smith College in 1927. In 1935 Koffka published his Principles of Gestalt Psychology. This textbook laid out the Gestalt vision of the scientific enterprise as a whole. Science, he said, is not the simple accumulation of facts. What makes research scientific is the incorporation of facts into a theoretical structure. The goal of the Gestaltists was to integrate the facts of inanimate nature, life, and mind into a single scientific structure. This meant that science would have to swallow not only what Koffka called the quantitative facts of physical science but the facts of two other "scientific categories": questions of order and questions of Sinn, a German word which has been variously translated as significance, value, and meaning. 
Without incorporating the meaning of experience and behavior, Koffka believed that science would doom itself to trivialities in its investigation of human beings. Second generation German: Having survived the onslaught of the Nazis up to the mid-1930s, all the core members of the Gestalt movement were forced out of Germany to the United States by 1935. Köhler published another book, Dynamics in Psychology, in 1940 but thereafter the Gestalt movement suffered a series of setbacks. Koffka died in 1941 and Wertheimer in 1943. Wertheimer's long-awaited book on mathematical problem-solving, Productive Thinking, was published posthumously in 1945 but Köhler was now left to guide the movement without his two long-time colleagues. Emergence of behaviorism in America: As a result of the conjunction of a number of events in the early 20th century, behaviorism gradually emerged as the dominant school in American psychology. First among these was the increasing skepticism with which many viewed the concept of consciousness: although still considered to be the essential element separating psychology from physiology, its subjective nature and the unreliable introspective method it seemed to require, troubled many. William James' 1904 Journal of Philosophy.... article "Does Consciousness Exist?", laid out the worries explicitly. Emergence of behaviorism in America: Second was the gradual rise of a rigorous animal psychology. In addition to Edward Lee Thorndike's work with cats in puzzle boxes in 1898, the start of research in which rats learn to navigate mazes was begun by Willard Small (1900, 1901 in American Journal of Psychology). Robert M. Yerkes's 1905 Journal of Philosophy... article "Animal Psychology and the Criteria of the Psychic" raised the general question of when one is entitled to attribute consciousness to an organism. The following few years saw the emergence of John Broadus Watson (1878–1959) as a major player, publishing his dissertation on the relation between neurological development and learning in the white rat (1907, Psychological Review Monograph Supplement; Carr & Watson, 1908, J. Comparative Neurology & Psychology). Another important rat study was published by Henry H. Donaldson (1908, J. Comparative Neurology & Psychology). The year 1909 saw the first English-language account of Ivan Pavlov's studies of conditioning in dogs (Yerkes & Morgulis, 1909, Psychological Bulletin). Emergence of behaviorism in America: A third factor was the rise of Watson to a position of significant power within the psychological community. In 1908, Watson was offered a junior position at Johns Hopkins by James Mark Baldwin. In addition to heading the Johns Hopkins department, Baldwin was the editor of the influential journals, Psychological Review and Psychological Bulletin. Only months after Watson's arrival, Baldwin was forced to resign his professorship due to scandal. Watson was suddenly made head of the department and editor of Baldwin's journals. He resolved to use these powerful tools to revolutionize psychology in the image of his own research. In 1913 he published in Psychological Review the article that is often called the "manifesto" of the behaviorist movement, "Psychology as the Behaviorist Views It". There he argued that psychology "is a purely objective experimental branch of natural science", "introspection forms no essential part of its methods..". and "The behaviorist... recognizes no dividing line between man and brute". 
The following year, 1914, his first textbook, Behavior went to press. Although behaviorism took some time to be accepted as a comprehensive approach (see Samelson, 1981), (in no small part because of the intervention of World War I), by the 1920s Watson's revolution was well underway. The central tenet of early behaviorism was that psychology should be a science of behavior, not of the mind, and rejected internal mental states such as beliefs, desires, or goals. Watson himself, however, was forced out of Johns Hopkins by scandal in 1920. Although he continued to publish during the 1920s, he eventually moved on to a career in advertising (see Coon, 1994). Emergence of behaviorism in America: Among the behaviorists who continued on, there were a number of disagreements about the best way to proceed. Neo-behaviorists such as Edward C. Tolman, Edwin Guthrie, Clark L. Hull, and B. F. Skinner debated issues such as (1) whether to reformulate the traditional psychological vocabulary in behavioral terms or discard it in favor of a wholly new scheme, (2) whether learning takes place all at once or gradually, (3) whether biological drives should be included in the new science in order to provide a "motivation" for behavior, and (4) to what degree any theoretical framework is required over and above the measured effects of reinforcement and punishment on learning. By the late 1950s, Skinner's formulation had become dominant, and it remains a part of the modern discipline under the rubric of Behavior Analysis. Its application (Applied Behavior Analysis) has become one of the most useful fields of psychology. Emergence of behaviorism in America: Behaviorism was the ascendant experimental model for research in psychology for much of the 20th century, largely due to the creation and successful application (not least of which in advertising) of conditioning theories as scientific models of human behaviour. Second generation francophone: Genevan School In 1918, Jean Piaget (1896–1980) turned away from his early training in natural history and began post-doctoral work in psychoanalysis in Zurich. Later Piaget rejected psychoanalysis, as he thought it was insufficiently empirical. In 1919, he moved to Paris to work at the Binet-Simon Lab. However, Binet had died in 1911 and Simon lived and worked in Rouen. His supervision therefore came (indirectly) from Pierre Janet, Binet's old rival and a professor at the Collège de France. Second generation francophone: The job in Paris was relatively simple: to use the statistical techniques he had learned as a natural historian, studying molluscs, to standardize Cyril Burt's intelligence test for use with French children. Yet without direct supervision, he soon found a remedy to this boring work: exploring why children made the mistakes they did. Applying his early training in psychoanalytic interviewing, Piaget began to intervene directly with the children: "Why did you do that?" (etc.) It was from this that the ideas formalized in his later stage theory first emerged. Second generation francophone: In 1921, Piaget moved to Geneva to work with Édouard Claparède at the Rousseau Institute. They formed what is now known as the Genevan School. In 1936, Piaget received his first honorary doctorate from Harvard. In 1955, the International Center for Genetic Epistemology was founded: an interdisciplinary collaboration of theoreticians and scientists, devoted to the study of topics related to Piaget's theory. 
In 1969, Piaget received the "distinguished scientific contributions" award from the American Psychological Association. Soviet Marxist Psychology: In the early twentieth century, Ivan Pavlov's behavioral and conditioning experiments became the most internationally recognized Russian achievements. With the creation of the Soviet Union in 1922, Marxism was introduced as an overall philosophical and methodological framework in scientific research. In the 1920s, state ideology promoted a tendency toward Bekhterev's reflexological reductionism in its Marxist interpretation and toward historical materialism, while idealistic philosophers and psychologists were harshly criticized. Another variation of Marxist psychology, which gained popularity mostly in Moscow and was centered in the local Institute of Psychology, was the reactology of Konstantin Kornilov (the director of this Institute), which became the main view there. An exception was a small group of members of the Vygotsky-Luria Circle, which, besides its namesakes Lev Vygotsky and Alexander Luria, included Bluma Zeigarnik, Alexei Leontiev and others, and which in the 1920s embraced a deterministic "instrumental psychology" version of cultural-historical psychology. Owing to Soviet censorship, and above all to Vygotsky's failed attempt to build a consistent psychological theory of consciousness, many of Vygotsky's works were not published in chronological order. Soviet Marxist Psychology: A few attempts were made in the 1920s to formulate the core theoretical framework of a "genuinely Marxist" psychology, but all of these failed and were characterized in the early 1930s as either right- or left-wing deviations of reductionist "mechanicism" or "menshevising idealism". It was Sergei Rubinstein who, in the mid-1930s, formulated the key principles on which the entire Soviet variant of Marxist psychology would be based, thus becoming the genuine pioneer and founder of this psychological discipline in its Marxist guise in the Soviet Union. Soviet Marxist Psychology: In the late 1940s and early 1950s, Lysenkoism somewhat affected Russian psychology, yet it also gave a considerable impulse toward a reaction and unification that resulted in the institutional and disciplinary integration of the psychological community in the postwar Soviet Union. Cognitivism: Noam Chomsky's (1959) review of Skinner's book Verbal Behavior (which aimed to explain language acquisition in a behaviorist framework) is considered one of the major theoretical challenges to the type of radical (as in 'root') behaviorism that Skinner taught. Chomsky claimed that language could not be learned solely from the sort of operant conditioning that Skinner postulated. Chomsky argued that people could produce an infinite variety of sentences unique in structure and meaning and that these could not possibly be generated solely through the experience of natural language. As an alternative, he concluded that there must be internal mental structures – states of mind of the sort that behaviorism rejected as illusory. The issue is not whether mental activities exist; it is whether they can be shown to be the causes of behavior. Similarly, the work of Albert Bandura showed that children could learn by social observation, without any change in overt behaviour, and that such learning must (according to him) be accounted for by internal representations. Cognitivism: The rise of computer technology also promoted the metaphor of mental function as information processing.
This, combined with a scientific approach to studying the mind, as well as a belief in internal mental states, led to the rise of cognitivism as the dominant model of the mind. Cognitivism: Links between brain and nervous system function were also becoming common, partly due to the experimental work of people like Charles Sherrington and Donald Hebb, and partly due to studies of people with brain injury (see cognitive neuropsychology). With the development of technologies for accurately measuring brain function, neuropsychology and cognitive neuroscience have become some of the most active areas in contemporary psychology. Cognitivism: With the increasing involvement of other disciplines (such as philosophy, computer science, and neuroscience) in the quest to understand the mind, the umbrella discipline of cognitive science has been created as a means of focusing such efforts in a constructive way. Scholarly journals: There are three "primary journals" where specialist histories of psychology are published: History of Psychology (journal), the Journal of the History of the Behavioral Sciences, and History of the Human Sciences. In addition, there are a large number of "friendly journals" where historical material can often be found (Burman, J. T. (2018). "What Is History of Psychology? Network Analysis of Journal Citation Reports, 2009-2015". SAGE Open. 8 (1): 215824401876300. doi:10.1177/2158244018763005). These are discussed in History of Psychology (discipline).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scarlet Spider** Scarlet Spider: The Scarlet Spider is an alias used by several fictional characters appearing in American comic books published by Marvel Comics, most notably Ben Reilly and Kaine Parker, both of whom are genetic replicates of the superhero Spider-Man. Both the Ben Reilly and Kaine Parker incarnations of Scarlet Spider appear in Spider-Man: Across the Spider-Verse. Fictional character biography: Ben Reilly Benjamin "Ben" Reilly, a clone of the original Spider-Man created by the Jackal, is the first major version of the Scarlet Spider. Peter Parker To continue his superhero activities, Peter Parker was forced to use the Scarlet Spider identity due to all of his Spider-Man costumes being ruined, while Ben Reilly pretended to be the former in prison. Fictional character biography: Joe Wade Joseph "Joe" Wade is the only character to operate as a villain under the Scarlet Spider alias. An undercover FBI agent, he's assigned to investigate the second Doctor Octopus. However, Doctor Octopus discovers Joe and traps his body in a virtual reality chamber before using his thoughts to power a hard-light holographic duplicate of the Scarlet Spider to tarnish the Scarlet Spider name. Despite this, Joe is unable to stop himself from committing acts of violence. The true Scarlet Spider (Ben Reilly) attacks Doctor Octopus' lair, damaging the machine while Joe is still inside. This turned Joe into a mechanized version of the Scarlet Spider with superhuman strength and speed, claws on his fingertips, the ability to fire webbing from his wrists, crawl up walls, and fire laser "stingers" from his eyes. After the imposter goes on a rampage, the second Spider-Man (Ben Reilly) joins forces with the New Warriors to stop the cybernetic Scarlet Spider before the FBI put him in custody so Joe can undergo medical treatment to remove the technology. Fictional character biography: Scarlet Spiders (Red Team) The Scarlet Spiders, secretly all clones of Michael Van Patrick, work with the Initiative and wear advanced versions of the Iron Spider armour. Kaine Parker Kaine Parker, a clone of the original Spider-Man also created by the Jackal, is the fifth major version of the Scarlet Spider. Other versions: MC2 In the alternate future MC2, Felicity Hardy, daughter of Felicia Hardy and Flash Thompson, adopts the Scarlet Spider identity to irritate her mother. She attempts to convince Spider-Girl to take her on as a sidekick, but the latter refuses. Undeterred, she continues to fight crime until several near-death experiences cause her to give up the identity. Although she has no actual powers, she is skilled in martial arts and gymnastics and utilizes an array of spider-themed weaponry. Other versions: Spider-Gwen The Spider-Gwen universe's Mary Jane Watson dresses as the Scarlet Spider for Halloween. In other media: Television The Ben Reilly incarnation of the Scarlet Spider made a cameo appearance in Fantastic Four. The Ben Reilly incarnation of the Scarlet Spider appears in Spider-Man: The Animated Series, voiced by Christopher Daniel Barnes. Joe Wade makes a non-speaking cameo appearance in The Spectacular Spider-Man episode "Shear Strength" as an African-American FBI agent. The Ben Reilly incarnation of the Scarlet Spider, hybridized with Kaine Parker, appears in Ultimate Spider-Man, voiced by Scott Porter.Additionally, the original Scarlet Spider costume appears as Flash Thompson's initial superhero identity. 
The original Scarlet Spider costume appears in Marvel's Spider-Man (2017) as Peter Parker / Spider-Man's homemade costume. Film Peter Parker's homemade Spider-Man suit from the Marvel Cinematic Universe (MCU) films Captain America: Civil War (2016) and Spider-Man: Homecoming (2017) pays homage to the original Scarlet Spider. Both the Ben Reilly and Kaine Parker incarnations of Scarlet Spider appear in Spider-Man: Across the Spider-Verse, with the former voiced by Andy Samberg while the latter has no dialogue. Video games The Ben Reilly incarnation of the Scarlet Spider appears as an alternate skin for Peter Parker / Spider-Man in Spider-Man (2000), Spider-Man 2: Enter Electro, Marvel: Ultimate Alliance, Spider-Man: Shattered Dimensions, Ultimate Marvel vs. Capcom 3, and Spider-Man: Edge of Time. The Ben Reilly incarnation of the Scarlet Spider appears as a playable character in Marvel Super Hero Squad Online, voiced by Chris Cox. The Kaine Parker incarnation of the Scarlet Spider appears as an alternate costume for Peter Parker / Spider-Man in The Amazing Spider-Man and The Amazing Spider-Man 2 film tie-in games. The Ben Reilly, Kaine Parker, Joe Wade, and Felicity Hardy incarnations of the Scarlet Spider all appear as playable characters in Spider-Man Unlimited. The Ben Reilly incarnation of the Scarlet Spider appears as a playable character in Lego Marvel Super Heroes 2. The Ben Reilly and Kaine Parker incarnations of the Scarlet Spider appear as alternate costumes for Peter Parker / Spider-Man in Spider-Man (2018). The Ben Reilly incarnation of the Scarlet Spider appears as a playable character in Marvel Future Fight. The Ben Reilly incarnation of the Scarlet Spider appears as a playable character in Marvel Strike Force.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Longitudinal data system** Longitudinal data system: A longitudinal data system is a data system capable of tracking student information over multiple years and across multiple schools. The term appears in United States federal law to describe such a system. Federal funding is provided to aid the design and implementation of such systems.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pont's Analysis** Pont's Analysis: Pont's Analysis is an analysis developed by Pont in 1909. This analysis allows one to predict the width of the maxillary arch at the premolar and molar regions by measuring the mesio-distal widths of the four permanent incisors. The analysis helps to determine whether the dental arch is narrow or normal and whether expansion is possible. The width from the left premolar to the right premolar, or Measured Premolar Value (MPV), is calculated by taking the Sum of Incisal Widths (S.I.) of the incisors, multiplying it by 100, and dividing the result by 80. Pont's Analysis: The width from the left molar to the right molar, or Measured Molar Value (MMV), is calculated by multiplying the S.I. of the incisors by 100 and dividing the result by 64. The widths are measured from the occlusal grooves of both premolars and molars. Disadvantages: One of the drawbacks of this analysis is that it was originally developed by Pont on a French population; therefore, the data cannot be used to make predictions for other populations. This analysis also does not take the alignment of the teeth into consideration. In addition, the fact that this analysis only applies to the upper arch is also a drawback, since maxillary teeth are often missing and peg laterals are often seen in the maxillary arch. Linder Harth Index: The Linder Harth index is derived from Pont's index and has a slight variation from Pont's analysis: in the maxillary arch it uses 85 instead of 80 to obtain the measured premolar value.
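The arithmetic above can be illustrated with a short, self-contained sketch. The Python below is only a worked example of the formulas as stated in this article (the function and variable names are hypothetical, not taken from any clinical software); it computes the predicted premolar (MPV) and molar (MMV) arch widths from the sum of incisal widths and, optionally, compares them with the clinically measured widths.

```python
def ponts_analysis(incisor_widths_mm, measured_premolar_mm=None, measured_molar_mm=None):
    """Pont's Analysis: predict ideal maxillary arch widths from incisor widths.

    incisor_widths_mm: mesio-distal widths of the four permanent maxillary incisors (mm).
    """
    if len(incisor_widths_mm) != 4:
        raise ValueError("Pont's Analysis uses the four permanent incisors")

    si = sum(incisor_widths_mm)   # Sum of Incisal widths (S.I.)
    mpv = si * 100 / 80           # predicted inter-premolar width (premolar value)
    mmv = si * 100 / 64           # predicted inter-molar width (molar value)

    result = {"S.I.": si, "MPV": mpv, "MMV": mmv}
    # If the clinically measured widths (occlusal groove to occlusal groove) are
    # supplied, a measured value well below the predicted one suggests a narrow
    # arch where expansion might be considered.
    if measured_premolar_mm is not None:
        result["premolar_difference"] = measured_premolar_mm - mpv
    if measured_molar_mm is not None:
        result["molar_difference"] = measured_molar_mm - mmv
    return result


# Example: incisor widths of 8.5, 8.5, 6.5 and 6.5 mm give S.I. = 30 mm,
# so MPV = 30 * 100 / 80 = 37.5 mm and MMV = 30 * 100 / 64 ≈ 46.9 mm.
print(ponts_analysis([8.5, 8.5, 6.5, 6.5], measured_premolar_mm=35.0, measured_molar_mm=45.0))
```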
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Predicted no-effect concentration** Predicted no-effect concentration: The predicted no-effect concentration (PNEC) is the concentration of a chemical below which no adverse effects of exposure are measured in an ecosystem. PNEC values are intended to be conservative and predict the concentration at which a chemical will likely have no toxic effect. They are not intended to predict the upper limit of concentration of a chemical that has a toxic effect. PNEC values are often used in environmental risk assessment as a tool in ecotoxicology. A PNEC for a chemical can be calculated with acute toxicity or chronic toxicity single-species data, Species Sensitivity Distribution (SSD) multi-species data, field data, or model ecosystems data. Depending on the type of data used, an assessment factor is applied to account for the confidence with which the toxicity data can be extrapolated to an entire ecosystem. Calculation methods: Assessment factor The use of assessment factors allows laboratory, single-species and short-term toxicity data to be extrapolated to conservatively predict ecosystem effects and accounts for the uncertainty in the extrapolation. The value of the assessment factor depends on the uncertainty of the available data and ranges from 1 to 1000. Calculation methods: Acute toxicity data Acute toxicity data include LC50 and EC50 data. These data are frequently screened for quality and relevance, and ideally contain data for species in multiple trophic levels and/or taxonomic groups. The lowest LC50 in the compiled database is then divided by the assessment factor to calculate the PNEC for that data. The assessment factor applied to acute toxicity data is typically 1000. Calculation methods: Chronic toxicity data Chronic toxicity data include NOEC data. The lowest NOEC value in the test dataset is divided by an assessment factor between 10 and 100, depending on the diversity of test organisms and the amount of data available. If there are more species or data, the assessment factor is lower. Calculation methods: Species sensitivity data A PNEC may also be statistically derived from an SSD, which is a model of the variability in the sensitivity of multiple species to a single toxicant or other stressor. The hazardous concentration for five percent of the species (HC5) in the SSD is used to derive the PNEC. The HC5 is the concentration at which five percent of the species in the SSD exhibit an effect. The HC5 is typically divided by an assessment factor of 1 to 5. In many cases, SSDs may not exist due to the lack of data on a large number of species. In these cases, the assessment factor approach to derivation of a PNEC should be used. Calculation methods: Field data or model ecosystems Field data or model ecosystems data include field toxicity data and mesocosm toxicity data. The magnitude of the assessment factor is study-specific in these types of studies. Applications: Environmental risk assessment PNEC is used extensively in Europe by the European Chemicals Agency, the Registration, Evaluation, Authorisation and Restriction of Chemicals program, and other toxicology agencies to assess environmental risk. PNEC values can be used in conjunction with predicted environmental concentration (PEC) values to calculate a risk characterization ratio (RCR), also called a Risk Quotient (RQ). The RCR is equal to the PEC divided by the PNEC for a specific chemical and is a deterministic approach to estimating environmental risk at local or regional scales.
If the PNEC exceeds the PEC (i.e., the RCR is less than 1), the conclusion is that the chemical poses no environmental risk. Assumptions: Derivation of a PNEC for use in environmental risk assessment lacks some scientific validity because the assessment factors are derived empirically. Additionally, PNECs derived from single-species toxicity data assume that ecosystems are as sensitive as the most sensitive species tested and that ecosystem function is dependent on ecosystem structure.
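As a minimal illustration of the assessment-factor approach and the risk characterization ratio described above, the following Python sketch (hypothetical function names and illustrative numbers, not an official ECHA or REACH implementation) derives a PNEC from the lowest toxicity endpoint and compares it with a PEC:

```python
def pnec_from_endpoints(endpoints_mg_per_l, assessment_factor):
    """Derive a PNEC by dividing the lowest toxicity endpoint by an assessment factor.

    endpoints_mg_per_l: toxicity endpoints for the tested species, e.g. acute LC50/EC50
    values (typical assessment factor 1000) or chronic NOECs (typical factor 10-100).
    """
    if not endpoints_mg_per_l:
        raise ValueError("at least one toxicity endpoint is required")
    return min(endpoints_mg_per_l) / assessment_factor


def risk_characterization_ratio(pec_mg_per_l, pnec_mg_per_l):
    """RCR (also called risk quotient, RQ) = PEC / PNEC; values below 1 suggest no risk."""
    return pec_mg_per_l / pnec_mg_per_l


# Example with illustrative numbers: acute LC50/EC50 data for three species.
acute_endpoints = [4.2, 1.8, 9.5]                    # mg/L
pnec = pnec_from_endpoints(acute_endpoints, 1000)    # lowest endpoint 1.8 / 1000 = 0.0018 mg/L
rcr = risk_characterization_ratio(0.0005, pnec)      # PEC of 0.0005 mg/L -> RCR ~ 0.28
print(f"PNEC = {pnec:.4f} mg/L, RCR = {rcr:.2f}")
```

In this deterministic screening, an RCR below 1 indicates that the predicted exposure is below the predicted no-effect level for the chemical in question.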
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Down Under rat** Down Under rat: The Down Under rat (Downunder or DU) is a fancy rat variety noted for the markings on its stomach. The "downunder" marking refers to both a patch of colour on the underside of the rat which matches the coat colouring on the top, and to the variety's Australian origins.While most varieties either have a white pattern on their undersides, or they are completely one colour, the Down Under stands out for its coloured ventral markings against a white background. These markings may be symmetrical or asymmetrical shapes, stripes, or spots. Additionally, because other markings are traditionally found on other parts of the body, Downunders are able to be crossed with those markings to produce varieties like a DU blaze—a rat with a white stripe on its nose. The genes for creating a Downunder rat are dominant, needing only one parent to produce the marking.Due to Australia's strict importation laws, rats are prohibited from being intentionally brought into the country. This has forced the rat fancy hobby to develop varieties in parallel to those found abroad. The Down Under is the first variety to originate in Australia. It was first noted in a litter of hairless rats bred by Cindy Cairns Sautchuk of 'The Rodent Ranch' in New South Wales. The first breeding Down Under was a furred male named Enigma. It is sometimes thought that the Down Under variety came from breeders in Brisbane, the bRatpack and RatmanDU ratteries. However, these breeders are actually just credited with shipping the first Downunders overseas.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Biological data visualization** Biological data visualization: Biological data visualization is a branch of bioinformatics concerned with the application of computer graphics, scientific visualization, and information visualization to different areas of the life sciences. This includes visualization of sequences, genomes, alignments, phylogenies, macromolecular structures, systems biology, microscopy, and magnetic resonance imaging data. Software tools used for visualizing biological data range from simple, standalone programs to complex, integrated systems. State-of-the-art and perspectives: Today we are experiencing a rapid growth in the volume and diversity of biological data, presenting an increasing challenge for biologists. A key step in understanding and learning from these data is visualization. Thus, there has been a corresponding increase in the number and diversity of systems for visualizing biological data. State-of-the-art and perspectives: An emerging trend is the blurring of boundaries between the visualization of 3D structures at atomic resolution, visualization of larger complexes by cryo-electron microscopy, and visualization of the location of proteins and complexes within whole cells and tissues. A second emerging trend is an increase in the availability and importance of time-resolved data from systems biology, electron microscopy and cell and tissue imaging. In contrast, visualization of trajectories has long been a prominent part of molecular dynamics. State-of-the-art and perspectives: Finally, as datasets are increasing in size, complexity, and interconnectedness, biological visualization systems are improving in usability, data integration and standardization. List of visualization software: Many software systems are available for visualizing biological data. The list below includes some popularly used software systems, grouped by application area. Medusa - A simple tool for interaction graph analysis. It is a Java-based application and is available as an applet. Cytoscape - Open-source software for integrating bio-molecular interaction networks with high-throughput expression data and other molecular states. ProViz - A standalone open-source application under the GPL license. PATIKA - A tool with an integrated visual environment for collaborative construction and analysis of cellular pathways.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Super Bomberman 2** Super Bomberman 2: Super Bomberman 2 is a video game developed by Produce! and Hudson Soft and released on the Super Nintendo Entertainment System. It was released in Japan on April 28, 1994, in North America later the same year, and in Europe on February 23, 1995. It is the second installment of the Super Bomberman series, part of the larger Bomberman franchise, and the only installment without a 2-player story mode (although one was originally planned). Gameplay: Story Mode The story mode consists of walking through maze-like areas filled with blocks, monsters, and switches with a goal of opening the gate leading to the next area. To accomplish this, the player lays bombs to destroy all the monsters and flip all the switches. Destroying blocks in the maze will uncover useful power-ups to increase their bomb count, firepower, speed, and grant them special abilities such as remote control bombs, throwing bombs, and taking an extra hit. Gameplay: There are 5 worlds total, and at the end of each world is a boss. Each boss is first battled on foot before retreating into a giant machine. After the boss is defeated, the player will move on to the next world. Gameplay: Multiplayer In Battle Mode, 2 players (4 with a multitap) can face off against one another in one of 12 arenas designed specifically for multiplayer. Matches can be customized as battle royal matches or team matches. A special option called G-Bomber was added making the winner of each match golden and giving them an item to begin the next match with a power up as determined by spinning a wheel at the end of the match. Story: 5 evil cyborgs called the Five Dastardly Bombers are bent on taking over the universe. On Earth, they capture the original Bomberman, and he is placed in a prison cell in their space station. He awakens in the dungeon of Magnet Bomber and must fight his way to a final showdown with the Magnet Bomber himself. In the following four worlds, Bomberman will challenge Golem Bomber, Pretty Bomber, Brain Bomber, and their leader, Plasma Bomber, in an effort to free the Earth and himself from these alien invaders. Reception: Scary Larry of GamePro gave the game a positive review, praising the strategic gameplay, cute graphics, and music, though he remarked that the single player mode is considerably less engaging than the multiplayer. Next Generation reviewed the game, rating it five stars out of five, and stated that "This is truly God's perfect party game."Next Generation's 1996 lexicon of video game terms included the joke entries "Bomb-o'clock" and "Bombaholic", in which they referred to Super Bomberman 2 as "the videogame of choice for game developers everywhere". Later that year they named it the 3rd best game of all time, saying it "epitomizes the Japanese art of taking a ludicrously simple concept, and then executing that concept faultlessly. The control is superb, the graphics are ultimately functional ... the play is balanced to perfection - and four players won't have more fun doing anything else. We mean it. [Warcraft II: Tides of Darkness], Quake, Daytona USA - they're all great multiplayer games. But Super Bomberman 2 is better." In 1999, Next Generation also listed Super Bomberman 2 as number 30 on their "Top 50 Games of All Time", commenting that, "Of all the games that came out of the 16-bit era, Super Bomberman 2 remains a timeless reminder of the ingenuity and purity of gameplay that characterized Nintendo's world-beating console." 
IGN ranked the game 89th on their Top 100 SNES Games of All Time. In 1995, Total! listed the game 3rd on their "Top 100 SNES Games." In 1996, GamesMaster rated the game 6th in its "The GamesMaster SNES Top 10." In the same issue, they also listed the game 10th in their "Top 100 Games of All Time" and at the time opined that Super Bomberman 2 was "The best multi-player game in the world." The game sold over 713,000 copies in Japan alone.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pyeloplasty** Pyeloplasty: Pyeloplasty is a type of surgical procedure performed to treat a uretero-pelvic junction obstruction if residual renal function is adequate. This revision of the renal pelvis treats the obstruction by excising the stenotic area of the renal pelvis or uretero-pelvic junction and creating a more capacious conduit using the tissue of the remaining ureter and renal pelvis. Pyeloplasty: There are different types of pyeloplasty depending on the surgical technique and pattern of incision used. These include the Y-V, inverted 'U', and dismembered types of pyeloplasty. The dismembered type of pyeloplasty (called an Anderson-Hynes pyeloplasty) is the most common type; it was described in relation to retrocaval ureter (now renamed preureteric vena cava). Another technique is Culp's pyeloplasty, in which a flap is rotated from the dilated pelvis to relieve the narrowing of the ureter. Pyeloplasty: A pyeloplasty can be done by the robotic, open, or laparoscopic route. Pyeloplasty: In Anderson–Hynes open pyeloplasty, the upper third of the ureter and the renal pelvis are mobilised, the ureter is dismembered from the renal pelvis, redundant renal pelvis is excised and a new PUJ is reconfigured. A renal vein overlying the distended pelvis can be divided, but an artery in this situation should be preserved to avoid infarction of the renal parenchyma it supplies; the anastomosis is made in front of such an artery if it exists. A ureteric stent is inserted to splint the anastomosis. This type of surgery is now almost universally performed using laparoscopic techniques and in some centres is being performed with robotic assistance. Other surgical procedures have been described to widen the PUJ without dismembering the ureter from the renal pelvis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Random assignment** Random assignment: Random assignment or random placement is an experimental technique for assigning human participants or animal subjects to different groups in an experiment (e.g., a treatment group versus a control group) using randomization, such as by a chance procedure (e.g., flipping a coin) or a random number generator. This ensures that each participant or subject has an equal chance of being placed in any group. Random assignment of participants helps to ensure that any differences between and within the groups are not systematic at the outset of the experiment. Thus, any differences between groups recorded at the end of the experiment can be more confidently attributed to the experimental procedures or treatment.Random assignment, blinding, and controlling are key aspects of the design of experiments because they help ensure that the results are not spurious or deceptive via confounding. This is why randomized controlled trials are vital in clinical research, especially ones that can be double-blinded and placebo-controlled. Random assignment: Mathematically, there are distinctions between randomization, pseudorandomization, and quasirandomization, as well as between random number generators and pseudorandom number generators. How much these differences matter in experiments (such as clinical trials) is a matter of trial design and statistical rigor, which affect evidence grading. Studies done with pseudo- or quasirandomization are usually given nearly the same weight as those with true randomization but are viewed with a bit more caution. Benefits of random assignment: Imagine an experiment in which the participants are not randomly assigned; perhaps the first 10 people to arrive are assigned to the Experimental group, and the last 10 people to arrive are assigned to the Control group. At the end of the experiment, the experimenter finds differences between the Experimental group and the Control group, and claims these differences are a result of the experimental procedure. However, they also may be due to some other preexisting attribute of the participants, e.g. people who arrive early versus people who arrive late. Benefits of random assignment: Imagine the experimenter instead uses a coin flip to randomly assign participants. If the coin lands heads-up, the participant is assigned to the Experimental group. If the coin lands tails-up, the participant is assigned to the Control group. At the end of the experiment, the experimenter finds differences between the Experimental group and the Control group. Because each participant had an equal chance of being placed in any group, it is unlikely the differences could be attributable to some other preexisting attribute of the participant, e.g. those who arrived on time versus late. Potential issues: Random assignment does not guarantee that the groups are matched or equivalent. The groups may still differ on some preexisting attribute due to chance. The use of random assignment cannot eliminate this possibility, but it greatly reduces it. Potential issues: To express this same idea statistically - If a randomly assigned group is compared to the mean it may be discovered that they differ, even though they were assigned from the same group. 
If a test of statistical significance is applied to randomly assigned groups to test the difference between sample means against the null hypothesis that they are equal to the same population mean (i.e., population mean of differences = 0), given the probability distribution, the null hypothesis will sometimes be "rejected," that is, deemed not plausible. That is, the groups will be sufficiently different on the variable tested to conclude statistically that they did not come from the same population, even though, procedurally, they were assigned from the same total group. For example, using random assignment may create an assignment to groups that has 20 blue-eyed people and 5 brown-eyed people in one group. This is a rare event under random assignment, but it could happen, and when it does it might add some doubt to the causal agent in the experimental hypothesis. Random sampling: Random sampling is a related, but distinct process. Random sampling is recruiting participants in a way that they represent a larger population. Because most basic statistical tests require the hypothesis of an independent randomly sampled population, random assignment is the desired assignment method because it provides control for all attributes of the members of the samples—in contrast to matching on only one or more variables—and provides the mathematical basis for estimating the likelihood of group equivalence for characteristics one is interested in, both for pretreatment checks on equivalence and the evaluation of post treatment results using inferential statistics. More advanced statistical modeling can be used to adapt the inference to the sampling method. History: Randomization was emphasized in the theory of statistical inference of Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883). Peirce applied randomization in the Peirce-Jastrow experiment on weight perception. Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights. Peirce's experiment inspired other researchers in psychology and education, which developed a research tradition of randomized experiments in laboratories and specialized textbooks in the eighteen-hundreds.Jerzy Neyman advocated randomization in survey sampling (1934) and in experiments (1923). Ronald A. Fisher advocated randomization in his book on experimental design (1935).
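As a minimal sketch of the procedure described above, the following Python fragment randomly assigns hypothetical participants to two groups and runs an independent-samples t-test; the participant labels, group sizes and outcome distribution are invented for illustration only:

```python
# Minimal sketch of random assignment followed by a two-sample t-test.
# Participant IDs, group sizes and the outcome distribution are hypothetical.
import random
from scipy import stats

participants = [f"P{i:02d}" for i in range(20)]   # 20 hypothetical participants
random.shuffle(participants)                      # chance procedure
experimental, control = participants[:10], participants[10:]

# Outcomes are drawn from the SAME population for both groups, so the null
# hypothesis of no treatment effect is true by construction.
outcome = {p: random.gauss(100, 15) for p in participants}
exp_scores = [outcome[p] for p in experimental]
ctl_scores = [outcome[p] for p in control]

t_stat, p_value = stats.ttest_ind(exp_scores, ctl_scores)
print("Experimental group:", experimental)
print("Control group:     ", control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# Even with true random assignment, p < 0.05 will occur by chance in roughly
# 5% of such experiments, which is the point made in the text above.
```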
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Expensive tissue hypothesis** Expensive tissue hypothesis: The expensive tissue hypothesis (ETH) relates brain and gut size in evolution (specifically in human evolution). It suggests that in order for an organism to evolve a large brain without a significant increase in basal metabolic rate (as seen in humans), the organism must use less energy on other expensive tissues; the paper introducing the ETH suggests that in humans, this was achieved by eating an easy-to-digest diet and evolving a smaller, less energy-intensive gut. The ETH has inspired many research projects to test its validity in primates and other organisms. Expensive tissue hypothesis: The human brain stands out among the mammals because of its size relative to the rest of the body. The brain of Homo sapiens is about three times larger than that of its closest living relative, the chimpanzee. For a primate of human body size, the relative sizes of the brain and the digestive tract are unexpected: the digestive tract is smaller than expected. In 1995, two scientists proposed the expensive tissue hypothesis as an attempt to explain this phenomenon of human evolution. Original paper: The original paper introducing the ETH was written by Leslie Aiello and Peter Wheeler. The availability of new data on basal metabolic rate (BMR) and brain size had shown that energetics is an issue in the maintenance of a relatively large brain, like the human brain. In mammals, brain size is positively correlated with BMR. In the paper, they sought to explain how humans managed to have energy for their large and metabolically expensive brains while still maintaining a BMR comparable to other primates with smaller brains. They found that humans' smaller relative gut size almost completely compensated for the metabolic cost of the larger brain. They went on to postulate that a larger brain would allow for more complex foraging behavior, which would result in a higher quality diet, which would then allow the gut to shrink further, freeing up more energy for the brain. This research also presented a case for studying the evolution of organs in a more interconnected manner, rather than in isolation. Further research: Anthropologists have been able to observe a dramatic contrast in relative brain size between humans and our great ape ancestors. Studies have shown that brain size differences underlie major differences in cognitive performance. At the same time, brain tissue is energetically expensive, requiring a great amount of energy compared to other somatic tissues at rest. To understand how the body is able to provide the brain with the right amount of energy to function properly, scientists consider the cost side of the equation and focus on how the brain and other expensive tissues such as the gut or the testes may trade off. Another possibility is that there is no trade-off at all, and that humans keep the brain nourished in other ways. Further research: The academic debate around the ETH is still active, and has inspired a number of similar tests, all attempting to verify the ETH with another species or group of species by looking at encephalization (a ratio between brain size and body size), gut size, and/or diet quality. Primates, being the closest living relatives to humans, are a natural extension of the hypothesis, and as such are examined by many of these tests.
One such study supported the expensive tissue hypothesis and found a positive correlation between diet quality and brain size (as would be expected by the original paper), but it did note that there were exceptions among the species tested. A broader study including primates and other mammals disputed the ETH, finding that there is no negative correlation between brain and gut sizes; it did, however, support the idea of energy trade-offs in evolution, as it found a negative correlation between encephalization and adipose deposits. Studies have also been done in species less similar to humans, such as anurans and fish. The study of anurans found that among the 30 species tested, there was a significant negative correlation between gut size and brain size, as Aiello and Wheeler found in humans and primates in their original research. One study of fish used the carnivorous fish Gnathonemus petersii, which has a uniquely large brain, about three times the size expected of a fish of its size. The research found that these fish also had significantly smaller guts than other similar carnivorous fish. These further studies enrich the debate over the ETH. Further research: Another study, by Huang, Yu, and Liao, investigated the possible role of gut microbiota in the expensive tissue hypothesis among vertebrates. Researchers have investigated various symbiotic gut bacteria as well as other microorganisms that have coevolved in the digestive tracts of humans and other animals. These microbiota have evolved to form mutually beneficial relationships with their hosts; they are important for immune function, nutrition and physiology, and any disruption in the gut can lead to dysfunction such as obesity. Several studies have also shown that the diversity and composition of the gut microbiota vary topographically and temporally, because specific bacteria have been linked to the host's food intake as well as its nutrient use and energy metabolism. Any changes or modifications of the microbial landscape in the gut can lead to several complex and dynamic interactions throughout life. Additionally, the host's dietary choices are strongly associated with the diversification and complexity of the microbiota; for instance, the study illustrates that a high-fat diet increases the level of Bacteroidetes and decreases the level of Firmicutes in children's guts, and the study also theorized that diet quality is related to gut size. The study also found that gut size has coevolved with brain size, partly because the brain and the gut are among the most energetically costly organs in the vertebrate body. Based on the expensive tissue hypothesis, the higher energy expenditure of vertebrates with larger brains has to be balanced by a corresponding decrease in other energetically costly organs, in this case the gut. There is also evidence that vertebrates with larger brains have evolved to offset the required energetic expenditure by trading off gut size; for example, researchers have found a negative correlation between brain size and gut size in guppies as well as in the Omei wood frog. Gut microbiota respond to diet quality in ways that influence the metabolism of the host; for instance, improving energy yield in the host or altering metabolic pathways is one of the main processes driving the trade-off between brain size and gut size.
This process is also consistent with the ETH, because brain size increases when energy input is high owing to a richer diet and a constant overall energy supply. However, after several investigations, the study could not find strong evidence that brain size is negatively correlated with the gut microbiota in vertebrates. Further research: A similar study by Tsuboi et al. shows evidence that brain size is correlated with gut size when the effects of shared ancestry and ecological confounding variables are controlled for. The study found that the evolution of a larger brain is closely related to increased reproductive investment in egg size and parental care. The experiment concluded that the energy cost of encephalization may have been involved in the evolution of brain size in both endothermic and ectothermic vertebrates. For example, the study found that the elephantnose fish, Gnathonemus petersii, has a large brain associated with a smaller intestine and stomach, suggesting that energy constraints on brain size evolution are found in at least highly encephalized tropical species. Additionally, the study found that the evolution of larger brain size is associated with an increase in egg size, which can lead to an extended period of parental care. This suggests that the energetic constraints of encephalization also apply to ectothermic vertebrates. Even though the study provided evidence that brain size and gut size are negatively correlated with one another, the evidence was not strong; for instance, much of the work was done on live-bearing and egg-laying species within the Chondrichthyes and cannot be generalized across all endothermic and ectothermic vertebrates. Further research: Further studies did show a positive correlation between brain mass residuals and BMR residuals in mammals, but the relationship is only significant in primates. When considering the expensive tissue hypothesis, we also need to consider how the energy trade-off hypothesis affects the body: animals could reduce the size of other expensive tissues or reduce energy allocation to locomotion or reproduction.
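The kind of comparative analysis these studies describe can be sketched as follows. All numbers below are fabricated placeholders, not data from the cited studies, and a real analysis would use published species measurements and control for phylogeny; the sketch only shows the residual-correlation step:

```python
# Illustrative sketch: regress log brain mass and log gut mass on log body
# mass, then correlate the residuals. A negative residual correlation is the
# pattern the expensive tissue hypothesis predicts. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30                                              # hypothetical species
log_body = rng.uniform(0, 4, n)                     # log10 body mass
log_brain = 0.75 * log_body + rng.normal(0, 0.1, n)
# Build gut mass so that it trades off against the brain residual (by construction).
log_gut = 0.90 * log_body - 0.5 * (log_brain - 0.75 * log_body) + rng.normal(0, 0.1, n)

def residuals(y, x):
    """Residuals of an ordinary least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

brain_res = residuals(log_brain, log_body)
gut_res = residuals(log_gut, log_body)
r, p = stats.pearsonr(brain_res, gut_res)
print(f"brain-gut residual correlation: r = {r:.2f}, p = {p:.3f}")
```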
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OR5B3** OR5B3: Olfactory receptor 5B3 is a protein that in humans is encoded by the OR5B3 gene.Olfactory receptors interact with odorant molecules in the nose, to initiate a neuronal response that triggers the perception of a smell. The olfactory receptor proteins are members of a large family of G-protein-coupled receptors (GPCR) arising from single coding-exon genes. Olfactory receptors share a 7-transmembrane domain structure with many neurotransmitter and hormone receptors and are responsible for the recognition and G-protein-mediated transduction of odorant signals. The olfactory receptor gene family is the largest in the genome. The nomenclature assigned to the olfactory receptor genes and proteins for this organism is independent of other organisms.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Perfect set** Perfect set: In general topology, a subset of a topological space is perfect if it is closed and has no isolated points. Equivalently: the set S is perfect if S = S′, where S′ denotes the set of all limit points of S, also known as the derived set of S. In a perfect set, every point can be approximated arbitrarily well by other points from the set: given any point of S and any neighborhood of the point, there is another point of S that lies within the neighborhood. Furthermore, any point of the space that can be so approximated by points of S belongs to S. Note that the term perfect space is also used, incompatibly, to refer to other properties of a topological space, such as being a Gδ space. As another possible source of confusion, also note that having the perfect set property is not the same as being a perfect set. Examples: Examples of perfect subsets of the real line R are the empty set, all closed intervals, the real line itself, and the Cantor set. The latter is noteworthy in that it is totally disconnected. Whether a set is perfect or not (and whether it is closed or not) depends on the surrounding space. For instance, the set S = [0,1] ∩ Q is perfect as a subset of the space Q but not perfect as a subset of the space R. Connection with other topological properties: Every topological space can be written in a unique way as the disjoint union of a perfect set and a scattered set. Cantor proved that every closed subset of the real line can be uniquely written as the disjoint union of a perfect set and a countable set. This is also true more generally for all closed subsets of Polish spaces, in which case the theorem is known as the Cantor–Bendixson theorem. Connection with other topological properties: Cantor also showed that every non-empty perfect subset of the real line has cardinality 2^ℵ₀, the cardinality of the continuum. These results are extended in descriptive set theory as follows: If X is a complete metric space with no isolated points, then the Cantor space 2^ω can be continuously embedded into X; thus X has cardinality at least 2^ℵ₀. If X is a separable, complete metric space with no isolated points, the cardinality of X is exactly 2^ℵ₀. If X is a locally compact Hausdorff space with no isolated points, there is an injective function (not necessarily continuous) from Cantor space to X, and so X has cardinality at least 2^ℵ₀.
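A small worked example, assuming the standard topology on the real line (not drawn from the article above), shows how the derived set detects isolated points and hence decides perfection:

```latex
% The derived set detects isolated points, which is exactly what perfection rules out.
\[
  A = \{0\} \cup \{\tfrac{1}{n} : n \in \mathbb{N}\}, \qquad
  A' = \{0\} \subsetneq A,
\]
so every point $\tfrac{1}{n}$ is isolated and $A$ is closed but not perfect.
By contrast,
\[
  B = [0,1] \quad\text{satisfies}\quad B' = [0,1] = B,
\]
so $B$ is perfect, as is the Cantor set $C$, for which $C' = C$ even though
$C$ contains no interval.
```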
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Once-a-month cooking** Once-a-month cooking: The concept of once-a-month cooking (OAMC) is to spend a set amount of time cooking with an end result of having enough meals to last through the whole month. OAMC recipes usually involve freezing the meals until needed (termed freezer cooking). Advantages: The primary advantage to this method of cooking is to save time over the course of the month by preparing meals ahead of time in one big cooking day. Preparation time for each meal is then cut down to reheating time only.This method also allows the home cook to save money by purchasing food items in bulk and taking advantage of sales at the market. Money is also saved on the family budget by having homemade convenience foods which can cut down on the frequency of fast food purchases or home dinner deliveries.Cooking ahead for the freezer can be a healthier alternative to purchasing prepackaged frozen meals at the grocery store, allowing the cook to choose wholesome ingredients and cater to individual food needs (allergies, sensitivities, etc.).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DOS 1** DOS 1: DOS 1 or DOS-1 may refer to: the Soviet space station Salyut 1, also called DOS-1.
It may also refer to versions of Seattle Computer Product's 86-DOS (the predecessor to MS-DOS and PC DOS):
86-DOS 1.00, SCP OEM released version on 28 April 1981, licensed to OEMs including Microsoft
86-DOS 1.01, SCP internal version in May 1981
86-DOS 1.10, SCP OEM released version in July 1981, sold to Microsoft and renamed to MS-DOS
86-DOS 1.14, basis for IBM Personal Computer DOS 1.0
It may also refer to versions of the Microsoft MS-DOS family:
MS-DOS 1.11, 1.12, 1.13 and 1.20, Microsoft internal versions in 1981
MS-DOS 1.21, 1.22 and 1.23, Microsoft internal versions in 1982
MS-DOS 1.24, Microsoft internal version in 1982, basis for IBM Personal Computer DOS 1.1
MS-DOS 1.25, basis for OEM versions of MS-DOS other than IBM in 1982, including SCP MS-DOS 1.25
MS-DOS 1.26, 1.27, 1.28, 1.29, 1.30, 1.40, 1.41, 1.50, 1.51, 1.52, 1.53 and 1.54, Microsoft internal versions in 1982
It may also refer to versions of the IBM Personal Computer DOS family:
IBM Personal Computer DOS 1.0, OEM version of 86-DOS 1.14 in 1981
IBM Personal Computer DOS 1.1, OEM version of MS-DOS 1.24 in 1982
It may also refer to versions of the Digital Research operating system family:
DOS Plus 1.0, a single-user variant of Concurrent PC DOS in 1985
DOS Plus 1.1, a single-user variant of Concurrent PC DOS in 1985
DOS Plus 1.2, a single-user variant of Concurrent PC DOS 4.1 in 1986
PalmDOS 1.0, a Novell successor to Digital Research's DR DOS 6.0 tailored for early palmtop PCs
It may also refer to versions of the FreeDOS operating system:
FreeDOS 1.0, a free and open-source DOS 7.1-compatible operating system distributed since September 2006
FreeDOS 1.1, a successor released in January 2012
FreeDOS 1.2, a successor released in December 2016
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vitamin D3 dihydroxylase** Vitamin D3 dihydroxylase: Vitamin D3 dihydroxylase is a cytochrome P450 enzyme purified from the actinobacterium Streptomyces griseolus, with EC number EC 1.14.15.22 and CYP symbol CYP105A1 (cytochrome P450, family 105, member A1), that catalyses the oxidation of cholecalciferol (vitamin D3) to calcitriol.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DCUN1D1** DCUN1D1: DCN1-like protein 1 is a protein that in humans is encoded by the DCUN1D1 gene. DCUN1D1 is amplified in several cancer types, including squamous cell cancers, and may act as an oncogenic driver in cancer cells. Interactions: DCUN1D1 has been shown to interact with CAND1, CUL1, CUL2, CUL3 and RBX1.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**IL17A** IL17A: Interleukin-17A is a protein that in humans is encoded by the IL17A gene. In rodents, IL-17A used to be referred to as CTLA8, after the similarity with a viral gene (O40633). Function: The protein encoded by this gene is a proinflammatory cytokine produced by activated T cells. This cytokine regulates the activities of NF-kappaB and mitogen-activated protein kinases. This cytokine can stimulate the expression of IL6 and cyclooxygenase-2 (PTGS2/COX-2), as well as enhance the production of nitric oxide (NO). Discovery: IL-17A, often referred to as IL-17, was originally discovered at transcriptional level by Rouvier et al. in 1993 from a rodent T-cell hybridoma, derived from the fusion of a mouse cytotoxic T cell clone and a rat T cell lymphoma. Human and mouse IL-17A were cloned a few years later by Yao and Kennedy. Lymphocytes including CD4+, CD8+, gamma-delta T (γδ-T), invariant NKT and innate lymphoid cells (ILCs) are primary sources of IL-17A. Non-T cells, such as neutrophils, have also been reported to produce IL-17A under certain circumstances. IL-17A producing T helper cells (Th17 cells) are a distinct lineage from the Th1 and Th2 CD4+ lineages and the differentiation of Th17 cells requires STAT3 and RORC. Discovery: IL-17A receptor A (IL-17RA) was first isolated and cloned from mouse EL4 thymoma cells and the bioactivity of IL-17A was confirmed by stimulating the transcriptional factor NF-kappa B activity and interleukin-6 (IL-6) secretion in fibroblasts. IL-17RA pairs with IL-17RC to allow binding and signaling of IL-17A and IL-17F. Clinical significance: High levels of this cytokine are associated with several chronic inflammatory diseases including rheumatoid arthritis, psoriasis and multiple sclerosis. Clinical significance: Autoimmune diseases Multiple sclerosis (MS) is a neurological disease caused by immune cells, which attack and destroy the myelin sheath that insulates neurons in the brain and spinal cord. This disease and its animal model experimental autoimmune encephalomyelitis (EAE) have historically been associated with the discovery of Th17 cells. However, elevated expression of IL-17A in multiple sclerosis (MS) lesions as well as peripheral blood has been documented before the identification of Th17 cells. Human TH17 cells have been shown to efficiently transmigrate across the blood-brain barrier in multiple sclerosis lesions, promoting central nervous system inflammation.Psoriasis is an auto-inflammatory skin disease characterized by circumscribed, crimson red, silver-scaled, plaque-like inflammatory lesions. Initially, psoriasis was considered to be a Th1-mediated disease since elevated levels of IFN-γ, TNF-α, and IL-12 was found in the serum and lesions of psoriasis patients. However, the finding of IL-17-producing cells as well as IL17A transcripts in the lesions of psoriatic patients suggested that Th17 cells may synergize with Th1 cells in driving the pathology in psoriasis. The levels of IL-17A in the synovium correlate with tissue damage, whereas levels of IFN-γ correlate with protection. 
Direct clinical significance of IL-17A in RA comes from recent clinical trials, which found that two anti-IL-17A antibodies, namely secukinumab and ixekizumab, significantly benefit these patients. Th17 cells are also strongly associated with rheumatoid arthritis (RA), a chronic disorder whose symptoms include chronic joint inflammation and autoantibody production, which lead to the destruction of cartilage and bone. Th17 cells and IL-17 have also been linked to Crohn's disease (CD) and ulcerative colitis (UC), the two main forms of inflammatory bowel disease (IBD). Th17 cells infiltrate massively into the inflamed tissue of IBD patients, and both in vitro and in vivo studies have shown that Th17-related cytokines may initiate and amplify multiple pro-inflammatory pathways. Elevated IL-17A levels in IBD have been reported by several groups. Nonetheless, Th17 signature cytokines, such as IL-17A and IL-22, may target gut epithelial cells, promote the activation of regulatory pathways and confer protection in the gastrointestinal tract. To this end, recent clinical trials targeting IL-17A in IBD were negative and actually showed increased adverse events in the treatment arm. These data raised questions regarding the role of IL-17A in IBD pathogenesis and suggested that the elevated IL-17A might be beneficial for IBD patients. Clinical significance: Systemic lupus erythematosus, commonly referred to as SLE or lupus, is a complex immune disorder that affects the skin, joints, kidneys, and brain. Although the exact cause of lupus is not fully known, it has been reported that IL-17 and Th17 cells are involved in disease pathogenesis. Serum IL-17 levels are also elevated in SLE patients compared to controls, and the Th17 pathway has been shown to drive autoimmune responses in pre-clinical mouse models of lupus. More importantly, IL-17 and IL-17-producing cells have also been detected in kidney tissue and skin biopsies from SLE patients. Clinical significance: Lung diseases Elevated levels of IL-17A have been found in the sputum and in bronchoalveolar lavage fluid of patients with asthma, and a positive correlation between IL-17A production and asthma severity has been established. In murine models, treatment with dexamethasone inhibits the release of Th2-related cytokines but does not affect IL-17A production. Furthermore, Th17 cell-mediated airway inflammation and airway hyperresponsiveness are steroid resistant, indicating a potential role for Th17 cells in steroid-resistant asthma. However, a recent trial using anti-IL-17RA did not show efficacy in subjects with asthma. Recent studies have suggested the involvement of immunological mechanisms in COPD. An increase in Th17 cells was observed in patients with COPD compared with current smokers without COPD and healthy subjects, and inverse correlations were found between Th17 cells and lung function. Gene expression profiling of bronchial brushings obtained from COPD patients also linked lung function to several Th17 signature genes such as SAA1, SAA2, SLC26A4 and LCN2. Animal studies have shown that cigarette smoke promotes pathogenic Th17 differentiation and induces emphysema, while blocking IL-17A using a neutralizing antibody significantly decreased neutrophil recruitment and the pathological score of airway inflammation in tobacco-smoke-exposed mice. Clinical significance: Host defense In host defense, IL-17A has been shown to be mostly beneficial against infection caused by extracellular bacteria and fungi.
Clinical significance: The primary function of Th17 cells appears to be control of the gut microbiota as well as the clearance of extracellular bacteria and fungi. IL-17A and IL-17 receptor signaling have been shown to play a protective role in host defenses against many bacterial and fungal pathogens, including Klebsiella pneumoniae, Mycoplasma pneumoniae, Candida albicans, Coccidioides posadasii, Histoplasma capsulatum, and Blastomyces dermatitidis. However, IL-17A seems to be detrimental in viral infections such as influenza, by promoting neutrophilic inflammation. The requirements of IL-17A and IL-17 receptor signaling in host defense were well documented and appreciated before the identification of Th17 cells as an independent T helper cell lineage. In experimental pneumonia models, IL-17A or IL-17RA knockout mice have increased susceptibility to various Gram-negative bacteria, such as Klebsiella pneumoniae and Mycoplasma pneumoniae. In contrast, data suggest that IL-23 and IL-17A are not required for protection against primary infection by the intracellular bacterium Mycobacterium tuberculosis: both the IL-17RA knockout mice and the IL-23p19 knockout mice cleared primary infection with M. tuberculosis. However, IL-17A is required for protection against primary infection with a different intracellular bacterium, Francisella tularensis. Mouse model studies using the IL-17RA knockout mice and the IL-17A knockout mice with the murine-adapted influenza strain (PR8) as well as the 2009 pandemic H1N1 strain both support that IL-17A plays a detrimental role in mediating the acute lung injury. The role of adaptive immune responses mediated by antigen-specific Th17 cells has been investigated more recently. Antigen-specific Th17 cells were also shown to recognize conserved protein antigens among different K. pneumoniae strains and provide broad-spectrum, serotype-independent protection. Antigen-specific CD4 T cells also limit nasopharyngeal colonization of S. pneumoniae in mouse models. Furthermore, immunization with pneumococcal whole cell antigen and several derivatives provided IL-17-mediated, but not antibody-dependent, protection against S. pneumoniae challenge. In fungal infection, it has been shown that an IL-17-producing clone with a TCR specific for calnexin from Blastomyces dermatitidis confers protection against evolutionarily related fungal species, including Histoplasma spp. Clinical significance: Cancer In tumorigenesis, IL-17A has been shown to recruit myeloid-derived suppressor cells (MDSCs) to dampen anti-tumor immunity. IL-17A can also enhance tumor growth in vivo through the induction of IL-6, which in turn activates the oncogenic transcription factor signal transducer and activator of transcription 3 (STAT3) and upregulates pro-survival and pro-angiogenic genes in tumors. The exact role of IL-17A in angiogenesis has yet to be determined, and current data suggest that IL-17A can promote or suppress tumor development. IL-17A seemed to facilitate development of colorectal carcinoma by fostering angiogenesis via promotion of VEGF production by cancer cells, and it has been shown that IL-17A also mediates tumor resistance to anti-VEGF therapy through the recruitment of MDSCs. However, IL-17A knockout mice were more susceptible to developing metastatic lung melanoma, suggesting that IL-17A can possibly promote the production of the potent antitumor cytokine IFN-γ, produced by cytotoxic T cells.
Indeed, data from ovarian cancer suggest that Th17 cells are positively correlated with NK cell–mediated immunity and anti-tumor CD8 responses. Clinical significance: Ocular diseases The presence of IL-17 has been proven in a number of ocular diseases associated with neovascularization. Elevated concentration of IL-17 have been shown in vitreous fluid during proliferative diabetic retinopathy. Increased rates of Th17 cells and higher concentrations of IL-17 have been observed in patients with age-related macular degeneration. As a drug target: The discovery of the key roles of IL-17A and IL-17A producing cells in inflammation, autoimmune diseases and host defense has led to the experimental targeting of the IL-17A pathway in animal models of diseases as well as in clinical trials in humans. Targeting IL-17A has been proven to be a good approach as anti-IL-17A is FDA approved for the treatment of psoriasis in 2015.Secukinumab (anti-IL-17A) has been evaluated in psoriasis and the first report showing Secukinumab is effective when compared with placebo was published in 2010. In 2015, the US Food and Drug Administration (FDA) and European Medicines Agency (EMA) approved anti-IL-17 for the treatment of psoriasis.Other than the monoclonal antibodies, highly specific and potent inhibitors targeting Th17 specific transcription factor RORγt have been identified and found to be highly effective.Vitamin D, a potent immunomodulator, has also been shown to suppress Th17 cell differentiation and function by several research groups. As a drug target: The active form of vitamin D has been found to 'severely impair' production of the IL17 and IL-17F cytokines by Th17 cells.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Java OpenAL** Java OpenAL: Java OpenAL (JOAL) is one of several wrapper libraries that allows Java programmers to access OpenAL. This allows Java programmers to use 3D sound in applications. JOAL is one of the libraries developed by the Sun Microsystems Game Technology Group. JOAL is released under a BSD license, and is available for Microsoft Windows, Mac OS X, and Linux. Like its graphical counterpart, Java OpenGL (JOGL), JOAL was developed using the GlueGen utility, a program that generates Java bindings from C header files. Java OpenAL: The official site on java.net was deleted in March 2011. The JOAL project, however, is still alive in Jogamp.org JOAL.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Crush load** Crush load: A crush load is a level of passenger loading in a transport vehicle which is so high that passengers are "crushed" against one another. It represents an extreme form of passenger loading, and normally considered to be representative of a system with serious capacity limitations. Crush loads result from too many passengers within a vehicle designed for a much smaller number. Crush loaded trains or buses are so heavily loaded that for most passengers physical contact with several other nearby passengers is impossible to avoid. Definition: In the context of transport economics and planning, crush load refers to the maximum level of passenger load for a particular vehicle or rail carriage. Crush loads are calculated for the number of passengers per unit area, standing up. Crush loads are not an issue for passengers that are seated, as passengers will not normally sit on one another. Crush loads are most common on city buses and rail metro systems, where passenger loading is high, and most passengers stand. Airlines almost never have crush loads, nor do high speed and/or long-distance rail or long-distance bus routes, where all passengers are generally seated. Definition: Crush loads are normally measured using number of standing passengers per 1 square metre (1.2 sq yd). Six passengers per square metre is often considered the practical limit on what can be accepted without serious discomfort to passengers. However, severe crush loads can be much in excess of this. Before the 1965–1982 Commuting Five Directions Operation (ja:通勤五方面作戦) project undertaken by Japanese National Railways (JNR) on five trunk railway lines serving the Greater Tokyo Area, crush load peak ridership on said lines regularly exceeded 200% to 300% of each line's optimal route capacity. During the same period of time, many of Japan's subway lines in operation also saw similar overcrowding levels, which was similarly only ameliorated by building new extra lines. Similar issues continue to persist on the Seoul Metropolitan Subway system in South Korea, in particular Line 2, Line 9 and Gimpo Goldline, where, especially due to the shorter train formations operated on the latter two lines, they are popularly nicknamed "hell trains".In India, the term "super dense crush load" has been coined by railway officials to describe passenger loads on peak-hour trains operating on the Mumbai Suburban Railway when carriages built for 200 passengers carry over 500, translating to 14–16 people per square metre; not accounting for the many passengers who have no choice but to hitch onto overcrowded moving trains out of necessity. Effects: Crush loads in transport vehicles can result in many secondary issues, such as petty theft and pickpocketing, extreme discomfort for passengers, sexual harassment, and an inability for passengers to board and alight vehicles in a timely manner. Effects: For a rail vehicle which has a crush load, passengers are touching and there is no space for another passenger to enter without causing serious discomfort to the passengers on board. According to Hoel et al. in Transportation Infrastructure Engineering, operating at crush load increases dwell time (the length of time the transport vehicle remains in the station or stop) and reduces overall vehicle capacity per unit of time.Large dense concentrations of passengers can create dangerous conditions, both within transit vehicles and at overcrowded stations. 
In 2014, a news service in Mumbai, India, reported several serious mutilation injuries caused by platform gaps, as well as a death, within a few months; these were mostly attributed to crowded conditions.
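A minimal sketch of the standing-density calculation described above follows. The 6 passengers per square metre figure is the comfort limit quoted earlier in this article, while the carriage floor area and passenger counts are invented for illustration:

```python
# Minimal sketch: classify passenger loading by standees per square metre.
# The 6/m^2 threshold comes from the text above; vehicle dimensions are hypothetical.
CRUSH_THRESHOLD = 6.0   # standees per square metre

def standing_density(standees: int, standing_area_m2: float) -> float:
    """Standees per square metre of free floor area."""
    return standees / standing_area_m2

def classify(standees: int, standing_area_m2: float) -> str:
    d = standing_density(standees, standing_area_m2)
    label = "crush load" if d >= CRUSH_THRESHOLD else "within comfort limit"
    return f"{d:.1f} standees/m^2 -> {label}"

if __name__ == "__main__":
    # Hypothetical metro car with 30 m^2 of free standing floor area.
    for standees in (90, 180, 300, 450):
        print(f"{standees:3d} standees: {classify(standees, 30.0)}")
```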
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Momentum theory** Momentum theory: In fluid dynamics, momentum theory or disk actuator theory is a theory describing a mathematical model of an ideal actuator disk, such as a propeller or helicopter rotor, by W.J.M. Rankine (1865), Alfred George Greenhill (1888) and Robert Edmund Froude (1889). Momentum theory: The rotor is modeled as an infinitely thin disc, inducing a constant velocity along the axis of rotation. The basic state of a helicopter is hovering. This disc creates a flow around the rotor. Under certain mathematical assumptions about the fluid, a mathematical relation can be derived between power, rotor radius, torque and induced velocity. Friction is not included. Momentum theory: For a stationary open rotor with no outer duct, such as a helicopter in hover, the power required to produce a given thrust is: P = √(T³ / (2ρA)), where T is the thrust, ρ is the density of air (or other medium), A is the area of the rotor disc, and P is power. A device which converts the translational energy of the fluid into rotational energy of the axis, or vice versa, is called a Rankine disk actuator. Real-life implementations of such devices include marine and aviation propellers, windmills, helicopter rotors, centrifugal pumps, wind turbines, turbochargers and chemical agitators.
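A small numeric sketch of the hover-power formula above follows. The helicopter mass, rotor radius and air density are round placeholder values, and a real rotor needs more power than this ideal, frictionless estimate:

```python
# Minimal sketch of the ideal hover-power formula P = sqrt(T^3 / (2 * rho * A)).
# All input values are hypothetical round numbers for illustration.
import math

def ideal_hover_power(thrust_n: float, rho: float, disc_area_m2: float) -> float:
    """Ideal (momentum-theory) power in watts for a hovering rotor."""
    return math.sqrt(thrust_n ** 3 / (2.0 * rho * disc_area_m2))

if __name__ == "__main__":
    mass_kg = 2200.0                    # hypothetical light helicopter
    g = 9.81                            # m/s^2
    thrust = mass_kg * g                # in hover, thrust balances weight
    radius = 5.0                        # rotor radius, m
    area = math.pi * radius ** 2        # actuator disc area
    rho = 1.225                         # sea-level air density, kg/m^3

    p = ideal_hover_power(thrust, rho, area)
    print(f"Ideal hover power: {p / 1000:.0f} kW")   # roughly 230 kW for these inputs
```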
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Semantic gap** Semantic gap: The semantic gap characterizes the difference between two descriptions of an object by different linguistic representations, for instance languages or symbols. According to Andreas M. Hein, the semantic gap can be defined as "the difference in meaning between constructs formed within different representation systems". In computer science, the concept is relevant whenever ordinary human activities, observations, and tasks are transferred into a computational representation.More precisely the gap means the difference between ambiguous formulation of contextual knowledge in a powerful language (e.g. natural language) and its sound, reproducible and computational representation in a formal language (e.g. programming language). Semantics of an object depends on the context it is regarded within. For practical application this means any formal representation of real world tasks requires the translation of the contextual expert knowledge of an application (high-level) into the elementary and reproducible operations of a computing machine (low-level). Since natural language allows the expression of tasks which are impossible to compute in a formal language there are no means to automate this translation in a general way. Moreover, the examination of languages within the Chomsky hierarchy indicates that there is no formal and consequently automated way of translating from one language into another above a certain level of expressional power. Theoretical background: The yet unproven but commonly accepted Church-Turing thesis states that a Turing machine and all equivalent formal languages such as the lambda calculus perform and represent all formal operations respectively as applied by a computing human. However the selection of adequate operations for the correct computation itself is not formally deducible, moreover it depends on the computability of the underlying problem. Tasks, such as the halting problem, may be formulated comprehensively in natural language, but the computational representation will not terminate or does not provide a usable result, which is proven by Rice's theorem. The general expression of limitations for rule based deduction by Gödel's incompleteness theorem indicates that the semantic gap is never to be fully closed. These are general statements, considering the generalized limits of computation on the highest level of abstraction where the semantic gap manifests itself. There are however many subsets of problems which may be translated automatically, especially in the higher-numbered levels of the Chomsky hierarchy. Formal languages: Real world tasks are formalized by programming languages, which are executed on computers based on the von Neumann architecture. Since programming languages are only comfortable representations of the Turing machine any program on a von Neumann computer has the same properties and limitations as the Turing machine or its equivalent representation. Consequently, every programming language such as CPU level machine code, assembler, or any high level programming language has the same expressional power as the underlying Turing machine is able to compute. There is no semantic gap between them since a program is transferred from the high level language to the machine code by a program, e.g. a compiler which itself runs on a Turing machine without any user interaction. The semantic gap actually opens between the selection of the rules and the representation of the task. 
Practical consequences: Selection of rules for formal representations of real world applications corresponds to writing a program. Writing programs is independent of the actual programming language and basically requires the translation of the domain-specific knowledge of the user into the formal rules operating a Turing machine. It is this transfer from contextual knowledge into formal representation which cannot be automated, given the theoretical limitations of computation. Consequently, any mapping from real world applications into computer applications requires a certain amount of technical background knowledge from the user, and this is where the semantic gap manifests itself. Practical consequences: It is a fundamental task of software engineering to close the gap between application-specific knowledge and technically feasible formalization. For this purpose, domain-specific (high-level) knowledge must be transferred into an algorithm and its parameters (low-level). This requires dialogue between user and developer. The aim is always software that allows users to represent their knowledge as parameters of an algorithm without knowing the details of the implementation, and to interpret the outcome of the algorithm without the aid of the developer. For this purpose, user interfaces play the key role in software design, while developers are supported by frameworks which help organize the integration of contextual information. Examples: Document retrieval A simple example can be formulated as a series of increasingly difficult natural language queries to locate a target document that may or may not exist locally on a known computer system. Example queries: 1) Locate any file in the known directory "/usr/local/funny". 2) Locate any file where the word "funny" appears in the filename. 3) Locate any text file where the word "funny" or the substring "humor" appears in the text. 4) Locate any mp3 file where "funny", "comic" or "humor" appears in the metadata. 5) Locate any file of any type related to humor. 6) Locate any image that is likely to make my grandmother laugh. The progressive difficulty of these queries is represented by the increasing degree of abstraction from the types and semantics defined by the system architecture (directories and files on a known computer) to the types and semantics that occupy the realm of ordinary human discourse (subjects such as "humor" and entities such as "my grandmother"). Moreover, this disparity of realms is further complicated by leaky abstractions, as is common in the case of query 4), where the target document may exist, but may not encapsulate the "metadata" in a manner expected by the user, nor by the designer of the query processing system. Examples: Image analysis Image analysis is a typical domain for which a high degree of abstraction from low-level methods is required, and where the semantic gap immediately affects the user. If image content is to be identified in order to understand the meaning of an image, the only available independent information is the low-level pixel data. Textual annotations always depend on the knowledge, capability of expression and specific language of the annotator and are therefore unreliable. To recognize the displayed scenes from the raw data of an image, the algorithms for selection and manipulation of pixels must be combined and parameterized in an adequate manner and finally linked with the natural description.
Even the simple linguistic representation of shape or color such as round or yellow requires entirely different mathematical formalization methods, which are neither intuitive nor unique and sound. Examples: Layered systems In many layered systems, some conflicts arise when concepts at a high level of abstraction need to be translated into lower, more concrete artifacts. This mismatch is often called semantic gap. Databases OODBMSs (object-oriented database management system) advocates sometimes claim that these databases help to reduce the semantic gap between the application domain (miniworld) and the traditional RDBMS systems. However Relational proponents would posit the exact opposite, because by definition object databases fix the data being recorded into a single binding abstraction.
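The document-retrieval queries numbered above make the gap concrete: the first three translate directly into formal operations, while queries 5 and 6 do not. The sketch below is illustrative only; the directory paths, keywords and file extensions are placeholders, not part of any particular system:

```python
# Illustrative sketch of the first few query levels from the document-retrieval
# example. Queries 1-3 are mechanical; queries 5-6 ("related to humor",
# "likely to make my grandmother laugh") have no such direct translation.
import os

def query_1(directory: str):
    """Query 1: any file in a known directory."""
    return [os.path.join(directory, f) for f in os.listdir(directory)]

def query_2(root: str, word: str = "funny"):
    """Query 2: any file whose *name* contains the word."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        hits += [os.path.join(dirpath, f) for f in files if word in f.lower()]
    return hits

def query_3(root: str, words=("funny", "humor")):
    """Query 3: any text file whose *contents* contain one of the words."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for f in files:
            if not f.endswith(".txt"):
                continue
            path = os.path.join(dirpath, f)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    text = fh.read().lower()
            except OSError:
                continue
            if any(w in text for w in words):
                hits.append(path)
    return hits

# Queries 5 and 6 cannot be written this way: "humor" as a concept and
# "my grandmother" as an entity lie outside the system's formal vocabulary.
```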
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Orbit** Orbit: In celestial mechanics, an orbit (also known as orbital revolution) is the curved trajectory of an object such as the trajectory of a planet around a star, or of a natural satellite around a planet, or of an artificial satellite around an object or position in space such as a planet, moon, asteroid, or Lagrange point. Normally, orbit refers to a regularly repeating trajectory, although it may also refer to a non-repeating trajectory. To a close approximation, planets and satellites follow elliptic orbits, with the center of mass being orbited at a focal point of the ellipse, as described by Kepler's laws of planetary motion. Orbit: For most situations, orbital motion is adequately approximated by Newtonian mechanics, which explains gravity as a force obeying an inverse-square law. However, Albert Einstein's general theory of relativity, which accounts for gravity as due to curvature of spacetime, with orbits following geodesics, provides a more accurate calculation and understanding of the exact mechanics of orbital motion. History: Historically, the apparent motions of the planets were described by European and Arabic philosophers using the idea of celestial spheres. This model posited the existence of perfect moving spheres or rings to which the stars and planets were attached. It assumed the heavens were fixed apart from the motion of the spheres and was developed without any understanding of gravity. After the planets' motions were more accurately measured, theoretical mechanisms such as deferent and epicycles were added. Although the model was capable of reasonably accurately predicting the planets' positions in the sky, more and more epicycles were required as the measurements became more accurate, hence the model became increasingly unwieldy. Originally geocentric, it was modified by Copernicus to place the Sun at the centre to help simplify the model. The model was further challenged during the 16th century, as comets were observed traversing the spheres.The basis for the modern understanding of orbits was first formulated by Johannes Kepler whose results are summarised in his three laws of planetary motion. First, he found that the orbits of the planets in our Solar System are elliptical, not circular (or epicyclic), as had previously been believed, and that the Sun is not located at the center of the orbits, but rather at one focus. Second, he found that the orbital speed of each planet is not constant, as had previously been thought, but rather that the speed depends on the planet's distance from the Sun. Third, Kepler found a universal relationship between the orbital properties of all the planets orbiting the Sun. For the planets, the cubes of their distances from the Sun are proportional to the squares of their orbital periods. Jupiter and Venus, for example, are respectively about 5.2 and 0.723 AU distant from the Sun, their orbital periods respectively about 11.86 and 0.615 years. The proportionality is seen by the fact that the ratio for Jupiter, 5.23/11.862, is practically equal to that for Venus, 0.7233/0.6152, in accord with the relationship. Idealised orbits meeting these rules are known as Kepler orbits. History: Isaac Newton demonstrated that Kepler's laws were derivable from his theory of gravitation and that, in general, the orbits of bodies subject to gravity were conic sections (this assumes that the force of gravity propagates instantaneously). 
Newton showed that, for a pair of bodies, the orbits' sizes are in inverse proportion to their masses, and that those bodies orbit their common center of mass. Where one body is much more massive than the other (as is the case of an artificial satellite orbiting a planet), it is a convenient approximation to take the center of mass as coinciding with the center of the more massive body. History: Advances in Newtonian mechanics were then used to explore variations from the simple assumptions behind Kepler orbits, such as the perturbations due to other bodies, or the impact of spheroidal rather than spherical bodies. Joseph-Louis Lagrange developed a new approach to Newtonian mechanics emphasizing energy more than force, and made progress on the three-body problem, discovering the Lagrangian points. In a dramatic vindication of classical mechanics, in 1846 Urbain Le Verrier was able to predict the position of Neptune based on unexplained perturbations in the orbit of Uranus. History: Albert Einstein in his 1916 paper The Foundation of the General Theory of Relativity explained that gravity was due to curvature of space-time and removed Newton's assumption that changes propagate instantaneously. This led astronomers to recognize that Newtonian mechanics did not provide the highest accuracy in understanding orbits. In relativity theory, orbits follow geodesic trajectories which are usually approximated very well by the Newtonian predictions (except where there are very strong gravity fields and very high speeds) but the differences are measurable. Essentially all the experimental evidence that can distinguish between the theories agrees with relativity theory to within experimental measurement accuracy. The original vindication of general relativity is that it was able to account for the remaining unexplained amount in precession of Mercury's perihelion first noted by Le Verrier. However, Newton's solution is still used for most short term purposes since it is significantly easier to use and sufficiently accurate. Planetary orbits: Within a planetary system, planets, dwarf planets, asteroids and other minor planets, comets, and space debris orbit the system's barycenter in elliptical orbits. A comet in a parabolic or hyperbolic orbit about a barycenter is not gravitationally bound to the star and therefore is not considered part of the star's planetary system. Bodies that are gravitationally bound to one of the planets in a planetary system, either natural or artificial satellites, follow orbits about a barycenter near or within that planet. Planetary orbits: Owing to mutual gravitational perturbations, the eccentricities of the planetary orbits vary over time. Mercury, the smallest planet in the Solar System, has the most eccentric orbit. At the present epoch, Mars has the next largest eccentricity while the smallest orbital eccentricities are seen with Venus and Neptune. Planetary orbits: As two objects orbit each other, the periapsis is that point at which the two objects are closest to each other and the apoapsis is that point at which they are the farthest. (More specific terms are used for specific bodies. For example, perigee and apogee are the lowest and highest parts of an orbit around Earth, while perihelion and aphelion are the closest and farthest points of an orbit around the Sun.) In the case of planets orbiting a star, the mass of the star and all its satellites are calculated to be at a single point called the barycenter. 
The paths of all the star's satellites are elliptical orbits about that barycenter. Each satellite in that system will have its own elliptical orbit with the barycenter at one focal point of that ellipse. At any point along its orbit, any satellite will have a certain value of kinetic and potential energy with respect to the barycenter, and the sum of those two energies is a constant value at every point along its orbit. As a result, as a planet approaches periapsis, the planet will increase in speed as its potential energy decreases; as a planet approaches apoapsis, its velocity will decrease as its potential energy increases. Principles: There are a few common ways of understanding orbits: A force, such as gravity, pulls an object into a curved path as it attempts to fly off in a straight line. Principles: As the object is pulled toward the massive body, it falls toward that body. However, if it has enough tangential velocity it will not fall into the body but will instead continue to follow the curved trajectory caused by that body indefinitely. The object is then said to be orbiting the body. The velocity relationship of two moving objects with mass can thus be considered in four practical classes, with subtypes: no orbit; suborbital trajectories (a range of interrupted elliptical paths); orbital trajectories (or simply, orbits); and open (or escape) trajectories. It is worth noting that orbital rockets are launched vertically at first to lift the rocket above the atmosphere (which causes frictional drag), and then slowly pitch over and finish firing the rocket engine parallel to the atmosphere to achieve orbit speed. Principles: Once in orbit, their speed keeps them in orbit above the atmosphere. If, for example, an elliptical orbit dips into dense air, the object will lose speed and re-enter (i.e. fall). Occasionally a spacecraft will intentionally intercept the atmosphere, in an act commonly referred to as an aerobraking maneuver. Principles: Illustration As an illustration of an orbit around a planet, Newton's cannonball model may prove useful. This is a 'thought experiment', in which a cannon on top of a tall mountain is able to fire a cannonball horizontally at any chosen muzzle speed. The effects of air friction on the cannonball are ignored (or perhaps the mountain is high enough that the cannon is above the Earth's atmosphere, which is the same thing). If the cannon fires its ball with a low initial speed, the trajectory of the ball curves downward and hits the ground (A). As the firing speed is increased, the cannonball hits the ground farther (B) away from the cannon, because while the ball is still falling towards the ground, the ground is increasingly curving away from it (see first point, above). All these motions are actually "orbits" in a technical sense—they are describing a portion of an elliptical path around the center of gravity—but the orbits are interrupted by striking the Earth. Principles: If the cannonball is fired with sufficient speed, the ground curves away from the ball at least as much as the ball falls—so the ball never strikes the ground. It is now in what could be called a non-interrupted, or circumnavigating, orbit. For any specific combination of height above the center of gravity and mass of the planet, there is one specific firing speed (unaffected by the mass of the ball, which is assumed to be very small relative to the Earth's mass) that produces a circular orbit, as shown in (C).
Principles: As the firing speed is increased beyond this, non-interrupted elliptic orbits are produced; one is shown in (D). If the initial firing is above the surface of the Earth as shown, there will also be non-interrupted elliptical orbits at slower firing speed; these will come closest to the Earth at the point half an orbit beyond, and directly opposite the firing point, below the circular orbit. Principles: At a specific horizontal firing speed called escape velocity, dependent on the mass of the planet and the distance of the object from the barycenter, an open orbit (E) is achieved that has a parabolic path. At even greater speeds the object will follow a range of hyperbolic trajectories. In a practical sense, both of these trajectory types mean the object is "breaking free" of the planet's gravity, and "going off into space" never to return. Newton's laws of motion: Newton's law of gravitation and laws of motion for two-body problems In most situations, relativistic effects can be neglected, and Newton's laws give a sufficiently accurate description of motion. The acceleration of a body is equal to the sum of the forces acting on it, divided by its mass, and the gravitational force acting on a body is proportional to the product of the masses of the two attracting bodies and decreases inversely with the square of the distance between them. To this Newtonian approximation, for a system of two-point masses or spherical bodies, only influenced by their mutual gravitation (called a two-body problem), their trajectories can be exactly calculated. If the heavier body is much more massive than the smaller, as in the case of a satellite or small moon orbiting a planet or for the Earth orbiting the Sun, it is accurate enough and convenient to describe the motion in terms of a coordinate system that is centered on the heavier body, and we say that the lighter body is in orbit around the heavier. For the case where the masses of two bodies are comparable, an exact Newtonian solution is still sufficient and can be had by placing the coordinate system at the center of the mass of the system. Newton's laws of motion: Defining gravitational potential energy Energy is associated with gravitational fields. A stationary body far from another can do external work if it is pulled towards it, and therefore has gravitational potential energy. Since work is required to separate two bodies against the pull of gravity, their gravitational potential energy increases as they are separated, and decreases as they approach one another. For point masses, the gravitational energy decreases to zero as they approach zero separation. It is convenient and conventional to assign the potential energy as having zero value when they are an infinite distance apart, and hence it has a negative value (since it decreases from zero) for smaller finite distances. Newton's laws of motion: Orbital energies and orbit shapes When only two gravitational bodies interact, their orbits follow a conic section. The orbit can be open (implying the object never returns) or closed (returning). Which it is depends on the total energy (kinetic + potential energy) of the system. In the case of an open orbit, the speed at any position of the orbit is at least the escape velocity for that position, in the case of a closed orbit, the speed is always less than the escape velocity. 
Since the kinetic energy is never negative if the common convention is adopted of taking the potential energy as zero at infinite separation, the bound orbits will have negative total energy, the parabolic trajectories zero total energy, and hyperbolic orbits positive total energy. Newton's laws of motion: An open orbit will have a parabolic shape if it has the velocity of exactly the escape velocity at that point in its trajectory, and it will have the shape of a hyperbola when its velocity is greater than the escape velocity. When bodies with escape velocity or greater approach each other, they will briefly curve around each other at the time of their closest approach, and then separate, forever. Newton's laws of motion: All closed orbits have the shape of an ellipse. A circular orbit is a special case, wherein the foci of the ellipse coincide. The point where the orbiting body is closest to Earth is called the perigee, and is called the periapsis (less properly, "perifocus" or "pericentron") when the orbit is about a body other than Earth. The point where the satellite is farthest from Earth is called the apogee, apoapsis, or sometimes apifocus or apocentron. A line drawn from periapsis to apoapsis is the line-of-apsides. This is the major axis of the ellipse, the line through its longest part. Newton's laws of motion: Kepler's laws Bodies following closed orbits repeat their paths with a certain time called the period. This motion is described by the empirical laws of Kepler, which can be mathematically derived from Newton's laws. These can be formulated as follows: The orbit of a planet around the Sun is an ellipse, with the Sun in one of the focal points of that ellipse. [This focal point is actually the barycenter of the Sun-planet system; for simplicity, this explanation assumes the Sun's mass is infinitely larger than that planet's.] The planet's orbit lies in a plane, called the orbital plane. The point on the orbit closest to the attracting body is the periapsis. The point farthest from the attracting body is called the apoapsis. There are also specific terms for orbits about particular bodies; things orbiting the Sun have a perihelion and aphelion, things orbiting the Earth have a perigee and apogee, and things orbiting the Moon have a perilune and apolune (or periselene and aposelene respectively). An orbit around any star, not just the Sun, has a periastron and an apastron. Newton's laws of motion: As the planet moves in its orbit, the line from the Sun to the planet sweeps a constant area of the orbital plane for a given period of time, regardless of which part of its orbit the planet traces during that period of time. This means that the planet moves faster near its perihelion than near its aphelion, because at the smaller distance it needs to trace a greater arc to cover the same area. This law is usually stated as "equal areas in equal time." For a given orbit, the ratio of the cube of its semi-major axis to the square of its period is constant. 
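A minimal sketch of the energy rule just described: the sign of the specific orbital energy (kinetic plus potential per unit mass) determines whether a trajectory is bound (elliptic), parabolic, or hyperbolic. The sample radius of 7,000 km and the standard value of Earth's gravitational parameter are illustrative assumptions, not figures taken from the text.

```python
import math

MU_EARTH = 3.986004418e14  # standard gravitational parameter of Earth, m^3/s^2

def classify_orbit(r_m: float, v_ms: float, mu: float = MU_EARTH) -> str:
    """Classify a trajectory from its specific orbital energy eps = v^2/2 - mu/r."""
    eps = v_ms ** 2 / 2.0 - mu / r_m
    if abs(eps) < 1e-3:          # numerically "zero": parabolic escape trajectory
        return "parabolic (escape)"
    return "elliptic (bound)" if eps < 0 else "hyperbolic (escape)"

r = 7.0e6                                  # 7,000 km from Earth's center
v_circular = math.sqrt(MU_EARTH / r)       # speed of a circular orbit at r
v_escape = math.sqrt(2 * MU_EARTH / r)     # escape velocity at r

for label, v in [("circular speed", v_circular),
                 ("escape speed", v_escape),
                 ("1.2 x escape speed", 1.2 * v_escape)]:
    print(f"{label}: v = {v/1000:.2f} km/s -> {classify_orbit(r, v)}")
```

Speeds below the escape speed give negative energy and a closed orbit; exactly the escape speed gives zero energy and a parabola; anything faster gives positive energy and a hyperbola, matching the classification above.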
Newton's laws of motion: Limitations of Newton's law of gravitation Note that while bound orbits of a point mass or a spherical body with a Newtonian gravitational field are closed ellipses, which repeat the same path exactly and indefinitely, any non-spherical or non-Newtonian effects (such as caused by the slight oblateness of the Earth, or by relativistic effects, thereby changing the gravitational field's behavior with distance) will cause the orbit's shape to depart from the closed ellipses characteristic of Newtonian two-body motion. The two-body solutions were published by Newton in Principia in 1687. In 1912, Karl Fritiof Sundman developed a converging infinite series that solves the three-body problem; however, it converges too slowly to be of much use. Except for special cases like the Lagrangian points, no method is known to solve the equations of motion for a system with four or more bodies. Newton's laws of motion: Approaches to many-body problems Rather than an exact closed form solution, orbits with many bodies can be approximated with arbitrarily high accuracy. These approximations take two forms: One form takes the pure elliptic motion as a basis and adds perturbation terms to account for the gravitational influence of multiple bodies. This is convenient for calculating the positions of astronomical bodies. The equations of motion of the moons, planets, and other bodies are known with great accuracy, and are used to generate tables for celestial navigation. Still, there are secular phenomena that have to be dealt with by post-Newtonian methods. Newton's laws of motion: The differential equation form is used for scientific or mission-planning purposes. According to Newton's laws, the sum of all the forces acting on a body will equal the mass of the body times its acceleration (F = ma). Therefore accelerations can be expressed in terms of positions. The perturbation terms are much easier to describe in this form. Predicting subsequent positions and velocities from initial values of position and velocity corresponds to solving an initial value problem. Numerical methods calculate the positions and velocities of the objects a short time in the future, then repeat the calculation ad nauseam. However, tiny arithmetic errors from the limited accuracy of a computer's math are cumulative, which limits the accuracy of this approach. Differential simulations with large numbers of objects perform the calculations in a hierarchical pairwise fashion between centers of mass. Using this scheme, galaxies, star clusters and other large assemblages of objects have been simulated. Newtonian analysis of orbital motion: The following derivation applies to such an elliptical orbit. We start only with the Newtonian law of gravitation stating that the gravitational acceleration towards the central body is related to the inverse of the square of the distance between them, namely $F_2 = -\frac{G m_1 m_2}{r^2}$ where $F_2$ is the force acting on the mass $m_2$ caused by the gravitational attraction mass $m_1$ has for $m_2$, G is the universal gravitational constant, and r is the distance between the two masses' centers. Newtonian analysis of orbital motion: From Newton's Second Law, the summation of the forces acting on $m_2$ is related to that body's acceleration: $F_2 = m_2 A_2$ where $A_2$ is the acceleration of $m_2$ caused by the force of gravitational attraction $F_2$ of $m_1$ acting on $m_2$. Combining Eq. 1 and 2: $-\frac{G m_1 m_2}{r^2} = m_2 A_2$. Solving for the acceleration, $A_2$: $A_2 = \frac{F_2}{m_2} = -\frac{1}{m_2}\frac{G m_1 m_2}{r^2} = -\frac{\mu}{r^2}$ where $\mu$ is the standard gravitational parameter, in this case $G m_1$.
It is understood that the system being described is $m_2$, hence the subscripts can be dropped. We assume that the central body is massive enough that it can be considered to be stationary and we ignore the more subtle effects of general relativity. When a pendulum or an object attached to a spring swings in an ellipse, the inward acceleration/force is proportional to the distance, $A = F/m = -kr$. Newtonian analysis of orbital motion: Due to the way vectors add, the components of the force in the $\hat{x}$ or in the $\hat{y}$ directions are also proportional to the respective components of the distances, $\ddot{r}_x = A_x = -k r_x$. Hence, the entire analysis can be done separately in these dimensions. This results in the harmonic parabolic equations $x = A\cos(t)$ and $y = B\sin(t)$ of the ellipse. In contrast, with the decreasing relationship $A = \mu/r^2$, the dimensions cannot be separated. The location of the orbiting object at the current time $t$ is located in the plane using vector calculus in polar coordinates, both with the standard Euclidean basis and with the polar basis, with the origin coinciding with the center of force. Let $r$ be the distance between the object and the center and $\theta$ be the angle it has rotated. Let $\hat{x}$ and $\hat{y}$ be the standard Euclidean bases and let $\hat{r} = \cos(\theta)\hat{x} + \sin(\theta)\hat{y}$ and $\hat{\theta} = -\sin(\theta)\hat{x} + \cos(\theta)\hat{y}$ be the radial and transverse polar basis, the first being the unit vector pointing from the central body to the current location of the orbiting object and the second being the orthogonal unit vector pointing in the direction that the orbiting object would travel if orbiting in a counterclockwise circle. Then the vector to the orbiting object is $\vec{O} = r\cos(\theta)\hat{x} + r\sin(\theta)\hat{y} = r\hat{r}$. We use $\dot{r}$ and $\dot{\theta}$ to denote the standard derivatives of how this distance and angle change over time. We take the derivative of a vector to see how it changes over time by subtracting its location at time $t$ from that at time $t + \delta t$ and dividing by $\delta t$. The result is also a vector. Because our basis vector $\hat{r}$ moves as the object orbits, we start by differentiating it. From time $t$ to $t + \delta t$, the vector $\hat{r}$ keeps its beginning at the origin and rotates from angle $\theta$ to $\theta + \dot{\theta}\,\delta t$, which moves its head a distance $\dot{\theta}\,\delta t$ in the perpendicular direction $\hat{\theta}$, giving a derivative of $\dot{\theta}\hat{\theta}$: $\frac{\delta\hat{r}}{\delta t} = -\sin(\theta)\dot{\theta}\,\hat{x} + \cos(\theta)\dot{\theta}\,\hat{y} = \dot{\theta}\hat{\theta}$, and similarly $\frac{\delta\hat{\theta}}{\delta t} = -\cos(\theta)\dot{\theta}\,\hat{x} - \sin(\theta)\dot{\theta}\,\hat{y} = -\dot{\theta}\hat{r}$. We can now find the velocity and acceleration of our orbiting object. Newtonian analysis of orbital motion: $\vec{O} = r\hat{r}$, $\dot{\vec{O}} = \frac{\delta r}{\delta t}\hat{r} + r\frac{\delta\hat{r}}{\delta t} = \dot{r}\hat{r} + r[\dot{\theta}\hat{\theta}]$, $\ddot{\vec{O}} = [\ddot{r}\hat{r} + \dot{r}\dot{\theta}\hat{\theta}] + [\dot{r}\dot{\theta}\hat{\theta} + r\ddot{\theta}\hat{\theta} - r\dot{\theta}^2\hat{r}] = [\ddot{r} - r\dot{\theta}^2]\hat{r} + [r\ddot{\theta} + 2\dot{r}\dot{\theta}]\hat{\theta}$. The coefficients of $\hat{r}$ and $\hat{\theta}$ give the accelerations in the radial and transverse directions. As said, Newton gives the first as $-\mu/r^2$ due to gravity and the second as zero, so $\ddot{r} - r\dot{\theta}^2 = -\frac{\mu}{r^2}$ (1) and $r\ddot{\theta} + 2\dot{r}\dot{\theta} = 0$ (2). Equation (2) can be rearranged using integration by parts: $r\ddot{\theta} + 2\dot{r}\dot{\theta} = \frac{1}{r}\frac{d}{dt}\left(r^2\dot{\theta}\right) = 0$. We can multiply through by $r$ because it is not zero unless the orbiting object crashes. Then having the derivative be zero gives that the function is a constant, $r^2\dot{\theta} = h$ (3), which is actually the theoretical proof of Kepler's second law (a line joining a planet and the Sun sweeps out equal areas during equal intervals of time). The constant of integration, h, is the angular momentum per unit mass. Newtonian analysis of orbital motion: In order to get an equation for the orbit from equation (1), we need to eliminate time. (See also Binet equation.) In polar coordinates, this would express the distance $r$ of the orbiting object from the center as a function of its angle $\theta$. However, it is easier to introduce the auxiliary variable $u = 1/r$ and to express $u$ as a function of $\theta$.
Derivatives of r with respect to time may be rewritten as derivatives of u with respect to angle. Newtonian analysis of orbital motion: $u = \frac{1}{r}$, $\dot{\theta} = \frac{h}{r^2} = h u^2$ (reworking (3)), $\dot{r} = -h\frac{\delta u}{\delta\theta}$ and $\ddot{r} = -h^2 u^2 \frac{\delta^2 u}{\delta\theta^2}$. Plugging these into (1) gives $\ddot{r} - r\dot{\theta}^2 = -\frac{\mu}{r^2}$, that is, $-h^2 u^2 \frac{\delta^2 u}{\delta\theta^2} - \frac{1}{u}\left(h u^2\right)^2 = -\mu u^2$. So for the gravitational force – or, more generally, for any inverse square force law – the right hand side of the equation becomes a constant and the equation is seen to be the harmonic equation (up to a shift of origin of the dependent variable). The solution is: $u = \frac{\mu}{h^2} + A\cos(\theta - \theta_0)$ where A and $\theta_0$ are arbitrary constants. This resulting equation of the orbit of the object is that of an ellipse in polar form relative to one of the focal points. This is put into a more standard form by letting $e \equiv h^2 A/\mu$ be the eccentricity and letting $a \equiv \frac{h^2}{\mu(1 - e^2)}$ be the semi-major axis. Finally, letting $\theta_0 \equiv 0$ so the long axis of the ellipse is along the positive x coordinate. Newtonian analysis of orbital motion: $r = \frac{a(1 - e^2)}{1 + e\cos\theta}$. When the two-body system is under the influence of torque, the angular momentum h is not a constant. After the following calculation: $\frac{\delta r}{\delta\theta} = -\frac{1}{u^2}\frac{\delta u}{\delta\theta} = -\frac{h}{m}\frac{\delta u}{\delta\theta}$, $\frac{\delta^2 r}{\delta\theta^2} = -\frac{h^2 u^2}{m^2}\frac{\delta^2 u}{\delta\theta^2} - \frac{h u^2}{m^2}\frac{\delta h}{\delta\theta}\frac{\delta u}{\delta\theta}$, $\left(\frac{\delta\theta}{\delta t}\right)^2 r = \frac{h^2 u^3}{m^2}$, we will get the Sturm-Liouville equation of the two-body system. Relativistic orbital motion: The above classical (Newtonian) analysis of orbital mechanics assumes that the more subtle effects of general relativity, such as frame dragging and gravitational time dilation, are negligible. Relativistic effects cease to be negligible when near very massive bodies (as with the precession of Mercury's orbit about the Sun), or when extreme precision is needed (as with calculations of the orbital elements and time signal references for GPS satellites). Orbital planes: The analysis so far has been two dimensional; it turns out that an unperturbed orbit is two-dimensional in a plane fixed in space, and thus the extension to three dimensions requires simply rotating the two-dimensional plane into the required angle relative to the poles of the planetary body involved. The rotation to do this in three dimensions requires three numbers to determine uniquely; traditionally these are expressed as three angles. Orbital period: The orbital period is simply how long an orbiting body takes to complete one orbit. Specifying orbits: Six parameters are required to specify a Keplerian orbit about a body. For example, the three numbers that specify the body's initial position and the three values that specify its velocity will define a unique orbit that can be calculated forwards (or backwards) in time. However, traditionally the parameters used are slightly different. Specifying orbits: The traditionally used set of orbital elements is called the set of Keplerian elements, after Johannes Kepler and his laws. The Keplerian elements are six: inclination (i), longitude of the ascending node (Ω), argument of periapsis (ω), eccentricity (e), semimajor axis (a) and mean anomaly at epoch (M0). In principle, once the orbital elements are known for a body, its position can be calculated forward and backward indefinitely in time. However, in practice, orbits are affected, or perturbed, by forces other than simple gravity from an assumed point source (see the next section), and thus the orbital elements change over time.
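To connect the polar-form orbit equation derived above with the apsis terminology used throughout this article, here is a small sketch that evaluates r = a(1 − e²)/(1 + e cos θ) at θ = 0 and θ = π. The values a = 1 and e = 0.3 are arbitrary illustrations, not figures from the text.

```python
import math

def orbit_radius(a: float, e: float, theta: float) -> float:
    """Distance from the focus for a Kepler orbit: r = a(1 - e^2) / (1 + e*cos(theta))."""
    return a * (1.0 - e ** 2) / (1.0 + e * math.cos(theta))

a = 1.0      # semi-major axis (arbitrary units, e.g. AU)
e = 0.3      # eccentricity of the example ellipse

r_periapsis = orbit_radius(a, e, 0.0)       # theta = 0: closest approach
r_apoapsis = orbit_radius(a, e, math.pi)    # theta = pi: farthest point

print(f"periapsis r = {r_periapsis:.3f}  (equals a(1-e) = {a*(1-e):.3f})")
print(f"apoapsis  r = {r_apoapsis:.3f}  (equals a(1+e) = {a*(1+e):.3f})")
```

The two printed radii reproduce the familiar identities r_periapsis = a(1 − e) and r_apoapsis = a(1 + e), which follow directly from the equation at θ = 0 and θ = π.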
Perturbations: An orbital perturbation is when a force or impulse which is much smaller than the overall force or average impulse of the main gravitating body and which is external to the two orbiting bodies causes an acceleration, which changes the parameters of the orbit over time. Perturbations: Radial, prograde and transverse perturbations A small radial impulse given to a body in orbit changes the eccentricity, but not the orbital period (to first order). A prograde or retrograde impulse (i.e. an impulse applied along the orbital motion) changes both the eccentricity and the orbital period. Notably, a prograde impulse at periapsis raises the altitude at apoapsis, and vice versa and a retrograde impulse does the opposite. A transverse impulse (out of the orbital plane) causes rotation of the orbital plane without changing the period or eccentricity. In all instances, a closed orbit will still intersect the perturbation point. Perturbations: Orbital decay If an orbit is about a planetary body with a significant atmosphere, its orbit can decay because of drag. Particularly at each periapsis, the object experiences atmospheric drag, losing energy. Each time, the orbit grows less eccentric (more circular) because the object loses kinetic energy precisely when that energy is at its maximum. This is similar to the effect of slowing a pendulum at its lowest point; the highest point of the pendulum's swing becomes lower. With each successive slowing more of the orbit's path is affected by the atmosphere and the effect becomes more pronounced. Eventually, the effect becomes so great that the maximum kinetic energy is not enough to return the orbit above the limits of the atmospheric drag effect. When this happens the body will rapidly spiral down and intersect the central body. Perturbations: The bounds of an atmosphere vary wildly. During a solar maximum, the Earth's atmosphere causes drag up to a hundred kilometres higher than during a solar minimum. Some satellites with long conductive tethers can also experience orbital decay because of electromagnetic drag from the Earth's magnetic field. As the wire cuts the magnetic field it acts as a generator, moving electrons from one end to the other. The orbital energy is converted to heat in the wire. Orbits can be artificially influenced through the use of rocket engines which change the kinetic energy of the body at some point in its path. This is the conversion of chemical or electrical energy to kinetic energy. In this way changes in the orbit shape or orientation can be facilitated. Another method of artificially influencing an orbit is through the use of solar sails or magnetic sails. These forms of propulsion require no propellant or energy input other than that of the Sun, and so can be used indefinitely. See statite for one such proposed use. Perturbations: Orbital decay can occur due to tidal forces for objects below the synchronous orbit for the body they're orbiting. The gravity of the orbiting object raises tidal bulges in the primary, and since below the synchronous orbit, the orbiting object is moving faster than the body's surface the bulges lag a short angle behind it. The gravity of the bulges is slightly off of the primary-satellite axis and thus has a component along with the satellite's motion. The near bulge slows the object more than the far bulge speeds it up, and as a result, the orbit decays. Conversely, the gravity of the satellite on the bulges applies torque on the primary and speeds up its rotation. 
Artificial satellites are too small to have an appreciable tidal effect on the planets they orbit, but several moons in the Solar System are undergoing orbital decay by this mechanism. Mars' innermost moon Phobos is a prime example and is expected to either impact Mars' surface or break up into a ring within 50 million years. Perturbations: Orbits can decay via the emission of gravitational waves. This mechanism is extremely weak for most stellar objects, only becoming significant in cases where there is a combination of extreme mass and extreme acceleration, such as with black holes or neutron stars that are orbiting each other closely. Oblateness The standard analysis of orbiting bodies assumes that all bodies consist of uniform spheres, or more generally, concentric shells each of uniform density. It can be shown that such bodies are gravitationally equivalent to point sources. Perturbations: However, in the real world, many bodies rotate, and this introduces oblateness and distorts the gravity field, and gives a quadrupole moment to the gravitational field which is significant at distances comparable to the radius of the body. In the general case, the gravitational potential of a rotating body such as, e.g., a planet is usually expanded in multipoles accounting for the departures of it from spherical symmetry. From the point of view of satellite dynamics, of particular relevance are the so-called even zonal harmonic coefficients, or even zonals, since they induce secular orbital perturbations which are cumulative over time spans longer than the orbital period. They do depend on the orientation of the body's symmetry axis in the space, affecting, in general, the whole orbit, with the exception of the semimajor axis. Perturbations: Multiple gravitating bodies The effects of other gravitating bodies can be significant. For example, the orbit of the Moon cannot be accurately described without allowing for the action of the Sun's gravity as well as the Earth's. One approximate result is that bodies will usually have reasonably stable orbits around a heavier planet or moon, in spite of these perturbations, provided they are orbiting well within the heavier body's Hill sphere. Perturbations: When there are more than two gravitating bodies it is referred to as an n-body problem. Most n-body problems have no closed form solution, although some special cases have been formulated. Light radiation and stellar wind For smaller bodies particularly, light and stellar wind can cause significant perturbations to the attitude and direction of motion of the body, and over time can be significant. Of the planetary bodies, the motion of asteroids is particularly affected over large periods when the asteroids are rotating relative to the Sun. Strange orbits: Mathematicians have discovered that it is possible in principle to have multiple bodies in non-elliptical orbits that repeat periodically, although most such orbits are not stable regarding small perturbations in mass, position, or velocity. However, some special stable cases have been identified, including a planar figure-eight orbit occupied by three moving bodies. Further studies have discovered that nonplanar orbits are also possible, including one involving 12 masses moving in 4 roughly circular, interlocking orbits topologically equivalent to the edges of a cuboctahedron.Finding such orbits naturally occurring in the universe is thought to be extremely unlikely, because of the improbability of the required conditions occurring by chance. 
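The step-by-step numerical approach to many-body orbits described earlier (compute accelerations from positions, advance a short time step, repeat) can be sketched in a few lines. This is a toy pairwise integrator under simplified assumptions (kick-drift-kick leapfrog stepping, unit masses and unit gravitational constant), not the hierarchical schemes used for galaxies or a production n-body code.

```python
import numpy as np

G = 1.0  # gravitational constant in toy units

def accelerations(pos, masses):
    """Pairwise Newtonian accelerations for a small number of bodies."""
    n = len(masses)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * masses[j] * d / np.linalg.norm(d) ** 3
    return acc

def leapfrog(pos, vel, masses, dt, steps):
    """Advance the system with the kick-drift-kick leapfrog scheme."""
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel += 0.5 * dt * acc          # half kick
        pos += dt * vel                # drift
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc          # half kick
    return pos, vel

# Two equal masses on an (approximately) circular mutual orbit, as a toy initial condition.
masses = np.array([1.0, 1.0])
pos = np.array([[-0.5, 0.0], [0.5, 0.0]])
v = np.sqrt(0.5)                       # circular speed for this separation in toy units
vel = np.array([[0.0, -v], [0.0, v]])
pos, vel = leapfrog(pos, vel, masses, dt=0.01, steps=1000)
print(pos)
```

As the text notes, accumulated arithmetic error limits such direct integration; the leapfrog scheme is a common choice because it conserves energy much better over long runs than naive Euler stepping.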
Astrodynamics: Orbital mechanics or astrodynamics is the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets and other spacecraft. The motion of these objects is usually calculated from Newton's laws of motion and Newton's law of universal gravitation. It is a core discipline within space mission design and control. Celestial mechanics treats more broadly the orbital dynamics of systems under the influence of gravity, including spacecraft and natural astronomical bodies such as star systems, planets, moons, and comets. Orbital mechanics focuses on spacecraft trajectories, including orbital maneuvers, orbit plane changes, and interplanetary transfers, and is used by mission planners to predict the results of propulsive maneuvers. General relativity is a more exact theory than Newton's laws for calculating orbits, and is sometimes necessary for greater accuracy or in high-gravity situations (such as orbits close to the Sun). Earth orbits: Low Earth orbit (LEO): Geocentric orbits with altitudes up to 2,000 km (0–1,240 miles). Earth orbits: Medium Earth orbit (MEO): Geocentric orbits ranging in altitude from 2,000 km (1,240 miles) to just below geosynchronous orbit at 35,786 kilometers (22,236 mi). Also known as an intermediate circular orbit. These are "most commonly at 20,200 kilometers (12,600 mi), or 20,650 kilometers (12,830 mi), with an orbital period of 12 hours." Both geosynchronous orbit (GSO) and geostationary orbit (GEO) are orbits around Earth matching Earth's sidereal rotation period. All geosynchronous and geostationary orbits have a semi-major axis of 42,164 km (26,199 mi). All geostationary orbits are also geosynchronous, but not all geosynchronous orbits are geostationary. A geostationary orbit stays exactly above the equator, whereas a geosynchronous orbit may swing north and south to cover more of the Earth's surface. Both complete one full orbit of Earth per sidereal day (relative to the stars, not the Sun). Earth orbits: High Earth orbit: Geocentric orbits above the altitude of geosynchronous orbit 35,786 km (22,240 miles). Scaling in gravity: The gravitational constant G has been calculated as: (6.6742 ± 0.001) × 10−11 (kg/m3)−1s−2.Thus the constant has dimension density−1 time−2. This corresponds to the following properties. Scaling in gravity: Scaling of distances (including sizes of bodies, while keeping the densities the same) gives similar orbits without scaling the time: if for example distances are halved, masses are divided by 8, gravitational forces by 16 and gravitational accelerations by 2. Hence velocities are halved and orbital periods and other travel times related to gravity remain the same. For example, when an object is dropped from a tower, the time it takes to fall to the ground remains the same with a scale model of the tower on a scale model of the Earth. Scaling in gravity: Scaling of distances while keeping the masses the same (in the case of point masses, or by adjusting the densities) gives similar orbits; if distances are multiplied by 4, gravitational forces and accelerations are divided by 16, velocities are halved and orbital periods are multiplied by 8. When all densities are multiplied by 4, orbits are the same; gravitational forces are multiplied by 16 and accelerations by 4, velocities are doubled and orbital periods are halved. 
When all densities are multiplied by 4, and all sizes are halved, orbits are similar; masses are divided by 2, gravitational forces are the same, gravitational accelerations are doubled. Hence velocities are the same and orbital periods are halved. In all these cases of scaling, if densities are multiplied by 4, times are halved; if velocities are doubled, forces are multiplied by 16. These properties are illustrated in the formula (derived from the formula for the orbital period) $G T^2 \rho = 3\pi\left(\frac{a}{r}\right)^3$, for an elliptical orbit with semi-major axis a, of a small body around a spherical body with radius r and average density ρ, where T is the orbital period. See also Kepler's Third Law. Patents: The application of certain orbits or orbital maneuvers to specific useful purposes has been the subject of patents. Tidal locking: Some bodies are tidally locked with other bodies, meaning that one side of the celestial body is permanently facing its host object. This is the case for the Earth–Moon and Pluto–Charon systems.
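Two quick checks tie together figures quoted in this article: the 42,164 km semi-major axis given for geosynchronous orbits follows from Earth's gravitational parameter and the sidereal day, and the scaling relation GT²ρ = 3π(a/r)³ gives the period of a low orbit grazing a body of a given density. The constants below are standard textbook values assumed for illustration, not taken from the text.

```python
import math

G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1      # length of the sidereal day, s

# Kepler's third law solved for the semi-major axis: a = (mu * T^2 / (4 pi^2))^(1/3)
a_geo = (MU_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
print(f"geosynchronous semi-major axis ~ {a_geo/1000:.0f} km")   # ~42,164 km

# Scaling relation G T^2 rho = 3 pi (a/r)^3 with a = r (an orbit skimming the surface):
rho_earth = 5514.0  # mean density of Earth, kg/m^3
T = math.sqrt(3 * math.pi / (G * rho_earth))
print(f"period of an orbit skimming a body of Earth's density ~ {T/60:.0f} minutes")  # ~84 min
```

Note that the grazing-orbit period depends only on the body's mean density, which is exactly the scaling behaviour the paragraph above describes.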
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Friedman test** Friedman test: The Friedman test is a non-parametric statistical test developed by Milton Friedman. Similar to the parametric repeated measures ANOVA, it is used to detect differences in treatments across multiple test attempts. The procedure involves ranking each row (or block) together, then considering the values of ranks by columns. Applicable to complete block designs, it is thus a special case of the Durbin test. Friedman test: Classic examples of use are: n wine judges each rate k different wines. Are any of the k wines ranked consistently higher or lower than the others? n welders each use k welding torches, and the ensuing welds were rated on quality. Do any of the k torches produce consistently better or worse welds? The Friedman test is used for one-way repeated measures analysis of variance by ranks. In its use of ranks it is similar to the Kruskal–Wallis one-way analysis of variance by ranks. Friedman test: The Friedman test is widely supported by many statistical software packages. Method: Given data $\{x_{ij}\}_{n \times k}$, that is, a matrix with n rows (the blocks), k columns (the treatments) and a single observation at the intersection of each block and treatment, calculate the ranks within each block. If there are tied values, assign to each tied value the average of the ranks that would have been assigned without ties. Replace the data with a new matrix $\{r_{ij}\}_{n \times k}$ where the entry $r_{ij}$ is the rank of $x_{ij}$ within block $i$. Find the values $\bar{r}_{\cdot j} = \frac{1}{n}\sum_{i=1}^{n} r_{ij}$. The test statistic is given by $Q = \frac{12n}{k(k+1)}\sum_{j=1}^{k}\left(\bar{r}_{\cdot j} - \frac{k+1}{2}\right)^2$. Note that the value of Q does need to be adjusted for tied values in the data. Method: Finally, when n or k is large (i.e. n > 15 or k > 4), the probability distribution of Q can be approximated by that of a chi-squared distribution. In this case the p-value is given by $P(\chi^2_{k-1} \ge Q)$. If n or k is small, the approximation to chi-square becomes poor and the p-value should be obtained from tables of Q specially prepared for the Friedman test. If the p-value is significant, appropriate post-hoc multiple comparisons tests would be performed. Related tests: When using this kind of design for a binary response, one instead uses Cochran's Q test. The Sign test (with a two-sided alternative) is equivalent to a Friedman test on two groups. Kendall's W is a normalization of the Friedman statistic between 0 and 1. The Wilcoxon signed-rank test is a nonparametric test of nonindependent data from only two groups. The Skillings–Mack test is a general Friedman-type statistic that can be used in almost any block design with an arbitrary missing-data structure. The Wittkowski test is a general Friedman-type statistic similar to the Skillings–Mack test. When the data do not contain any missing value, it gives the same result as the Friedman test. But if the data contain missing values, it is both more precise and more sensitive than the Skillings–Mack test. An implementation of the test exists in R. Post hoc analysis: Post-hoc tests were proposed by Schaich and Hamerle (1984) as well as Conover (1971, 1980) in order to decide which groups are significantly different from each other, based upon the mean rank differences of the groups. These procedures are detailed in Bortz, Lienert and Boehnke (2000, p. 275). Eisinga, Heskes, Pelzer and Te Grotenhuis (2017) provide an exact test for pairwise comparison of Friedman rank sums, implemented in R. The Eisinga et al.
exact test offers a substantial improvement over available approximate tests, especially if the number of groups (k) is large and the number of blocks (n) is small. Post hoc analysis: Not all statistical packages support post-hoc analysis for Friedman's test, but user-contributed code exists that provides these facilities (for example in SPSS and in R). Also, there is a specialized package available in R containing numerous non-parametric methods for post-hoc analysis after the Friedman test.
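A small sketch of the procedure described in the Method section above: rank within each block (averaging ties), average the ranks per treatment, and form the chi-squared-approximated statistic. The wine-judging scores are made-up illustrative numbers; scipy's built-in friedmanchisquare is used only as a cross-check.

```python
import numpy as np
from scipy.stats import rankdata, chi2, friedmanchisquare

def friedman_statistic(x):
    """Friedman Q for an n x k matrix (n blocks/judges, k treatments/wines)."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    ranks = np.apply_along_axis(rankdata, 1, x)   # rank within each block; ties get average ranks
    mean_ranks = ranks.mean(axis=0)               # average rank of each treatment
    q = 12.0 * n / (k * (k + 1)) * np.sum((mean_ranks - (k + 1) / 2.0) ** 2)
    p = chi2.sf(q, df=k - 1)                      # chi-squared approximation, k - 1 degrees of freedom
    return q, p

# Illustrative data: 6 judges (rows) score 3 wines (columns).
scores = [[7, 9, 8],
          [6, 8, 7],
          [8, 9, 6],
          [5, 8, 7],
          [6, 9, 8],
          [7, 8, 6]]

q, p = friedman_statistic(scores)
print(f"Q = {q:.3f}, approximate p = {p:.4f}")

# Cross-check with scipy, which expects one argument per treatment (i.e. per column).
print(friedmanchisquare(*np.asarray(scores).T))
```

With no ties in any block, the hand-rolled statistic and scipy's result agree; as the text notes, for very small n or k the chi-squared p-value is only approximate and exact tables should be preferred.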
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Catch crop** Catch crop: In agriculture, a catch crop is a fast-growing crop that is grown between successive plantings of a main crop. It is a specific type of cover crop that is grown between two main crops. The crop is used primarily to reduce nitrogen leaching, but it also provides environmental benefits such as strengthening soil structure, retaining water and enhancing soil biological activity. Species suitable as catch crops have short growing seasons, rapid growth, and low soil and nutrient requirements. Catch cropping is a type of succession planting. It makes more efficient use of growing space. For example, radishes that mature from seed in 25–30 days can be grown between rows of most vegetables, and harvested long before the main crop matures. Or, a catch crop can be planted between the spring harvest and fall planting of some crops. Leach reduction: Growing catch crops improves and helps maintain soil organic matter. This is achieved by promoting an increase in the activity of soil microbes. Cultivation of this type of crop helps keep nutrients, specifically nitrogen and phosphorus, from leaching out of the soil. To obtain these benefits in a field, multiple factors have to be taken into account, the most crucial being harvesting date, soil tillage, and sowing time. When sowing is postponed beyond the recommended date, catch crop efficiency, growth and overall performance decline; studies have shown a decreased uptake of N. Soil-water management: Catch crops help protect soil against water erosion through their residue, which increases soil roughness. The residue protects the soil surface from splash effects and also contributes to lowering surface runoff. Certain catch crop species can also create biopores through decaying roots, which create flow paths in the soil and increase water infiltration. Although catch crops deplete soil water during their growth, some species help preserve water once they are killed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Biotin carboxyl carrier protein** Biotin carboxyl carrier protein: Biotin carboxyl carrier protein (BCCP) refers to proteins containing a biotin attachment domain that carry biotin and carboxybiotin throughout the ATP-dependent carboxylation by biotin-dependent carboxylases. The biotin carboxyl carrier protein is a subunit of acetyl-CoA carboxylase that allows acetyl-CoA to be converted to malonyl-CoA. More specifically, carboxylation of the carrier protein forms an intermediate; the carboxyl group is then transferred by the transcarboxylase to form malonyl-CoA. This conversion is an essential step in the biosynthesis of fatty acids. In the case of E. coli acetyl-CoA carboxylase, the BCCP is a separate protein known as accB (P0ABD8). On the other hand, in the Haloferax mediterranei propionyl-CoA carboxylase, the BCCP pccA (I3R7G3) is fused with biotin carboxylase. Biotin carboxyl carrier protein: The biosynthesis of fatty acids and of lipids such as triacylglycerol in plants is vital to the plant's overall health because it allows for accumulation of seed oil. The biosynthesis catalyzed by BCCP usually takes place in the chloroplast of plant cells, and it allows for the transfer of CO2 within active sites of the cell. The biotin carboxyl carrier protein carries approximately 1 mol of biotin per 22,000 g of protein. There is not much research on BCCPs at the moment; however, a recent study on plant genomics found that Brassica BCCPs might play a key role in abiotic and biotic stress responses, meaning that these proteins may be relaying messages to the rest of the plant body after it has been exposed to extreme conditions that disrupt the plant's homeostasis. Synthesis of Malonyl-CoA: The synthesis of malonyl-CoA consists of two half reactions: the first is the carboxylation of biotin with bicarbonate, and the second is the transfer of the CO2 group from carboxybiotin to acetyl-CoA to form malonyl-CoA. Two different protein subassemblies, along with BCCP, are required for this two-step reaction to be successful: biotin carboxylase (BC) and carboxyltransferase (CT). BCCP contains the biotin cofactor, which is covalently bound to a lysine residue. In fungi, mammals, and plant cytosols, all three of these components (BCCP, BC, and CT) exist on one polypeptide chain. However, most studies of this protein have been conducted on the E. coli form of the enzyme, where all three components exist as three separate complexes rather than being united on one polypeptide chain.
In addition, the linker before the BCCP core in the holoenzyme could also be flexible, which would give further reach for the biotin N1′ atom. The structures of biotin-accepting domains from E. coli BCCP-87 and the 1.3S subunit of P. shermanii TC were determined by both X-ray crystallography and nuclear magnetic resonance studies (Athappilly and Hendrickson, 1995; Roberts et al., 1999; Reddy et al., 1998). These produced essentially the same structures, which are structurally related to the lipoyl domains of 2-oxo acid dehydrogenase multienzyme complexes (Brocklehurst and Perham, 1993; Dardel et al., 1993), which similarly undergo an analogous post-translational modification. These domains form a flattened β-barrel structure comprising two four-stranded β-sheets with the N- and C-terminal residues close together at one end of the structure. At the other end of the molecule, the biotinyl- or lipoyl-accepting lysine resides on a highly exposed, tight hairpin loop between the β4 and β5 strands. The structure of the domain is stabilized by a core of hydrophobic residues, which are important structural determinants. Conserved glycine residues occupy β-turns linking the β-strands. The structure of the biotin-accepting domain of BCCP-87 contains a seven-amino-acid insertion common to certain prokaryotic acetyl-CoA carboxylases but not present in other biotin domains (Chapman-Smith and Cronan, 1999). This region of the peptide adopts a thumb structure between the β2 and β3 strands and, interestingly, forms direct contacts with the biotin moiety in both the crystal and solution structures (Athappilly and Hendrickson, 1995; Roberts et al., 1999). It has been proposed that this thumb may function as a mobile lid for either, or possibly both, of the biotin carboxylase and carboxyltransferase active sites in the biotin-dependent enzyme (Cronan, 2001). First, this lid could help prevent solvation of the active sites, thereby aiding in the transfer of CO2 from carboxybiotin to acetyl CoA. Secondly, the thumb is required for dimerization of BCCP, necessary for the formation of the active acetyl CoA carboxylase complex (Cronan, 2001). Finally, the thumb inhibits the aberrant lipoylation of the target lysine by lipoyl protein ligase (Reche and Perham, 1999). Removal of the thumb by mutagenesis rendered BCCP-87 a favorable substrate for lipoylation but abolished biotinylation (Reche and Perham, 1999). The thumb structure, however, is not a highly conserved feature amongst all biotin domains. Many biotin-dependent enzymes do not contain this insertion, including all five mammalian enzymes. However, it appears the interactions between biotin and protein might be a conserved feature and important for catalysis, as similar contacts have been observed in the "thumbless" domains from P. shermanii transcarboxylase (Jank et al., 2002) and the biotinyl/lipoyl attachment protein of B. subtilis (Cui et al., 2006). The significance of this requires further investigation, but it is possible that the mechanism employed by the biotin enzymes may involve noncovalent interactions between the protein and the prosthetic group.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hypoxia-activated prodrug** Hypoxia-activated prodrug: Hypoxia-Activated Prodrugs (HAPs) are prodrugs that target regions of tumor hypoxia within tumor cells. HAPs may offer the potential, alone and in combination with conventional chemotherapy, of improving cancer therapy. It is believed that tumor hypoxia contributes significantly to treatment failure and relapse among cancer patients because cells in the hypoxic zones of solid tumors resist traditional chemotherapy for at least two reasons: first, most antitumor agents cannot penetrate beyond 50-100 micrometers from capillaries, thereby never reaching those cells in the hypoxic regions. Secondly, the lower nutrient and oxygen supply to cells in the hypoxic zones of tumors cause them to divide more slowly than their well oxygenated counterparts, so hypoxic tumor cells exhibit greater resistance to chemotherapies and radiation which target rapidly dividing cells or require oxygen for efficacy. Hypoxia-activated prodrug: Hypoxia also contributes to the invasive and metastatic phenotypes of aggressive cancers by promoting genetic instability and accelerating the accumulation of mutations that can ultimately give rise to drug resistance.There are several companies developing HAPs: Novacea, Inc. (acquired by Transcept/Paratek pharmaceuticals), Proacta Inc. (now defunct) and Threshold Pharmaceuticals, Inc. These companies are involved in developing the following drug candidates: AQ4N (Novacea), PR-104 (Proacta) and TH-302 (evofosfamide) and TH-4000 (tarloxotinib) (Threshold Pharmaceuticals).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Resident DJ** Resident DJ: In DJ culture, a resident DJ or local DJ is a DJ who is part of a club's staff, unlike a guest artist, who works as a freelancer and therefore plays at several clubs (often in several countries). Obtaining a residency implies being part of the salaried staff of a company. Unlike a guest, the resident almost inevitably has to conform to certain musical styles dictated by the hiring company. In return, the resident's promotion rests with the club itself, which probably means greater investment in marketing than if he or she worked independently. Resident DJ: Generally, a resident tends to obtain less fame, recognition (and salary) than a guest, although there are notable exceptions; examples of successful residents are Sandrien from Trouw (Amsterdam), or Ben Klock and Marcel Dettmann from Berghain (Berlin). A residency is considered the best form of practical learning for a novice DJ: everything learned at home is now put into practice with an audience present, forcing the DJ to engage in a "conversation" with the audience. History: In the early ages of clubbing, when the first underground nightclubs were formed in the 70s and 80s, fixed hiring was the most common way of signing DJs, so they were all residents. The culture of electronic music and DJing emerged in the large industrialized cities of the Anglo-Saxon countries: the United Kingdom and the United States. Later, the DJ profession became popular and diversified, and the paradigm shifted to a form of freelance employment; "By the early 1990s the network of commercial raves and rave-style clubs of macropists had already created a closed circuit of guest DJs" who traveled all over the country, and the number of resident DJs was therefore reduced. History: At the end of the 90s, driven by a need for renewal in the scene, the figure of the resident DJ re-emerged, and it has remained that way to this day. However, the current role of the resident varies slightly from the traditional one, as a new concept has emerged: the 'guest resident', meaning several 'guest' residents who take turns regularly at a club for a while. One example was Paul Oakenfold, who got a temporary contract and moved to Liverpool for a few months in 1997 to play on Saturdays at Cream. History: By blurring the line that separates residency from invitation, greater work flexibility is allowed for both DJs and clubs. This is how it has been maintained throughout the 21st century. Roles of a resident DJ: Serve as support for guest DJs, on many occasions adapting to their musical style. Warm up the dance floor as the "opening act", preparing the public for the next set. Be available to the club; this implies an irregular, flexible schedule, as on some weekends the resident DJ will have to play at the beginning of the night, on others at rush hour and on others at closing. Roles of a resident DJ: Be responsible for the "musical identity" of the club; just as the graphic designer is in charge of the company's visual communication, the resident DJ(s) are usually responsible for the musical line of the club, and in part, therefore, for the image and message that is projected. In this sense, the reporter A. Arango writes for Vice: Residencies not only benefit DJs, but help clubs to forge their sound and give them an identity. That's why when we listen to "Resident of Amnesia Ibiza" or "Resident of Concrete Paris" completely different sounds come to mind.
Some residences become so legendary that they end up defining the future of the clubs. For example, it is impossible to talk about Paradise Garage without mentioning the role that Larry Levan played, or to talk about Fabric without mentioning the curatorial work of Craig Richards. Roles of a resident DJ: In a broader sense, local DJs are also somewhat responsible for the local music scene in their city, region or country. A more local approach to electronic music leads to the creation of new sounds and trends. Here's how M. Barnes describes it for DJ Broadcast: With a greater focus on touring, there is less chance for local influences to permeate the global electronic music culture. It seems that we are moving towards a homogenized and generic "sound" that avoids any cultural influences from specific localities. As a pillar of a club, the DJ can help cultivate local sounds, from subtle nuances of style to complete reinvention of genres such as Tuki music of Venezuela. Developed by local DJs and producers, who fused hard house and techno with local influences, it emerged as its own subgenre in the early 2000s. Iconic residents: Alfredo at Amnesia (Ibiza) Carl Cox at Space (Ibiza) Craig Richards at Fabric (London) Danny Tenaglia in Tunnel (New York) David Mancuso at The Loft (New York) Frankie Knuckles in Warehouse (Chicago) Harri & Domenic at Sub Club (Glasgow) Larry Levan at Paradise Garage (New York) Ron Hardy at Music Box (Chicago)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Friedberg numbering** Friedberg numbering: In computability theory, a Friedberg numbering is a numbering (enumeration) of the set of all uniformly recursively enumerable sets that has no repetitions: each recursively enumerable set appears exactly once in the enumeration (Vereščagin and Shen 2003:30). The existence of such numberings was established by Richard M. Friedberg in 1958 (Cutland 1980:78).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded