**Stabilizer (ship)** Stabilizer (ship): Ship stabilizers (or stabilisers) are fins or rotors mounted beneath the waterline and emerging laterally from the hull to reduce a ship's roll due to wind or waves. Active fins are controlled by a gyroscopic control system. When the gyroscope senses the ship roll, it changes the fins' angle of attack so that the forward motion of the ship exerts a force that counteracts the roll. Fixed fins and bilge keels do not move; they reduce roll through the hydrodynamic drag exerted when the ship rolls. Stabilizers are mostly used on ocean-going ships.

Function: Fins work by producing lift or downforce when the vessel is in motion, and the lift produced must work against the roll moment of the vessel. To accomplish this, two wings are installed underwater, one on either side of the ship. Stabilizers can be retractable or non-retractable: all medium and large cruise and ferry ships can retract the fins into a space inside the hull, avoiding extra fuel consumption and reducing the required hull clearance when the fins are not needed, while non-retractable fins are used on small vessels such as yachts. Stabilizer movement is similar to that of aircraft ailerons. Some types of fins, especially those installed on larger ships, are fitted with flaps that increase the fin lift by about 15%. Stabilizer control must account for numerous rapidly changing variables: wind, waves, ship motion, draft, etc. Fin stabilizers are much more efficient at higher velocities and lose effectiveness below a minimum ship speed (see the sketch below). Stabilization solutions at anchor or at low speed include actively controlled fins (such as the stabilisation-at-rest system developed by Rolls-Royce, whose fins oscillate to counteract wave motion) and rotary cylinders employing the Magnus effect. Both systems are retractable, allowing for a thinner vessel profile when docking and reducing drag while cruising.
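The speed dependence noted above follows from the standard hydrodynamic lift equation, L = ½ρv²AC_L. The sketch below is purely illustrative: the fin area, lift coefficient, and roll-axis lever arm are hypothetical values, not figures for any real installation.

```python
# Illustrative only: righting moment from a pair of active fins using the
# standard lift equation L = 1/2 * rho * v^2 * A * C_L. All parameter
# values below are hypothetical examples.
RHO_SEAWATER = 1025.0  # kg/m^3

def fin_righting_moment(speed, fin_area, lift_coeff, lever_arm, n_fins=2):
    """Roll-countering moment (N*m); the control system varies lift_coeff
    (via the commanded angle of attack) in response to the sensed roll."""
    lift_per_fin = 0.5 * RHO_SEAWATER * speed**2 * fin_area * lift_coeff
    return n_fins * lift_per_fin * lever_arm

# Authority scales with speed squared, which is why fin stabilizers lose
# effectiveness below a minimum ship speed:
for v in (2.0, 5.0, 10.0):  # m/s
    m = fin_righting_moment(v, fin_area=5.0, lift_coeff=1.0, lever_arm=8.0)
    print(f"{v:4.1f} m/s -> {m / 1e3:9.1f} kN*m")
```

Under the same relation, the roughly 15% lift gain from flaps mentioned above corresponds directly to a 15% larger effective lift coefficient.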
History: Leopold starts the stabilizer history with antiroll tanks installed on British warships at the end of the 19th century. Another early stabilization technology was the anti-rolling gyro, or gyroscopic stabilization. In 1915 a gyroscopic stabilizer was mounted on the US destroyer USS Worden (DD-16). The World War I transport USS Henderson, completed in 1917, was the first large ship with gyro stabilizers. It had two 25-ton, 9-foot-diameter flywheels mounted near the center of the ship, spun at 1100 RPM by 75 HP AC motors. The gyroscopes' cases were mounted on vertical bearings. When a small sensor gyroscope on the bridge sensed a roll, a servomotor would rotate the gyros about a vertical axis in a direction such that their precession would counteract the roll. In tests this system was able to reduce roll to 3 degrees in the roughest seas. For about 20 years the effectiveness of the stabilizers was unclear (in part due to improved gunfire directors), and in the US Navy the feature remained experimental (a gyrostabilizer on USS Osborne (DD-295), an active-tank stabilizer on USS Hamilton (DMS-18)) until the 1950s. One of the most famous early ships to use an anti-rolling gyro was the Italian passenger liner SS Conte di Savoia, which first sailed in November 1932. It had three flywheels, each 13 feet in diameter and weighing 108 tons. Gyroscope stabilization was replaced by fin stabilization due to the latter's lower weight and bulk, but it has seen renewed interest since the 1990s (Seakeeper, etc.).

The fin stabilizer was patented by Motora Shintaro of Japan in 1922. The first use of fin stabilizers on a ship was by a Japanese cruise liner in 1933. From the late 1930s the British actively installed Denny-Brown fin stabilizers on their warships (over 100 installations by 1950). The US Navy continued unsuccessful experiments with roll tanks until the successful fin stabilizer installations on USS Gyatt (1956) and USS Bronstein (DE-1037) (1958).

In 1934 a Dutch liner introduced one of the world's most unusual ship stabilizer systems, in which two large tubes were mounted on each side of the ship's hull, with the bottoms of the tubes open to the sea and compressed air or steam pumped into the tops. As the ship rolled, the tube on the side it was rolling toward would fill with water, and compressed air or steam would then be injected to push the water down, countering the roll.

In 2018, rocket and space technology company Blue Origin purchased the Stena Freighter, a roll-on/roll-off cargo ship, for use as a landing platform ship for its New Glenn launch vehicle booster stages. As of late 2018, the ship was undergoing refit to prepare for its role of landing rockets. The rocket boosters will be recovered downrange of the launch site in the Atlantic Ocean while the hydrodynamically stabilized ship is underway. The ship stabilization technology is designed to increase the likelihood of successful rocket recovery in rough seas, as well as to help carry out launches on schedule.

Sources: Kerchove, René de, baron (1961). "Denny-Brown Stabilizer". International Maritime Dictionary: An Encyclopedic Dictionary of Useful Maritime Terms and Phrases, Together with Equivalents in French and German (2nd ed.). Van Nostrand Reinhold. p. 213. ISBN 978-0-442-02062-0. OCLC 1039382382. Leopold, Reuven (December 1977). "Innovation adoption in naval ship design". Naval Engineers Journal. 89 (6): 35–42. doi:10.1111/j.1559-3584.1977.tb05537.x. eISSN 1559-3584. hdl:1721.1/16291. ISSN 0028-1425.
**Descinolone acetonide** Descinolone acetonide: Descinolone acetonide (developmental code name CL-27071), also known as desoxytriamcinolone acetonide, is a synthetic glucocorticoid corticosteroid which was never marketed.
**Vasopressin receptor 2** Vasopressin receptor 2: Vasopressin receptor 2 (V2R), or arginine vasopressin receptor 2 (officially called AVPR2), is a protein that acts as a receptor for vasopressin. AVPR2 belongs to the subfamily of G-protein-coupled receptors. Its activity is mediated by the Gs type of G proteins, which stimulate adenylate cyclase.

Vasopressin receptor 2: AVPR2 is expressed in the kidney tubule, predominantly in the membrane of cells of the distal convoluted tubule and collecting ducts, in fetal lung tissue, and in lung cancer, the last two being associated with alternative splicing. AVPR2 is also expressed outside the kidney, in vascular endothelium, where stimulation causes the release of von Willebrand factor and factor VIII from the endothelial cells. Because von Willebrand factor helps stabilize circulating levels of factor VIII, the vasopressin analog desmopressin can be used to stimulate the AVPR2 receptor and increase levels of circulating factor VIII. This is useful in the treatment of hemophilia A as well as von Willebrand disease.

Vasopressin receptor 2: In the kidney, AVPR2's primary role is to respond to arginine vasopressin by stimulating mechanisms that concentrate the urine and maintain water homeostasis in the organism. When the function of AVPR2 is lost, the disease nephrogenic diabetes insipidus (NDI) results.

Antagonists: Vasopressin receptor antagonists that are selective for the V2 receptor include tolvaptan (FDA-approved), lixivaptan, mozavaptan, and satavaptan. Their main uses are in hyponatremia, such as that caused by the syndrome of inappropriate antidiuretic hormone (SIADH) and heart failure; however, these agents should be avoided in patients with cirrhosis. Demeclocycline and lithium carbonate act as indirect antagonists of renal vasopressin V2 receptors by inhibiting activation of the second-messenger cascade of the receptors.

Pharmacoperones: Vasopressin receptor 2 function has been shown to be deleteriously affected by point mutations in its gene. Some of these mutations, when expressed, cause the receptor to remain in the cytosol. An approach to rescuing receptor function utilizes pharmacoperones, or molecular chaperones, which are typically small molecules that rescue misfolded proteins to the cell surface. These interact with the receptor to restore cognate receptor function while being devoid of antagonist or agonist activity themselves. This approach, when effective, should increase therapeutic reach. Pharmacoperones have been identified that restore function of V2R.

Interactions: Arginine vasopressin receptor 2 has been shown to interact with C1QTNF1.
**Tram-train** Tram-train: A tram-train is a type of light rail vehicle that meets the standards of a light rail system (usually an urban street-running tramway) but also meets national mainline standards, permitting operation alongside mainline trains. This allows services to utilise both existing urban light rail systems and mainline railway networks and stations. It combines the urban accessibility of a tram or light rail with a mainline train's greater speed in the suburbs.

The modern tram-train concept was pioneered by the German city of Karlsruhe in the late 1980s, resulting in the creation of the Karlsruhe Stadtbahn. This concept is often referred to as the Karlsruhe model, and it has since been adopted in other cities such as Mulhouse in France and Kassel, Nordhausen and Saarbrücken in Germany. An inversion of the concept is the train-tram: a mainline train adapted to run on-street in an urban tramway, also known as the Zwickau model.

Technology: The tram-train is often a type of interurban, that is, a line linking separate towns or cities, according to George W. Hilton and John F. Due's definition. Most tram-trains are standard gauge, which facilitates sharing track with main-line trains. Exceptions include the Alicante Tram and Nordhausen, which are metre gauge.

Technology: Tram-train vehicles are dual-equipped to suit the needs of both tram and train operating modes, with support for multiple electrification voltages if required and safety equipment such as train stops and other railway signalling equipment. The Karlsruhe and Saarbrücken systems use "PZB" or "Indusi" automatic train protection, so that if the driver passes a signal at stop the emergency brakes are applied.

History: The idea is not new; in the early 20th century, interurban streetcar lines often operated on dedicated rights-of-way between towns while running on street trackage in town. The first interurban to emerge in the United States was the Newark and Granville Street Railway in Ohio, which opened in 1889. In 1924, sharing of tracks between trams and trains was proposed in Hobart, Australia. The difference between modern tram-trains and the older interurban and radial railways is that tram-trains are built to meet mainline railway standards rather than ignoring them. An exception is the United States' River Line in New Jersey, which runs along freight tracks with time separation; passenger trains run by day, and freight by night.
Existing systems:
Asia
- Japan – Fukui: Fukui Fukubu Line
Europe
- Austria – Gmunden: Traunsee Tram (2018); Vienna: Badner Bahn
- Denmark – Aarhus Letbane
- France – Lyon: Rhônexpress (2010); Lyon: Tram-train de l'ouest lyonnais; Mulhouse: Mulhouse tramway; Nantes: Nantes tram-train (2011); Île-de-France (Paris region): Tramway Line 4 (2006), Tramway Line 11 Express (2017), Tramway Line 13 Express (2022)
- Germany – Chemnitz: Chemnitz Tramway; Karlsruhe: Stadtbahn Karlsruhe; Kassel: Kassel RegioTram (2006); Nordhausen: Trams in Nordhausen; Saarbrücken: Saarbahn
- Hungary – Szeged–Hódmezővásárhely tram-train (2021)
- Italy – Sassari: Metrosassari
- Netherlands – The Hague–Rotterdam: RandstadRail
- Portugal – Porto: Porto Metro Line B/Bx (2005), Porto Metro Line C (2005)
- Spain – Alicante: Alicante Tram (2007); Mallorca: Mallorca rail network; Cádiz: Cádiz Bay tram-train (2022)
- United Kingdom – Sheffield–Rotherham: Sheffield Supertram (2018); Cardiff: South Wales Metro (2023)
North America
- Mexico – Puebla: Puebla–Cholula Tourist Train
- United States – San Diego, California: San Diego Trolley

Proposed systems:
Africa
- The October 6th Tram system (The O6T), Cairo, Egypt
Asia
- Haifa–Nazareth, Israel
- Keelung Light Rail Transit (Nangang–Keelung), Taiwan
Europe
- Braunschweig, Germany
- Bratislava, Slovakia
- Cardiff, United Kingdom – Wales & Borders franchise: South Wales Valley Lines (2022–2023); rolling stock currently under construction
- Erlangen, Germany – an extension of Straßenbahn Nürnberg, not initially planned to use mainline rail tracks but proposed to do so in the future; the planned line to Herzogenaurach replicates a former mainline rail line
- Greater Manchester, United Kingdom – proposed extensions to the Manchester Metrolink network
- Grenoble, France
- Groningen, Netherlands
- Île-de-France (Paris region), France – Tramway Line 12 Express (under construction as of July 2023)
- Kiel, Germany
- Kyiv, Ukraine
- Košice, Slovakia (in planning phase)
- León, Spain
- Liberec–Jablonec nad Nisou, Czech Republic
- Linköping, Sweden
- Manresa, Spain
- Metro Mondego, Coimbra, Portugal
- RijnGouweLijn, Netherlands
- Metro de Sevilla, Spain – Seville has one metro line and one tram line that are not connected, but the long-term intention is to link the metro and tram systems
- Sevastopol
- Strasbourg, France
- Szeged, Hungary – two other destinations are being considered as of January 2022 besides the Szeged–Hódmezővásárhely line, which entered operation in November 2021; the Szeged–Subotica (Serbia) line is in an early planning phase, and a preparatory study was also completed for the Szeged–Makó line, but the estimated costs were high and it depends on a new road–rail bridge over the river Tisa that is itself still only planned
- TramCamp, Camp de Tarragona, Catalonia, Spain
- Wrocław, Poland (2005) – 600 V DC / 3 kV DC
- Riga, Latvia
- Tampere, Finland
- Turku, Finland
- West Midlands conurbation, United Kingdom – proposed extensions to the West Midlands Metro
Oceania
- Adelaide, South Australia – on 5 June 2008, the Government of South Australia announced plans for train-tram operation on the Adelaide Metro's Outer Harbor/Grange train lines and City West–Glenelg tramline extension as part of a 10-year A$2 billion public transport upgrade
South America
- Bogotá Commuter Rail (RegioTram), Colombia
- Cali, Colombia

Vehicles: Models of tram designed for tram-train operation include:
- Alstom RegioCitadis and Citadis Dualis, derived from the Citadis
- Bombardier Flexity Link and Bombardier Flexity Swift
- Siemens S70
- Stadler Citylink

Train-tram:
Europe
- Zwickau: Trams in Zwickau – light-weight RegioSprinter diesel railbuses with on-board diesel generators that also operate on the street tramway
North America
- Austin, Texas: Capital MetroRail – a commuter rail service that has much in common with train-tram operation, with downtown street running and usage of mainline track; uses diesel multiple units
- New Jersey: River Line – diesel multiple units using main line tracks between Trenton, New Jersey and Camden, New Jersey under a time-sharing agreement with the freight companies
**Simian foamy virus** Simian foamy virus: Simian foamy virus (SFV) is a species of the genus Spumavirus that belongs to the family Retroviridae. It has been identified in a wide variety of primates, including prosimians, New World and Old World monkeys, and apes, and each species has been shown to harbor a unique (species-specific) strain of SFV, including African green monkeys, baboons, macaques, and chimpanzees. As it is related to the better-known retrovirus human immunodeficiency virus (HIV), its discovery in primates has led to some speculation that HIV may have spread to the human species in Africa through contact with blood from apes, monkeys, and other primates, most likely through bushmeat-hunting practices.

Description: Although the simian foamy virus is endemic in African apes and monkeys, infection rates in captivity are extremely high, ranging from 70% to 100% in adult animals. As humans can be in close proximity to infected individuals, people who have had contact with primates can become infected with SFV, making SFV a zoonotic virus. Its ability to cross over to humans was proven in 2004 by a joint United States and Cameroonian team which found the retrovirus in gorillas, mandrills, and guenons; unexpectedly, they also found it in 10 of 1,100 local Cameroon residents. Of those found infected, the majority were males who had been bitten by a primate. While this accounts for only 1% of those sampled, the finding alarms some who fear the outbreak of another zoonotic epidemic. SFV causes cells to fuse with each other to form syncytia, whereby the cell becomes multinucleated and many vacuoles form, giving it a "foamy" appearance.

Structure: The SFV is a spherical, enveloped virus that ranges from 80 to 100 nm in diameter. The cellular receptors have not been characterized, but it is hypothesized that the receptor has a molecular structure with near-ubiquitous prevalence, since a wide range of cells are permissive to infection. As a retrovirus, SFV possesses the following structural characteristics:

Envelope: Composed of phospholipids taken from a lipid bilayer, in this case the endoplasmic reticulum; additional glycoproteins are synthesized from the env gene. The envelope protects the interior of the virus from the environment and enables entry by fusing to the membrane of the permissive cell.

Structure: RNA: The genetic material that carries the code for protein production to create additional viral particles. Proteins: consisting of gag proteins, protease (PR), pol proteins, and env proteins. Group-specific antigen (gag) proteins are major components of the viral capsid. Protease performs proteolytic cleavages during virion maturation to produce mature gag and pol proteins. Pol proteins are responsible for synthesis of viral DNA and its integration into host DNA after infection.

Structure: Env proteins are required for the entry of virions into the host cell. The ability of the retrovirus to bind to its target host cell using specific cell-surface receptors is given by the surface component (SU) of the Env protein, while the ability of the retrovirus to enter the cell via membrane fusion is imparted by the membrane-anchored transmembrane component (TM). Lack of, or imperfections in, Env proteins render the virus non-infectious.

Genome: As a retrovirus, the genomic material is monopartite, linear, positive-sense, single-stranded RNA that forms a double-stranded DNA intermediate through the use of the enzyme reverse transcriptase.
The RNA strand is approximately 12 kb in length, with a 5' cap and a 3' poly(A) tail. The first full genome annotation of a proviral SFV isolated from a cynomolgus macaque (Macaca fascicularis) was performed in December 2016, revealing two regulatory sequences, tas and bet, in addition to the structural sequences gag, pol, and env. There are two long terminal repeats (LTRs), each about 600 nucleotides long, at the 5' and 3' ends that function as promoters, with an additional internal promoter (IP) located near the 3' end of env. The LTRs contain the U3, R, and U5 regions that are characteristic of retroviruses. There is also a primer binding site (PBS) at the 5' end and a polypurine tract (PPT) at the 3' end. Whereas gag, pol, and env are conserved throughout retroviruses, the tas gene is unique and found only in Spumaviridae. It encodes a trans-activator protein required for transcription from both the LTR promoter and the IP. The Tas protein, initially known as Bel-1, is a 36-kDa phosphoprotein which contains an acidic transcription activation domain at its C-terminus and a centrally located DNA-binding domain. The Bet protein is required for viral replication, as it counteracts the innate antiretroviral activity of APOBEC3-family defense factors by obstructing their incorporation into virions.

Replication cycle: Entry into cell The virus attaches to host receptors through the SU glycoprotein, and the TM glycoprotein mediates fusion with the cell membrane. The entry receptor that triggers viral entry has not been identified, but in one study the absence of heparan sulfate resulted in a decrease of infection, suggesting it is an attachment factor that assists in mediating the entry of the viral particle. It is not clear whether the fusion is pH-dependent or independent, although some evidence indicates that SFV does enter cells through a pH-dependent step. Once the virus has entered the interior of the cell, the retroviral core undergoes structural transformations through the activity of viral proteases. Studies have revealed three internal protease-dependent cleavage sites that are critical for the virus to be infectious. A mutation within the gag gene that altered the structure of the first cleavage site prevented subsequent cleavage at the two other sites by the viral PR, reflecting its prominent role. Once disassembled, the genetic material and enzymes are free within the cytoplasm to continue the viral replication. Whereas most retroviruses deposit ssRNA(+) into the cell, SFV and related species differ in that up to 20% of released viral particles already contain dsDNA genomes. This is due to a unique feature of spumaviruses in which the onset of reverse transcription of genomic RNA occurs before release, rather than after entry into the new host cell as in other retroviruses.

Replication cycle: Replication and transcription As both ssRNA(+) and dsDNA enter the cell, the remaining ssRNA is copied into dsDNA by reverse transcriptase. After nuclear entry, the viral dsDNA is covalently integrated into the cell's genome by the viral integrase, forming a provirus. The integrated provirus utilizes the promoter elements in the 5' LTR to drive transcription. This gives rise to the unspliced full-length mRNA that will serve as genomic RNA to be packaged into virions or be used as a template for translation of gag.
The spliced mRNAs encode the pol (PR, RT, RNase H, IN) and env (SU, TM) products that will later be used to assemble the viral particles. The Tas trans-activator protein augments transcription directed by the LTR through cis-acting targets in the U3 domain of the LTR. The presence of this protein is crucial, as in the absence of Tas, LTR-mediated transcription cannot be detected. Foamy viruses utilize multiple promoters, a mechanism not observed in any other retroviruses. The IP is required for viral infectivity in tissue culture, as this promoter has a higher basal transcription level than the LTR promoter, and its use leads to transcripts encoding Tas and Bet. Once levels of Tas accumulate, transcription shifts to the LTR promoter, which binds Tas with lower affinity than the IP and leads to accumulation of gag, pol, and env transcripts.

Replication cycle: Assembly and release The SFV capsid is assembled in the cytoplasm through multimerization of Gag molecules, but unlike other related viruses, SFV Gag lacks an N-terminal myristylation signal and capsids are not targeted to the plasma membrane (PM). They require expression of the envelope protein for budding of intracellular capsids from the cell, suggesting a specific interaction between the Gag and Env proteins. Evidence for this interaction was discovered in 2001, when a deliberate mutation of a conserved arginine (Arg) residue at position 50 to alanine in SFVcpz inhibited proper capsid assembly and abolished viral budding even in the presence of the envelope glycoproteins. Analysis of the glycoproteins on the envelope of the viral particle indicates that they localize to the endoplasmic reticulum (ER) and that, once the particle buds from the organelle, the maturation process is finalized and it can leave to infect additional cells. A dilysine motif (a dipeptide of two lysine residues) was identified as the signal mediating this localization of viral particles to the ER.

Modulation and interaction of host cell: There is little data on how SFV interacts with the host cell as the infection takes its course. The most obvious observable effect is the formation of syncytia, resulting in multinucleated cells. While the details of how SFV induces this change are not known, the related HIV causes similar effects among CD4+ T cells. As the cell transcribes the integrated proviral genome, glycoproteins are produced and displayed at the surface of the cell. If enough proteins are at the surface with other CD4+ T cells nearby, the glycoproteins will attach and cause the fusion of several cells. Foamy degeneration, or vacuolization, is another observable change within the cells, but it is unknown how SFV causes the formation of numerous cytoplasmic vacuoles. This is another characteristic of retroviruses, but there are no studies or explanations of why this occurs.

Transmission and pathogenicity: The transmission of SFV is believed to occur through saliva, because large quantities of viral RNA, indicative of SFV gene expression and replication, are present in cells of the oral mucosa. Behaviors from aggressive ones, such as bites, to nurturing ones, such as a mother licking an infant, can all spread the virus. Studies of natural transmission suggest that infants of infected mothers are resistant to infection, presumably because of passive immunity from maternal antibodies, but infection becomes detectable by three years of age.
Little else is known about the prevalence and transmission patterns of SFV in wild-living primate populations. The first case of a spumavirus being isolated from a primate was in 1955 (Rustigan et al., 1955), from the kidneys. What is curious about the cytopathology of SFV is that while it results in rapid cell death in vitro, it loses its highly cytopathic nature in vivo. With little evidence to suggest that SFV infection causes illness, some scientists believe that it has a commensal relationship with simians. In one study examining the effects of a second virus (SIVmac239) on rhesus macaques that were previously infected with SFV, the experiment provided evidence that prior infection can increase the risk of viral loads reaching unsustainable levels, killing CD4+ T cells and ultimately causing the death of the doubly infected subjects. SFV/SIV models have since been proposed to replicate the relationship between SFV and HIV in humans, a potential health concern for officials.

Tropism: SFV can infect a wide range of cells; in vitro experiments confirm that fibroblasts, epithelial cells, and neural cells all show the extensive cytopathology characteristic of foamy virus infection. The cytopathic effects in B lymphoid cells and macrophages were reduced, with reverse transcriptase values lower than in fibroblasts and epithelial cells. Cells that expressed no signs of cytopathy from SFV were the Jurkat and Hut-78 T-cell lines.

Cospeciation of SFV and primates: Phylogenetic tree analysis of SFV polymerase and mitochondrial cytochrome oxidase subunit II (COII, a well-established marker for primate phylogeny) from African and Asian monkeys and apes yields very similar branching orders and divergence times in the two trees, supporting cospeciation. Also, the substitution rate in the SFV gene was found to be extremely slow, i.e., SFV has evolved at a very low rate (1.7×10⁻⁸ substitutions per site per year). These results suggest SFV has cospeciated with Old World primates for about 30 million years, making them the oldest known vertebrate RNA viruses. SFV sequence examination of species and subspecies within each clade of the primate phylogenetic tree indicated cospeciation of SFV and the primate hosts as well. A strong linear relationship was found between the branch lengths of the host and SFV gene trees, indicating synchronous genetic divergence in both data sets. Using the molecular clock, it was observed that the substitution rates for the host and SFV genes were very similar: the rates for the host COII gene and the SFV gene were found to be (1.16±0.35)×10⁻⁸ and (1.7±0.45)×10⁻⁸ substitutions per site per year, respectively. This is the slowest rate of substitution observed for RNA viruses and is closer to that of DNA viruses and endogenous retroviruses. It is quite different from the rates of exogenous RNA viruses such as HIV and influenza A virus (10⁻³ to 10⁻⁴ substitutions per site per year).
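As a quick sanity check on the cospeciation timescale, one can multiply the quoted substitution rates by the roughly 30-million-year history; the back-of-the-envelope sketch below uses only numbers from the text above.

```python
# Back-of-the-envelope check of the cospeciation claim, using only the
# substitution rates quoted above.
RATE_HOST_COII = 1.16e-8  # substitutions/site/year, host mitochondrial COII
RATE_SFV = 1.7e-8         # substitutions/site/year, SFV
YEARS = 30e6              # ~30 million years of cospeciation with Old World primates

for name, rate in (("host COII", RATE_HOST_COII), ("SFV", RATE_SFV)):
    print(f"{name:>9}: ~{rate * YEARS:.2f} expected substitutions per site")

# Both genes accumulate divergence on the same ~0.3-0.5 substitutions-per-site
# scale, consistent with the synchronous divergence described above. A fast
# virus like HIV or influenza A (~1e-3 to 1e-4 subs/site/year) would
# accumulate that much divergence in only thousands of years.
```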
Prevalence: Researchers in Cameroon, the Democratic Republic of the Congo, France, Gabon, Germany, Japan, Rwanda, the United Kingdom, and the United States have found that simian foamy virus is widespread among wild chimpanzees throughout equatorial Africa. Humans exposed to wild primates, including chimpanzees, can acquire SFV infections. Since the long-term consequences of these cross-species infections are not known, it is important to determine to what extent wild primates are infected with simian foamy viruses. One study addressed this question for wild chimpanzees using novel noninvasive methods: analyzing over 700 fecal samples from 25 chimpanzee communities across sub-Saharan Africa, the researchers obtained viral sequences from a large proportion of these communities, showing infection rates ranging from 44% to 100%. Major disease outbreaks have originated from cross-species transmission of infectious agents between primates and humans, making it important to learn more about how these cross-species transfers occur. The high SFV infection rates of chimpanzees provide an opportunity to monitor where humans are exposed to these viruses. Identifying these locations may help determine where the highest rates of human–chimpanzee interaction occur, and may help predict which pathogens could jump the species barrier next.
**Teeterboard** Teeterboard: The teeterboard or Korean plank is an acrobatic apparatus that resembles a playground seesaw. The strongest teeterboards are made of oak (usually 9 feet in length). The board is divided in the middle by a fulcrum made of welded steel. At each end of the board is a square padded area, where a performer stands on an incline before being catapulted into the air. The well-trained flyer performs various aerial somersaults, landing on padded mats, a human pyramid, a specialized landing chair, stilts, or even a Russian bar.

Teeterboard: The teeterboard is operated by a team of flyers, catchers, spotters, and pushers; some members of the team perform more than one acrobatic role. In the early 1960s the finest teeterboard acts, trained in the Eastern Bloc countries, performed with the Ringling Brothers and Barnum & Bailey Circus.

Teeterboard: The Korean-style teeterboard, called Neolttwigi, is a form of teeterboard on which two performers jump vertically in place, landing back on the apparatus instead of dismounting onto a landing mat or human pyramid. Korean plank acts are featured in the Cirque du Soleil shows Nouvelle Expérience, Mystère, Dralion, Corteo, Koozå and Amaluna. The Flying Fruit Fly Circus, based in Albury, NSW, Australia, uses custom teeterboards (handmade in-house) in numerous national and international shows. The Hungarian board (bascule hongroise) has a higher fulcrum, and the pushers jump from a height (e.g., from a tower).
**Sołtan argument** Sołtan argument: The Sołtan argument is an astrophysical theory outlined in 1982 by Polish astronomer Andrzej Sołtan. It maintains that if quasars were powered by accretion onto a supermassive black hole, then such supermassive black holes must exist in our local universe as "dead" quasars.

History: As early as 1969, Donald Lynden-Bell wrote a paper suggesting that "dead quasars" were found at the center of the Milky Way and nearby galaxies by arguing that, given the quasar number counts, luminosities, distances, and the efficiency of accretion into a "Schwarzschild throat" through the last stable circular orbit (note that the term black hole had been coined only two years earlier and was still gaining popular usage), roughly 10¹⁰ quasars existed in the observable universe. This number density of "dead quasars" was attributed by Lynden-Bell to high mass-to-light ratio objects found at the centers of galaxies. This is essentially the Sołtan argument, though the direct connection between black hole masses and quasar luminosity functions is missing. In the paper, Lynden-Bell also suggests some radical ideas that are now fully integrated into the modern understanding of astrophysics, including the model that accretion disks are supported by magnetic fields and that extragalactic cosmic rays are accelerated in them, and he estimates to within an order of magnitude the masses of several of the closest supermassive black holes, including the ones in the Milky Way, M31, M32, M81, M82, M87, and NGC 4151.

Thirteen years later, Sołtan explicitly showed that the luminosity L of quasars was due to the accretion rate of mass onto black holes, given by

L = εṀc²

where ε is the efficiency factor, Ṁ is the rate at which mass falls into the black hole, and c is the speed of light. Given the number of observed quasars at various redshifts, he was able to derive an integrated energy density due to quasar output. Since observers on Earth are flux-limited, there are always more quasars in existence than are observed, and thus the energy density he derived is a lower bound. He obtained a value of approximately 10⁻¹⁰ ergs per cubic meter.

Sołtan then calculated the mass density of accreted material, as it is directly related to the energy density of quasar light. He derived a value of approximately 10¹⁴ solar masses per cubic gigaparsec. This mass would be discretely distributed (since quasars are single point sources); given an average mass of approximately ten million solar masses, it would be statistically likely for a "dead quasar" to be within a few megaparsecs of Earth. At this time, evidence was already accumulating that supermassive black holes were found at the centers of large galaxies, which are distributed approximately on the order of a megaparsec apart from each other. This argument therefore made a reasonable case that supermassive black holes were at one time ultraluminous quasars.

History: The first quantitative estimates of the mass density in supermassive black holes were 5–10 times higher than Sołtan's estimate. This discrepancy was resolved in 2000 via the discovery of the M–sigma relation, which showed that most of the previously published black hole masses were in error.

Present constraints: As of 2008, the best constraint from the Sołtan argument on the supermassive black hole mass density in the local universe is between 2×10⁵ and 5×10⁵ solar masses per cubic megaparsec. This value is consistent with observations of the masses of local supermassive black holes.
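The spacing claim follows directly from the two numbers above. A minimal sketch, assuming the quoted mass density (10¹⁴ solar masses per cubic gigaparsec) and the ten-million-solar-mass average; the variable names are mine:

```python
# Sketch of the spacing estimate implied by the figures above (all values
# are the ones quoted in the text, not new data).
MASS_DENSITY = 1e14      # accreted mass, solar masses per cubic gigaparsec
AVG_BH_MASS = 1e7        # assumed average "dead quasar" mass, solar masses
MPC3_PER_GPC3 = 1e9      # (1 Gpc)^3 = 1e9 Mpc^3

number_density = MASS_DENSITY / AVG_BH_MASS / MPC3_PER_GPC3  # per Mpc^3
mean_spacing = (1.0 / number_density) ** (1.0 / 3.0)         # Mpc

print(f"~{number_density:.0e} dead quasars per Mpc^3")  # ~1e-02
print(f"mean spacing ~{mean_spacing:.1f} Mpc")           # ~4.6 Mpc
```

The ~4.6 Mpc mean spacing reproduces the "within a few megaparsecs of Earth" estimate and matches the roughly megaparsec-scale spacing of large galaxies noted above.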
**DCL2** DCL2: DCL2 (an abbreviation of Dicer-like 2) is a gene in plants that codes for the DCL2 protein, a ribonuclease III enzyme involved in processing exogenous double-stranded RNA (dsRNA) into 22-nucleotide small interfering RNAs (siRNAs). Diverse sources of dsRNA have been characterized, broadly classified as exogenous or endogenous. A classical example of exogenously derived dsRNAs are the viral genomes released during infection, especially those of double-stranded RNA viruses, where the cleavage of dsRNA produces small RNA products called viral siRNAs or vsiRNAs. Another example of an exogenous source of dsRNA is transgenes with several insertion loci in the plant host genome. DCL2 also processes endogenous sources, such as double-stranded RNAs derived from cis-natural antisense transcripts, generating 22-nt short interfering RNAs (natsiRNAs); however, the biological relevance, evolutionary conservation, and experimental validation of natsiRNAs remain controversial.

Function: Dicer proteins belong to the RNase III-like family, a family of endonucleases highly conserved in eukaryotes, with prokaryotic representatives. In Arabidopsis and most land plants, there are four main Dicer-like (DCL) proteins: DCL1, DCL2, DCL3, and DCL4. They all contain the same domains, in order from N-terminus to C-terminus: a DExD-helicase domain, a helicase-C domain, a domain of unknown function 283 (DUF283), a Piwi/Argonaute/Zwille (PAZ) domain, two tandem RNase III domains, and one or two dsRNA-binding domains (dsRBDs). In general, the helicase domain of Dicer-like proteins uses ATP hydrolysis to facilitate the unwinding of dsRNA. The DUF283 domain has recently been implicated in facilitating RNA–RNA base pairing and RNA binding. The PAZ and RNase III domains are essential for dsRNA cleavage: the PAZ domain recognizes the dsRNA ends, and each RNase III domain cuts one of the two strands of the dsRNA. DCL2 plays an essential role in the transitive silencing of transgenes by processing secondary siRNAs, including trans-acting siRNAs. To do so, it requires DCL4 and RDR6, which amplify the silencing by using the mRNA target of DCL2-generated 22-nt siRNAs as a substrate to generate secondary siRNAs, providing an efficient mechanism for long-distance silencing in a phenomenon called transitivity of RNA silencing. DCL2 may also participate, along with DCL3, in the production of 24-nucleotide repeat-associated siRNAs (ra-siRNAs) derived from heterochromatic regions, genomic regions silenced by the presence of repetitive DNA elements such as transposons.

Transitive and systemic RNA silencing: A key difference between DCL1 and the DCL2, DCL3, and DCL4 proteins is the amplification capacity of the pathways specific to the latter proteins. The involvement of RDR proteins extends the small RNA–target complex beyond the original trigger site. The subset of siRNAs used in signal amplification are called transitive or secondary siRNAs, and the process of amplification is called transitivity. The amplification propagates the secondary siRNAs and their target-specific silencing activity from one tissue to another, eventually reaching all of the plant's tissues, in a process called systemic silencing.
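As a toy illustration of the fixed 22-nt product size described above (and nothing more: real DCL2 cleavage is determined by the PAZ and RNase III domain geometry, not by string slicing), a sketch of chopping a dsRNA sequence into 22-nt pieces might look like:

```python
# Toy illustration only (not a bioinformatics tool): DCL2-style sizing of a
# long dsRNA into consecutive 22-nt siRNA-like fragments.
def dice(dsrna: str, size: int = 22) -> list[str]:
    """Cut a dsRNA sequence into consecutive `size`-nt pieces
    (any trailing remainder shorter than `size` is discarded)."""
    return [dsrna[i:i + size] for i in range(0, len(dsrna) - size + 1, size)]

viral_dsrna = "AUGGCUAAUCCGUAGCUAGGCAUCGAUUCGGAUCCGAUGCUAGCUAAGGCUUACG"
for fragment in dice(viral_dsrna):
    print(len(fragment), fragment)  # each product is exactly 22 nt
```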
**Medial knee injuries** Medial knee injuries: Medial knee injuries (those to the inside of the knee) are the most common type of knee injury. The medial ligament complex of the knee consists of the superficial medial collateral ligament (sMCL), also called the medial collateral ligament (MCL) or tibial collateral ligament; the deep medial collateral ligament (dMCL), or mid-third medial capsular ligament; and the posterior oblique ligament (POL), or oblique fibers of the sMCL. This complex is the major stabilizer of the medial knee. Injuries to the medial side of the knee are most commonly isolated to these ligaments. A thorough understanding of the anatomy and function of the medial knee structures, along with a detailed history and physical exam, is imperative to diagnosing and treating these injuries.

Symptoms: Patients often complain of pain and swelling over the medial aspect of the knee joint. They may also report instability with side-to-side movement and during athletic performance that involves cutting or pivoting.

Symptoms: Complications Jacobson previously described the common complications of medial knee surgery, stressing that adequate diagnosis is imperative and that all possible injuries should be evaluated and addressed intraoperatively. Damage to the saphenous nerve and its infrapatellar branch is possible during medial knee surgery, potentially causing numbness or pain over the medial knee and leg. As with all surgeries, there is a risk of bleeding, wound problems, deep vein thrombosis, and infection that can complicate the outcome and rehabilitation process. The long-term complications of arthrofibrosis and heterotopic ossification (Pellegrini-Stieda syndrome) are best addressed with early range of motion and by following defined rehabilitation protocols. Failure of the graft due to intrinsic mechanical forces should be prevented with preoperative alignment assessment (osteotomy treatment) and proper rehabilitation.

Causes: Medial knee injury is usually caused by a valgus knee force, a tibial external rotation force, or a combination thereof. This mechanism is often seen in sports that involve aggressive knee flexion, such as ice hockey, skiing, and football.

Anatomy and function: Structures on the medial side of the knee include the tibia, femur, vastus medialis obliquus muscle, semitendinosus tendon, gracilis tendon, sartorius tendon, adductor magnus tendon, medial head of the gastrocnemius muscle, semimembranosus tendon, medial meniscus, medial patellofemoral ligament (MPFL), sMCL, dMCL, and POL. The most important stabilizing structures in this area of the knee have been found to be three ligaments: the sMCL, dMCL, and POL.

Anatomy and function: Bones The bones of the knee are the femur, patella, tibia, and fibula. The fibula is on the lateral side of the knee, and the patella has little effect on the medial side of the knee. The bony congruity of the medial knee consists of the opposing surfaces of the medial femoral condyle and the medial tibial plateau. On the medial femoral condyle there are three important bony landmarks: the medial epicondyle, the adductor tubercle, and the gastrocnemius tubercle. The medial epicondyle is the most distal and anterior prominence. The adductor tubercle is just proximal and posterior to the medial epicondyle. The gastrocnemius tubercle is just distal and posterior to the adductor tubercle.

Anatomy and function: Ligaments and biomechanical function The sMCL connects the femur to the tibia.
It originates just proximal and posterior to the medial epicondyle (not directly on the epicondyle) and splits into two distinct sections. One tibial section attaches to soft tissue, 1 cm distal to the joint line. The other tibial section attaches directly to the tibia, anterior to the posteromedial tibial crest, 6 cm distal to the joint line. This distal attachment is the stronger of the two and makes up the floor of the pes anserine bursa. The proximal tibial attachment of the sMCL is the primary stabilizer against valgus force on the knee, whereas the distal tibial attachment is the primary stabilizer of external rotation at 30° of knee flexion. The dMCL is a thickening of the medial aspect of the capsule surrounding the knee. It originates on the femur 1 cm distal to the sMCL origin and inserts 3–4 mm distal to the joint line. It runs parallel to and underneath the sMCL. The dMCL connects directly to the medial meniscus and therefore can be divided into meniscofemoral and meniscotibial ligament components.

Anatomy and function: The meniscofemoral ligament is longer than the meniscotibial ligament, which is shorter and thicker. The meniscofemoral ligament is a primary internal rotation stabilizer and a secondary external rotation stabilizer, activated when the sMCL fails. The meniscotibial ligament acts as a secondary stabilizer of internal rotation.

Anatomy and function: The POL (called the oblique portion of the sMCL in older texts) is a fascial expansion with three main components: superficial, central (tibial), and capsular. The central arm is the strongest and thickest. It arises from the semimembranosus tendon and connects anterior and distal to the gastrocnemius tubercle via the posterior joint capsule. The POL, therefore, is not a stand-alone structure but a thickening of the posteromedial joint capsule. It stabilizes internal rotation of the knee through all degrees of flexion but bears the most load when the knee is internally rotated in full extension. It also acts as a secondary external rotation stabilizer. The MPFL arises from the fibers of the vastus medialis obliquus muscle and attaches distally to the superior medial aspect of the patella. This ligament acts to keep the patella within the trochlear groove during flexion and extension. It is rarely injured in a medial knee injury unless there is a concurrent lateral patellar subluxation or dislocation.

Anatomy and function: Tendons and muscles The adductor magnus tendon attaches to the distal medial femoral condyle just posterior and proximal to the adductor tubercle. It has a fascial expansion on the distal-medial aspect that attaches to the medial gastrocnemius tendon, the capsular arm of the POL, and the posteromedial joint capsule. The thick distal lateral aspect attaches to the medial supracondylar ridge. The adductor magnus tendon is an excellent, consistent landmark because it is rarely injured. The vastus medialis obliquus muscle courses over the anteromedial thigh, attaching along the anterior border of the adductor magnus and to the quadratus femoris tendon. The medial gastrocnemius tendon arises proximal and posterior to the gastrocnemius tubercle of the medial femoral condyle. This is another important landmark because it is rarely injured and attaches close to the capsular arm of the POL, thus helping the surgeon locate the femoral attachment of the POL.

Diagnosis: The majority of medial knee injuries are isolated ligamentous injuries.
Most patients will relate a history of a traumatic blow to the lateral aspect of the knee (causing a valgus force) or a non-contact valgus force. Acute injuries are much easier to diagnose clinically, while chronic injuries may be less apparent due to difficulty in differentiating them from a lateral knee injury, possibly requiring valgus stress radiographs.

Diagnosis: Physical exam The physical exam should always begin with a visual inspection of the joint for any outward signs of trauma. Palpation should follow, paying close attention to effusion and subjective tenderness during the exam. The practitioner should also evaluate the contralateral (non-injured) knee to note any differences in gross appearance and landmarks. Palpation should focus specifically on the meniscofemoral and meniscotibial aspects of the sMCL. It has been reported that injury to one versus the other has implications for healing, so localization of the site of injury is beneficial. Testing of the knee joint should be done using the following techniques, with the findings compared to the contralateral, normal knee:

Valgus stress at 0° and 20° – This test puts direct stress on the medial knee structures, reproducing the mechanism of injury. Valgus stress testing is done with the patient supine on the exam table. The lower extremity, supported by the examiner, is abducted. The examiner's fingers monitor the medial joint space for gapping while the opposite hand is placed on the ankle. The knee is placed in 20° of flexion. The examiner then uses their own thigh as a fulcrum at the knee and applies a valgus force (pulling the foot and ankle away from the patient's body). The force is used to establish the amount of gapping present within the joint. It has been reported that 20° of flexion is best for isolating the sMCL, allowing the practitioner to establish the degree of injury (see Classification). Additional testing is done at 0° to determine if a grade III injury is present.

Diagnosis: Anteromedial drawer test – This test is performed with the patient supine and the knee flexed to 80–90°. The foot is externally rotated 10–15° and the examiner applies an anterior and external rotational force. The joint can then be evaluated for tibial anteromedial rotation, taking care to recognize that posterolateral corner instability can give similar rotational test results. As always, compare the test in the opposite knee.

Diagnosis: Dial test (anteromedial rotation test) – This test should be executed with the patient lying both supine and prone. When the patient is supine, the knees must be flexed 30° off the table. The thigh is then stabilized and the foot externally rotated. The examiner watches for the tibial tubercle of the affected knee to rotate as the foot rotates, comparing it to the contralateral knee. A positive test will show rotation greater than 10–15° compared to the opposite knee; this is most easily assessed with a hand placed over the tibia while testing. When the patient is prone, the knee is flexed to 90° and both feet are externally rotated and compared, noting the difference from the non-injured joint. Similar to the anteromedial drawer test, a false positive can result from a posterolateral corner injury. Testing at both 30° and 90° helps to distinguish between these injuries: one should monitor where the tibial rotation occurs (anteromedially or posterolaterally) in the supine position and also assess for medial or lateral joint line gapping to differentiate between the two.
Diagnosis: Classification Grading of medial knee injuries depends on the amount of medial joint space gapping found upon valgus stress testing with the knee in 20° of flexion. Grade I injuries have no instability clinically and are associated with tenderness only, representing a mild sprain. Grade II injuries have broad tenderness over the medial knee and have some gapping with a firm end-point during valgus testing, representing a partial tear of the ligaments. Grade III injuries are complete ligamentous tears; there will be no end-point on valgus stress testing. The historic quantified definition of grades I, II, and III was 0–5 mm, 5–10 mm, and >10 mm of medial compartment gapping, respectively (see the sketch after the Diagnosis sections below). LaPrade et al. reported, however, that a simulated grade III sMCL injury showed only 3.2 mm of increased medial compartment gapping compared to the intact state. Additionally, with the knee in full extension, if valgus stress testing reveals more than 1–2 mm of medial compartment gapping, a concomitant anterior cruciate ligament (ACL) or posterior cruciate ligament (PCL) injury is suspected.

Diagnosis: Radiographs Anterior-posterior (AP) radiographs are useful for reliably assessing normal anatomical landmarks. Bilateral valgus stress AP images can show a difference in medial joint space gapping. It has been reported that an isolated grade III sMCL tear will show an increase in medial compartment gapping of 1.7 mm at 0° of knee flexion and 3.2 mm at 20° of knee flexion, compared to the contralateral knee. A complete medial ligamentous disruption (sMCL, dMCL, and POL) will show increased gapping of 6.5 mm at 0° and 9.8 mm at 20° during valgus stress testing. Pellegrini-Stieda syndrome can also be seen on AP radiographs; this finding is due to calcification of the sMCL (heterotopic ossification) caused by a chronic tear of the ligament.

Diagnosis: MRI Magnetic resonance imaging (MRI) can be helpful in assessing for a ligamentous injury to the medial side of the knee. Milewski et al. found that grade I to III classification can be made on MRI. With a high-quality image (1.5-tesla or 3-tesla magnet) and no previous knowledge of the patient's history, musculoskeletal radiologists were able to accurately diagnose medial knee injury 87% of the time. MRI can also show associated bone bruises on the lateral side of the knee, which, one study shows, occur in almost half of medial knee injuries. Knee MRI should be avoided for knee pain without mechanical symptoms or effusion, unless an appropriate functional rehabilitation program has been unsuccessful.
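For illustration only, the historic quantified grading above can be written as a simple threshold rule. The helper below is hypothetical, and real clinical grading also weighs end-point quality, tenderness, and side-to-side comparison:

```python
# Hypothetical helper encoding only the historic gapping thresholds
# (0-5 mm, 5-10 mm, >10 mm at 20° of knee flexion) described above.
def historic_mcl_grade(gapping_mm: float) -> str:
    """Map valgus-stress medial gapping (mm, knee at 20° flexion) to a grade."""
    if gapping_mm <= 5:
        return "Grade I (mild sprain)"
    elif gapping_mm <= 10:
        return "Grade II (partial tear)"
    return "Grade III (complete tear)"

# Caveat from LaPrade et al. (noted above): a simulated grade III sMCL injury
# produced only ~3.2 mm more gapping than the intact knee, so absolute
# thresholds can understate injury severity.
print(historic_mcl_grade(3.0))   # Grade I (mild sprain)
print(historic_mcl_grade(12.0))  # Grade III (complete tear)
```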
Treatment: Treatment of medial knee injuries varies depending on the location and classification of the injuries. The consensus of many studies is that isolated grade I, II, and III injuries are usually well suited to non-operative treatment protocols. Acute grade III injuries with concomitant multiligament injuries, or knee dislocation involving medial side injury, should undergo surgical treatment. Chronic grade III injuries should also undergo surgical treatment if the patient is experiencing rotational instability or side-to-side instability.

Treatment: Nonoperative treatment Conservative treatment of isolated medial knee injuries (grades I–III) begins with controlling swelling and protecting the knee. Swelling is managed well with rest, ice, elevation, and compression wraps. Protection can be provided by a hinged brace that stabilizes against varus and valgus stress but allows full flexion and extension. The brace should be worn for the first four to six weeks of rehabilitation, especially during physical exercise, to prevent trauma to the healing ligament. Stationary bike exercises are the recommended exercise for active range of motion and should be increased as tolerated by the patient. Side-to-side movements of the knee should be avoided. The patient is allowed to bear weight as tolerated and should perform quadriceps strengthening exercises along with range of motion exercises. The typical return-to-play time frame for most athletes with a grade III medial knee injury undergoing a rehabilitation program is 5 to 7 weeks.

Treatment: Operative treatment It has been reported that severe acute and chronic grade III medial knee injuries often involve the sMCL in combination with the POL. Direct surgical repair or reconstruction, therefore, should be performed for both of these ligaments, because they both play an important role in static medial knee stability. The biomechanically validated approach is to reconstruct both the POL and both divisions of the sMCL.

Treatment: Severe acute tears Surgery involving direct repair (with or without augmentation from a hamstring autograft), among other previously used techniques, has not been biomechanically tested. An anatomical reconstruction of the sMCL and POL has been biomechanically validated.

Treatment: Chronic instability Underlying causes of chronic medial knee instability must be identified before surgical reconstruction is performed. More specifically, patients with genu valgum (knock-kneed) alignment must be evaluated and treated with an osteotomy (or osteotomies) to establish balanced forces on the knee ligaments, preventing premature failure of a concurrent cruciate ligament reconstruction. These patients should be rehabilitated after the osteotomy heals, and it should be verified that they no longer have functional limitations. Once proper alignment is achieved, reconstruction can be performed.

Treatment: Anatomic medial knee reconstruction This technique, described in detail by LaPrade et al., uses two grafts in four separate tunnels. An incision is made over the medial knee 4 cm medial to the patella and extended distally 7 to 8 cm past the joint line, directly over the pes anserinus tendons. Within the distal borders of the incision, the semitendinosus and gracilis tendons are found beneath the sartorius muscle fascia. The distal tibial attachment of the sMCL can be found under these identified tendons, making up the floor of the pes anserine bursa, 6 cm distal to the joint line. Once identified, the remaining soft tissue is removed from the attachment site. An eyelet pin is then drilled through the attachment site transversely through the tibia, making sure the starting point is located at the posterior aspect of the site to ensure better biomechanical outcomes. Over the eyelet pin, a 7-mm reamer (6 mm in smaller patients) is reamed to a depth of 25 mm. Once prepared, attention is directed to preparing the reconstruction tunnel for the tibial attachment of the POL. Above the anterior arm attachment of the semimembranosus muscle tendon, the tibial attachment of the central arm of the POL is identified. This attachment is exposed by making a small incision parallel to the fibers along the posterior edge of the anterior arm of the semimembranosus tendon. Once exposed, an eyelet pin is drilled through the tibia toward Gerdy's tubercle (on the anterolateral tibia).
After verifying the correct anatomic eyelet pin placement, a 7-mm reamer is used over the pin to drill a tunnel to a depth of 25 mm. Moving to the femoral attachments of the ligaments, the first step is identifying the adductor magnus muscle tendon and its corresponding attachment site near the adductor tubercle. Just distal and slightly anterior to this tubercle is the bony prominence of the medial epicondyle. The attachment site of the sMCL can be identified slightly proximal and posterior to the epicondyle. An eyelet pin can now be passed transversely through the femur at this site. The tunnel at this location, however, should be drilled only after identifying the POL attachment site. The POL femoral attachment is identified by locating the gastrocnemius tubercle (2.6 mm distal and 3.1 mm anterior to the medial gastrocnemius tendon attachment on the femur). If the posteromedial capsule is not intact, the POL attachment site is located 7.7 mm distal and 2.9 mm anterior to the gastrocnemius tubercle. With the capsule intact, however, an incision is made along the posterior aspect of the sMCL, parallel to its fibers, and the central arm of the POL can then be found at its femoral attachment site. Once identified, an eyelet pin is passed transversely through the femur. The distance between the femoral attachment sites of the POL and the sMCL (on average, 11 mm) should now be measured to verify that the anatomic attachment sites have been correctly identified. Once this is done, the femoral tunnels for the sMCL and POL can be reamed to a depth of 25 mm using a 7-mm reamer.

The next aspect of the surgery is preparation and placement of the reconstruction grafts. The preparation can be done while the other steps are being completed by another surgeon or physician's assistant. The semitendinosus tendon can be harvested using a hamstring stripper for use as the reconstruction autograft. The autograft is sectioned into a 16-cm length for the sMCL reconstruction and a 12-cm length for the POL reconstruction; these lengths are also used if the surgery is done with cadaver allograft. The sMCL and POL grafts are pulled into their respective femoral tunnels and each secured with a cannulated bioabsorbable screw. The grafts are passed distally along their native courses to the tibial attachments. The sMCL graft is passed under the sartorius fascia (and any remaining sMCL fibers). Both grafts are passed (but not yet secured) into their respective tibial tunnels using the existing eyelet pins. If simultaneous cruciate ligament surgery is underway, the cruciate reconstructions are secured before the medial ligaments.

Securing the POL graft is done in full knee extension. The graft is pulled tight and fixed using a bioabsorbable screw. The knee is then flexed to 20°. Making sure the tibia remains in neutral rotation, a varus force is applied to ensure there is no medial compartment gapping of the knee. The sMCL graft is then tightened and fixed with a bioabsorbable screw. The final step of reconstruction ligament fixation is the proximal tibial attachment of the sMCL. This soft-tissue attachment can be reproduced with a suture anchor placed 12.2 mm distal to the medial joint line (the average location), directly medial to the anterior arm of the semimembranosus tibial attachment.
Once this aspect of the sMCL is secured to the suture anchor, the physician puts the knee through range of motion testing to determine the "safe zone" of knee motion, which guides rehabilitation from the first postoperative day (below). Rehabilitation: Nonoperative Rehabilitation As mentioned in the Nonoperative Treatment section, the principles of rehabilitation are to control swelling, protect the knee (bracing), reactivate the quadriceps muscle, and restore range of motion. Early weight bearing is encouraged as tolerated, using crutches as little as possible, with a goal of walking without a limp. Stationary biking is the preferred range of motion exercise, stimulating the ligament to heal faster. Time on the bike and resistance should be increased as tolerated by the patient. Side-to-side movement should be restricted until after 3 to 4 weeks to allow adequate healing. Proprioceptive and balance activities can progress after a clinical exam or valgus stress radiographs reveal healing. Athletes can often resume full activities within 5 to 7 weeks after an isolated sMCL injury. Rehabilitation: Postoperative Rehabilitation Postoperative rehabilitation protocols for reconstructed or repaired medial knee injuries focus on protecting the ligaments/grafts, managing swelling, reactivating the quadriceps, and establishing range of motion. A safe range of motion ("safe zone") should be measured by the surgeon intraoperatively and relayed to the rehabilitation specialist to prevent overstressing the ligaments during rehabilitation. The ideal passive range of motion is 0 to 90° of flexion on postoperative day one and should be maintained for 2 weeks, as tolerated, with a goal of 130° of flexion by the end of the 6th week. To protect the newly reconstructed ligaments, a hinged knee brace should be used. Swelling should be managed with cryotherapy and compression. Patellofemoral mobilization, quadriceps reactivation, and frequent ankle pumps are also utilized right after surgery to prevent arthrofibrosis. Non-weight bearing to touch-down weight bearing is recommended for the first 6 weeks, progressing to closed-kinetic-chain exercises thereafter. Light-resistance stationary biking is also started at 2 weeks and can be increased as tolerated. Gait mechanics are addressed when the patient is able to bear their full weight. The patient should be able to walk without limping or developing swelling in the joint. Rehabilitation can only move as fast as tolerated, and effusion must be monitored and managed at all times to ensure good results. Once motion, strength, and balance are regained, plyometric and agility exercises are started at 16 weeks. Brisk walking for 1 to 2 miles should be well tolerated before the patient starts a jogging program. Return to sports may be assessed at this point, provided no functional or stability deficits are present. Rehabilitation should be supervised by a professional specialist working alongside the surgeon. Protocols may be adjusted in the presence of concomitant ligament reconstructions or osteotomies. Valgus stress AP radiographs (mentioned above) are an excellent and cost-effective way to monitor postoperative results during follow-up. History: During the Troubles in Northern Ireland, paramilitaries considered themselves to be law enforcers in their own areas. They used limb punishment shootings, commonly referred to as kneecapping, to punish petty criminals and other individuals whose behavior they deemed unacceptable.
If the crime was considered to be grave, the victim was also shot in the ankles and elbows, leaving them with six gunshot wounds (colloquially known as a "six pack"). Approximately 2,500 people were victims of these punishment shootings over the duration of the conflict. Those who were attacked carried a social stigma with them. The Red Brigades, an Italian militant organization, employed these punishment shootings to warn their opponents; they had used the method to punish at least 75 people by December 1978. More recently, this kind of punishment shooting has been employed by Hamas in the Gaza Strip to silence political opponents. The Bangladesh Police have employed kneecapping since 2009 to punish members of the opposition and prevent them from participating in protests against the government. Human Rights Watch (HRW) has published a report on kneecapping in Bangladesh. Future research: Future research with regard to medial knee injuries should evaluate clinical outcomes between different reconstruction techniques. Determining the advantages and disadvantages of these techniques would also be beneficial for optimizing treatment.
**Spectrofluorometer** Spectrofluorometer: A spectrofluorometer is an instrument which takes advantage of the fluorescent properties of some compounds in order to provide information regarding their concentration and chemical environment in a sample. A certain excitation wavelength is selected, and the emission is observed either at a single wavelength, or a scan is performed to record the intensity versus wavelength, also called an emission spectrum. The instrument is used in fluorescence spectroscopy. Operation: Generally, spectrofluorometers use high-intensity light sources to bombard a sample with as many photons as possible. This allows the maximum number of molecules to be in an excited state at any one point in time. The light is either passed through a filter, selecting a fixed wavelength, or a monochromator, which allows a wavelength of interest to be selected for use as the exciting light. The emission is collected perpendicular to the excitation light. The emission is also passed through either a filter or a monochromator before being detected by a photomultiplier tube, photodiode, or charge-coupled device detector. The signal can be processed as either digital or analog output. Operation: Systems vary greatly, and a number of considerations affect the choice. The first is the signal-to-noise ratio. There are many ways to assess the signal-to-noise ratio of a given system, but the accepted standard is the Raman signal of water. Sensitivity, or detection limit, is another specification to be considered: that is, how little light can be measured. The standard here is fluorescein in NaOH; typical values for a high-end instrument are in the femtomolar range. Auxiliary components: These systems come with many options, including: polarizers, Peltier temperature controllers, cryostats, cold-finger Dewars, pulsed lasers for lifetime measurements, LEDs for lifetimes, filter holders, adjustable optics (very important), solid sample holders, slide holders, integrating spheres, near-infrared detectors, bilateral slits, manual slits, computer-controlled slits, fast-switching monochromators, and filter wheels.
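To make the scanning operation concrete, here is a minimal sketch in C of an emission scan at a fixed excitation wavelength. The device-control functions (set_excitation_nm, set_emission_nm, read_detector_counts) are hypothetical stand-ins for an instrument driver, not a real vendor API, and the wavelengths are arbitrary.

#include <stdio.h>

/* Hypothetical hardware stubs; a real instrument driver would replace these. */
static double excitation_nm, emission_nm;
static void set_excitation_nm(double nm) { excitation_nm = nm; }
static void set_emission_nm(double nm) { emission_nm = nm; }
static double read_detector_counts(void) { return 1000.0; /* placeholder reading */ }

int main(void) {
    set_excitation_nm(350.0);                      /* select one excitation wavelength */
    printf("excitation fixed at %.1f nm\n", excitation_nm);
    for (double em = 380.0; em <= 600.0; em += 5.0) {
        set_emission_nm(em);                       /* step the emission monochromator */
        printf("%.1f nm  %8.1f counts\n", emission_nm, read_detector_counts());
    }
    return 0;
}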
**Knowledge Engineering Environment** Knowledge Engineering Environment: Knowledge Engineering Environment (KEE) is a frame-based development tool for expert systems. It was developed and sold by IntelliCorp, and was first released in 1983. It ran on Lisp machines, and was later ported to Lucid Common Lisp with the CLX library, an X Window System (X11) interface for Common Lisp. This version was available on several different UNIX workstations. Knowledge Engineering Environment: For KEE, several extensions were offered: Simkit, a frame-based simulation library, and KEEconnection, a database connection between the frame system and relational databases. In KEE, frames are called units. Units are used for both individual instances and classes. Frames have slots, and slots have facets. Facets can describe, for example, a slot's expected values, its working value, or its inheritance rule. Slots can have multiple values. Behavior can be implemented using a message-passing model. Knowledge Engineering Environment: KEE provides an extensive graphical user interface (GUI) to create, browse, and manipulate frames. KEE also includes a frame-based rule system. In the KEE knowledge base, rules are frames. Both forward-chaining and backward-chaining inference are available. Knowledge Engineering Environment: KEE supports non-monotonic reasoning through the concept of worlds. Worlds provide alternative slot values for frames. Through an assumption-based truth (or reason) maintenance system, inconsistencies can be detected and analyzed. ActiveImages allows graphical displays to be attached to slots of units. Typical examples are buttons, dials, graphs, and histograms. The graphics are also implemented as units via KEEPictures, a frame-based graphics library.
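As a rough illustration of the unit/slot model described above, here is a small sketch in C. It assumes a simplified frame with named slots and a parent link, so that a slot lookup falls back to the parent unit, mimicking inheritance; facets, multiple values, and message passing are omitted, and nothing here reflects KEE's actual implementation.

#include <stdio.h>
#include <string.h>

#define MAX_SLOTS 8

struct Slot { const char *name; const char *value; };

struct Unit {
    const char *name;
    struct Unit *parent;                       /* the class this unit belongs to */
    struct Slot slots[MAX_SLOTS];
    int slot_count;
};

/* Look up a slot on the unit, falling back to its ancestors. */
static const char *get_slot(const struct Unit *u, const char *slot) {
    for (; u != NULL; u = u->parent)
        for (int i = 0; i < u->slot_count; i++)
            if (strcmp(u->slots[i].name, slot) == 0)
                return u->slots[i].value;
    return NULL;                               /* slot not found anywhere */
}

int main(void) {
    struct Unit vehicle = { "Vehicle", NULL, { { "wheels", "4" } }, 1 };
    struct Unit truck = { "Truck", &vehicle, { { "payload", "heavy" } }, 1 };

    printf("Truck wheels: %s\n", get_slot(&truck, "wheels"));   /* inherited slot */
    printf("Truck payload: %s\n", get_slot(&truck, "payload")); /* local slot */
    return 0;
}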
**Stromberg Formation** Stromberg Formation: The Stromberg Formation is a geologic formation in Germany. It preserves fossils dating back to the Cretaceous period.
**Computer program** Computer program: A computer program is a sequence or set of instructions in a programming language for a computer to execute. Computer programs are one component of software, which also includes documentation and other intangible components. A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using the language's compiler. (Assembly language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within the language's interpreter. If the executable is requested for execution, then the operating system loads it into memory and starts a process. The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction. If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each statement. Running the source code is slower than running an executable. Moreover, the interpreter must be installed on the computer. Example computer program: The "Hello, World!" program is used to illustrate a language's basic syntax. The syntax of the language BASIC (1964) was intentionally limited to make the language easy to learn. For example, variables are not declared before being used. Also, variables are automatically initialized to zero. Here is an example computer program, in Basic, to average a list of numbers: Once the mechanics of basic computer programming are learned, more sophisticated and powerful languages are available to build large computer systems. History: Improvements in software development are the result of improvements in computer hardware. At each stage in hardware's history, the task of computer programming changed dramatically. Analytical Engine In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine. History: The names of the components of the calculating device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled. The device had a "store" which consisted of memory to hold 1,000 numbers of 50 decimal digits each. Numbers from the "store" were transferred to the "mill" for processing. It was programmed using two sets of perforated cards. One set directed the operation and the other set supplied the variables. However, after more than 17,000 pounds of the British government's money had been spent, the thousands of cogged wheels and gears never fully worked together. Ada Lovelace worked for Charles Babbage to create a description of the Analytical Engine (1843). The description contained Note G, which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine. This note is recognized by some historians as the world's first computer program. Other historians consider that Babbage himself wrote the first computer program for the Analytical Engine. It listed a sequence of operations to compute the solution for a system of two linear equations. History: Universal Turing machine In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device that can model every computation. It is a finite-state machine with an infinitely long read/write tape.
The machine can move the tape back and forth, changing its contents as it performs an algorithm. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state. All present-day computers are Turing complete. History: ENIAC The Electronic Numerical Integrator And Computer (ENIAC) was built between July 1943 and Fall 1945. It was a Turing complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together. Its 40 units weighed 30 tons, occupied 1,800 square feet (167 m2), and consumed $650 per hour (in 1940s currency) in electricity when idle. It had 20 base-10 accumulators. Programming the ENIAC took up to two months. Three function tables were on wheels and needed to be rolled to fixed function panels. Function tables were connected to function panels by plugging heavy black cables into plugboards. Each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week. It ran from 1947 until 1955 at Aberdeen Proving Ground, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns. History: Stored-program computers Instead of plugging in cords and turning switches, a stored-program computer loads its instructions into memory just like it loads its data into memory. As a result, the computer could be programmed quickly and perform calculations at very fast speeds. Presper Eckert and John Mauchly built the ENIAC. The two engineers introduced the stored-program concept in a three-page memo dated February 1944. Later, in September 1944, Dr. John von Neumann began working on the ENIAC project. On June 30, 1945, von Neumann published the First Draft of a Report on the EDVAC, which equated the structures of the computer with the structures of the human brain. The design became known as the von Neumann architecture. The architecture was simultaneously deployed in the construction of the EDVAC and EDSAC computers in 1949. The IBM System/360 (1964) was a family of computers, each having the same instruction set architecture. The Model 20 was the smallest and least expensive. Customers could upgrade and retain the same application software. The Model 195 was the most expensive. Each System/360 model featured multiprogramming: having multiple processes in memory at once, so that when one process was waiting for input/output, another could compute. History: IBM planned for each model to be programmed using PL/1. A committee was formed that included COBOL, Fortran and ALGOL programmers. The purpose was to develop a language that was comprehensive, easy to use, extendible, and would replace Cobol and Fortran. The result was a large and complex language that took a long time to compile. History: Computers manufactured until the 1970s had front-panel switches for manual programming. The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs could also be input automatically via paper tape, punched cards or magnetic tape. After the medium was loaded, the starting address was set via switches, and the execute button was pressed.
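Before continuing the hardware history, the Turing machine model described above can be sketched in a few lines of C: a fixed-length tape, a head position, a state, and a loop that halts when the halt state is reached. The particular machine, a unary incrementer that appends a 1 to a run of 1s, is an illustrative assumption.

#include <stdio.h>

enum { TAPE_LEN = 16, STATE_SCAN = 0, STATE_HALT = 1 };

int main(void) {
    char tape[TAPE_LEN] = "111";      /* the remaining cells are blank ('\0') */
    int head = 0;
    int state = STATE_SCAN;

    while (state != STATE_HALT && head < TAPE_LEN) {
        if (tape[head] == '1') {
            head++;                   /* keep scanning right over the 1s */
        } else {
            tape[head] = '1';         /* write a 1 on the first blank cell */
            state = STATE_HALT;       /* then enter the halt state */
        }
    }
    printf("tape: %s\n", tape);       /* prints "1111" */
    return 0;
}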
History: Very Large Scale Integration A major milestone in software development was the invention of the Very Large Scale Integration (VLSI) circuit (1964). Following World War II, tube-based technology was replaced with point-contact transistors (1947) and bipolar junction transistors (late 1950s) mounted on a circuit board. During the 1960s, the aerospace industry replaced the circuit board with an integrated circuit chip. Robert Noyce, co-founder of Fairchild Semiconductor (1957) and Intel (1968), achieved a technological improvement to refine the production of field-effect transistors (1963). The goal is to alter the electrical resistivity and conductivity of a semiconductor junction. First, naturally occurring silicate minerals are converted into polysilicon rods using the Siemens process. The Czochralski process then converts the rods into a monocrystalline silicon boule. The boule is then thinly sliced to form a wafer substrate. The planar process of photolithography then integrates unipolar transistors, capacitors, diodes, and resistors onto the wafer to build a matrix of metal–oxide–semiconductor (MOS) transistors. The MOS transistor is the primary component in integrated circuit chips. Originally, integrated circuit chips had their function set during manufacturing. During the 1960s, controlling the electrical flow migrated to programming a matrix of read-only memory (ROM). The matrix resembled a two-dimensional array of fuses. The process to embed instructions onto the matrix was to burn out the unneeded connections. There were so many connections that firmware programmers wrote a computer program on another chip to oversee the burning. The technology became known as Programmable ROM. In 1971, Intel installed the computer program onto the chip and named it the Intel 4004 microprocessor. History: The terms microprocessor and central processing unit (CPU) are now used interchangeably. However, CPUs predate microprocessors. For example, the IBM System/360 (1964) had a CPU made from circuit boards containing discrete components on ceramic substrates. History: Sac State 8008 The Intel 4004 (1971) was a 4-bit microprocessor designed to run the Busicom calculator. Five months after its release, Intel released the Intel 8008, an 8-bit microprocessor. Bill Pentz led a team at Sacramento State to build the first microcomputer using the Intel 8008: the Sac State 8008 (1972). Its purpose was to store patient medical records. The computer supported a disk operating system to run a Memorex 3-megabyte hard disk drive. It had a color display and keyboard that were packaged in a single console. The disk operating system was programmed using IBM's Basic Assembly Language (BAL). The medical records application was programmed using a BASIC interpreter. However, the computer was an evolutionary dead end because it was extremely expensive. Also, it was built at a public university lab for a specific purpose. Nonetheless, the project contributed to the development of the Intel 8080 (1974) instruction set. History: x86 series In 1978, the modern software development environment began when Intel upgraded the Intel 8080 to the Intel 8086. Intel simplified the Intel 8086 to manufacture the cheaper Intel 8088. IBM embraced the Intel 8088 when they entered the personal computer market (1981). As consumer demand for personal computers increased, so did Intel's microprocessor development. The succession of development is known as the x86 series.
The x86 assembly language is a family of backward-compatible machine instructions. Machine instructions created in earlier microprocessors were retained throughout microprocessor upgrades. This enabled consumers to purchase new computers without having to purchase new application software. The major categories of instructions are: Memory instructions to set and access numbers and strings in random-access memory. History: Integer arithmetic logic unit (ALU) instructions to perform the primary arithmetic operations on integers. Floating point ALU instructions to perform the primary arithmetic operations on real numbers. Call stack instructions to push and pop words needed to allocate memory and interface with functions. Single instruction, multiple data (SIMD) instructions to increase speed when multiple processors are available to perform the same algorithm on an array of data. History: Changing programming environment VLSI circuits enabled the programming environment to advance from a computer terminal (until the 1990s) to a graphical user interface (GUI) computer. Computer terminals limited programmers to a single shell running in a command-line environment. During the 1970s, full-screen source code editing became possible through a text-based user interface. Regardless of the technology available, the goal is to program in a programming language. Programming paradigms and languages: Programming language features exist to provide building blocks to be combined to express programming ideals. Ideally, a programming language should: express ideas directly in the code. express independent ideas independently. express relationships among ideas directly in the code. combine ideas freely. combine ideas only where combinations make sense. express simple ideas simply. The style in which a programming language provides these building blocks may be categorized into programming paradigms. For example, different paradigms may differentiate: procedural languages, functional languages, and logical languages. different levels of data abstraction. different levels of class hierarchy. different levels of input datatypes, as in container types and generic programming. Each of these programming styles has contributed to the synthesis of different programming languages. A programming language is a set of keywords, symbols, identifiers, and rules by which programmers can communicate instructions to the computer. They follow a set of rules called a syntax. Keywords are reserved words to form declarations and statements. Symbols are characters to form operations, assignments, control flow, and delimiters. Identifiers are words created by programmers to form constants, variable names, structure names, and function names. Syntax rules are defined in the Backus–Naur form. Programming languages get their basis from formal languages. The purpose of defining a solution in terms of its formal language is to generate an algorithm to solve the underlying problem. An algorithm is a sequence of simple instructions that solve a problem. Generations of programming language The evolution of programming language began when the EDSAC (1949) used the first stored computer program in its von Neumann architecture. Programming the EDSAC was in the first generation of programming language. Programming paradigms and languages: The first generation of programming language is machine language. Machine language requires the programmer to enter instructions using instruction numbers called machine code.
For example, the ADD operation on the PDP-11 has instruction number 24576. The second generation of programming language is assembly language. Assembly language allows the programmer to use mnemonic instructions instead of remembering instruction numbers. An assembler translates each assembly language mnemonic into its machine language number. For example, on the PDP-11, the operation 24576 can be referenced as ADD in the source code. The four basic arithmetic operations have assembly instructions like ADD, SUB, MUL, and DIV. Computers also have instructions like DW (Define Word) to reserve memory cells. Then the MOV instruction can copy integers between registers and memory. The basic structure of an assembly language statement is a label, operation, operand, and comment. Labels allow the programmer to work with variable names. The assembler will later translate labels into physical memory addresses. Programming paradigms and languages: Operations allow the programmer to work with mnemonics. The assembler will later translate mnemonics into instruction numbers. Operands tell the assembler which data the operation will process. Comments allow the programmer to articulate a narrative because the instructions alone are vague. Programming paradigms and languages: The key characteristic of an assembly language program is that it forms a one-to-one mapping to its corresponding machine language target. The third generation of programming language uses compilers and interpreters to execute computer programs. The distinguishing feature of a third generation language is its independence from particular hardware. Early languages include Fortran (1958), COBOL (1959), ALGOL (1960), and BASIC (1964). In 1973, the C programming language emerged as a high-level language that produced efficient machine language instructions. Whereas third-generation languages historically generated many machine instructions for each statement, C has statements that may generate a single machine instruction. Moreover, an optimizing compiler might overrule the programmer and produce fewer machine instructions than statements. Today, an entire spectrum of imperative, third-generation languages exists. The fourth generation of programming language emphasizes what output results are desired, rather than how programming statements should be constructed. Declarative languages attempt to limit side effects and allow programmers to write code with relatively few errors. One popular fourth generation language is called Structured Query Language (SQL). Database developers no longer need to process each database record one at a time. Also, a simple instruction can generate output records without having to understand how they are retrieved. Programming paradigms and languages: Imperative languages Imperative languages specify a sequential algorithm using declarations, expressions, and statements: A declaration introduces a variable name to the computer program and assigns it to a datatype – for example: var x: integer; An expression yields a value – for example: 2 + 2 yields 4 A statement might assign an expression to a variable or use the value of a variable to alter the program's control flow – for example: x := 2 + 2; if x = 4 then do_something(); Fortran FORTRAN (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system." It was designed for scientific calculations, without string handling facilities. Along with declarations, expressions, and statements, it supported: arrays.
Programming paradigms and languages: subroutines. "do" loops. It succeeded because: programming and debugging costs were below computer running costs. it was supported by IBM. applications at the time were scientific. However, non-IBM vendors also wrote Fortran compilers, but with a syntax that would likely fail IBM's compiler. The American National Standards Institute (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard and remained so until 1991. Fortran 90 supports: records. pointers to arrays. Programming paradigms and languages: COBOL COBOL (1959) stands for "COmmon Business Oriented Language." Fortran manipulated symbols. It was soon realized that symbols did not need to be numbers, so strings were introduced. The US Department of Defense influenced COBOL's development, with Grace Hopper being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal. COBOL's development was tightly controlled, so dialects did not emerge to require ANSI standards. As a consequence, the language was not changed for 15 years, until 1974. The 1990s version did make consequential changes, like adding object-oriented programming. Programming paradigms and languages: Algol ALGOL (1960) stands for "ALGOrithmic Language." It had a profound influence on programming language design. Emerging from a committee of European and American programming language experts, it used standard mathematical notation and had a readable, structured design. ALGOL was the first language to define its syntax using the Backus–Naur form. This led to syntax-directed compilers. It added features like: block structure, where variables were local to their block. Programming paradigms and languages: arrays with variable bounds. "for" loops. functions. recursion. Algol's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one branch. On another branch the descendants include C, C++ and Java. Programming paradigms and languages: Basic BASIC (1964) stands for "Beginner's All-Purpose Symbolic Instruction Code." It was developed at Dartmouth College for all of their students to learn. If a student did not go on to a more powerful language, the student would still remember Basic. A Basic interpreter was installed in the microcomputers manufactured in the late 1970s. As the microcomputer industry grew, so did the language. Basic pioneered the interactive session. It offered operating system commands within its environment: The 'new' command created an empty slate. Programming paradigms and languages: Statements were evaluated immediately. Statements could be programmed by preceding them with a line number. The 'list' command displayed the program. The 'run' command executed the program. However, the Basic syntax was too simple for large programs. Recent dialects added structure and object-oriented extensions. Microsoft's Visual Basic is still widely used and produces a graphical user interface. Programming paradigms and languages: C The C programming language (1973) got its name because the language BCPL was replaced with B, and AT&T Bell Labs called the next version "C." Its purpose was to write the UNIX operating system. C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s. Its growth also was because it has the facilities of assembly language, but uses a high-level syntax. It added advanced features like: inline assembler.
Programming paradigms and languages: arithmetic on pointers. pointers to functions. bit operations. freely combining complex operators. C allows the programmer to control which region of memory data is to be stored in. Global variables and static variables require the fewest clock cycles to store. The stack is automatically used for the standard variable declarations. Heap memory is returned to a pointer variable from the malloc() function. Programming paradigms and languages: The global and static data region is located just above the program region. (The program region is technically called the text region; it is where machine instructions are stored.) The global and static data region is technically two regions. One region is called the initialized data segment, where variables declared with default values are stored. The other region is called the block started by symbol (bss) segment, where variables declared without default values are stored. Programming paradigms and languages: Variables stored in the global and static data region have their addresses set at compile-time. They retain their values throughout the life of the process. The global and static region stores the global variables that are declared on top of (outside) the main() function. Global variables are visible to main() and every other function in the source code. On the other hand, variable declarations inside of main(), other functions, or within { } block delimiters are local variables. Local variables also include formal parameter variables. Parameter variables are enclosed within the parentheses of function definitions. They provide an interface to the function. Local variables declared using the static prefix are also stored in the global and static data region. Unlike global variables, static variables are only visible within the function or block. Static variables always retain their value. An example usage would be the function int increment_counter(){ static int counter = 0; counter++; return counter; } The stack region is a contiguous block of memory located near the top memory address. Variables placed in the stack are populated from top to bottom. A stack pointer is a special-purpose register that keeps track of the last memory address populated. Variables are placed into the stack via the assembly language PUSH instruction. Therefore, the addresses of these variables are set during runtime. The method for stack variables to lose their scope is via the POP instruction. Local variables declared without the static prefix, including formal parameter variables, are called automatic variables and are stored in the stack. They are visible inside the function or block and lose their scope upon exiting the function or block. The heap region is located below the stack. It is populated from the bottom to the top. The operating system manages the heap using a heap pointer and a list of allocated memory blocks. Like the stack, the addresses of heap variables are set during runtime. An out-of-memory error occurs when the heap pointer and the stack pointer meet. C provides the malloc() library function to allocate heap memory. Populating the heap with data requires an additional copy operation. Variables stored in the heap are economically passed to functions using pointers. Without pointers, the entire block of data would have to be passed to the function via the stack.
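The storage regions described above can be seen together in one short sketch, assuming a typical C implementation; exact addresses and layout vary by platform.

#include <stdio.h>
#include <stdlib.h>

int global_total = 0;                 /* global and static data region */

int increment_counter(void) {
    static int counter = 0;           /* also the static data region; retains its value */
    counter++;
    return counter;
}

int main(void) {
    int local = 5;                    /* automatic variable, stored on the stack */
    int *heap_value = malloc(sizeof *heap_value);  /* block allocated from the heap */
    if (heap_value == NULL) return 1;

    *heap_value = increment_counter() + local;
    global_total = *heap_value;
    printf("counter + local = %d, global_total = %d\n", *heap_value, global_total);

    free(heap_value);                 /* return the block to the heap */
    return 0;
}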
Programming paradigms and languages: C++ In the 1970s, software engineers needed language support to break large projects down into modules. One obvious feature was to decompose large projects physically into separate files. A less obvious feature was to decompose large projects logically into abstract datatypes. At the time, languages supported concrete (scalar) datatypes like integer numbers, floating-point numbers, and strings of characters. Concrete datatypes have their representation as part of their name. Abstract datatypes are structures of concrete datatypes, with a new name assigned. For example, a list of integers could be called integer_list. Programming paradigms and languages: In object-oriented jargon, abstract datatypes are called classes. However, a class is only a definition; no memory is allocated. When memory is allocated to a class and bound to an identifier, it is called an object. Object-oriented imperative languages developed by combining the need for classes and the need for safe functional programming. A function, in an object-oriented language, is assigned to a class. An assigned function is then referred to as a method, member function, or operation. Object-oriented programming is executing operations on objects. Object-oriented languages support a syntax to model subset/superset relationships. In set theory, an element of a subset inherits all the attributes contained in the superset. For example, a student is a person. Therefore, the set of students is a subset of the set of persons. As a result, students inherit all the attributes common to all persons. Additionally, students have unique attributes that other people do not have. Object-oriented languages model subset/superset relationships using inheritance. Object-oriented programming became the dominant language paradigm by the late 1990s. C++ (1985) was originally called "C with Classes." It was designed to expand C's capabilities by adding the object-oriented facilities of the language Simula. An object-oriented module is composed of two files. The definitions file is called the header file. Here is a C++ header file for the GRADE class in a simple school application: A constructor operation is a function with the same name as the class name. It is executed when the calling operation executes the new statement. Programming paradigms and languages: A module's other file is the source file. Here is a C++ source file for the GRADE class in a simple school application: Here is a C++ header file for the PERSON class in a simple school application: Here is a C++ source file for the PERSON class in a simple school application: Here is a C++ header file for the STUDENT class in a simple school application: Here is a C++ source file for the STUDENT class in a simple school application: Here is a driver program for demonstration: Here is a makefile to compile everything: Declarative languages Imperative languages have one major criticism: assigning an expression to a non-local variable may produce an unintended side effect. Declarative languages generally omit the assignment statement and the control flow. They describe what computation should be performed and not how to compute it. Two broad categories of declarative languages are functional languages and logical languages. Programming paradigms and languages: The principle behind a functional language is to use lambda calculus as a guide for a well-defined semantics. In mathematics, a function is a rule that maps elements from an expression to a range of values. Consider the function: times_10(x) = 10 * x The expression 10 * x is mapped by the function times_10() to a range of values.
One value happens to be 20. This occurs when x is 2. So, the application of the function is mathematically written as: times_10(2) = 20 A functional language compiler will not store this value in a variable. Instead, it will push the value onto the computer's stack before setting the program counter back to the calling function. The calling function will then pop the value from the stack. Imperative languages do support functions. Therefore, functional programming can be achieved in an imperative language, if the programmer uses discipline. However, a functional language will force this discipline onto the programmer through its syntax. Functional languages have a syntax tailored to emphasize the what. A functional program is developed with a set of primitive functions followed by a single driver function. Consider the snippet: function max(a,b){/* code omitted */} function min(a,b){/* code omitted */} function difference_between_largest_and_smallest(a,b,c) { return max(a,max(b,c)) - min(a, min(b,c));} The primitives are max() and min(). The driver function is difference_between_largest_and_smallest(). Executing: put(difference_between_largest_and_smallest(10,4,7)); will output 6. Programming paradigms and languages: Functional languages are used in computer science research to explore new language features. Moreover, their lack of side effects has made them popular in parallel programming and concurrent programming. However, application developers prefer the object-oriented features of imperative languages. Programming paradigms and languages: Lisp Lisp (1958) stands for "LISt Processor." It is tailored to process lists. A full structure of the data is formed by building lists of lists. In memory, a tree data structure is built. Internally, the tree structure lends itself nicely to recursive functions. The syntax to build a tree is to enclose the space-separated elements within parentheses. The following is a list of three elements. The first two elements are themselves lists of two elements: ((A B) (HELLO WORLD) 94) Lisp has functions to extract and reconstruct elements. The function head() returns a list containing the first element in the list. The function tail() returns a list containing everything but the first element. The function cons() returns a list that is the concatenation of other lists. Therefore, the following expression will return the list x: cons(head(x), tail(x)) One drawback of Lisp is that when many functions are nested, the parentheses may look confusing. Modern Lisp environments help ensure parentheses match. As an aside, Lisp does support the imperative language operations of the assignment statement and goto loops. Also, Lisp is not concerned with the datatype of the elements at compile time. Instead, it assigns (and may reassign) the datatypes at runtime. Assigning the datatype at runtime is called dynamic binding. Whereas dynamic binding increases the language's flexibility, programming errors may linger until late in the software development process. Writing large, reliable, and readable Lisp programs requires forethought. If properly planned, the program may be much shorter than an equivalent imperative language program. Lisp is widely used in artificial intelligence. However, its usage has been accepted only because it has imperative language operations, making unintended side effects possible.
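For readers coming from imperative languages, here is a rough C sketch of the head/tail/cons idea, assuming a singly linked list of integers and letting head() return the element itself for simplicity; cleanup of the allocated cells is omitted for brevity.

#include <stdio.h>
#include <stdlib.h>

struct Cell { int value; struct Cell *next; };

/* cons() prepends an element, so cons(head(x), tail(x)) rebuilds x. */
static struct Cell *cons(int value, struct Cell *rest) {
    struct Cell *c = malloc(sizeof *c);
    if (c == NULL) exit(1);
    c->value = value;
    c->next = rest;                    /* the new cell points at the old list */
    return c;
}

static int head(const struct Cell *list) { return list->value; }
static struct Cell *tail(struct Cell *list) { return list->next; }

int main(void) {
    /* Build the list (1 2 3) by consing onto the empty list (NULL). */
    struct Cell *x = cons(1, cons(2, cons(3, NULL)));
    struct Cell *y = cons(head(x), tail(x));        /* same elements as x */

    for (struct Cell *p = y; p != NULL; p = p->next)
        printf("%d ", p->value);                    /* prints 1 2 3 */
    printf("\n");
    return 0;
}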
Programming paradigms and languages: ML ML (1973) stands for "Meta Language." ML checks to make sure only data of the same type are compared with one another. For example, this function has one input parameter (an integer) and returns an integer: ML is not parenthesis-eccentric like Lisp. The following is an application of times_10(): times_10 2 It returns "20 : int". (Both the result and the datatype are returned.) Like Lisp, ML is tailored to process lists. Unlike Lisp, each element is the same datatype. Moreover, ML assigns the datatype of an element at compile-time. Assigning the datatype at compile-time is called static binding. Static binding increases reliability because the compiler checks the context of variables before they are used. Programming paradigms and languages: Prolog Prolog (1972) stands for "PROgramming in LOgic." It is a logic programming language, based on formal logic. The building blocks of a Prolog program are facts and rules. Here is a simple example: After all the facts and rules are entered, a question can be asked: Will Tom eat Jerry? The following example shows how Prolog will convert a letter grade to its numeric value: Here is a comprehensive example: 1) All dragons billow fire, or equivalently, a thing billows fire if the thing is a dragon: 2) A creature billows fire if one of its parents billows fire: 3) A thing X is a parent of a thing Y if X is the mother of Y or X is the father of Y: 4) A thing is a creature if the thing is a dragon: 5) Norberta is a dragon, and Puff is a creature. Norberta is the mother of Puff. Programming paradigms and languages: Notice that rule (2) is a recursive (or inductive) definition. It can be understood purely declaratively, without any need to understand how it is executed. Rule (3) shows how functions, as in functional programming, are represented by relations. Here the mother and father relations are functions in the sense that every individual has only one mother and only one father. Programming paradigms and languages: Prolog is an untyped language. However, types (or classes) can be represented by using predicates. Rule (4) defines dragon as a subtype (or subclass) of creature. Computation in Prolog is initiated by a query or theorem to be proved, and solutions are generated by backward reasoning. For example, given the query, Prolog generates two answers. Practical applications for Prolog are knowledge representation and problem solving in artificial intelligence. Programming paradigms and languages: Object-oriented programming Object-oriented programming is a programming method to execute operations (functions) on objects. The basic idea is to group the characteristics of a phenomenon into an object container and give the container a name. The operations on the phenomenon are also grouped into the container. Object-oriented programming developed by combining the need for containers and the need for safe functional programming. This programming method need not be confined to an object-oriented language. In an object-oriented language, an object container is called a class. In a non-object-oriented language, a data structure (which is also known as a record) may become an object container. To turn a data structure into an object container, operations need to be written specifically for the structure. The resulting structure is called an abstract datatype. However, inheritance will be missing. Nonetheless, this shortcoming can be overcome. Programming paradigms and languages: Here is a C programming language header file for the GRADE abstract datatype in a simple school application: The grade_new() function performs the same algorithm as the C++ constructor operation.
Here is a C programming language source file for the GRADE abstract datatype in a simple school application: In the constructor, the function calloc() is used instead of malloc() because each memory cell will be set to zero. (A sketch of what these omitted GRADE files might have looked like appears below.) Programming paradigms and languages: Here is a C programming language header file for the PERSON abstract datatype in a simple school application: Here is a C programming language source file for the PERSON abstract datatype in a simple school application: Here is a C programming language header file for the STUDENT abstract datatype in a simple school application: Here is a C programming language source file for the STUDENT abstract datatype in a simple school application: Here is a driver program for demonstration: Here is a makefile to compile everything: The formal strategy to build object-oriented objects is to: Identify the objects. Most likely these will be nouns. Programming paradigms and languages: Identify each object's attributes. What helps to describe the object? Identify each object's actions. Most likely these will be verbs. Identify the relationships from object to object. Most likely these will be verbs. For example: A person is a human identified by a name. A grade is an achievement identified by a letter. A student is a person who earns a grade.
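Here is the promised sketch of what the omitted GRADE abstract datatype might look like. Only the grade_new()/calloc() detail comes from the text above; the field name, the numeric scale, and grade_numeric() are illustrative assumptions, not the original files.

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    char letter;                       /* a grade is an achievement identified by a letter */
} GRADE;

GRADE *grade_new(char letter) {
    GRADE *grade = calloc(1, sizeof(GRADE));  /* calloc() zeroes every memory cell */
    if (grade != NULL)
        grade->letter = letter;
    return grade;
}

double grade_numeric(const GRADE *grade) {
    switch (grade->letter) {           /* assumed 4-point scale */
        case 'A': return 4.0;
        case 'B': return 3.0;
        case 'C': return 2.0;
        case 'D': return 1.0;
        default: return 0.0;
    }
}

int main(void) {
    GRADE *g = grade_new('B');
    if (g == NULL) return 1;
    printf("%c = %.1f\n", g->letter, grade_numeric(g));
    free(g);
    return 0;
}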
Programming paradigms and languages: Syntax and semantics The syntax of a programming language is a list of production rules which govern its form. A programming language's form is the correct placement of its declarations, expressions, and statements. Complementing the syntax of a language are its semantics. The semantics describe the meanings attached to various syntactic constructs. A syntactic construct may need a semantic description because a form may have an invalid interpretation. Also, different languages might have the same syntax; however, their behaviors may be different. Programming paradigms and languages: The syntax of a language is formally described by listing the production rules. Whereas the syntax of a natural language is extremely complicated, a subset of the English language can have this production rule listing: a sentence is made up of a noun-phrase followed by a verb-phrase; a noun-phrase is made up of an article followed by an adjective followed by a noun; a verb-phrase is made up of a verb followed by a noun-phrase; an article is 'the'; an adjective is 'big' or an adjective is 'small'; a noun is 'cat' or a noun is 'mouse'; a verb is 'eats'; The words in bold-face are known as "non-terminals". The words in 'single quotes' are known as "terminals". From this production rule listing, complete sentences may be formed using a series of replacements. The process is to replace non-terminals with either a valid non-terminal or a valid terminal. The replacement process repeats until only terminals remain. One valid sentence is: sentence noun-phrase verb-phrase article adjective noun verb-phrase the adjective noun verb-phrase the big noun verb-phrase the big cat verb-phrase the big cat verb noun-phrase the big cat eats noun-phrase the big cat eats article adjective noun the big cat eats the adjective noun the big cat eats the small noun the big cat eats the small mouse However, another combination results in an invalid sentence: the small mouse eats the big cat. Therefore, a semantic is necessary to correctly describe the meaning of an eat activity. Programming paradigms and languages: One production rule listing method is called the Backus–Naur form (BNF). BNF describes the syntax of a language, and itself has a syntax. This recursive definition is an example of a meta-language. The syntax of BNF includes: ::=, which translates to "is made up of a[n]" when a non-terminal is to its right, and to "is" when a terminal is to its right. Programming paradigms and languages: |, which translates to "or". < and >, which surround non-terminals. Using BNF, a subset of the English language can have this production rule listing: Using BNF, a signed-integer has the production rule listing: Notice the recursive production rule: This allows for an infinite number of possibilities. Therefore, a semantic is necessary to describe a limitation on the number of digits. Notice the leading-zero possibility in the production rules: Therefore, a semantic is necessary to describe that leading zeros need to be ignored. Two formal methods are available to describe semantics: denotational semantics and axiomatic semantics. Software engineering and computer programming: Software engineering is a variety of techniques to produce quality software. Computer programming is the process of writing or editing source code. In a formal environment, a systems analyst will gather information from managers about all the organization's processes to automate. This professional then prepares a detailed plan for the new or modified system. The plan is analogous to an architect's blueprint. Software engineering and computer programming: Performance objectives The systems analyst has the objective to deliver the right information to the right person at the right time. The critical factors to achieve this objective are: The quality of the output. Is the output useful for decision-making? The accuracy of the output. Does it reflect the true situation? The format of the output. Is the output easily understood? The speed of the output. Time-sensitive information is important when communicating with the customer in real time. Software engineering and computer programming: Cost objectives Achieving performance objectives should be balanced with all of the costs, including: Development costs. Uniqueness costs. A reusable system may be expensive. However, it might be preferred over a limited-use system. Hardware costs. Operating costs. Applying a systems development process will mitigate the axiom: the later in the process an error is detected, the more expensive it is to correct. Waterfall model The waterfall model is an implementation of a systems development process. As the waterfall label implies, the basic phases overlap each other: The investigation phase is to understand the underlying problem. The analysis phase is to understand the possible solutions. The design phase is to plan the best solution. The implementation phase is to program the best solution. The maintenance phase lasts throughout the life of the system. Changes to the system after it is deployed may be necessary. Faults may exist, including specification faults, design faults, or coding faults. Improvements may be necessary. Adaptation may be necessary to react to a changing environment. Software engineering and computer programming: Computer programmer A computer programmer is a specialist responsible for writing or modifying the source code to implement the detailed plan. A programming team is likely to be needed because most systems are too large to be completed by a single programmer. However, adding programmers to a project may not shorten the completion time. Instead, it may lower the quality of the system.
To be effective, program modules need to be defined and distributed to team members. Also, team members must interact with one another in a meaningful and effective way. Computer programmers may be programming in the small: programming within a single module. Chances are a module will execute modules located in other source code files. Therefore, computer programmers may be programming in the large: programming modules so they will effectively couple with each other. Programming-in-the-large includes contributing to the application programming interface (API). Software engineering and computer programming: Program modules Modular programming is a technique to refine imperative language programs. Refined programs may reduce the software size, separate responsibilities, and thereby mitigate software aging. A program module is a sequence of statements that are bounded within a block and together identified by a name. Modules have a function, context, and logic: The function of a module is what it does. Software engineering and computer programming: The context of a module is the set of elements being performed upon. The logic of a module is how it performs the function. The module's name should be derived first from its function, then from its context. Its logic should not be part of the name. For example, function compute_square_root( x ) or function compute_square_root_integer( i : integer ) are appropriate module names. However, function compute_square_root_by_division( x ) is not. The degree of interaction within a module is its level of cohesion. Cohesion is a judgment of the relationship between a module's name and its function. The degree of interaction between modules is the level of coupling. Coupling is a judgment of the relationship between a module's context and the elements being performed upon. Software engineering and computer programming: Cohesion The levels of cohesion from worst to best are: Coincidental Cohesion: A module has coincidental cohesion if it performs multiple functions, and the functions are completely unrelated. For example, function read_sales_record_print_next_line_convert_to_float(). Coincidental cohesion occurs in practice if management enforces silly rules. For example, "Every module will have between 35 and 50 executable statements." Logical Cohesion: A module has logical cohesion if it has available a series of functions, but only one of them is executed. For example, function perform_arithmetic( perform_addition, a, b ). Software engineering and computer programming: Temporal Cohesion: A module has temporal cohesion if it performs functions related to time. One example, function initialize_variables_and_open_files(). Another example, stage_one(), stage_two(), ... Procedural Cohesion: A module has procedural cohesion if it performs multiple loosely related functions. For example, function read_part_number_update_employee_record(). Communicational Cohesion: A module has communicational cohesion if it performs multiple closely related functions. For example, function read_part_number_update_sales_record(). Informational Cohesion: A module has informational cohesion if it performs multiple functions, but each function has its own entry and exit points. Moreover, the functions share the same data structure. Object-oriented classes work at this level. Functional Cohesion: A module has functional cohesion if it achieves a single goal working only on local variables. Moreover, it may be reusable in other contexts.
Coupling The levels of coupling from worst to best are: Content Coupling: A module has content coupling if it modifies a local variable of another function. COBOL used to do this with the alter verb. Common Coupling: A module has common coupling if it modifies a global variable. Control Coupling: A module has control coupling if another module can modify its control flow. For example, perform_arithmetic( perform_addition, a, b ). Instead, control should be expressed through the makeup of the returned object. Stamp Coupling: A module has stamp coupling if an element of a data structure passed as a parameter is modified. Object-oriented classes work at this level. Data Coupling: A module has data coupling if all of its input parameters are needed and none of them are modified. Moreover, the result of the function is returned as a single object. Data flow analysis Data flow analysis is a design method used to achieve modules of functional cohesion and data coupling. The input to the method is a data-flow diagram. A data-flow diagram is a set of ovals representing modules. Each module's name is displayed inside its oval. Modules may be at the executable level or the function level. Software engineering and computer programming: The diagram also has arrows connecting modules to each other. Arrows pointing into modules represent a set of inputs. Each module should have only one arrow pointing out from it to represent its single output object. (Optionally, an additional exception arrow points out.) A daisy chain of ovals will convey an entire algorithm. The input modules should start the diagram. The input modules should connect to the transform modules. The transform modules should connect to the output modules. Functional categories: Computer programs may be categorized along functional lines. The main functional categories are application software and system software. System software includes the operating system, which couples computer hardware with application software. The purpose of the operating system is to provide an environment where application software executes in a convenient and efficient manner. Both application software and system software execute utility programs. At the hardware level, a microcode program controls the circuits throughout the central processing unit. Functional categories: Application software Application software is the key to unlocking the potential of the computer system. Enterprise application software bundles accounting, personnel, customer, and vendor applications. Examples include enterprise resource planning, customer relationship management, and supply chain management software. Functional categories: Enterprise applications may be developed in-house as one-of-a-kind proprietary software. Alternatively, they may be purchased as off-the-shelf software. Purchased software may be modified to provide custom software. If the application is customized, then either the company's resources are used or the resources are outsourced. Outsourced software development may be from the original software vendor or a third-party developer. The potential advantages of in-house software are that features and reports may be developed exactly to specification. Management may also be involved in the development process and retain a level of control. Management may decide to counteract a competitor's new initiative or implement a customer or vendor requirement. A merger or acquisition may necessitate enterprise software changes.
Functional categories: Computer programs may be categorized along functional lines. The main functional categories are application software and system software. System software includes the operating system, which couples computer hardware with application software. The purpose of the operating system is to provide an environment where application software executes in a convenient and efficient manner. Both application software and system software execute utility programs. At the hardware level, a microcode program controls the circuits throughout the central processing unit.

Application software: Application software is the key to unlocking the potential of the computer system. Enterprise application software bundles accounting, personnel, customer, and vendor applications. Examples include enterprise resource planning, customer relationship management, and supply chain management software. Enterprise applications may be developed in-house as one-of-a-kind proprietary software. Alternatively, they may be purchased as off-the-shelf software. Purchased software may be modified to provide custom software. If the application is customized, then either the company's own resources are used or the development is outsourced. Outsourced software development may be from the original software vendor or a third-party developer. The potential advantages of in-house software are that features and reports may be developed exactly to specification, and that management may be involved in the development process and retain a level of control. Management may decide to counteract a competitor's new initiative or implement a customer or vendor requirement. A merger or acquisition may necessitate enterprise software changes. The potential disadvantages of in-house software are that time and resource costs may be extensive, and that risks concerning features and performance may loom. The potential advantages of off-the-shelf software are that upfront costs are identifiable, the basic needs should be fulfilled, and its performance and reliability have a track record. The potential disadvantages are that it may have unnecessary features that confuse end users, it may lack features the enterprise needs, and the data flow may not match the enterprise's work processes.

One approach to economically obtaining a customized enterprise application is through an application service provider. Specialty companies provide hardware, custom software, and end-user support. They may speed the development of new applications because they possess skilled information-system staff. The biggest advantage is that this frees in-house resources from staffing and managing complex computer projects. Many application service providers target small, fast-growing companies with limited information-system resources. On the other hand, larger companies with major systems will likely have their technical infrastructure already in place. One risk is having to trust an external organization with sensitive information. Another risk is having to trust the provider's infrastructure reliability.

Operating system: An operating system is the low-level software that supports a computer's basic functions, such as scheduling processes and controlling peripherals. In the 1950s, the programmer, who was also the operator, would write a program and run it. After the program finished executing, the output may have been printed, or it may have been punched onto paper tape or cards for later processing. More often than not the program did not work. The programmer then looked at the console lights and fiddled with the console switches. If less fortunate, a memory printout was made for further study. In the 1960s, programmers reduced the amount of wasted time by automating the operator's job. A program called an operating system was kept in the computer at all times. The term operating system may refer to two levels of software. It may refer to the kernel program that manages the processes, memory, and devices. More broadly, it may refer to the entire package of central software, including a kernel program, command-line interpreter, graphical user interface, utility programs, and editor.

Kernel program: The kernel's main purpose is to manage the limited resources of a computer. The kernel program should perform process scheduling. The kernel creates a process control block when a program is selected for execution. However, an executing program gets exclusive access to the central processing unit only for a time slice. To provide each user with the appearance of continuous access, the kernel quickly preempts each process control block to execute another one. The goal for system developers is to minimize dispatch latency.
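As a toy illustration of time-sliced scheduling with process control blocks (a sketch of the general idea, not any real kernel's implementation; the fields and workloads are invented), the following Python snippet preempts each process after a fixed time slice and requeues it until its work is done.

from collections import deque
from dataclasses import dataclass

@dataclass
class ProcessControlBlock:
    pid: int
    remaining: int  # units of CPU work still needed

def round_robin(processes: list, time_slice: int) -> None:
    # Each process runs for at most one time slice before being preempted,
    # giving every user the appearance of continuous access to the CPU.
    ready = deque(processes)
    while ready:
        pcb = ready.popleft()
        ran = min(time_slice, pcb.remaining)
        pcb.remaining -= ran
        print(f"pid {pcb.pid} ran {ran} units, {pcb.remaining} left")
        if pcb.remaining > 0:
            ready.append(pcb)  # preempted: back to the ready queue

round_robin([ProcessControlBlock(1, 5), ProcessControlBlock(2, 3)], time_slice=2)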
The kernel program should perform memory management. When the kernel initially loads an executable into memory, it divides the address space logically into regions. The kernel maintains a master-region table and many per-process-region (pregion) tables—one for each running process. These tables constitute the virtual address space. The master-region table is used to determine where its contents are located in physical memory. The pregion tables allow each process to have its own program (text) pregion, data pregion, and stack pregion. The program pregion stores machine instructions. Since machine instructions do not change, the program pregion may be shared by many processes of the same executable. To save time and memory, the kernel may load only blocks of execution instructions from the disk drive, not the entire executable file. The kernel is responsible for translating virtual addresses into physical addresses. The kernel may request data from the memory controller and, instead, receive a page fault. If so, the kernel accesses the memory management unit to populate the physical data region and translate the address. The kernel allocates memory from the heap upon request by a process. When the process is finished with the memory, the process may request that it be freed. If the process exits without requesting that all allocated memory be freed, then the kernel performs garbage collection to free the memory. The kernel also ensures that a process only accesses its own memory, and not that of the kernel or other processes.

The kernel program should perform file-system management. The kernel has instructions to create, retrieve, update, and delete files. The kernel program should perform device management. The kernel provides programs to standardize and simplify the interface to the mouse, keyboard, disk drives, printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time. The kernel program should perform network management. The kernel transmits and receives packets on behalf of processes. One key service is to find an efficient route to the target system. The kernel program should provide system-level functions for programmers to use. Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface. The low-level interface includes file creation, file descriptors, file seeking, physical reading, and physical writing. Programmers create processes through a relatively simple interface that in turn executes a relatively complicated low-level interface. Programmers perform date/time arithmetic through a relatively simple interface that in turn executes a relatively complicated low-level time interface.
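A small Python sketch of that simple-versus-low-level contrast: the high-level open() call hides the descriptor-based interface that os.open(), os.lseek(), os.read(), and os.close() expose directly. The file name is arbitrary.

import os

# High-level interface: one call hides descriptor management entirely.
with open("example.txt", "w") as f:
    f.write("hello\n")

# Low-level interface: file descriptors, seeking, and physical
# reading are all explicit.
fd = os.open("example.txt", os.O_RDONLY)
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 1024)
os.close(fd)
print(data.decode())  # hello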
The kernel program should provide a communication channel between executing processes. For a large software system, it may be desirable to engineer the system into smaller processes. Processes may communicate with one another by sending and receiving signals. Originally, operating systems were programmed in assembly; however, modern operating systems are typically written in higher-level languages like C, Objective-C, and Swift.

Utility program: A utility program is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers. A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, a trigger alert is generated. Utility programs include compression programs, so data files are stored on less disk space. Compressed files also save time when they are transmitted over the network. Utility programs can sort and merge data sets. Utility programs detect computer viruses.

Microcode program: A microcode program is the bottom-level interpreter that controls the data path of software-driven computers. (Advances in hardware have migrated these operations to hardware execution circuits.) Microcode instructions allow the programmer to more easily implement the digital logic level—the computer's real hardware. The digital logic level is the boundary between computer science and computer engineering. A logic gate is a tiny transistor that can return one of two signals: on or off. Having one transistor forms the NOT gate. Connecting two transistors in series forms the NAND gate. Connecting two transistors in parallel forms the NOR gate. Connecting a NOT gate to a NAND gate forms the AND gate. Connecting a NOT gate to a NOR gate forms the OR gate. These five gates form the building blocks of binary algebra—the digital logic functions of the computer (a small functional sketch of these compositions appears at the end of this section). Microcode instructions are mnemonics programmers may use to execute digital logic functions instead of forming them in binary algebra. They are stored in a central processing unit's (CPU) control store. These hardware-level instructions move data throughout the data path.

The micro-instruction cycle begins when the microsequencer uses its microprogram counter to fetch the next machine instruction from random-access memory. The next step is to decode the machine instruction by selecting the proper output line to the hardware module. The final step is to execute the instruction using the hardware module's set of gates. Instructions to perform arithmetic are passed through an arithmetic logic unit (ALU). The ALU has circuits to perform elementary operations to add, shift, and compare integers. By combining and looping the elementary operations through the ALU, the CPU performs its complex arithmetic.

Microcode instructions move data between the CPU and the memory controller. Memory-controller microcode instructions manipulate two registers. The memory address register is used to access each memory cell's address. The memory data register is used to set and read each cell's contents. Microcode instructions also move data between the CPU and the many computer buses. The disk controller bus writes to and reads from hard disk drives. Data is also moved between the CPU and other functional units via the peripheral component interconnect express bus.
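The sketch promised above: a purely functional Python model (not hardware, just truth-table behavior) in which the composite gates are built exactly as the text describes, AND as NOT of NAND and OR as NOT of NOR.

def not_gate(a: int) -> int:
    return 1 - a

def nand_gate(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def nor_gate(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def and_gate(a: int, b: int) -> int:
    # AND is a NOT gate connected to a NAND gate.
    return not_gate(nand_gate(a, b))

def or_gate(a: int, b: int) -> int:
    # OR is a NOT gate connected to a NOR gate.
    return not_gate(nor_gate(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_gate(a, b), or_gate(a, b))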
**Trampling** Trampling: Trampling is the act of walking on something repeatedly by humans or animals. Trampling on open ground can destroy the above-ground parts of many plants and can compact the soil, thereby creating a distinct microenvironment that specific species may be adapted for. It can be used as part of a wildlife management strategy alongside grazing. When carrying out investigations like a belt transect, trampling should be avoided. At other times, it is part of the experimental design. Trampling can be a disturbance to ecology and to archaeological sites.
**HAT-P-20** HAT-P-20: HAT-P-20 is a K-type main-sequence star about 232 light-years away. The star shows strong starspot activity, and its equatorial plane is misaligned with the planetary orbit by 36 (+10/−12)°. Although a star hosting a giant planet on a close orbit is expected to be spun up by tidal forces, only weak indications of tidal spin-up have been detected. Planetary system: In 2010 a transiting hot super-Jovian planet was detected. Its equilibrium temperature is 996±19 K.
**Serial femtosecond crystallography** Serial femtosecond crystallography: Serial femtosecond crystallography (SFX) is a form of X-ray crystallography developed for use at X-ray free-electron lasers (XFELs). Single pulses at free-electron lasers are bright enough to generate resolvable Bragg diffraction from sub-micron crystals. However, these pulses also destroy the crystals, meaning that a full data set involves collecting diffraction from many crystals. This method of data collection is referred to as serial, referencing a row of crystals streaming across the X-ray beam, one at a time. History: While the idea of serial crystallography had been proposed earlier, it was first demonstrated with XFELs by Chapman et al. at the Linac Coherent Light Source (LCLS) in 2011. This method has since been extended to solve unknown structures, perform time-resolved experiments, and later even brought back to synchrotron X-ray sources. Methods: In comparison to conventional crystallography, where a single (relatively large) crystal is rotated in order to collect a 3D data set, some additional methods have had to be developed to measure in the serial mode. First, a method is required to efficiently stream crystals across the beam focus. The other major difference is in the data-analysis pipeline. Here, each crystal is in a random, unknown orientation which must be computationally determined before the diffraction patterns from all the crystals can be merged into a set of 3D hkℓ intensities. Methods: Sample Delivery The first sample delivery system used for this technique was the Gas Dynamic Virtual Nozzle (GDVN), which generates a liquid jet in vacuum (accelerated by a concentric helium gas stream) containing crystals. Since then, many other methods have been successfully demonstrated at both XFELs and synchrotron sources. A summary of these methods along with their key relative features is given below:
Gas Dynamic Virtual Nozzle (GDVN) - low background scattering, but high sample consumption; the only method available for high-repetition-rate sources.
Lipidic Cubic Phase (LCP) injector - low sample consumption, with relatively high background; especially suited for membrane proteins.
Other viscous delivery media - similar to LCP: low sample consumption with high background.
Fixed-target scanning systems (a wide variety of systems have been used with different features, with standard crystal loops or silicon chips) - low sample consumption; background depends on the system; mechanically complex.
Tape drive (crystals auto-pipetted onto a Kapton tape and brought to the X-ray focus) - similar to fixed-target systems, except with fewer moving parts.
Data Analysis: In order to recover a 3D structure from the individual diffraction patterns, they must be oriented, scaled, and merged to generate a list of hkℓ intensities. These intensities can then be passed to standard crystallographic phasing and refinement programs. The first experiments only oriented the patterns and obtained accurate intensity values by averaging over a large number of crystals (> 100,000). Later versions correct for variations in individual pattern properties, such as overall intensity variations and B-factor variations, as well as refining the orientations to fix the "partialities" of the individual Bragg reflections.
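A minimal sketch of the merging step under the simplest assumption, straight averaging of repeated observations of each hkℓ reflection, as in the earliest SFX analyses described above. The data layout is an invented stand-in: one dict per indexed pattern, mapping an (h, k, l) tuple to a measured intensity; real pipelines also handle orientation, scaling, and partiality.

from collections import defaultdict

def merge_intensities(patterns):
    # Average every observation of each (h, k, l) across all patterns.
    sums = defaultdict(float)
    counts = defaultdict(int)
    for pattern in patterns:
        for hkl, intensity in pattern.items():
            sums[hkl] += intensity
            counts[hkl] += 1
    return {hkl: sums[hkl] / counts[hkl] for hkl in sums}

merged = merge_intensities([{(1, 0, 0): 120.0}, {(1, 0, 0): 80.0, (0, 1, 1): 45.0}])
print(merged)  # {(1, 0, 0): 100.0, (0, 1, 1): 45.0}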
**Shower splash guard** Shower splash guard: A shower splash guard is a permanently installed, fixed, rigid fitting made of plastic or glass that prevents water from a shower from splashing out of the bathtub and onto the floor. Typically, the shower splash guard is a small triangular piece of plastic that is used in combination with a shower curtain to prevent water escaping at the corners, but it may be a much larger piece that is used by itself. Typical design and use: They are commonly installed on rectangular drop-in bathtubs with a shower-head combination. In this arrangement, the bathtub is installed tightly against three walls, which are often covered with ceramic tiles. The bathtub and walls are sealed along the tub-and-wall abutment with a flexible caulk, and rigid grout is used between the tiles to contain all water. The most common North American configuration is a rectangular drop-in bathtub/shower which is 5 feet in length and approximately 32 inches in width, having a shower head, water spout, taps, overflow drain, and bottom drain on one of the 32-inch ends. The remaining fourth side is used for entry and exit by the user, for the purpose of showering or bathing, and is fitted with a device to prevent the shower head from spraying water outside the tub. Most shower splash guard designs are based on a right triangle, where the 90-degree legs are attached to the wall and tub ledge. The right-triangle design functions well when the bathtub and walls are installed perfectly square with each other. Some extend as much as 4 feet up the wall, and are held in position using adhesive foam tapes or glues. Generally, shower splash guards are made from plastic and function as a dam to prevent water from escaping the bathtub and pooling on the floor. These devices are installed to fill the intersecting void between the top ledge of the bathtub and the perpendicular wall. Relationship to bathtub or shower curtains and doors: Other common devices for containing water spray include a highly flexible, waterproof bathtub or shower curtain, or more recently a shower door, which is a permanently installed sliding or pivoting door made from glass or plastic. Containing water spray, leaks, and splashes within the bathtub is particularly important in bathrooms constructed with wooden under-flooring, because water causes the wooden floor boards and floor joists to rot or decay over time if left unattended. Shower or bathtub curtains are more commonly installed because of their very low cost and ease of installation, requiring only a rod, which may be removable, that spans between the end walls of the open side of the bathtub, and rings or hooks for attaching the curtain. The waterproof curtain or liner is placed inside the bathtub during and after shower use to contain water spray and splashes within the bathtub for proper drainage while drying. However, curtains are not highly effective: if the curtain is not very carefully arranged, then water escapes and puddles on the floor near the wall at the end of the shower curtain, particularly at the end where the shower head is located. The shower splash guard is designed to solve the problem of containing water, mostly at the shower-head end of the bathtub. Like the shower curtain, it dates back to the early 1900s.
**4-Nitrochlorobenzene** 4-Nitrochlorobenzene: 4-Nitrochlorobenzene is the organic compound with the formula ClC6H4NO2. It is a pale yellow solid. 4-Nitrochlorobenzene is a common intermediate in the production of a number of industrially useful compounds, including antioxidants commonly found in rubber. Other isomers with the formula ClC6H4NO2 include 2-nitrochlorobenzene and 3-nitrochlorobenzene. Preparation: 4-Nitrochlorobenzene is prepared industrially by nitration of chlorobenzene: ClC6H5 + HNO3 → ClC6H4NO2 + H2O. This reaction affords both the 2- and the 4-nitro derivatives, in about a 1:2 ratio. These isomers are separated by a combination of crystallization and distillation. 4-Nitrochlorobenzene was originally prepared by the nitration of 4-bromochlorobenzene by Holleman and coworkers. Applications: 4-Nitrochlorobenzene is an intermediate in the preparation of a variety of derivatives. Nitration gives 2,4-dinitrochlorobenzene, and chlorination gives 3,4-dichloronitrobenzene. Reduction with iron metal gives 4-chloroaniline. The electron-withdrawing nature of the appended nitro group makes the benzene ring especially susceptible to nucleophilic aromatic substitution, unlike related chlorobenzene. Thus, the strong nucleophiles hydroxide, methoxide, fluoride, and amide displace chloride to give, respectively, 4-nitrophenol, 4-nitroanisole, 4-fluoronitrobenzene, and 4-nitroaniline. Another use of 4-nitrochlorobenzene is its condensation with aniline to produce 4-nitrodiphenylamine. Reductive alkylation of the nitro group affords secondary aryl amines, which are useful antioxidants for rubber. Safety: The U.S. National Institute for Occupational Safety and Health considers 4-nitrochlorobenzene a potential occupational carcinogen. The Occupational Safety and Health Administration has set a permissible exposure limit of 1 mg/m3. The American Conference of Governmental Industrial Hygienists recommends an airborne exposure limit of 0.64 mg/m3 over a time-weighted average of eight hours.
**Electron spin resonance dating** Electron spin resonance dating: Electron spin resonance dating, or ESR dating, is a technique used to date materials which radiocarbon dating cannot, including minerals (e.g., carbonates, silicates, sulphates), biological materials (e.g., tooth enamel), archaeological materials (e.g., ceramics), and food. Electron spin resonance dating was first introduced to the science community in 1975, when Japanese nuclear physicist Motoji Ikeya dated a speleothem in Akiyoshi Cave, Japan. ESR dating measures the amount of unpaired electrons in crystalline structures that were previously exposed to natural radiation. The age of a substance can be determined by measuring the radiation dose it has received since the time of its formation. Applications: Electron spin resonance dating is used in fields such as radiation chemistry and biochemistry, as well as geology, archaeology, and anthropology. ESR dating is used instead of radiocarbon dating or radiometric dating because it can be applied to materials those methods cannot date, and because it covers different age ranges. The dating of buried teeth has served as the basis for the dating of human remains. Studies have been used to date burnt flint and quartz found in certain ancient ceramics. ESR dating has been widely applied to date hydrothermal vents and sometimes minerals from mines. Newer ESR dating applications include dating previous earthquakes from fault gouge, past volcanic eruptions, tectonic activity along coastlines, fluid flow in accretionary prisms, and cold seeps. ESR dating can be applied to newly formed materials or previously heated samples, as long as the heating is below the closure temperature or the heating time is much shorter than the characteristic decay time. For ESR dating, the closure temperature of quartz in granite is about 30-90 °C and that of barite is about 190-340 °C. Dating process: Electron spin resonance dating can be described as trapped-charge dating. Radioactivity causes negatively charged electrons to move from a ground state, the valence band, to a higher energy level at the conduction band. After a short time, electrons eventually recombine with the positively charged holes left in the valence band. The trapped electrons form paramagnetic centers and give rise to certain signals that can be detected with an ESR spectrometer. The amount of trapped electrons corresponds to the magnitude of the ESR signal. This ESR signal is directly proportional to the number of trapped electrons in the mineral, the dosage of radioactive substances, and the age. Calculating the ESR age: The electron spin resonance age of a substance is found from the following equation:

D_E = ∫ D(t) dt, integrated from t = 0 to t = T,

where D_E is the equivalent dose, or paleodose (in gray, Gy), i.e. the amount of radiation a sample has received during the time elapsed between the zeroing of the ESR clock (t = 0) and the sampling (t = T), and D(t) is the dose rate (usually in Gy/ka or µGy/a), the average dose absorbed by the sample in one year. If D(t) is considered constant over time, then the equation may be expressed as T = D_E / D. In this scenario, T is the age of the sample, i.e. the time during which the sample has been exposed to natural radioactivity since the ESR signal was last reset. Resetting happens by releasing the trapped charge, usually by dissolution/recrystallization, heat, optical bleaching, or mechanical stress.
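A worked toy example of the constant-dose-rate case, T = D_E / D, in Python; the equivalent dose and dose rate below are invented numbers chosen only to show the arithmetic and units.

def esr_age_ka(equivalent_dose_gy: float, dose_rate_gy_per_ka: float) -> float:
    # T = D_E / D, valid only when the dose rate is constant over time.
    return equivalent_dose_gy / dose_rate_gy_per_ka

# Hypothetical sample: 150 Gy accumulated at 2.5 Gy per thousand years.
print(esr_age_ka(150.0, 2.5))  # 60.0 ka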
Determining the accumulated dose: The accumulated dose is found by the additive dose method and by electron spin resonance (ESR) spectrometry. A sample is put into an external magnetic field and irradiated with microwaves, which change the energy level of the magnetic centers, flipping the spin either to align with or against the surrounding magnetic field. The change in magnetic properties only happens at specific energy levels: for a given microwave frequency, there are specific magnetic field strengths at which these changes occur (resonance). The position of an ESR line in a spectrum corresponds to the ratio (g-value) of the microwave frequency to the magnetic field strength used in the spectrometry. By extrapolating the ESR intensity toward zero, the accumulated dose can be determined.

Determining the annual dose rate: The dose rate is found from the summation of the concentrations of radioactive materials in the sample (internal dose rate) and in its surrounding environment (external dose rate). The internal and external dose rates must be calculated separately because of the differences between the two. Factors to include in calculating the radioactivity are:
Uranium, thorium, and potassium concentrations
Energies of the alpha, beta, and gamma rays of uranium-238 and thorium-232
Correction factors related to the water content, the geometry of the sample, its thickness, and its density
Cosmic-ray dose rates - dependent on geographical position and the thickness of covering sediments (about 300 µGy/a at sea level)

Reliability: Trapped electrons only remain in the intermediate energy-level stages for a limited time. After a certain time range, or with temperature fluctuations, trapped electrons return to their ground states and recombine with holes. The recombination of electrons with their holes is only negligible if the average lifetime of the trapped state is ten times longer than the age of the sample being dated. New heating events may erase previous ESR ages, so in environments with multiple episodes of heating, such as hydrothermal vents, it may be that only newly formed minerals can be dated with ESR, not older ones. This explains why samples from the same hydrothermal chimney may give different ESR ages. In environments with multiple phases of mineral formation, ESR dating generally gives the average age of the bulk mineral, while radiometric dates are biased toward the ages of younger phases because of the decay of parent nuclei.
**Visual masking** Visual masking: Visual masking is a phenomenon of visual perception. It occurs when the visibility of one image, called a target, is reduced by the presence of another image, called a mask. The target might be invisible or appear to have reduced contrast or lightness. There are three different timing arrangements for masking: forward masking, backward masking, and simultaneous masking. In forward masking, the mask precedes the target. In backward masking, the mask follows the target. In simultaneous masking, the mask and target are shown together. There are two different spatial arrangements for masking: pattern masking and metacontrast. Pattern masking occurs when the target and mask locations overlap. Metacontrast masking occurs when the mask does not overlap with the target location. Factors affecting visual masking: Target-to-mask spatial separation Suppression can be seen in both forward and backward masking when there is pattern masking, but not when there is metacontrast. Simultaneous masking, however, will produce facilitation of target visibility during pattern masking. Facilitation also comes about when metacontrast is combined with either simultaneous or forward masking. This is because it takes time for the mask to reach the target's location through lateral propagation. As the target gets further from the mask, the time required for lateral propagation increases. Thus, the masking effect will increase as the mask gets closer to the target. Factors affecting visual masking: Target-to-mask temporal separation As the time difference between the target and the mask increases, the masking effect decreases. This is because the integration time of a target stimulus has an upper limit of 200 ms, based on physiological experiments; as the separation approaches this limit, the mask is able to produce less of an effect on the target, because the target has had more time to form a full neural representation in the brain. Polat, Sterkin, and Yehezkel went into great detail in explaining the effect of temporal matching between target input and lateral propagation of the mask. Based on data from previous single-unit recordings, they concluded that the time window for any sort of efficient interaction with target processing is 210 to 310 ms after the target's appearance. Anything outside of this window would fail to cause any sort of masking effect. This explains why there is a masking effect when the mask is presented 50 ms after the target, but not when the inter-stimulus interval (ISI) between mask and target is 150 ms. In the first case, the mask response would propagate to the target location and be processed with a delay of 260 to 310 ms, whereas the ISI of 150 ms would result in a delay of 410 to 460 ms. Factors affecting visual masking: Monoptic vs. dichoptic visual masking In dichoptic visual masking, the target is presented to one eye and the mask to the other, whereas in monoptic visual masking, both eyes are presented with the target and the mask. It was found that the masking effect was just as strong in dichoptic as in monoptic masking, and that it showed the same timing characteristics. Possible neural correlates: There are multiple theories surrounding the neural correlates of masking, but most of them agree on a few key ideas. First, backward visual masking comes about from suppression of the target's “after-discharge”, where the after-discharge can be thought of as the neural response to the target's termination.
Impairments in backward masking have been consistently found in people with schizophrenia as well as in their unaffected siblings, suggesting that the impairments might be an endophenotype for schizophrenia. Forward masking, on the other hand, is correlated with suppression of the target's “onset-response”, which can be thought of as the neural response to the target's appearance. Possible neural correlates: Two-channel model Originally proposed by Breitmeyer and Ganz in 1976, this model stated that there were two different visual information channels: one fast and transient, the other slow and sustained. The theory asserts that each stimulus travels up each channel, and both channels are necessary for proper and full processing of any given stimulus. It explained backward masking by saying that the neural representation of the mask would travel up the transient channel and intercept the neural representation of the target as it travelled up the slower channel, suppressing the target's representation and decreasing its visibility. One problem with this model, as noted by Macknik and Martinez-Conde, is that it predicts masking to occur as a function of the temporal separation of stimulus onsets. However, Macknik and Martinez-Conde showed that backward masking actually depends more on the temporal separation of stimulus terminations. Possible neural correlates: Retino-cortical dynamics model Breitmeyer and Ögmen modified the two-channel model in 2006, renaming it the retino-cortical dynamics (RECOD) model in the process. Their main proposed modification was that the fast and slow channels were actually feedforward and feedback channels, rather than the magnocellular and parvocellular retino-geniculocortical pathways that had previously been proposed. Thus, according to this new model, backward masking is caused when feedforward input from the mask interferes with the feedback coming from the higher visual areas' response to the target, thus reducing visibility. Possible neural correlates: Lamme's recurrent feedback hypothesis of visual awareness and masking This model proposes that backward masking is caused by an interference with feedback from higher visual areas. In this model, target duration is irrelevant because masking is supposed to occur as a function of feedback, which is generated when the target appears on screen. Lamme's group further supported their model when they described that the surgical removal of the extrastriate cortex in monkeys leads to a reduction of late responses in area V1. Possible neural correlates: Lateral inhibition circuits Proposed by Macknik and Martinez-Conde in 2008, this theory proposes that masking can be explained almost entirely by feedforward lateral inhibition circuits. The idea is that the edges of the mask, if positioned in close proximity to the target, may inhibit the responses caused by the edges of the target, inhibiting perception of the target. Possible neural correlates: Coupled interactions between V1 and fusiform gyrus Haynes, Driver, and Rees proposed this theory in 2005, stating that visibility derives from the feedforward and feedback interactions between V1 and the fusiform gyrus. In their experiment, they required subjects to attend actively to the target; thus, as Macknik and Martinez-Conde point out, it is possible that their results were confounded by the attentional aspect of the trials, and the results may not accurately reflect the effects of visual masking.
Possible neural correlates: Frontal lobe processing of visual masking This was proposed by Thompson and Schall, based on experiments conducted in 1999 and 2000. They concluded that visual masking is processed in the frontal eye fields, and that the neural correlate of masking lies not in the inhibition of the response to the target but in the “merging” of target and mask responses. One criticism of their experiment, however, is that their target was almost 300 times dimmer than the mask, so their results may have been confounded by the different response latencies one would expect from stimuli with such differences in brightness. Evidence from monoptic and dichoptic visual masking: Macknik and Martinez-Conde recorded from neurons in the lateral geniculate nucleus (LGN) and V1 while presenting monoptic and dichoptic stimuli, and found that monoptic masking occurred in all the LGN and V1 neurons that were recorded, but dichoptic masking occurred only in some of the binocular neurons in V1. This supports the hypothesis that visual masking in monoptic regions is not due to feedback from dichoptic regions: if there had been feedback from higher visual areas, the early circuits would have “inherited” dichoptic masking from that feedback, and so would exhibit both dichoptic and monoptic masking. Although monoptic masking is stronger in the early visual areas, monoptic and dichoptic masking are equivalent in magnitude. Thus, dichoptic masking must become stronger as it proceeds down the visual hierarchy if the preceding hypothesis is correct. In fact, dichoptic masking was shown to begin downstream of area V2.
**Second-generation biofuels** Second-generation biofuels: Second-generation biofuels, also known as advanced biofuels, are fuels that can be manufactured from various types of non-food biomass. Biomass in this context means plant materials and animal waste used especially as a source of fuel. First-generation biofuels are made from sugar-starch feedstocks (e.g., sugarcane and corn) and edible-oil feedstocks (e.g., rapeseed and soybean oil), which are generally converted into bioethanol and biodiesel, respectively. Second-generation biofuels are made from different feedstocks and therefore may require different technology to extract useful energy from them. Second-generation feedstocks include lignocellulosic biomass or woody crops, agricultural residues or waste, and dedicated non-food energy crops grown on marginal land unsuitable for food production. The term second-generation biofuels is used loosely to describe both the 'advanced' technology used to process feedstocks into biofuel and the use of non-food crops, biomass, and wastes as feedstocks in 'standard' biofuel processing technologies, which causes considerable confusion. It is therefore important to distinguish between second-generation feedstocks and second-generation biofuel processing technologies. The development of second-generation biofuels has been stimulated by the food-versus-fuel dilemma: the risk that diverting farmland or crops to biofuel production will be to the detriment of the food supply. The biofuel and food price debate involves wide-ranging views and is a long-standing, controversial one in the literature. Introduction: Second-generation biofuel technologies have been developed to enable the use of non-food biofuel feedstocks, because of concerns about food security raised by the use of food crops for the production of first-generation biofuels. The diversion of edible biomass to the production of biofuels could theoretically result in competition with food and with land used for food crops. First-generation bioethanol is produced by fermenting plant-derived sugars to ethanol, using a process similar to that used in beer and wine making (see Ethanol fermentation). This requires the use of food and fodder crops, such as sugar cane, corn, wheat, and sugar beet. The concern is that if these food crops are used for biofuel production, food prices could rise and shortages might be experienced in some countries. Corn, wheat, and sugar beet can also require high agricultural inputs in the form of fertilizers, which limit the greenhouse gas reductions that can be achieved. Biodiesel produced by transesterification from rapeseed oil, palm oil, or other plant oils is also considered a first-generation biofuel.
Introduction: The goal of second-generation biofuel processes is to extend the amount of biofuel that can be produced sustainably by using biomass consisting of the residual non-food parts of current crops, such as the stems, leaves, and husks left behind once the food crop has been extracted, as well as other crops that are not used for food purposes (non-food crops), such as switchgrass, grass, jatropha, whole-crop maize, miscanthus, and cereals that bear little grain, and also industry waste such as woodchips, and skins and pulp from fruit pressing. The problem that second-generation biofuel processes address is how to extract useful feedstocks from this woody or fibrous biomass, which is predominantly composed of plant cell walls. In all vascular plants, the useful sugars of the cell wall are bound within the complex carbohydrates (polymers of sugar molecules) hemicellulose and cellulose, but made inaccessible for direct use by the phenolic polymer lignin. Lignocellulosic ethanol is made by extracting sugar molecules from the carbohydrates using enzymes, steam heating, or other pre-treatments. These sugars can then be fermented to produce ethanol in the same way as in first-generation bioethanol production. The by-product of this process is lignin. Lignin can be burned as a carbon-neutral fuel to produce heat and power for the processing plant and possibly for surrounding homes and businesses. Thermochemical processes (liquefaction) in hydrothermal media can produce liquid oily products from a wide range of feedstocks and have the potential to replace or augment fuels. However, these liquid products fall short of diesel or biodiesel standards. Upgrading liquefaction products through one or more physical or chemical processes may improve their properties for use as fuel. Second-generation technology: The following subsections describe the main second-generation routes currently under development. Thermochemical routes Carbon-based materials can be heated at high temperatures in the absence (pyrolysis) or presence of oxygen, air, and/or steam (gasification). These thermochemical processes yield a mixture of gases, including hydrogen, carbon monoxide, carbon dioxide, methane and other hydrocarbons, and water. Pyrolysis also produces a solid char. The gas can be fermented or chemically synthesised into a range of fuels, including ethanol, synthetic diesel, synthetic gasoline, or jet fuel. There are also lower-temperature processes, in the region of 150–374 °C, that produce sugars by decomposing the biomass in water, with or without additives. Gasification: Gasification technologies are well established for conventional feedstocks such as coal and crude oil. Second-generation gasification technologies include gasification of forest and agricultural residues, waste wood, energy crops, and black liquor. Output is normally syngas for further synthesis to, e.g., Fischer–Tropsch products including diesel fuel, biomethanol, BioDME (dimethyl ether), gasoline via catalytic conversion of dimethyl ether, or biomethane (synthetic natural gas). Syngas can also be used in heat production and for generation of mechanical and electrical power via gas motors or gas turbines. Pyrolysis: Pyrolysis is a well-established technique for decomposition of organic material at elevated temperatures in the absence of oxygen.
In second-generation biofuel applications, forest and agricultural residues, wood waste, and energy crops can be used as feedstock to produce, e.g., bio-oil for fuel-oil applications. Bio-oil typically requires significant additional treatment to render it suitable as a refinery feedstock to replace crude oil. Torrefaction: Torrefaction is a form of pyrolysis at temperatures typically ranging between 200 and 320 °C. Feedstocks and output are the same as for pyrolysis. Hydrothermal liquefaction: Hydrothermal liquefaction is a process similar to pyrolysis that can process wet materials. The process typically runs at moderate temperatures up to 400 °C and at higher-than-atmospheric pressures. The capability to handle a wide range of materials makes hydrothermal liquefaction viable for producing fuel and chemical-production feedstock. Biochemical routes: Chemical and biological processes that are currently used in other applications are being adapted for second-generation biofuels. Biochemical processes typically employ pre-treatment to accelerate the hydrolysis process, which separates out the lignin, hemicellulose, and cellulose. Once these ingredients are separated, the cellulose fractions can be fermented into alcohols. Feedstocks are energy crops, agricultural and forest residues, food-industry and municipal biowaste, and other biomass containing sugars. Products include alcohols (such as ethanol and butanol) and other hydrocarbons for transportation use. Types of biofuel: The following second-generation biofuels are under development, although most or all of these biofuels are synthesized from intermediary products such as syngas using methods that are identical in processes involving conventional feedstocks and first-generation and second-generation biofuels. The distinguishing feature is the technology involved in producing the intermediary product, rather than the ultimate off-take. A process producing liquid fuels from gas (normally syngas) is called a gas-to-liquid (GtL) process. When biomass is the source of the gas production, the process is also referred to as biomass-to-liquids (BTL). From syngas using catalysis: Biomethanol can be used in methanol motors or blended with petrol up to 10–20% without any infrastructure changes. BioDME can be produced from biomethanol using catalytic dehydration, or it can be produced directly from syngas using direct DME synthesis. DME can be used in compression-ignition engines. Bio-derived gasoline can be produced from DME via a high-pressure catalytic condensation reaction. Bio-derived gasoline is chemically indistinguishable from petroleum-derived gasoline and thus can be blended into the gasoline pool. Biohydrogen can be used in fuel cells to produce electricity. Mixed alcohols (i.e., a mixture of mostly ethanol, propanol, and butanol, with some pentanol, hexanol, heptanol, and octanol) are produced from syngas with several classes of catalysts. Some have employed catalysts similar to those used for methanol. Molybdenum sulfide catalysts were discovered at Dow Chemical and have received considerable attention. Addition of cobalt sulfide to the catalyst formulation was shown to enhance performance. Molybdenum sulfide catalysts have been well studied but have yet to find widespread use. These catalysts have been a focus of efforts at the U.S. Department of Energy's Biomass Program in the Thermochemical Platform.
Noble-metal catalysts have also been shown to produce mixed alcohols. Most R&D in this area is concentrated on producing mostly ethanol. However, some fuels are marketed as mixed alcohols (see Ecalene and E4 Envirolene). Mixed alcohols are superior to pure methanol or ethanol in that the higher alcohols have higher energy content. Also, when blending, the higher alcohols increase the compatibility of gasoline and ethanol, which increases water tolerance and decreases evaporative emissions. In addition, higher alcohols also have a lower heat of vaporization than ethanol, which is important for cold starts. (For another method for producing mixed alcohols from biomass, see bioconversion of biomass to mixed alcohol fuels.) Biomethane (or Bio-SNG) can be produced via the Sabatier reaction. From syngas using Fischer–Tropsch: The Fischer–Tropsch (FT) process is a gas-to-liquid (GtL) process. When biomass is the source of the gas production, the process is also referred to as biomass-to-liquids (BTL). A disadvantage of this process is the high energy investment for the FT synthesis; consequently, the process is not yet economic. FT diesel can be mixed with fossil diesel at any percentage without need for infrastructure change, and synthetic kerosene can also be produced. Biocatalysis: Biohydrogen production might be accomplished with some organisms that produce hydrogen directly under certain conditions. Biohydrogen can be used in fuel cells to produce electricity. Butanol and isobutanol: via recombinant pathways expressed in hosts such as E. coli and yeast, butanol and isobutanol may be significant products of fermentation using glucose as a carbon and energy source. DMF (2,5-dimethylfuran): recent advances in producing DMF from fructose and glucose using a catalytic biomass-to-liquid process have increased its attractiveness. Other processes: HTU (Hydro Thermal Upgrading) diesel is produced from wet biomass. It can be mixed with fossil diesel in any percentage without need for infrastructure change. Wood diesel: a new biofuel developed by the University of Georgia from woodchips. The oil is extracted and then added to unmodified diesel engines. Either new plants are used, or plants are planted to replace the old ones. The charcoal by-product is put back into the soil as a fertilizer. According to director Tom Adams, since carbon is put back into the soil, this biofuel can actually be carbon negative, not just carbon neutral. A carbon-negative fuel decreases carbon dioxide in the air, reversing the greenhouse effect rather than just reducing it. Second Generation Feedstocks: To qualify as a second-generation feedstock, a source must not be suitable for human consumption. Second-generation biofuel feedstocks include specifically grown inedible energy crops, cultivated inedible oils, agricultural and municipal wastes, waste oils, and algae. Nevertheless, cereal and sugar crops are also used as feedstocks for second-generation processing technologies. Land use, existing biomass industries, and relevant conversion technologies must be considered when evaluating the suitability of developing biomass as a feedstock for energy. Energy crops: Plants are made of lignin, hemicellulose, and cellulose; second-generation technology uses one, two, or all of these components. Common lignocellulosic energy crops include wheat straw, Arundo donax, Miscanthus spp., and short-rotation coppice poplar and willow. However, each offers different opportunities, and no one crop can be considered 'best' or 'worst'.
Municipal solid waste: Municipal solid waste comprises a very large range of materials, and total waste arisings are increasing. In the UK, recycling initiatives decrease the proportion of waste going straight for disposal, and the level of recycling is increasing each year. However, there remain significant opportunities to convert this waste to fuel via gasification or pyrolysis. Green waste: Green waste such as forest residues or garden or park waste may be used to produce biofuel via different routes. Examples include biogas captured from biodegradable green waste, and gasification or hydrolysis to syngas for further processing to biofuels via catalytic processes. Black liquor: Black liquor, the spent cooking liquor from the kraft process that contains concentrated lignin and hemicellulose, may be gasified with very high conversion efficiency and greenhouse-gas reduction potential to produce syngas for further synthesis to, e.g., biomethanol or BioDME. The yield of crude tall oil from the process is in the range of 30–50 kg/ton of pulp. Greenhouse gas emissions: Lignocellulosic biofuels reduce greenhouse gas emissions by 60–90% compared with fossil petroleum (Börjesson, P. et al. 2013. Dagens och framtidens hållbara biodrivmedel), which is on par with the better of the current first-generation biofuels, for which typical best values are currently 60–80%. In 2010, the average saving of biofuels used within the EU was 60% (Hamelinck, C. et al. 2013. Renewable energy progress and biofuels sustainability, report for the European Commission). In 2013, 70% of the biofuels used in Sweden reduced emissions by 66% or more (Energimyndigheten 2014. Hållbara biodrivmedel och flytande biobränslen 2013). Commercial development: An operating lignocellulosic ethanol production plant is located in Canada, run by Iogen Corporation. The demonstration-scale plant produces around 700,000 litres of bioethanol each year. A commercial plant is under construction. Many further lignocellulosic ethanol plants have been proposed in North America and around the world. The Swedish specialty cellulose mill Domsjö Fabriker in Örnsköldsvik, Sweden, developed a biorefinery using Chemrec's black liquor gasification technology. Once commissioned in 2015, the biorefinery was planned to produce 140,000 tons of biomethanol or 100,000 tons of BioDME per year, replacing 2% of Sweden's imports of diesel fuel for transportation purposes. In May 2012, however, it was revealed that Domsjö had pulled out of the project, effectively killing the effort. In the UK, companies like INEOS Bio and British Airways are developing advanced biofuel refineries, due to be built by 2013 and 2014 respectively. Under favourable economic conditions and strong improvements in policy support, NNFCC projections suggest advanced biofuels could meet up to 4.3 per cent of the UK's transport fuel by 2020 and save 3.2 million tonnes of CO2 each year, equivalent to taking nearly a million cars off the road. Helsinki, Finland, 1 February 2012 – UPM is to invest in a biorefinery producing biofuels from crude tall oil in Lappeenranta, Finland. The industrial-scale investment is the first of its kind globally. The biorefinery will produce annually approximately 100,000 tonnes of advanced second-generation biodiesel for transport. Construction of the biorefinery will begin in the summer of 2012 at UPM's Kaukas mill site and be completed in 2014.
UPM's total investment will amount to approximately EUR 150 million. Calgary, Alberta, 30 April 2012 – Iogen Energy Corporation has agreed to a new plan with its joint owners Royal Dutch Shell and Iogen Corporation to refocus its strategy and activities. Shell continues to explore multiple pathways to find a commercial solution for the production of advanced biofuels on an industrial scale, but the company will not pursue the project it has had under development to build a larger-scale cellulosic ethanol facility in southern Manitoba. In India, Indian oil companies have agreed to build seven second-generation refineries across the country. The companies participating in the building of 2G biofuel plants are Indian Oil Corporation (IOCL), HPCL, and BPCL. In May 2018, the Government of India unveiled a biofuel policy in which a sum of INR 5,000 crore was allocated to set up 2G biorefineries. Indian oil marketing companies were in the process of constructing 12 refineries with a capex of INR 10,000 crore.
**KIF17** KIF17: Kinesin-like protein KIF17 is a protein that in humans is encoded by the KIF17 gene. KIF17 and its close relative, C. elegans OSM-3, are members of the kinesin-2 family of plus-end directed microtubule-based motor proteins. In contrast to heterotrimeric kinesin-2 motors, however, KIF17 and OSM-3 form distinct homodimeric complexes. Homodimeric kinesin-2 has been implicated in the transport of NMDA receptors along dendrites for delivery to the dendritic membrane, whereas both heterotrimeric and homodimeric kinesin-2 motors function cooperatively in anterograde intraflagellar transport (IFT) and cilium biogenesis.
**Frontal assault** Frontal assault: A frontal assault is a military tactic which involves a direct, full-force attack on the front line of an enemy force, rather than to the flanks or rear of the enemy. It allows for a quick and decisive victory, but at the cost of subjecting the attackers to the maximum defensive power of the enemy; this can make frontal assaults costly even if successful, and often disastrously costly if unsuccessful. It may be used as a last resort when time, terrain, limited command control, or low troop quality do not allow for any battlefield flexibility. The risks of a frontal assault can be mitigated by the use of heavy supporting fire, diversionary attacks, the use of cover (such as smokescreens or the darkness of night), or infiltration tactics. Frontal assault: Frontal assaults were common in ancient warfare, where heavy infantry made up the core of armies such as the Greek phalanx and the Roman legion. These dense formations, many ranks deep, would utilize their weight in numbers to press forward and break enemy lines. In medieval warfare, heavy cavalry such as mounted knights relied on frontal assaults for easy victories against infantry levies. These tactics waned as the defensive quality of infantry increased, especially with the introduction of firearms. Both heavy infantry and heavy cavalry were replaced with lighter, more maneuverable troops. Frontal assault: Yet even in Napoleonic warfare, a frontal assault by cavalry against a thin line could be effective when conditions were right, or even by infantry if the enemy was shaken or weakened by preceding attacks. But as firepower increased, as with the introduction of the rifle, successful frontal assaults against a prepared enemy became rare. They continued to be attempted, however, as alternative tactics that could achieve a decisive victory for the attacker were not developed. Frontal assault: During the American Civil War, it took some time for generals on both sides to understand that a frontal assault against an enemy who was well entrenched or otherwise held a strong defensive position was unlikely to succeed and was wasteful of manpower. Frontal assault: During World War I, advances in machine guns and artillery greatly increased defensive firepower, while trench warfare removed almost all options for battlefield maneuver. This resulted in repeated frontal assaults with horrific casualties. Only at the end of the war, with the introduction of tanks, infiltration tactics, and combined arms, were the beginnings of modern maneuver warfare found as a way to avoid the necessity of frontal assaults. Battles with notably successful frontal assaults: Battle of Bunker Hill – After two failed attempts, the British army succeeds in capturing the heights. Battle of Missionary Ridge – Union army storms Missionary Ridge after flank attacks are stalled. Battle of Pea Ridge – Union army routs Confederate forces in a frontal assault on the second day. Battle of Spotsylvania Court House – Union army captures the "Mule Shoe" salient. Brusilov Offensive – Russian army breaks Austro-Hungarian lines during the First World War. Battle of Vimy Ridge – Operationally, a frontal assault, though new platoon-based tactics enabled tactical maneuver at the lowest levels. Battles with notably unsuccessful frontal assaults: Battle of Carillon – A classic example of tactical military incompetence. Battle of Gettysburg – Pickett's Charge aims at the Union center and is repulsed. 
Battle of Fredericksburg – Union army fails to take Marye's Heights. Battle of Franklin – Repeated Confederate charges are repulsed. Battle of Balaklava – The Charge of the Light Brigade. Battle of Cold Harbor – Union assaults repulsed by Confederate forces with heavy casualties. Battle of Longewala – The Pakistan Army's 206 and 51 Brigades, with 2,000–3,000 men and 40 tanks, fail in their attack on 120 Indian soldiers of A company of the Punjab Regiment defending the Longewala border post.
**Theoretical oxygen demand** Theoretical oxygen demand: Theoretical oxygen demand (ThOD) is the calculated amount of oxygen required to oxidize a compound to its final oxidation products. However, there are some differences between standard methods that can influence the results obtained: for example, some calculations assume that nitrogen released from organic compounds is generated as ammonia, whereas others allow for ammonia oxidation to nitrate. Therefore, in expressing results, the calculation assumptions should always be stated.

Theoretical oxygen demand: To determine the ThOD for glycine (CH2(NH2)COOH), the following assumptions are made: in the first step, the organic carbon and nitrogen are converted to carbon dioxide (CO2) and ammonia (NH3), respectively; in the second and third steps, the ammonia is oxidized sequentially to nitrite and nitrate. The ThOD is the sum of the oxygen required for all three steps. We can calculate it as follows:

1. Write the balanced reaction for the carbonaceous oxygen demand:
CH2(NH2)COOH + 1.5 O2 → NH3 + 2 CO2 + H2O
2. Write the balanced reactions for the nitrogenous oxygen demand:
NH3 + 1.5 O2 → HNO2 + H2O
HNO2 + 0.5 O2 → HNO3
Overall: NH3 + 2 O2 → HNO3 + H2O
3. Determine the ThOD:
ThOD = (1.5 + 2) mol O2/mol glycine = 3.5 mol O2/mol glycine
= 3.5 mol O2/mol glycine × 32 g/mol O2 ÷ 75 g/mol glycine = 1.49 g O2/g glycine

The theoretical oxygen demand represents the worst-case scenario. The actual oxygen demand of any compound depends on the biodegradability of the compound and the specific organism metabolizing the compound. The actual oxygen demand can be measured experimentally and is called the biochemical oxygen demand (BOD).
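The stoichiometric bookkeeping above generalizes to any compound of the form CcHhOoNn: balancing the overall reaction CcHhOoNn + x O2 → c CO2 + n HNO3 + ((h - n)/2) H2O gives x = c + 3n/2 + (h - n)/4 - o/2. Here is a minimal Python sketch of that calculation under the stated assumptions (the function name and the integer atomic masses are illustrative choices, not part of any standard method):

```python
# Theoretical oxygen demand (ThOD) for a compound C_c H_h O_o N_n,
# under the same assumptions as the glycine example: carbon oxidized
# to CO2 and nitrogen fully oxidized (via NH3) to nitrate, HNO3.
#
# Overall: CcHhOoNn + x O2 -> c CO2 + n HNO3 + ((h - n)/2) H2O
# Oxygen balance: x = c + 3n/2 + (h - n)/4 - o/2

def thod_g_per_g(c: int, h: int, o: int, n: int) -> float:
    mol_o2 = c + 1.5 * n + (h - n) / 4 - o / 2      # mol O2 per mol compound
    mw = 12.0 * c + 1.0 * h + 16.0 * o + 14.0 * n   # g/mol, integer atomic masses
    return 32.0 * mol_o2 / mw                       # g O2 per g compound

# Glycine, CH2(NH2)COOH = C2H5NO2:
print(round(thod_g_per_g(c=2, h=5, o=2, n=1), 2))   # 1.49 g O2/g glycine
```

If the calculation instead assumes nitrogen remains as ammonia, the 1.5n term is dropped, which is exactly why the assumptions must always be stated alongside the result.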
**Actin related protein 2/3 complex inhibitor** Actin related protein 2/3 complex inhibitor: Actin related protein 2/3 complex inhibitor is a protein that in humans is encoded by the ARPIN gene.
**Graphic organizer** Graphic organizer: A graphic organizer, also known as a knowledge map, concept map, story map, cognitive organizer, advance organizer, or concept diagram, is a pedagogical tool that uses visual symbols to express knowledge and concepts through relationships between them. The main purpose of a graphic organizer is to provide a visual aid to facilitate learning and instruction. History: Graphic organizers have a history extending to the early 1960s. David Paul Ausubel was an American psychologist who coined the phrase "advance organizers" to refer to tools which bridge "the gap between what learners already know and what they have to learn at any given moment in their educational careers." Ausubel's advance organizers originally took the form of prose to merge the familiar—what students know—with the new or unfamiliar—what they have discovered or are learning. The advance organizer was intended to help learners more easily retain verbal information but was written at a higher level of language. Later the advance organizers were represented as paradigms, schematic representations, and diagrams as they took on more geometric shapes. In 1969, Richard F. Barron came up with a tree diagram that was referred to as a "structured overview." The diagram introduced new vocabulary and used spatial characteristics and language written at the same level as the material being learned. In the classroom, this hierarchical organization was used by the teacher as a pre-reading strategy to show relationships among vocabulary. Its use later expanded to not only pre-reading strategies but also supplementary and post-reading activities. It was not until the 1980s that the term graphic organizer was used. Theories: Various theories have been put forth to undergird the assimilation of knowledge through the use of graphic organizers. According to Ausubel's Subsumption Theory, learners absorb new information by connecting it to their own preexisting ideas. By relating new information to prior knowledge, learners reorganize their cognitive structures rather than build an entirely new one from scratch. Educational psychologist Richard E. Mayer reinterpreted Ausubel's subsumption theory within his own theory of assimilation encoding. In this view, advance organizers help bring prior knowledge into working memory and support the active integration of newly received information. However, he warned that advance organizers are not beneficial if the tools do not ask the learner to actively incorporate new information or if the preceding teaching methods and materials are already well-defined and well-structured. Others find a basis for graphic organizers in schema theory developed by Swiss psychologist Jean Piaget. In psychology, schema refers to a cognitive framework or concept that helps to organize and interpret information. The brain uses these patterns of thinking and behavior that are held in long-term memory to help people interpret the world around them. Piaget's theory is that a scheme is both a category of knowledge and the process of acquiring new knowledge. He believed that as people continually adapt to their environments, they take in new information and acquire additional knowledge. Culbert et al. (1998) posit that by using graphic organizers, prior knowledge is activated, and learners can add new information to their schema and thus improve comprehension of the material.
Application: Pre-writing tool for students with learning disabilities In one study of 21 students on Individualized Education Plans, graphic organizers were used during the pre-writing process to gauge student achievement on the persuasive essay in a 10th grade writing classroom. Explicit instruction on how to fill out the organizer, along with color coding sections and sufficient class time to fill these out, resulted in 89 percent of students saying they felt graphic organizers were helpful in a post-assignment survey.

Application: Metacognition tool for second-language (L2) learners One yearlong study of 3rd and 5th grade California dual language classrooms found that through the use of graphic organizers, students increased higher-order thinking skills, enhanced vocabulary acquisition, and developed the academic language of science.

Types of organizers: Graphic organizers take many forms:
Relational organizers – storyboard, fishbone (Ishikawa diagram), cause and effect web, chart, T-chart
Category/classification organizers – concept mapping, KWL tables, mind mapping
Sequence organizers – chain, ladder (story map), stairs (topic map)
Compare contrast organizers – dashboard, Venn diagrams, double bubble map
Concept development organizers – story web, word web, circle chart, flow chart, cluster diagram, lotus diagram, star diagram
Options and control device organizers – mechanical control panel, graphical user interface

Enhancing students' skills: A review study concluded that using graphic organizers improves student performance in the following areas:
Retention – Students remember information better and can better recall it when it is represented and learned both visually and verbally.
Reading comprehension – The use of graphic organizers helps improve the reading comprehension of students.
Student achievement – Students with and without learning disabilities improve achievement across content areas and grade levels.
Thinking and learning skills; critical thinking – When students develop and use a graphic organizer, their higher-order thinking and critical thinking skills are enhanced.

Criticism: In four studies on the effects of advance organizers on learning tasks, reported in a paper Richard F. Barron delivered to the Annual Meeting of the National Reading Conference in 1980, no significant difference was found relative to control groups that did not use organizers. In the same paper, however, Barron found that student-constructed postorganizers showed more benefits. Graphic postorganizers focus on learners finding the relationships of vocabulary terms by manipulating them in a diagram or schematic way after they have already learned those terms.
**Thumb position** Thumb position: In music performance and education, thumb position, not a traditional position, is a string instrument playing technique used to facilitate playing in the upper register of the double bass, cello, and related instruments, such as the electric upright bass. To play passages in this register, the player shifts his or her hand out from behind the neck and curves the hand, using the side of the thumb to press down the string; in effect, the side of the thumb becomes a movable nut (capo). On the double bass: For the double bass, thumb position is used when playing above one-lined G (on the third ledger line in bass clef notation for the double bass). To play passages in this register, the player shifts his or her hand out from behind the neck and flattens it out, using the side of the thumb to press down the string. When playing in thumb position, the use of the fourth finger is replaced by the third finger, as the fourth finger becomes too short to produce a reliable tone. Bass instruction books often teach thumb position by having the player place the left-hand thumb on the high (one-lined) G note. On the double bass: In this same position, notes below the G can also be played. By barring the thumb across the G and D strings, the G and D notes can be played in quick succession. Alternatively, notes on the D string can be performed in quick alternation with notes on the G string. While traditional methods rarely discuss playing on the A or E string in thumb position, French pedagogue Francois Rabbath (and his disciples such as Paul Ellison) advocate the performance of notes on the A and E strings. While introductory manuals start teaching thumb position by stopping the G string on the high G, any of the notes on the upper part of the fingerboard can be stopped and held by the thumb. On the double bass: One issue with the use of thumb position is that it is harder to produce vibrato with the thumb than with the fingers, because fingers have much fleshier pads than the side of the thumb. While the difference between the vibrato sound produced by the fingers and the thumb may not be noticeable in a passage of moving notes, if a held note stopped by the thumb is vibrated, the difference may be noticeable. As such, some players use finger substitution to replace the thumb with one of the fingers. Other bass instruments: Electric bassists such as Brian Bromberg and Steve Bailey have applied the thumb position technique to their instruments because they share a common tuning. In the jazz world, many bassists from the 1970s onward play both instruments, sometimes with equal proficiency (e.g. Stanley Clarke). The advantage of using thumb position on the bass guitar is that the left hand can cover two octaves or more without shifting position, which facilitates complex passages. On the cello: With the cello, in the "neck" positions (which use just less than half of the fingerboard, nearest the top of the instrument), the thumb rests on the back of the neck. However, in thumb position, the thumb usually rests alongside the fingers on the string and the side of the thumb is used to play notes, along with the other left-hand fingers. The fingers are normally held curved with each knuckle bent, with the fingertips in contact with the string. If a finger is required on two (or more) strings at once to play perfect fifths (in double stops or chords) it is used flat.
In slower, or more expressive, playing, the contact point can move slightly away from the nail to the pad of the finger, allowing a fuller vibrato. On the cello: Thumb position can be, and is (by virtue of the requirements of the extensive repertoire), employed many times, and not only in the higher range of the instrument. The cello thumb position is introduced on the second or half-string harmonic, because of the ease with which this note may be found, the succession of notes from seventh to thumb position being in "a natural sequence and range," and the stretch required to reach seventh position without removing the thumb from the neck. Bylsma attributes the tendency of many composers of the 19th century to prescribe thumb position only on the A string to the fact that they were not string players (in contrast to Boccherini, for example, who routinely used A and D, and also used G). In thumb position, the "standard" finger pattern between the thumb, first, and second fingers is whole, whole, and half tone. Thus if the thumb is placed on A, the index and middle fingers are placed on B and C#. On the cello: As with the double bass, one issue with the use of thumb position is that it is harder to produce vibrato with the thumb than with the fingers, because fingers have much fleshier pads than the side of the thumb. Some cellists use finger substitution to replace the thumb with one of the fingers or, alternatively, press down on the thumb using the index finger to provide an ample range of vibration.
**Passive fire protection** Passive fire protection: Passive fire protection (PFP) comprises components or systems of a building or structure that slow or impede the spread of the effects of fire or smoke without system activation, and usually without movement. Examples of passive systems include floor-ceilings and roofs, fire doors, windows, and wall assemblies, fire-resistant coatings, and other fire and smoke control assemblies. Passive fire protection systems can include active components such as fire dampers.

Main characteristics: Passive fire protection systems are intended to:
Contain a fire to the compartment of fire origin
Slow a fire from spreading from the compartment of fire origin
Slow the heating of structural members
Prevent the spread of fire through intentional openings (e.g., doors, HVAC ducts) in fire rated assemblies by the use of a fire rated closure (e.g., fire door, fire damper)
Prevent the spread of fire through penetrations (e.g., holes in fire walls through which building systems such as plumbing pipes or electrical cables pass) in fire rated assemblies by the use of fire stops

PFP systems are designed to "prevent" the spread of fire and smoke, or heating of structural members, for an intended limited period of time as determined by the local building code and fire codes. Passive fire protection measures such as firestops, fire walls, and fire doors are tested to determine the fire-resistance rating of the final assembly, which is usually expressed in terms of hours of fire resistance (e.g., ⅓, ¾, 1, 1½, 2, 3, 4 hour). A certification listing provides the limitations of the rating.

Main characteristics: Passive fire protection systems typically do not require motion. Exceptions are fire dampers (fire-resistive closures within air ducts, excluding grease ducts) and fire door closers, which move, open and shut in order to work, as well as all intumescent products, which swell in order to provide adequate material thickness and fill gaps. The simplicity of PFP systems usually results in higher reliability as compared to active fire protection systems such as sprinkler systems, which require several operational components for proper functioning.

Main characteristics: PFP systems in a building perform as a group of systems within systems. For example, an installed firestop system is part of a fire-resistance rated wall system or floor system, which is in turn part of a fire compartment, which forms an integral part of the overall building, which operates as a system.

Main characteristics: Different types of materials are employed in the design and construction of PFP systems. Endothermic materials absorb heat; they include calcium silicate board, concrete and gypsum wallboard. For example, water can boil out of a concrete slab when heated: the chemically bound water inside these materials sublimates when heated. PFP measures also include intumescents and ablative materials. Materials themselves are not fire resistance rated. They must be organised into systems which bear a fire resistance rating when installed in accordance with certification listings (e.g., DIN 4102 Part 4).

Main characteristics: There are mainly two types of materials that provide structural fire resistance: intumescent and vermiculite. Vermiculite materials cover the structural steel members in a relatively thick layer. Because of the porous nature of vermiculite, its use is not advisable if there is the possibility of water exposure. Steel corrosion is also difficult to monitor.
Intumescent fireproofing is a layer of a material which is applied like paint on the structural steel members. The thickness of this intumescent coating depends on the steel section used. Intumescent coatings are applied in a relatively low thickness (usually 350 to 700 micrometers), have a more aesthetic smooth finish, and help prevent corrosion.

Main characteristics: PFP system performance is typically demonstrated in fire tests. A typical test objective for fire rated assemblies is to maintain the item or the side to be protected at or below 140 °C (for walls, floors and electrical circuits required to have a fire-resistance rating). A typical test objective (e.g., ASTM E119) for fire rated structural protection is to limit the temperature of the structural element (e.g., beam, column) to ca. 538 °C, at which point the yield strength of the structural element has been sufficiently reduced that structural building collapse may occur. Typical test standards for walls and floors are BS 476: Part 22: 1987, BS EN 1364-1: 1999 & BS EN 1364-2: 1999 or ASTM E119. Smaller components such as fire dampers, fire doors, etc., follow the main intentions of the basic standard for walls and floors. Fire testing involves live fire exposures upwards of 1100 °C, depending on the fire-resistance rating and duration required. Test objectives other than fire exposures are sometimes included, such as hose stream impact, to determine the survivability of the system under realistic conditions.

Examples: Fire-resistance rated walls Firewalls not only have a rating, but are also designed to sub-divide buildings such that if collapse occurs on one side, this will not affect the other side. They can also be used to eliminate the need for sprinklers, as a trade-off. Fire-resistant glass, using multi-layer intumescent technology or wire mesh embedded within the glass, may be used in the fabrication of fire-resistance rated windows in walls or fire doors. Fire-resistance rated floors Occupancy separations (barriers designated as occupancy separations are intended to segregate parts of buildings where different uses are on each side; for instance, apartments on one side and stores on the other side of the occupancy separation).

Examples: Closures (fire dampers) Sometimes firestops are treated in building codes identically to closures. Canada de-rates closures, where, for instance, a 2-hour closure is acceptable for use in a 3-hour fire separation, so long as the fire separation is not an occupancy separation or firewall. The lowered rating is then referred to as a fire protection rating, both for firestops (unless they contain plastic pipes) and for regular closures.

Examples: Firestops Grease ducts (these refer to ducts that lead from commercial cooking equipment such as ranges, deep fryers and double-decker and conveyor-equipped pizza ovens to grease duct fans). In North America, grease ducts are made of minimum 16 gauge (1.6 mm) sheet metal, all welded, with certified openings for cleaning, whereby the ducting is either inherently manufactured to have a specific fire-resistance rating, or it is ordinary 16 gauge ductwork with an exterior layer of purpose-made and certified fireproofing. Either way, North American grease ducts must comply with NFPA 96 requirements.
Examples: Cable coating (application of fire retardants, which are either endothermic or intumescent, to reduce flamespread and smoke development of combustible cable-jacketing). Spray fireproofing (application of intumescent or endothermic paints, or fibrous or cementitious plasters, to keep substrates such as structural steel, electrical or mechanical services, valves, liquefied petroleum gas (LPG) vessels, vessel skirts, bulkheads or decks below either 140 °C for electrical items or ca. 500 °C for structural steel elements, to maintain operability of the item to be protected). Fireproofing cladding (boards used for the same purpose and in the same applications as spray fireproofing); materials for such cladding include perlite, vermiculite, calcium silicate, gypsum, intumescent epoxy, Durasteel (cellulose-fibre reinforced concrete and punched sheet-metal bonded composite panels), and MicroTherm. Enclosures (boxes or wraps made of fireproofing materials, including fire-resistive wraps and tapes, to protect speciality valves and other items deemed to require protection against fire and heat; an analogy for this would be a safe). The provision of circuit integrity measures to keep electrical cables operational during an accidental fire.

Regulations: Examples of testing that underlies certification listing:
Europe: BS EN 1364
Netherlands: NEN 6068
Germany: DIN 4102
United Kingdom: BS 476
Canada: ULC-S101
United States: ASTM E119
South Africa: SANS 10117

Each of these test procedures has a very similar fire endurance regime and heat transfer limitations. Differences include the hose-stream tests, which are unique to Canada and the United States, whereas Germany includes an impact test during the fire for firewalls. Germany is unique in taking the heat-induced expansion and collapse of ferrous cable trays into account for firestops, resulting in the favouring of firestop mortars, which tend to hold the penetrating cable tray in place; firestops made of rockwool and elastomeric toppings have been demonstrated in testing by the Otto Graf Institute to be torn open and rendered inoperable when the cable tray expands, pushes in and then collapses.

In exterior applications for the offshore and petroleum sectors, fire endurance testing uses a higher temperature and faster heat rise, whereas in interior applications such as office buildings, factories and residential buildings, the fire endurance is based upon experience gained from burning wood. The interior fire time/temperature curve is referred to as "ETK" (Einheitstemperaturzeitkurve = standard time/temperature curve) or the "building elements" curve, whereas the high temperature variety is called the hydrocarbon curve, as it is based on burning oil and gas products, which burn hotter and faster. The most severe fire exposure test is the British "jetfire" test, which has been used to some extent in the UK and Norway but is not typically found in common regulations.

Regulations: Typically, during the construction of buildings, fire protective systems must conform to the requirements of the building code that was in effect on the day the building permit was applied for. Enforcement of compliance with building codes is typically the responsibility of municipal building departments. Once construction is complete, the building must maintain its design basis by remaining in compliance with the current fire code, which is enforced by the fire prevention officers of the municipal fire department.
An up-to-date fire protection plan, containing a complete inventory and maintenance details of all fire protection components, including firestops, fireproofing, fire sprinklers, fire detectors, fire alarm systems, fire extinguishers, etc., is sometimes a requirement for demonstrating compliance with applicable laws and regulations.

Prescriptive versus Listed: Prescriptive systems have been tested and verified by governmental authorities including DIBt, the British Standards Institute (BSI) and the National Research Council's Institute for Research in Construction. These organisations publish wall and floor assembly details in codes and standards that are used with generic standardised components to achieve the quantified fire-resistance ratings. Germany and the UK publish prescriptive systems in standards such as DIN 4102 Part 4 (Germany) and BS 476 (United Kingdom).

Prescriptive versus Listed: Listed systems are certified by testing, in which the installed configuration must comply with the tolerances and materials set out in the certification listing. The United Kingdom is an exception to this, as testing is required but not certification.

Countries with optional certification: Fire tests in the UK are reported in the form of test results, but building authorities do not require written proof that the materials that have been installed on site are actually identical to the materials and products that were used in the test. The test report must often be interpreted by engineers, as the test results are not communicated in uniformly structured listings. In the UK, and other countries which do not require certification, the proof that the manufacturer has not substituted materials other than those used in the original testing is based on trust in the manufacturer.
**Banner blindness** Banner blindness: Banner blindness is a phenomenon in web usability where visitors to a website consciously or unconsciously ignore banner-like information. A broader term covering all forms of advertising is ad blindness, and the mass of banners that people ignore is called banner noise. The term banner blindness was coined in 1998 as a result of website usability tests where a majority of the test subjects either consciously or unconsciously ignored information that was presented in banners. The information that was overlooked included both external advertisement banners and internal navigational banners, often called "quick links". Banner blindness: Banners have become one of the dominant means of advertising. A 2015 study shows that up to 93% of ads go unviewed. The first banner ad appeared in 1994. The average click-through rate (CTR) dropped from 2% in 1995 to 0.5% in 1998. After a relatively stable period with a 0.6% click-through rate in 2003, CTR rebounded to 1% by 2013. This does not, however, mean that banner ads do not influence viewers. Website viewers may not be consciously aware of an ad, but it does have an unconscious influence on their behavior. A banner's content affects both businesses and visitors of the site. Native advertising and social media are used to avoid banner blindness. Factors: Human behavior User goals When searching for specific information on a website, users focus only on the parts of the page where they expect that information will be, e.g. small text and hyperlinks. A 2011 study investigated via eye-tracking analysis whether users avoided looking at ads inserted on a non-search website, and whether they retained ad content in memory. The study found that most participants fixated (looked at) ads at least once during their website visit. When a viewer is working on a task, ads may cause a disturbance, eventually leading to ad avoidance. If a user wants to find something on the web page and ads disrupt or delay their search, they will try to avoid the source of interference. Factors: Clutter aversion A higher than expected number of advertisements may cause a user to view the page as cluttered. The number of adverts and annoyances on a webpage contribute to this perception of clutter. As users can concentrate on only one stimulus at a time, having too many objects in their field of vision causes them to lose focus. This contributes to behaviors such as ad avoidance or banner blindness. Factors: Website familiarity As a user becomes familiar with a webpage, they learn where to expect content and where to expect adverts, and learn to ignore banner ads without looking at them. Usability tests that compared the perception of banners between subjects searching for specific information and subjects aimlessly browsing seem to support this theory. A 2014 eye-tracking study examined how right-side images (in contrast to plain text) in Google AdWords affect users' visual behavior. The analysis concludes that the appearance of images does not change user interaction with ads. Factors: Brand recognition If a user is already aware of a brand, viewing an ad banner for that brand will reconfirm their existing attitudes towards it, whether positive or negative. A banner ad may only leave a positive impression on a viewer who already has a positive perception of the brand. Similarly, an ad for a brand the viewer perceives negatively may further dissuade them from buying from that brand.
Factors: If viewers have a neutral or no opinion about a brand, then a banner ad for that brand could leave a positive impression, due to the mere-exposure effect: a tendency to develop a preference for something due to familiarity. Banner aspects Shared space Unlike advertisements in television or radio, which completely interrupt and temporarily replace the content, banner adverts exist alongside the content. Websites typically contain various elements in different sizes, shapes, and colours. As a banner ad only occupies part of a website, it cannot hold the user's complete attention. Factors: Perceived usefulness Banner ads that seem to contain useful information, and which are easy for the viewer to comprehend, are more likely to be viewed and clicked on than adverts the user does not find useful, or finds difficult to understand. Prices and promotions, when mentioned in banner ads, do not have a major impact on their perceived usefulness. Users assume that all ads signify promotions of some sort and hence do not give them much weight. Factors: Congruence Congruity is the relationship of an advert with the surrounding web content. There have been mixed results of congruity on users. Click-through rates increased when the ads shown on a website were similar to the products or services of that website. A banner with colour schemes incongruent with the rest of the website does grab the viewer's attention, but viewers tend to respond negatively to it, compared with banners whose colour schemes are congruent. Congruency has more impact when the user browses fewer web pages. When users were given specific web tasks in a 2013 study, incongruent ads grabbed their attention, but they displayed ad avoidance behaviors. The relevance of the ad's content to the user's goal and to the website does not affect view time, due to the expectation that an advert will be irrelevant. Congruency between the advert and the web content has no effect on view duration, according to a 2011 study. Factors: Calls to action Banners with phrases that invite action, such as "click here", do not attract views or clicks. Prevention and Subversion: Advertisers and webmasters may attempt to prevent or subvert banner blindness by eliminating one or more possible causes: Location Users generally read a webpage from top left to bottom right, so adverts in this path may be more noticeable. As viewers are less likely to notice something in their peripheral vision, adverts to the right of the page content will be seen less than adverts to the left. Banner ads just below the navigation area may be viewed more, as users expect content at the top of the page. Confusion about whether the top of the page has content or advertisement results in more views of the advert. Prevention and Subversion: Animation Users dislike animated ads since they can cause loss of focus. This distraction may increase the attention of some users when they are involved in free browsing (not seeking to complete a specific goal). Users involved in a specific task typically fail to recall animated ads, take longer to complete their task, and experience an increased perceived workload. Moderate animation can increase recognition rates. Rapidly animated banner ads can cause lower recognition rates of the advert itself, and negative attitudes toward the advertiser. In visual search tasks, animated ads did not impact the performance of users and did not capture more views than static ads.
Animations signal to users the existence of ads and lead to ad avoidance behavior, but repetitive exposure to them can induce the mere-exposure effect. Prevention and Subversion: Personalization and relevance Personalized ads use and include information about viewers, such as demographics, personally identifiable information (PII), and purchasing habits. An ad is noticed more if it has a higher degree of personalization, even if it causes discomfort in users. Personalized ads are found to be clicked more often than other ads. If a user is involved in a demanding task, more attention is paid to a personalized ad than to an unpersonalized ad. Such ads do, however, increase privacy concerns and can appear 'creepy'. An individual with greater existing privacy concerns will avoid personalized ads, primarily due to concerns over their data being shared with third parties. Users are more likely to accept behavior tracking if they have faith in the company that permitted the ad. Though this can be an effective method for advertisers, users do not always prefer that their behaviors be used to personalize ads. Ads are more often clicked when they show something relevant to the user's search, but if the purchase has already been made and the ad continues to appear, it causes frustration. Personalization enhanced recognition of the content of banners, while the effect on attention was weaker and less significant, in the studies conducted by Koster et al. Exploration of web pages and recognition of task-relevant information were not influenced. Visual exploration of banners typically proceeds from the picture to the logo and finally to the slogan. If a website serves ads unrelated to its viewers' interests, about 75% of viewers experience frustration with the website. Advertising efforts must focus on the user's current intention and interest and not just previous searches. Publishing fewer, but more relevant, ads is more effective. Prevention and Subversion: Advertisers may use data analytics and campaign management tools to categorise viewers and serve ads that are more likely to be relevant to the user's interests. Information about users could be gained through gamification tools which reward them for providing that information. Such tools could be quizzes, calculators, chats, or questionnaires. Native ads Native advertising often places adverts inline with expected, non-ad content. For example, video advertisements playing within a video-streaming website before, during, or after the main video feature. Another common format is a text or image advert within a social media feed, formatted to resemble posts made by users. Native ads are designed to resemble the user's expected experience. They can have greater viewability than other forms of advertising because they are less easy to distinguish from expected, non-advert content. Social media Through social media, advertisers can transfer feelings of trust in known individuals to adverts, thereby validating the ads. Peer pressure can encourage users to change attitudes or behavior regarding advertising to adapt to group customs. Advertising through known people piqued the interest of users and increased ad views much more effectively than banner ads.
**Hierarchical epistemology** Hierarchical epistemology: Hierarchical epistemology is a theory of knowledge which posits that beings have different access to reality depending on their ontological rank.
**Memorial square** Memorial square: A memorial square is an intersection dedicated in memory of someone, usually someone who was killed in a war. It is not the same as a town square: while the name of a town square is used to describe where something is located, the name of a memorial square is not used in the same manner. Memorial squares are also erected to commemorate events or to symbolize a specific ethos embodied in the death of one or more persons related to a specific cause. This type of memorial square has been built in post-Soviet states such as Uzbekistan, where memorial squares and parks have been established in memory of both civilians and soldiers who died in a specific conflict, e.g. World War II.
**Carnitine dehydratase** Carnitine dehydratase: In enzymology, a carnitine dehydratase (EC 4.2.1.89) is an enzyme that catalyzes the chemical reaction L-carnitine ⇌ 4-(trimethylammonio)but-2-enoate + H2OHence, this enzyme has one substrate, L-carnitine, and two products, 4-(trimethylammonio)but-2-enoate and H2O. This enzyme belongs to the family of lyases, specifically the hydro-lyases, which cleave carbon-oxygen bonds. The systematic name of this enzyme class is L-carnitine hydro-lyase [4-(trimethylammonio)but-2-enoate-forming]. This enzyme is also called L-carnitine hydro-lyase.
**Extensor digitorum longus muscle** Extensor digitorum longus muscle: The extensor digitorum longus is a pennate muscle, situated at the lateral part of the front of the leg. Structure: It arises from the lateral condyle of the tibia; from the upper three-quarters of the anterior surface of the body of the fibula; from the upper part of the interosseous membrane; from the deep surface of the fascia; and from the intermuscular septa between it and the tibialis anterior on the medial, and the peroneal muscles on the lateral side. Between it and the tibialis anterior are the upper portions of the anterior tibial vessels and deep peroneal nerve. Structure: The muscle passes under the superior and inferior extensor retinaculum of foot in company with the fibularis tertius, and divides into four slips, which run forward on the dorsum of the foot, and are inserted into the second and third phalanges of the four lesser toes. Structure: The tendons to the second, third, and fourth toes are each joined, opposite the metatarsophalangeal articulations, on the lateral side by a tendon of the extensor digitorum brevis. The tendons are inserted in the following manner: each receives a fibrous expansion from the interossei and lumbricals, and then spreads out into a broad aponeurosis, which covers the dorsal surface of the first phalanx: this aponeurosis, at the articulation of the first with the second phalanx, divides into three slips—an intermediate, which is inserted into the base of the second phalanx; and two collateral slips, which, after uniting on the dorsal surface of the second phalanx, are continued onward, to be inserted into the base of the third phalanx. Structure: Variations This muscle varies considerably in the modes of origin and the arrangement of its various tendons. The tendons to the second and fifth toes may be found doubled, or extra slips are given off from one or more tendons to their corresponding metatarsal bones, or to the short extensor, or to one of the interosseous muscles. A slip to the great toe from the innermost tendon has been found.
**Thermal power station** Thermal power station: A thermal power station is a type of power station in which heat energy is converted to electrical energy. In a steam-generating cycle, heat is used to boil water in a large pressure vessel to produce high-pressure steam, which drives a steam turbine connected to an electrical generator. The low-pressure exhaust from the turbine enters a steam condenser where it is cooled to produce hot condensate, which is recycled to the heating process to generate more high-pressure steam. This is known as a Rankine cycle. The design of thermal power stations depends on the intended energy source: fossil fuel, nuclear and geothermal power, solar energy, biofuels, and waste incineration are all used. Certain thermal power stations are also designed to produce heat for industrial purposes, for district heating, or for desalination of water, in addition to generating electrical power. Thermal power station: Fuels such as natural gas or oil can also be burnt directly in gas turbines (internal combustion). These plants can be of the open cycle or the more efficient combined cycle type. Types of thermal energy: Almost all coal-fired power stations, petroleum, nuclear, geothermal, solar thermal electric, and waste incineration plants, as well as all natural gas power stations, are thermal. Natural gas is frequently burned in gas turbines as well as boilers. The waste heat from a gas turbine, in the form of hot exhaust gas, can be used to raise steam by passing this gas through a heat recovery steam generator (HRSG). The steam is then used to drive a steam turbine in a combined cycle plant that improves overall efficiency. Power stations burning coal, fuel oil, or natural gas are often called fossil fuel power stations. Some biomass-fueled thermal power stations have also appeared. Non-nuclear thermal power stations, particularly fossil-fueled plants, which do not use cogeneration are sometimes referred to as conventional power stations. Types of thermal energy: Commercial electric utility power stations are usually constructed on a large scale and designed for continuous operation. Virtually all electric power stations use three-phase electrical generators to produce alternating current (AC) electric power at a frequency of 50 Hz or 60 Hz. Large companies or institutions may have their own power stations to supply heating or electricity to their facilities, especially if steam is created anyway for other purposes. Steam-driven power plants were used to propel most ships for most of the 20th century. Shipboard power plants usually directly couple the turbine to the ship's propellers through gearboxes. Power stations in such ships also provide steam to smaller turbines driving electric generators to supply electricity. Nuclear marine propulsion is, with few exceptions, used only in naval vessels. There have been many turbo-electric ships in which a steam-driven turbine drives an electric generator which powers an electric motor for propulsion. Types of thermal energy: Cogeneration plants, often called combined heat and power (CHP) facilities, produce both electric power and heat for process heat or space heating, such as steam and hot water. History: The reciprocating steam engine has been used to produce mechanical power since the 18th century, with notable improvements being made by James Watt.
When the first commercially developed central electrical power stations were established in 1882 at Pearl Street Station in New York and Holborn Viaduct power station in London, reciprocating steam engines were used. The development of the steam turbine in 1884 provided larger and more efficient machine designs for central generating stations. By 1892 the turbine was considered a better alternative to reciprocating engines; turbines offered higher speeds, more compact machinery, and stable speed regulation allowing for parallel synchronous operation of generators on a common bus. After about 1905, turbines entirely replaced reciprocating engines in almost all large central power stations. History: The largest reciprocating engine-generator sets ever built were completed in 1901 for the Manhattan Elevated Railway. Each of seventeen units weighed about 500 tons and was rated 6000 kilowatts; a contemporary turbine set of similar rating would have weighed about 20% as much. Thermal power generation efficiency: The energy efficiency of a conventional thermal power station is defined as saleable energy produced as a percent of the heating value of the fuel consumed. A simple cycle gas turbine achieves energy conversion efficiencies from 20 to 35%. Typical coal-based power plants operating at steam pressures of 170 bar and 570 °C run at efficiencies of 35 to 38%, with state-of-the-art fossil fuel plants at 46% efficiency. Combined-cycle systems can reach higher values. As with all heat engines, their efficiency is limited, and governed by the laws of thermodynamics. Thermal power generation efficiency: The Carnot efficiency dictates that higher efficiencies can be attained by increasing the temperature of the steam. Sub-critical pressure fossil fuel power stations can achieve 36–40% efficiency. Supercritical designs have efficiencies in the low to mid 40% range, with new "ultra critical" designs using pressures above 4400 psi (30.3 MPa) and multiple stage reheat reaching 45–48% efficiency. Above the critical point for water of 705 °F (374 °C) and 3212 psi (22.06 MPa), there is no phase transition from water to steam, but only a gradual decrease in density. Thermal power generation efficiency: Currently most nuclear power stations must operate below the temperatures and pressures that coal-fired plants do, in order to provide more conservative safety margins within the systems that remove heat from the nuclear fuel. This, in turn, limits their thermodynamic efficiency to 30–32%. Some advanced reactor designs being studied, such as the very-high-temperature reactor, Advanced Gas-cooled Reactor, and supercritical water reactor, would operate at temperatures and pressures similar to current coal plants, producing comparable thermodynamic efficiency. Thermal power generation efficiency: The energy of a thermal power station not utilized in power production must leave the plant in the form of heat to the environment. This waste heat can go through a condenser and be disposed of with cooling water or in cooling towers. If the waste heat is instead used for district heating, it is called cogeneration. An important class of thermal power station is that associated with desalination facilities; these are typically found in desert countries with large supplies of natural gas, and in these plants freshwater production and electricity are equally important co-products.
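As a rough numerical illustration of the Carnot bound discussed above, here is a minimal Python sketch (illustrative temperatures, not plant data) comparing the ideal limit for the quoted coal-plant steam conditions against a cooler, nuclear-range cycle:

```python
# Carnot efficiency: eta = 1 - T_cold / T_hot, with absolute temperatures.
# Illustrative inputs: 570 C steam (typical coal plant, per the text above)
# versus roughly 300 C (nuclear-range conditions), both rejecting heat
# at a 25 C condenser.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

print(f"{carnot_efficiency(570.0, 25.0):.1%}")  # ~64.6% ideal limit
print(f"{carnot_efficiency(300.0, 25.0):.1%}")  # ~48.0% ideal limit
```

Real plants fall well short of these ideal bounds (the 35–48% figures quoted above) because of irreversibilities in the boiler, turbine, and condenser, but the comparison shows why hotter steam buys efficiency.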
Thermal power generation efficiency: Other types of power stations are subject to different efficiency limitations. Most hydropower stations in the United States are about 90 percent efficient in converting the energy of falling water into electricity, while the efficiency of a wind turbine is limited by Betz's law to about 59.3%, and actual wind turbines show lower efficiency. Electricity cost: The direct cost of electric energy produced by a thermal power station is the result of cost of fuel, capital cost for the plant, operator labour, maintenance, and such factors as ash handling and disposal. Indirect social or environmental costs, such as the economic value of environmental impacts, or environmental and health effects of the complete fuel cycle and plant decommissioning, are not usually assigned to generation costs for thermal stations in utility practice, but may form part of an environmental impact assessment. Those indirect costs belong to the broader concept of externalities. Boiler and steam cycle: In the nuclear plant field, steam generator refers to a specific type of large heat exchanger used in a pressurized water reactor (PWR) to thermally connect the primary (reactor plant) and secondary (steam plant) systems, which generates steam. In a boiling water reactor (BWR), no separate steam generator is used and water boils in the reactor core. In some industrial settings, there can also be steam-producing heat exchangers called heat recovery steam generators (HRSG) which utilize heat from some industrial process, most commonly utilizing hot exhaust from a gas turbine. The steam generating boiler has to produce steam at the high purity, pressure and temperature required for the steam turbine that drives the electrical generator. Boiler and steam cycle: Geothermal plants do not need boilers because they use naturally occurring steam sources. Heat exchangers may be used where the geothermal steam is very corrosive or contains excessive suspended solids. Boiler and steam cycle: A fossil fuel steam generator includes an economizer, a steam drum, and the furnace with its steam generating tubes and superheater coils. Necessary safety valves are located at suitable points to protect against excessive boiler pressure. The air and flue gas path equipment include: forced draft (FD) fan, air preheater (AP), boiler furnace, induced draft (ID) fan, fly ash collectors (electrostatic precipitator or baghouse), and the flue-gas stack. Boiler and steam cycle: Feed water heating The boiler feed water used in the steam boiler is the medium for transferring heat energy from the burning fuel to the mechanical energy of the spinning steam turbine. The total feed water consists of recirculated condensate water and purified makeup water. Because the metallic materials it contacts are subject to corrosion at high temperatures and pressures, the makeup water is highly purified before use. A system of water softeners and ion exchange demineralizers produces water so pure that it coincidentally becomes an electrical insulator, with conductivity in the range of 0.3–1.0 microsiemens per centimeter. The makeup water in a 500 MWe plant amounts to perhaps 120 US gallons per minute (7.6 L/s) to replace water drawn off from the boiler drums for water purity management, and to also offset the small losses from steam leaks in the system. Boiler and steam cycle: The feed water cycle begins with condensate water being pumped out of the condenser after traveling through the steam turbines. The condensate flow rate at full load in a 500 MW plant is about 6,000 US gallons per minute (400 L/s).
Boiler and steam cycle: The water is pressurized in two stages, and flows through a series of six or seven intermediate feed water heaters, heated up at each point with steam extracted from an appropriate duct on the turbines and gaining temperature at each stage. Typically, in the middle of this series of feedwater heaters, and before the second stage of pressurization, the condensate plus the makeup water flows through a deaerator that removes dissolved air from the water, further purifying and reducing its corrosiveness. The water may be dosed following this point with hydrazine, a chemical that removes the remaining oxygen in the water to below 5 parts per billion (ppb). It is also dosed with pH control agents such as ammonia or morpholine to keep the residual acidity low and thus non-corrosive. Boiler and steam cycle: Boiler operation The boiler is a rectangular furnace about 50 feet (15 m) on a side and 130 feet (40 m) tall. Its walls are made of a web of high pressure steel tubes about 2.3 inches (58 mm) in diameter. Fuel such as pulverized coal is air-blown into the furnace through burners located at the four corners, or along one wall, or two opposite walls, and it is ignited to rapidly burn, forming a large fireball at the center. The thermal radiation of the fireball heats the water that circulates through the boiler tubes near the boiler perimeter. The water circulation rate in the boiler is three to four times the throughput. As the water in the boiler circulates it absorbs heat and changes into steam. It is separated from the water inside a drum at the top of the furnace. The saturated steam is introduced into superheat pendant tubes that hang in the hottest part of the combustion gases as they exit the furnace. Here the steam is superheated to 1,000 °F (540 °C) to prepare it for the turbine. Boiler and steam cycle: Plants that use gas turbines to heat the water for conversion into steam use boilers known as heat recovery steam generators (HRSG). The exhaust heat from the gas turbines is used to make superheated steam that is then used in a conventional water-steam generation cycle, as described in the gas turbine combined-cycle plants section. Boiler and steam cycle: Boiler furnace and steam drum The water enters the boiler through a section in the convection pass called the economizer. From the economizer it passes to the steam drum and from there it goes through downcomers to inlet headers at the bottom of the water walls. From these headers the water rises through the water walls of the furnace where some of it is turned into steam and the mixture of water and steam then re-enters the steam drum. This process may be driven purely by natural circulation (because the water in the downcomers is denser than the water/steam mixture in the water walls) or assisted by pumps. In the steam drum, the water is returned to the downcomers and the steam is passed through a series of steam separators and dryers that remove water droplets from the steam. The dry steam then flows into the superheater coils. Boiler and steam cycle: The boiler furnace auxiliary equipment includes coal feed nozzles and igniter guns, soot blowers, water lancing, and observation ports (in the furnace walls) for observation of the furnace interior. Furnace explosions due to any accumulation of combustible gases after a trip-out are avoided by flushing out such gases from the combustion zone before igniting the coal.
The steam drum (as well as the superheater coils and headers) has air vents and drains needed for initial start up. Boiler and steam cycle: Superheater Fossil fuel power stations often have a superheater section in the steam generating furnace. The steam passes through drying equipment inside the steam drum on to the superheater, a set of tubes in the furnace. Here the steam picks up more energy from hot flue gases outside the tubing, and its temperature is now superheated above the saturation temperature. The superheated steam is then piped through the main steam lines to the valves before the high-pressure turbine. Boiler and steam cycle: Nuclear-powered steam plants do not have such sections but produce steam at essentially saturated conditions. Experimental nuclear plants were equipped with fossil-fired superheaters in an attempt to improve overall plant operating cost. Steam condensing The condenser condenses the steam from the exhaust of the turbine into liquid to allow it to be pumped. If the condenser can be made cooler, the pressure of the exhaust steam is reduced and the efficiency of the cycle increases. Boiler and steam cycle: The surface condenser is a shell and tube heat exchanger in which cooling water is circulated through the tubes. The exhaust steam from the low-pressure turbine enters the shell, where it is cooled and converted to condensate (water) by flowing over the tubes. Such condensers use steam ejectors or rotary motor-driven exhausters for continuous removal of air and gases from the steam side to maintain vacuum. Boiler and steam cycle: For best efficiency, the temperature in the condenser must be kept as low as practical in order to achieve the lowest possible pressure in the condensing steam. Since the condenser temperature can almost always be kept significantly below 100 °C, where the vapor pressure of water is much less than atmospheric pressure, the condenser generally works under vacuum. Thus leaks of non-condensible air into the closed loop must be prevented. Boiler and steam cycle: Typically the cooling water causes the steam to condense at a temperature of about 25 °C (77 °F), and that creates an absolute pressure in the condenser of about 2–7 kPa (0.59–2.07 inHg), i.e. a vacuum of about −95 kPa (−28 inHg) relative to atmospheric pressure. The large decrease in volume that occurs when water vapor condenses to liquid creates the vacuum that generally increases the efficiency of the turbines. Boiler and steam cycle: The limiting factor is the temperature of the cooling water and that, in turn, is limited by the prevailing average climatic conditions at the power station's location (it may be possible to lower the temperature beyond the turbine limits during winter, causing excessive condensation in the turbine). Plants operating in hot climates may have to reduce output if their source of condenser cooling water becomes warmer; unfortunately this usually coincides with periods of high electrical demand for air conditioning. Boiler and steam cycle: The condenser generally uses either circulating cooling water from a cooling tower to reject waste heat to the atmosphere, or once-through cooling (OTC) water from a river, lake or ocean. In the United States, about two-thirds of power plants use OTC systems, which often have significant adverse environmental impacts. The impacts include thermal pollution and killing large numbers of fish and other aquatic species at cooling water intakes.
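The condenser vacuum quoted above follows directly from the saturation pressure of water at the condensing temperature. As a quick check, here is a small Python sketch using the Antoine equation with published constants for water (valid roughly 1–100 °C); the numbers are illustrative, not a design calculation:

```python
# Saturation (vapor) pressure of water via the Antoine equation,
# log10(P_mmHg) = A - B / (C + T_celsius); constants valid ~1-100 C.
A, B, C = 8.07131, 1730.63, 233.426

def saturation_pressure_kpa(t_celsius: float) -> float:
    p_mmhg = 10 ** (A - B / (C + t_celsius))
    return p_mmhg * 0.133322  # convert mmHg to kPa

# At the ~25 C condensing temperature cited above:
print(f"{saturation_pressure_kpa(25.0):.1f} kPa")
# -> ~3.2 kPa, consistent with the 2-7 kPa absolute pressure
# range quoted for typical condensers.
```

The same relation explains why warmer cooling water hurts output: a higher condensing temperature raises the back pressure on the low-pressure turbine.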
Boiler and steam cycle: The heat absorbed by the circulating cooling water in the condenser tubes must also be removed to maintain the ability of the water to cool as it circulates. This is done by pumping the warm water from the condenser through either natural draft, forced draft or induced draft cooling towers that reduce the temperature of the water by evaporation, by about 11 to 17 °C (20 to 30 °F), expelling waste heat to the atmosphere. The circulation flow rate of the cooling water in a 500 MW unit is about 14.2 m3/s (500 ft3/s or 225,000 US gal/min) at full load. The condenser tubes are typically made of stainless steel or other alloys to resist corrosion from either side. Nevertheless, they may become internally fouled during operation by bacteria or algae in the cooling water or by mineral scaling, all of which inhibit heat transfer and reduce thermodynamic efficiency. Many plants include an automatic cleaning system that circulates sponge rubber balls through the tubes to scrub them clean without the need to take the system off-line. The cooling water used to condense the steam in the condenser returns to its source without having been changed other than having been warmed. If the water returns to a local water body (rather than a circulating cooling tower), it is often tempered with cool 'raw' water to prevent thermal shock when discharged into that body of water. Boiler and steam cycle: Another form of condensing system is the air-cooled condenser. The process is similar to that of a radiator and fan. Exhaust heat from the low-pressure section of a steam turbine runs through the condensing tubes; the tubes are usually finned, and ambient air is pushed through the fins by a large fan. The steam condenses to water to be reused in the water-steam cycle. Air-cooled condensers typically operate at a higher temperature than water-cooled versions. While saving water, the efficiency of the cycle is reduced (resulting in more carbon dioxide per megawatt-hour of electricity). Boiler and steam cycle: From the bottom of the condenser, powerful condensate pumps recycle the condensed steam (water) back to the water/steam cycle. Reheater Power station furnaces may have a reheater section containing tubes heated by hot flue gases outside the tubes. Exhaust steam from the high-pressure turbine is passed through these heated tubes to collect more energy before driving the intermediate and then low-pressure turbines. Boiler and steam cycle: Air path External fans are provided to give sufficient air for combustion. The primary air fan takes air from the atmosphere and first warms it in the air preheater for better economy. Primary air then passes through the coal pulverizers, and carries the coal dust to the burners for injection into the furnace. The secondary air fan likewise takes air from the atmosphere and first warms it in the air preheater. Secondary air is mixed with the coal/primary air flow in the burners. Boiler and steam cycle: The induced draft fan assists the forced draft (FD) fans by drawing combustion gases out of the furnace, maintaining a slightly below-atmospheric pressure in the furnace to avoid leakage of combustion products from the boiler casing. Steam turbine generator: A steam turbine generator consists of a series of steam turbines interconnected to each other and a generator on a common shaft.
Steam turbine generator: Steam turbine There is usually a high-pressure turbine at one end, followed by an intermediate-pressure turbine, and finally one, two, or three low-pressure turbines, and the shaft that connects to the generator. As steam moves through the system and loses pressure and thermal energy, it expands in volume, requiring increasing diameter and longer blades at each succeeding stage to extract the remaining energy. The entire rotating mass may be over 200 metric tons and 100 feet (30 m) long. It is so heavy that it must be kept turning slowly even when shut down (at 3 rpm) so that the shaft will not bow even slightly and become unbalanced. This is so important that it is one of only six functions of blackout emergency power batteries on site. (The other five are emergency lighting, communication, station alarms, generator hydrogen seal system, and turbogenerator lube oil.) For a typical late 20th-century power station, superheated steam from the boiler is delivered through 14–16-inch (360–410 mm) diameter piping at 2,400 psi (17 MPa; 160 atm) and 1,000 °F (540 °C) to the high-pressure turbine, where it falls to 600 psi (4.1 MPa; 41 atm) and 600 °F (320 °C) as it passes through the stage. It exits via 24–26-inch (610–660 mm) diameter cold reheat lines and passes back into the boiler, where the steam is reheated in special reheat pendant tubes back to 1,000 °F (540 °C). The hot reheat steam is conducted to the intermediate pressure turbine, where it falls in both temperature and pressure and exits directly to the long-bladed low-pressure turbines and finally exits to the condenser. Steam turbine generator: Turbo generator The generator, typically about 30 feet (9 m) long and 12 feet (3.7 m) in diameter, contains a stationary stator and a spinning rotor, each containing miles of heavy copper conductor. There is generally no permanent magnet, thus preventing black starts. In operation it generates up to 21,000 amperes at 24,000 volts AC (504 MWe) as it spins at either 3,000 or 3,600 rpm, synchronized to the power grid. The rotor spins in a sealed chamber cooled with hydrogen gas, selected because it has the highest known heat transfer coefficient of any gas and for its low viscosity, which reduces windage losses. This system requires special handling during startup, with air in the chamber first displaced by carbon dioxide before filling with hydrogen. This ensures that a highly explosive hydrogen–oxygen environment is not created. Steam turbine generator: The power grid frequency is 60 Hz across North America and 50 Hz in Europe, Oceania, Asia (Korea and parts of Japan are notable exceptions), and parts of Africa. The desired frequency affects the design of large turbines, since they are highly optimized for one particular speed. The electricity flows to a distribution yard where transformers increase the voltage for transmission to its destination. Steam turbine generator: The steam turbine-driven generators have auxiliary systems enabling them to work satisfactorily and safely. The steam turbine generator, being rotating equipment, generally has a heavy, large-diameter shaft. The shaft therefore not only requires support but also has to be kept in position while running. To minimize the frictional resistance to the rotation, the shaft has a number of bearings. The bearing shells, in which the shaft rotates, are lined with a low-friction material like Babbitt metal.
Oil lubrication is provided to further reduce the friction between shaft and bearing surface and to limit the heat generated. Stack gas path and cleanup: As the combustion flue gas exits the boiler it is routed through a rotating flat basket of metal mesh which picks up heat and returns it to incoming fresh air as the basket rotates. This is called the air preheater. The gas exiting the boiler is laden with fly ash, which are tiny spherical ash particles. The flue gas contains nitrogen along with the combustion products carbon dioxide, sulfur dioxide, and nitrogen oxides. The fly ash is removed by fabric bag filters in baghouses or electrostatic precipitators. Once removed, the fly ash byproduct can sometimes be used in the manufacturing of concrete. This cleaning up of flue gases, however, only occurs in plants that are fitted with the appropriate technology. Still, the majority of coal-fired power stations in the world do not have these facilities. Legislation in Europe has been effective in reducing flue gas pollution. Japan has been using flue gas cleaning technology for over 30 years and the US has been doing the same for over 25 years. China is now beginning to grapple with the pollution caused by coal-fired power stations. Stack gas path and cleanup: Where required by law, the sulfur and nitrogen oxide pollutants are removed by stack gas scrubbers which use a pulverized limestone or other alkaline wet slurry to remove those pollutants from the exit stack gas. Other devices use catalysts to remove nitrogen oxide compounds from the flue-gas stream. The gas travelling up the flue-gas stack may by this time have dropped to about 50 °C (120 °F). A typical flue-gas stack may be 150–180 metres (490–590 ft) tall to disperse the remaining flue gas components in the atmosphere. The tallest flue-gas stack in the world is 419.7 metres (1,377 ft) tall at the Ekibastuz GRES-2 Power Station in Kazakhstan. Stack gas path and cleanup: In the United States and a number of other countries, atmospheric dispersion modeling studies are required to determine the flue-gas stack height needed to comply with the local air pollution regulations. The United States also requires the height of a flue-gas stack to comply with what is known as the "good engineering practice" (GEP) stack height. In the case of existing flue gas stacks that exceed the GEP stack height, any air pollution dispersion modeling studies for such stacks must use the GEP stack height rather than the actual stack height. Auxiliary systems: Boiler make-up water treatment plant and storage Since there is continuous withdrawal of steam and continuous return of condensate to the boiler, losses due to blowdown and leakages have to be made up to maintain a desired water level in the boiler steam drum. For this, continuous make-up water is added to the boiler water system. Impurities in the raw water input to the plant generally consist of calcium and magnesium salts which impart hardness to the water. Hardness in the make-up water to the boiler will form deposits on the tube water surfaces which will lead to overheating and failure of the tubes. Thus, the salts have to be removed from the water, and that is done by a water demineralising treatment plant (DM). A DM plant generally consists of cation, anion, and mixed bed exchangers. Any ions in the final water from this process consist essentially of hydrogen ions and hydroxide ions, which recombine to form pure water.
Very pure DM water becomes highly corrosive once it absorbs oxygen from the atmosphere because of its very high affinity for oxygen. Auxiliary systems: The capacity of the DM plant is dictated by the type and quantity of salts in the raw water input. However, some storage is essential as the DM plant may be down for maintenance. For this purpose, a storage tank is installed from which DM water is continuously withdrawn for boiler make-up. The storage tank for DM water is made from materials not affected by corrosive water, such as PVC. The piping and valves are generally of stainless steel. Sometimes, a steam blanketing arrangement or stainless steel doughnut float is provided on top of the water in the tank to avoid contact with air. DM water make-up is generally added at the steam space of the surface condenser (i.e., the vacuum side). This arrangement not only sprays the water but also deaerates it, with the dissolved gases being removed by a de-aerator through an ejector attached to the condenser. Auxiliary systems: Fuel preparation system In coal-fired power stations, the raw feed coal from the coal storage area is first crushed into small pieces and then conveyed to the coal feed hoppers at the boilers. The coal is next pulverized into a very fine powder. The pulverizers may be ball mills, rotating drum grinders, or other types of grinders. Some power stations burn fuel oil rather than coal. The oil must be kept warm (above its pour point) in the fuel oil storage tanks to prevent the oil from congealing and becoming unpumpable. The oil is usually heated to about 100 °C before being pumped through the furnace fuel oil spray nozzles. Boilers in some power stations use processed natural gas as their main fuel. Other power stations may use processed natural gas as auxiliary fuel in the event that their main fuel supply (coal or oil) is interrupted. In such cases, separate gas burners are provided on the boiler furnaces. Auxiliary systems: Barring gear Barring gear (or "turning gear") is the mechanism provided to rotate the turbine generator shaft at a very low speed after unit stoppages. Once the unit is "tripped" (i.e., the steam inlet valve is closed), the turbine coasts down towards standstill. When it stops completely, there is a tendency for the turbine shaft to deflect or bend if allowed to remain in one position too long. This is because the heat inside the turbine casing tends to concentrate in the top half of the casing, making the top half portion of the shaft hotter than the bottom half. The shaft therefore could warp or bend by millionths of inches. Auxiliary systems: This small shaft deflection, only detectable by eccentricity meters, would be enough to cause damaging vibrations to the entire steam turbine generator unit when it is restarted. The shaft is therefore automatically turned at low speed (about one percent of rated speed) by the barring gear until it has cooled sufficiently to permit a complete stop. Oil system An auxiliary oil system pump is used to supply oil at the start-up of the steam turbine generator. It supplies the hydraulic oil system required for the steam turbine's main inlet steam stop valve, the governing control valves, the bearing and seal oil systems, the relevant hydraulic relays and other mechanisms. At a preset speed of the turbine during start-ups, a pump driven by the turbine main shaft takes over the functions of the auxiliary system.
Auxiliary systems: Generator cooling While small generators may be cooled by air drawn through filters at the inlet, larger units generally require special cooling arrangements. Hydrogen gas cooling, in an oil-sealed casing, is used because it has the highest known heat transfer coefficient of any gas and for its low viscosity which reduces windage losses. This system requires special handling during start-up, with air in the generator enclosure first displaced by carbon dioxide before filling with hydrogen. This ensures that the highly flammable hydrogen does not mix with oxygen in the air. Auxiliary systems: The hydrogen pressure inside the casing is maintained slightly higher than atmospheric pressure to avoid outside air ingress, and up to about two atmospheres pressure to improve heat transfer capacity. The hydrogen must be sealed against outward leakage where the shaft emerges from the casing. On smaller turbines, mechanical seals around the shaft are installed with a very small annular gap to avoid rubbing between the shaft and the seals; labyrinth-type seals are used on larger machines. Seal oil is used to prevent hydrogen gas leakage to the atmosphere. Auxiliary systems: The generator also uses water cooling. Since the generator coils are at a potential of about 22 kV, an insulating barrier such as Teflon is used to interconnect the water line and the generator high-voltage windings. Demineralized water of low conductivity is used. Auxiliary systems: Generator high-voltage system The generator voltage for modern utility-connected generators ranges from 11 kV in smaller units to 30 kV in larger units. The generator high-voltage leads are normally large aluminium channels because of their high current as compared to the cables used in smaller machines. They are enclosed in well-grounded aluminium bus ducts and are supported on suitable insulators. The generator high-voltage leads are connected to step-up transformers for connecting to a high-voltage electrical substation (usually in the range of 115 kV to 765 kV) for further transmission by the local power grid. Auxiliary systems: The necessary protection and metering devices are included for the high-voltage leads. Thus, the steam turbine generator and the transformer form one unit. Smaller units may share a common generator step-up transformer with individual circuit breakers to connect the generators to a common bus. Monitoring and alarm system Most of the power station operational controls are automatic. However, at times, manual intervention may be required. Thus, the plant is provided with monitors and alarm systems that alert the plant operators when certain operating parameters are seriously deviating from their normal range. Auxiliary systems: Battery-supplied emergency lighting and communication A central battery system consisting of lead–acid cell units is provided to supply emergency electric power, when needed, to essential items such as the power station's control systems, communication systems, generator hydrogen seal system, turbine lube oil pumps, and emergency lighting. This is essential for a safe, damage-free shutdown of the units in an emergency situation. Auxiliary systems: Circulating water system The circulating water system dissipates the thermal load of the main turbine exhaust steam, the condensate from the gland steam condenser, and the condensate from the low-pressure heater by providing a continuous supply of cooling water to the main condenser, thereby condensing the steam.
The consumption of cooling water by inland power stations is estimated to reduce power availability for the majority of thermal power stations by 2040–2069.
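As a closing sanity check on the cooling-water figures quoted earlier (a circulation rate of about 14.2 m³/s warmed by 11 to 17 °C in a 500 MW unit), a simple energy balance gives the implied heat rejection. The density and specific heat below are standard values for water; this is an order-of-magnitude check, not a design calculation.

```python
rho_water = 1000.0   # kg/m^3, density of water
cp_water = 4186.0    # J/(kg*K), specific heat of water

flow = 14.2          # m^3/s, circulating-water rate quoted for a 500 MW unit
mass_flow = flow * rho_water  # kg/s

for delta_t in (11.0, 17.0):  # temperature rise across the condenser, K
    q_mw = mass_flow * cp_water * delta_t / 1e6
    print(f"dT = {delta_t:4.1f} K -> ~{q_mw:.0f} MW of heat rejected")

# Roughly 650-1000 MW of waste heat alongside 500 MW of electricity is
# consistent with an overall thermal efficiency in the mid-30s to ~40 percent.
```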
**Baptism by fire** Baptism by fire: The phrase baptism by fire or baptism of fire is a Christian theological concept originating from the words of John the Baptist in Matthew 3:11. It also has related meanings in military history and popular culture. Christianity: The term baptism with fire originated from the words of John the Baptist in Matthew 3:11 (and the parallel passage in Luke 3:16): Matthew 3:11 "I indeed baptize you with water unto repentance: but he that cometh after me is mightier than I, whose shoes I am not worthy to bear: he shall baptize you with the Holy Ghost, and with fire" King James Version 1611 Many Christian writers, such as John Kitto, have noted that this could be taken as a hendiadys, the Spirit as fire, or as pointing out two distinct baptisms - one by the Spirit, one by fire. If two baptisms, then various meanings have been suggested for the second baptism, by fire - to purify each individual who accepts Jesus Christ as Lord and Savior to be the temple of the Holy Spirit, to cast out demons and to destroy the stronghold of the flesh by the Fire of God. Of this expression, J. H. Thayer commented: "to overwhelm with fire (those who do not repent), i.e., to subject them to the terrible penalties of hell". W. E. Vine noted regarding the "fire" of this passage: "of the fire of Divine judgment upon the rejectors of Christ, Matt. 3:11 (where a distinction is to be made between the baptism of the Holy Spirit at Pentecost and the fire of Divine retribution)". Arndt and Gingrich speak of the "fire of divine Judgment Mt. 3:11; Lk. 3:16". However, as J. W. McGarvey observed, the phrase "baptize you ... in the fire" also refers to the day of Pentecost, because there was a "baptism of fire" which appeared as tongues of fire on that day: parted "tongues", "like as of fire", "sat upon" each of the apostles. Those brothers were "overwhelmed with the fire of The Holy Spirit" on that occasion. Similarly, Matthew Henry comments that as "fire make[s] all it seizes like itself... so does the Spirit make the soul holy like itself." The concept of baptism by 'fire and the Holy Spirit' lies behind the Consolamentum rite of the Cathars or Albigenses. Christianity: Methodism (inclusive of the holiness movement) In Methodism (inclusive of the holiness movement), baptism by fire is synonymous with the second work of grace: entire sanctification, which is also known as Baptism with the Holy Spirit. Jabulani Sibanda, a theologian in the Wesleyan-Arminian tradition, says with regard to entire sanctification: This experience is important because it is the second work of grace. It leads to purity of heart, and it is the baptism by fire (Matthew 3:11) in which impurities are dealt with. This experience symbolizes the death to self as Paul said that he is crucified with Christ, “…I do not live but Christ lives in me” (Galatians 2:20). It is the singleness of the eye. “The light of the body is the eye: if therefore thine eye be single, thy whole body shall be full of light. But if thine eye be evil, thy whole body shall be full of darkness. If therefore the light that is in thee be darkness, how great is that darkness” (Matthew 6:22–23 KJV). Singleness of the eye is the opposite of what James addresses as double mindedness. He calls people to cleanse their hands and purify their hearts. The person focuses on God alone; he or she is no longer unstable. It is also an experience of devotedness and separateness to God. This is an experience of one giving oneself totally to God.
Christianity: Pentecostalism In Pentecostalism, baptism by fire is synonymous with Spirit baptism, which is accompanied by glossolalia (speaking in tongues). The Church of Jesus Christ of Latter-Day Saints In the Church of Jesus Christ of Latter-day Saints, the term relates to confirmation and the phrase "baptism of fire" or "baptism by fire" appears several times in Latter-day Saint canonized scripture, including: Doctrine and Covenants 20:41; Doctrine and Covenants 33:11; Doctrine and Covenants 39:6; and 2 Nephi 31:13–17. The relation between the confirmation of the Holy Ghost and the baptism of fire is explained by David A. Bednar, a church apostle: "the Holy Ghost is a sanctifier who cleanses and burns dross and evil out of human souls as though by fire". Military usage: In military usage, a baptism by fire refers to a soldier's first time in battle. The Catholic Encyclopedia, and writers such as John Deedy, state that the term in a military sense entered the English language in 1822 as a translation of the French phrase baptême du feu. From military usage the term has extended into many other areas in relation to an initiation into a new role. In popular culture: The phrase 'baptism of fire' has also entered into popular culture. An example is the song "Brothers in Arms" by Dire Straits, which covers the British involvement in the Falklands War: "Through these fields of destruction / baptisms of fire / I've witnessed your suffering / as the battle raged higher."
**Electrohydraulic servo valve** Electrohydraulic servo valve: An electrohydraulic servo valve (EHSV) is an electrically-operated valve that controls how hydraulic fluid is sent to an actuator. Servo valves are often used to control powerful hydraulic cylinders with a very small electrical signal. Servo valves can provide precise control of position, velocity, pressure, and force with good post-movement damping characteristics. History of electrohydraulic servo valves: The electrohydraulic servo valve first appeared in World War II. The EHSVs in use during the 1940s were characterized by poor accuracy and slow response times due to the inability to rapidly convert electrical signals into hydraulic flows. The first two-stage servo valve used a solenoid to actuate a first stage spool which in turn drove a rotating main stage. The servo valves of the World War II era were similar to this, using a solenoid to drive a spool valve. History of electrohydraulic servo valves: Advancement of EHSVs took off in the 1950s, largely due to the adoption of permanent magnet torque motors as the first stage (as opposed to solenoids). This resulted in greatly improved response times and a reduction in power used to control the valves. Description: Types Electrohydraulic servo valves may consist of one or more stages. A single-stage servo valve uses a torque motor to directly position a spool valve. Single-stage servo valves suffer from limitations in flow capability and stability due to torque motor power requirements. Two-stage servo valves may use flapper, jet pipe, or deflector jet valves as hydraulic amplifier first stages to position a second-stage spool valve. This design results in significant increases in servo valve flow capability, stability, and force output. Similarly, three-stage servo valves may use an intermediate stage spool valve to position a larger third stage spool valve. Three-stage servo valves are limited to very high power applications, where significant flows are required. Description: Furthermore, two-stage servo valves may be classified by the type of feedback used for the second stage, which may be spool position, load pressure, or load flow feedback. Most commonly, two-stage servo valves use position feedback, which may further be classified as direct feedback, force feedback, or spring centering. Description: Control A servo valve receives pressurized hydraulic fluid from a source, typically a hydraulic pump. It then transfers the fluid to a hydraulic cylinder in a closely controlled manner. Typically, the valve will move the spool proportionally to an electrical signal that it receives, indirectly controlling flow rate. Simple hydraulic control valves are binary: they are either on or off. Servo valves are different in that they can continuously vary the flow they supply from zero up to their rated maximum flow, or until the output pressure reaches the supplied pressure. More complex servo valves can control other parameters. For instance, some have internal feedback so that the input signal effectively controls flow or output pressure, rather than spool position. Description: Servo valves are often used in feedback control, where the position or force on a hydraulic cylinder is measured and fed back into a controller that varies the signal sent to the servo valve. This allows very precise control of the cylinder.
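To make the closed-loop idea concrete, the toy simulation below drives a hypothetical cylinder whose velocity is proportional to the flow metered by the valve, with a proportional controller commanding the valve. The gains, limits, and dimensions are invented for illustration and do not describe any particular valve.

```python
# Minimal sketch of position feedback through a servo valve
# (all numeric values are illustrative assumptions).

dt = 0.001                 # s, simulation step
kp = 8.0                   # proportional gain, valve command per metre of error
q_max = 6.0e-4             # m^3/s, rated valve flow at full command
area = 2.0e-3              # m^2, piston area

target = 0.10              # m, commanded cylinder position
pos = 0.0                  # m, measured cylinder position

for step in range(3000):
    error = target - pos
    cmd = max(-1.0, min(1.0, kp * error))  # valve command, clipped to +/-100 %
    flow = q_max * cmd                     # valve meters flow with the command
    pos += (flow / area) * dt              # cylinder velocity = flow / area

print(f"final position ~{pos * 1000:.1f} mm (target {target * 1000:.0f} mm)")
```

With these made-up numbers the loop behaves as a first-order system with a time constant of about 0.4 s, so after 3 simulated seconds the cylinder has settled to within a fraction of a millimetre of the target.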
Examples of usage: Manufacturing One example of servo valve use is in blow molding, where the servo valve controls the wall thickness of the extruded plastic making up the bottle or container by use of a deformable die. Mechanical feedback has been replaced by electrical feedback using a position transducer. Integrated electronics close the position loop for the spool. These valves are suitable for electrohydraulic position, velocity, pressure or force control systems with extremely high dynamic response requirements. Examples of usage: Aircraft Servo valves are used to regulate the flow of fuel into a turbofan engine governed by FADEC. In fly-by-wire aircraft the control surfaces are often moved by servo valves connected to hydraulic cylinders. The signals to the servo valves are controlled by a flight control computer that receives commands from the pilot and monitors the flight of the aircraft.
**Acinic cell carcinoma** Acinic cell carcinoma: Acinic cell carcinoma is a malignant tumor representing 2% of all salivary tumors. It is found in the parotid gland 90% of the time, and intraorally on the buccal mucosa or palate in the remaining 10%. The disease presents as a slow growing mass, associated with pain or tenderness in 50% of the cases. It often appears pseudoencapsulated. Diagnosis: Histology shows basophilic, bland cells similar to acinar cells. Growth patterns include solid (acinar cells), microcystic (small cystic spaces with mucinous or eosinophilic content), papillary-cystic (large cystic spaces lined by epithelium), and follicular (similar to thyroid tissue). These tumors, which resemble serous acinar cells, vary in their behavior from locally aggressive to blatantly malignant. Diagnosis: It can also appear in the breast. The pancreatic form of acinic cell carcinoma is a rare subtype of exocrine pancreatic cancer. Exocrine pancreatic cancers are the most common form of pancreatic cancer when compared to endocrine pancreatic cancer. Acinic cell carcinomas arise most frequently in the parotid gland. Other sites of primary tumors have included the submandibular gland and other major and minor salivary glands. There have been rare cases of primary tumors involving the parapharyngeal space and the sublingual gland. Prognosis: Prognosis is good for acinic cell carcinoma of the parotid gland, with five-year survival rates approaching 90%, and 20-year survival exceeding 50%. Patients with acinic cell carcinomas with high grade transformation (sometimes also called dedifferentiation) have significantly worse survival. The prognosis of an acinic cell carcinoma originating in the lung is much more guarded than cases of this rare histotype occurring in most other organs, but is still considerably better than for other types of lung cancer. Treatment: Surgical resection is the mainstay of treatment, whenever possible. If the tumor is completely removed, post-operative radiation therapy is typically not needed since acinic cell carcinoma is considered a low-grade histology. Post-operative radiation therapy for acinic cell carcinoma is used if margins are positive, resection is incomplete, the tumor invades beyond the gland, or lymph nodes are positive. Radiation options include neutron beam radiation and conventional radiation; chemotherapy may also be used. Epidemiology: Acinic cell carcinoma appears in all age groups, but presents at a younger median age (approx. 52 years) than most other salivary gland cancers. Occurrences in children are quite common. Salivary gland cancers appear to be on the rise in many Western nations, and their risk factors are not yet fully identified. Among the known risk factors are external and internal radioactive exposure, such as iodine and cesium radionuclides. Acinic cell carcinoma of the lung: Acinic cell carcinoma of the lung is a very rare variant of lung cancer that, in this organ, is classified among the salivary gland-like carcinomas of the lung. Fewer than 1% of malignancies beginning in the lower respiratory tract are acinic cell carcinomas.
**Ekiga** Ekiga: Ekiga (formerly called GnomeMeeting) is a VoIP and video conferencing application for GNOME and Microsoft Windows. It is distributed as free software under the terms of the GNU GPL-2.0-or-later. It was the default VoIP client in Ubuntu until October 2009, when it was replaced by Empathy. Ekiga supports both the SIP and H.323 (based on OPAL) protocols and is fully interoperable with any other SIP compliant application and with Microsoft NetMeeting. It supports many high-quality audio and video codecs. Ekiga was initially written by Damien Sandras in order to graduate from the University of Louvain (UCLouvain). It is currently developed by a community-based team led by Sandras. The logo was designed based on his concept by Andreas Kwiatkowski. Ekiga.net was also a free and private SIP registrar, which enabled its members to originate and terminate (receive) calls from and to each other directly over the Internet. Ekiga: The service was discontinued at the end of 2018. Features: Features of Ekiga include: Integration Ekiga is integrated with a number of different software packages and protocols such as LDAP directory registration and browsing, along with support for Novell Evolution so that contacts are shared between both programs, and zeroconf (Apple Bonjour) support. It auto-detects devices including USB, ALSA and legacy OSS soundcards, and Video4Linux and FireWire cameras. User interface Ekiga supports a contact-list-based interface along with presence support with custom messages. It allows for the monitoring of contacts and viewing call history, along with an addressbook, dialpad, and chat window. SIP URLs and H.323/callto support is built in, along with full-screen videoconferencing (accelerated using a graphics card). Features: Technical features Call forwarding on busy, no answer, always (SIP and H.323) Call transfer (SIP and H.323) Call hold (SIP and H.323) DTMF support (SIP and H.323) Basic instant messaging (SIP) Text chat (SIP and H.323) Register with several registrars (SIP) and gatekeepers (H.323) simultaneously Ability to use an outbound proxy (SIP) or a gateway (H.323) Message waiting indications (SIP) Audio and video (SIP and H.323) STUN support (SIP and H.323) LDAP support Audio codec algorithms: iLBC, GSM 06.10, MS-GSM, G.711 A-law, G.711 μ-law, G.726, G.721, Speex, G.722, CELT (also G.723.1, G.728, G.729, GSM 06.10, GSM-AMR, G.722.2 [GSM‑AMR-WB] using Intel IPP) Video codec algorithms: H.261, H.263+, H.264, Theora, MPEG-4 History: Ekiga was originally started over Christmas in the year 2000. Originally written by Damien Sandras, it grew to being maintained by a team of nine regular contributors by 2011. Sandras wanted to create a NetMeeting clone for Linux as his graduating project at UCLouvain. Ekiga was referred to as GnomeMeeting until 2004, when a name change was thought necessary by the developers. Concerns were cited that the original name was associated with a discontinued Microsoft product called NetMeeting, and not always recognized as VoIP software. It was also proposed that some people assumed they needed to run GNOME to run GnomeMeeting, which was no longer the case. Eventually on January 18, 2006 the name Ekiga was chosen, based on an old way of communicating between villages in Cameroon.
Around that time the direction of the software project was changed and it turned into a SIP client. The following shows major version releases: March 2004 – Version 1.0 under the name GnomeMeeting March 2006 – Version 2.0 was released under the name Ekiga, bundled with GNOME 2.14 April 2007 – Version 2.0.9 was the first version to include support for Microsoft Windows September 2008 – Version 3.0.0 March 2009 – First release of the 3.2.x series, adding support for G.722 audio as well as unified H.263 support. History: November 2012 – Ekiga 4.0.0, "The Victory Release", a major release with many improvements. February 2013 – Ekiga 4.0.1, a follow-up release with further improvements. 2015 – Ekiga 5.0, a new version with GTK+ 3 and new codecs, announced.
**Marine geophysics** Marine geophysics: Marine geophysics is the scientific discipline that employs methods of geophysics to study the world's ocean basins and continental margins, particularly the solid earth beneath the ocean. It shares objectives with marine geology, which uses sedimentological, paleontological, and geochemical methods. Marine geophysical data analyses led to the theories of seafloor spreading and plate tectonics. Methods: Marine geophysics uses techniques largely employed on the continents, from fields including exploration geophysics and seismology, and methods unique to the ocean such as sonar. Most geophysical instruments are used from surface ships but some are towed near the seafloor or function autonomously, as with Autonomous Underwater Vehicles or AUVs. Objectives of marine geophysics include determination of the depth and features of the seafloor, the seismic structure and earthquakes in the ocean basins, the mapping of gravity and magnetic anomalies over the basins and margins, the determination of heat flow through the seafloor, and electrical properties of the ocean crust and Earth's mantle. Navigation Modern marine geophysics, as with most oceanographic surveying with research ships, uses Global Positioning System satellites, either the U.S. GPS array or the Russian GLONASS, for ship navigation. Geophysical instruments towed near the seafloor typically use acoustic transponder navigation networks. Methods: Ocean depth The depth of the seafloor is measured using echo sounding, a sonar method developed during the 20th century and advanced during World War II. Common variations are based on the sonar beam width and the number of sonar beams, as in multibeam sonar or swath mapping, which became more advanced toward the latter half of the 20th century. Methods: Sedimentary cover of the seafloor The thickness and type of sediments covering the ocean crust are estimated using the seismic reflection technique. This method was highly advanced by offshore oil exploration companies. The method employs a sound source at the ship with much lower frequencies than echo sounding, and an array of hydrophones towed by the ship, that record echoes from the internal structure of the sediment cover and the crust below the sediment. In some cases, reflections from the internal structure of the ocean crust can be detected. Echo sounders that use lower frequencies near 3.5 kHz are used to detect both the seafloor and shallow structure below the seafloor. Side-looking sonar, where the sonar beams are aimed just below horizontal, is used to map the seafloor bottom texture to ranges from tens of meters to a kilometer or more depending on the device. Methods: Structure of the ocean crust and upper mantle When the sound or energy source is separated from the recording devices by distances of several kilometers or more, then refracted seismic waves are measured. Their travel time can be used to determine the internal structure of the ocean crust, and from the seismic velocities determined by the method, an estimate can be made of the crustal rock type. Recording devices include hydrophones at the ocean surface and also ocean bottom seismographs. Refraction experiments have detected anisotropy of seismic wave speed in the oceanic upper mantle.
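The depth calculation behind the echo sounding described above is a simple two-way travel-time computation. The sketch below assumes a nominal sound speed of 1,500 m/s in seawater, a common approximation; real surveys correct for the sound-speed profile with depth.

```python
SOUND_SPEED = 1500.0  # m/s, nominal value for seawater; real surveys correct
                      # for temperature, salinity and pressure effects.

def depth_from_echo(two_way_time_s: float) -> float:
    """Seafloor depth from a single-beam echo sounder ping."""
    return SOUND_SPEED * two_way_time_s / 2.0

# A ping that returns after 5.3 s implies abyssal-plain depths:
print(f"{depth_from_echo(5.3):.0f} m")   # ~3975 m
```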
Methods: Measuring Earth's magnetic and gravity fields within the ocean basins The usual method of measuring the Earth's magnetic field at the sea surface is by towing a total field proton precession magnetometer several hundred meters behind a survey ship. In more limited surveys magnetometers have been towed at a depth close to the seafloor or attached to deep submersibles. Gravimeters using the zero-length spring technology are mounted in the most stable location on a ship, usually towards the center and low. They are specially designed to separate the acceleration of the ship from changes in the acceleration of Earth's gravity, or gravity anomalies, which are several thousand times smaller. In limited cases, gravity measurements have been made at the seafloor from deep submersibles. Methods: Determine the rate of heat flow from the Earth through the seafloor The geothermal gradient is measured using a 2-meter long temperature probe or with thermistors attached to sediment core barrels. Measured temperatures combined with the thermal conductivity of the sediment give a measure of the conductive heat flow through the seafloor. Methods: Measure the electrical properties of the ocean crust and upper mantle Electrical conductivity, or its converse, resistivity, can be related to rock type, the presence of fluids within cracks and pores in rocks, the presence of magma, and mineral deposits like sulfides at the seafloor. Surveys can be done at either the sea surface or seafloor or in combination, using active current sources or natural Earth electrical currents, known as telluric currents. In special cases, measurements of natural gamma radiation from seafloor mineral deposits have been made using scintillometers towed near the seafloor. Examples of the impact of marine geophysics: Evidence for seafloor spreading and plate tectonics Echo sounding was used to refine the limits of the known mid-ocean ridges, and to discover new ones. Further sounding mapped linear seafloor fracture zones that are nearly orthogonal to the trends of the ridges. Later, the determination of earthquake locations in the deep ocean showed that quakes are restricted to the crests of the mid-ocean ridges and stretches of fracture zones that link one segment of a ridge to another. These are now known as transform faults, one of the three classes of plate boundaries. Echo sounding was used to map the deep trenches of the oceans, and earthquakes were noted to be located in and below the trenches. Examples of the impact of marine geophysics: Data from marine seismic refraction experiments defined a thin ocean crust, approximately 6 to 8 kilometers in thickness, divided into three layers. Seismic reflection measurements made over the ocean ridges found they are devoid of sediments at the crest, but covered by increasingly thicker sediment layers with increasing distance from the ridge crest. This observation implied that the ridge crests are younger than the ridge flanks. Examples of the impact of marine geophysics: Magnetic surveys discovered linear magnetic anomalies that in many areas ran parallel to an ocean ridge crest and showed a mirror-image symmetrical pattern centered on ridge crests. Correlation of the anomalies to the history of Earth's magnetic field reversals allowed the age of the seafloor to be estimated. This connection was interpreted as the spreading of the seafloor from the ridge crests.
Linking spreading centers and transform faults to a common cause helped to develop the concept of plate tectonics. When the age of the ocean crust as determined by magnetic anomalies or drill hole samples was compared to the ocean depth, it was observed that depth and age are directly related in a seafloor depth-age relationship (a worked example of the standard empirical fit appears at the end of this article). This relationship was explained by the cooling and contracting of an oceanic plate as it spreads away from a ridge crest. Examples of the impact of marine geophysics: Evidence for paleoclimate Seismic reflection data combined with deep-sea drilling at some locations have identified widespread unconformities and distinctive seismic reflectors in the deep sea sedimentary record. These have been interpreted as evidence of past global climate change events. Seismic reflection surveys made on polar continental shelves have identified buried sedimentary features due to the advance and retreat of continental ice sheets. Swath sonar mapping has revealed the gouge tracks cut by ice sheets as they traversed polar continental shelves in the past. Examples of the impact of marine geophysics: Evidence for hydrothermal vents Heat flow measured in the ocean basins revealed that conductive heat flow decreased with the increased depth and crustal age of flanks of ocean ridges. On the ridge crest, however, conductive heat flow was found to be unexpectedly low for a location where active volcanism accompanies seafloor spreading. This anomaly was explained by the possible heat transfer by hydrothermal venting of seawater circulating in deep fissures in the crust at the ridge crest spreading centers. This hypothesis was borne out in the late 20th century when investigations by deep submersibles discovered hydrothermal vents at spreading centers. Examples of the impact of marine geophysics: Evidence for Mid-Ocean Ridge structure and properties Marine gravity profiles made across Mid-Ocean Ridges showed a lack of a gravity anomaly: the free-air anomaly is small or near zero when averaged over a broad area. This suggested that although ridge crests stand two kilometers or more above the deep ocean basins, the extra mass does not produce the increase in gravity over the ridge that would otherwise be expected. The ridges are isostatically compensated, meaning the total mass below some reference depth in the mantle below the ridge is about the same everywhere. This requires a lower density mantle below the ridge crest and upper ridge flanks. Data from seismic studies revealed lower velocities under the ridges, suggesting parts of the mantle below the crests are lower-density rock melt. This is consistent with the theories of seafloor spreading and plate tectonics. Centers of research conducting marine geophysics: Ocean University of China Alfred Wegener Institute for Polar and Marine Research Bedford Institute of Oceanography Cambridge University IFREMER Lamont–Doherty Earth Observatory National Institute of Water and Atmospheric Research National Oceanography Centre, Southampton Rosenstiel School of Marine and Atmospheric Science Scripps Institution of Oceanography Texas A&M University University of Hawaii (Manoa) University of Rhode Island University of Washington (Seattle) Woods Hole Oceanographic Institution
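The depth-age relationship mentioned above is commonly summarized, for seafloor younger than roughly 70 million years, by an empirical half-space-cooling fit of the Parsons and Sclater form, depth ≈ 2500 + 350√t metres with age t in millions of years. The sketch below simply evaluates that fit; the constants are the commonly quoted ones, not values taken from this article.

```python
import math

def seafloor_depth_m(age_myr: float) -> float:
    """Half-space-cooling fit for young seafloor (Parsons & Sclater form).
    Valid only up to roughly 70 Myr, after which subsidence flattens."""
    return 2500.0 + 350.0 * math.sqrt(age_myr)

for age in (0, 5, 20, 60):
    print(f"{age:3d} Myr -> ~{seafloor_depth_m(age):.0f} m")
# 0 Myr (ridge crest) -> ~2500 m; 60 Myr -> ~5200 m,
# matching ridge crests standing ~2 km or more above the old ocean basins.
```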
**Polycystic lipomembranous osteodysplasia with sclerosing leukoencephalopathy** Polycystic lipomembranous osteodysplasia with sclerosing leukoencephalopathy: Nasu–Hakola disease, also known as polycystic lipomembranous osteodysplasia with sclerosing leukoencephalopathy, is a rare disease characterised by early-onset dementia and multifocal bone cysts. It is caused by autosomal recessive loss of function mutations in either the TREM2 or TYROBP gene that are found most frequently in the Finnish and Japanese populations. Signs and symptoms: Four stages are recognised in this condition. The first (latent stage) shows no symptoms or signs. This stage typically lasts up to the early 20s. This is followed by the osseous stage, characterised by recurrent bone pain usually affecting the long bones of the limbs, and usually followed by pathological fractures of these bones. The third stage (early neurological) is marked by the onset of symptoms typical of a frontal lobe syndrome (euphoria, lack of concentration, loss of judgment and social inhibitions) with memory loss. Epilepsy may occur. This stage usually has its onset in the late 20s and early 30s. The final stage is characterised by severe dementia and paralysis. Death usually occurs in the late 40s or early 50s. Genetics: This condition has been associated with mutations in the TYRO protein tyrosine kinase binding protein (TYROBP) gene and in the triggering receptor expressed on myeloid cells 2 (TREM2) gene. TYROBP is located on the long arm of chromosome 19 (19q13.12) and TREM2 is located on the short arm of chromosome 6 (6p21.1). Pathophysiology: This is not understood but appears to involve the microglia. Diagnosis: This syndrome may be suspected on clinical grounds. The diagnosis is established by sequencing the TYROBP and TREM2 genes. Differential diagnosis Frontotemporal dementia Investigations X-rays show the presence of bone cysts and osteoporosis. CT or MRI of the brain show loss of tissue in the frontotemporal lobes of the brain. Calcification of the basal ganglia is common. EEG is typically normal initially, but shows diffuse slowing and irritative activity later. Treatment: There is no specific treatment for this condition. Management is supportive. Epidemiology: This condition is considered to be rare, with ~200 cases described in the literature. The estimated population prevalence is 2.0 x 10−6 in Finns. History: This condition was first described in 1973.
**Test call generator** Test call generator: Test call generators (TCGs) are revenue assurance software that replicates events on a telecoms network to identify potential revenue leakage and to help achieve regulatory compliance.
**Stall strips** Stall strips: A stall strip is a small component fixed to the leading edge of the wing of an airplane to modify its aerodynamic characteristics. These stall strips may be necessary for the airplane to comply with type certification requirements. A stall strip typically consists of a small piece of material, usually aluminium, triangular in cross section and often 6–12 inches (15–30 cm) in length. It is riveted or bonded to the wing's leading edge. Some airplanes have one stall strip on each wing, while others have a single stall strip on one wing only. Operation: A stall strip initiates flow separation on a region of the upper surface of the wing during flight at high angle of attack. This is typically to avoid a tendency to spin following a stall, or to improve the controllability of the airplane as it approaches the stall. A stall strip may be intended to alter the wing's stall characteristics and ensure that the wing root stalls before the wing tips. In some cases, such as the American Aviation AA-1 Yankee, both wings are designed to incorporate stall strips. In the case of the AA-1 the left and right wings were identical, interchangeable and built on a single wing jig, thus the more traditional use of washout in the wing design was not possible. Stall strips are usually factory-installed but, on rarer occasions, may be an after-market modification.
**Circulation (journal)** Circulation (journal): Circulation is a scientific journal published by Lippincott Williams & Wilkins for the American Heart Association. The journal publishes articles related to research in and the practice of cardiovascular diseases, including observational studies, clinical trials, epidemiology, health services and outcomes studies, and advances in applied (translational) and basic research. Its 2020 impact factor is 29.690, ranking it third among journals in the Cardiac and Cardiovascular Systems category and first in the Peripheral Vascular Disease category. Articles become open access after a 12-month embargo period. Circulation (journal): 2008 saw the appearance of six subspecialty journals. The first edition of Circulation: Arrhythmia and Electrophysiology appeared in April 2008, followed by an edition dedicated to heart failure in May titled Circulation: Heart Failure. The remaining four journals launched once per month from July through October 2008. In order of release they were Circulation: Cardiovascular Imaging, Circulation: Cardiovascular Interventions, Circulation: Cardiovascular Quality and Outcomes, and Circulation: Cardiovascular Genetics (published as Circulation: Genomic and Precision Medicine since January 2018).
**Illth** Illth: Illth, coined by John Ruskin in the 1860s, is the reverse of wealth in the sense of ill being the opposite of well. Illth: In Ruskin's view: Wealth, therefore, is "The possession of the valuable by the valiant"; and in considering it as a power existing in a nation, the two elements, the value of the thing, and the valour of its possessor, must be estimated together. Whence it appears that many of the persons commonly considered wealthy, are in reality no more wealthy than the locks of their own strong boxes are, they being inherently and eternally incapable of wealth; and operating for the nation, in an economical point of view, either as pools of dead water, and eddies in a stream (which, so long as the stream flows, are useless, or serve only to drown people, but may become of importance in a state of stagnation should the stream dry); or else, as dams in a river, of which the ultimate service depends not on the dam, but the miller; or else, as mere accidental stays and impediments, acting not as wealth, but (for we ought to have a correspondent term) as "illth" causing various devastation and trouble around them in all directions; or lastly, act not at all, but are merely animated conditions of delay, (no use being possible of anything they have until they are dead,) in which last condition they are nevertheless often useful as delays, and 'impedimenta,' (Unto This Last, Essay IV, p. 182, 1860)Various other writers have used the term, and continue to do so. A notable example is George Bernard Shaw, who used illth as a subheading in an 1889 essay.In the context of modern economic theory and practice, the term has been central to the work of Herman Daly and his advocacy for a Steady-state economy.
**Stem (audio)** Stem (audio): In audio production, a stem is a discrete or grouped collection of audio sources mixed together, usually by one person, to be dealt with downstream as one unit. A single stem may be delivered in mono, stereo, or in multiple tracks for surround sound. "STeM" is an acronym for "Stereo Masters". The beginnings of the process can be found in the production of early non-silent films. In "Das Land ohne Frauen" (Land Without Women), the first entirely German-made feature-length dramatic talkie, released in 1929, about one-quarter of the movie contained dialogue, which was strictly segregated from the special effects and music. Mixing for films: In sound mixing for film, the preparation of stems is a common stratagem to facilitate the final mix. Dialog, music and sound effects, called "D-M-E", are brought to the final mix as separate stems. Using stem mixing, the dialog can easily be replaced by a foreign-language version, the effects can easily be adapted to different mono, stereo and surround systems, and the music can be changed to fit the desired emotional response. If the music and effects stems are sent to another production facility for foreign dialog replacement, these non-dialog stems are called "M&E". The dialog stem is used by itself when editing various scenes together to construct a trailer of the film; after this some music and effects are mixed in to form a cohesive sequence. Live sound mixing: When mixing music for recordings and for live sound, a stem is a group of similar sound sources. When a large project uses more than one person mixing, stems can facilitate the job of the final mix engineer. Such stems may consist of all of the string instruments, a full orchestra, just background vocals, only the percussion instruments, a single drum set, or any other grouping that may ease the task of the final mix. Stems prepared in this fashion may be blended together later in time, as for a recording project or for consumer listening, or they may be mixed simultaneously, as in a live sound performance with multiple elements. For instance, when Barbra Streisand toured in 2006 and 2007, the audio production crew used three people to run three mixing consoles: one to mix strings, one to mix brass, reeds and percussion, and one under main engineer Bruce Jackson's control out in the audience, containing Streisand's microphone inputs and stems from the other two consoles. Studio adjustments: Stems may be supplied to a musician in the recording studio so that the musician can adjust a headphone's monitor mix by varying the levels of other instruments and vocals relative to the musician's own input. Stems may also be delivered to the consumer so they can listen to a piece of music with a custom blend of the separate elements. (See List of musical works released in a stem format.) Stems made for sale in music production: It is common in the 21st century for music producers to sell instrumental music licenses for rappers and/or singers to perform and record over. One of the most common license types is the "Premium Stem License", where a customer receives the tracked-out stem files of the instrumental. This allows the artists to have more control over the mixing of the final song.
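Downstream of delivery, combining stems is conceptually just a gain-weighted sum of time-aligned audio buffers. The NumPy sketch below mixes hypothetical dialog, music, and effects (D-M-E) stems; the signals and gain values are invented for illustration, and a real mix involves far more than summing (bus processing, panning, limiting).

```python
import numpy as np

# Three hypothetical stereo stems, already time-aligned and at the same
# sample rate: arrays of shape (num_samples, 2), float32 in [-1, 1].
num_samples = 48000
rng = np.random.default_rng(0)
dialog = rng.uniform(-0.1, 0.1, (num_samples, 2)).astype(np.float32)
music = rng.uniform(-0.1, 0.1, (num_samples, 2)).astype(np.float32)
effects = rng.uniform(-0.1, 0.1, (num_samples, 2)).astype(np.float32)

def db_to_gain(db: float) -> float:
    """Convert a decibel value to a linear amplitude gain."""
    return 10.0 ** (db / 20.0)

# Per-stem gains chosen by the mix engineer (values invented here).
mix = (db_to_gain(0.0) * dialog +
       db_to_gain(-6.0) * music +
       db_to_gain(-3.0) * effects)

# Guard against clipping before delivery.
mix = np.clip(mix, -1.0, 1.0)
print(mix.shape, float(np.abs(mix).max()))
```

Swapping the dialog stem for a foreign-language version, as described above for "M&E" deliveries, amounts to replacing the `dialog` array while leaving the other stems and gains untouched.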
**Crystallization adjutant** Crystallization adjutant: A crystallization adjutant is a material used to promote crystallization, normally in a context where a material does not crystallize naturally from a pure solution. Additives in Macromolecular Crystallization: In macromolecular crystallography, the term additive is used instead of adjutant. An additive can either interact directly with the protein and become incorporated at a fixed position in the resulting crystal, or play a role within the disordered solvent, which in protein crystals constitutes roughly 50% of the lattice volume. Additives in Macromolecular Crystallization: Polyethylene glycols of various molecular weights and high-ionic strength salts such as ammonium sulfate and sodium citrate that induce protein precipitation when used in high concentrations are classified as precipitants, while certain other salts such as zinc sulfate or calcium sulfate that may cause a protein to precipitate vigorously even when used in small amounts are considered adjutants. Crystallization adjutants are considered additives when they are effective at relatively low concentrations. Additives in Macromolecular Crystallization: The distinction between buffers and adjutants is also fuzzy. Buffer molecules can become part of the lattice (for example, HEPES becomes incorporated in crystals of human neutrophil collagenase) but their main use is to maintain the rather precise pH requirements for crystallization that many proteins have. Commonly used buffers such as citrate have a high ionic strength and at the typical buffer concentrations they also act as precipitants. Various species such as Ca2+ and Zn2+ are a biological requirement for certain proteins to fold correctly and certain co-factors are needed to maintain a well defined conformation. Certain strategies, like replacing precipitants and buffers with others intended to have a similar effect, have been used to differentiate between the roles played in protein crystallization by the various components in the crystallization solution. Additives for Membrane Protein Crystallization: For membrane proteins, the situation is more complicated because the system that is being crystallized is not the membrane protein itself but the micellar system in which the membrane protein is embedded. Additives for Membrane Protein Crystallization: The size of the protein-detergent mixed micelles is affected by both additives and detergents, which strongly influence the crystals obtained. In addition to varying the concentration of primary detergents, additives (lipids and alcohols) and secondary detergents can be used to modulate the size and shape of the detergent micelles. By reducing the size of the mixed micelles, lattice-forming protein-protein contacts are encouraged. Lipid cubic phases, spontaneous self-assembling liquid crystals or lipid mesophases, have been used successfully in the crystallization of integral membrane proteins. Temperature, salts, detergents, and various additives are used in this system to tailor the cubic phase to suit the target protein. Typical detergents used are n-dodecyl-β-d-maltopyranoside, n-decyl-β-d-glucopyranoside, lauryldimethylamine oxide LDAO, n-hexyl-β-d-glucopyranoside, n-nonyl-β-d-glucopyranoside and n-octyl-β-d-glucopyranoside; the various lipids are dioleoyl phosphatidylcholine, dioleoyl phosphatidylethanolamine and monoolein.
**Desmoplastic small-round-cell tumor** Desmoplastic small-round-cell tumor: Desmoplastic small-round-cell tumor (DSRCT) is an aggressive and rare cancer that primarily occurs as masses in the abdomen. Other areas affected may include the lymph nodes, the lining of the abdomen, diaphragm, spleen, liver, chest wall, skull, spinal cord, large intestine, small intestine, bladder, brain, lungs, testicles, ovaries, and the pelvis. Reported sites of metastatic spread include the liver, lungs, lymph nodes, brain, skull, and bones. It is characterized by the EWS-WT1 fusion protein. Desmoplastic small-round-cell tumor: The tumor is classified as a soft tissue sarcoma and a small round blue cell tumor. It most often occurs in male children. The disease rarely occurs in females, but when it does the tumors can be mistaken for ovarian cancer. Signs and symptoms: There are few early warning signs that a patient has a DSRCT. Patients are often young and healthy as the tumors grow and spread uninhibited within the abdominal cavity. These are rare tumors and the symptoms are often misdiagnosed by physicians. The abdominal masses can grow to enormous size before being noticed by the patient. The tumors can be felt as hard, round masses by palpating the abdomen. First symptoms of the disease often include abdominal distention, abdominal mass, abdominal or back pain, gastrointestinal obstruction, lack of appetite, ascites, anemia, and cachexia. Other reported symptoms include unknown lumps, thyroid conditions, hormonal conditions, blood clotting, kidney and urological problems, and testicular, breast, uterine, vaginal, and ovarian masses. Genetics: There are no known risk factors that have been identified specific to the disease. The tumor appears to arise from the primitive cells of childhood and is considered a childhood cancer. Research has indicated that there is a chimeric relationship between DSRCT and Wilms' tumor and Ewing sarcoma. Together with neuroblastoma and non-Hodgkin's lymphoma, they form the small-cell tumors. DSRCT is associated with a unique chromosomal translocation (notated as t(11;22)(p13;q12)) that merges the EWSR1 FET family gene normally located on band 12 of the long (or "q") arm of chromosome 22 with part of the WT1 transcription factor gene normally located on band 13 of the short arm of chromosome 11. The resulting EWSR1-WT1 fusion gene is converted to a fusion transcript that directs the formation of an EWSR1-WT1 chimeric protein. The EWSR1-WT1 chimeric protein contains the N-terminal transactivation domain of EWSR1 and the DNA-binding domain of WT1. This translocation is seen in virtually all cases of DSRCT. The EWS/WT1 translocation product targets ENT4. ENT4 is also known as PMAT. Pathology: The entity was first described by pathologists William L. Gerald and Juan Rosai in 1989. Pathology reveals well-circumscribed solid tumor nodules within a dense desmoplastic stroma. Often areas of central necrosis are present. Tumor cells have hyperchromatic nuclei with an increased nuclear/cytoplasmic ratio. On immunohistochemistry, these cells show trilinear coexpression including the epithelial marker cytokeratin, the mesenchymal markers desmin and vimentin, and the neuronal marker neuron-specific enolase. Thus, although initially thought to be of mesothelial origin due to its sites of presentation, it is now hypothesized to arise from a progenitor cell with multiphenotypic differentiation.
Diagnosis: Differential diagnosis Because this is a rare tumor, not many family physicians or oncologists are familiar with this disease. DSRCT in young patients can be mistaken for other abdominal tumors including rhabdomyosarcoma, neuroblastoma, and mesenteric carcinoid. In older patients DSRCT can resemble lymphoma, peritoneal mesothelioma, and peritoneal carcinomatosis. In males DSRCT may be mistaken for germ cell or testicular cancer, while in females DSRCT can be mistaken for ovarian cancer. DSRCT shares characteristics with other small round blue cell cancers including Ewing's sarcoma, acute leukemia, small cell mesothelioma, neuroblastoma, primitive neuroectodermal tumor, rhabdomyosarcoma, and Wilms' tumor. Treatment: DSRCT is frequently misdiagnosed. Adult patients should always be referred to a sarcoma specialist. This is an aggressive, rare, fast-spreading tumor, and both pediatric and adult patients should be treated at a sarcoma center. There is no standard protocol for the disease; however, recent journals and studies have reported that some patients respond to high-dose (P6 Protocol) chemotherapy, maintenance chemotherapy, debulking operation, cytoreductive surgery, and radiation therapy. Other treatment options include hematopoietic stem cell transplantation, intensity-modulated radiation therapy, radiofrequency ablation, stereotactic body radiation therapy, intraperitoneal hyperthermic chemoperfusion, and clinical trials. Prognosis: The prognosis for DSRCT remains poor. Prognosis depends upon the stage of the cancer. Because the disease can be misdiagnosed or remain undetected, tumors frequently grow large within the abdomen and metastasize or seed to other parts of the body. There is no known organ or area of origin. DSRCT can metastasize through lymph nodes or the blood stream. Sites of metastasis include the spleen, diaphragm, liver, large and small intestine, lungs, central nervous system, bones, uterus, bladder, genitals, abdominal cavity, and the brain. A multi-modality approach of high-dose chemotherapy, aggressive surgical resection, radiation, and stem cell rescue improves survival for some patients. Reports have indicated that patients will initially respond to first-line chemotherapy and treatment but that relapse is common. Some patients in remission or with inoperable tumors seem to benefit from long-term low-dose chemotherapy, turning DSRCT into a chronic disease. Research: The Stehlin Foundation currently offers DSRCT patients the opportunity to send samples of their tumors free of charge for testing. Research scientists are growing the samples on nude mice and testing various chemical agents to find which are most effective against the individual's tumor. Patients with advanced DSRCT may qualify to participate in clinical trials that are researching new drugs to treat the disease. The Cory Monzingo Foundation is a 501(c)(3) organization that supports research into treatments and a cure for DSRCT; it provides funding to MD Anderson Cancer Center and may also fund other nonprofit cancer research organizations. In 2002, Nishio et al. established a novel human tumor cell line derived from the pleural effusion of a patient with a typical intra-abdominal DSRCT, called JN-DSRCT-1, which can now be used in research. In 2018, St. Jude Children's Research Hospital made resources available from the Childhood Solid Tumor Network, which upon request gives access to patient-derived orthotopic xenografts.
Alternative names: This disease is also known as: desmoplastic small round blue cell tumor; intra-abdominal desmoplastic small round blue cell tumor; desmoplastic small cell tumor; desmoplastic cancer; desmoplastic sarcoma; DSRCT. There is no connection to peritoneal mesothelioma, which is another disease sometimes described as desmoplastic.
**17 Lyrae** 17 Lyrae: 17 Lyrae is a multiple star system in the constellation Lyra, 143 light-years away from Earth. Components: The 17 Lyrae system contains two visible components, designated A and B, separated by 2.48" in 1997. The primary star is a single-lined spectroscopic binary with a period of 42.9 days. There was once thought to be a fourth star in the system, the red dwarf binary Kuiper 90, designated 17 Lyrae C, until it became evident that the star's parallax and proper motion were too different for it to be part of the system. The separation between 17 Lyrae AB and C is increasing rapidly, from less than 2' in 1881 to nearly 5' in 2014. A number of other visual companions have been catalogued. The closest is an 11th magnitude star at 39", and the brightest is BD+32 3325 just over 2' away. Properties: The primary component, 17 Lyrae A, is a 5th magnitude main sequence star of spectral type F0, meaning it has a surface temperature of about 6,750 K. It is about 60% more massive than the Sun and 16 times more luminous. It has been catalogued as an Am star but is now believed to be a relatively normal, quickly rotating star. The visible companion 17 Lyrae B is a 9th magnitude star of unknown spectral type. The spectroscopic companion cannot be detected in the spectrum and its properties are uncertain. Faint sharp spectral lines contrasting with the broadened lines of the primary may originate in a shell of material around the stars.
**Sub-bituminous coal** Sub-bituminous coal: Sub-bituminous coal is a lower grade of coal that contains 35–45% carbon. The properties of this type are between those of lignite, the lowest grade of coal, and those of bituminous coal, the second-highest grade of coal. Sub-bituminous coal is primarily used as a fuel for steam-electric power generation. Properties: Sub-bituminous coals may be dull, dark brown to black, soft and crumbly at the lower end of the range, to bright jet-black, hard, and relatively strong at the upper end. They contain 15–30% inherent moisture by weight and are non-coking (they undergo little swelling upon heating). The heat content of sub-bituminous coals ranges from 8,300 to 11,500 BTU/lb, or 19.3 to 26.7 MJ/kg. Their relatively low density and high water content render some types of sub-bituminous coal susceptible to spontaneous combustion if not packed densely during storage to exclude free air flow. Reserves: A major source of sub-bituminous coal in the United States is the Powder River Basin in Wyoming. Current use: Sub-bituminous coals in the United States typically have a sulfur content of less than 1% by weight, which makes them an attractive choice for power plants seeking to reduce SO2 emissions under the Acid Rain Program.
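The quoted heat-content range can be checked with the standard conversion 1 BTU/lb = 2.326 kJ/kg; a quick sketch:

```python
# Verify the quoted BTU/lb figures against the MJ/kg range (1 BTU/lb = 2.326 kJ/kg).
BTU_PER_LB_TO_MJ_PER_KG = 2.326e-3

for btu in (8300, 11500):
    print(f"{btu} BTU/lb = {btu * BTU_PER_LB_TO_MJ_PER_KG:.1f} MJ/kg")
# -> 19.3 MJ/kg and 26.7 MJ/kg, matching the quoted range
```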
**Six-Word Memoirs** Six-Word Memoirs: Six-Word Memoirs is a project and book series created by the U.S.-based online storytelling magazine Smith Magazine. History: In November 2006, Smith's editors Larry Smith and Rachel Fershleiser asked Smith readers to tell their life story in just six words, taking inspiration from novelist Ernest Hemingway (who, according to literary legend, was once challenged to write a short story in only six words, resulting in “For sale: baby shoes, never worn”). Smith readers submitted their memoirs via www.smithmag.net and Smith's Twitter account. In early 2007, Smith signed with Harper Perennial to create the Six-Word Memoir book series. Six-Word Memoir books: The first in Smith's Six-Word Memoir book series, Not Quite What I Was Planning: Six-Word Memoirs from Writers Famous & Obscure, was released in early 2008. It collected almost 1,000 memoirs, including contributions from celebrities such as Richard Ford, Deepak Chopra, and Moby. It was a New York Times bestseller, was featured in many stories in The New Yorker, and was highlighted on National Public Radio's Talk of the Nation. Six-Word Memoir books: In early 2009, Smith released a follow-up, Six-Word Memoirs on Love and Heartbreak, containing hundreds of personal stories about romance. Another follow-up was released in late 2009; I Can't Keep My Own Secrets: Six-Word Memoirs by Teens Famous & Obscure dealt with the experiences of teenage life and as such was written by and for teens. The most recent in the series, It All Changed in an Instant: More Six-Word Memoirs by Writers Famous & Obscure, was released in early 2010 and was marketed as the general sequel to Not Quite What I Was Planning. Recognition: Not Quite What I Was Planning was listed as a New York Times bestseller in 2008 in the "advice, how to and miscellaneous" category. In April 2009, The Denver Post listed Six-Word Memoirs on Love and Heartbreak as the 5th bestselling non-fiction paperback in the Denver area, according to sales at the Tattered Cover Book Store, Barnes & Noble in Greenwood Village, the Boulder Book Store, and Borders Books in Lone Tree. Community impact: The Six-Word Memoir format has been used as a writing exercise by teachers, with examples ranging from second-grade classrooms to graduate schools; furthermore, HarperCollins created a guide to encourage the format as an instructional tool. Six-Word Memoirs have been used in hospital wards, have appeared in a eulogy, and have been suggested as a form of prayer by a preacher in North Carolina. Six-Word Memoir videos from individuals ranging from teenager Micah Gray to bestselling author Daniel Handler have been posted to YouTube. 6 Words Minneapolis, a public art project, employed the format to build community and empathy among the citizens of Minneapolis.
**EPC QR code** EPC QR code: The European Payments Council Quick Response Code guidelines define the content of a QR code that can be used to initiate a SEPA credit transfer (SCT). It contains all the necessary information in clear text. These QR code guidelines are used on many invoices and payment requests in the countries that support it (Austria, Belgium, Finland, Germany, The Netherlands), enabling tens of millions to pay without requiring manual input, leading to lower error rates. EPC QR code: The EPC guidelines are available from the EPC itself. Another version has also been published by the Federation of Finnish Financial Services (FFI). Sample content: So the QR string could be, one field per line:
BCD
001
1
SCT
BPOTBEB1
Red Cross of Belgium
BE72000000001616
EUR1
CHAR
Urgency fund
[Figure: sample EPC QR code]
History: In the course of 2012, Austrian payment facilitator STUZZA (now part of PSA Payment Services Austria) defined the content of a QR code that could be used to initiate money transfers within the Single Euro Payments Area. In February 2013, the European Payments Council (EPC) published the document 'Quick Response Code: Guidelines to Enable Data Capture for the Initiation of a Credit Transfer'. These guidelines were quickly adopted by the Austrian banks. These QR codes can be recognized by the words "Zahlen mit Code" on the right. The guidelines were later used in Finland (2015), Germany (2015), the Netherlands (2016) and Belgium (2016).
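A sketch in Python of assembling the payload, following the field order of the sample above (service tag, version, character set, identification, BIC, name, IBAN, amount, purpose, remittance text); the function name is illustrative, and the EPC guidelines define further optional fields that are omitted here:

```python
# Build a newline-separated EPC payload string from the fields shown above.
def epc_payload(bic, name, iban, amount_eur, purpose, remittance):
    fields = [
        "BCD",               # service tag
        "001",               # guideline version
        "1",                 # character set (1 = UTF-8)
        "SCT",               # identification: SEPA credit transfer
        bic,
        name,
        iban,
        f"EUR{amount_eur}",  # amount, prefixed with the currency
        purpose,
        remittance,
    ]
    return "\n".join(fields)

print(epc_payload("BPOTBEB1", "Red Cross of Belgium",
                  "BE72000000001616", "1", "CHAR", "Urgency fund"))
```

Feeding the resulting string to any QR generator yields a scannable payment code.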
**Hexanitrostilbene** Hexanitrostilbene: Hexanitrostilbene (HNS), also called JD-X, is an organic compound with the formula [(O2N)3C6H2CH]2. It is a yellow-orange solid. It is used as a heat-resistant high explosive. It is slightly soluble (0.1–5 g/100 mL) in butyrolactone, DMF, DMSO, and N-methylpyrrolidone. Production and use: It is produced by oxidizing trinitrotoluene (TNT) with a solution of sodium hypochlorite. HNS is more resistant to heat than TNT, and like TNT it is insensitive to impact. When casting TNT, HNS is added at 0.5% to form erratic micro-crystals within the TNT, which prevent cracking. Because of its insensitivity combined with its high explosive performance, HNS is used in space missions. It was the main explosive fill in the seismic source generating mortar ammunition canisters used as part of the Apollo Lunar Active Seismic Experiments. Its heat of detonation is 4 kJ/g. It was developed by Kathryn Grove Shipp at the U.S. Naval Ordnance Laboratory in the 1960s and has been improved on since then.
**Solvent bonding** Solvent bonding: Solvent bonding is one of several methods of adhesive bonding for joining plastics. Application of a solvent to a thermoplastic material softens the polymer, and with applied pressure this results in polymer chain interdiffusion at the bonding junction. When the solvent evaporates, this leaves a fully consolidated bond-line. An advantage of solvent bonding over other polymer joining methods is that bonding generally occurs below the glass transition temperature of the polymer. Solvent bonding differs from adhesive bonding in that the solvent does not become a permanent addition to the joined substrate. Solvent bonding differs from other plastic welding processes in that heating energy is generated by the chemical reaction between the solvent and the thermoplastic, and cooling occurs during evaporation of the solvent. Solvent bonding can be performed using a liquid or gas solvent. Liquid solvents are simpler and generally have lower manufacturing costs but are sensitive to surface imperfections that may cause inconsistent or unpredictable bonding. Some available solvents may not react with the thermoplastic at room temperature but will react at an elevated temperature, resulting in a bond. Curing times are highly variable. Applying solvent methods: Four common application methods are: Brush-on method. The solvent is brushed onto the surfaces to be joined, with pressure subsequently applied until the full strength of the bond is reached after the solvent has fully evaporated. Capillary action method. Commonly used with acrylic components, a consistent gap between the parts allows the solvent to flow along the surfaces to be joined via capillary action. Application is generally performed using a hypodermic needle to allow for precise application in the joint gap. Dip-dab method. A surface to be joined is dipped into a vat of solvent, with the solvent depth being a controlled variable, for a set amount of time. Once the part has been removed from the vat, a screen mesh or foam pad is used to remove the excess solvent before the bonding surfaces are paired. Solvent dispenser method. A dispenser is used to precisely control the amount of solvent applied to each surface. Thermoplastic and solvent compatibility: The proper solvent choice for bonding depends on the solubility of the chosen thermoplastic in the solvent and the processing temperature. The table below provides a selection of solvents commonly used for bonding specific thermoplastics. Mutual solubility between a polymer and a solvent may be determined using the Hildebrand solubility parameter. Polymers will generally be more soluble in solvents with solubility parameters similar to their own in a given state (liquid or solid). The solubility parameters of polymers are not greatly affected by changes in temperature; however, the solubility parameters of liquids are affected by temperature. Increasing the temperature lowers the free energy of mixing, promoting dissolution at the interface and interdiffusion bonding. Testing solvent-bonded joints: There are three main mechanical testing methods for plastic bonding joints: tensile testing, the tensile shear test and the peel test. Tensile testing using a butt joint configuration is not very conducive to polymers, particularly thin sheets, due to the challenges of mounting to the load frame. An epoxy may be used for mounting and can lead to failure at the epoxy/polymer interface instead of in the bonded joint.
The most common method for testing solvent bonds is the tensile shear test using a lap joint configuration. Specimens are tested in shear to failure at a given overlap cross section via tensile loading. This testing method is particularly suitable for thin specimens because the loading mechanism mitigates distortion in the test specimens. Guidance for tensile shear testing may be found in ASTM D1002-05. Industrial applications: Several industries utilize solvent bonding in their applications, including microchip manufacturing, medical devices, and potable and sanitary plumbing systems.
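The solubility-parameter screening described under "Thermoplastic and solvent compatibility" can be illustrated by ranking candidate solvents by how closely their Hildebrand parameter matches the polymer's. The numeric values below are rough literature-style figures in MPa^0.5, not vetted process data:

```python
# Rank candidate solvents by |delta_solvent - delta_polymer|; a smaller mismatch
# suggests better mutual solubility and hence a more promising bonding solvent.
polymer_delta = 18.6  # e.g., a polystyrene-like thermoplastic (illustrative)
solvents = {"toluene": 18.2, "acetone": 19.9, "ethanol": 26.5}

ranked = sorted(solvents.items(), key=lambda kv: abs(kv[1] - polymer_delta))
for name, delta in ranked:
    print(f"{name}: mismatch {abs(delta - polymer_delta):.1f} MPa^0.5")
# toluene ranks first here, consistent with "like dissolves like"
```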
**Natter Social Network** Natter Social Network: Natter was a social network, often referred to as a "microblogging" or even "micro-microblogging" platform. Natter allowed its users, or “natterers”, to post up to 100 characters and an image in each post, which could then be seen by any online users. Natter was first launched in 2014 as a website at natter.com, and later launched apps for both iOS and Android. Originally, each post could only contain up to three words and a hashtag. On Friday, July 29, 2016, Natter ceased operations due to competition as well as a lack of funds. In February 2017 Natter relaunched with a redesigned chat-style app including direct messaging between users. On September 6, 2018, Natter was shut down once again due to a steep decline in its userbase. Potential buyers of the software had approached Neil Stanley, the owner of Natter at the time, but ultimately backed out because the asking price was too high. As of 2018, Natter's official website is defunct. Community: Since its relaunch, the Natter user base grew rapidly, aided significantly by posts made on other social media sites such as Tumblr and Twitter. Natter thus developed a tight-knit community, to the extent that it was commonly referred to as a family by regular users. The community often put on role-playing events dubbed "god wars", in which users changed their icons and adopted characterized personas. These events resulted in the development of "Natter Lore" surrounding some users and the nature of the social media site. Community: Once Natter had been shut down, with none willing to purchase it, a Mastodon instance of the same name was created, prompting a significant portion of the community to migrate to it.
**4-AcO-DPT** 4-AcO-DPT: 4-Acetyloxy-N,N-dipropyltryptamine (or 4-AcO-DPT) is a tryptamine derivative. 4-AcO-DPT has been sold as a designer drug. It is an ester of 4-HO-DPT, a psychedelic tryptamine first synthesized by Alexander Shulgin. Anecdotal reports indicate that 4-AcO-DPT exerts psychoactive effects in humans; however, the pharmacology of 4-AcO-DPT has not been examined.
**Journal of Clinical Interventional Radiology** Journal of Clinical Interventional Radiology: The Journal of Clinical Interventional Radiology is a triannual open-access peer-reviewed medical journal covering all aspects of vascular and non-vascular interventional radiology. It is published by Thieme Medical Publishers on behalf of the Indian Society of Vascular and Interventional Radiology. It was established in 2017 and the editor-in-chief is Shyamkumar Nidugala Keshava (Christian Medical College Vellore). Abstracting and indexing: The journal is abstracted and indexed in Embase and Scopus.
**Variation Selectors (Unicode block)** Variation Selectors (Unicode block): Variation Selectors is the block name of a Unicode code point block containing 16 variation selectors used to specify a glyph variant for a preceding character. They are currently used to specify standardized variation sequences for mathematical symbols, emoji symbols, 'Phags-pa letters, and CJK unified ideographs corresponding to CJK compatibility ideographs. At present only standardized variation sequences with VS1, VS2, VS3, VS15 and VS16 have been defined; VS15 and VS16 are reserved to request that a character should be displayed as text or as an emoji, respectively. These combining characters are named variation selector-1 (for U+FE00) through variation selector-16 (U+FE0F), and are abbreviated VS1 – VS16. Each applies to the immediately preceding character. Variation Selectors (Unicode block): As of Unicode 13.0: CJK compatibility ideograph variation sequences contain VS1–VS3 (U+FE00–U+FE02); CJK Unified Ideographs Extension A and B variation sequences contain VS1 (U+FE00) and VS2 (U+FE01); emoji variation sequences contain VS16 (U+FE0F) for emoji style (with color) or VS15 (U+FE0E) for text style (monochrome); Basic Latin, Halfwidth and Fullwidth Forms, Manichaean, Myanmar, Myanmar Extended-A, Phags-pa, and mathematical variation sequences contain only VS1 (U+FE00); VS4–VS14 (U+FE03–U+FE0D) are not used for any variation sequences. This list is continued in the Variation Selectors Supplement.
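The text-versus-emoji behaviour of VS15/VS16 is easy to demonstrate: appending U+FE0E or U+FE0F to a character that has both standardized sequences, such as U+2764 HEAVY BLACK HEART, requests one presentation or the other (the actual rendering depends on the font and platform):

```python
# Request explicit text or emoji presentation with variation selectors.
VS15 = "\uFE0E"  # variation selector-15: text presentation
VS16 = "\uFE0F"  # variation selector-16: emoji presentation

heart = "\u2764"     # HEAVY BLACK HEART
print(heart + VS15)  # monochrome text glyph, where supported
print(heart + VS16)  # colored emoji glyph, where supported
```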
**Graticule (cartography)** Graticule (cartography): A graticule (from Latin crāticula 'grill/grating'), on a map, is a graphical depiction of a coordinate system as a grid of lines, each line representing a constant coordinate value. It is thus a form of isoline, and is commonly found on maps of many kinds at scales from the local to the global. Graticule (cartography): The term is almost always used to refer specifically to the parallels and meridians of latitude and longitude, respectively, in the geographic coordinate system. Grid lines for other coordinate reference systems, such as Universal Transverse Mercator, are commonly placed on maps for the same purposes, with similar meaning, and using similar design, but they are rarely called graticules. Some cartographers have used the term graticule to refer not only to the visual lines, but to the system of latitude and longitude reference itself; however, in the era of geographic information systems, this is far less common than calling it the geographic coordinate system. History: The graticule is of ancient origin, being almost as old as the concepts of the spherical earth, the coordinate system for measuring geographic locations, and the map projection. Strabo, in his Geography (c. 20 AD), states that the maps in Eratosthenes's Geography Book 3 (3rd century BC, now lost) contained lines "drawn from west to east, parallel to the equatorial line" (thus the term parallel). Ptolemy's Geography (c. 150 AD) gives detailed instructions for drawing the parallels and meridians for his two projections. The works of Ptolemy and other classical geographers were available to the scientists of medieval Islam. Some, such as al-Khwarizmi, further developed these works, including creating maps on a graticule of latitude and longitude. History: During the European middle ages, graticules disappeared from the few maps that were produced; T and O maps in particular were more concerned with religious cosmology than accurate representation of location. The portolan charts of the 13th to 15th centuries were much more accurate, but used rhumb lines that were much more useful for sea navigation than latitude and longitude. At the same time, however, the rediscovery of Ptolemy and other classical knowledge of the shape and size of the Earth led to the recreation of some of the ancient maps with their graticules; the earliest extant copies of Ptolemy's Geography with his maps date to the 14th and 15th centuries. Starting in the 16th century, the graticule has been ubiquitous on global and continental scale maps. History: There is some debate over whether the Chinese and other Asians knew the world to be spherical prior to Western contact, but most maps appear to treat regions as flat. Although Chinese maps do not portray any concept of latitude and longitude, cartesian grids appear on some maps dating back to the 11th century. Uses and Design: The graticule may serve several purposes on a map: it can aid map users in estimating the coordinates of locations; aid map users in placing locations having known coordinates; and indicate the cardinal directions, especially on map projections in which these directions vary across the map (e.g. conic, pseudocylindrical, azimuthal). These are usually secondary to the primary purpose of the map, so graticules are often drawn to be relatively low in the visual hierarchy.
**Dynamic Markov compression** Dynamic Markov compression: Dynamic Markov compression (DMC) is a lossless data compression algorithm developed by Gordon Cormack and Nigel Horspool. It uses predictive arithmetic coding similar to prediction by partial matching (PPM), except that the input is predicted one bit at a time (rather than one byte at a time). DMC has a good compression ratio and moderate speed, similar to PPM, but requires somewhat more memory and is not widely implemented. Some recent implementations include the experimental compression programs hook by Nania Francesco Antonio, ocamyd by Frank Schwellinger, and as a submodel in paq8l by Matt Mahoney. These are based on the 1993 implementation in C by Gordon Cormack. Algorithm: DMC predicts and codes one bit at a time. It differs from PPM in that it codes bits rather than bytes, and from context mixing algorithms such as PAQ in that there is only one context per prediction. The predicted bit is then coded using arithmetic coding. Algorithm: Arithmetic coding A bitwise arithmetic coder such as DMC has two components, a predictor and an arithmetic coder. The predictor accepts an n-bit input string x = x_1 x_2 ... x_n and assigns it a probability p(x), expressed as a product of a series of predictions, p(x_1) p(x_2|x_1) p(x_3|x_1 x_2) ... p(x_n|x_1 x_2 ... x_{n-1}). The arithmetic coder maintains two high-precision binary numbers, p_low and p_high, representing the possible range for the total probability that the model would assign to all strings lexicographically less than x, given the bits of x seen so far. The compressed code for x is p_x, the shortest bit string representing a number between p_low and p_high. It is always possible to find a number in this range no more than one bit longer than the Shannon limit, log2(1/p(x)). One such number can be obtained from p_high by dropping all of the trailing bits after the first bit that differs from p_low. Algorithm: Compression proceeds as follows. The initial range is set to p_low = 0, p_high = 1. For each bit, the predictor estimates p_0 = p(x_i = 0 | x_1 x_2 ... x_{i-1}) and p_1 = 1 − p_0, the probability of a 0 or 1, respectively. The arithmetic coder then divides the current range (p_low, p_high) into two parts in proportion to p_0 and p_1. Then the subrange corresponding to the next bit x_i becomes the new range. Algorithm: For decompression, the predictor makes an identical series of predictions, given the bits decompressed so far. The arithmetic coder makes an identical series of range splits, then selects the range containing p_x and outputs the bit x_i corresponding to that subrange. In practice, it is not necessary to keep p_low and p_high in memory to high precision. As the range narrows, the leading bits of both numbers will be the same and can be output immediately. Algorithm: DMC model The DMC predictor is a table which maps (bitwise) contexts to a pair of counts, n_0 and n_1, representing the number of zeros and ones previously observed in this context. Thus, it predicts that the next bit will be a 0 with probability p_0 = n_0/(n_0 + n_1) and 1 with probability p_1 = 1 − p_0. In addition, each table entry has a pair of pointers to the contexts obtained by appending either a 0 or a 1 to the right of the current context (and possibly dropping bits on the left). Thus, it is never necessary to look up the current context in the table; it is sufficient to maintain a pointer to the current context and follow the links.
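A minimal sketch of these two components in Python (illustrative, not Cormack's 1993 C implementation): a state holds the counts n_0 and n_1 plus links to its two successor contexts, and the coder narrows (p_low, p_high) in proportion to the prediction:

```python
# One table entry: bit counts and links to the successor contexts.
class State:
    def __init__(self):
        self.count = [0.2, 0.2]   # n_0, n_1: small nonzero initialization
        self.next = [None, None]  # context reached after a 0 or a 1 bit

def predict_p0(s):
    return s.count[0] / (s.count[0] + s.count[1])  # p_0 = n_0 / (n_0 + n_1)

def encode_bit(s, bit, p_low, p_high):
    # Split the range in proportion to p_0/p_1, keep the observed bit's
    # subrange, count the bit, and follow the link to the next context.
    split = p_low + (p_high - p_low) * predict_p0(s)
    p_low, p_high = (p_low, split) if bit == 0 else (split, p_high)
    s.count[bit] += 1
    return s.next[bit], p_low, p_high

# Toy two-state model wired by hand; any number inside the final interval
# (together with the bit count) encodes the input bits.
a, b = State(), State()
a.next = [a, b]; b.next = [a, b]
lo, hi, s = 0.0, 1.0, a
for bit in (1, 0, 1, 1):
    s, lo, hi = encode_bit(s, bit, lo, hi)
print(lo, hi)
```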
Algorithm: In the original DMC implementation, the initial table is the set of all contexts of length 8 to 15 bits that begin on a byte boundary. The initial state is any of the 8-bit contexts. The counts are floating-point numbers initialized to a small nonzero constant such as 0.2. The counts are not initialized to zero in order to allow values to be coded even if they have not been seen before in the current context. Algorithm: Modeling is the same for compression and decompression. For each bit, p_0 and p_1 are computed, bit x_i is coded or decoded, the model is updated by adding 1 to the count corresponding to x_i, and the next context is found by traversing the link corresponding to x_i. Algorithm: Adding new contexts DMC as described above is equivalent to an order-1 context model. However, it is normal to add longer contexts to improve compression. If the current context is A, and the next context B would drop bits on the left, then DMC may add (clone) a new context C from B. C represents the same context as A after appending one bit on the right, as with B, but without dropping any bits on the left. The link from A will thus be moved from B to point to C. B and C will both make the same prediction, and both will point to the same pair of next states. The total count, n = n_0 + n_1, for C will be equal to the count n_x for A (for input bit x), and that count will be subtracted from B. Algorithm: For example, suppose that state A represents the context 11111. On input bit 0, it transitions to state B representing context 110, obtained by dropping 3 bits on the left. In context A, there have been 4 zero bits and some number of one bits. In context B, there have been 3 zeros and 7 ones (n = 10), which predicts p_1 = 0.7. Algorithm: C is cloned from B. It represents context 111110. Both B and C predict p_1 = 0.7, and both go to the same next states, E and F. The count for C is n = 4, equal to n_0 for A. This leaves n = 6 for B. Algorithm: States are cloned just prior to transitioning to them. In the original DMC, the condition for cloning a state is that the count on the transition from A to B is at least 2, and the count for B is at least 2 more than that. (When the second threshold is greater than 0, it guarantees that other states will still transition to B after cloning.) Some implementations such as hook allow these thresholds to be set as parameters. In paq8l, these thresholds increase as memory is used up, to slow the growth rate of new states. In most implementations, when memory is exhausted the model is discarded and reinitialized back to the original bytewise order-1 model.
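A sketch of the cloning step, reusing the State class from the sketch above; the thresholds are the original DMC defaults just described, and the clone's counts are taken proportionally from B so that its total equals the traffic on the A-to-B transition:

```python
# Before moving from state a to b on input bit x, split off a clone of b
# if the transition count and b's remaining count both meet the thresholds.
def maybe_clone(a, x, threshold1=2, threshold2=2):
    b = a.next[x]
    n_transition = a.count[x]      # traffic on the a -> b transition (n_x for A)
    n_b = b.count[0] + b.count[1]  # total count of b
    if n_transition >= threshold1 and n_b - n_transition >= threshold2:
        c = State()
        c.next = list(b.next)      # c predicts like b and shares its successors
        frac = n_transition / n_b
        c.count = [b.count[0] * frac, b.count[1] * frac]
        b.count = [b.count[0] - c.count[0], b.count[1] - c.count[1]]
        a.next[x] = c              # a's transition now leads to the clone
```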
**Barium carbonate** Barium carbonate: Barium carbonate is the inorganic compound with the formula BaCO3. Like most alkaline earth metal carbonates, it is a white salt that is poorly soluble in water. It occurs as the mineral known as witherite. In a commercial sense, it is one of the most important barium compounds. Preparation: Barium carbonate is made commercially from barium sulfide by treatment with sodium carbonate at 60 to 70 °C (soda ash method) or, more commonly, with carbon dioxide at 40 to 90 °C. In the soda ash process, an aqueous solution of barium sulfide is treated with sodium carbonate: BaS + Na2CO3 → BaCO3 + Na2S. In the carbonation process, carbon dioxide is passed into the barium sulfide solution: BaS + H2O + CO2 → BaCO3 + H2S. Reactions: Barium carbonate reacts with acids such as hydrochloric acid to form soluble barium salts, such as barium chloride: BaCO3 + 2 HCl → BaCl2 + CO2 + H2O. Pyrolysis of barium carbonate gives barium oxide. Uses: It is mainly used to remove sulfate impurities from the feedstock of the chlor-alkali process. Otherwise it is a common precursor to barium-containing compounds such as ferrites. Uses: Other uses Barium carbonate is widely used in the ceramics industry as an ingredient in glazes. It acts as a flux, a matting and crystallizing agent, and combines with certain colouring oxides to produce unique colours not easily attainable by other means. Its use is somewhat controversial since it can leach from glazes into food and drink. To reduce toxicity concerns, it is often substituted with strontium carbonate, which behaves in a similar way in glazes but is of lower toxicity. Uses: In the brick, tile, earthenware and pottery industries barium carbonate is added to clays to precipitate soluble salts (calcium sulfate and magnesium sulfate) that cause efflorescence.
**Nigro protocol** Nigro protocol: The Nigro protocol is the preoperative use of chemotherapy with 5-fluorouracil and mitomycin together with radiation therapy for squamous cell carcinomas of the anal canal. The success of the preoperative regimen changed the paradigm of anal cancer treatment from surgical to non-surgical, and was the advent of definitive chemoradiation (omitting surgery) being accepted as a standard of care for anal squamous cell carcinomas. Larger doses of radiation are used in modern chemoradiotherapy protocols than the original Nigro protocol radiotherapy dose. Nigro protocol: In the Nigro protocol, the patient receives 30 Gy (3,000 rads) of radiation over a three-week period, as well as continuous administration of fluorouracil for the first four days and on days 20–31, with bolus mitomycin on day 1. It is named after Norman Nigro (1912–2009), who developed it in the mid-1970s. Patients who still have residual disease after receiving the protocol should undergo salvage APR (abdomino-perineal resection); adequate time should be allowed for regression. The immediate complete response rate was in the 75% range in Nigro's original reports. Response to treatment can be evaluated every 6–8 weeks for many months if disease is regressing or clinically stable. Any sign of progressive disease should prompt reassessment with biopsy and subsequent surgery with the aforementioned APR.
**Infinitive** Infinitive: Infinitive (abbreviated INF) is a linguistics term for certain verb forms existing in many languages, most often used as non-finite verbs. As with many linguistic concepts, there is not a single definition applicable to all languages. The name is derived from Late Latin [modus] infinitivus, a derivative of infinitus meaning "unlimited". Infinitive: In traditional descriptions of English, the infinitive is the basic dictionary form of a verb when used non-finitely, with or without the particle to. Thus to go is an infinitive, as is go in a sentence like "I must go there" (but not in "I go there", where it is a finite verb). The form without to is called the bare infinitive, and the form with to is called the full infinitive or to-infinitive. Infinitive: In many other languages the infinitive is a distinct single word, often with a characteristic inflective ending, like cantar ("[to] sing") in Portuguese, morir ("[to] die") in Spanish, manger ("[to] eat") in French, portare ("[to] carry") in Latin and Italian, lieben ("[to] love") in German, читать (chitat', "[to] read") in Russian, etc. However, some languages have no infinitive forms. Many Native American languages, Arabic, Asian languages such as Japanese, and some languages in Africa and Australia do not have direct equivalents to infinitives or verbal nouns. Instead, they use finite verb forms in ordinary clauses or various special constructions. Infinitive: Being a verb, an infinitive may take objects and other complements and modifiers to form a verb phrase (called an infinitive phrase). Like other non-finite verb forms (like participles, converbs, gerunds and gerundives), infinitives do not generally have an expressed subject; thus an infinitive verb phrase also constitutes a complete non-finite clause, called an infinitive (infinitival) clause. Such phrases or clauses may play a variety of roles within sentences, often functioning as nouns (for example as the subject of a sentence or as a complement of another verb), and sometimes as adverbs or other types of modifier. Many verb forms known as infinitives differ from gerunds (verbal nouns) in that they do not inflect for case or occur in adpositional phrases. Instead, infinitives often originate in earlier inflectional forms of verbal nouns. Unlike finite verbs, infinitives are not usually inflected for tense, person, etc. either, although some degree of inflection sometimes occurs; for example Latin has distinct active and passive infinitives. Phrases and clauses: An infinitive phrase is a verb phrase constructed with the verb in infinitive form. This consists of the verb together with its objects and other complements and modifiers. Some examples of infinitive phrases in English are given below – these may be based on either the full infinitive (introduced by the particle to) or the bare infinitive (without the particle to). Phrases and clauses: (to) sleep; (to) write ten letters; (to) go to the store for a pound of sugar. Infinitive phrases often have an implied grammatical subject, making them effectively clauses rather than phrases. Such infinitive clauses, or infinitival clauses, are one of several kinds of non-finite clause. They can play various grammatical roles as constituents of a larger clause or sentence; for example, they may form a noun phrase or act adverbially.
Infinitival clauses may be embedded within each other in complex ways, as in the sentence: I want to tell you that John Welborn is going to get married to Blair. Here the infinitival clause to get married is contained within the finite dependent clause that John Welborn is going to get married to Blair; this in turn is contained within another infinitival clause, which is contained in the finite independent clause (the whole sentence). Phrases and clauses: The grammatical structure of an infinitival clause may differ from that of a corresponding finite clause. For example, in German, the infinitive form of the verb usually goes to the end of its clause, whereas a finite verb (in an independent clause) typically comes in second position. Clauses with implicit subject in the objective case: Following certain verbs or prepositions, infinitives commonly do have an implicit subject, e.g., I want them to eat their dinner. Clauses with implicit subject in the objective case: For him to fail now would be a disappointment. As these examples illustrate, the implicit subject of the infinitive occurs in the objective case (them, him), in contrast to the nominative case that occurs with a finite verb, e.g., "They ate their dinner." Such accusative and infinitive constructions are present in Latin and Ancient Greek, as well as many modern languages. The atypical case marking of the implicit subject of an infinitive is an example of exceptional case-marking. As the above examples show, the pronouns that serve as subjects of the infinitives surface as the object of the transitive verb "want" and of the preposition "for". Marking for tense, aspect and voice: In some languages, infinitives may be marked for grammatical categories like voice, aspect, and to some extent tense. This may be done by inflection, as with the Latin perfect and passive infinitives, or by periphrasis (with the use of auxiliary verbs), as with the Latin future infinitives or the English perfect and progressive infinitives. Latin has present, perfect and future infinitives, with active and passive forms of each. For details see Latin conjugation § Infinitives. Marking for tense, aspect and voice: English has infinitive constructions that are marked (periphrastically) for aspect: perfect, progressive (continuous), or a combination of the two (perfect progressive). These can also be marked for passive voice (as can the plain infinitive): (to) eat (plain infinitive, active); (to) be eaten (passive); (to) have eaten (perfect active); (to) have been eaten (perfect passive); (to) be eating (progressive active); (to) be being eaten (progressive passive); (to) have been eating (perfect progressive active); (to) have been being eaten (perfect progressive passive, not often used). Further constructions can be made with other auxiliary-like expressions, like (to) be going to eat or (to) be about to eat, which have future meaning. For more examples of the above types of construction, see Uses of English verb forms § Perfect and progressive non-finite constructions. Marking for tense, aspect and voice: Perfect infinitives are also found in other European languages that have perfect forms with auxiliaries similarly to English. For example, avoir mangé means "(to) have eaten" in French. English: Regarding English, the term "infinitive" is traditionally applied to the unmarked form of the verb (the "plain form") when it forms a non-finite verb, whether or not introduced by the particle to.
Hence sit and to sit, as used in the following sentences, would each be considered an infinitive: I can sit here all day. I want to sit on the other chair. The form without to is called the bare infinitive; the form introduced by to is called the full infinitive or to-infinitive. English: The other non-finite verb forms in English are the gerund or present participle (the -ing form), and the past participle – these are not considered infinitives. Moreover, the unmarked form of the verb is not considered an infinitive when it forms a finite verb: such as a present indicative ("I sit every day"), subjunctive ("I suggest that he sit"), or imperative ("Sit down!"). (For some irregular verbs the form of the infinitive additionally coincides with that of the past tense and/or past participle, as in the case of put.) Certain auxiliary verbs are defective in that they do not have infinitives (or any other non-finite forms). This applies to the modal verbs (can, must, etc.), as well as certain related auxiliaries like the had of had better and the used of used to. (Periphrases can be employed instead in some cases, like (to) be able to for can, and (to) have to for must.) It also applies to the auxiliary do, as used in questions, negatives and emphasis, as described under do-support. (Infinitives are negated by simply preceding them with not. Of course, the verb do can appear in the infinitive when it forms a main verb.) However, the auxiliary verbs have (used to form the perfect) and be (used to form the passive voice and continuous aspect) both commonly appear in the infinitive: "I should have finished by now"; "It's thought to have been a burial site"; "Let him be released"; "I hope to be working tomorrow." Huddleston and Pullum's Cambridge Grammar of the English Language (2002) does not use the notion of the "infinitive" ("there is no form in the English verb paradigm called 'the infinitive'"), only that of the infinitival clause, noting that English uses the same form of the verb, the plain form, in infinitival clauses that it uses in imperative and present-subjunctive clauses. A matter of controversy among prescriptive grammarians and style writers has been the appropriateness of separating the two words of the to-infinitive (as in "I expect to happily sit here"). For details of this, see split infinitive. Opposing linguistic theories typically do not consider the to-infinitive a distinct constituent, instead regarding the scope of the particle to as an entire verb phrase; thus, to buy a car is parsed as to [buy [a car]], not as [to buy] [a car]. English: Uses of the infinitive The bare infinitive and the to-infinitive have a variety of uses in English. The two forms are mostly in complementary distribution – certain contexts call for one, and certain contexts for the other; they are not normally interchangeable, except in occasional instances like after the verb help, where either can be used. English: The main uses of infinitives (or infinitive phrases) are as follows: As complements of other verbs. The bare infinitive form is a complement of the dummy auxiliary do, most modal auxiliary verbs, verbs of perception like see, watch and hear (after a direct object), and the verbs of permission or causation make, bid, let, and have (also after a direct object). The to-infinitive is used after many transitive verbs like want, aim, like, fail, etc., and as an object complement of a direct object with verbs like want, convince, aim, etc.
English: In various particular expressions, like had better and would rather (with the bare infinitive), and in order to, as if to, am to/is to/are to. English: As a noun phrase, expressing its action or state in an abstract, general way, forming the subject of a clause or a predicative expression: "To err is human"; "To know me is to love me". The bare infinitive can be used in sentences like "What you should do is make a list." A common construction with the to-infinitive involves a dummy pronoun subject (it), with the infinitive phrase placed after the predicate: "It was nice to meet you." Adverbially, to express purpose, intent or result, as the to-infinitive can have the meaning of in order to, e.g. "I closed the door in order to block out any noise." As a modifier of a noun or adjective. This may relate to the meaning of the noun or adjective ("a request to see someone"; "keen to get on"), or it may form a type of non-finite relative clause, as in "the man to save us"; "the method to use"; "nice to listen to". English: In elliptical questions (direct or indirect): "I don't know where to go." After why the bare infinitive is used: "Why reveal it?" The infinitive is also the usual dictionary form or citation form of a verb. The form listed in dictionaries is the bare infinitive, although the to-infinitive is often used in referring to verbs or in defining other verbs: "The word 'amble' means 'to walk slowly'"; "How do we conjugate the verb to go?" For further detail and examples of the uses of infinitives in English, see Bare infinitive and To-infinitive in the article on uses of English verb forms. Other Germanic languages: The original Proto-Germanic ending of the infinitive was -an, with verbs derived from other words ending in -jan or -janan. Other Germanic languages: In German it is -en ("sagen"), with -eln or -ern endings on a few words based on -l or -r roots ("segeln", "ändern"). The use of zu with infinitives is similar to English to, but is less frequent than in English. German infinitives can form nouns, often expressing abstractions of the action, in which case they are of neuter gender: das Essen means the eating, but also the food. Other Germanic languages: In Dutch infinitives also end in -en (zeggen — to say), sometimes used with te similar to English to, e.g., "Het is niet moeilijk te begrijpen" → "It is not hard to understand." The few verbs with stems ending in -a have infinitives in -n (gaan — to go, slaan — to hit). Afrikaans has lost the distinction between the infinitive and present forms of verbs, with the exception of the verb "wees" (to be), which admits the present form "is", and the verb "hê" (to have), whose present form is "het". Other Germanic languages: In North Germanic languages the final -n was lost from the infinitive as early as 500–540 AD, reducing the suffix to -a. Later it was further reduced to -e in Danish and some Norwegian dialects (including the written majority language bokmål). In the majority of Eastern Norwegian dialects and a few bordering Western Swedish dialects the reduction to -e was only partial, leaving some infinitives in -a and others in -e (å laga vs. å kaste). In northern parts of Norway the infinitive suffix is completely lost (å lag’ vs. å kast’) or only the -a is kept (å laga vs. å kast’). The infinitives of these languages are inflected for passive voice through the addition of -s or -st to the active form.
This suffix appeared in Old Norse as a contraction of mik ("me", forming -mk) or sik (reflexive pronoun, forming -sk) and originally expressed reflexive actions: (hann) kallar ("[he] calls") + -sik ("himself") > (hann) kallask ("[he] calls himself"). The suffixes -mk and -sk later merged to -s, which evolved to -st in the western dialects. The loss or reduction of -a in the active voice in Norwegian did not occur in the passive forms (-ast, -as), except for some dialects that have -es. The other North Germanic languages have the same vowel in both forms. Latin and Romance languages: The formation of the infinitive in the Romance languages reflects that of their ancestor, Latin, in which almost all verbs had an infinitive ending in -re (preceded by one of various thematic vowels). For example, in Italian infinitives end in -are, -ere, -rre (rare), or -ire (which is still identical to the Latin forms), and in -arsi, -ersi, -rsi, -irsi for the reflexive forms. In Spanish and Portuguese, infinitives end in -ar, -er, or -ir (Spanish also has reflexive forms in -arse, -erse, -irse), while similarly in French they typically end in -re, -er, -oir, and -ir. In Romanian, both short and long-form infinitives exist; the so-called "long infinitives" end in -are, -ere, -ire and in modern speech are used exclusively as verbal nouns, while there are a few verbs that cannot be converted into the nominal long infinitive. The "short infinitives" used in verbal contexts (e.g., after an auxiliary verb) have the endings -a, -ea, -e, and -i (basically removing the ending in "-re"). In Romanian, the infinitive is usually replaced by a clause containing the conjunction să plus the subjunctive mood. The only verb that is modal in common modern Romanian is the verb a putea, to be able to. However, in popular speech the infinitive after a putea is also increasingly replaced by the subjunctive. Latin and Romance languages: In all Romance languages, infinitives can also form nouns. Latin infinitives challenged several of the generalizations about infinitives. They did inflect for voice (amare, "to love", amari, "to be loved") and for tense (amare, "to love", amavisse, "to have loved"), and allowed for an overt expression of the subject (video Socratem currere, "I see Socrates running"). See Latin conjugation § Infinitives. Latin and Romance languages: Romance languages inherited from Latin the possibility of an overt expression of the subject (as in Italian vedo Socrate correre). Moreover, the "inflected infinitive" (or "personal infinitive") found in Portuguese and Galician inflects for person and number. These, alongside Sardinian, are the only Indo-European languages that allow infinitives to take person and number endings. This helps to make infinitive clauses very common in these languages; for example, the English finite clause in order that you/she/we have... would be translated to Portuguese as para teres/ela ter/termos... (Portuguese is a null-subject language). The Portuguese personal infinitive has no proper tenses, only aspects (imperfect and perfect), but tenses can be expressed using periphrastic structures. For instance, "even though you sing/have sung/are going to sing" could be translated as "apesar de cantares/teres cantado/ires cantar". Latin and Romance languages: Other Romance languages (including Spanish, Romanian, Catalan, and some Italian dialects) allow uninflected infinitives to combine with overt nominative subjects.
For example, Spanish al abrir yo los ojos ("when I opened my eyes") or sin yo saberlo ("without my knowing about it"). Hellenic languages: Ancient Greek In Ancient Greek the infinitive has four tenses (present, future, aorist, perfect) and three voices (active, middle, passive). Present and perfect have the same infinitive for both middle and passive, while future and aorist have separate middle and passive forms. Hellenic languages: Thematic verbs form present active infinitives by adding to the stem the thematic vowel -ε- and the infinitive ending -εν, which contracts to -ειν, e.g., παιδεύ-ειν. Athematic verbs, and perfect actives and aorist passives, add the suffix -ναι instead, e.g., διδό-ναι. In the middle and passive, the present middle infinitive ending is -σθαι, e.g., δίδο-σθαι, and most tenses of thematic verbs add an additional -ε- between the ending and the stem, e.g., παιδεύ-ε-σθαι. Hellenic languages: Modern Greek The infinitive per se does not exist in Modern Greek. To see this, consider the ancient Greek ἐθέλω γράφειν "I want to write". In modern Greek this becomes θέλω να γράψω "I want that I write". In modern Greek, the infinitive has thus changed form and function and is used mainly in the formation of periphrastic tense forms and not with an article or alone. Instead of the Ancient Greek infinitive system γράφειν, γράψειν, γράψαι, γεγραφέναι, Modern Greek uses only the form γράψει, a development of the ancient Greek aorist infinitive γράψαι. This form is also invariable. The modern Greek infinitive has only two forms according to voice: for example, γράψει for the active voice and γραφ(τ)εί for the passive voice (coming from the ancient passive aorist infinitive γραφῆναι). Balto-Slavic languages: The infinitive in Russian usually ends in -t’ (ть) preceded by a thematic vowel, or -ti (ти) if not preceded by one; some verbs have a stem ending in a consonant and change the t to č’, like *mogt’ → moč’ (*могть → мочь) "can". Some other Balto-Slavic languages have the infinitive typically ending in, for example, -ć (sometimes -c) in Polish, -ť in Slovak, -t (formerly -ti) in Czech and Latvian (with a handful ending in -s in the latter), -ty (-ти) in Ukrainian, and -ць (-ts') in Belarusian. Lithuanian infinitives end in -ti, Serbo-Croatian in -ti or -ći, and Slovenian in -ti or -či. Balto-Slavic languages: Serbian officially retains the infinitives -ti or -ći, but is more flexible than the other Slavic languages in replacing the infinitive with a clause. The infinitive nevertheless remains the dictionary form. Balto-Slavic languages: Bulgarian and Macedonian have lost the infinitive altogether except in a handful of frozen expressions, where it is the same as the third person singular aorist form. Almost all expressions where an infinitive may be used in Bulgarian are listed here; nevertheless, in all cases a subordinate clause is the more usual form. For that reason, the present first-person singular conjugation is the dictionary form in Bulgarian, while Macedonian uses the third person singular form of the verb in the present tense. Hebrew: Hebrew has two infinitives, the infinitive absolute (המקור המוחלט) and the infinitive construct (המקור הנטוי or שם הפועל). The infinitive construct is used after prepositions and is inflected with pronominal endings to indicate its subject or object: בכתוב הסופר bikhtōbh hassōphēr "when the scribe wrote", אחרי לכתו ahare lekhtō "after his going".
When the infinitive construct is preceded by ל‎ (lə-, li-, lā-, lo-) "to", it has a similar meaning to the English to-infinitive, and this is its most frequent use in Modern Hebrew. The infinitive absolute is used for verb focus and emphasis, as in מות ימות‎ mōth yāmūth (literally "a dying he will die"; figuratively, "he shall indeed/surely die"). This usage is commonplace in the Hebrew Bible. In Modern Hebrew it is restricted to high-register literary works. Hebrew: Note, however, that the to-infinitive of Hebrew is not the dictionary form; that is the third person singular past form. Finnish: The Finnish grammatical tradition includes many non-finite forms that are generally labeled as (numbered) infinitives, although many of these are functionally converbs. To form the so-called first infinitive, the strong form of the root (without consonant gradation or epenthetic 'e') is used, and these changes occur: the root is suffixed with -ta/-tä according to vowel harmony; consonant elision takes place if applicable, e.g., juoks+ta → juosta; clusters violating the sonority hierarchy are assimilated if applicable, e.g., nuol+ta → nuolla, sur+ta → surra; 't' weakens to 'd' after diphthongs, e.g., juo+ta → juoda; and 't' elides if intervocalic, e.g., kirjoitta+ta → kirjoittaa. As such, it is inconvenient for dictionary use, because the imperative would be closer to the root word. Nevertheless, dictionaries use the first infinitive. Finnish: There are also four other infinitives, plus a "long" form of the first: The long first infinitive takes the suffix -kse- and must have a personal suffix appended to it. It has the general meaning of "in order to [do something]", e.g., kirjoittaakseni "in order for me to write [something]". The second infinitive is formed by replacing the final -a/-ä of the first infinitive with e. It can take the inessive and instructive cases to create forms like kirjoittaessa "while writing". The third infinitive is formed by adding -ma to the first infinitive, which alone creates an "agent" form: kirjoita- becomes kirjoittama. The third infinitive is technically a noun (denoting the act of performing some verb), so case suffixes identical to those attached to ordinary Finnish nouns allow for other expressions using the third infinitive, e.g., kirjoittamalla "by writing". Finnish: A personal suffix can then be added to this form to indicate the agent participle, such that kirjoittamani kirja = "the book that I wrote". The fourth infinitive adds -minen to the first to form a noun that has the connotation of "the process of [doing something]", e.g., kirjoittaminen "[the process of] writing". It, too, can be inflected like other Finnish nouns that end in -nen. Finnish: The fifth infinitive adds -maisilla- to the first, and like the long first infinitive, must take a possessive suffix. It has to do with being "about to [do something]" and may also imply that the act was cut off or interrupted, e.g., kirjoittamaisillasi "you were about to write [but something interrupted you]". This form is more commonly replaced by the third infinitive in the adessive case, usually also with a possessive suffix (thus kirjoittamallasi). Note that all of these must change to reflect vowel harmony, so the fifth infinitive (with a third-person suffix) of hypätä "jump" is hyppäämäisillään "he was about to jump", not *hyppäämaisillaan. Seri: The Seri language of northwestern Mexico has infinitival forms used in two constructions (with the verb meaning 'want' and with the verb meaning 'be able'). 
The infinitive is formed by adding a prefix to the stem: either iha- [iʔa-] (plus a vowel change of certain vowel-initial stems) if the complement clause is transitive, or ica- [ika-] (and no vowel change) if the complement clause is intransitive. The infinitive shows agreement in number with the controlling subject. Examples are: icatax ihmiimzo 'I want to go', where icatax is the singular infinitive of the verb 'go' (singular root is -atax), and icalx hamiimcajc 'we want to go', where icalx is the plural infinitive. Examples of the transitive infinitive: ihaho 'to see it/him/her/them' (root -aho), and ihacta 'to look at it/him/her/them' (root -oocta). Translation to languages without an infinitive: In languages without an infinitive, the infinitive is translated either as a that-clause or as a verbal noun. For example, in Literary Arabic the sentence "I want to write a book" is translated as either urīdu an aktuba kitāban (lit. "I want that I write a book", with a verb in the subjunctive mood) or urīdu kitābata kitābin (lit. "I want the writing of a book", with the masdar or verbal noun), and in Levantine Colloquial Arabic as biddi aktub kitāb (a subordinate clause with the verb in the subjunctive). Translation to languages without an infinitive: Even in languages that have infinitives, similar constructions are sometimes necessary where English would allow the infinitive. For example, in French the sentence "I want you to come" translates to Je veux que vous veniez (lit. "I want that you come", come being in the subjunctive mood). However, "I want to come" is simply Je veux venir, using the infinitive, just as in English. In Russian, sentences such as "I want you to leave" do not use an infinitive. Rather, they use the conjunction чтобы "in order to/so that" with the past tense form (most probably a remnant of the subjunctive) of the verb: Я хочу, чтобы вы ушли (literally, "I want so that you left").
**Vibration isolation** Vibration isolation: Vibration isolation is the process of isolating an object, such as a piece of equipment, from the source of vibrations. Vibration isolation: Vibration is undesirable in many domains, primarily engineered systems and habitable spaces, and methods have been developed to prevent the transfer of vibration to such systems. Vibrations propagate via mechanical waves and certain mechanical linkages conduct vibrations more efficiently than others. Passive vibration isolation makes use of materials and mechanical linkages that absorb and damp these mechanical waves. Active vibration isolation involves sensors and actuators that produce disruptive interference that cancels out incoming vibration. Passive isolation: "Passive vibration isolation" refers to vibration isolation or mitigation of vibrations by passive techniques such as rubber pads or mechanical springs, as opposed to "active vibration isolation" or "electronic force cancellation", which employs electric power, sensors, actuators, and control systems. Passive vibration isolation is a vast subject, since there are many types of passive vibration isolators used for many different applications. A few of these applications are industrial equipment such as pumps, motors, HVAC systems, and washing machines; isolation of civil engineering structures from earthquakes (base isolation); sensitive laboratory equipment; valuable statuary; and high-end audio. Passive isolation: The following sections give a basic understanding of how passive isolation works, the more common types of passive isolators, and the main factors that influence the selection of passive isolators. Common passive isolation systems: Pneumatic or air isolators: These are bladders or canisters of compressed air. A source of compressed air is required to maintain them. Air springs are rubber bladders which provide damping as well as isolation and are used in large trucks. Some pneumatic isolators can attain low resonant frequencies and are used for isolating large industrial equipment. Air tables consist of a working surface or optical surface mounted on air legs. These tables provide enough isolation for laboratory instruments under some conditions. Air systems may leak under vacuum conditions. The air container can interfere with isolation of low-amplitude vibration. Passive isolation: Mechanical springs and spring-dampers: These are heavy-duty isolators used for building systems and industry. Sometimes they serve as mounts for a concrete block, which provides further isolation. Pads or sheets of flexible materials such as elastomers, rubber, cork, dense foam and laminate materials: Elastomer pads, dense closed-cell foams and laminate materials are often used under heavy machinery, under common household items, in vehicles and even under higher-performing audio systems. Molded and bonded rubber and elastomeric isolators and mounts: These are often used as machinery (such as engine) mounts or in vehicles. They absorb shock and attenuate some vibration. Passive isolation: Negative-stiffness isolators: Negative-stiffness isolators are less common than other types and have generally been developed for high-level research applications such as gravity wave detection. 
Lee, Goverdovskiy, and Temnikov (2007) proposed a negative-stiffness system for isolating vehicle seats. The focus in negative-stiffness isolators has been on developing systems with very low resonant frequencies (below 1 Hz), so that low frequencies can be adequately isolated, which is critical for sensitive instrumentation. All higher frequencies are also isolated. Negative-stiffness systems can be made with low stiction, so that they are effective in isolating low-amplitude vibrations. Negative-stiffness mechanisms are purely mechanical and typically involve the configuration and loading of components such as beams or inverted pendulums. Greater loading of the negative-stiffness mechanism, within the range of its operability, decreases the natural frequency. Passive isolation: Wire rope isolators: These isolators are durable and can withstand extreme environments. They are often used in military applications. Base isolators for seismic isolation of buildings, bridges, etc.: Base isolators made of layers of neoprene and steel with a low horizontal stiffness are used to lower the natural frequency of the building. Some other base isolators are designed to slide, preventing the transfer of energy from the ground to the building. Tuned mass dampers: Tuned mass dampers reduce the effects of harmonic vibration in buildings or other structures. A relatively small mass is attached in such a way that it can damp out a very narrow band of vibration of the structure. Passive isolation: Do-it-yourself isolators: In less sophisticated solutions, bungee cords can be used as a cheap isolation system which may be effective enough for some applications. The item to be isolated is suspended from the bungee cords. This is difficult to implement without a danger of the isolated item falling. Tennis balls cut in half have been used under washing machines and other items with some success. In fact, tennis balls became the de facto standard suspension technique in DIY rave/DJ culture: placed under the feet of each record turntable, they provide enough damping to keep the vibrations of high-powered sound systems from affecting the delicate, high-sensitivity mechanisms of the turntable needles. Passive isolation: How passive isolation works: A passive isolation system, such as a shock mount, in general contains mass, spring, and damping elements and moves as a harmonic oscillator. The mass and spring stiffness dictate a natural frequency of the system. Damping causes energy dissipation and has a secondary effect on natural frequency. Every object on a flexible support has a fundamental natural frequency. When vibration is applied, energy is transferred most efficiently at the natural frequency, somewhat efficiently below the natural frequency, and with decreasing efficiency above the natural frequency. This can be seen in the transmissibility curve, which is a plot of transmissibility vs. frequency. Passive isolation: Here is an example of a transmissibility curve. Transmissibility is the ratio of vibration of the isolated surface to that of the source. Vibrations are never eliminated, but they can be greatly reduced. The accompanying curve shows the typical performance of a passive, negative-stiffness isolation system with a natural frequency of 0.5 Hz. The general shape of the curve is typical for passive systems. Below the natural frequency, transmissibility hovers near 1. A value of 1 means that vibration is going through the system without being amplified or reduced. 
At the resonant frequency, energy is transmitted efficiently, and the incoming vibration is amplified. Damping in the system limits the level of amplification. Above the resonant frequency, little energy can be transmitted, and the curve rolls off to a low value. A passive isolator can be seen as a mechanical low-pass filter for vibrations. Passive isolation: In general, for any given frequency above the natural frequency, an isolator with a lower natural frequency will show greater isolation than one with a higher natural frequency. The best isolation system for a given situation depends on the frequency, direction, and magnitude of vibrations present and the desired level of attenuation of those frequencies. All mechanical systems in the real world contain some amount of damping. Damping dissipates energy in the system, which reduces the vibration level transmitted at the natural frequency. The fluid in automotive shock absorbers is a kind of damper, as is the inherent damping in elastomeric (rubber) engine mounts. Damping is used in passive isolators to reduce the amount of amplification at the natural frequency. However, increasing damping tends to reduce isolation at the higher frequencies: as damping is increased, the transmissibility roll-off decreases (this trade-off is illustrated numerically in the sketch following the negative-stiffness discussion below). Passive isolation: Passive isolation operates in both directions, isolating the payload from vibrations originating in the support, and also isolating the support from vibrations originating in the payload. Large machines such as washers, pumps, and generators, which would cause vibrations in the building or room, are often isolated from the floor. However, there are a multitude of sources of vibration in buildings, and it is often not possible to isolate each source. In many cases, it is most efficient to isolate each sensitive instrument from the floor. Sometimes it is necessary to implement both approaches. Passive isolation: In superyachts, the engines and alternators produce noise and vibration. The usual solution is a double elastic suspension: the engine and alternator are mounted with vibration dampers on a common frame, and this assembly is then mounted elastically between the common frame and the hull. Factors influencing the selection of passive vibration isolators: Characteristics of the item to be isolated: Size: The dimensions of the item to be isolated help determine the type of isolation which is available and appropriate. Small objects may use only one isolator, while larger items might use a multiple-isolator system. Weight: The weight of the object to be isolated is an important factor in choosing the correct passive isolation product. Individual passive isolators are designed to be used with a specific range of loading. Movement: Machines or instruments with moving parts may affect isolation systems. It is important to know the mass, speed, and distance traveled of the moving parts. Operating environment: Industrial: This generally entails strong vibrations over a wide band of frequencies and some amount of dust. Laboratory: Labs are sometimes troubled by specific building vibrations from adjacent machinery, foot traffic, or HVAC airflow. Indoor or outdoor: Isolators are generally designed for one environment or the other. Corrosive/non-corrosive: Some indoor environments may present a corrosive danger to isolator components due to the presence of corrosive chemicals. Outdoors, water and salt environments need to be considered. 
Clean room: Some isolators can be made appropriate for clean rooms. Temperature: In general, isolators are designed to be used in the range of temperatures normal for human environments. If a larger range of temperatures is required, the isolator design may need to be modified. Vacuum: Some isolators can be used in a vacuum environment. Air isolators may have leakage problems. Vacuum requirements typically include some level of clean-room requirement and may also involve a large temperature range. Magnetism: Some experimentation which requires vibration isolation also requires a low-magnetism environment. Some isolators can be designed with low-magnetism components. Acoustic noise: Some instruments are sensitive to acoustic vibration. In addition, some isolation systems can be excited by acoustic noise. It may be necessary to use an acoustic shield. Air compressors can create problematic acoustic noise, heat, and airflow. Static or dynamic loads: This distinction is quite important, as isolators are designed for a certain type and level of loading. Static loading is basically the weight of the isolated object with low-amplitude vibration input; this is the environment of apparently stationary objects such as buildings (under normal conditions) or laboratory instruments. Dynamic loading involves accelerations and larger-amplitude shock and vibration; this environment is present in vehicles, heavy machinery, and structures with significant movement. Cost: Cost of providing isolation: Costs include the isolation system itself, whether it is a standard or custom product; a compressed air source if required; shipping from manufacturer to destination; installation; maintenance; and an initial vibration site survey to determine the need for isolation. Relative costs of different isolation systems: Inexpensive shock mounts may need to be replaced due to dynamic loading cycles. A higher level of isolation which is effective at lower vibration frequencies and magnitudes generally costs more. Prices can range from a few dollars for bungee cords to millions of dollars for some space applications. Adjustment: Some isolation systems require manual adjustment to compensate for changes in weight load, weight distribution, temperature, and air pressure, whereas other systems are designed to automatically compensate for some or all of these factors. Maintenance: Some isolation systems are quite durable and require little or no maintenance. Others may require periodic replacement due to mechanical fatigue of parts or aging of materials. Size constraints: The isolation system may have to fit in a restricted space in a laboratory or vacuum chamber, or within a machine housing. Nature of vibrations to be isolated or mitigated: Frequencies: If possible, it is important to know the frequencies of ambient vibrations. This can be determined with a site survey or accelerometer data processed through FFT analysis. Amplitudes: The amplitudes of the vibration frequencies present can be compared with required levels to determine whether isolation is needed. In addition, isolators are designed for ranges of vibration amplitudes. Some isolators are not effective for very small amplitudes. Direction: Knowing whether vibrations are horizontal or vertical can help to target isolation where it is needed and save money. Vibration specifications of the item to be isolated: Many instruments or machines have manufacturer-specified levels of vibration for the operating environment. 
The manufacturer may not guarantee the proper operation of the instrument if vibration exceeds the specification. Not-for-profit organizations such as ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) and VISCMA (Vibration Isolation and Seismic Control Manufacturers Association) provide specifications and standards for isolator types and spring deflection requirements that cover a wide array of industries including electrical, mechanical, plumbing, and HVAC. Passive isolation: Comparison of passive isolators: Negative-stiffness vibration isolator: Negative-stiffness-mechanism (NSM) vibration isolation systems offer a unique passive approach for achieving low vibration environments and isolation against sub-hertz vibrations. "Snap-through" or "over-center" NSM devices are used to reduce the stiffness of elastic suspensions and create compact six-degree-of-freedom systems with low natural frequencies. Practical systems with vertical and horizontal natural frequencies as low as 0.2 to 0.5 Hz are possible. Electro-mechanical auto-adjust mechanisms compensate for varying weight loads and provide automatic leveling in multiple-isolator systems, similar to the function of leveling valves in pneumatic systems. All-metal systems can be configured which are compatible with high vacuums and other adverse environments such as high temperatures. Passive isolation: These isolation systems enable vibration-sensitive instruments such as scanning probe microscopes, micro-hardness testers and scanning electron microscopes to operate in severe vibration environments sometimes encountered, for example, on upper floors of buildings and in clean rooms. Such operation would not be practical with pneumatic isolation systems. Similarly, they enable vibration-sensitive instruments to produce better images and data than those achievable with pneumatic isolators. The theory of NSM isolation systems is explained in References 1 and 2; it is summarized briefly here for convenience, together with descriptions of some typical systems and applications and data on measured performance. Passive isolation: Vertical-motion isolation: A vertical-motion isolator is shown in Figure 1. It uses a conventional spring connected to an NSM consisting of two bars hinged at the center, supported at their outer ends on pivots, and loaded in compression by forces P. The spring is compressed by weight W to the operating position of the isolator, as shown in Figure 1. The stiffness of the isolator is K = KS − KN, where KS is the spring stiffness and KN is the magnitude of the negative stiffness, which is a function of the length of the bars and the load P. The isolator stiffness can be made to approach zero while the spring still supports the weight W. Passive isolation: Horizontal-motion isolation: A horizontal-motion isolator consisting of two beam-columns is illustrated in Figure 2. Each beam-column behaves like two fixed-free beam columns loaded axially by a weight load W. Without the weight load, the beam-columns have horizontal stiffness KS. With the weight load, the lateral bending stiffness is reduced by the "beam-column" effect. This behavior is equivalent to a horizontal spring combined with an NSM, so that the horizontal stiffness is K = KS − KN, where KN is the magnitude of the beam-column effect. Horizontal stiffness can be made to approach zero by loading the beam-columns to approach their critical buckling load. 
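To make the two relations above concrete — the transmissibility of a passive isolator and the lowering of net stiffness (and hence natural frequency) by a negative-stiffness mechanism — here is a minimal numerical sketch. The payload mass, spring stiffness, and damping ratio are illustrative assumptions, not values from any particular product.

```python
import math

def transmissibility(f, f_n, zeta):
    """Absolute transmissibility of a linear mass-spring-damper isolator
    at excitation frequency f (Hz), for natural frequency f_n (Hz) and
    damping ratio zeta (dimensionless)."""
    r = f / f_n  # frequency ratio
    return math.sqrt((1 + (2 * zeta * r) ** 2)
                     / ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2))

# Net stiffness of a negative-stiffness isolator: K = KS - KN.
# The payload mass and stiffness values below are illustrative only.
m = 100.0   # payload mass (kg)
ks = 4.0e4  # support-spring stiffness KS (N/m)
for kn in (0.0, 3.0e4, 3.9e4):   # loading the NSM raises KN toward KS
    k = ks - kn                  # net stiffness K (N/m)
    f_n = math.sqrt(k / m) / (2 * math.pi)   # natural frequency (Hz)
    print(f"KN = {kn:7.0f} N/m -> f_n = {f_n:.2f} Hz")
    # Transmissibility well below, at, and well above the natural frequency:
    for f in (0.1 * f_n, f_n, 10 * f_n):
        print(f"   f = {f:7.3f} Hz -> T = {transmissibility(f, f_n, 0.1):.3f}")
# Output shows T near 1 below f_n, amplification at f_n limited by the
# damping ratio, and strong roll-off above f_n; a larger KN lowers f_n.
```

With the negative-stiffness mechanism loaded so that KN approaches KS, the same payload reaches the sub-hertz natural frequencies quoted above, and the transmissibility roll-off correspondingly begins at much lower frequencies.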
Passive isolation: Six-degree-of-freedom (six-DOF) isolation: A six-DOF NSM isolator typically uses three isolators stacked in series: a tilt-motion isolator on top of a horizontal-motion isolator on top of a vertical-motion isolator. Figure 3 shows a schematic of a vibration isolation system consisting of a weighted platform supported by a single six-DOF isolator incorporating the isolators of Figures 1 and 2. Flexures are used in place of the hinged bars shown in Figure 1. A tilt flexure serves as the tilt-motion isolator. A vertical-stiffness adjustment screw is used to adjust the compression force on the negative-stiffness flexures, thereby changing the vertical stiffness. A vertical load adjustment screw is used to adjust for varying weight loads by raising or lowering the base of the support spring to keep the flexures in their straight, unbent operating positions. Passive isolation: Vibration isolation of the supporting joint: Equipment and other mechanical components are necessarily linked to surrounding objects, through supporting joints (to the support or foundation) and non-supporting joints (to pipe ducts or cables), thus presenting the opportunity for unwanted transmission of vibrations. Using a suitably designed vibration isolator (absorber), vibration isolation of the supporting joint is realized. The accompanying illustration shows the attenuation of vibration levels, as measured before installation of the functioning gear on a vibration isolator as well as after installation, for a wide range of frequencies. Passive isolation: The vibration isolator: This is defined as a device that reflects and absorbs waves of oscillatory energy propagating from a piece of working machinery or electrical equipment, with the desired effect being vibration insulation. The goal is to establish vibration isolation between a body transferring mechanical fluctuations and a supporting body (for example, between the machine and the foundation). The illustration shows a vibration isolator from the series «ВИ» (~"VI" in Roman characters), as used in shipbuilding in Russia, for example on the submarine "St. Petersburg" (Lada class). The depicted «ВИ» devices are rated for loads of 5, 40, and 300 kg. They differ in their physical sizes, but all share the same fundamental design. The structure consists of a rubber envelope that is internally reinforced by a spring. During manufacture, the rubber and the spring are intimately and permanently connected as a result of the vulcanization process that is integral to the processing of the crude rubber material. Under the weight loading of the machine, the rubber envelope deforms and the spring is compressed or stretched, so that across the spring's cross-section a twisting of the enveloping rubber occurs. The resulting elastic deformation of the rubber envelope results in very effective absorption of the vibration. This absorption is crucial to reliable vibration insulation, because it averts the potential for resonance effects. The amount of elastic deformation of the rubber largely dictates the magnitude of vibration absorption that can be attained; the entire device (including the spring itself) must be designed with this in mind. The design of the vibration isolator must also take into account potential exposure to shock loadings, in addition to the routine everyday vibrations. 
Lastly, the vibration isolator must also be designed for long-term durability as well as convenient integration into the environment in which it is to be used. Sleeves and flanges are typically employed in order to enable the vibration isolator to be securely fastened to the equipment and the supporting foundation. Passive isolation: Vibration isolation of the non-supporting joint: Vibration isolation of a non-supporting joint is realized with a device called a vibration-isolating branch pipe. Passive isolation: Vibration-isolating branch pipe: A vibration-isolating branch pipe is a section of tube with elastic walls that reflects and absorbs the waves of oscillatory energy propagating from the working pump along the walls of the pipe duct. It is installed between the pump and the pipe duct. The illustration shows a vibration-isolating branch pipe of the «ВИПБ» series. Its structure uses a rubber envelope reinforced by a spring; the properties of the envelope are similar to those of the vibration isolator's envelope described above. The device also includes a mechanism that reduces the axial force arising from internal pressure to nearly zero. Passive isolation: Subframe isolation: Another technique used to increase isolation is to use an isolated subframe. This splits the system with an additional mass/spring/damper system. This doubles the high-frequency attenuation roll-off, at the cost of introducing additional low-frequency modes which may cause the low-frequency behaviour to deteriorate. This is commonly used in the rear suspensions of cars with independent rear suspension (IRS), and in the front subframes of some cars. The graph (see illustration) compares the force into the body for a subframe that is rigidly bolted to the body with that for a compliantly mounted subframe (red curve). Above 42 Hz the compliantly mounted subframe is superior, but below that frequency the bolted-in subframe is better. Semi-active isolation: Semi-active vibration isolators have received attention because they consume less power than active devices and offer more controllability than passive systems. Active isolation: Active vibration isolation systems contain, along with the spring, a feedback circuit which consists of a sensor (for example a piezoelectric accelerometer or a geophone), a controller, and an actuator. The acceleration (vibration) signal is processed by a control circuit and amplifier, which then drives the electromagnetic actuator. As a result of such a feedback system, a considerably stronger suppression of vibrations is achieved compared to ordinary damping. Active isolation today is used for applications where structures smaller than a micrometer have to be produced or measured. Several companies produce active isolation products as OEM components for research, metrology, lithography and medical systems. Another important application is the semiconductor industry: in microchip production, the smallest structures today are below 20 nm, so the machines which produce and inspect them must vibrate far less. Active isolation: Sensors for active isolation: piezoelectric accelerometers and force sensors, MEMS accelerometers, geophones, proximity sensors, interferometers. Actuators for active isolation: linear motors, pneumatic actuators, piezoelectric motors.
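As a rough illustration of the feedback loop just described, the following time-domain sketch compares a lightly damped passive mount with the same mount plus an actuator driven by velocity ("skyhook") feedback. All parameters, the excitation, and the control law are illustrative assumptions; a real controller would be considerably more sophisticated.

```python
import math

# Toy simulation: a payload (mass m on a mount of stiffness k, damping c)
# whose base vibrates sinusoidally. The actuator applies a force proportional
# to the payload's absolute velocity, as a sensor/controller/actuator loop
# would. All parameter values are illustrative.
m, k, c = 50.0, 2.0e4, 60.0   # kg, N/m, N*s/m (lightly damped mount)
dt, steps = 2e-4, 100_000     # time step (s) and step count (20 s total)

def peak_displacement(gain):
    """Simulate and return the peak payload displacement after start-up."""
    x = v = 0.0
    peak = 0.0
    for i in range(steps):
        t = i * dt
        # Base motion: 1 mm amplitude at 3 Hz (near the ~3.2 Hz mount resonance)
        xb = 1e-3 * math.sin(2 * math.pi * 3.0 * t)
        vb = 1e-3 * 2 * math.pi * 3.0 * math.cos(2 * math.pi * 3.0 * t)
        f_act = -gain * v                      # controller output
        a = (k * (xb - x) + c * (vb - v) + f_act) / m
        v += a * dt                            # semi-implicit Euler step
        x += v * dt
        if t > 5.0:                            # skip the start-up transient
            peak = max(peak, abs(x))
    return peak

print(f"passive only  : peak |x| = {peak_displacement(0.0) * 1e3:.2f} mm")
print(f"with feedback : peak |x| = {peak_displacement(2000.0) * 1e3:.2f} mm")
```

Running this shows the near-resonant base vibration amplified by the passive mount (roughly 8 mm peak for a 1 mm input) but suppressed to well under 1 mm once the feedback force is applied, in line with the statement that such a feedback system achieves considerably stronger suppression than ordinary damping.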
**Trichrome staining** Trichrome staining: Trichrome staining is a histological staining method that uses two or more acid dyes in conjunction with a polyacid. Staining differentiates tissues by tinting them in contrasting colours. It increases the contrast of microscopic features in cells and tissues, which makes them easier to see when viewed through a microscope. The word trichrome means "three colours". The first staining protocol that was described as "trichrome" was Mallory's trichrome stain, which differentially stained erythrocytes to a red colour, muscle tissue to a red colour, and collagen to a blue colour. Some other trichrome staining protocols are the Masson's trichrome stain, Lillie's trichrome, and the Gömöri trichrome stain. Purpose: Without trichrome staining, discerning one feature from another can be extremely difficult. Smooth muscle tissue, for example, is hard to differentiate from collagen. A trichrome stain can colour the muscle tissue red, and the collagen fibres green or blue. Liver biopsies may have fine collagen fibres between the liver cells, and the amount of collagen may be estimated based on the staining method. Trichrome methods are now used for differentiating muscle from collagen, pituitary alpha cells from beta cells, fibrin from collagen, and mitochondria in fresh frozen muscle sections, among other applications. It helps in identifying increases in collagenous tissue (i.e., fibrotic changes) such as in liver cirrhosis and distinguishing tumours arising from muscle cells and fibroblasts. Procedure: Trichrome staining techniques employ two or more acid dyes. Normally acid dyes would stain the same basic proteins, but by applying them sequentially the staining pattern can be manipulated. A polyacid (such as phosphomolybdic acid or tungstophosphoric acid) is used to remove dye selectively. Polyacids are thought to behave as dyes with a high molecular weight: they displace easily removed dye from collagen. Usually a red dye in dilute acetic acid is applied first to overstain all components. Then a polyacid is applied to remove the red dye from collagen and some other components by displacement. A second acid dye (blue or green) in dilute acetic acid is applied which, in turn, displaces the polyacid, resulting in collagen stained in a contrasting colour to the initial dye used. If erythrocytes are to be stained, a small-molecular-weight yellow or orange dye is applied before staining with the red dye. It is usually applied from a saturated solution in 80% ethanol and often in conjunction with picric acid (itself a dye) and a polyacid. The methods exploit minor differences in tissue reaction to dyes, density, accessibility and so on. Trichrome stains in which dyes and a polyacid are applied sequentially are called multi-step trichromes. In "one-step" methods, all the dyes—with or without a polyacid—are combined in a single solution. One of the oldest single-step approaches to trichrome staining is van Gieson's method, which stains muscle and cytoplasm yellow, and collagen red. Another is the Gömöri trichrome stain, which closely mimics Masson's trichrome. In "yellowsolve" methods, a red dye in dilute acetic acid is first applied, then the section is very thoroughly dehydrated to ensure that no moisture remains. The red dye is then displaced by a yellow dye in a solvent, such as cellosolve (2-ethoxyethanol). The name yellowsolve is a blend of the terms yellow and cellosolve. 
Lendrum's phloxine-tartrazine for cell inclusions is one example of a yellowsolve stain. Dyes: Among the dyes used for trichrome staining are: Red: acid fuchsin, xylidine ponceau, chromotrope 2R, Biebrich scarlet, ponceau 6R, phloxine. Blue and green: light green SF yellowish, Fast Green FCF, methyl blue, water blue. Yellow: picric acid, orange G, Martius yellow, tartrazine, milling yellow.
**Nuitka** Nuitka: Nuitka is a source-to-source compiler which compiles Python code to C source code, applying some compile-time optimizations in the process such as constant folding and propagation, built-in call prediction, type inference, and conditional statement execution. Nuitka was initially designed to produce C++ code, but current versions produce C source code using only those features of C11 that are shared by C++03, enabling further compilation to a binary executable format by modern C and C++ compilers including gcc, clang, MinGW, or Microsoft Visual C++. It accepts Python code compatible with several different Python versions (currently supporting versions 2.6, 2.7, and 3.3–3.10) and optionally allows for the creation of standalone programs that do not require Python to be installed on the target computer. Nuitka: Nuitka was discussed at the 2012 EuroPython conference, and serious development began at the end of the same year. It now supports virtually all of the features of the Python language. Additional compile-time optimizations are planned for future releases, including avoiding the use of Python objects for additional variables whose type can be inferred at compile time, particularly when using iterators, which is expected to result in a large performance increase. Limitations: Currently it is not possible to cross-compile binaries (e.g. building the executable on Windows and shipping it to macOS). Limitations: Standalone binaries built using the --standalone command line option include an embedded CPython interpreter to handle aspects of the language that are not determined when the program is compiled and must be interpreted at runtime, such as duck typing, exception handling, and dynamic code execution (the eval function and exec function or statement), along with those Python and native libraries that are needed for execution, leading to rather large file sizes. Limitations: Nuitka's design heavily relies on the internals of the CPython interpreter, and as a result other implementations of the Python language such as PyPy, Jython, and IronPython cannot be used instead of CPython for the runtime interpreter and library. Usage: Nuitka can be installed from the repositories of many Linux distributions. It can also be installed through pip (or pip3 for Python 3). Compilation is done either with nuitka program.py or by running the Nuitka module through Python itself and letting it compile the given program (python -m nuitka program.py).
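A minimal end-to-end example of the usage just described; the file name and program are illustrative.

```python
# hello.py -- a small program to compile with Nuitka (file name illustrative)
import sys

def main() -> None:
    print(f"Hello from Python {sys.version_info.major}.{sys.version_info.minor}")

if __name__ == "__main__":
    main()

# Compile it with either invocation described above:
#   nuitka hello.py
#   python -m nuitka hello.py
# Or build a standalone program (no Python needed on the target machine, at
# the cost of much larger output, as noted under Limitations):
#   python -m nuitka --standalone hello.py
```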
**Samarium** Samarium: Samarium is a chemical element with symbol Sm and atomic number 62. It is a moderately hard silvery metal that slowly oxidizes in air. Being a typical member of the lanthanide series, samarium usually has the oxidation state +3. Compounds of samarium(II) are also known, most notably the monoxide SmO, the monochalcogenides SmS, SmSe and SmTe, as well as samarium(II) iodide. Samarium: Discovered in 1879 by French chemist Paul-Émile Lecoq de Boisbaudran, samarium was named after the mineral samarskite from which it was isolated. The mineral itself was named after a Russian mine official, Colonel Vassili Samarsky-Bykhovets, who thus became the first person to have a chemical element named after him, albeit indirectly. Samarium: Samarium is the 40th most abundant element in Earth's crust and more common than metals such as tin. It occurs in concentrations of up to 2.8% in several minerals including cerite, gadolinite, samarskite, monazite and bastnäsite, the last two being the most common commercial sources of the element. These minerals are mostly found in China, the United States, Brazil, India, Sri Lanka and Australia; China is by far the world leader in samarium mining and production. Samarium: The main commercial use of samarium is in samarium–cobalt magnets, which have permanent magnetization second only to neodymium magnets; however, samarium compounds can withstand significantly higher temperatures, above 700 °C (1,292 °F), without losing their permanent magnetic properties. The radioisotope samarium-153 is the active component of the drug samarium (153Sm) lexidronam (Quadramet), which kills cancer cells in lung cancer, prostate cancer, breast cancer and osteosarcoma. Another isotope, samarium-149, is a strong neutron absorber and so is added to the control rods of nuclear reactors. It also forms as a decay product during reactor operation and is one of the important factors considered in reactor design and operation. Other uses of samarium include catalysis of chemical reactions, radioactive dating and X-ray lasers. Samarium(II) iodide, in particular, is a common reducing agent in chemical synthesis. Samarium: Samarium has no biological role; some samarium salts are slightly toxic. Physical properties: Samarium is a rare earth element with a hardness and density similar to those of zinc. With a boiling point of 1,794 °C (3,261 °F), samarium is the third most volatile lanthanide after ytterbium and europium and comparable in this respect to lead and barium; this helps the separation of samarium from its ores. When freshly prepared, samarium has a silvery lustre, and it takes on a duller appearance when oxidized in air. Samarium is calculated to have one of the largest atomic radii of the elements; with a radius of 238 pm, only potassium, praseodymium, barium, rubidium and caesium are larger. In ambient conditions, samarium has a rhombohedral structure (α form). Upon heating to 731 °C (1,348 °F), its crystal symmetry changes to hexagonal close-packed (hcp), with the actual transition temperature depending on metal purity. Further heating to 922 °C (1,692 °F) transforms the metal into a body-centered cubic (bcc) phase. Heating to 300 °C (572 °F) combined with compression to 40 kbar results in a double-hexagonally close-packed structure (dhcp). Higher pressures of the order of hundreds or thousands of kilobars induce a series of phase transformations, in particular with a tetragonal phase appearing at about 900 kbar. 
In one study, the dhcp phase could be produced without compression, using a nonequilibrium annealing regime with a rapid temperature change between about 400 °C (752 °F) and 700 °C (1,292 °F), confirming the transient character of this samarium phase. Thin films of samarium obtained by vapor deposition may contain the hcp or dhcp phases in ambient conditions. Samarium and its sesquioxide are paramagnetic at room temperature. Their corresponding effective magnetic moments, below 2 Bohr magnetons, are the third-lowest among the lanthanides (and their oxides) after lanthanum and lutetium. The metal transforms to an antiferromagnetic state upon cooling to 14.8 K. Individual samarium atoms can be isolated by encapsulating them into fullerene molecules. They can also be intercalated into the interstices of bulk C60 to form a solid solution of nominal composition Sm3C60, which is superconductive at a temperature of 8 K. Samarium doping of iron-based superconductors – a class of high-temperature superconductor – raises their superconducting transition temperature up to 56 K, the highest value achieved so far in this series. Chemical properties: In air, samarium slowly oxidizes at room temperature and spontaneously ignites at 150 °C (302 °F). Even when stored under mineral oil, samarium gradually oxidizes and develops a grayish-yellow powder of the oxide-hydroxide mixture at the surface. The metallic appearance of a sample can be preserved by sealing it under an inert gas such as argon. Chemical properties: Samarium is quite electropositive and reacts slowly with cold water and rapidly with hot water to form samarium hydroxide: 2Sm(s) + 6H2O(l) → 2Sm(OH)3(aq) + 3H2(g). Samarium dissolves readily in dilute sulfuric acid to form solutions containing the yellow to pale green Sm(III) ions, which exist as [Sm(OH2)9]3+ complexes: 2Sm(s) + 3H2SO4(aq) → 2Sm3+(aq) + 3SO42−(aq) + 3H2(g). Samarium is one of the few lanthanides with a relatively accessible +2 oxidation state, alongside Eu and Yb. Sm2+ ions are blood-red in aqueous solution. Compounds: Oxides: The most stable oxide of samarium is the sesquioxide Sm2O3. Like many samarium compounds, it exists in several crystalline phases. The trigonal form is obtained by slow cooling from the melt. The melting point of Sm2O3 is high (2345 °C), so it is usually melted not by direct heating, but by induction heating, through a radio-frequency coil. Sm2O3 crystals of monoclinic symmetry can be grown by the flame fusion method (Verneuil process) from Sm2O3 powder, which yields cylindrical boules up to several centimeters long and about one centimeter in diameter. The boules are transparent when pure and defect-free and are orange otherwise. Heating the metastable trigonal Sm2O3 to 1,900 °C (3,450 °F) converts it to the more stable monoclinic phase. Cubic Sm2O3 has also been described. Samarium is one of the few lanthanides that form a monoxide, SmO. This lustrous golden-yellow compound was obtained by reducing Sm2O3 with samarium metal at high temperature (1000 °C) and a pressure above 50 kbar; lowering the pressure resulted in incomplete reaction. SmO has the cubic rock-salt lattice structure. Compounds: Chalcogenides: Samarium forms a trivalent sulfide, selenide and telluride. Divalent chalcogenides SmS, SmSe and SmTe with a cubic rock-salt crystal structure are also known. These chalcogenides convert from a semiconducting to a metallic state at room temperature upon application of pressure. 
Whereas the transition is continuous and occurs at about 20–30 kbar in SmSe and SmTe, it is abrupt in SmS and requires only 6.5 kbar. This effect results in a spectacular color change in SmS from black to golden yellow when its crystals or films are scratched or polished. The transition does not change the lattice symmetry, but there is a sharp decrease (~15%) in the crystal volume. It exhibits hysteresis: when the pressure is released, SmS returns to the semiconducting state at a much lower pressure of about 0.4 kbar. Compounds: Halides: Samarium metal reacts with all the halogens, forming trihalides: 2 Sm (s) + 3 X2 (g) → 2 SmX3 (s) (X = F, Cl, Br or I). Their further reduction with samarium, lithium or sodium metals at elevated temperatures (about 700–900 °C) yields the dihalides. The diiodide can also be prepared by heating SmI3, or by reacting the metal with 1,2-diiodoethane in anhydrous tetrahydrofuran at room temperature: Sm (s) + ICH2-CH2I → SmI2 + CH2=CH2. In addition to dihalides, the reduction also produces many non-stoichiometric samarium halides with a well-defined crystal structure, such as Sm3F7, Sm14F33, Sm27F64, Sm11Br24, Sm5Br11 and Sm6Br13. Samarium halides change their crystal structures when one type of halide anion is substituted for another, an uncommon behavior for most elements (it is found, e.g., in actinides). Many halides have two major crystal phases for one composition, one being significantly more stable and the other metastable. The latter is formed upon compression or heating, followed by quenching to ambient conditions. For example, compressing the usual monoclinic samarium diiodide and releasing the pressure results in a PbCl2-type orthorhombic structure (density 5.90 g/cm3), and similar treatment results in a new phase of samarium triiodide (density 5.97 g/cm3). Compounds: Borides: Sintering powders of samarium oxide and boron in a vacuum yields a powder containing several samarium boride phases; the ratio between these phases can be controlled through the mixing proportion. The powder can be converted into larger crystals of samarium borides using arc melting or zone melting techniques, relying on the different melting/crystallization temperatures of SmB6 (2580 °C), SmB4 (about 2300 °C) and SmB66 (2150 °C). All these materials are hard, brittle, dark-gray solids, with the hardness increasing with the boron content. Samarium diboride is too volatile to be produced with these methods and requires high pressure (about 65 kbar) and low temperatures between 1140 and 1240 °C to stabilize its growth. Increasing the temperature results in the preferential formation of SmB6. Compounds: Samarium hexaboride: Samarium hexaboride is a typical intermediate-valence compound where samarium is present both as Sm2+ and Sm3+ ions in a 3:7 ratio. It belongs to a class of Kondo insulators: at temperatures above 50 K, its properties are typical of a Kondo metal, with metallic electrical conductivity characterized by strong electron scattering, whereas at lower temperatures it behaves as a non-magnetic insulator with a narrow band gap of about 4–14 meV. The cooling-induced metal-insulator transition in SmB6 is accompanied by a sharp increase in the thermal conductivity, peaking at about 15 K. The reason for this increase is that electrons themselves do not contribute to the thermal conductivity at low temperatures, which is dominated by phonons, but the decrease in electron concentration reduces the rate of electron-phonon scattering. 
Compounds: Other inorganic compounds: Samarium carbides are prepared by melting a graphite-metal mixture in an inert atmosphere. After synthesis, they are unstable in air and need to be studied under an inert atmosphere. Samarium monophosphide SmP is an n-type semiconductor with a bandgap of 1.10 eV, close to that of silicon. It can be prepared by annealing at 1,100 °C (2,010 °F) an evacuated quartz ampoule containing mixed powders of phosphorus and samarium. Phosphorus is highly volatile at high temperatures and may explode, so the heating rate has to be kept well below 1 °C/min. A similar procedure is adopted for the monoarsenide SmAs, but the synthesis temperature is higher, at 1,800 °C (3,270 °F). Numerous crystalline binary compounds are known for samarium and one of the group 14, 15, or 16 elements X, where X is Si, Ge, Sn, Pb, Sb or Te, and metallic alloys of samarium form another large group. They are all prepared by annealing mixed powders of the corresponding elements. Many of the resulting compounds are non-stoichiometric and have nominal compositions SmaXb, where the b/a ratio varies between 0.5 and 3. Compounds: Organometallic compounds: Samarium forms a cyclopentadienide Sm(C5H5)3 and its chloroderivatives Sm(C5H5)2Cl and Sm(C5H5)Cl2. They are prepared by reacting samarium trichloride with NaC5H5 in tetrahydrofuran. Contrary to the cyclopentadienides of most other lanthanides, in Sm(C5H5)3 some C5H5 rings bridge each other, pointing ring vertices (η1) or edges (η2) toward a neighboring samarium atom, thus creating polymeric chains. The chloroderivative Sm(C5H5)2Cl has a dimeric structure, more accurately expressed as (η5-C5H5)2Sm(μ-Cl)2Sm(η5-C5H5)2. There, the chlorine bridges can be replaced, for instance, by iodine, hydrogen or nitrogen atoms or by CN groups. The (C5H5)− ion in samarium cyclopentadienides can be replaced by the indenide (C9H7)− or cyclooctatetraenide (C8H8)2− ring, resulting in Sm(C9H7)3 or KSm(η8-C8H8)2. The latter compound has a structure similar to that of uranocene. There is also a cyclopentadienide of divalent samarium, Sm(C5H5)2, a solid that sublimes at about 85 °C (185 °F). Contrary to ferrocene, the C5H5 rings in Sm(C5H5)2 are not parallel but are tilted by 40°. A metathesis reaction in tetrahydrofuran or ether gives alkyls and aryls of samarium: SmCl3 + 3LiR → SmR3 + 3LiCl; Sm(OR)3 + 3LiCH(SiMe3)2 → Sm{CH(SiMe3)2}3 + 3LiOR. Here R is a hydrocarbon group and Me = methyl. Isotopes: Naturally occurring samarium is composed of five stable isotopes, 144Sm, 149Sm, 150Sm, 152Sm and 154Sm, and two extremely long-lived radioisotopes, 147Sm (half-life t1/2 = 1.06×10^11 years) and 148Sm (7×10^15 years), with 152Sm being the most abundant (26.75%). 149Sm is listed by various sources as being stable, but some sources state that it is radioactive, with a lower bound for its half-life given as 2×10^15 years. Some observationally stable samarium isotopes are predicted to decay to isotopes of neodymium. The long-lived isotopes 146Sm, 147Sm, and 148Sm undergo alpha decay to neodymium isotopes. Lighter unstable isotopes of samarium mainly decay by electron capture to promethium, while heavier ones undergo beta decay to europium. The known isotopes range from 129Sm to 168Sm. The half-lives of 151Sm and 145Sm are 90 years and 340 days, respectively. All remaining radioisotopes have half-lives of less than 2 days, and most of these have half-lives of less than 48 seconds. 
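As a consistency check on the bulk radioactivity figure quoted below (about 127 Bq/g), the specific activity contributed by 147Sm follows from A = λN. The 147Sm natural abundance of about 15.0% used here is an added figure, not stated in the text above.

```python
import math

# Specific activity of natural samarium from its main radioactive component,
# 147Sm (alpha decay, t1/2 = 1.06e11 years, as given above).
N_A = 6.02214e23               # Avogadro constant (1/mol)
M_SM = 150.36                  # molar mass of natural samarium (g/mol)
ABUNDANCE_147 = 0.150          # atom fraction of 147Sm (assumed, ~15.0%)
T_HALF_S = 1.06e11 * 3.156e7   # half-life converted from years to seconds

atoms_per_gram = ABUNDANCE_147 * N_A / M_SM   # N, atoms of 147Sm per gram
decay_constant = math.log(2) / T_HALF_S       # lambda (1/s)
activity = decay_constant * atoms_per_gram    # A = lambda * N (Bq/g)
print(f"~{activity:.0f} Bq per gram of natural samarium")
# Prints ~125 Bq/g, consistent with the ~127 Bq/g figure quoted below
# (the small remainder comes mostly from 148Sm).
```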
Samarium also has twelve known nuclear isomers, the most stable of which are 141mSm (half-life 22.6 minutes), 143m1Sm (t1/2 = 66 seconds), and 139mSm (t1/2 = 10.7 seconds). Natural samarium has a radioactivity of 127 Bq/g, mostly due to 147Sm, which alpha decays to 143Nd with a half-life of 1.06×10^11 years and is used in samarium–neodymium dating. 146Sm is an extinct radionuclide, with a half-life of 1.03×10^8 years. There have been searches for samarium-146 as a primordial nuclide, because its half-life is long enough that minute quantities of the element should persist today. It can be used in radiometric dating. Samarium-149 is an observationally stable isotope of samarium (predicted to decay, but no decays have ever been observed, giving it a half-life at least several orders of magnitude longer than the age of the universe), and a product of the decay chain from the fission product 149Nd (yield 1.0888%). 149Sm is a decay product and neutron absorber in nuclear reactors, with a neutron poison effect that is second in importance for reactor design and operation only to that of 135Xe. Its neutron cross section is 41,000 barns for thermal neutrons. Because samarium-149 is not radioactive and is not removed by decay, it presents problems somewhat different from those encountered with xenon-135. Its concentration (and thus the poisoning effect) builds to an equilibrium value during reactor operation in about 500 hours (about three weeks), and since samarium-149 is stable, the concentration then remains essentially constant during reactor operation. Isotopes: Samarium-153 is a beta emitter with a half-life of 46.3 hours. It is used to kill cancer cells in the treatment of lung cancer, prostate cancer, breast cancer, and osteosarcoma. For this purpose, samarium-153 is chelated with ethylene diamine tetramethylene phosphonate (EDTMP) and injected intravenously. The chelation prevents accumulation of radioactive samarium in the body that would result in excessive irradiation and generation of new cancer cells. The corresponding drug has several names including samarium (153Sm) lexidronam; its trade name is Quadramet. History: Detection of samarium and related elements was announced by several scientists in the second half of the 19th century; however, most sources give priority to French chemist Paul-Émile Lecoq de Boisbaudran. Boisbaudran isolated samarium oxide and/or hydroxide in Paris in 1879 from the mineral samarskite ((Y,Ce,U,Fe)3(Nb,Ta,Ti)5O16) and identified a new element in it via sharp optical absorption lines. Swiss chemist Marc Delafontaine announced a new element, decipium (from Latin decipiens, meaning "deceptive, misleading"), in 1878, but later, in 1880–1881, demonstrated that it was a mix of several elements, one being identical to Boisbaudran's samarium. Though samarskite was first found in the Ural Mountains in Russia, by the late 1870s it had been found in other places, making it available to many researchers. In particular, it was found that the samarium isolated by Boisbaudran was also impure and contained a comparable amount of europium. The pure element was produced only in 1901 by Eugène-Anatole Demarçay. Boisbaudran named his element samarium after the mineral samarskite, which in turn honored Vassili Samarsky-Bykhovets (1803–1870). Samarsky-Bykhovets, as the Chief of Staff of the Russian Corps of Mining Engineers, had granted two German mineralogists, the brothers Gustav and Heinrich Rose, access to study the mineral samples from the Urals. 
Samarium was thus the first chemical element to be named after a person. The word samaria is sometimes used to mean samarium(III) oxide, by analogy with yttria, zirconia, alumina, ceria, holmia, etc. The symbol Sm was suggested for samarium, but the alternative Sa was often used instead until the 1920s. Before the advent of ion-exchange separation technology in the 1950s, pure samarium had no commercial uses. However, a by-product of the fractional-crystallization purification of neodymium was a mix of samarium and gadolinium that got the name "Lindsay Mix" after the company that made it, and this was used for nuclear control rods in some early nuclear reactors. Nowadays, a similar commodity product has the name "samarium-europium-gadolinium" (SEG) concentrate. It is prepared by solvent extraction from the mixed lanthanides isolated from bastnäsite (or monazite). Since the heavier lanthanides have more affinity for the solvent used, they are easily extracted from the bulk using relatively small proportions of solvent. Not all rare-earth producers who process bastnäsite do so on a large enough scale to continue by separating the components of SEG, which typically makes up only 1–2% of the original ore. Such producers therefore make SEG with a view to marketing it to the specialized processors. In this manner, the valuable europium in the ore is rescued for use in making phosphors. Samarium purification follows the removal of the europium. As of 2012, samarium oxide, being in oversupply, is cheaper on a commercial scale than its relative abundance in the ore might suggest. Occurrence and production: With an average concentration of about 8 parts per million (ppm), samarium is the 40th most abundant element in the Earth's crust. It is the fifth most abundant lanthanide and has a higher concentration than many other elements, such as tin (which has an average concentration of 2 ppm). Samarium concentration in soils varies between 2 and 23 ppm, and oceans contain about 0.5–0.8 parts per trillion. The distribution of samarium in soils strongly depends on its chemical state and is very inhomogeneous: in sandy soils, the samarium concentration is about 200 times higher at the surface of soil particles than in the water trapped between them, and this ratio can exceed 1,000 in clays. Samarium is not found free in nature, but, like other rare earth elements, is contained in many minerals, including monazite, bastnäsite, cerite, gadolinite and samarskite; monazite (in which samarium occurs at concentrations of up to 2.8%) and bastnäsite are mostly used as commercial sources. World resources of samarium are estimated at two million tonnes; they are mostly located in China, the US, Brazil, India, Sri Lanka and Australia, and the annual production is about 700 tonnes. Country production reports are usually given for all rare-earth metals combined. By far, China has the largest production, with 120,000 tonnes mined per year; it is followed by the US (about 5,000 tonnes) and India (2,700 tonnes). Samarium is usually sold as the oxide, which at a price of about US$30/kg is one of the cheapest lanthanide oxides. Whereas mischmetal – a mixture of rare earth metals containing about 1% samarium – has long been used, relatively pure samarium has been isolated only recently, through ion-exchange processes, solvent extraction techniques, and electrochemical deposition. The metal is often prepared by electrolysis of a molten mixture of samarium(III) chloride with sodium chloride or calcium chloride. 
Samarium can also be obtained by reducing its oxide with lanthanum. The product is then distilled to separate samarium (boiling point 1794 °C) and lanthanum (b.p. 3464 °C). Very few minerals contain samarium as the dominant element. Minerals with essential (dominant) samarium include monazite-(Sm) and florencite-(Sm). These minerals are very rare and usually also contain other elements, typically cerium or neodymium. Samarium-151 is produced in nuclear fission of uranium with a yield of about 0.4% of all fissions. It is also made by neutron capture by samarium-149, which is added to the control rods of nuclear reactors. Therefore, 151Sm is present in spent nuclear fuel and radioactive waste. Applications: Magnets An important use of samarium is in samarium–cobalt magnets, which are nominally SmCo5 or Sm2Co17. They have high permanent magnetization, about 10,000 times that of iron and second only to neodymium magnets. However, samarium magnets resist demagnetization better; they are stable to temperatures above 700 °C (1,292 °F) (cf. 300–400 °C for neodymium magnets). These magnets are found in small motors, headphones, and high-end magnetic pickups for guitars and related musical instruments. For example, they are used in the motors of a solar-powered electric aircraft, the Solar Challenger, and in the Samarium Cobalt Noiseless electric guitar and bass pickups. Applications: Chemical reagent Samarium and its compounds are important as catalysts and chemical reagents. Samarium catalysts assist the decomposition of plastics, the dechlorination of pollutants such as polychlorinated biphenyls (PCBs), and the dehydration and dehydrogenation of ethanol. Samarium(III) triflate Sm(OTf)3, that is Sm(CF3SO3)3, is one of the most efficient Lewis acid catalysts for a halogen-promoted Friedel–Crafts reaction with alkenes. Samarium(II) iodide is a very common reducing and coupling agent in organic synthesis, for example in desulfonylation reactions; annulation; the Danishefsky, Kuwajima, Mukaiyama and Holton Taxol total syntheses; strychnine total synthesis; the Barbier reaction; and other reductions with samarium(II) iodide. In its usual oxidized form, samarium is added to ceramics and glasses, where it increases the absorption of infrared light. As a (minor) part of mischmetal, samarium is found in the "flint" ignition devices of many lighters and torches. Applications: Neutron absorber Samarium-149 has a high cross section for neutron capture (41,000 barns) and so is used in control rods of nuclear reactors. Its advantage compared to competing materials, such as boron and cadmium, is stability of absorption – most of the neutron-capture products of 149Sm are other isotopes of samarium that are also good neutron absorbers. For example, the cross section of samarium-151 is 15,000 barns; it is on the order of hundreds of barns for 150Sm, 152Sm, and 153Sm, and 6,800 barns for natural (mixed-isotope) samarium. Applications: Lasers Samarium-doped calcium fluoride crystals were used as an active medium in one of the first solid-state lasers, designed and built by Peter Sorokin (co-inventor of the dye laser) and Mirek Stevenson at IBM research labs in early 1961. This samarium laser gave pulses of red light at 708.5 nm. It had to be cooled by liquid helium and so did not find practical applications. Another samarium-based laser became the first saturated X-ray laser operating at wavelengths shorter than 10 nanometers.
It gave 50-picosecond pulses at 7.3 and 6.8 nm suitable for applications in holography, high-resolution microscopy of biological specimens, deflectometry, interferometry, and radiography of dense plasmas related to confinement fusion and astrophysics. Saturated operation meant that the maximum possible power was extracted from the lasing medium, resulting in a high pulse energy of 0.3 mJ. The active medium was samarium plasma produced by irradiating samarium-coated glass with a pulsed infrared Nd-glass laser (wavelength ~1.05 μm). Applications: Storage phosphor In 2007, it was shown that nanocrystalline BaFCl:Sm3+ prepared by co-precipitation can serve as a very efficient X-ray storage phosphor. The co-precipitation leads to nanocrystallites of the order of 100–200 nm in size, and their sensitivity as X-ray storage phosphors is increased by a remarkable ~500,000 times because of the specific arrangements and density of defect centers, in comparison with microcrystalline samples prepared by sintering at high temperature. The mechanism is based on reduction of Sm3+ to Sm2+ by trapping electrons that are created upon exposure to ionizing radiation in the BaFCl host. The 5DJ→7FJ f–f luminescence lines can be excited very efficiently via the parity-allowed 4f6→4f5 5d transition at ~417 nm. The latter wavelength is ideal for efficient excitation by blue-violet laser diodes, as the transition is electric-dipole allowed and thus relatively intense (400 L/(mol⋅cm)). Applications: The phosphor has potential applications in personal dosimetry, dosimetry and imaging in radiotherapy, and medical imaging. Applications: Non-commercial and potential uses The change in electrical resistivity in samarium monochalcogenides can be used in a pressure sensor or in a memory device triggered between a low-resistance and high-resistance state by external pressure, and such devices are being developed commercially. Samarium monosulfide also generates an electric voltage upon moderate heating to about 150 °C (302 °F), which can be applied in thermoelectric power converters. Analysis of relative concentrations of the samarium and neodymium isotopes 147Sm, 144Nd, and 143Nd allows determination of the age and origin of rocks and meteorites in samarium–neodymium dating. Both elements are lanthanides and are very similar physically and chemically. Thus, Sm–Nd dating is either insensitive to partitioning of the marker elements during various geologic processes, or such partitioning can be well understood and modeled from the ionic radii of these elements. The Sm3+ ion is a potential activator for use in warm-white light-emitting diodes. It offers high luminous efficacy due to narrow emission bands, but its generally low quantum efficiency and weak absorption in the UV-A to blue spectral region hinder commercial application. Samarium is used for ionosphere testing. A rocket spreads samarium monoxide as a red vapor at high altitude, and researchers test how the atmosphere disperses it and how it affects radio transmissions. Samarium hexaboride, SmB6, has recently been shown to be a topological insulator with potential uses in quantum computing. Biological role and precautions: Samarium salts stimulate metabolism, but it is unclear whether this is an effect of samarium or of other lanthanides present with it. The total amount of samarium in adults is about 50 μg, mostly in the liver and kidneys, with ~8 μg/L dissolved in blood. Samarium is not absorbed by plants to a measurable concentration and so is normally not part of the human diet.
However, a few plants and vegetables may contain up to 1 part per million of samarium. Insoluble salts of samarium are non-toxic, and the soluble ones are only slightly toxic. When ingested, only about 0.05% of samarium salts is absorbed into the bloodstream; the remainder is excreted. From the blood, 45% goes to the liver and 45% is deposited on the surface of the bones, where it remains for 10 years; the remaining 10% is excreted.
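As a rough consistency check on the 127 Bq/g activity figure quoted earlier in this article, the specific activity of natural samarium can be estimated from the 147Sm half-life alone. This is an illustrative sketch: the 147Sm isotopic abundance of about 15% is an assumed input not stated in the text, and the far weaker contributions of the other long-lived isotopes are neglected.

```python
import math

# Plausibility check: activity of 1 g of natural samarium from 147Sm alone.
AVOGADRO = 6.022e23               # atoms per mole
MOLAR_MASS_SM = 150.36            # g/mol, natural samarium
ABUNDANCE_147 = 0.1499            # assumed isotopic fraction of 147Sm
HALF_LIFE_S = 1.06e11 * 3.156e7   # 1.06e11 years, converted to seconds

atoms_147_per_gram = ABUNDANCE_147 * AVOGADRO / MOLAR_MASS_SM
decay_constant = math.log(2) / HALF_LIFE_S          # decays per atom per second
activity_bq = decay_constant * atoms_147_per_gram   # decays per second = Bq

print(f"{activity_bq:.0f} Bq/g")  # ~124 Bq/g, close to the quoted 127 Bq/g
```

The same style of estimate, using the 6,800 barn thermal cross section quoted in the neutron-absorber section together with an assumed metal density, gives a thermal-neutron absorption mean free path in samarium on the order of tens of micrometres.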
**Gonadosomatic index** Gonadosomatic index: In biology, the gonadosomatic index (GSI) is the calculation of the gonad mass as a proportion of the total body mass. It is represented by the formula: GSI = [gonad weight / total tissue weight] × 100 It is a tool for measuring the sexual maturity of animals in relation to ovary and testis development. The index is frequently used as a reporting endpoint in OECD test guidelines, where it may serve as an indication or evidence of a potential endocrine-disrupting effect of chemicals in a regulatory framework (EFSA and ECHA, 2017).
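A minimal sketch of the calculation defined above; the weights are invented purely for illustration.

```python
def gonadosomatic_index(gonad_weight: float, total_weight: float) -> float:
    """GSI = (gonad weight / total tissue weight) * 100, as defined above."""
    if total_weight <= 0:
        raise ValueError("total tissue weight must be positive")
    return gonad_weight / total_weight * 100.0

# Hypothetical example: a 250 g fish with 12 g gonads.
print(f"GSI = {gonadosomatic_index(12.0, 250.0):.1f}%")  # GSI = 4.8%
```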
**Socket 2** Socket 2: Socket 2 was one of a series of CPU sockets into which various x86 microprocessors were inserted. It was an updated Socket 1 with added support for Pentium OverDrive processors. Socket 2 was a 238-pin low insertion force (LIF) or zero insertion force (ZIF) 19×19 pin grid array (PGA) socket suitable for the 5-volt, 25 to 66 MHz 486 SX, 486 DX, 486 DX2, 486 OverDrive and 63 or 83 MHz Pentium OverDrive processors.
**Tree view** Tree view: A tree view is a graphical widget (graphical control element) within a graphical user interface (GUI) in which users can navigate and interact intuitively with concise, hierarchical data presented as nodes in a tree-like format. It can also be called an outline view. Appearance: A tree view is usually a vertical list of nodes arranged in a tree-like structure. Each node represents a single data item, displayed as an indented line of text or a rectangular box. The indentation (and sometimes a line drawn between nodes) is used to indicate levels of hierarchy. Every tree view has a root node from which all nodes descend. Below the root node and indented to the right are its child nodes. Each node has exactly one parent node and can have zero or more child nodes. If a node (other than the root node) has a child or children, it is called a branch node. If it has no children, it is a leaf node. This creates a hierarchical tree-like structure, with branches and subbranches emerging downward and rightwards. The nodes can be differentiated by different colors, icons and fonts to represent the nested relationship between parent nodes and child nodes. An item can be expanded to reveal subitems, if any exist, and collapsed to hide subitems. Features: Interactivity Tree view allows users to interact with hierarchical data in a variety of ways, such as: expanding and collapsing nodes to reveal or hide their child nodes and thus navigate through the tree structure according to one's needs; searching and filtering nodes based on specific criteria such as date; renaming or deleting nodes using context menus; copying and moving (dragging and dropping) nodes to other sections of the tree to rearrange them; and opening a node in a separate window. Features: Customizability Tree views can be customized for visual appeal and efficiency in the following ways: Input methods: Tree views can be customized to support various input methods such as mouse, keyboard, and touch input so that users can interact using their preferred method. Users can use their mouse to click on a node to select it, move their mouse to drag, and then release the mouse button to drop nodes in order to rearrange them. They can also use keyboard shortcuts to navigate and interact with the tree. Features: Look and feel: Developers (and sometimes users) can tailor the look and feel of tree views as well to match specific visual requirements of certain applications. Icons, fonts and colors used to display nodes, animations and effects to represent node expansion and collapse, and custom behaviors for drag-and-drop actions can be implemented. The context menu options can be customized for an application so that users can only perform specific actions on nodes. Features: Accessibility: Tree views can offer accessibility features for users with disabilities. Advantages: Tree views offer the following advantages: They display hierarchical data in a concise and easy-to-follow format, so that users can easily walk through and interact with the data. They are customizable, so their appearance and behavior can be tailored to meet specific requirements of an application. They are interactive and allow the use of different input methods. They are flexible and potent navigational tools which can be used in a variety of applications (such as file managers). Disadvantages: If the nested or hierarchical relationship of items is not to be emphasized, then a tree view would not be the optimal choice; a regular list would be more appropriate.
For large amounts of data or deeply nested hierarchies, tree views can become visually disorderly and difficult to navigate, leading to inefficiency and productivity loss, because users would spend more time walking through the structure than working with the data. They are more complex, and thus more difficult to maintain, than simpler structures like lists and tables. For developers, customization options with animations and complex behaviors can increase time spent on implementation and debugging. Application: Tree views are used in situations where hierarchical data needs to be displayed and navigated in a graphical interface. For example, they have been used in: file managers to display the hierarchical structure of directories and files residing in a computer file system so that users can navigate the directory tree and open, close and manage their files more efficiently. Application: email clients to display the hierarchical structure of email folders and messages, helping users to view and reply to email messages, and manage their inbox. organizational charts to display the hierarchical structure of an organization's employees and departments. network topologies programming frameworks for building graphical applications. XML documents to present hierarchical data. outliner applications (as extended tree view), where each node consists of editable text.
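The one-parent/many-children relationship and the expand/collapse behaviour described above can be made concrete with a short, toolkit-agnostic sketch; the names and the text rendering are illustrative and do not correspond to any particular GUI framework's API.

```python
class TreeNode:
    """A node with one parent, zero or more children, and a visibility flag."""

    def __init__(self, label):
        self.label = label
        self.parent = None
        self.children = []
        self.expanded = False

    def add_child(self, child):
        child.parent = self
        self.children.append(child)
        return child

    def render(self, depth=0):
        # Indent once per hierarchy level; collapsed nodes hide their subtree.
        print("  " * depth + self.label)
        if self.expanded:
            for child in self.children:
                child.render(depth + 1)

root = TreeNode("/")
home = root.add_child(TreeNode("home"))
home.add_child(TreeNode("notes.txt"))
root.expanded = home.expanded = True
root.render()  # prints the three nodes with increasing indentation
```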
**Polycomb protein EED** Polycomb protein EED: Polycomb protein EED is a protein that in humans is encoded by the EED gene. Function: Polycomb protein EED is a member of the Polycomb-group (PcG) family. PcG family members form multimeric protein complexes, which are involved in maintaining the transcriptional repressive state of genes over successive cell generations. This protein interacts with enhancer of zeste 2, the cytoplasmic tail of integrin β7, the human immunodeficiency virus type 1 (HIV-1) MA protein, and histone deacetylase proteins. This protein mediates repression of gene activity through histone deacetylation, and may act as a specific regulator of integrin function. Two transcript variants encoding distinct isoforms have been identified for this gene. Clinical significance: In humans, a de novo mutation in EED has been reported in an individual displaying symptoms similar to those of Weaver syndrome. Interactions: EED has been shown to interact with the partners noted above, including enhancer of zeste 2, integrin β7, the HIV-1 MA protein, and histone deacetylases.
**Prachetas** Prachetas: Pracetas (Sanskrit: प्रचेतस्, lit. 'the prescient one') is a term in Hindu mythology with a number of definitions: It is an epithet of Varuna. It is a name of one of the ten Prajapatis, the son of Suvarna, a law giver. It is the name of the grandson of the sage Marichi and Kala, Varuna, the water god who is their grandson through their son Kashyapa and his wife Aditi. It is the designation for a group of beings in the Vedas. It is the collective term for the ten great-grandsons of Prithu and Archi. Vedas: Pracetas are those which bring consciousness to the outside, through the development of the senses that are active as sensations. These senses are the five forces of mind, five different angles of reflection; their formation took place with the help of the Pracetas. In the Rig Veda, Mantra I.41.1, which reads: यं रक्षन्ति प्रचेतसो वरुणो मित्रो अर्यमा | नू चित्स दभ्यते जनः || the word pracetas refers to men of knowledge, men who are learned and wise. But in the Rig Veda, Mantra I.5.7, which reads: आ त्वा विशन्त्वाशवः सोमास इन्द्र गिर्वणः | शं ते सन्तु प्रचेतसे || (गिर्वणः इन्द्र) Praise-worthy Lord! (आशवः सोमासः आ विशन्तु त्वा) Impatient seekers may enter Thee. May they (सन्तु शं) be gratifying (ते) to Thee, (प्र-चेतसे) the super-conscious Being. This refers to the "super-conscious" being in whom it is prayed that the "impatient seekers" be allowed to enter (i.e. be merged with). Puranas: According to the Puranas, Pracetas was a descendant of Druhyu; he was the son of Duryaman, who was the son of Dhrita, the great-great-great grandson of Druhyu. Pracetas had one hundred sons, who were the princes of the Mlechchhas, the barbarians of the north. Pracetas is one of the Prajapatis, and an ancient sage and law-giver. It is also said that there were ten Pracetas who were the sons of Prāchinabarhis and great grandsons of Prithu; according to the Vishnu Purana, they had passed ten thousand years in the great ocean deep in meditation upon Vishnu, who made them the progenitors of mankind. As the story goes, the eldest of the ten sons of Prāchinbarhis, collectively known as Pracetas, became the ruler; they cleared forests and made land fit for agriculture; they married the daughters of Soma, who begot sons called Daksha Pracetas. There were 49 kings up to Daksha Pracetas. The Pracetas emerged from the ocean after their long sojourn to find the Earth covered by trees; they created wind and fire and destroyed the trees. Brahma, however, requested that they not do so, and solemnized their marriage with Marisha; it was their union that gave a second body to Daksha Prajapati.
**Rhizomelic chondrodysplasia punctata** Rhizomelic chondrodysplasia punctata: Rhizomelic chondrodysplasia punctata is a rare developmental brain disorder characterized by abnormally short arms and legs (rhizomelia), seizures, recurrent respiratory tract infections and congenital cataracts. The cause is a genetic mutation that results in low levels of plasmalogens, which are a type of lipid found in cell membranes throughout the body, but whose function is not known. Signs and symptoms: Rhizomelic chondrodysplasia punctata has the following symptoms: Bilateral shortening of the femur, resulting in short legs Post-natal growth problems (deficiency) Cataracts Intellectual disability Possible seizures Possible infections of respiratory tract Genetics: This condition is a consequence of mutations in the PEX7 gene, the GNPAT gene (which is located on chromosome 1) or the AGPS gene. The condition is inherited in an autosomal recessive manner. Pathophysiology: The mechanism of rhizomelic chondrodysplasia punctata in the case of type 1 of this condition involves a defect in PEX7, whose product is involved in peroxisome assembly. There are 3 pathways that depend on peroxisomal biogenesis factor 7 activities, including: AGPS (catalyzes plasmalogen biosynthesis) PHYH (catalyzes catabolism of phytanic acid) ACAA1 (catalyzes beta-oxidation of straight-chain VLCFAs) Diagnosis: The diagnosis of rhizomelic chondrodysplasia punctata can be based on genetic testing as well as radiography results, plus a physical examination of the individual. Types Type 1 (RCDP1) is associated with PEX7 mutations; these are peroxisome biogenesis disorders where proper assembly of peroxisomes is impaired. Type 2 (RCDP2) is associated with DHAPAT mutations. Type 3 (RCDP3) is associated with AGPS mutations. Treatment: Management of rhizomelic chondrodysplasia punctata can include physical therapy; additionally, orthopedic procedures have sometimes improved function in affected people. Prognosis: The prognosis is poor in this condition, and most children die before the age of 10. However, some survive to adulthood, especially if they have a non-classical (mild) form of RCDP. Children with classical, or severe, RCDP1 have severe developmental disabilities. Most of them achieve early developmental skills, such as smiling, but they will not develop skills expected from a baby older than six months (such as feeding themselves or walking). By contrast, children with non-classical mild RCDP1 often learn to walk and talk.
**Culturally modified tree** Culturally modified tree: A culturally modified tree (CMT) is a tree modified by indigenous people as part of their tradition. Such trees are important sources for the history of certain regions. The term is used in western Canada and the United States. In British Columbia, one of the most commonly modified trees, particularly on the coast, is the Western Red Cedar. The Sami people of northern Scandinavia and indigenous people of southeast Australia modify trees. Basque herders left thousands of trees in the western United States between 1860 and 1930. Regions: Australia The role of cedars, spruces, etc. is taken over by quite different species in Australia. Here the red (river) gum (Eucalyptus camaldulensis) and the grey box (Eucalyptus moluccana) are of most importance. There are certain similarities as far as the usage is concerned. Rhoads reported in 1992 that, within the territory of Southwest Victoria (about 10,000 km²), 228 CMTs had been found in the vicinity of ancient camps. Regions: Canada In British Columbia, Canada, these trees are protected by complex laws. Trees dating before 1846 and registered as CMTs are not allowed to be logged. The first lawsuit concerning CMTs was against a Canadian who had logged CMTs over 300 years old. The oldest living CMT was found in British Columbia; it dates back to the 12th century. In Canada, where research has for obvious reasons been concentrated in the western provinces with their old forests, Ontario documented CMTs in 2001. In Nagagamisis Provincial Park most trees found were between 80 and 110 years old, some probably more than 400. Regions: On Hanson Island alone, David Garrick documented 1800 CMTs. The Kwakwaka'wakw of the region were able to stop the destruction of their archive. Garrick also found trees in the Great Bear Rainforest on the territory of the Gitga'at First Nation. Some trees were slated to be logged for a street in the area of Langford; in February 2008, the Times Colonist reported on protesters being removed. No license was given between 1996 and 2006, but in the latter year logging was allowed once again, against the resistance of the Haida on Haida Gwaii. Even if loggers accept the restrictions and spare a CMT, these trees are endangered because they lose their "neighbours" and with them the protection against heavy storms. Consequently, the Hupacasath First Nation on the western shore of Vancouver Island claims a protective zone around the trees of at least 20 to 30 metres. The trees are a source of utmost importance for the history of the First Nations, a history that is heavily dependent on oral traditions and archaeological findings for the pre-contact phase. Regions: This causes many problems for historians, ethnohistorians, and anthropologists. The logging industry has reduced the old forests to a minimum in most regions and thus destroyed the culturally modified trees. Another problem is that knowledge of these places is not necessarily public. That is the case with many Nuu-chah-nulth on Vancouver Island. Only that group knows the corresponding rituals, devices, stories and dances, so their consent is needed. The tiny island of Flores is home to 71 registered culturally modified trees, which are protected like archives, libraries, historic sites or memorials.
After historians and the courts had recognized that the trees of Meares Island are crucial for the culture and history of the Indian Nation living there, other indigenous groups started to register CMTs in their own reserves and in their traditional territories to get the same protection for them. Historians and Indians worked closely together. Regions: Gottesfeld identified 21 species that played a role as CMTs. Of utmost importance is the Western red cedar (Thuja plicata), but the yellow cedar (Chamaecyparis nootkatensis), spruces (Picea glauca and others), hemlock (Tsuga heterophylla) and pines (Pinus contorta, Pinus ponderosa) are also quite frequent, as are Populus tremuloides, Populus trichocarpa and Alnus rubra. The bark of hemlock and certain spruces was important for nourishment and medicine. The resin of spruces was used as a kind of glue. Regions: Scandinavia CMTs have become important for the history of Scandinavia, too. The Sami people, who also ate certain kinds of bark, were displaced northwards in the 19th century by the Swedish population, who did not eat bark. Consequently, the traces of bark peelers are interrupted from one year to the next, so that historians can tell exactly when the last Sami left the region under examination. The oldest finding ever registered is 2800 years old. Meanwhile, the methods are so refined that even fossil trees have become an important source for human history. Regions: United States One of the most surprising findings came from research within the Bob Marshall Wilderness in northwestern Montana. This is a wilderness of about 3,000 km² (plus another 3,000 km² of neighbouring wildernesses) that was never used by non-aboriginal people. There are no houses, streets, fields or pastures. Nevertheless, the CMTs showed that between at least 1665 and 1938 indigenous people peeled bark and made other uses of the trees. Regions: In 1985, a protection program was started in Washington's Gifford Pinchot National Forest. At 338 spots, more than 6000 CMTs were identified, of which 3000 are protected now. Regions: Seventeen CMTs were found in the Blue Mountain area within Pike National Forest, at least 26 in Florissant Fossil Beds National Monument. Trees more than 200 years old were registered in Manitou Experimental Forest north of Woodland Park. Most of these trees within the territory of the Ute are ponderosa pines. Ute elders have differing opinions as to whether CMTs are a tradition. Researchers know that they do not have much time. The trees have a life expectancy of 300 to 600 years. Many could be dated, having been peeled between 1816 and 1848. In February 2008, the Colorado Historical Society decided to invest a part of its 7 million dollar budget into a CMT project in Mesa Verde National Park. The Bureau of Land Management provides a form for the documentation of CMTs, in conjunction with History Colorado's Office of Archaeology & Historic Preservation. Reading: R. Andersson, Historical Land-Use Information from Culturally Modified Trees, Diss. Swedish University of Agricultural Sciences, Umea 2005 M. Antrop, Why landscapes of the past are important for the future. Landscape and urban planning 70 (2005) 21-34 Arcas Associates, Native Tree Use on Meares Island, B.C., 4 volumes, Victoria 1986 I. Bergman/L. Östlund/O. Zackrisson, The use of plants as regular food in ancient subarctic economies. A case study based on Sami use of Scots pine innerbark. Arctic anthropology 41 (2004) 1-13 M. D. 
Blackstock, Faces in the forest: First Nations art created on living trees. Montreal/Kingston: McGill-Queen's University Press 2001, 224 pp. Reading: G. Carver, An Examination of Indigenous Australian Culturally Modified Trees in South Australia. Doctoral thesis. Department of Archaeology, Flinders University, Australia 2001 Juliet Craig: "Nature was the provider". Traditional ecological knowledge and inventory of culturally significant plants and habitats in the Atleo River Watershed, Ahousaht Territory, Clayoquot Sound, PhD thesis, Victoria 1998 V. V. Eetvelde/M. Antrop, Analyzing structural and functional changes of traditional landscapes: two examples from Southern France. Landscape and urban planning 67 (2004) 79-95 T. S. Ericsson, Culture within nature: Key areas for interpreting forest history in boreal Sweden (Acta Universitatis Agriculturae Sueciae), 2001 David Garrick, Shaped Cedars and Cedar Shaping: A Guidebook to Identifying, Documenting, Appreciating and Learning from Culturally Modified Trees. Special Limited Edition, Western Canada Wilderness Committee, 1998. Reading: L. M. J. Gottesfeld, The importance of bark products in the aboriginal economies of northwestern British Columbia, Canada. Economic botany 46 (1992) 148-157 R. J. Hebda/R. W. Mathewes, Holocene history of Cedar and Native Indian cultures of the North American Pacific coast. Science 225 (1984) 711-713 L. M. Johnson, "A place that's good". Gitksan landscape perception and ethnoecology. Human ecology 28 (2000) 301-325 J. Mallea-Olaetxe, Speaking through the aspens: Basque tree carvings in California and Nevada. University of Nevada Press, Reno / Las Vegas 2000, 237 pp. Reading: Amanda L. Marshall, Culturally modified trees of the Nechako plateau: cambium utilization amongst traditional carrier (Dahkel) peoples. Master's thesis. Department of Archaeology, Simon Fraser University 2002 C. Mobley, The Ship Island site: Tree-ring dating the last battle between the Stikine Tlingit and the Tsimshian. A report to the Alaska Humanities Forum, Grant 1999, 36-96 C.M. Mobley and M. Eldridge, Culturally Modified Trees in the Pacific Northwest. Arctic Anthropology 29 (1992) 91-110. Reading: J. Oliver, Beyond the Water's Edge: Towards a Social Archaeology of Landscape on the Northwest Coast. Canadian Journal of Archaeology 31 (2007) 1-27. B. Pegg, Dendrochronology, CMTs, and Nuu-chah-nulth History on the West Coast of Vancouver Island. Canadian Journal of Archaeology 24 (2000) 77–88. P. Prince, Dating and Interpreting Pine Cambium collection Scars from Two Parts of the Nechako River Drainage, British Columbia. Journal of Archaeological Science 28 (2001) 253–263. Reading: Sheila D. Ready, Peeled Trees on the Payette National Forest, Inner Bark Utilization as a Food Resource by Native Americans, USDA Payette National Forest, Supervisor's Office, McCall, Idaho 1993 J. W. Rhoads, Significant sites and non-site archaeology: a case-study from south-east Australia. World archaeology 24 (1992) 199-217 Arnoud H. Stryd/Vicki Feddema, Sacred Cedar. The Cultural and Archaeological Significance of Culturally Modified Trees, digital (PDF, 1.3 MB): Stryd/Feddema A.H. Stryd and M. Eldridge, CMT Archaeology in British Columbia: The Mears Island Studies. BC Studies 99 (1993) 184–234. Reading: T. W. Swetnam, Peeled ponderosa pine trees: A record of inner bark utilization by Native Americans. Journal of ethnobiology 4 (1984) 177-190
**Video game remake** Video game remake: A video game remake is a video game closely adapted from an earlier title, usually for the purpose of modernizing a game with updated graphics for newer hardware and gameplay for contemporary audiences. Typically, a remake of such game software shares essentially the same title, fundamental gameplay concepts, and core story elements of the original game, although some aspects of the original game may have been changed for the remake. Remakes are often made by the original developer or copyright holder, and sometimes by the fan community. If created by the community, video game remakes are sometimes also called fangames and can be seen as part of the retro gaming phenomenon. Definition: A remake offers a newer interpretation of an older work, characterized by updated or changed assets. For example, The Legend of Zelda: Ocarina of Time 3D and The Legend of Zelda: Majora's Mask 3D for the Nintendo 3DS are considered remakes of their original versions for the Nintendo 64, and not remasters or ports, since there are new character models and texture packs. The Legend of Zelda: Wind Waker HD for Wii U would be considered a remaster, since it retains the same, albeit upscaled, aesthetics of the original. A remake typically maintains the same story, genre, and fundamental gameplay ideas of the original work. The intent of a remake is usually to take an older game that has become outdated and update it for a new platform and audience. A remake will not necessarily preserve the original gameplay, especially if it is dated, instead remaking the gameplay to conform to the conventions of contemporary games or later titles in the same series in order to make the game marketable to a new audience. For example, for Sierra's 1991 remake of Space Quest, the developers used the engine, point-and-click interface, and graphical style of Space Quest IV: Roger Wilco and The Time Rippers, replacing the graphics and text-parser interface of the original. However, other elements, like the narrative, puzzles and sets, were largely preserved. Another example is Black Mesa, a remake built entirely from the ground up in the Source Engine that remakes in-game textures, assets, models, and facial animations, while taking place during the events of the original Half-Life game. Resident Evil 2 (2019) is a remake of the 1998 game Resident Evil 2; while the original uses tank controls and fixed camera angles, the remake features "over-the-shoulder" third-person shooter gameplay similar to Resident Evil 4 and more recent games in the series, which allows players the option to move while using their weapons, as in Resident Evil 6. Definition: Ports A port is a conversion of a game to a new platform that relies heavily on existing work and assets. A port may include various enhancements like improved performance, resolution, and sometimes even additional content, but differs from a remake in that it still relies heavily on the original assets and engine of the source game. Sometimes, ports even remove content that was present in the original version.
For example, the handheld console ports of Mortal Kombat II had fewer characters than the original arcade game and other console ports due to system storage limitations, but otherwise were still faithful to the original in terms of gameplay. Compared to the intentional video game remake or remaster, which is often done years or decades after the original came out, ports or conversions are typically released during the same generation as the original (the exception being mobile gaming versions of PC games, such as Grand Theft Auto III, since mobile gaming platforms did not exist until the 2010s). Home console ports usually came out less than a year after the original arcade game, such as the distribution of Mortal Kombat for home consoles by Acclaim Entertainment. Since the 2000s, as arcade releases are no longer the original launch platform for a video game, publishers tend to release a game simultaneously on several consoles first and then port it to the PC later. Definition: Remaster A port that contains a great deal of remade assets may sometimes be considered a remaster or a partial remake, although video game publishers are not always clear on the distinction. DuckTales: Remastered, for example, uses the term "Remastered" to distinguish itself from the original NES game it was based on, even though it is a clean-slate remake with a different engine and assets. Compared to a port, which is typically released in the same era as the original, a remaster is done years or decades after the original in order to take advantage of generational technological improvements (which a port avoids doing). Unlike a remake, which often changes the now-dated gameplay, a remaster is very faithful to the original in that aspect (in order to appeal to the nostalgic audience) while permitting only a limited number of gameplay tweaks for the sake of convenience. Definition: Reboots Games that use an existing brand but are conceptually very different from the original, such as Wolfenstein 3D (1992) and Return to Castle Wolfenstein (2001) or Tomb Raider (1996) and Tomb Raider (2013), are usually regarded as reboots rather than remakes. History: In the early history of video games, remakes were generally regarded as "conversions" and seldom associated with nostalgia. Due to limited and often highly divergent hardware, games appearing on multiple platforms usually had to be entirely remade. These conversions often included considerable changes to the graphics and gameplay, and could be regarded retroactively as remakes, but are distinguished from later remakes largely by intent. A conversion is created with the primary goal of tailoring a game to a specific piece of hardware, usually contemporaneous or nearly contemporaneous with the original release. An early example was Gun Fight, Midway's 1975 reprogrammed version of Taito's arcade game Western Gun, with the main difference being the use of a microprocessor in the reprogrammed version, which allowed improved graphics and smoother animation than the discrete logic of the original. In 1980, Warren Robinett created Adventure for the Atari 2600, a graphical version of the 1970s text adventure Colossal Cave Adventure. Also in 1980, Atari released the first officially licensed home console game conversion of an arcade title, Taito's 1978 hit Space Invaders, for the Atari 2600. The game became the first "killer app" for a video game console by quadrupling the system's sales.
From the second console generation onward, it became a common trend to port arcade games to home systems, though at the time such ports were often more limited than the original arcade games due to the technical limitations of home consoles. History: In 1985, Sega released a pair of arcade remakes of older home video games. Pitfall II: Lost Caverns was effectively a remake of both the original Pitfall! and its sequel with new level layouts and colorful, detailed graphics. That same year, Sega adapted the 1982 computer game Choplifter for the arcades, taking the fundamental gameplay of the original and greatly expanding it, adding new environments, enemies, and gameplay elements. This version was very successful, and was later adapted to the Master System and Famicom. Both of these games were distinguished from most earlier conversions in that they took major liberties with the source material, attempting to modernize both the gameplay as well as the graphics. History: Some of the earliest remakes to be recognized as such were attempts to modernize games to the standards of later games in the series. Some were even on the same platforms as the original, for example Ultima I: The First Age of Darkness, a 1986 remake of the original that appeared on multiple platforms, including the Apple II, the same platform the source game originated on. Other early remakes of this type include Sierra's early-1990s releases of King's Quest, Space Quest and Leisure Suit Larry. These games used the technology and interface of the most recent games in Sierra's series, and original assets in a dramatically different style. The intent was not simply to bring the game to a new platform, but to modernize older games which had in various ways become dated. History: With the birth of the retrogaming phenomenon, remakes became a way for companies to revive nostalgic brands. Galaga '88 and Super Space Invaders '91 were both attempts to revitalize aging arcade franchises with modernized graphics and new gameplay elements, while preserving many signature aspects of the original games. The 16-bit generation of console games was marked by greatly enhanced graphics compared to the previous generation, but often relatively similar gameplay, which led to an increased interest in remakes of games from the previous generation. Super Mario All-Stars remade the entire NES Mario series, and was met with great commercial success. Remake compilations of the Ninja Gaiden and Mega Man series followed. As RPGs increased in popularity, Dragon Quest, Ys and Kyūyaku Megami Tensei were also remade. In the mid-'90s, Atari released a series of remakes with the 2000 brand, including Tempest 2000, Battlezone 2000, and Defender 2000. After Atari's demise, Hasbro continued the tradition, with 3D remakes of Pong, Centipede, and Asteroids. History: By 1994, the popularity of CD-ROM led to many remakes with digitized voices and, sometimes, better graphics, although Computer Gaming World noted the "amateur acting" in many new and remade games on CD. Emulation also made perfect ports of older games possible, with compilations becoming a popular way for publishers to capitalize on older properties. History: Budget pricing gave publishers the opportunity to match their game's price with the perceived lower value proposition of an older game, opening the door for newer remakes.
In 2003, Sega launched the Sega Ages line for PlayStation 2, initially conceived as a series of modernized remakes of classic games, though the series later diversified to include emulated compilations. The series concluded with a release that combined the two approaches, and included a remake of Fantasy Zone II that ran, via emulation, on hardware dating to the time of the original release, one of the few attempts at an enhanced remake to make no attempt at modernization. The advent of downloadable game services like Xbox Live Arcade and PlayStation Network has further fueled the expanded market for remakes, as these platforms allow companies to sell their games at a lower price, seen as more appropriate for the smaller size typical of retro games. Some XBLA and PSN remakes include Bionic Commando Rearmed, Jetpac Refuelled, Wipeout HD (a remake not of the original Wipeout but of the two PSP games), Cyber Troopers Virtual-On Oratorio Tangram and Super Street Fighter II Turbo HD Remix. History: Some remakes may include the original game as a bonus feature. The 2009 remake of The Secret of Monkey Island took this a step further by allowing players to switch between the original and remade versions on the fly with a single button press. This trend was continued in the sequel, and is also a feature in Halo: Combat Evolved Anniversary and later in Halo 2 Anniversary as part of Halo: The Master Chief Collection. History: The Nintendo 3DS's lineup has included numerous remasters and remakes, including The Legend of Zelda: Ocarina of Time 3D, Star Fox 64 3D, Cave Story 3D, The Legend of Zelda: Majora's Mask 3D, Pokémon Omega Ruby and Alpha Sapphire, Metroid: Samus Returns, Mario & Luigi: Superstar Saga + Bowser's Minions, Luigi's Mansion, and Mario & Luigi: Bowser's Inside Story + Bowser Jr.'s Journey. Community-driven remakes: Games abandoned by the rights-holders often spark remakes created by hobbyists and game communities. An example is OpenRA, a modernized remake of the classic Command & Conquer real-time-strategy games. Beyond cross-platform support, it adds comfort functions and gameplay functionality inspired by successors of the original games. Other notable examples are Pioneers, a remake of and sequel in spirit to Frontier: Elite 2; CSBWin, a remake of the dungeon-crawler classic Dungeon Master; and Privateer Gemini Gold, a remake of Wing Commander: Privateer. Skywind is a fan remake of Morrowind (2002) running on Bethesda's Creation Engine, utilising the source code, assets and gameplay mechanics of Skyrim (2011). The original game developers, Bethesda Softworks, have given project volunteers their approval. The remake team includes over 70 volunteers in artist, composer, designer, developer, and voice-actor roles. In November 2014, the team reported having finished half of the remake's environment, over 10,000 new dialogue lines, and three hours of series-inspired soundtrack. The same open-development project is also working on Skyblivion, a remake of Oblivion (the game between Morrowind and Skyrim) in the Skyrim engine, and Morroblivion, a remake of Morrowind in the Oblivion engine (which still has a significant userbase on older PCs). Demakes: The term demake may refer to games created deliberately with an art style inspired by older games of a previous video game generation. The action platformer Mega Man 9 is an example of such a game.
Although remakes typically aim to adapt a game from a more limited platform to a more advanced one, a rising interest in older platforms has inspired some to do the opposite, remaking or adapting modern games to the technical standards of older platforms, usually going so far as to implement them on obsolete or abandoned hardware platforms, hence the term "Demake". Such games are either physical or emulated. Modern demakes often change 3D gameplay to 2D. Popular demakes include Quest: Brian's Journey, an official Game Boy Color port of Quest 64; Super Smash Land, a Game Boy-style demake of Super Smash Bros.; D-Pad Hero, an NES-esque demake of Guitar Hero; Rockman 7 FC and Rockman 8 FC, NES-styled demakes of Mega Man 7 and Mega Man 8, respectively; Gang Garrison 2, a pixelated demake of Team Fortress 2; and Halo 2600, an Atari 2600-style demake of Microsoft's Halo series. There is also an NES-style demake of the Touhou Project game Embodiment of Scarlet Devil. Some demakes are created to showcase and push the abilities of older generation systems such as the Atari 2600. An example of this is the 2012 game Princess Rescue, a demake of the NES title Super Mario Bros. Demakes: While most demakes are homebrew efforts from passionate fans, some are officially endorsed by the original creators, such as Pac-Man Championship Edition's Famicom/NES demake being printed onto Japanese physical editions of the Namcot Collection as an original bonus game. For much of the 1990s in China, Hong Kong, and Taiwan, black-market developers created unauthorized adaptations of then-modern games such as Street Fighter II, Mortal Kombat, Final Fantasy VII or Tekken for the NES, which enjoyed considerable popularity in the regions because of the availability of low-cost compatible systems.
**Vulpian–Heidenhain–Sherrington phenomenon** Vulpian–Heidenhain–Sherrington phenomenon: The Vulpian–Heidenhain–Sherrington phenomenon is the slow contraction of denervated skeletal muscle produced by stimulating the autonomic cholinergic fibers innervating its blood vessels. It is named after the French neurologist Alfred Vulpian (1826–87), the German physiologist Rudolf Heidenhain (1834–1897) and the English neurophysiologist Charles Scott Sherrington (1857–1952).
**Diurnal air temperature variation** Diurnal air temperature variation: In meteorology, diurnal temperature variation is the variation between a high air temperature and a low temperature that occurs during the same day. Temperature lag: Temperature lag, also known as thermal inertia, is an important factor in diurnal temperature variation. Peak daily temperature generally occurs after noon, as air continues to absorb net heat for some time past midday. Similarly, the minimum daily temperature generally occurs substantially after midnight, indeed during the early morning in the hour around dawn, since heat is lost all night long. The analogous annual phenomenon is seasonal lag. Temperature lag: As the solar energy strikes the Earth's surface each morning, a shallow 1–3-centimetre (0.39–1.18 in) layer of air directly above the ground is heated by conduction. Heat exchange between this shallow layer of warm air and the cooler air above is very inefficient. On a warm summer's day, for example, air temperatures may vary by 16.5 °C (30 °F) from just above the ground to chest height. Incoming solar radiation exceeds outgoing heat energy for many hours after noon, and equilibrium is usually reached between 3 and 5 p.m., but this may be affected by a variety of factors such as large bodies of water, soil type and cover, wind, cloud cover/water vapor, and moisture on the ground. Differences in variation: Diurnal temperature variations are greatest very near Earth's surface. The Tibetan and Andean Plateaus present one of the largest differences in daily temperature on the planet. Differences in variation: High desert regions typically have the greatest diurnal temperature variations, while low-lying humid areas typically have the least. This explains why an area like Pinnacles National Park can have high temperatures of 38 °C (100 °F) during a summer day, and then have lows of 5–10 °C (41–50 °F). At the same time, Washington D.C., which is much more humid, has temperature variations of only 8 °C (14 °F); urban Hong Kong has a diurnal temperature range of little more than 4 °C (7.2 °F). Differences in variation: While the National Park Service claimed that the world single-day record is a variation of 102 °F (56.7 °C) (from 46 °F or 7.8 °C to −56 °F or −48.9 °C) in Browning, Montana in 1916, the Montana Department of Environmental Quality claimed that Loma, Montana also had a variation of 102 °F (56.7 °C) (from −54 °F or −47.8 °C to 48 °F or 8.9 °C) in 1972. Both these extreme daily temperature changes were the result of sharp air-mass changes within a single day. The 1916 event was an extreme temperature drop, resulting from frigid Arctic air from Canada invading northern Montana, displacing a much warmer air mass. The 1972 event was a chinook event, where air from the Pacific Ocean overtopped mountain ranges to the west, and dramatically warmed in its descent into Montana, displacing frigid Arctic air and causing a drastic temperature rise. Differences in variation: In the absence of such extreme air-mass changes, diurnal temperature variations typically range from 10 or fewer degrees Fahrenheit in humid, tropical areas to 40–50 degrees Fahrenheit in higher-elevation, arid to semi-arid areas, such as parts of the U.S. Western states' Intermountain Plateau areas, for example Elko, Nevada, Ashton, Idaho and Burns, Oregon. The higher the humidity, the lower the diurnal temperature variation.
Differences in variation: In Europe, due to its more northern latitude and close proximity to large warm water bodies (such as the Mediterranean), differences in daily temperature are not as pronounced as in other continents. However, places in Southern Europe significantly far from the Mediterranean tend to have high differences in daily temperatures, some around 14 °C (25 °F). These include Southwestern Iberia (e.g. Alvega or Badajoz) or the high-altitude plateaus of Turkey (if considered part of Europe) (e.g. Kayseri). Viticulture: Diurnal temperature variation is of particular importance in viticulture. Wine regions situated in areas of high altitude experience the most dramatic swing in temperature variation during the course of a day. In grapes, this variation has the effect of producing high acid and high sugar content as the grapes' exposure to sunlight increases the ripening qualities while the sudden drop in temperature at night preserves the balance of natural acids in the grape.
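Restating the definition numerically: the diurnal variation of a given day is simply the spread between its highest and lowest readings. The hourly values below are invented for illustration.

```python
# Illustrative computation of a day's diurnal temperature variation.
hourly_temps_c = [7.8, 6.5, 5.9, 8.4, 14.2, 21.7, 25.3, 24.1, 17.6, 11.0]

diurnal_range = max(hourly_temps_c) - min(hourly_temps_c)
print(f"Diurnal temperature variation: {diurnal_range:.1f} degrees C")  # 19.4
```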
**Small nucleolar RNA Me28S-Am2634** Small nucleolar RNA Me28S-Am2634: In molecular biology, Small nucleolar RNA Me28S-Am2634 (also known as snoRNA Me28S-Am2634) is a non-coding RNA (ncRNA) molecule which functions in the biogenesis of other small nuclear RNAs (snRNAs). Small nucleolar RNAs (snoRNAs) are modifying RNAs and are usually located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. Small nucleolar RNA Me28S-Am2634: snoRNA Me28S-Am2634 belongs to the C/D box class of snoRNAs, which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. It is predicted to guide the 2'-O-methylation of 28S ribosomal RNA (rRNA) residue A-2634. This snoRNA has currently only been identified in the fly species Drosophila melanogaster.
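As a purely illustrative aside, the conserved motifs quoted above can be located in a sequence with a simple scan. The sequence below is invented, and real box C/D annotation relies on structural context as well, not sequence alone.

```python
import re

# Toy scan for the box C (UGAUGA) and box D (CUGA) motifs in an RNA string.
sequence = "GGAUGAUGACCGUAAACGGUAUCAUGCCUGAGG"  # invented example

for name, motif in [("C box", "UGAUGA"), ("D box", "CUGA")]:
    for match in re.finditer(motif, sequence):
        print(f"{name} found at position {match.start() + 1}")  # 1-based
```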
**Diffusion** Diffusion: Diffusion is the net movement of anything (for example, atoms, ions, molecules, energy) generally from a region of higher concentration to a region of lower concentration. Diffusion is driven by a gradient in Gibbs free energy or chemical potential. It is possible to diffuse "uphill" from a region of lower concentration to a region of higher concentration, as in spinodal decomposition. Diffusion is a stochastic process due to the inherent randomness of the diffusing entity and can be used to model many real-life stochastic scenarios. Therefore, diffusion and the corresponding mathematical models are used in several fields beyond physics, such as statistics, probability theory, information theory, neural networks, finance, and marketing. Diffusion: The concept of diffusion is widely used in many fields, including physics (particle diffusion), chemistry, biology, sociology, economics, statistics, data science and finance (diffusion of people, ideas, data and price values). The central idea of diffusion, however, is common to all of these: a substance or collection undergoing diffusion spreads out from a point or location at which there is a higher concentration of that substance or collection. Diffusion: A gradient is the change in the value of a quantity, for example, concentration, pressure, or temperature, with the change in another variable, usually distance. A change in concentration over a distance is called a concentration gradient, a change in pressure over a distance is called a pressure gradient, and a change in temperature over a distance is called a temperature gradient. Diffusion: The word diffusion derives from the Latin word diffundere, which means "to spread out". A distinguishing feature of diffusion is that it depends on particle random walk, and results in mixing or mass transport without requiring directed bulk motion. Bulk motion, or bulk flow, is the characteristic of advection. The term convection is used to describe the combination of both transport phenomena. If a diffusion process can be described by Fick's laws, it is called normal diffusion (or Fickian diffusion); otherwise, it is called anomalous diffusion (or non-Fickian diffusion). Diffusion: When talking about the extent of diffusion, two length scales are used in two different scenarios: Brownian motion of an impulsive point source (for example, one single spray of perfume)—the square root of the mean squared displacement from this point. In Fickian diffusion, this is √(2nDt), where n is the dimension of this Brownian motion; Constant concentration source in one dimension—the diffusion length. In Fickian diffusion, this is 2√(Dt). Diffusion vs. bulk flow: "Bulk flow" is the movement/flow of an entire body due to a pressure gradient (for example, water coming out of a tap). "Diffusion" is the gradual movement/dispersion of concentration within a body, due to a concentration gradient, with no net movement of matter. An example of a process where both bulk motion and diffusion occur is human breathing. First, there is a "bulk flow" process. The lungs are located in the thoracic cavity, which expands as the first step in external respiration. This expansion leads to an increase in volume of the alveoli in the lungs, which causes a decrease in pressure in the alveoli. This creates a pressure gradient between the air outside the body at relatively high pressure and the alveoli at relatively low pressure.
The air moves down the pressure gradient through the airways of the lungs and into the alveoli until the pressure of the air and that in the alveoli are equal, that is, the movement of air by bulk flow stops once there is no longer a pressure gradient. Diffusion vs. bulk flow: Second, there is a "diffusion" process. The air arriving in the alveoli has a higher concentration of oxygen than the "stale" air in the alveoli. The increase in oxygen concentration creates a concentration gradient for oxygen between the air in the alveoli and the blood in the capillaries that surround the alveoli. Oxygen then moves by diffusion, down the concentration gradient, into the blood. The other consequence of the air arriving in the alveoli is that the concentration of carbon dioxide in the alveoli decreases. This creates a concentration gradient for carbon dioxide to diffuse from the blood into the alveoli, as fresh air has a very low concentration of carbon dioxide compared to the blood in the body. Diffusion vs. bulk flow: Third, there is another "bulk flow" process. The pumping action of the heart then transports the blood around the body. As the left ventricle of the heart contracts, the volume decreases, which increases the pressure in the ventricle. This creates a pressure gradient between the heart and the capillaries, and blood moves through blood vessels by bulk flow down the pressure gradient. Diffusion in the context of different disciplines: The concept of diffusion is widely used in: physics (particle diffusion), chemistry, biology, sociology, economics, and finance (diffusion of people, ideas and of price values). However, in each case the substance or collection undergoing diffusion is "spreading out" from a point or location at which there is a higher concentration of that substance or collection. Diffusion in the context of different disciplines: There are two ways to introduce the notion of diffusion: either a phenomenological approach starting with Fick's laws of diffusion and their mathematical consequences, or a physical and atomistic one, by considering the random walk of the diffusing particles. In the phenomenological approach, diffusion is the movement of a substance from a region of high concentration to a region of low concentration without bulk motion. According to Fick's laws, the diffusion flux is proportional to the negative gradient of concentrations. It goes from regions of higher concentration to regions of lower concentration. Some time later, various generalizations of Fick's laws were developed in the framework of thermodynamics and non-equilibrium thermodynamics. From the atomistic point of view, diffusion is considered as a result of the random walk of the diffusing particles. In molecular diffusion, the moving molecules are self-propelled by thermal energy. The random walk of small particles in suspension in a fluid was discovered in 1827 by Robert Brown, who found that minute particles suspended in a liquid medium, just large enough to be visible under an optical microscope, exhibit a rapid and continually irregular motion known as Brownian movement. The theory of Brownian motion and the atomistic backgrounds of diffusion were developed by Albert Einstein. Diffusion in the context of different disciplines: The concept of diffusion is typically applied to any subject matter involving random walks in ensembles of individuals.
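The random-walk picture can be made concrete with a small simulation: for an unbiased one-dimensional walk with unit steps, the mean squared displacement grows linearly in time, ⟨x²⟩ = 2Dt with D = 1/2 in these units. This is an illustrative sketch, not any standard library's diffusion routine.

```python
import random

random.seed(0)
walkers, steps = 2000, 1000  # ensemble size and walk length (arbitrary)

total_sq = 0
for _ in range(walkers):
    x = 0
    for _ in range(steps):
        x += random.choice((-1, 1))  # unbiased unit step
    total_sq += x * x

msd = total_sq / walkers
print(f"measured <x^2> = {msd:.0f}, theory 2*D*t = {steps}")  # both ~1000
```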
Diffusion in the context of different disciplines: In chemistry and materials science, diffusion refers to the movement of fluid molecules in porous solids. Molecular diffusion occurs when the collision with another molecule is more likely than the collision with the pore walls. Under such conditions, the diffusivity is similar to that in a non-confined space and is proportional to the mean free path. Knudsen diffusion occurs when the pore diameter is comparable to or smaller than the mean free path of the molecule diffusing through the pore. Under this condition, the collision with the pore walls becomes gradually more likely and the diffusivity is lower. Finally, there is configurational diffusion, which happens if the molecules have comparable size to that of the pore. Under this condition, the diffusivity is much lower compared to molecular diffusion and small differences in the kinetic diameter of the molecule cause large differences in diffusivity. Diffusion in the context of different disciplines: Biologists often use the terms "net movement" or "net diffusion" to describe the movement of ions or molecules by diffusion. For example, oxygen can diffuse through cell membranes so long as there is a higher concentration of oxygen outside the cell. However, because the movement of molecules is random, occasionally oxygen molecules move out of the cell (against the concentration gradient). Because there are more oxygen molecules outside the cell, the probability that oxygen molecules will enter the cell is higher than the probability that oxygen molecules will leave the cell. Therefore, the "net" movement of oxygen molecules (the difference between the number of molecules either entering or leaving the cell) is into the cell. In other words, there is a net movement of oxygen molecules down the concentration gradient. History of diffusion in physics: Historically, diffusion in solids was used long before the theory of diffusion was created. For example, Pliny the Elder described the cementation process, which produces steel from the element iron (Fe) through carbon diffusion. Another example, well known for many centuries, is the diffusion of colors in stained glass, earthenware, and Chinese ceramics. History of diffusion in physics: In modern science, the first systematic experimental study of diffusion was performed by Thomas Graham. He studied diffusion in gases, and the main phenomenon was described by him in 1831–1833: "...gases of different nature, when brought into contact, do not arrange themselves according to their density, the heaviest undermost, and the lighter uppermost, but they spontaneously diffuse, mutually and equally, through each other, and so remain in the intimate state of mixture for any length of time." The measurements of Graham contributed to James Clerk Maxwell deriving, in 1867, the coefficient of diffusion for CO2 in the air, with an error of less than 5%. History of diffusion in physics: In 1855, Adolf Fick, the 26-year-old anatomy demonstrator from Zürich, proposed his law of diffusion. He used Graham's research, stating his goal as "the development of a fundamental law, for the operation of diffusion in a single element of space". He asserted a deep analogy between diffusion and conduction of heat or electricity, creating a formalism similar to Fourier's law for heat conduction (1822) and Ohm's law for electric current (1827).
History of diffusion in physics: Robert Boyle demonstrated diffusion in solids in the 17th century by penetration of zinc into a copper coin. Nevertheless, diffusion in solids was not systematically studied until the second part of the 19th century. William Chandler Roberts-Austen, the well-known British metallurgist and former assistant of Thomas Graham, systematically studied solid-state diffusion, using the example of gold in lead, in 1896: "... My long connection with Graham's researches made it almost a duty to attempt to extend his work on liquid diffusion to metals." In 1858, Rudolf Clausius introduced the concept of the mean free path. In the same year, James Clerk Maxwell developed the first atomistic theory of transport processes in gases. The modern atomistic theory of diffusion and Brownian motion was developed by Albert Einstein, Marian Smoluchowski and Jean-Baptiste Perrin. Ludwig Boltzmann, in the development of the atomistic backgrounds of the macroscopic transport processes, introduced the Boltzmann equation, which has served mathematics and physics as a source of transport process ideas and concerns for more than 140 years. In 1920–1921, George de Hevesy measured self-diffusion using radioisotopes. He studied self-diffusion of radioactive isotopes of lead in liquid and solid lead. History of diffusion in physics: Yakov Frenkel (sometimes, Jakov/Jacob Frenkel) proposed, and elaborated in 1926, the idea of diffusion in crystals through local defects (vacancies and interstitial atoms). He concluded that the diffusion process in condensed matter is an ensemble of elementary jumps and quasichemical interactions of particles and defects. He introduced several mechanisms of diffusion and found rate constants from experimental data. History of diffusion in physics: Sometime later, Carl Wagner and Walter H. Schottky developed Frenkel's ideas about mechanisms of diffusion further. Presently, it is universally recognized that atomic defects are necessary to mediate diffusion in crystals. Henry Eyring, with co-authors, applied his theory of absolute reaction rates to Frenkel's quasichemical model of diffusion. The analogy between reaction kinetics and diffusion leads to various nonlinear versions of Fick's law. Basic models of diffusion: Diffusion flux Each model of diffusion expresses the diffusion flux with the use of concentrations, densities and their derivatives. Flux is a vector J representing the quantity and direction of transfer. Given a small area ΔS with normal ν, the transfer of a physical quantity N through the area ΔS per time Δt is ΔN = (J, ν)ΔSΔt + o(ΔSΔt), where (J, ν) is the inner product and o(⋯) is the little-o notation. If we use the notation of vector area ΔS = νΔS then ΔN = (J, ΔS)Δt + o(ΔSΔt). Basic models of diffusion: The dimension of the diffusion flux is [flux] = [quantity]/([time]·[area]). The diffusing physical quantity N may be the number of particles, mass, energy, electric charge, or any other scalar extensive quantity. For its density, n, the diffusion equation has the form ∂n/∂t = −∇⋅J + W, where W is the intensity of any local source of this quantity (for example, the rate of a chemical reaction). Basic models of diffusion: For the diffusion equation, the no-flux boundary conditions can be formulated as (J(x), ν(x)) = 0 on the boundary, where ν is the normal to the boundary at the point x. Fick's law and equations Fick's first law: the diffusion flux is proportional to the negative of the concentration gradient: J = −D∇n, Ji = −D ∂n/∂xi.
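As a worked instance of Fick's first law, the sketch below evaluates J = −D dn/dx for an assumed one-dimensional concentration drop; the diffusivity, concentrations, and layer thickness are illustrative values, not data from the article:

```python
# Hypothetical worked example of Fick's first law, J = -D * dn/dx.
# Numbers are illustrative only: D is a typical small-molecule diffusivity
# in water, and the gradient is a linear drop across a thin layer.
D = 1.0e-9                   # diffusion coefficient, m^2/s (assumed)
n_left, n_right = 1.0, 0.2   # concentrations, mol/m^3 (assumed)
thickness = 1.0e-4           # layer thickness, m (assumed)

grad_n = (n_right - n_left) / thickness   # dn/dx, mol/m^4
J = -D * grad_n                           # flux, mol/(m^2 s)
print(f"J = {J:.3e} mol/(m^2 s)")         # positive: flux runs left -> right
```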
The corresponding diffusion equation (Fick's second law) is ∂n(x,t)/∂t = ∇⋅(D∇n(x,t)), which for constant D reduces to ∂n(x,t)/∂t = DΔn(x,t), where Δ is the Laplace operator, Δn(x,t) = Σi ∂²n(x,t)/∂xi². Basic models of diffusion: Onsager's equations for multicomponent diffusion and thermodiffusion Fick's law describes diffusion of an admixture in a medium. The concentration of this admixture should be small and the gradient of this concentration should also be small. The driving force of diffusion in Fick's law is the antigradient of concentration, −∇n. In 1931, Lars Onsager included the multicomponent transport processes in the general context of linear non-equilibrium thermodynamics. For multi-component transport, Ji = Σj Lij Xj, where Ji is the flux of the ith physical quantity (component) and Xj is the jth thermodynamic force. Basic models of diffusion: The thermodynamic forces for the transport processes were introduced by Onsager as the space gradients of the derivatives of the entropy density s (he used the term "force" in quotation marks or "driving force"): Xi = ∇(∂s(n)/∂ni), where ni are the "thermodynamic coordinates". Basic models of diffusion: For the heat and mass transfer one can take n0 = u (the density of internal energy) and ni the concentration of the ith component. The corresponding driving forces are the space vectors X0 = ∇(1/T), Xi = −∇(μi/T) (i > 0), because ds = (1/T)du − Σi≥1 (μi/T)dni, where T is the absolute temperature and μi is the chemical potential of the ith component. It should be stressed that the separate diffusion equations describe the mixing or mass transport without bulk motion. Therefore, the terms with variation of the total pressure are neglected. This is possible for diffusion of small admixtures and for small gradients. Basic models of diffusion: For the linear Onsager equations, we must take the thermodynamic forces in the linear approximation near equilibrium: Xi = Σk≥0 (∂²s(n)/∂ni∂nk)|n=n* ∇nk, where the derivatives of s are calculated at equilibrium n*. The matrix of the kinetic coefficients Lij should be symmetric (Onsager reciprocal relations) and positive definite (for the entropy growth). The transport equations are ∂ni/∂t = −div Ji = −Σj≥0 Lij div Xj = Σk≥0 [−Σj≥0 Lij (∂²s(n)/∂nj∂nk)|n=n*] Δnk. Basic models of diffusion: Here, all the indexes i, j, k = 0, 1, 2, ... are related to the internal energy (0) and various components. The expression in the square brackets is the matrix Dik of the diffusion (i,k > 0), thermodiffusion (i > 0, k = 0 or k > 0, i = 0) and thermal conductivity (i = k = 0) coefficients. Basic models of diffusion: Under isothermal conditions T = constant. The relevant thermodynamic potential is the free energy (or the free entropy). The thermodynamic driving forces for the isothermal diffusion are antigradients of chemical potentials, −(1/T)∇μj, and the matrix of diffusion coefficients is Dik = (1/T) Σj≥1 Lij (∂μj(n,T)/∂nk)|n=n* (i,k > 0). Basic models of diffusion: There is intrinsic arbitrariness in the definition of the thermodynamic forces and kinetic coefficients because they are not measurable separately and only their combinations Σj Lij Xj can be measured. For example, in the original work of Onsager the thermodynamic forces include an additional multiplier T, whereas in the Course of Theoretical Physics this multiplier is omitted but the sign of the thermodynamic forces is opposite. All these changes are supplemented by the corresponding changes in the coefficients and do not affect the measurable quantities.
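Fick's second law stated at the start of this passage is straightforward to integrate numerically. Below is a hedged sketch of an explicit finite-difference scheme for ∂n/∂t = D ∂²n/∂x² with constant D, an assumed point-source initial condition, and no-flux boundaries; note the standard stability requirement D·Δt/Δx² ≤ 1/2:

```python
# A minimal explicit finite-difference sketch of Fick's second law,
# dn/dt = D * d2n/dx2, with constant D and no-flux boundaries.
# Grid, D, and the initial spike are all assumed for illustration.
D, dx, dt = 1.0, 1.0, 0.2          # D*dt/dx**2 = 0.2 <= 0.5 (stability)
n = [0.0] * 101
n[50] = 100.0                      # impulsive point source in the middle

for _ in range(500):
    lap = [0.0] * 101
    for i in range(1, 100):
        lap[i] = n[i - 1] - 2 * n[i] + n[i + 1]
    lap[0], lap[100] = n[1] - n[0], n[99] - n[100]   # no-flux ends
    n = [ni + D * dt / dx**2 * li for ni, li in zip(n, lap)]

print(f"total amount conserved: {sum(n):.1f}")   # stays 100.0
print(f"peak after spreading:   {max(n):.2f}")
```

The discrete no-flux boundary makes the scheme exactly conservative, mirroring the zero-flux boundary condition (J, ν) = 0 mentioned earlier.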
Basic models of diffusion: Nondiagonal diffusion must be nonlinear The formalism of linear irreversible thermodynamics (Onsager) generates the systems of linear diffusion equations in the form ∂ci/∂t = Σj Dij Δcj. Basic models of diffusion: If the matrix of diffusion coefficients is diagonal, then this system of equations is just a collection of decoupled Fick's equations for various components. Assume that diffusion is non-diagonal, for example, D12 ≠ 0, and consider the state with c2 = ⋯ = cn = 0. At this state, ∂c2/∂t = D12 Δc1. If D12 Δc1(x) < 0 at some points, then c2(x) becomes negative at these points in a short time. Therefore, linear non-diagonal diffusion does not preserve positivity of concentrations. Non-diagonal equations of multicomponent diffusion must be non-linear. Basic models of diffusion: Einstein's mobility and Teorell formula The Einstein relation (kinetic theory) connects the diffusion coefficient and the mobility (the ratio of the particle's terminal drift velocity to an applied force). For charged particles: D = μkBT/q, where D is the diffusion constant, μ is the "mobility", kB is the Boltzmann constant, T is the absolute temperature, and q is the elementary charge, that is, the charge of one electron. Basic models of diffusion: Below, to combine in the same formula the chemical potential μ and the mobility, we use for mobility the notation m. The mobility-based approach was further applied by T. Teorell. In 1935, he studied the diffusion of ions through a membrane. He formulated the essence of his approach in the formula: the flux is equal to mobility × concentration × force per gram-ion. This is the so-called Teorell formula. The term "gram-ion" ("gram-particle") is used for a quantity of a substance that contains the Avogadro number of ions (particles). The common modern term is mole. Basic models of diffusion: The force under isothermal conditions consists of two parts: the diffusion force caused by the concentration gradient, −RT∇(ln(n/n^eq)), and the electrostatic force caused by the electric potential gradient, q∇φ. Here R is the gas constant, T is the absolute temperature, n is the concentration, the equilibrium concentration is marked by a superscript "eq", q is the charge and φ is the electric potential. Basic models of diffusion: The simple but crucial difference between the Teorell formula and the Onsager laws is the concentration factor in the Teorell expression for the flux. In the Einstein–Teorell approach, if for the finite force the concentration tends to zero then the flux also tends to zero, whereas the Onsager equations violate this simple and physically obvious rule. The general formulation of the Teorell formula for non-perfect systems under isothermal conditions is J = m exp((μ − μ0)/RT)(−∇μ + (external force per mole)), where μ is the chemical potential and μ0 is the standard value of the chemical potential. The expression a = exp((μ − μ0)/RT) is the so-called activity. It measures the "effective concentration" of a species in a non-ideal mixture. In this notation, the Teorell formula for the flux has a very simple form J = ma(−∇μ + (external force per mole)). The standard derivation of the activity includes a normalization factor and for small concentrations a = n/n⊖ + o(n/n⊖), where n⊖ is the standard concentration. Therefore, this formula for the flux describes the flux of the normalized dimensionless quantity n/n⊖: ∂(n/n⊖)/∂t = ∇⋅[ma(−∇μ + (external force per mole))]. Fluctuation-dissipation theorem The fluctuation-dissipation theorem based on the Langevin equation was developed to extend the Einstein model to the ballistic time scale.
According to Langevin, the equation is based on Newton's second law of motion as m d²x/dt² = −(1/μ)(dx/dt) + F(t), where x is the position, μ is the mobility of the particle in the fluid or gas (which can be calculated using the Einstein relation (kinetic theory)), m is the mass of the particle, F is the random force applied to the particle, and t is time. Solving this equation, one obtains the time-dependent diffusion constant in the long-time limit and when the particle is significantly denser than the surrounding fluid: D(t) = μkBT(1 − e^(−t/(mμ))), where kB is the Boltzmann constant and T is the absolute temperature. Basic models of diffusion: Teorell formula for multicomponent diffusion The Teorell formula in combination with Onsager's definition of the diffusion force gives Ji = mi ai Σj Lij Xj, where mi is the mobility of the ith component, ai is its activity, Lij is the matrix of the coefficients, and Xj is the thermodynamic diffusion force, Xj = −∇(μj/T). For isothermal perfect systems, Xj = −R∇nj/nj. Therefore, the Einstein–Teorell approach gives the following multicomponent generalization of Fick's law for multicomponent diffusion: ∂ni/∂t = Σj ∇⋅(Dij (ni/nj) ∇nj), where Dij is the matrix of coefficients. The Chapman–Enskog formulas for diffusion in gases include exactly the same terms. Earlier, such terms were introduced in the Maxwell–Stefan diffusion equation. Basic models of diffusion: Jumps on the surface and in solids Diffusion of reagents on the surface of a catalyst may play an important role in heterogeneous catalysis. The model of diffusion in the ideal monolayer is based on jumps of the reagents to the nearest free places. This model was used for the oxidation of CO on Pt under low gas pressure. The system includes several reagents A1, A2, …, Am on the surface. Their surface concentrations are c1, c2, …, cm. The surface is a lattice of adsorption places. Each reagent molecule fills a place on the surface. Some of the places are free. The concentration of the free places is z = c0. The sum of all ci (including free places) is constant, the density of adsorption places b. The jump model gives for the diffusion flux of Ai (i = 1, ..., m): Ji = −Di[z∇ci − ci∇z]. The corresponding diffusion equation is ∂ci/∂t = −div Ji = Di[zΔci − ciΔz]. Due to the conservation law, z = b − Σi=1..m ci, and we have a system of m diffusion equations. For one component we get Fick's law and linear equations because (b − c)∇c − c∇(b − c) = b∇c. For two and more components the equations are nonlinear. Basic models of diffusion: If all particles can exchange their positions with their closest neighbours then a simple generalization gives Ji = −Σj Dij[cj∇ci − ci∇cj] and ∂ci/∂t = Σj Dij[cjΔci − ciΔcj], where Dij = Dji ≥ 0 is a symmetric matrix of coefficients that characterize the intensities of jumps. The free places (vacancies) should be considered as special "particles" with concentration c0. Various versions of these jump models are also suitable for simple diffusion mechanisms in solids. Basic models of diffusion: Diffusion in porous media For diffusion in porous media the basic equations are (if the porosity φ is constant): J = −φD∇(n^m) and ∂n/∂t = DΔ(n^m), where D is the diffusion coefficient, φ is the porosity, n is the concentration, and m > 0 (usually m > 1; the case m = 1 corresponds to Fick's law). Basic models of diffusion: Care must be taken to properly account for the porosity (φ) of the porous medium in both the flux terms and the accumulation terms.
For example, as the porosity goes to zero, the molar flux in the porous medium goes to zero for a given concentration gradient. Upon applying the divergence of the flux, the porosity terms cancel out and the second equation above is formed. Basic models of diffusion: For diffusion of gases in porous media this equation is the formalization of Darcy's law: the volumetric flux of a gas in the porous media is q = −(k/μ)∇p, where k is the permeability of the medium, μ is the viscosity and p is the pressure. The advective molar flux is given as J = nq, and for p ∼ n^γ Darcy's law gives the equation of diffusion in porous media with m = γ + 1. In porous media, the average linear velocity (v) is related to the volumetric flux as v = q/φ. Combining the advective molar flux with the diffusive flux gives the advection dispersion equation ∂n/∂t = DΔ(n^m) − v⋅∇(n^m). For underground water infiltration, the Boussinesq approximation gives the same equation with m = 2. For plasma with a high level of radiation, the Zeldovich–Raizer equation gives m > 4 for the heat transfer. Diffusion in physics: Diffusion coefficient in kinetic theory of gases The diffusion coefficient D is the coefficient in Fick's first law J = −D ∂n/∂x, where J is the diffusion flux (amount of substance) per unit area per unit time, n (for ideal mixtures) is the concentration, and x is the position [length]. Consider two gases with molecules of the same diameter d and mass m (self-diffusion). In this case, the elementary mean free path theory of diffusion gives for the diffusion coefficient D = (1/3)ℓvT = (2/3)√(kB³/(π³m)) · T^(3/2)/(Pd²), where kB is the Boltzmann constant, T is the temperature, P is the pressure, ℓ is the mean free path, and vT is the mean thermal speed: ℓ = kBT/(√2 πd²P), vT = √(8kBT/(πm)). Diffusion in physics: We can see that the diffusion coefficient in the mean free path approximation grows with T as T^(3/2) and decreases with P as 1/P. If we use for P the ideal gas law P = RnT with the total concentration n, then we can see that for given concentration n the diffusion coefficient grows with T as T^(1/2) and for given temperature it decreases with the total concentration as 1/n. Diffusion in physics: For two different gases, A and B, with molecular masses mA, mB and molecular diameters dA, dB, the mean free path estimate of the diffusion coefficient of A in B and B in A is DAB = (2/3)√(kB³/π³)·√(1/(2mA) + 1/(2mB))·4T^(3/2)/(P(dA + dB)²). The theory of diffusion in gases based on Boltzmann's equation In Boltzmann's kinetics of the mixture of gases, each gas has its own distribution function, fi(x,c,t), where t is the time moment, x is position and c is the velocity of a molecule of the ith component of the mixture. Each component has its mean velocity Ci(x,t) = (1/ni)∫c c fi(x,c,t) dc. If the velocities Ci(x,t) do not coincide then there exists diffusion. Diffusion in physics: In the Chapman–Enskog approximation, all the distribution functions are expressed through the densities of the conserved quantities: individual concentrations of particles, ni(x,t) = ∫c fi(x,c,t) dc (particles per volume), density of momentum Σi mi ni Ci(x,t) (mi is the ith particle mass), and density of kinetic energy. The kinetic temperature T and pressure P are defined in 3D space as (3/2)kBT = (1/n) Σi ∫c (mi(c − Ci(x,t))²/2) fi(x,c,t) dc and P = kBnT, where n = Σi ni is the total density.
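To put numbers to the mean-free-path formula D = (1/3)ℓvT just given, here is a hedged order-of-magnitude sketch for a nitrogen-like gas at room conditions; the kinetic diameter d is an assumed textbook-scale value, so the output is illustrative rather than reference data:

```python
import math

# Hedged numeric check of the mean-free-path estimate D = (1/3) * l * vT
# for a nitrogen-like gas at 300 K and 1 atm (assumed conditions).
kB = 1.380649e-23        # Boltzmann constant, J/K
T, P = 300.0, 101325.0   # temperature (K) and pressure (Pa), assumed
m = 28.0 * 1.66054e-27   # molecular mass of N2, kg
d = 3.7e-10              # assumed kinetic diameter, m

mfp = kB * T / (math.sqrt(2) * math.pi * d**2 * P)   # mean free path
vT = math.sqrt(8 * kB * T / (math.pi * m))           # mean thermal speed
D = mfp * vT / 3.0

print(f"mean free path ~ {mfp:.2e} m")    # tens of nanometres
print(f"thermal speed  ~ {vT:.0f} m/s")   # a few hundred m/s
print(f"D ~ {D:.1e} m^2/s")               # ~1e-5 m^2/s, the textbook scale
```

The ~10⁻⁵ m²/s result matches the order of magnitude usually quoted for gas-phase diffusivities at atmospheric pressure.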
Diffusion in physics: For two gases, the difference between the velocities, C1 − C2, is given by the expression C1 − C2 = −(n²/(n1n2)) D12 {∇(n1/n) + [n1n2(m2 − m1)/(Pn(m1n1 + m2n2))]∇P − [m1n1m2n2/(P(m1n1 + m2n2))](F1 − F2) + kT(1/T)∇T}, where Fi is the force applied to the molecules of the ith component and kT is the thermodiffusion ratio. Diffusion in physics: The coefficient D12 is positive. This is the diffusion coefficient. The four terms in the formula for C1 − C2 describe four main effects in the diffusion of gases: ∇(n1/n) describes the flux of the first component from the areas with a high ratio n1/n to the areas with lower values of this ratio (and, analogously, the flux of the second component from high n2/n to low n2/n, because n2/n = 1 − n1/n); [n1n2(m2 − m1)/(Pn(m1n1 + m2n2))]∇P describes the flux of the heavier molecules to the areas with higher pressure and the lighter molecules to the areas with lower pressure; this is barodiffusion; [m1n1m2n2/(P(m1n1 + m2n2))](F1 − F2) describes diffusion caused by the difference of the forces applied to molecules of different types. For example, in the Earth's gravitational field the heavier molecules should go down, or in an electric field the charged molecules should move, until this effect is equilibrated by the sum of the other terms. This effect should not be confused with barodiffusion caused by the pressure gradient. Diffusion in physics: kT(1/T)∇T describes thermodiffusion, the diffusion flux caused by the temperature gradient. All these effects are called diffusion because they describe the differences between velocities of different components in the mixture. Therefore, these effects cannot be described as bulk transport and differ from advection or convection. In the first approximation, the result for rigid spheres reproduces the T^(3/2) dependence of the elementary mean free path theory, while for a repulsing force proportional to r^−ν the temperature exponent is different. Diffusion in physics: The number A1(ν) appearing for the power-law repulsion is defined by quadratures (formulas (3.7), (3.9), Ch. 10 of the classical Chapman and Cowling book). We can see that the dependence on T for the rigid spheres is the same as for the simple mean free path theory, but for the power repulsion laws the exponent is different. The dependence on the total concentration n for a given temperature always has the same character, 1/n. Diffusion in physics: In applications to gas dynamics, the diffusion flux and the bulk flow should be joined in one system of transport equations. The bulk flow describes the mass transfer. Its velocity V is the mass average velocity. It is defined through the momentum density and the mass concentrations: V = Σi ρiCi/ρ, where ρi = mi ni is the mass concentration of the ith species and ρ = Σi ρi is the mass density. Diffusion in physics: By definition, the diffusion velocity of the ith component is vi = Ci − V, so that Σi ρivi = 0. The mass transfer of the ith component is described by the continuity equation ∂ρi/∂t + ∇⋅(ρiV) + ∇⋅(ρivi) = Wi, where Wi is the net mass production rate in chemical reactions and Σi Wi = 0. In these equations, the term ∇⋅(ρiV) describes advection of the ith component and the term ∇⋅(ρivi) represents diffusion of this component. Diffusion in physics: In 1948, Wendell H. Furry proposed to use the form of the diffusion rates found in kinetic theory as a framework for the new phenomenological approach to diffusion in gases. This approach was developed further by F.A. Williams and S.H. Lam. For the diffusion velocities in multicomponent gases (N components) they used vi = −(Σj=1..N Dij dj + Di(T)∇(ln T)); dj = ∇Xj + (Xj − Yj)∇(ln P) + gj; gj = (ρ/P)(Yj Σk=1..N Yk(fk − fj)).
Here, Dij is the diffusion coefficient matrix, Di(T) is the thermal diffusion coefficient, fi is the body force per unit mass acting on the ith species, Xi = Pi/P is the partial pressure fraction of the ith species (and Pi is the partial pressure), Yi = ρi/ρ is the mass fraction of the ith species, and Σi Xi = Σi Yi = 1. Diffusion in physics: Diffusion of electrons in solids When the density of electrons in solids is not in equilibrium, diffusion of electrons occurs. For example, when a bias is applied to two ends of a chunk of semiconductor, or a light shines on one end, electrons diffuse from high density regions (center) to low density regions (two ends), forming a gradient of electron density. This process generates current, referred to as diffusion current. Diffusion in physics: Diffusion current can also be described by Fick's first law J = −D ∂n/∂x, where J is the diffusion current density (amount of substance) per unit area per unit time, n (for ideal mixtures) is the electron density, and x is the position [length]. Diffusion in physics: Diffusion in geophysics Analytical and numerical models that solve the diffusion equation for different initial and boundary conditions have been popular for studying a wide variety of changes to the Earth's surface. Diffusion has been used extensively in erosion studies of hillslope retreat, bluff erosion, fault scarp degradation, wave-cut terrace/shoreline retreat, alluvial channel incision, coastal shelf retreat, and delta progradation. Although the Earth's surface is not literally diffusing in many of these cases, the process of diffusion effectively mimics the holistic changes that occur over decades to millennia. Diffusion models may also be used to solve inverse boundary value problems in which some information about the depositional environment is known from paleoenvironmental reconstruction and the diffusion equation is used to figure out the sediment influx and time series of landform changes. Diffusion in physics: Dialysis Dialysis works on the principles of the diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane. Diffusion is a property of substances in water; substances in water tend to move from an area of high concentration to an area of low concentration. Blood flows by one side of a semi-permeable membrane, and a dialysate, or special dialysis fluid, flows by the opposite side. A semipermeable membrane is a thin layer of material that contains holes of various sizes, or pores. Smaller solutes and fluid pass through the membrane, but the membrane blocks the passage of larger substances (for example, red blood cells and large proteins). This replicates the filtering process that takes place in the kidneys when the blood enters the kidneys and the larger substances are separated from the smaller ones in the glomerulus. Random walk (random motion): One common misconception is that individual atoms, ions or molecules move randomly, which they do not. An ion watched in isolation appears to move "randomly" in the absence of other ions; observed together with its neighbours, however, this motion is seen to be the result of "collisions" with other ions. As such, the movement of a single atom, ion, or molecule within a mixture just appears random when viewed in isolation.
The movement of a substance within a mixture by "random walk" is governed by the kinetic energy within the system, which can be affected by changes in concentration, pressure or temperature. (This is a classical description. At smaller scales, quantum effects will be non-negligible, in general. Thus, the study of the movement of a single atom becomes more subtle, since particles at such small scales are described by probability amplitudes rather than deterministic measures of position and velocity.) Separation of diffusion from convection in gases While Brownian motion of multi-molecular mesoscopic particles (like the pollen grains studied by Brown) is observable under an optical microscope, molecular diffusion can only be probed in carefully controlled experimental conditions. Since Graham's experiments, it has been well known that avoiding convection is necessary, and this may be a non-trivial task. Random walk (random motion): Under normal conditions, molecular diffusion dominates only at lengths in the nanometre-to-millimetre range. On larger length scales, transport in liquids and gases is normally due to another transport phenomenon, convection. To separate diffusion in these cases, special efforts are needed. Random walk (random motion): Therefore, some often cited examples of diffusion are wrong: If cologne is sprayed in one place, it can soon be smelled in the entire room, but a simple calculation shows that this cannot be due to diffusion. Convective motion persists in the room because of temperature inhomogeneity. If ink is dropped in water, one usually observes an inhomogeneous evolution of the spatial distribution, which clearly indicates convection (caused, in particular, by this dropping). In contrast, heat conduction through solid media is an everyday occurrence (for example, a metal spoon partly immersed in a hot liquid). This explains why the diffusion of heat was explained mathematically before the diffusion of mass. Random walk (random motion): Other types of diffusion Anisotropic diffusion, also known as the Perona–Malik equation, enhances high gradients Atomic diffusion, in solids Bohm diffusion, spread of plasma across magnetic fields Eddy diffusion, in coarse-grained description of turbulent flow Effusion of a gas through small holes Electronic diffusion, resulting in an electric current called the diffusion current Facilitated diffusion, present in some organisms Gaseous diffusion, used for isotope separation Heat equation, diffusion of thermal energy Itō diffusion, mathematisation of Brownian motion, continuous stochastic process. Random walk (random motion): Knudsen diffusion of gas in long pores with frequent wall collisions Lévy flight Molecular diffusion, diffusion of molecules from more dense to less dense areas Momentum diffusion, e.g., the diffusion of the hydrodynamic velocity field Photon diffusion Plasma diffusion Random walk, model for diffusion Reverse diffusion, against the concentration gradient, in phase separation Rotational diffusion, random reorientation of molecules Surface diffusion, diffusion of adparticles on a surface Taxis, an animal's directional movement activity in response to a stimulus Kinesis, an animal's non-directional movement activity in response to a stimulus Trans-cultural diffusion, diffusion of cultural traits across geographical area Turbulent diffusion, transport of mass, heat, or momentum within a turbulent fluid
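The "simple calculation" invoked for the cologne example above can be made explicit. Using the diffusion length 2√(Dt) from earlier in the article and assumed order-of-magnitude values for the room size and vapour diffusivity, the sketch below estimates the time diffusion alone would need to carry scent across a room:

```python
# Hedged sketch of the cologne estimate: setting the 1D diffusion length
# 2*sqrt(D*t) equal to the room size L gives t = L**2 / (4*D).
# Both L and D are assumed order-of-magnitude values.
L = 5.0        # room size, m (assumed)
D = 1.0e-5     # perfume-vapour diffusivity in air, m^2/s (assumed)

t = L**2 / (4 * D)
print(f"t ~ {t:.0f} s ~ {t / 86400:.1f} days")
# ~7 days: vastly longer than the seconds actually observed, so it is
# convection, not diffusion, that carries the scent across the room.
```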
**Bass–Serre theory** Bass–Serre theory: Bass–Serre theory is a part of the mathematical subject of group theory that deals with analyzing the algebraic structure of groups acting by automorphisms on simplicial trees. The theory relates group actions on trees with decomposing groups as iterated applications of the operations of free product with amalgamation and HNN extension, via the notion of the fundamental group of a graph of groups. Bass–Serre theory can be regarded as a one-dimensional version of orbifold theory. History: Bass–Serre theory was developed by Jean-Pierre Serre in the 1970s and formalized in Trees, Serre's 1977 monograph (developed in collaboration with Hyman Bass) on the subject. Serre's original motivation was to understand the structure of certain algebraic groups whose Bruhat–Tits buildings are trees. However, the theory quickly became a standard tool of geometric group theory and geometric topology, particularly the study of 3-manifolds. Subsequent work of Bass contributed substantially to the formalization and development of basic tools of the theory, and currently the term "Bass–Serre theory" is widely used to describe the subject. History: Mathematically, Bass–Serre theory builds on exploiting and generalizing the properties of two older group-theoretic constructions: free product with amalgamation and HNN extension. However, unlike the traditional algebraic study of these two constructions, Bass–Serre theory uses the geometric language of covering theory and fundamental groups. Graphs of groups, which are the basic objects of Bass–Serre theory, can be viewed as one-dimensional versions of orbifolds. History: Apart from Serre's book, the basic treatment of Bass–Serre theory is available in the article of Bass, the article of G. Peter Scott and C. T. C. Wall, and the books of Allen Hatcher, Gilbert Baumslag, Warren Dicks and Martin Dunwoody, and Daniel E. Cohen. Basic set-up: Graphs in the sense of Serre Serre's formalism of graphs is slightly different from the standard formalism from graph theory. Here a graph A consists of a vertex set V, an edge set E, an edge reversal map E → E, e ↦ e¯, such that e¯ ≠ e and e¯¯ = e for every e in E, and an initial vertex map o : E → V. Thus in A every edge e comes equipped with its formal inverse e¯. The vertex o(e) is called the origin or the initial vertex of e, and the vertex o(e¯) is called the terminus of e and is denoted t(e). Both loop-edges (that is, edges e such that o(e) = t(e)) and multiple edges are allowed. An orientation on A is a partition of E into the union of two disjoint subsets E+ and E− so that for every edge e exactly one of the edges from the pair e, e¯ belongs to E+ and the other belongs to E−. Basic set-up: Graphs of groups A graph of groups A consists of the following data: A connected graph A; an assignment of a vertex group Av to every vertex v of A; an assignment of an edge group Ae to every edge e of A so that we have Ae = Ae¯ for every e ∈ E. Basic set-up: Boundary monomorphisms αe : Ae → Ao(e) for all edges e of A, so that each αe is an injective group homomorphism. For every e ∈ E the map αe¯ : Ae → At(e) is also denoted by ωe. Fundamental group of a graph of groups There are two equivalent definitions of the notion of the fundamental group of a graph of groups: the first is a direct algebraic definition via an explicit group presentation (as a certain iterated application of amalgamated free products and HNN extensions), and the second uses the language of groupoids.
Basic set-up: The algebraic definition is easier to state: First, choose a spanning tree T in A. The fundamental group of A with respect to T, denoted π1(A, T), is defined as the quotient of the free product (∗v∈V Av) ∗ F(E), where F(E) is a free group with free basis E, subject to the following relations: e¯αe(g)e = αe¯(g) for every e in E and every g ∈ Ae (the so-called Bass–Serre relation); ee¯ = 1 for every e in E. Basic set-up: e = 1 for every edge e of the spanning tree T. There is also a notion of the fundamental group of A with respect to a base-vertex v in V, denoted π1(A, v), which is defined using the formalism of groupoids. It turns out that for every choice of a base-vertex v and every spanning tree T in A the groups π1(A, T) and π1(A, v) are naturally isomorphic. Basic set-up: The fundamental group of a graph of groups has a natural topological interpretation as well: it is the fundamental group of a graph of spaces whose vertex spaces and edge spaces have the fundamental groups of the vertex groups and edge groups, respectively, and whose gluing maps induce the homomorphisms of the edge groups into the vertex groups. One can therefore take this as a third definition of the fundamental group of a graph of groups. Basic set-up: Fundamental groups of graphs of groups as iterations of amalgamated products and HNN-extensions The group G = π1(A, T) defined above admits an algebraic description in terms of iterated amalgamated free products and HNN extensions. First, form a group B as a quotient of the free product (∗v∈V Av) ∗ F(E+T) subject to the relations e−1αe(g)e = ωe(g) for every e in E+T and every g ∈ Ae, and e = 1 for every e in E+T. This presentation can be rewritten as B = (∗v∈V Av)/ncl{αe(g)ωe(g)−1, where e ∈ E+T, g ∈ Ae}, which shows that B is an iterated amalgamated free product of the vertex groups Av. Basic set-up: Then the group G = π1(A, T) has the presentation G = ⟨B, E+(A−T) | e−1αe(g)e = ωe(g), where e ∈ E+(A−T), g ∈ Ae⟩, which shows that G = π1(A, T) is a multiple HNN extension of B with stable letters {e | e ∈ E+(A−T)}. Splittings An isomorphism between a group G and the fundamental group of a graph of groups is called a splitting of G. If the edge groups in the splitting come from a particular class of groups (e.g. finite, cyclic, abelian, etc.), the splitting is said to be a splitting over that class. Thus a splitting where all edge groups are finite is called a splitting over finite groups. Basic set-up: Algebraically, a splitting of G with trivial edge groups corresponds to a free product decomposition G = (∗Av) ∗ F(X), where F(X) is a free group with free basis X = E+(A−T) consisting of all positively oriented edges (with respect to some orientation on A) in the complement of some spanning tree T of A. Basic set-up: The normal forms theorem Let g be an element of G = π1(A, T) represented as a product of the form g = a0e1a1…enan, where e1, ..., en is a closed edge-path in A with the vertex sequence v0, v1, ..., vn = v0 (that is, v0 = o(e1), vn = t(en) and vi = t(ei) = o(ei+1) for 0 < i < n) and where ai ∈ Avi for i = 0, ..., n. Basic set-up: Suppose that g = 1 in G. Then either n = 0 and a0 = 1 in Av0, or n > 0 and there is some 0 < i < n such that ei+1 = e¯i and ai ∈ ωei(Aei). The normal forms theorem immediately implies that the canonical homomorphisms Av → π1(A, T) are injective, so that we can think of the vertex groups Av as subgroups of G. Basic set-up: Higgins has given a nice version of the normal form using the fundamental groupoid of a graph of groups. This avoids choosing a base point or tree, and has been exploited by Moore.
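As a concrete check of the presentation π1(A, T) above, consider a graph of groups with a single non-loop edge, vertex groups Z/4 and Z/6, and edge group Z/2 embedded as the order-2 subgroups; the generator names below are our own. Since the whole graph is a spanning tree, the edge generator is killed and the standard amalgam SL(2, Z) ≅ Z/4 ∗Z/2 Z/6 appears:

```latex
% Worked instance of \pi_1(\mathbb{A},T) for a one-edge graph of groups.
% Vertex groups A_u = \langle a \mid a^4 \rangle, A_v = \langle b \mid b^6 \rangle,
% edge group A_e = \langle c \mid c^2 \rangle with \alpha_e(c) = a^2, \omega_e(c) = b^3.
% Since T = A, the edge generator e dies, leaving the amalgamated free product:
\[
\pi_1(\mathbb{A},T) \;=\; \langle a, b \mid a^4 = 1,\; b^6 = 1,\; a^2 = b^3 \rangle
\;\cong\; \mathbb{Z}/4 \ast_{\mathbb{Z}/2} \mathbb{Z}/6 \;\cong\; SL(2,\mathbb{Z}).
\]
```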
Bass–Serre covering trees: To every graph of groups A, with a specified choice of a base-vertex, one can associate a Bass–Serre covering tree A~, which is a tree that comes equipped with a natural group action of the fundamental group π1(A, v) without edge-inversions. Moreover, the quotient graph A~/π1(A, v) is isomorphic to A. Bass–Serre covering trees: Similarly, if G is a group acting on a tree X without edge-inversions (that is, so that for every edge e of X and every g in G we have ge ≠ e¯), one can define the natural notion of a quotient graph of groups A. The underlying graph A of A is the quotient graph X/G. The vertex groups of A are isomorphic to vertex stabilizers in G of vertices of X and the edge groups of A are isomorphic to edge stabilizers in G of edges of X. Bass–Serre covering trees: Moreover, if X was the Bass–Serre covering tree of a graph of groups A and if G = π1(A, v) then the quotient graph of groups for the action of G on X can be chosen to be naturally isomorphic to A. Fundamental theorem of Bass–Serre theory: Let G be a group acting on a tree X without inversions. Let A be the quotient graph of groups and let v be a base-vertex in A. Then G is isomorphic to the group π1(A, v) and there is an equivariant isomorphism between the tree X and the Bass–Serre covering tree A~. More precisely, there is a group isomorphism σ : G → π1(A, v) and a graph isomorphism j : X → A~ such that for every g in G, for every vertex x of X and for every edge e of X we have j(gx) = σ(g) j(x) and j(ge) = σ(g) j(e). Fundamental theorem of Bass–Serre theory: This result is also known as the structure theorem. One of the immediate consequences is the classic Kurosh subgroup theorem describing the algebraic structure of subgroups of free products. Examples: Amalgamated free product Consider a graph of groups A consisting of a single non-loop edge e (together with its formal inverse e¯) with two distinct end-vertices u = o(e) and v = t(e), vertex groups H = Au, K = Av, an edge group C = Ae and the boundary monomorphisms α = αe : C → H, ω = ωe : C → K. Then T = A is a spanning tree in A and the fundamental group π1(A, T) is isomorphic to the amalgamated free product G = H ∗C K = (H ∗ K)/ncl{α(c)ω(c)−1, c ∈ C}. Examples: In this case the Bass–Serre tree X = A~ can be described as follows. The vertex set of X is the set of cosets VX = {gK : g ∈ G} ⊔ {gH : g ∈ G}. Two vertices gK and fH are adjacent in X whenever there exists k ∈ K such that fH = gkH (or, equivalently, whenever there is h ∈ H such that gK = fhK). The G-stabilizer of every vertex of X of type gK is equal to gKg−1 and the G-stabilizer of every vertex of X of type gH is equal to gHg−1. For an edge [gH, ghK] of X its G-stabilizer is equal to ghα(C)h−1g−1. For every c ∈ C and h ∈ H the edges [gH, ghK] and [gH, ghα(c)K] are equal, and the degree of the vertex gH in X is equal to the index [H : α(C)]. Similarly, every vertex of type gK has degree [K : ω(C)] in X. Examples: HNN extension Let A be a graph of groups consisting of a single loop-edge e (together with its formal inverse e¯), a single vertex v = o(e) = t(e), a vertex group B = Av, an edge group C = Ae and the boundary monomorphisms α = αe : C → B, ω = ωe : C → B. Then T = v is a spanning tree in A and the fundamental group π1(A, T) is isomorphic to the HNN extension G = ⟨B, e | e−1α(c)e = ω(c), c ∈ C⟩. Examples: with the base group B, stable letter e and the associated subgroups H = α(C), K = ω(C) in B. The composition ϕ = ω∘α−1 : H → K is an isomorphism and the above HNN-extension presentation of G can be rewritten as G = ⟨B, e | e−1he = ϕ(h), h ∈ H⟩.
In this case the Bass–Serre tree X = A~ can be described as follows. The vertex set of X is the set of cosets VX = {gB : g ∈ G}. Examples: Two vertices gB and fB are adjacent in X whenever there exists b in B such that either fB = gbeB or fB = gbe−1B. The G-stabilizer of every vertex of X is conjugate to B in G and the stabilizer of every edge of X is conjugate to H in G. Every vertex of X has degree equal to [B : H] + [B : K]. Examples: A graph with the trivial graph of groups structure Let A be a graph of groups with underlying graph A such that all the vertex and edge groups in A are trivial. Let v be a base-vertex in A. Then π1(A,v) is equal to the fundamental group π1(A,v) of the underlying graph A in the standard sense of algebraic topology and the Bass–Serre covering tree A~ is equal to the standard universal covering space A~ of A. Moreover, the action of π1(A,v) on A~ is exactly the standard action of π1(A,v) on A~ by deck transformations. Basic facts and properties: If A is a graph of groups with a spanning tree T and if G = π1(A, T), then for every vertex v of A the canonical homomorphism from Av to G is injective. If g ∈ G is an element of finite order then g is conjugate in G to an element of finite order in some vertex group Av. If F ≤ G is a finite subgroup then F is conjugate in G to a subgroup of some vertex group Av. If the graph A is finite and all vertex groups Av are finite then the group G is virtually free, that is, G contains a free subgroup of finite index. If A is finite and all the vertex groups Av are finitely generated then G is finitely generated. If A is finite and all the vertex groups Av are finitely presented and all the edge groups Ae are finitely generated then G is finitely presented. Trivial and nontrivial actions: A graph of groups A is called trivial if A = T is already a tree and there is some vertex v of A such that Av = π1(A, A). This is equivalent to the condition that A is a tree and that for every edge e = [u, z] of A (with o(e) = u, t(e) = z) such that u is closer to v than z we have [Az : ωe(Ae)] = 1, that is Az = ωe(Ae). Trivial and nontrivial actions: An action of a group G on a tree X without edge-inversions is called trivial if there exists a vertex x of X that is fixed by G, that is, such that Gx = x. It is known that an action of G on X is trivial if and only if the quotient graph of groups for that action is trivial. Trivial and nontrivial actions: Typically, only nontrivial actions on trees are studied in Bass–Serre theory since trivial graphs of groups do not carry any interesting algebraic information, although trivial actions in the above sense (e.g. actions of groups by automorphisms on rooted trees) may also be interesting for other mathematical reasons. Trivial and nontrivial actions: One of the classic and still important results of the theory is a theorem of Stallings about ends of groups. The theorem states that a finitely generated group has more than one end if and only if this group admits a nontrivial splitting over finite subgroups, that is, if and only if the group admits a nontrivial action without inversions on a tree with finite edge stabilizers. An important general result of the theory states that if G is a group with Kazhdan's property (T) then G does not admit any nontrivial splitting, that is, that any action of G on a tree X without edge-inversions has a global fixed vertex. Hyperbolic length functions: Let G be a group acting on a tree X without edge-inversions. For every g ∈ G put ℓX(g) = min{d(x, gx) | x ∈ VX}.
Then ℓX(g) is called the translation length of g on X. The function ℓX : G → Z, g ↦ ℓX(g), is called the hyperbolic length function or the translation length function for the action of G on X. Basic facts regarding hyperbolic length functions For g ∈ G exactly one of the following holds: (a) ℓX(g) = 0 and g fixes a vertex of X. In this case g is called an elliptic element of G. Hyperbolic length functions: (b) ℓX(g) > 0 and there is a unique bi-infinite embedded line in X, called the axis of g and denoted Lg, which is g-invariant. In this case g acts on Lg by translation of magnitude ℓX(g) and the element g ∈ G is called hyperbolic. If ℓX(G) ≠ 0 then there exists a unique minimal G-invariant subtree XG of X. Moreover, XG is equal to the union of the axes of the hyperbolic elements of G. The length-function ℓX : G → Z is said to be abelian if it is a group homomorphism from G to Z and non-abelian otherwise. Similarly, the action of G on X is said to be abelian if the associated hyperbolic length function is abelian and is said to be non-abelian otherwise. Hyperbolic length functions: In general, an action of G on a tree X without edge-inversions is said to be minimal if there are no proper G-invariant subtrees in X. Hyperbolic length functions: An important fact in the theory says that minimal non-abelian tree actions are uniquely determined by their hyperbolic length functions: Uniqueness theorem Let G be a group with two nonabelian minimal actions without edge-inversions on trees X and Y. Suppose that the hyperbolic length functions ℓX and ℓY on G are equal, that is, ℓX(g) = ℓY(g) for every g ∈ G. Then the actions of G on X and Y are equal in the sense that there exists a graph isomorphism f : X → Y which is G-equivariant, that is, f(gx) = g f(x) for every g ∈ G and every x ∈ VX. Important developments in Bass–Serre theory: Important developments in Bass–Serre theory in the last 30 years include: Various accessibility results for finitely presented groups that bound the complexity (that is, the number of edges) in a graph of groups decomposition of a finitely presented group, where some algebraic or geometric restrictions on the types of groups considered are imposed. These results include: Dunwoody's theorem about accessibility of finitely presented groups stating that for any finitely presented group G there exists a bound on the complexity of splittings of G over finite subgroups (the splittings are required to satisfy a technical assumption of being "reduced"); the Bestvina–Feighn generalized accessibility theorem stating that for any finitely presented group G there is a bound on the complexity of reduced splittings of G over small subgroups (the class of small groups includes, in particular, all groups that do not contain non-abelian free subgroups); acylindrical accessibility results for finitely presented (Sela, Delzant) and finitely generated (Weidmann) groups which bound the complexity of the so-called acylindrical splittings, that is, splittings where for their Bass–Serre covering trees the diameters of fixed subsets of nontrivial elements of G are uniformly bounded. Important developments in Bass–Serre theory: The theory of JSJ-decompositions for finitely presented groups. This theory was motivated by the classic notion of JSJ decomposition in 3-manifold topology and was initiated, in the context of word-hyperbolic groups, by the work of Sela.
JSJ decompositions are splittings of finitely presented groups over some classes of small subgroups (cyclic, abelian, noetherian, etc., depending on the version of the theory) that provide canonical descriptions, in terms of some standard moves, of all splittings of the group over subgroups of the class. There are a number of versions of JSJ-decomposition theories: The initial version of Sela for cyclic splittings of torsion-free word-hyperbolic groups. Important developments in Bass–Serre theory: Bowditch's version of JSJ theory for word-hyperbolic groups (with possible torsion) encoding their splittings over virtually cyclic subgroups. The version of Rips and Sela of JSJ decompositions of torsion-free finitely presented groups encoding their splittings over free abelian subgroups. The version of Dunwoody and Sageev of JSJ decompositions of finitely presented groups over noetherian subgroups. The version of Fujiwara and Papasoglu, also for JSJ decompositions of finitely presented groups over noetherian subgroups. A version of JSJ decomposition theory for finitely presented groups developed by Scott and Swarup. Important developments in Bass–Serre theory: The theory of lattices in automorphism groups of trees. The theory of tree lattices was developed by Bass, Kulkarni and Lubotzky by analogy with the theory of lattices in Lie groups (that is, discrete subgroups of Lie groups of finite co-volume). For a discrete subgroup G of the automorphism group of a locally finite tree X one can define a natural notion of volume for the quotient graph of groups A as vol(A) = Σv∈V 1/|Av|. Important developments in Bass–Serre theory: The group G is called an X-lattice if vol(A) < ∞. The theory of tree lattices turns out to be useful in the study of discrete subgroups of algebraic groups over non-archimedean local fields and in the study of Kac–Moody groups. Development of foldings and Nielsen methods for approximating group actions on trees and analyzing their subgroup structure. The theory of ends and relative ends of groups, particularly various generalizations of Stallings' theorem about groups with more than one end. Quasi-isometric rigidity results for groups acting on trees. Generalizations: There have been several generalizations of Bass–Serre theory: The theory of complexes of groups (see Haefliger, Corson, and Bridson–Haefliger) provides a higher-dimensional generalization of Bass–Serre theory. The notion of a graph of groups is replaced by that of a complex of groups, where groups are assigned to each cell in a simplicial complex, together with monomorphisms between these groups corresponding to face inclusions (these monomorphisms are required to satisfy certain compatibility conditions). One can then define an analog of the fundamental group of a graph of groups for a complex of groups. However, in order for this notion to have good algebraic properties (such as embeddability of the vertex groups in it) and in order for a good analog for the notion of the Bass–Serre covering tree to exist in this context, one needs to require some sort of "non-positive curvature" condition for the complex of groups in question. Generalizations: The theory of isometric group actions on real trees (or R-trees), which are metric spaces generalizing the graph-theoretic notion of a tree. The theory was developed largely in the 1990s, where the Rips machine of Eliyahu Rips on the structure theory of stable group actions on R-trees played a key role (see Bestvina–Feighn).
This structure theory assigns to a stable isometric action of a finitely generated group G a certain "normal form" approximation of that action by a stable action of G on a simplicial tree and hence a splitting of G in the sense of Bass–Serre theory. Group actions on real trees arise naturally in several contexts in geometric topology: for example, as boundary points of the Teichmüller space (every point in the Thurston boundary of the Teichmüller space is represented by a measured geodesic lamination on the surface; this lamination lifts to the universal cover of the surface and a naturally dual object to that lift is an R-tree endowed with an isometric action of the fundamental group of the surface), as Gromov–Hausdorff limits of appropriately rescaled Kleinian group actions, and so on. The use of R-tree machinery provides substantial shortcuts in modern proofs of Thurston's Hyperbolization Theorem for Haken 3-manifolds. Similarly, R-trees play a key role in the study of Culler–Vogtmann's Outer space as well as in other areas of geometric group theory; for example, asymptotic cones of groups often have a tree-like structure and give rise to group actions on real trees. The use of R-trees, together with Bass–Serre theory, is a key tool in the work of Sela on solving the isomorphism problem for (torsion-free) word-hyperbolic groups, Sela's version of the JSJ-decomposition theory and the work of Sela on the Tarski Conjecture for free groups and the theory of limit groups. Generalizations: The theory of group actions on Λ-trees, where Λ is an ordered abelian group (such as R or Z), provides a further generalization of both the Bass–Serre theory and the theory of group actions on R-trees (see Morgan, Alperin–Bass, Chiswell).
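To make the covering-tree construction of the earlier amalgamated free product example concrete, here is a small self-contained sketch; it is our own illustration (with trivial edge group, so the notation departs slightly from the article's). It builds a finite ball of the Bass–Serre tree for G = Z/2 ∗ Z/3 ≅ PSL(2, Z), with vertices the cosets gH and gK for H = ⟨a⟩, K = ⟨b⟩, and checks the tree count #edges = #vertices − 1:

```python
# Sketch of a ball in the Bass-Serre tree of G = Z/2 * Z/3 = <a, b | a^2, b^3>.
# Reduced words alternate the letter 'a' with 'b' or 'B' (standing for b^2).
# Edges of the tree correspond to group elements g, joining gH to gK; since
# H and K intersect trivially, distinct reduced words give distinct edges.

def words(max_len):
    """All alternating (reduced) words of length <= max_len."""
    out, frontier = {""}, {""}
    for _ in range(max_len):
        nxt = set()
        for w in frontier:
            last = w[-1] if w else ""
            if last != "a":
                nxt.add(w + "a")
            if last not in ("b", "B"):
                nxt.update({w + "b", w + "B"})
        out |= nxt
        frontier = nxt
    return out

def coset_H(w):   # gH: g and ga are the same coset, so strip a trailing 'a'
    return ("H", w[:-1] if w.endswith("a") else w)

def coset_K(w):   # gK: strip a trailing 'b' or 'B'
    return ("K", w[:-1] if w and w[-1] in "bB" else w)

edges = {frozenset((coset_H(w), coset_K(w))) for w in words(6)}
verts = {v for e in edges for v in e}

# This ball is a connected subgraph of a tree, hence itself a tree, so it
# has exactly one more vertex than edge: the check below prints "51 50".
print(len(verts), len(edges))
```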
**Mac Mini** Mac Mini: Mac Mini (stylized as Mac mini) is a small form factor desktop computer developed and marketed by Apple Inc. As of 2022, it is positioned between the consumer all-in-one iMac and the professional Mac Studio and Mac Pro as one of four current Mac desktop computers. Since launch, it has shipped without a display, keyboard, and mouse. The machine was initially branded as "BYODKM" (Bring Your Own Display, Keyboard, and Mouse) as a strategic pitch to encourage users to switch from Windows and Linux computers. Mac Mini: In January 2005, the original Mac Mini was introduced with the PowerPC G4 CPU. In February 2006, Apple announced a new Intel Core Solo model, the first with an Intel processor. A thinner unibody redesign, unveiled in June 2010, added an HDMI port, and was more readily positioned as a home theater device and an alternative to the Apple TV. Mac Mini: The 2018 Mac Mini has Thunderbolt and an Intel Core i5 or i7 CPU, and swapped the case's default silver finish for space gray. This model also has solid-state storage and replaces most of the data ports with USB-C sockets. The Apple silicon Mac Mini was introduced in November 2020 in the original silver style; the 2018 space gray model remained available as a high-end model with more RAM options. Mac Mini: A server version of the Mac Mini, bundled with the Server edition of the OS X operating system, was offered from 2009 to 2014. The Mac Mini received generally tepid reviews except for the Apple silicon model, which was praised for its compatibility, performance, processor, price, and power efficiency, though it drew occasional criticism for its ports, speaker, integrated graphics, non-user-upgradable RAM and storage, and the high cost of associated accessories and displays. Form and design: The Mac Mini was modeled on the shape of a standard digital media player, and runs the macOS operating system (previously Mac OS X and OS X). It was initially advertised as "BYODKM" (Bring Your Own Display, Keyboard, and Mouse), aiming to expand Apple's market-share of customers using other operating systems such as Microsoft Windows and Linux. The Mac Mini has been the company's only consumer computer to ship without a paired display, keyboard, and mouse since its original release in 2005. Since the unibody redesign in 2010, the Kensington Security Slot and the optical drive were removed from all models, leaving internal space for either a second internal hard drive or an SSD, which can be ordered from Apple or as an upgrade kit from third-party suppliers. G4 polycarbonate (2005): Apple's release of a small form factor computer had been widely speculated upon and requested before the Mac Mini. In January 2005, the Mac Mini G4 was introduced alongside the iPod shuffle at the Macworld Conference & Expo; Apple CEO Steve Jobs marketed it as "the cheapest, most affordable Mac ever". The machine was intended as an entry-level computer for budget-minded customers. In comparison to regular desktops, which use standard-sized components such as 3.5-inch hard drives and full-size DIMMs, the Mac Mini G4 uses low-power laptop components to fit into the small case and avoid overheating. The aluminum case, the top and bottom of which are capped with polycarbonate plastic, has an optical drive slot on the front, and the I/O ports and vents for the cooling system on the back. It has an external 85W power supply.
The Mac Mini G4 has no visible screws, reflecting Apple's intention that the computer not be upgraded by the user. Some Mac Mini owners used a putty knife or a pizza cutter to open the case and install third-party memory, which could be obtained more cheaply than Apple's offering. The Mac Mini G4 is based on a single-core, 32-bit PowerPC CPU with 512 KB of on-chip L2 cache. The processor, running at 1.25, 1.33, 1.42, or 1.5 GHz depending on the model, accesses memory through a front-side bus clocked at 167 MHz. The CPU can be overclocked to higher frequencies by soldering or desoldering certain zero-ohm resistors on the logic board. An ATI Radeon 9200 graphics processor (GPU) with 32 megabytes (MB) of DDR SDRAM was supplied as standard; in the final 2005 model, Apple added a high-end option of 64 MB of VRAM. In its early marketing of the Mac Mini G4, Apple touted the superiority of the discrete graphics board over the integrated graphics in many budget PCs. The machine uses 333 MHz DDR SDRAM and has one desktop-sized DIMM slot for RAM, allowing a maximum of 1 gigabyte (GB) of memory, a relatively small amount that often forced the system to page to the hard drive, slowing operation considerably. The Mac Mini G4 uses a single 2.5-inch Ultra ATA/100 hard drive that offers a maximum transfer rate of 100 megabytes per second (MB/s). It is not possible to open the sealed enclosure to upgrade the hard drive without possibly voiding the system's warranty. The Mac Mini G4 also contains a second ATA cable that connects to the optical drive. A Combo drive was included as standard, while a SuperDrive that could write to DVDs was an option. The Mac Mini G4 has two USB 2.0 ports and one FireWire 400 port. Networking is supported with 10/100 Ethernet and a 56k V.92 modem, while 802.11b/g Wi-Fi and Bluetooth were additional build-to-order options. External displays are supported via a DVI port, and adapters for VGA, S-Video, and composite video output were available. The system contains a built-in speaker and a 1/8-inch stereo mini jack for analog sound output. The Wi-Fi card no longer used an MMCX female connector for the antenna, as prior models did, but rather a proprietary Apple one. The Mac Mini G4 was initially supplied with Mac OS X 10.3, then later with Mac OS X 10.4, and can run Mac OS 9 applications as long as a bootable copy of the OS 9 system folder is installed from which to run the Classic environment (although the Mac Mini G4 cannot natively boot to Mac OS 9). As of Mac OS X 10.5, the ability to run the Classic environment was removed. Later, Mac OS 9 was able to run on the Mac Mini G4 through an unofficial patcher, though this was not supported by Apple. The machine is compatible with operating systems designed for the PowerPC architecture: users can install the AmigaOS-compatible MorphOS, OpenBSD, and Linux distributions such as Debian and Ubuntu. G4 polycarbonate (2005): Technical specifications The serial number and specifications sticker on the underside of the latest revision does not carry the actual specs of the upgrade; for example, on a 1.5 GHz model, 1.42 GHz is listed. The product packaging also did not reflect the upgrade, and Apple did not revise the official specifications on its website. All of these models are obsolete. Intel polycarbonate (2006–2009): In February 2006, Apple announced the first Intel Mac Mini as part of the Mac's transition to Intel processors. Based on the Intel Core Solo CPU, it was advertised as four times faster than its PowerPC G4 predecessor.
An updated server version of the machine was released in October 2009, marketed as an affordable server for small-business and academic use; this model omitted the optical drive in favor of a second hard drive. The 2006 and 2007 models were supplied with socketed CPUs; the 32-bit processor can be removed and replaced with a compatible 64-bit Intel Core 2 Duo processor. Models manufactured in and after 2009 had their CPUs soldered onto the logic board, preventing upgrades. Such upgrades can bring the 2006/2007 models to performance comparable with the 2009 models: Geekbench has shown a Mac Mini fitted with a 2.33 GHz Core 2 Duo and 2 GB of RAM scoring 3060, whereas a late 2009 Mac Mini with 2 GB of RAM scores 3056, making the two machines fairly comparable. The built-in Intel GMA was criticized for producing stuttering video despite supporting hardware-accelerated H.264 playback, and for disappointing frame rates in graphics-intensive 3D games. The Early and Late 2009 models corrected these performance issues with an improved NVIDIA GeForce 9400M chipset. The Intel-based Mac Mini includes four USB 2.0 ports and one FireWire 400 port. The I/O ports were changed with the Early 2009 revision, adding a fifth USB 2.0 port and swapping the FireWire 400 port for a FireWire 800 port. An infrared receiver was added, allowing the use of an Apple Remote. Bluetooth 2.0+EDR and 802.11g Wi-Fi became standard, and the Ethernet port was upgraded to Gigabit. A built-in 56k modem was no longer available. The 2009 models added 802.11 draft-n and later 802.11n Wi-Fi, and Bluetooth was upgraded from 2.0 to 2.1. External displays are supported through a DVI port. The 2009 models have Mini-DVI and Mini DisplayPort video output, allowing the use of two displays. The Mini DisplayPort supports displays with resolutions up to 2560×1600, which allows use of the 30-inch Cinema Display. The Intel-based Mac Mini has separate Mini-TOSLINK/3.5 mm mini-jacks that support both analog audio input and output and optical digital S/PDIF input and output. Intel polycarbonate (2006–2009): Technical specifications All of these models are obsolete. Unibody (2010–2014): In June 2010, Apple redesigned the Mac Mini, giving it a more compact, thinner unibody aluminum case with an internal power supply, an SD card slot, a Core 2 Duo CPU, and an HDMI port for video output that Apple marketed as HDMI 1.4 compliant, replacing the Mini-DVI port of the previous models. In July 2011, a hardware update was announced; models were now fitted with a Thunderbolt port, dual-core Intel Core i5 and quad-core i7 CPUs, support for up to 16 GB of memory, Bluetooth 4.0, and either Intel HD Graphics 3000 integrated graphics or an AMD Radeon HD 6630M dedicated GPU. The revision, however, removed the internal CD/DVD optical drive. The server model was upgraded to a quad-core Core i7 processor. Apple updated the line in October 2012 with Ivy Bridge processors, USB 3.0, and upgraded graphics. In October 2014, the line was updated with Haswell processors, improved graphics, 802.11ac Wi-Fi, and Thunderbolt 2, with a second Thunderbolt port replacing the FireWire 800 port. The price of the base model was lowered by $100. Two holes that had been used to open the case were removed because the memory, now soldered to the logic board, was no longer upgradable.
Because the integrated GPU does not have its own dedicated memory, the system shares part of the main system memory with it. 4K video output via HDMI was added. Comparing the high-end models of both releases, the 2012 model has a four-core, eight-thread Intel Core i7-3720QM, whereas the 2014 model has a two-core, four-thread Intel Core i7-4578U. The 2014 model has Intel Iris graphics (GT3), which greatly outperforms the Intel HD Graphics 4000 (GT2) in the previous models. The 2014 CPUs were also more energy-efficient: their maximum thermal design power (TDP) was 62% lower than that of the 2012 models. The 2014 revision's move to dual-core CPUs lowered multi-threaded performance relative to the quad-core processors available in the 2012 model, though single-threaded performance increased. Unibody (2010–2014): Technical specifications All of these models are obsolete. Space gray (2018): In October 2018, Apple announced a "space gray"-colored Mac Mini with Intel Coffee Lake CPUs, the T2 security chip, Bluetooth 5, four Thunderbolt 3 ports with USB 3.1 Gen 2 support, two USB 3.0 Type-A ports, and HDMI 2.0. PCIe-based flash storage is standard, with no option to fit a hard drive. The baseline storage was changed to 128 GB, with a maximum of 2 TB. RAM was increased to a baseline of 8 GB and a maximum of 64 GB of SO-DIMM DDR4. The chassis is a carryover from the Mac Minis released between 2010 and 2014 and has the same dimensions, but its color was changed from silver to "space gray", similar to the iMac Pro. The 2018 Mac Mini removes legacy I/O such as the SD card reader, SATA drive bay, IR receiver, optical S/PDIF (TOSLINK) audio out, and audio in. macOS Catalina added support for Dolby Atmos, Dolby Vision, and HDR10. Memory is again replaceable, though according to Apple it is not officially user-replaceable and requires service by an Apple Store or Apple Authorized Service Provider. The CPU and flash storage are soldered to the logic board and cannot be replaced. In March 2020, Apple doubled the default storage in both base models. Apple discontinued the Core i3 model following the release of the M1 Mac Mini in November 2020, but continued to sell the Core i5/i7 models until January 2023. Apple silicon (2020–present): As part of the Mac transition to Apple silicon, Apple announced a new Mac Mini with the Apple M1 chip on November 10, 2020. It was released on November 17, 2020, and was one of the first three Apple silicon-based Macs (alongside the MacBook Air and MacBook Pro). According to Apple, the M1 Mac Mini has a 3x faster eight-core CPU, a 6x faster GPU, and 15x faster machine learning performance than its predecessor, the base 2018 model. Options for more than 16 GB of RAM are not available on M1-based systems. Support for external displays is reduced to one display over USB-C/Thunderbolt, though a second display can be connected over HDMI; the previous Intel-based model could drive two 4K displays over USB-C/Thunderbolt. On April 20, 2021, 10 Gigabit Ethernet with Lights Out Management was added as a built-to-order option. Its internal cooling system sustains high performance; according to Apple, the machine performs up to five times faster than the best-selling Windows-based desktop computer in its price range. The price of the Apple silicon Mac Mini dropped US$100 from that of the previous model, to $699. It added support for Wi-Fi 6, USB4, and 6K video output to drive the Pro Display XDR.
Externally, it is very similar to the 2018 Mac Mini but has a lighter, silver finish similar to that of the models released from 2010 to 2014. The release of the Apple silicon Mac Mini was preceded in June 2020 by the A12Z-based Developer Transition Kit, a prototype with a Mac Mini enclosure made for developers to port their apps to Apple silicon. The 2020 DTK has 16 GB of RAM, 512 GB of storage, and two USB-C ports. On January 17, 2023, Apple announced updated models based on the M2 and M2 Pro chips. The updated models also include Bluetooth 5.3 and Wi-Fi 6E connectivity. The M2 Pro model includes two additional USB-C/Thunderbolt ports and supports HDMI 2.1. Reception: The Mac Mini has been praised as a relatively affordable computer with a solid range of features. Reviews noted that small computers with faster CPUs, better graphics cards, more memory, and more storage could be purchased at the same price. The small size has made the Mac Mini particularly popular for home theater use, and its size and reliability have helped keep resale values high. The G4 model received lukewarm reviews. CNET described it as an affordable, quiet, and compact machine, but disliked the slow hard drive and the fact that it had only two USB 2.0 ports, fewer than expected. Ars Technica criticized its non-user-upgradable RAM and storage options and the extra fees for additional drives, but felt that overall performance was acceptable. The Intel polycarbonate model was moderately praised. Engadget's roundup found that critics generally praised the Core Duo transition, connectivity, and Front Row performance, with reviewers measuring roughly a 10 to 15% performance boost in media-center tasks. CNET admired its cost, software, home-theater capability, and Windows compatibility, but criticized the poor integrated graphics, small hard drive, and limited remote control and upgrade options. Ars Technica found it somewhat underpowered for playing high-resolution HD streams at standard frame rates, and objected to the integrated graphics because they delivered marginal performance compared to dedicated graphics processors. Reviews of the unibody model were tepid. Engadget praised the HDMI port, compact design, and power efficiency, but faulted the lack of a Blu-ray option for home theater use and the high price. CNET wrote positively about the HDMI output and near-decent graphics capability, while criticizing the limited user upgrade options and high cost; the same criticisms appeared in an Ars Technica review. The space gray model received mixed praise. The Verge praised its significant leap in power and speed and its high-quality port selection, but wrote negatively about the high cost of the base model and the lack of GPU performance. Engadget admired its compact design, versatile port selection, CPU performance, and status as the least expensive machine in the Macintosh lineup, while criticizing the limited GPU performance, expensive upgrade options, and non-user-upgradable RAM.
CNET wrote positively about its processor performance, ports, and Ethernet configuration; they criticized the non-replaceable integrated graphics and the cost of associated accessories and displays. Reviews of the Apple silicon model were very positive. Wired praised its relatively low cost and its integration of Apple silicon, crediting the chip with significant performance and power-efficiency gains. Wired's Null found the system "peppy and responsive" without any crashes, but panned the transition's rough edges, as the new architecture dropped support for Intel-era system extensions. Similarly, ZDNet wrote positively about the price, processor, compact design, and quiet operation, while faulting the expensive non-user-installable RAM and storage upgrades and the lack of a discrete or external GPU. Technical writers Samuel Axon (Ars Technica), Chris Welch (The Verge), and Jeremy Laukkonen (Lifewire) all gave it high praise. Axon graded it positively for its performance and solid compatibility with legacy x86 macOS apps, citing the RAM and storage upgrade limitations as his chief criticism. Welch likewise emphasized the performance and power efficiency, while noting negatively its lack of external GPU support, low-quality speaker, and fewer USB-C ports than the previous Intel model. Laukkonen echoed these assessments. Home theater and server: Home theater Due to its compact size and functionality, the Mac Mini is often used as a home theater PC or as an alternative to the Apple TV. The system has a native interface in the Front Row software, which is based on the original Apple TV interface. Unlike the Apple TV, the Mac Mini is backward compatible with televisions that have only composite or S-Video inputs. Pre-2009 models have a video connector that is compatible with DVI, HDMI (video only), SVGA, S-Video, and composite video with appropriate adapters; for audio output, they have both an analog mini-headphone port and a digital optical fiber port. The addition of an HDMI port on the 2010 Mac Mini simplified connection to high-definition televisions and home theater AV receivers. The HDMI port supports video resolutions of up to 1080p; eight-channel, 24-bit audio at 192 kHz; and Dolby Surround 5.1 and stereo output. The 2014 model added 4K output, and the 2018 model supports Dolby Atmos, Dolby Vision, and HDR10 under the macOS Catalina operating system. Home theater and server: Distributed computing Sound On Sound's Mark Wherry said the Mac Mini was useful for distributed audio processing of audio plugins using Logic Node, a companion tool to Logic Pro. Writing for MacTech magazine, university IT director Mary Norbury-Glaser demonstrated the use of Xgrid on a Mac Mini. Home theater and server: Server Apple offered a server configuration of the Mac Mini that was originally supplied with the OS X Server operating system, a version of OS X; this was later switched to the standard version of OS X with a separate OS X Server package, which included component applications such as "Server App" and "File Sharing". In June 2011, the package became available from the Mac App Store for other Macintosh computers. The Mid 2010 Mac Mini Server was initially the only model without an optical drive, which was replaced with a second hard drive.
The Mid 2011 models also eliminated the optical drive. The Mac Mini Server hardware was discontinued with the Late 2014 model; the macOS Server software package, however, could still be purchased from the Mac App Store. In 2018, coinciding with the release of macOS Mojave, Apple shipped macOS Server version 5.7.1, which stopped bundling open-source services including DHCP, DNS, email, firewall, FTP, RADIUS, VPN, Web, and Wiki. Apple states that customers can receive support for these services directly from open-source providers. Other Apple-proprietary services such as AirPort, Calendar, Contacts, Messages, and NetBoot were also removed with no corresponding open-source options. Alternatives for Mac users include Linux and virtualized Windows; users can also install third-party Unix packages via open-source package managers such as Conda, Fink, Homebrew, MacPorts, Nix, pkgsrc, and Rudix. A few services, such as caching, file sharing, Time Machine, and Web, were moved to the macOS Mojave client but offer only limited configuration via the Sharing control panel. The Apache server GUI manager is replaced by apachectl commands in Terminal. The only services remaining in macOS Server 5.7.1 are Open Directory, Profile Manager, and Xsan. Some have used Mac Minis as replacements for Apple's discontinued Xserve rack-mounted servers. Providers such as AWS, MacStadium, and Scaleway rent out Mac Minis located in their data centers, a practice called colocation. These can be used as continuous integration servers (also known as build servers) for Xcode, or for application testing.
**Tint, shade and tone** Tint, shade and tone: In color theory, a tint is a mixture of a color with white, which increases lightness, while a shade is a mixture with black, which increases darkness. Both processes affect the resulting color mixture's relative saturation. A tone is produced either by mixing a color with gray, or by both tinting and shading. Mixing a color with any neutral color (including black, gray, and white) reduces the chroma, or colorfulness, while the hue (the relative mixture of red, green, blue, etc., depending on the colorspace) remains unchanged. Tint, shade and tone: In the graphic arts, especially printmaking and drawing, "tone" has a different meaning, referring to areas of continuous color produced by various means, as opposed to the linear marks made by an engraved or drawn line. In common language, the term shade can be generalized to encompass any variety of a particular color, whether technically a shade, tint, tone, or slightly different hue. Meanwhile, the term tint can be generalized to refer to any lighter or darker variation of a color (e.g. "tinted windows"). When mixing colored light (additive color models), the achromatic mixture of spectrally balanced red, green, and blue (RGB) is always white, not gray or black. When colorants are mixed, such as the pigments in paint mixtures, the resulting color is always darker and lower in chroma, or saturation, than the parent colors. This moves the mixed color toward a neutral color: a gray or near-black. Lights are made brighter or dimmer by adjusting their brightness, i.e., energy level; in painting, lightness is adjusted through mixture with white, black, or a color's complement. Tint, shade and tone: The color triangle depicting tint, shade, and tone was proposed in 1937 by Faber Birren. In art: It is common among some artistic painters to darken a paint color by adding black paint (producing colors called shades) or to lighten a color by adding white (producing colors called tints). However, this is not always the best practice for representational painting, since one result is that colors also shift in hue. For instance, darkening a color by adding black can cause colors such as yellows, reds, and oranges to shift toward the greenish or bluish part of the spectrum. Lightening a color by adding white can cause a shift towards blue when mixed with reds and oranges (see Abney effect). Another practice when darkening a color is to use its opposite, or complementary, color (e.g. violet-purple added to yellowish-green) in order to neutralize it without a shift in hue, and to darken it if the added color is darker than the parent color. When lightening a color, this hue shift can be corrected with the addition of a small amount of an adjacent color to bring the hue of the mixture back in line with the parent color (e.g. adding a small amount of orange to a mixture of red and white will correct the tendency of this mixture to shift slightly towards the blue end of the spectrum).
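As an illustration of the three operations defined above, here is a deliberately simplified sketch in Python that interpolates an sRGB color toward white, black, or a mid-gray. This is an idealized model for illustration only (physical pigment mixing is subtractive and nonlinear), and the function names are ours, not a standard API.

```python
# Tint, shade, and tone modeled as linear interpolation in RGB space.
# Simplified illustration of the definitions; not a physical paint model.

def mix(color, neutral, amount):
    """Move 'amount' (0..1) of the way from color toward a neutral color."""
    return tuple(round(c + (n - c) * amount) for c, n in zip(color, neutral))

def tint(color, amount):            # mix with white: increases lightness
    return mix(color, (255, 255, 255), amount)

def shade(color, amount):           # mix with black: increases darkness
    return mix(color, (0, 0, 0), amount)

def tone(color, amount, gray=128):  # mix with gray: reduces chroma
    return mix(color, (gray, gray, gray), amount)

pure_red = (255, 0, 0)
print(tint(pure_red, 0.5))   # (255, 128, 128), a pink
print(shade(pure_red, 0.5))  # (128, 0, 0), a dark red
print(tone(pure_red, 0.5))   # (192, 64, 64), a muted red
```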
**Triethyl phosphite** Triethyl phosphite: Triethyl phosphite is an organophosphorus compound with the formula P(OCH2CH3)3, often abbreviated P(OEt)3. It is a colorless, malodorous liquid. It is used as a ligand in organometallic chemistry and as a reagent in organic synthesis. The molecule features a pyramidal phosphorus(III) center bound to three ethoxide groups. Its 31P NMR spectrum features a signal at around +139 ppm vs a phosphoric acid standard. Triethyl phosphite: Triethyl phosphite is prepared by treating phosphorus trichloride with ethanol in the presence of a base, typically a tertiary amine: PCl3 + 3 EtOH + 3 R3N → P(OEt)3 + 3 [R3NH]Cl. In the absence of the base, the reaction of ethanol and phosphorus trichloride affords diethyl phosphite ((EtO)2P(O)H). Many related compounds can be prepared similarly; triisopropyl phosphite is an example (b.p. 43.5 °C/1.0 mm; CAS# 116-17-6). As a ligand: In coordination chemistry and homogeneous catalysis, triethyl phosphite finds use as a soft ligand. Its complexes are generally lipophilic and feature metals in low oxidation states. Examples include the colorless complexes FeH2(P(OEt)3)4 and Ni(P(OEt)3)4 (m.p. 108 °C).
**Femisphere** Femisphere: The femisphere is a solid that has a single surface, two edges, and four vertices. Description: The form of the femisphere is reminiscent of that of a sphericon, but without straight lines; instead, its edges are circular arcs of arbitrary radius. For this reason, when rolled over a sphere, it contacts the sphere's entire surface area in a single revolution. The surface area of a femisphere of unit radius is $S = 4\pi$. Wooden femispheres can be made by turning them on a wood lathe.
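Since surface area scales with the square of linear dimensions, the quoted unit-radius value determines the area of any scaled copy; this restatement is ours, assuming "radius" is the overall scale parameter of the solid.

```latex
% Surface area of a femisphere, scaled from the unit-radius value.
\[
S(1) = 4\pi \quad\Longrightarrow\quad S(r) = 4\pi r^{2}.
\]
```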
**TB11Cs5H1 snoRNA** TB11Cs5H1 snoRNA: TB11Cs5H1 is a member of the H/ACA-like class of non-coding RNA (ncRNA) molecules that guide the sites of modification of uridines to pseudouridines in substrate RNAs. It is known as a small nucleolar RNA (snoRNA), so named because of its cellular localization in the nucleolus of the eukaryotic cell. TB11Cs5H1 is predicted to guide the pseudouridylation of SSU ribosomal RNA (rRNA) at residue Ψ1710.
**CuproBraze** CuproBraze: CuproBraze is a copper-alloy heat exchanger technology for high-temperature, high-pressure environments such as those in the latest generations of diesel engines. The technology, developed by the International Copper Association (ICA), is licensed for free to heat exchanger manufacturers around the world. CuproBraze: Applications for CuproBraze include charge air coolers, radiators, oil coolers, climate control systems, and heat transfer cores. CuproBraze is particularly suited for charge air coolers and radiators in heavy industry, where machinery must operate for long periods under harsh conditions without premature failure. CuproBraze is targeted at off-road vehicles, trucks, buses, industrial engines, generators, locomotives, and military equipment. The technology is also suitable for light trucks, SUVs, and passenger cars with special needs. CuproBraze provides new materials for heat exchanger parts that were previously made of soldered copper/brass plate fin, soldered copper-brass serpentine fin, or brazed aluminum serpentine fin, to suit more demanding applications. Aluminum heat exchangers are viable and economical for cars, light trucks, and other light-duty applications. However, they are not well suited to environments characterized by high operating temperatures, humidity, vibration, salty corrosive air, and air pollution. In these environments, the additional tensile strength, durability, and corrosion resistance that CuproBraze technology provides are useful. The CuproBraze technology uses brazing instead of soldering to join copper and brass radiator components. The heat exchangers are made with anneal-resistant copper and brass alloys. The tubes are fabricated from brass strip and coated with a brazing filler material in the form of a powder-based paste, or an amorphous brazing foil is laid between the tube and fin. Another method coats the tube in-line on the tube mill, using the twin wire-arc spray process, in which the wire is the braze alloy, deposited on the tube as it is being manufactured at 200-400 fpm; this saves the separate process step of coating the tube later. The coated tubes, along with copper fins, headers, and side supports made of brass, are fitted together into a core assembly which is brazed in a furnace. The technology enables brazed serpentine fins to be used in copper-brass heat exchanger designs, which are stronger, lighter, more durable, and have tougher joints. Performance benefits: CuproBraze radiators have important performance advantages over radiators made with other materials. These include better thermal performance, heat transfer, size, strength, durability, emissions, corrosion resistance, repairability, and antimicrobial benefits. Performance benefits: Thermal performance The ability to withstand elevated temperatures is essential in high-heat applications. Aluminum alloys are challenged at higher temperatures due to their lower melting points; the yield strength of aluminum is compromised above 200 °C, and problems with fatigue cracking are exacerbated at elevated temperatures. CuproBraze heat exchangers are capable of operating at temperatures of 290 °C and above. Special anneal-resistant copper and brass strip ensures that radiator cores maintain their strength without softening, despite exposure to high brazing temperatures.
Performance benefits: Heat transfer efficiency Cooling efficiency is a measure of heat rejection from a given space by a heat exchanger. The overall thermal efficiency of a heat exchanger core depends on many factors, such as the thermal conductivity of fins and tubes; the strength and weight of the fins and tubes; the spacing, size, thickness, and shape of fins and tubes; the velocity of the air passing through the core; and other factors. The main performance criterion for heat exchangers is cooling efficiency. Heat exchanger cores made from copper and brass can reject more heat per unit volume than those of any other material, which is why copper-brass heat exchangers generally have greater cooling efficiency than alternatives. Brazed copper-brass heat exchangers are also more rugged than soldered copper-brass and alternative materials, including brazed aluminum serpentine. Air pressure drop is a good evaluator of heat exchanger design: a core with a smaller air pressure drop from the front to the back (i.e., from the windward to the leeward side in a wind tunnel test) is more efficient. Air pressure drop is typically 24% less for CuproBraze than for aluminum heat exchangers. This advantage, responsible for a 6% increase in heat rejection, contributes to CuproBraze's overall greater efficiency. Since copper's thermal conductivity is higher than aluminum's, copper has a higher capacity to dissipate heat. By using thinner material gauges in combination with higher fin density, heat dissipation capacity with CuproBraze can be increased while keeping air pressure drop at reasonable levels. Performance benefits: Size Due to its high heat transfer efficiency, CuproBraze offers a significant amount of cooling capacity in a small size, because the same heat rejection level can be achieved with a smaller core. Hence, a significant reduction in frontal area and volume is achievable with CuproBraze versus other materials. Performance benefits: Strength and durability Three new alloys were developed to enhance the strength and durability of CuproBraze heat exchangers: 1) an anneal-resistant fin material that maintains its strength after brazing; 2) an anneal-resistant tube alloy that retains its fine grain structure after brazing and provides ductility and fatigue strength in the brazed heat exchanger core; and 3) the brazing alloy. Brazing at 650 °C creates a joint that is stronger than a soldered joint and comparable in strength to a welded joint. Unlike welding, brazing does not melt the base metals; therefore, brazing is better suited to joining dissimilar alloys. CuproBraze has more strength at elevated temperatures than soldered copper-brass or aluminum. Due to the lower thermal expansion of copper versus aluminum, there is less thermal stress during the manufacturing of CuproBraze and in its use as a heat exchanger. CuproBraze heat exchangers have stronger tube-to-header joints than other materials; these braze joints are the most critical in heat exchangers and must be leak-free. CuproBraze also has higher tolerance of internal pressures because its thin-gauge, high-strength materials provide stronger support for the tubes. The material is also less sensitive to bad coolants than aluminum heat exchangers. Test results demonstrate a much longer fatigue life for CuproBraze joints compared to similar soldered copper-brass or brazed aluminum joints.
Stronger joints allow for the use of thinner fins and new radiator and cooler designs. The copper fins are not easily bent when dirty radiators are washed with high-pressure water. Anticorrosive coatings further improve strength and resistance against humidity, sand erosion, and stone impingement on copper fins. Performance benefits: For further information, see CuproBraze: Durability and Reliability (Technology Series) and CuproBraze Durability (Design Criteria Series). Performance benefits: Emissions New legislation in Europe, Japan, and the U.S. calls for strong reductions in NOX and particulate emissions from diesel engines used in trucks, buses, power plants, and other heavy equipment. These goals can in part be accomplished by using cleaner-performing turbocharged diesel engines and charge air coolers. Turbocharging enables higher power output. Charge air coolers allow power to be produced more efficiently by reducing the temperature of the air charge entering the engine, thereby increasing its density. The charge air cooler, located between the turbocharger and the engine air inlet manifold, is an air-to-air heat exchanger. It reduces the inlet air temperature of turbocharged diesel engines from 200 °C to 45 °C while increasing inlet air density to increase engine efficiency. Even higher inlet temperatures (246 °C or higher) and boost pressures may be necessary to comply with emissions standards in the future. Present-day charge air cooler systems based on aluminum alloys experience durability problems at the temperatures and pressures necessary to meet the U.S. Tier 4i standards for stationary and mobile engines. Published reports estimate that the average life of an aluminum charge air cooler is currently only about 3,500 hours. Aluminum is near its upper technological limit for accommodating higher temperatures and thermal stress levels, because the tensile strength of the metal declines rapidly at 150 °C and repetitive thermal cycling between 150 °C and 200 °C substantially weakens it. Thermal cycling creates weak spots in aluminum tubes, which in turn cause charge air coolers to fail prematurely. A potential option is to install stainless steel precoolers in aluminum charge air coolers, but limited space and the complexity of this solution hamper this option. A CuproBraze charge air cooler can operate at temperatures as high as 290 °C without creep, fatigue, or other metallurgical problems. Performance benefits: Corrosion resistance Exterior corrosion resistance in a heat exchanger is especially important in coastal, humid, and polluted areas and in mining operations. The corrosion mechanisms of copper and aluminum alloys are different. CuproBraze tube contains 85% copper, which provides high resistance against dezincification and stress corrosion cracking. The copper alloys tend to corrode uniformly over entire surfaces at known rates; this predictability of copper corrosion is important for proper maintenance management. Aluminum, on the other hand, is more likely to corrode locally by pitting, eventually resulting in holes. In accelerated corrosion tests, such as SWAAT for salt spray and marine conditions, CuproBraze performed better than aluminum. The corrosion resistance of CuproBraze is generally better than that of soft-soldered heat exchangers, because the materials in CuproBraze heat exchangers are of equal nobility, so galvanic differences are minimized.
On soft-soldered heat exchangers, the solder is less noble than the fin and tube materials and can suffer from galvanic attack in corrosive environments. Performance benefits: Repairability CuproBraze can be easily repaired. This advantage of the technology is especially important in remote areas where spare parts may be limited. CuproBraze can be repaired with lead-free soft solder (for example, 97% tin, 3% copper) or with common silver-containing brazing alloys. Performance benefits: Antimicrobial Biofouling is often a problem in HVAC systems that operate in warm, dark, and humid environments. The antimicrobial properties of CuproBraze alloys eliminate foul odors, thereby improving indoor air quality. CuproBraze is being investigated in mobile air conditioner units as a solution to bad odors from fungus and bacteria in aluminum-based heat exchange systems. OEMs and end users: Russian OEMs, such as Kamaz and Ural Automotive Plant, use CuproBraze radiators and charge air coolers in heavy-duty trucks for off-highway and on-highway applications. Other manufacturers include UAZ and GAZ (Russia) and MAZ (Belarus). The Finnish Radiator Manufacturing Company, also known as FinnRadiator, produces 95% of its radiators and charge air coolers with CuproBraze for OEM manufacturers of off-road construction equipment. Nakamura Jico Co., Ltd. (Japan) manufactures CuproBraze heat exchangers for construction equipment, locomotives, and on-highway trucks. Young Touchstone supplies CuproBraze radiators to MotivePower's diesel-powered commuter train locomotives in North America. Siemens AG Transportation Systems plans to use the technology for its Asia Runner locomotive for South Vietnam and other Asian markets. Bombardier Transportation heat exchangers cool transformer oil in electric-powered locomotives; these huge oil coolers have been used successfully in coal trains for South African Railways. Kohler Power Systems Americas, one of the largest users of diesel engines for power generation, adopted CuproBraze for diesel engine turbocharger air-to-air cooling in its "gen sets".
**Mileage Plan** Mileage Plan: Mileage Plan is the frequent-flyer program of Alaska Airlines. Members accrue program "miles" by flying Alaska Airlines and partner-operated flights, using co-branded credit cards, and booking vacation and hotel packages, among other methods. Mileage Plan miles can be redeemed for award flights on Alaska Airlines and partner carriers and count toward elite status with Mileage Plan. History: In June 1983, Alaska Airlines introduced its frequent-flyer program, Gold Coast Travel. In 1987, Alaska Airlines acquired Jet America Airlines, whose frequent-flyer program awarded credit by flight segments (number of flights taken), whereas Gold Coast Travel members earned credit based on the mileage of flights taken. After the acquisition, participants in Jet America's frequent-flyer program were enrolled in Gold Coast Travel, and in September 1989, Gold Coast Travel was renamed Mileage Plan. Alaska Airlines also purchased the regional commuter airline Horizon Air in 1986 and incorporated the carrier into Mileage Plan. History: In 2016, Alaska Airlines acquired Virgin America, which offered a revenue-based accrual program, Elevate. On January 1, 2018, Elevate was discontinued, and all remaining accounts were converted to Mileage Plan accounts. Oneworld Membership: In February 2020, Alaska Airlines announced its intention to join the Oneworld airline alliance in the summer of 2021. This would add seven new airline partners: Iberia, Malaysia Airlines, Qatar Airways, Royal Air Maroc, Royal Jordanian, S7 Airlines, and SriLankan Airlines. In October 2020, Alaska announced its Oneworld membership date had been moved up to March 31, 2021. In December 2020, Oneworld member Qatar Airways established a mileage partnership with Alaska prior to the commencement of nonstop service from Seattle to Doha. On March 31, 2021, Alaska Airlines officially joined the Oneworld alliance. Alaska temporarily suspended its partnership with S7 Airlines on March 1, 2022, in response to the 2022 Russian invasion of Ukraine. Alaska Airlines flights, including Horizon Air flights, have been bookable as part of Oneworld Global Explorer fares since 2008. Accrual Structure: On all scheduled Alaska Airlines flights, including flights operated by Horizon Air and SkyWest, members accrue one mile per mile flown. On flights under 500 miles, a minimum of 500 miles is awarded. Miles can also be accrued by flying eligible flights on Alaska Global Partners (a minimal code sketch of this accrual rule appears at the end of this entry). Beginning in 2015, most airlines in the United States switched to revenue-based accrual programs, where credit is awarded based on the fare paid. Alaska Airlines has indicated that retaining a distance-based accrual program is a competitive advantage over other carriers. Among major carriers in the United States, only Frontier Airlines and Hawaiian Airlines also accrue one mile per mile flown, and no other major airline offers a minimum accrual amount (e.g. 500 miles) for passengers lacking elite status. Partner Airlines: Mileage Plan includes 28 partner airlines that make up the Alaska Global Partners. Partner airlines include Oneworld, SkyTeam, and Star Alliance members, as well as unaffiliated airlines. Partner Airlines: Delta had been a mileage partner from 2004 until 2017, when it established Seattle as a hub city. The partnership with Delta was preceded by a partnership with Northwest Airlines beginning in 1995, prior to the merger of the two airlines.
Alaska Airlines was also formerly a mileage partner of other SkyTeam members: Aeromexico from 2013 until 2017, and Air France and KLM from 2007 until 2018. Emirates ceased to be a partner in 2021, shortly after Alaska joined the Oneworld alliance; the two airlines had had a mileage agreement since 2012. Membership Tiers: Mileage Plan's elite tiers for frequent travelers are MVP, MVP Gold, and MVP Gold 75K. Members in elite tiers receive additional benefits compared to non-elite members, including complimentary upgrades to First and Premium Class, increased mileage accrual, priority check-in and boarding, discounted or free lounge access, complimentary checked bags, and other benefits. Membership Tiers: Members may attain elite status through one of several methods. Notes: (1) Includes Alaska Airlines and Horizon Air. (2) Both methods require flying a minimum number of segments on Alaska or Horizon Air (two for MVP; 6 for MVP Gold; 12 for MVP Gold 75K; 25 for MVP Gold 100K). There is no minimum spending requirement to receive elite status in Mileage Plan. Additionally, members who fly 1,000,000 miles on Alaska Airlines are granted MVP Gold status for life. Additional Benefits: Mileage Plan members are permitted to check one case of wine (up to 12 bottles) for free when flying domestically, if departing from one of 32 airports in California, Idaho, Oregon, or Washington. In March 2016, Alaska Airlines introduced a promotional award whereby Mileage Plan members could redeem 10,000 miles to cover the $85 cost of a TSA PreCheck screening application charged by the Transportation Security Administration. Club 49: Members who are residents of the State of Alaska, or military personnel permanently stationed in Alaska, are eligible to participate in Alaska Airlines' Club 49 program. Club 49: Benefits include: two complimentary checked bags on flights to or from Alaska, for participants and all passengers traveling on the same reservation; two Travel Now annual discounts for 30% off refundable one-way coach fares to, from, or within Alaska on Alaska Airlines, when booked within four days of departure; access to constituent fares, at a 30% discount, for Alaskans traveling to the state legislature and state agencies in Juneau; weekly fare sales via email for destinations within and outside Alaska; and the Freight For Less program, allowing participants to ship up to 100 pounds of freight within Alaska through Alaska Air Cargo for $10 (when flying) or $40 (all other times). Recognition: As of 2022, Mileage Plan has received the U.S. News & World Report Best Airline Rewards Program award for seven consecutive years since 2015, based on an overall evaluation of six criteria: ease of earning a free round-trip flight, benefits, network coverage, flight volume, award flight availability, and airline quality ratings.
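The distance-based accrual rule described in the Accrual Structure section above (one mile per mile flown, with a 500-mile minimum) is simple enough to state as code; here is a minimal sketch in Python. The function name is ours, and elite-status bonuses and partner earning rates are deliberately omitted.

```python
# Minimal sketch of Mileage Plan's distance-based accrual on Alaska Airlines
# flights: one mile earned per mile flown, with a 500-mile minimum per segment.

def miles_earned(distance_flown: int) -> int:
    """Award miles for one Alaska Airlines flight segment."""
    MINIMUM_ACCRUAL = 500  # flights under 500 miles still earn 500 miles
    return max(distance_flown, MINIMUM_ACCRUAL)

assert miles_earned(2402) == 2402  # a long segment earns its full distance
assert miles_earned(120) == 500    # a short hop still earns the minimum
```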
**Movement marketing** Movement marketing: Movement marketing, or cultural movement marketing, is a marketing model that begins with an idea on the rise in culture rather than with the product itself. StrawberryFrog, billed as the world's first cultural movement agency, developed the movement marketing model in 1999, working for brands such as Smart Car and IKEA. Scott Goodson, the founder of StrawberryFrog, has written that brands can "identify, crystallize, curate and sponsor movements, accelerating their rise." Definition: Cultural movement marketing is a model that builds brands by identifying, sparking, organizing, leading, and/or aligning with an idea on the rise in culture, and building multiplatform communications around this idea so that passionate advocates can belong, rally, engage, and bring about change. Cultural movement marketing requires a radical rethink of the old rules of marketing. Definition: Instead of being about "the individual", it is about the group. Instead of persuading people to believe something, it is about understanding and tapping into what they already believe. Instead of selling, it is about sharing. Perhaps most radical of all, it requires advertisers to stop talking about themselves and to join a conversation that is about anything and everything but the product. StrawberryFrog defines the cultural movement model as having five phases: Strategy, Declaration, Provocation, Go MASSive, and Sustainability. Examples: A pioneer of the cultural movement marketing model is Apple. Large companies currently applying this model include Mahindra, PepsiCo, and Procter & Gamble.
**Dimiracetam** Dimiracetam: Dimiracetam is a nootropic drug of the racetam family, derivatives of which may have application in the treatment of neuropathic pain. Legality: Australia Dimiracetam is a schedule 4 substance in Australia under the Poisons Standard (February 2020). A schedule 4 substance is classified as "Prescription Only Medicine, or Prescription Animal Remedy – Substances, the use or supply of which should be by or on the order of persons permitted by State or Territory legislation to prescribe and should be available from a pharmacist on prescription."
**Essentials of Programming Languages** Essentials of Programming Languages: Essentials of Programming Languages (EOPL) is a textbook on programming languages by Daniel P. Friedman, Mitchell Wand, and Christopher T. Haynes. Essentials of Programming Languages: EOPL surveys the principles of programming languages from an operational perspective. It starts with an interpreter in Scheme for a simple functional core language similar to the lambda calculus and then systematically adds constructs. For each addition, for example variable assignment or thread-like control, the book illustrates an increase in the expressive power of the programming language and a demand for new constructs in the formulation of a direct interpreter. The book also demonstrates that systematic transformations, say store-passing style or continuation-passing style, can eliminate certain constructs from the language in which the interpreter is formulated (a minimal sketch of such an interpreter appears at the end of this entry). Essentials of Programming Languages: The second part of the book is dedicated to a systematic translation of the interpreter(s) into register machines. The transformations show how to eliminate higher-order closures, continuation objects, recursive function calls, and more. At the end, the reader is left with an "interpreter" that uses nothing but tail-recursive function calls and assignment statements, plus conditionals. It becomes trivial to translate this code into a C program or even an assembly program. As a bonus, the book shows how to pre-compute certain pieces of "meaning" and how to generate a representation of these pre-computations. Since this is the essence of compilation, the book also prepares the reader for a course on the principles of compilation and language translation, a related but distinct topic. Apart from the text explaining the key concepts, the book also includes a series of exercises, enabling readers to explore alternative designs and other issues. Like SICP, EOPL represents a significant departure from the prevailing textbook approach of the 1980s. At the time, a book on the principles of programming languages presented four to six (or even more) programming languages and discussed their programming idioms and their implementation at a high level. The most successful books typically covered ALGOL 60 (and the so-called Algol family of programming languages), SNOBOL, Lisp, and Prolog. Even today, a fair number of textbooks on programming languages are just such surveys, though their scope has narrowed. Essentials of Programming Languages: EOPL was started in 1983, when Indiana University was home to one of the leading departments in programming languages research. Eugene Kohlbecker, one of Friedman's PhD students, transcribed and collected his "311 lectures". Other faculty members, including Mitch Wand and Christopher Haynes, started contributing and turned "The Hitchhiker's Guide to the Meta-Universe" (as Kohlbecker had called it) into the systematic, interpreter- and transformation-based survey that it is now. Over the 25 years of its existence, the book has become a near-classic; it is now in its third edition, including additional topics such as types and modules. Its first part now incorporates ideas on programming from HtDP, another unconventional textbook, which uses Scheme to teach the principles of program design. The authors, as well as Matthew Flatt, have also provided DrRacket plug-ins and language levels for teaching with EOPL.
Essentials of Programming Languages: EOPL has spawned at least two other related texts: Queinnec's Lisp in Small Pieces and Krishnamurthi's Programming Languages: Application and Interpretation.
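To make the book's interpreter-based approach concrete, here is a minimal sketch in Python (rather than the book's Scheme) of the kind of environment-passing interpreter EOPL builds for a lambda-calculus-like core language. The expression encoding and names are ours, not EOPL's.

```python
# A tiny environment-passing interpreter for a lambda-calculus core language:
# constants, variables, one-argument lambdas, and applications.

def interp(expr, env):
    kind = expr[0]
    if kind == "const":            # ("const", 42)
        return expr[1]
    if kind == "var":              # ("var", "x") -> look up in environment
        return env[expr[1]]
    if kind == "lambda":           # ("lambda", "x", body) -> build a closure
        _, param, body = expr
        return ("closure", param, body, env)
    if kind == "call":             # ("call", rator, rand) -> apply a closure
        _, rator, rand = expr
        _, param, body, saved_env = interp(rator, env)
        arg = interp(rand, env)
        return interp(body, {**saved_env, param: arg})
    raise ValueError(f"unknown expression: {expr!r}")

# ((lambda (x) x) 42) evaluates to 42
print(interp(("call", ("lambda", "x", ("var", "x")), ("const", 42)), {}))
```

Adding a construct such as assignment would force this interpreter to thread a store through every case, which is exactly the pressure toward store-passing style that the book studies.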
**1,3-Propane sultone** 1,3-Propane sultone: 1,3-Propane sultone is the organosulfur compound with the formula (CH2)3SO3. It is a cyclic sulfonate ester, a member of the class of compounds called sultones. It is a colorless, low-melting solid. Synthesis: It may be prepared by the acid-catalyzed reaction of allyl alcohol and sodium bisulfite. Reactions: 1,3-Propane sultone is an activated ester and is susceptible to nucleophilic attack. It hydrolyzes to the hydroxysulfonic acid. It has been used in the synthesis of specialist surfactants, such as the detergent CHAPS. Safety: Typical of activated esters, 1,3-propane sultone is an alkylating agent. 1,3-Propane sultone is toxic, carcinogenic, mutagenic, and teratogenic.
**Pitchfork** Pitchfork: A pitchfork or hay fork is an agricultural tool used to pitch loose material such as hay, straw, manure, or leaves. It has a long handle and usually two to five thin tines designed to move such materials efficiently. The term is also applied colloquially, but inaccurately, to the garden fork. While similar in appearance, the garden fork is shorter and stockier than the pitchfork, with three or four thicker tines intended for turning or loosening the soil of gardens. Alternative terms: In some parts of England, a pitchfork is known as a prong. In parts of Ireland, the term sprong is used to refer specifically to a four-pronged pitchfork. Description: The typical pitchfork consists of a long wooden handle bearing two to five slightly curved metal tines fixed to one end. The tines are typically made of steel, wrought iron, or some other alloy, though historically wood or bamboo were used. Unlike a garden fork, a pitchfork lacks a grip at the end of its handle. Pitchforks with few tines set far apart are typically used for bulky material such as hay or straw; those with more, and more closely spaced, tines are used for looser materials such as silage, manure, leaves, or compost. History: In Europe, the pitchfork was first used in the Early Middle Ages, at about the same time as the harrow. These early pitchforks were made entirely of wood. The pitchfork is occasionally employed as an improvised weapon, as in a mob or riot. In popular culture: Artwork Paintings by various artists depict a wide variety of pitchforks in use and at rest. A notable American work is American Gothic (1930) by Grant Wood, which features a three-pronged tool. In popular culture: Politics Because of its association with peasantry and farming, the pitchfork has been used as a populist symbol and appended as a nickname to certain leading populist figures, such as "Pitchfork" Ben Tillman and "Pitchfork" Pat Buchanan. The Gangster Disciples, a street gang in the Midwestern United States, use a three-pointed pitchfork as one of their symbols. The Venezuelan far-right political party New Order uses a three-pointed pitchfork as its symbol. In popular culture: Religious symbolism The pitchfork is often used in lieu of the visually similar weapon, the trident, in popular portrayals and satire of Christian demonology. Many humorous cartoons, both animated and otherwise, feature a caricature of a demon ostensibly wielding a "pitchfork" (often actually a trident) sitting on one shoulder of a protagonist, opposite an angel on the other. The Hellenistic deity Hades wields a bident, a two-pronged weapon similar in form to a pitchfork but actually related to the trident in design and purpose.
**Swaged sleeve** Swaged sleeve: A swaged sleeve is a connector that is crimped (swaged) onto cable using a hand tool and die. This type of compressed sleeve is commonly used to make mechanical or conductive connections. These sleeves join or terminate wire rope, aircraft cable, synthetic cable, fibrous rope, or electrical conductor cables. Oval swaged sleeve: When properly applied to 7×7, 7×19, or 6×19 IWRC classification wire rope, the eye-splice termination provides a secure connection equal to the breaking strength of the wire rope. The product was originally developed for the US military, patented in 1942, and is currently used in a wide range of applications and industries, including aerospace, defense, marine, material handling, and structural applications. Oval swaged sleeve: The product used to make an eye splice is variously known as an oval sleeve, figure-8 sleeve, hourglass sleeve, duplex sleeve, ferrule, or Nico. Correct installation is critical to the performance of the product; this includes using the correct tool groove and/or die, number of presses/bites, press sequence, and gauging. Adhering to the manufacturer's instructions avoids catastrophic failure. Stop sleeve: The round stop sleeve is intended to be pressed onto single wire or synthetic ropes, e.g. for use as an end stop. Splicing sleeve: Electrical conductor splicing sleeves are designed to splice a range of conductors. Full-tension sleeves are made of high-conductivity copper, aluminum, or steel with a specially bonded inner-bore coating. Tools: A range of swaging tools is available to compress sleeves correctly. Tools range from manual pliers-type and toggle-action tools to pneumatic, hydraulic, and battery-operated hydraulic tools.
**Pro-simplicial set** Pro-simplicial set: In mathematics, a pro-simplicial set is an inverse system of simplicial sets. A pro-simplicial set is called pro-finite if each term of the inverse system of simplicial sets has finite homotopy groups. Pro-simplicial sets show up in shape theory, in the study of localization and completion in homotopy theory, and in the study of homotopy properties of schemes (e.g. étale homotopy theory).
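Unpacked, the definition reads as follows; this symbolic restatement is standard, with the notation ours.

```latex
% A pro-simplicial set: an inverse system of simplicial sets indexed by a
% directed set I, with compatible transition maps.
\[
X = \{X_i\}_{i \in I}, \qquad
f_{ij}\colon X_j \to X_i \ \ (i \le j), \qquad
f_{ik} = f_{ij} \circ f_{jk} \ \ (i \le j \le k),
\]
where each $X_i$ is a simplicial set; $X$ is \emph{pro-finite} when every
$X_i$ has finite homotopy groups.
```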
**OPLS** OPLS: The OPLS (Optimized Potentials for Liquid Simulations) force field was developed by Prof. William L. Jorgensen at Purdue University and later at Yale University, and is being further developed commercially by Schrödinger, Inc. Functional form: The functional form of the OPLS force field is very similar to that of AMBER: $E(r^N) = E_{\text{bonds}} + E_{\text{angles}} + E_{\text{dihedrals}} + E_{\text{nonbonded}}$, where $E_{\text{bonds}} = \sum_{\text{bonds}} K_r (r - r_0)^2$, $E_{\text{angles}} = \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2$, $E_{\text{dihedrals}} = \sum_{\text{dihedrals}} \left[ \frac{V_1}{2}\bigl(1 + \cos(\phi - \phi_1)\bigr) + \frac{V_2}{2}\bigl(1 - \cos(2\phi - \phi_2)\bigr) + \frac{V_3}{2}\bigl(1 + \cos(3\phi - \phi_3)\bigr) + \frac{V_4}{2}\bigl(1 - \cos(4\phi - \phi_4)\bigr) \right]$, and $E_{\text{nonbonded}} = \sum_{i>j} f_{ij} \left( \frac{A_{ij}}{r_{ij}^{12}} - \frac{C_{ij}}{r_{ij}^{6}} + \frac{q_i q_j e^2}{4\pi\epsilon_0 r_{ij}} \right)$, with the combining rules $A_{ij} = \sqrt{A_{ii} A_{jj}}$ and $C_{ij} = \sqrt{C_{ii} C_{jj}}$. Intramolecular nonbonded interactions $E_{\text{nonbonded}}$ are counted only for atoms three or more bonds apart; 1,4 interactions are scaled down by the "fudge factor" $f_{ij} = 0.5$, otherwise $f_{ij} = 1.0$. All the interaction sites are centered on the atoms; there are no "lone pairs". Parameterization: Several sets of OPLS parameters have been published. There is OPLS-ua (united atom), which includes hydrogen atoms next to carbon implicitly in the carbon parameters and can be used to save simulation time, and OPLS-aa (all atom), which includes every atom explicitly. Later publications include parameters for other specific functional groups and types of molecules such as carbohydrates. OPLS simulations in aqueous solution typically use the TIP4P or TIP3P water model. Parameterization: A distinctive feature of the OPLS parameters is that they were optimized to fit experimental properties of liquids, such as density and heat of vaporization, in addition to fitting gas-phase torsional profiles. Implementation: The reference implementations of the OPLS force field are the BOSS and MCPRO programs developed by Jorgensen. Other packages such as TINKER, GROMACS, PCMODEL, Abalone, LAMMPS, Desmond, and NAMD also implement OPLS force fields.
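As an illustration of the nonbonded term and the geometric-mean combining rules above, here is a minimal sketch in Python. The parameter values in the example call are placeholders, not published OPLS parameters, and the function name is ours.

```python
import math

# One OPLS-style nonbonded pair energy: Lennard-Jones (A/r^12 - C/r^6) plus
# Coulomb, with geometric-mean combining rules and the 1,4 "fudge factor".

COULOMB = 332.06  # e^2/(4*pi*eps0) in kcal*Angstrom/(mol*e^2)

def pair_energy(r, A_ii, A_jj, C_ii, C_jj, q_i, q_j, fudge=1.0):
    """Nonbonded energy (kcal/mol) of one atom pair at distance r (Angstrom)."""
    A_ij = math.sqrt(A_ii * A_jj)  # combining rule: A_ij = sqrt(A_ii * A_jj)
    C_ij = math.sqrt(C_ii * C_jj)  # combining rule: C_ij = sqrt(C_ii * C_jj)
    lj = A_ij / r**12 - C_ij / r**6
    coulomb = COULOMB * q_i * q_j / r
    return fudge * (lj + coulomb)  # fudge = 0.5 for 1,4 pairs, else 1.0

# Placeholder parameters for a hypothetical atom pair at 3.5 Angstroms:
print(pair_energy(3.5, A_ii=6.0e5, A_jj=6.0e5, C_ii=600.0, C_jj=600.0,
                  q_i=-0.4, q_j=0.4))
```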
**Ishikawa diagram** Ishikawa diagram: Ishikawa diagrams (also called fishbone diagrams, herringbone diagrams, or cause-and-effect diagrams) are causal diagrams created by Kaoru Ishikawa that show the potential causes of a specific event. Common uses of the Ishikawa diagram are product design and quality defect prevention, to identify potential factors causing an overall effect. Each cause or reason for imperfection is a source of variation. Causes are usually grouped into major categories to identify and classify these sources of variation. Overview: The defect, or the problem to be solved, is shown as the fish's head, facing to the right, with the causes extending to the left as fishbones; the ribs branch off the backbone for major causes, with sub-branches for root causes, to as many levels as required. Ishikawa diagrams were popularized in the 1960s by Kaoru Ishikawa, who pioneered quality management processes in the Kawasaki shipyards, and in the process became one of the founding fathers of modern management. Overview: The basic concept was first used in the 1920s, and is considered one of the seven basic tools of quality control. It is known as a fishbone diagram because of its shape, similar to the side view of a fish skeleton. Mazda Motors famously used an Ishikawa diagram in the development of the Miata (MX-5) sports car. Root causes: Root-cause analysis is intended to reveal key relationships among various variables, and the possible causes provide additional insight into process behavior. It shows the high-level causes that lead to the problem encountered, providing a snapshot of the current situation. There can be considerable confusion about the relationships between problems, causes, symptoms and effects. Smith highlights this and the common question, "Is that a problem or a symptom?" This question mistakenly presumes that problems and symptoms are contrasting categories, like light and heavy, such that something cannot be both. A problem is a situation that bears improvement; a symptom is the effect of a cause: a situation can be both a problem and a symptom. Root causes: At a practical level, a cause is whatever is responsible for, or explains, an effect - a factor "whose presence makes a critical difference to the occurrence of an outcome". The causes emerge by analysis, often through brainstorming sessions, and are grouped into categories on the main branches off the fishbone. To help structure the approach, the categories are often selected from one of the common models shown below, but may emerge as something unique to the application in a specific case.
Root causes: Each potential cause is traced back to find the root cause, often using the 5 Whys technique. Typical categories include the following; a minimal data-structure sketch follows this list.

The 5 Ms (used in manufacturing): Originating with lean manufacturing and the Toyota Production System, the 5 Ms is one of the most common frameworks for root-cause analysis:
- Manpower / mind power (physical or knowledge work; includes kaizens and suggestions)
- Machine (equipment, technology)
- Material (includes raw material, consumables, and information)
- Method (process)
- Measurement / medium (inspection, environment)
These have been expanded by some to include an additional three, and are then referred to as the 8 Ms:
- Mission / mother nature (purpose, environment)
- Management / money power (leadership)
- Maintenance

The 8 Ps (used in product marketing): This common model for identifying crucial attributes for planning in product marketing is often also used in root-cause analysis as categories for the Ishikawa diagram:
- Product (or service)
- Price
- Place
- Promotion
- People (personnel)
- Process
- Physical evidence (proof)
- Performance

The 4 or 5 Ss (used in service industries): An alternative used for service industries, with four categories of possible cause:
- Surroundings
- Suppliers
- Systems
- Skill
Often an important fifth S, Safety, is added.
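As promised above, here is a minimal sketch of how such a categorized cause tree might be represented programmatically (a hypothetical illustration, not part of any quality-management tool; the `diagram` structure and `print_fishbone` helper are invented for this example):

```python
# An Ishikawa cause tree as a nested mapping:
# branch category -> cause -> list of sub-causes (deeper "bones").
diagram = {
    "effect": "High defect rate",
    "branches": {
        "Machine": {"Worn tooling": ["No maintenance schedule"]},
        "Method": {"Unclear work instructions": []},
        "Material": {"Supplier variation": ["No incoming inspection"]},
    },
}

def print_fishbone(d: dict) -> None:
    """Render the cause tree as indented text, one bone per line."""
    print(f"Effect: {d['effect']}")
    for category, causes in d["branches"].items():
        print(f"  [{category}]")
        for cause, sub_causes in causes.items():
            print(f"    - {cause}")
            for sub in sub_causes:
                print(f"        * {sub}")

print_fishbone(diagram)
```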
**Group 9 element** Group 9 element: Group 9, by modern IUPAC numbering, is a group (column) of chemical elements in the periodic table. Members of group 9 include cobalt (Co), rhodium (Rh), iridium (Ir) and meitnerium (Mt). These are all transition metals in the d-block, and include some of the rarest of all elements. Like other groups, the members of this family show patterns in electron configuration, especially in the outermost shells, resulting in trends in chemical behavior; however, rhodium deviates from the pattern: cobalt is [Ar]3d⁷4s² and iridium is [Xe]4f¹⁴5d⁷6s², but rhodium adopts [Kr]4d⁸5s¹ rather than the expected [Kr]4d⁷5s². Group 9 element: "Group 9" is the modern standard designation for this group, adopted by the IUPAC in 1990. In the older group naming systems, this group was combined with group 8 (iron, ruthenium, osmium, and hassium) and group 10 (nickel, palladium, platinum, and darmstadtium) and called group "VIIIB" in the Chemical Abstracts Service (CAS) "U.S. system", or "VIII" in the old IUPAC (pre-1990) "European system" (and in Mendeleev's original table). Chemistry: The first three elements are hard silvery-white metals: cobalt is a metallic element that can be used to turn glass a deep blue color; rhodium can be used in jewelry as a shiny metal; iridium is mainly used as a hardening agent for platinum alloys. All known isotopes of meitnerium are radioactive with short half-lives, and only minute quantities have been synthesized in laboratories. It has not been isolated in pure form, and its physical and chemical properties have not been determined yet.
**Schema matching** Schema matching: The terms schema matching and mapping are often used interchangeably for a database process. For this article, we differentiate the two as follows: schema matching is the process of identifying that two objects are semantically related (the scope of this article), while mapping refers to the transformations between the objects. For example, in the two schemas DB1.Student (Name, SSN, Level, Major, Marks) and DB2.Grad-Student (Name, ID, Major, Grades), possible matches would be: DB1.Student ≈ DB2.Grad-Student; DB1.SSN = DB2.ID; etc., and possible transformations or mappings would be: DB1.Marks to DB2.Grades (100-90 A; 90-80 B; etc.). Schema matching: Automating these two approaches has been one of the fundamental tasks of data integration. In general, it is not possible to determine fully automatically the different correspondences between two schemas, primarily because of the differing, and often undocumented or unexplicated, semantics of the two schemas. Impediments: Common challenges to automating matching and mapping have previously been classified in the literature, especially for relational DB schemas, including a fairly comprehensive list of heterogeneities, not limited to the relational model, that recognizes schematic vs. semantic differences/heterogeneity. Most of these heterogeneities exist because schemas use different representations or definitions to represent the same information (schema conflicts), or because different expressions, units, and precision result in conflicting representations of the same data (data conflicts). Impediments: Research in schema matching seeks to provide automated support to the process of finding semantic matches between two schemas. This process is made harder by heterogeneities at the following levels: syntactic heterogeneity – differences in the language used for representing the elements; structural heterogeneity – differences in the types and structures of the elements; model / representational heterogeneity – differences in the underlying models (databases, ontologies) or their representations (key-value pairs, relational, document, XML, JSON, triples, graph, RDF, OWL); semantic heterogeneity – where the same real-world entity is represented using different terms, or vice versa. Schema matching: Methodology: The literature describes a generic methodology for the task of schema integration and the activities involved; according to its authors, the integration process can be viewed as the following activities. Preintegration — an analysis of schemas is carried out before integration to decide upon an integration policy; this governs the choice of schemas to be integrated, the order of integration, and a possible assignment of preferences to entire schemas or portions of schemas. Comparison of the schemas — schemas are analyzed and compared to determine the correspondences among concepts and detect possible conflicts; interschema properties may be discovered while comparing schemas. Conforming the schemas — once conflicts are detected, an effort is made to resolve them so that the merging of the various schemas is possible. Merging and restructuring — the schemas are now ready to be superimposed, giving rise to some intermediate integrated schema(s); the intermediate results are analyzed and, if necessary, restructured in order to achieve several desirable qualities.
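To make the comparison step concrete, here is a minimal, hypothetical sketch of a purely name-based (linguistic) element matcher, using the DB1/DB2 example above; the `name_similarity` and `match_schemas` helpers are invented for this illustration, and real matchers combine many more signals:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Normalized edit-based similarity between two element names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_schemas(attrs1, attrs2, threshold=0.5):
    """Propose (attr1, attr2, score) match candidates above the threshold."""
    candidates = []
    for a in attrs1:
        best = max(attrs2, key=lambda b: name_similarity(a, b))
        score = name_similarity(a, best)
        if score >= threshold:
            candidates.append((a, best, round(score, 2)))
    return candidates

# The article's running example: DB1.Student vs DB2.Grad-Student.
student = ["Name", "SSN", "Level", "Major", "Marks"]
grad_student = ["Name", "ID", "Major", "Grades"]
print(match_schemas(student, grad_student))
# Only Name and Major match; SSN/ID and Marks/Grades fall below the
# threshold, showing why purely name-based matching is insufficient
# (instance-level data, discussed below, would help).
```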
Schema matching: Approaches: Approaches to schema integration can be broadly classified as ones that exploit either just schema information, or schema and instance-level information. Schema-level matchers only consider schema information, not instance data. The available information includes the usual properties of schema elements, such as name, description, data type, relationship types (part-of, is-a, etc.), constraints, and schema structure. Working at the element level (atomic elements such as attributes of objects) or structure level (matching combinations of elements that appear together in a structure), these properties are used to identify matching elements in two schemas. Language-based or linguistic matchers use names and text (i.e., words or sentences) to find semantically similar schema elements. Constraint-based matchers exploit constraints often contained in schemas. Such constraints are used to define data types and value ranges, uniqueness, optionality, relationship types and cardinalities, etc. Constraints in two input schemas are matched to determine the similarity of the schema elements. Schema matching: Instance-level matchers use instance-level data to gather important insight into the contents and meaning of the schema elements. These are typically used in addition to schema-level matchers in order to boost confidence in match results, especially when the information available at the schema level is insufficient. Matchers at this level use linguistic and constraint-based characterization of instances. For example, using linguistic techniques, it might be possible to look at the Dept, DeptName and EmpName instances to conclude that DeptName is a better match candidate for Dept than EmpName. Constraints such as "zip codes must be 5 digits long" or the format of phone numbers may allow matching of such types of instance data. Hybrid matchers directly combine several matching approaches to determine match candidates based on multiple criteria or information sources. Schema matching: Most of these techniques also employ additional information such as dictionaries, thesauri, and user-provided match or mismatch information. Reusing matching information: Another initiative has been to reuse previous matching information as auxiliary information for future matching tasks. The motivation for this work is that structures or substructures often repeat, for example in schemas in the e-commerce domain. Such reuse of previous matches, however, needs to be a careful choice: it is possible that such reuse makes sense only for some part of a new schema or only in some domains. For example, Salary and Income may be considered identical in a payroll application but not in a tax reporting application. There are several open-ended challenges in such reuse that deserve further work. Schema matching: Sample prototypes: Typically, implementations of such matching techniques can be classified as either rule-based or learner-based systems. The complementary nature of these different approaches has instigated a number of applications using a combination of techniques, depending on the nature of the domain or application under consideration. Schema matching: Identified relationships: The relationship types between objects that are identified at the end of a matching process are typically those with set semantics, such as overlap, disjointness, exclusion, equivalence, or subsumption. These relationships are given formal meaning by their logical encodings.
Among others, an early attempt to use description logics for schema integration and the identification of such relationships was presented in the literature. Several state-of-the-art matching tools today, including those benchmarked in the Ontology Alignment Evaluation Initiative, are capable of identifying many such simple matches (1:1 element-level) and complex matches (1:n, n:1 or n:m element- or structure-level) between objects. Schema matching: Evaluation of quality: The quality of schema matching is commonly measured by precision and recall. While precision measures the number of correctly matched pairs out of all pairs that were matched, recall measures how many of the actual pairs have been matched.
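In symbols, with respect to a gold standard of true correspondences, these standard definitions read:

```latex
\text{precision} \;=\;
  \frac{|\,\text{matches found} \cap \text{true matches}\,|}
       {|\,\text{matches found}\,|},
\qquad
\text{recall} \;=\;
  \frac{|\,\text{matches found} \cap \text{true matches}\,|}
       {|\,\text{true matches}\,|}.
```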
**Water ionizer** Water ionizer: A water ionizer (also known as an alkaline ionizer) is a home appliance which claims to raise the pH of drinking water by using electrolysis to separate the incoming water stream into acidic and alkaline components. The alkaline stream of the treated water is called alkaline water. Proponents claim that consumption of alkaline water results in a variety of health benefits, making it similar to the alternative health practice of alkaline diets. Such claims violate basic principles of chemistry and physiology: there is no medical evidence for any health benefits of alkaline water, and extensive scientific evidence has thoroughly debunked these claims. The machines originally became popular in Japan and other East Asian countries before becoming available in the U.S. and Europe. Health claims: Water ionizers are often marketed on the basis of health claims which are normally focused on their putative ability to make water more alkaline. A wide variety of benefits have been claimed, including the ability to slow aging, prevent disease, give the body more energy, and offset the alleged effects of acidic foods. There is no empirical evidence to support these claims, nor the claim that drinking ionized water will have a noticeable effect on the body: drinking ionized or alkaline water does not alter the body's pH, owing to acid-base homeostasis. Additionally, marketers have inaccurately claimed that the process of electrolysis changes the structure of water from large, non-bioavailable water clusters to small, bioavailable water clusters, called "microclusters". Some proponents of alkaline water and the alkaline diet as a whole claim a link between alkaline intake and cancer prevention; no scientific evidence exists for such a connection, and several cancer societies have denounced this claim.
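For context, the electrode half-reactions behind the two streams are standard electrolysis chemistry: reduction at the cathode leaves the water locally alkaline, while oxidation at the anode leaves it acidic:

```latex
% Cathode (alkaline stream):
2\,\mathrm{H_2O} + 2e^- \longrightarrow \mathrm{H_2} + 2\,\mathrm{OH^-}
% Anode (acidic stream):
2\,\mathrm{H_2O} \longrightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4e^-
```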
**Zemax** Zemax: Zemax is a company that sells optical design software. OpticStudio is its flagship product and a commonly used optical design program for Microsoft Windows. It is used for the design and analysis of both imaging and illumination systems. History: OpticStudio, then called Zemax, was originally written by Ken Moore and was the first optical design program written specifically for Microsoft Windows. It became commercially available in 1990. The first version was called Max, named after the programmer's dog. The name was later changed to Zemax due to a trademark conflict, and the software was rebranded as OpticStudio in 2016. The program was originally sold by Focus Software, which later became Zemax Development Corp. The latter merged with Radiant Imaging in 2011 to form Radiant Zemax. In 2014 Radiant sold Zemax to Arlington Capital Partners, which named the company Zemax, LLC. Arlington Capital Partners sold Zemax to EQT on June 26, 2018. On 31 August 2021, it was announced that Ansys had acquired the company. Features and applications: OpticStudio is an optical design program that is used to design and analyze imaging systems such as camera lenses, as well as illumination systems. It works by ray tracing—modelling the propagation of rays through an optical system. It can model the effect of optical elements such as simple lenses, aspheric lenses, gradient-index lenses, mirrors, and diffractive optical elements, and can produce standard analysis diagrams such as spot diagrams and ray-fan plots. OpticStudio can also model the effect of optical coatings on the surfaces of components, and it includes a library of stock commercial lenses. OpticStudio can perform standard sequential ray tracing through optical elements, non-sequential ray tracing for analysis of stray light, and physical optics beam propagation. It also has tolerancing capability, allowing analysis of the effect of manufacturing defects and assembly errors. The physical optics propagation feature can be used for problems where diffraction is important, including the propagation of laser beams and the coupling of light into single-mode optical fibers. OpticStudio's optimization tools can be used to improve an initial lens design by automatically adjusting parameters to maximize performance and reduce aberrations.
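To illustrate the basic idea behind sequential ray tracing, here is a generic paraxial sketch using 2×2 ray-transfer (ABCD) matrices under the thin-lens approximation; this is NOT OpticStudio code or its API, and the `free_space`/`thin_lens` helpers are invented for this example:

```python
import numpy as np

def free_space(d):
    """ABCD matrix for propagation over distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """ABCD matrix for refraction by a thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Ray state: (height y, angle u). Trace a collimated ray through a
# 100 mm lens, then 100 mm of air; it should cross the axis at focus.
ray = np.array([5.0, 0.0])        # y = 5 mm, parallel to the axis
ray = thin_lens(100.0) @ ray      # refract at the lens
ray = free_space(100.0) @ ray     # propagate to the focal plane
print(ray)                        # ~[0.0, -0.05]: ray reaches the focus
```

Production codes like OpticStudio trace exact (non-paraxial) rays surface by surface, but the sequential matrix-by-matrix structure above conveys the principle.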
**Cold frame** Cold frame: In agriculture and gardening, a cold frame is a transparent-roofed enclosure, built low to the ground, used to protect plants from adverse weather, primarily excessive cold or wet. The transparent top admits sunlight and prevents heat escape via convection that would otherwise occur, particularly at night. Essentially, a cold frame functions as a miniature greenhouse to extend the growing season. Historically, cold frames were built to be used in addition to a heated greenhouse; the name itself exemplifies the distinction between the warm greenhouse and the unheated cold frame. They were frequently built as part of the greenhouse's foundation brickwork along the southern wall (in northern latitudes). This allowed seeds to be germinated in the greenhouse and then easily moved to the attached cold frame to be "hardened off" before final planting outside. Cold frames are similar to some enclosed hotbeds, also called hotboxes; the difference is in the amount of heat generated inside. This parallels the way some greenhouses are called "hothouses" to emphasize their higher temperature, achieved either by solar effects alone or by auxiliary heating via a heater or HVAC system of some kind. Cold frame: Cold frames are found in home gardens and in vegetable farming. They create microclimates that provide several degrees of air and soil temperature insulation, and shelter from wind. In cold-winter regions, these characteristics allow plants to be started earlier in the spring, and to survive longer into the fall and winter. They are most often used for growing seedlings that are later transplanted into open ground, and can also be a permanent home to cold-hardy vegetables grown for autumn and winter harvest. Construction: Cold frame construction is a common home or farm building project, although kits and commercial systems are available. A traditional plan makes use of old glass windows: a wooden frame is built, about one to two feet tall, and the window placed on top. The roof is often sloped towards the winter sun to capture more light and improve runoff of water, and hinged for easy access. Clear plastic, rigid or sheeting, can be used in place of glass. An electric heating cable, available for this purpose, can be placed in the soil to provide additional heat. Uses: Cold frames can be used to extend the growing season for many food and ornamental crops, primarily by providing increased warmth in early spring. This makes it possible to harvest vegetable crops ahead of their normal season, when they would otherwise be expensive to buy. Crops suitable for growing in a cold frame include lettuces, parsley, salad onions, spinach, radishes and turnips. One vegetable crop can occupy the whole of a cold frame, or a combination of crops can be grown so that they mature in rotation, yielding a range of different vegetables throughout the year from a single cold frame. Bulb frame: A "bulb frame" is a specialized kind of cold frame, designed for growing hardy or almost-hardy ornamental bulbous plants, particularly in climates with wet winters. Typically it is raised further above ground level than a normal cold frame, so that the plants can be seen better when in flower. Bulb frames are often used for the cultivation of winter-growing bulbs which flower in the autumn or spring. The covers are used in winter to provide some protection from very bad weather, while allowing good ventilation.
Then in the summer, the covers provide dry, warm conditions which many such bulbs need.